We have a solution that will enable you to accomplish what you wish. In short: you send your texts to our API, you get back measurements of how profane each one is, and based on those values you decide whether to allow each text. The details are below:

The solution is made up of a few different cooperating systems:
  1. Gavagai API. This is a text analysis API. Have a look at https://developer.gavagai.io/ and in particular the tonality endpoint, which lets you send in texts and get sentiment measurements back for each of them. Out of the box we have 8 different sentiments installed for each of the 46 languages we support: Positivity, Negativity, Fear, Hate, Love, Skepticism, Violence, and Desire. But since you probably want to measure something like profanity, let's have a look at the next component.
  2. Gavagai Explorer. A web application for finding common topics and sentiment in large sets of texts, with graphical presentation capabilities. It is suitable for analysing reviews, survey responses, etc. The main focus of this web app is not really relevant to your use case, but it has one important tool that you need: the concept modeler, which lets you easily build your own concepts. As I mentioned above, the tonality endpoint supports 8 sentiments out of the box; these are actually concepts in our system, and with the concept modeler you can create your own.
  3. A connection between your Explorer account and your API account. We set this up for you, so that when you call the API, any custom concepts you have defined in Explorer are measured and returned in addition to the standard 8.
So how would you use this in practice? You would define your custom concepts in Explorer, say "profanity", for each of the languages you wish to support. Then you send the texts you want to screen to the API. You get back a profanity value for each text, and based on a threshold you define (something like "if profanity > 0") you can reject the text in question.
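To make the decision step concrete, here is a minimal Python sketch. The response shape (an id plus a dict of per-concept scores, with a custom "PROFANITY" concept) and the threshold value are assumptions for illustration only; the actual tonality response format is documented in the API reference.

```python
# Sketch of the moderation step. The score layout below is an assumed
# shape for illustration; consult the Gavagai API docs for the actual
# tonality response format.

PROFANITY_THRESHOLD = 0.0  # reject any text scoring above this

def allow_text(scores: dict, threshold: float = PROFANITY_THRESHOLD) -> bool:
    """Return True if the text should be allowed.

    `scores` maps concept names (the 8 built-in sentiments plus any
    custom concepts, e.g. "PROFANITY") to the strength measured in
    the text.
    """
    return scores.get("PROFANITY", 0.0) <= threshold

# Hypothetical scores for two texts, as they might come back from the
# tonality endpoint once the custom concept is connected:
texts = [
    {"id": "1", "scores": {"POSITIVITY": 0.4, "PROFANITY": 0.0}},
    {"id": "2", "scores": {"NEGATIVITY": 0.7, "PROFANITY": 1.2}},
]

allowed = [t["id"] for t in texts if allow_text(t["scores"])]
print(allowed)  # only text "1" passes the filter
```

The threshold is deliberately a parameter: starting strict ("profanity > 0") and loosening it later is easier than the reverse.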

The really great thing about all of this is how the concept modeler helps you define your custom concept in an exhaustive, comprehensive way. Consider a pleasant example concept like happiness. You enter a few "seed words" like happy and glad, and the concept modeler suggests further relevant words: joyous, joyful, positive, and many, many more. This makes it very easy to define your concept.

So what about pricing? For the API, the prices are listed at https://developer.gavagai.io/pricing. For the Explorer, the minimum monthly charge for an account is €40 (detailed prices are at https://www.gavagai.io/pricing/). Finally, custom concepts are €20 each per month.
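As a rough illustration of how the fixed monthly charges add up (combining only the figures quoted above, and assuming one custom concept per supported language; API usage is billed separately per the pricing page):

```python
# Rough fixed monthly cost, using only the figures quoted above:
# €40 Explorer minimum plus €20 per custom concept per month.
# Assumes one custom concept per language; API usage is billed
# separately (see the pricing page) and is not included here.

EXPLORER_MINIMUM_EUR = 40
CONCEPT_EUR = 20

def fixed_monthly_cost_eur(n_custom_concepts: int) -> int:
    return EXPLORER_MINIMUM_EUR + CONCEPT_EUR * n_custom_concepts

# e.g. a "profanity" concept in each of 3 languages = 3 concepts:
print(fixed_monthly_cost_eur(3))  # 40 + 3 * 20 = 100
```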