Moderating content for profanity in your Ionic Framework app using Azure Content Moderator

Posted Monday, March 4, 2019 3:43 PM by CoreyRoth

On any consumer-facing platform where users can submit content that will be seen by the public, it's a good idea to have some level of content moderation.  Moderating content manually with human reviewers is tedious and labor-intensive.  Luckily, Azure Content Moderator, part of Azure Cognitive Services, gives us some AI capability to detect language that might be offensive.  It's actually quite easy to get started with as well.

First, you'll want to create a new Content Moderator in your Azure Portal.  The options are fairly straightforward: region, resource group, and pricing tier.  For this example, I used the F0 plan, which comes with 5000 moderation transactions a month (full pricing details).  This should be more than enough to prove out the concept.

[Screenshot: creating a new Content Moderator resource in the Azure Portal]

When your Content Moderator finishes provisioning, go to the Keys tab and make note of Key 1 and Key 2.  You'll use one of these later.  Keep these keys secure so that others can't consume your API calls.

In this example, we are using Ionic Framework 4.0, which recently hit general availability.  We'll build on a simple out-of-the-box app.  Full source code is available at the link at the end of this post.  If you don't have Ionic Framework installed, you can install it from here.  To start a new project in Ionic Framework, issue the following command.

ionic start IonicAzureContentModerator

After the project has been created, you can see it in the browser by running the following command:

ionic serve

This will launch the app in a web browser with live reload.  Now let's add the components we need to test this out.  First, let's create a service to make our call to Azure.  Normally this service would call our own API or function, which would proxy the call to Azure Content Moderator.  We do this so that our API key is not stored in the client application.  For simplicity though, we are going to call Azure directly because this is only an example.  To create the service, issue the following command with the Ionic CLI.

ionic g service services/ContentModeratorService
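This will create a stub for the service.  With Ionic 4 (Angular 7), the generated file looks roughly like this:

import { Injectable } from '@angular/core';

@Injectable({
  providedIn: 'root'
})
export class ContentModeratorService {

  constructor() { }
}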

We'll fill in this stub shortly.  However, since we are going to be making an HTTP call, we need to add the Angular HttpClientModule.  Open app.module.ts and add the following import:

import { HttpClientModule } from '@angular/common/http';

Next add HttpClientModule to the list of imports of @NgModule.
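For context, the @NgModule in the stock starter ends up looking something like this (abbreviated; the starter's providers and entryComponents are omitted here):

@NgModule({
  declarations: [AppComponent],
  imports: [
    BrowserModule,
    IonicModule.forRoot(),
    AppRoutingModule,
    HttpClientModule  // added so our service can make HTTP calls
  ],
  bootstrap: [AppComponent]
})
export class AppModule {}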

Now let's go back to our service, and we'll start by adding our HttpClient import.

import { HttpClient, HttpHeaders } from '@angular/common/http';

We also need to add the HttpClient to our constructor so that Angular injects an instance.  Finally, we have a simple method called moderateContent which calls the Azure endpoint for the Content Moderator.  The first step is to assemble a URL to that endpoint.  This URL varies by region, so you will need to look up the right one based on where you deployed your Content Moderator.  The API has a few query-string parameters, including whether to also scan for PII, so I've made that a parameter on my function.  Here is what my URL looks like for South Central US.

let apiUrl = `https://southcentralus.api.cognitive.microsoft.com/contentmoderator/moderate/v1.0/ProcessText/Screen?PII=${PII}&classify=true`;

Next, we construct an HttpOptions object to pass the API Key.  It goes in a header by the name of Ocp-Apim-Subscription-Key.  I store the API key in the value Constants.apiKey.  Again, we wouldn't normally store our API key in a client application like this.

const httpOptions = {
  headers: new HttpHeaders({
    'Ocp-Apim-Subscription-Key': Constants.apiKey
  })
};

Finally, we make the HTTP POST, passing the content that we want to moderate in the body.
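Putting it all together, here is a sketch of what the finished service might look like.  The Constants class is just a stand-in for wherever you keep your key (which, again, belongs server-side in a real app), and the Content-Type header reflects that the ProcessText/Screen endpoint accepts raw text in the body:

import { Injectable } from '@angular/core';
import { HttpClient, HttpHeaders } from '@angular/common/http';
import { Observable } from 'rxjs';

// Stand-in for your key storage; don't ship a key in a client app.
class Constants {
  static apiKey = '<your-content-moderator-key>';
}

@Injectable({ providedIn: 'root' })
export class ContentModeratorService {

  constructor(private http: HttpClient) { }

  // PII toggles whether the API also scans for personally identifiable information.
  moderateContent(content: string, PII: boolean): Observable<any> {
    const apiUrl = `https://southcentralus.api.cognitive.microsoft.com/contentmoderator/moderate/v1.0/ProcessText/Screen?PII=${PII}&classify=true`;
    const httpOptions = {
      headers: new HttpHeaders({
        'Ocp-Apim-Subscription-Key': Constants.apiKey,
        'Content-Type': 'text/plain'  // the body is the raw text to screen
      })
    };
    return this.http.post(apiUrl, content, httpOptions);
  }
}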

Now, we can build a simple form to collect the user's input and make the call to our service.  I added an ion-textarea to collect the user's input and an ion-toggle to choose whether or not to scan for PII.  Then I just make use of some ion-badge components to display the results.  Here is what our interface looks like in the browser.

[Screenshot: the moderation form in the browser, with a text area, PII toggle, and results area]

Here is the HTML code.

[Screenshot: the page's HTML template]
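Since the screenshot isn't reproduced here, this is a rough reconstruction of what that template might look like.  The property and method names (content, includePii, moderate(), result) are my assumptions, not necessarily what the original sample used:

<ion-content>
  <!-- names below are assumed; FormsModule must be imported in the page module for ngModel -->
  <ion-item>
    <ion-label position="stacked">Content to moderate</ion-label>
    <ion-textarea [(ngModel)]="content"></ion-textarea>
  </ion-item>
  <ion-item>
    <ion-label>Scan for PII</ion-label>
    <ion-toggle [(ngModel)]="includePii"></ion-toggle>
  </ion-item>
  <ion-button expand="block" (click)="moderate()">Moderate</ion-button>
  <ion-item *ngIf="result">
    <ion-badge color="danger" *ngIf="result.Classification?.ReviewRecommended">Review recommended</ion-badge>
    <ion-badge color="success" *ngIf="!result.Classification?.ReviewRecommended">Looks OK</ion-badge>
  </ion-item>
</ion-content>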

Clicking the button simply executes the moderateContent method in the service we created.
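The click handler in the page component might look something like this (again, the field names are assumptions to match the template sketch above):

import { Component } from '@angular/core';
import { ContentModeratorService } from '../services/content-moderator.service';

@Component({
  selector: 'app-home',
  templateUrl: 'home.page.html',
  styleUrls: ['home.page.scss'],
})
export class HomePage {
  content = '';
  includePii = false;
  result: any;

  constructor(private contentModeratorService: ContentModeratorService) { }

  moderate() {
    // Call the service and keep the raw response for binding in the template.
    this.contentModeratorService.moderateContent(this.content, this.includePii)
      .subscribe(response => this.result = response);
  }
}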

Let's look at the data that comes back when we execute a call.

[Screenshot: a sample response from the Content Moderator API]

In the Content Moderator response, we receive a Classification object containing three categories (Category1, Category2, and Category3) along with a boolean ReviewRecommended, which is true if the text is likely an issue.  The categories are defined as follows:

  • Category1 - refers to potential presence of language that may be considered sexually explicit or adult in certain situations.
  • Category2 - refers to potential presence of language that may be considered sexually suggestive or mature in certain situations.
  • Category3 - refers to potential presence of language that may be considered offensive in certain situations.

Each category contains a decimal value between 0 and 1.  The higher it is, the more likely the content applies to that category.  In this case, the phrase we sent is more of a general profanity as opposed to being sexual in nature.
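For reference, the classification portion of the response body looks something like this (an illustrative sketch based on the documented response shape; the text and scores are made up):

{
  "OriginalText": "some example text",
  "NormalizedText": "some example text",
  "Language": "eng",
  "Classification": {
    "ReviewRecommended": true,
    "Category1": { "Score": 0.002 },
    "Category2": { "Score": 0.012 },
    "Category3": { "Score": 0.987 }
  },
  "Status": { "Code": 3000, "Description": "OK" }
}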

In our app, we simply bind these values to ion-note components so that you can easily see how the phrase was interpreted.

[Screenshot: classification scores displayed with ion-note components]

If you use the toggle for PII, you can look for things such as national ID numbers, phone numbers, addresses, and IP addresses.  Here is an example of the output.  You can find more information in the docs.

[Screenshot: PII detection results displayed in the app]

You can observe the values of potential PII using the PII object.  You'll notice in this case as well that the Classification values were significantly lower because there wasn't any offensive text in the string we sent.
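The PII object groups matches by type.  Here is an illustrative sketch (field names per my reading of the API docs, and the values are made up):

{
  "PII": {
    "Email": [{ "Detected": "user@contoso.com", "SubType": "Regular", "Text": "user@contoso.com", "Index": 21 }],
    "Phone": [{ "CountryCode": "US", "Text": "4255550100", "Index": 45 }],
    "IPA": [{ "SubType": "IPV4", "Text": "10.0.0.1", "Index": 70 }],
    "Address": [{ "Text": "1 Microsoft Way, Redmond, WA 98052", "Index": 89 }]
  }
}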

Azure Content Moderator, part of Cognitive Services, is a great way to test content to see if it's potentially offensive.  While you will still want to incorporate a human review element into any content moderation process, this should help automate some of that process for you.  In addition, Cognitive Services has Content Moderation for images, which can identify potentially suggestive images.

If you want to dive deeper, you can take a look at my code sample in GitHub.


