Automatically moderating user content
Using the Content Moderator API, we can add monitoring of user-generated content. The API is designed to help flag, assess, and filter offensive and unwanted content.
Types of content moderation APIs
In this section, we will quickly go through the key features of the moderation APIs.
Note
A reference to the documentation for all APIs can be found at https://docs.microsoft.com/nb-no/azure/cognitive-services/content-moderator/api-reference.
Image moderation
The image moderation API allows you to moderate images for adult and inappropriate content. It can also extract textual content and detect faces in images.
When evaluating an image for inappropriate content, the API takes the image as input and returns a Boolean value indicating whether the image is appropriate or not, along with a corresponding confidence score between 0 and 1. The Boolean value is set based on a set of default thresholds.
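To make this concrete, the following is a minimal sketch in Python, calling the image moderation Evaluate operation over REST with the requests library. The endpoint and subscription key are placeholders for your own Content Moderator resource, and the response field names follow the API reference linked in the note above.

```python
import requests

# Placeholder values - replace with your own Content Moderator resource details.
ENDPOINT = "https://<your-resource-name>.cognitiveservices.azure.com"
SUBSCRIPTION_KEY = "<your-subscription-key>"

def evaluate_image(image_url: str) -> dict:
    """Send an image URL to the Evaluate operation and return the raw JSON result."""
    response = requests.post(
        f"{ENDPOINT}/contentmoderator/moderate/v1.0/ProcessImage/Evaluate",
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Content-Type": "application/json",
        },
        json={"DataRepresentation": "URL", "Value": image_url},
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    result = evaluate_image("https://example.com/uploaded-image.jpg")
    # Boolean flags plus confidence scores between 0 and 1, as described above.
    print("Adult:", result.get("IsImageAdultClassified"),
          "score:", result.get("AdultClassificationScore"))
    print("Racy: ", result.get("IsImageRacyClassified"),
          "score:", result.get("RacyClassificationScore"))
```

In a real application, you would typically compare these scores against your own thresholds or review queue rules rather than relying solely on the default Boolean classification.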
If the image contains...