Flagging and reviewing inappropriate content
Now that we have a complete overview of each of the Content Moderator APIs, let's look at what it takes to use the service in production. As mentioned previously, turning on Content Moderator follows the same process regardless of whether you are moderating images or text. If you're wondering why I didn't include video, read the last section, Applying the service for video moderation, to understand the current state of video content moderation. Creating the service in the Azure portal simply provisions the compute that processes the work required by the APIs, so the settings of the service itself are relatively unremarkable. What matters is how you intend to deploy the service as part of a larger architecture, and what security, networking, tagging, and other considerations you need to make. The following diagram is a sample Azure deployment to give you an...
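To make the point concrete that image and text moderation share the same provisioning model, here is a minimal Python sketch that builds (but does not send) a request to the Text Moderation Screen endpoint. The region and subscription key are placeholders, assumed to come from the Keys and Endpoint blade of your Content Moderator resource in the Azure portal; the image APIs use the same endpoint pattern and the same `Ocp-Apim-Subscription-Key` header, which is why creating the service is identical for both.

```python
import urllib.request

# Placeholder values -- substitute the region and key shown in the Azure
# portal for your own Content Moderator resource.
REGION = "westus"
SUBSCRIPTION_KEY = "<your-subscription-key>"

def build_text_screen_request(text: str) -> urllib.request.Request:
    """Build (without sending) a Text Moderation 'Screen' request.

    The image moderation operations live under the same
    /contentmoderator/moderate/v1.0/ path and authenticate with the
    same subscription-key header, so provisioning the service once
    covers both kinds of content.
    """
    url = (f"https://{REGION}.api.cognitive.microsoft.com"
           "/contentmoderator/moderate/v1.0/ProcessText/Screen")
    return urllib.request.Request(
        url,
        data=text.encode("utf-8"),
        headers={
            "Content-Type": "text/plain",
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        },
        method="POST",
    )

req = build_text_screen_request("Is this text OK?")
print(req.full_url)
```

Sending the request (for example with `urllib.request.urlopen(req)`) requires a live resource and valid key, which is exactly what creating the service in the portal provides; everything beyond that point is the deployment and networking design discussed next.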