Securing our adversarial playground
In this section, we will highlight security concerns that arise in AI/ML development and show how to address them in practice. We'll cover how to secure the deployment of the image recognition service we developed in the previous chapter, which uses a pre-trained CIFAR-10 CNN. For brevity, we will call this service ImRecS from now on.
Our goal is to demonstrate the concepts rather than create a blueprint for production security.
In the previous chapter, we used a simple Python test client for the API. To demonstrate the service more effectively, we have written a simple web app that lets you browse for an image and upload it to test the ImRecS API:

Figure 3.1 – The ImRecS web app
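Under the hood, the web app forwards the uploaded image to the ImRecS API as a `multipart/form-data` POST request. The following sketch shows how such a request can be built with only the Python standard library; the endpoint URL and the `file` field name are assumptions for illustration, not the actual ImRecS API contract:

```python
import io
import uuid

# Hypothetical ImRecS endpoint -- adjust to your deployment.
API_URL = "http://localhost:5000/predict"

def build_multipart(field: str, filename: str, payload: bytes,
                    content_type: str = "image/png"):
    """Encode a file as a multipart/form-data body, as the web app
    would when forwarding an uploaded image to the ImRecS API.
    Returns the request headers and the encoded body."""
    boundary = uuid.uuid4().hex
    body = io.BytesIO()
    body.write(f"--{boundary}\r\n".encode())
    body.write(
        f'Content-Disposition: form-data; name="{field}"; '
        f'filename="{filename}"\r\n'.encode()
    )
    body.write(f"Content-Type: {content_type}\r\n\r\n".encode())
    body.write(payload)
    body.write(f"\r\n--{boundary}--\r\n".encode())
    headers = {"Content-Type": f"multipart/form-data; boundary={boundary}"}
    return headers, body.getvalue()

if __name__ == "__main__":
    headers, body = build_multipart("file", "cat.png", b"<image bytes>")
    print(headers["Content-Type"])
    # To actually call the running service, the body could be sent with,
    # for example, urllib.request:
    #   req = urllib.request.Request(API_URL, data=body, headers=headers)
    #   print(urllib.request.urlopen(req).read())
```

In practice, a library such as `requests` handles this encoding for you; the point here is to make visible exactly what crosses the network boundary, since that is the surface we will be securing.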
This is what our playground looks like:

Figure 3.2 – Adversarial AI playground – high-level architecture
We use Docker containers to package our web app and API, both of which are hosted on a Linux host...