Summary
In this chapter, we covered essential security concepts and applied them in practice to our adversarial AI playground and the sample CIFAR-10 CNN AI service, ImRecS.
We took you through the journey of hardening an AI solution by strengthening the deployment environment and securing the artifacts we deploy, including the model, source code, third-party libraries, secrets, cryptographic material, and containers. We also demonstrated why traditional cybersecurity is not adequate on its own to safeguard against adversarial AI attacks.
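As a minimal illustration of one of the artifact-securing steps recapped above, the sketch below verifies a model file's integrity by comparing its SHA-256 digest against a digest recorded at build time, refusing to load the artifact on a mismatch. The file path and expected digest are hypothetical placeholders, not the chapter's actual values.

```python
import hashlib
from pathlib import Path

# Hypothetical path and digest -- substitute your own artifact and the
# digest recorded when the model was built and signed off.
MODEL_PATH = Path("models/imrecs_cifar10_cnn.h5")
EXPECTED_SHA256 = "replace-with-the-digest-recorded-at-build-time"


def sha256_of(path: Path, chunk_size: int = 8192) -> str:
    """Compute the SHA-256 digest of a file, streaming in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_model(path: Path, expected: str) -> None:
    """Raise if the model artifact's digest does not match the expected one."""
    actual = sha256_of(path)
    if actual != expected:
        raise RuntimeError(f"Model integrity check failed for {path}: {actual}")


verify_model(MODEL_PATH, EXPECTED_SHA256)
```

A check like this catches tampering with the stored model file, but, as the chapter argued, it is no defense against attacks that corrupt the model before it is saved, which is where adversarial-AI-specific controls come in.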
In the following chapters, we will delve into the details of adversarial AI through practical, hands-on exploration, looking at how the various attacks work, how to defend against them, and how to integrate these defenses into MLOps to create MLSecOps. In the next chapter, we will start with poisoning attacks, the adversarial AI attack that targets model training.