Part 2: Model Development Attacks
In this part, we will cover adversarial attacks that target model development in AI. You will learn the basics of poisoning attacks, which change model behavior and create backdoors, and how to use the Adversarial Robustness Toolbox (ART) to implement various poisoning attacks and their corresponding defenses. We will also look at other approaches to altering a model, such as tampering with it via Trojan horses, and we will build an Android app to demonstrate this in action. Finally, we will look at how attackers can use packages, pre-trained models, pickle serialization, and public datasets to attack model integrity without having direct access to our development environment. You will learn how to mitigate these threats and build a secure data science environment with a private package repository, DevSecOps and vulnerability scanning, and MLOps with MLflow.
This part has the following chapters:
- Chapter 4, Poisoning Attacks
- Chapter 5, Model...