Poisoning Attacks
In the previous chapter, we explored AI security and the limitations of traditional cybersecurity in defending against adversarial AI. We staged our first adversarial attack against a deployed model and surveyed adversarial AI and its main types of attacks.
In this chapter, we will delve deeper into adversarial AI and, more specifically, into attacks staged during the development of an ML model. These are known as poisoning attacks, and they aim to compromise the model’s integrity. We will cover the following topics:
- The basics of poisoning attacks
- Staging a simple poisoning attack
- Backdoor poisoning attacks
- Hidden-trigger backdoor attacks
- Clean-label attacks
- Advanced poisoning attacks
- Mitigation and defenses
By the end of this chapter, you will be able to do the following:
- Understand poisoning attacks and distinguish their types and approaches
- Develop simple data poisoning attacks using training dataset manipulation...
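To make the last outcome concrete, here is a minimal sketch of a label-flipping poisoning attack, the simplest form of training dataset manipulation. The toy dataset, the logistic regression model, and the 10% flip rate are illustrative assumptions, not the chapter's exact examples:

```python
# A minimal, hypothetical sketch of label-flipping data poisoning.
# The dataset, model, and 10% poisoning rate are illustrative choices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

# Build a toy binary classification dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Train a clean baseline model for comparison.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poison the training set: flip the labels of a random 10% of samples.
y_poisoned = y_train.copy()
n_poison = int(0.10 * len(y_poisoned))
idx = rng.choice(len(y_poisoned), size=n_poison, replace=False)
y_poisoned[idx] = 1 - y_poisoned[idx]

# Retrain on the poisoned labels and compare test accuracy.
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print(f"Clean accuracy:    {clean_model.score(X_test, y_test):.3f}")
print(f"Poisoned accuracy: {poisoned_model.score(X_test, y_test):.3f}")
```

Even a naive attack like this typically degrades test accuracy; the backdoor, hidden-trigger, and clean-label attacks covered later in the chapter are stealthier variations on the same idea of manipulating training data.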