Part 3: Attacks on Deployed AI
In this part, you will learn how to attack AI after it has been developed and deployed. We will cover what evasion attacks are, the role of carefully crafted payloads called perturbations in evading AI, and popular techniques for generating perturbations. You will use the Adversarial Robustness Toolbox (ART) to stage evasion attacks on image recognition models and TextAttack to do the same for NLP models. We will also cover privacy attacks: you will learn how to steal models by creating close approximations of them with model extraction attacks, how to reconstruct training data from model outputs, and how to use advanced adversarial techniques to infer sensitive data from model responses. Finally, we will look at mitigations and defenses, and you will learn both basic and advanced techniques for protecting privacy in AI.
This part has the following chapters:
- Chapter 7, Evasion Attacks against Deployed AI
- Chapter 8, Privacy Attacks – Stealing Models
- Chapter 9, Privacy Attacks – Stealing Data
- Chapter 10...