Evasion Attacks against Deployed AI
In the previous three chapters, we looked at adversarial attacks targeting model development and its dependencies. Adversaries, however, will not always have access to the model development process. This brings us to a new frontier: evasion attacks staged against deployed models. Evasion attacks are sophisticated techniques that adversaries can use to deceive and manipulate ML models at inference time, ultimately compromising their integrity. This chapter covers evasion attacks and how to defend against them with hands-on examples. We will use the Adversarial Robustness Toolbox (ART) and TextAttack to assess and enhance model resilience against evasion attacks.
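Before turning to ART, it helps to see the core idea behind the most common evasion technique, the Fast Gradient Sign Method (FGSM): nudge the input in the direction of the sign of the loss gradient so the model's prediction flips. The sketch below is a toy illustration in plain NumPy against a hand-picked logistic-regression "model" (the weights, inputs, and the `fgsm_perturb` helper are illustrative assumptions, not part of any library's API).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One FGSM step against a toy logistic-regression model.

    For binary cross-entropy loss, the gradient of the loss with
    respect to the input x is (p - y) * w, where p is the predicted
    probability. FGSM adds eps * sign(gradient) to the input.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Hand-picked toy model and input (illustrative values).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, 0.0])   # model score: 2*0.5 = 1.0, so p ≈ 0.73 (positive)
y = 1.0                    # true label

x_adv = fgsm_perturb(x, y, w, b, eps=0.4)
p_clean = sigmoid(np.dot(w, x) + b)      # ≈ 0.73, classified positive
p_adv = sigmoid(np.dot(w, x_adv) + b)    # ≈ 0.45, now classified negative
```

A small, bounded perturbation (each feature moves by at most `eps`) is enough to push the prediction across the decision boundary. ART's evasion attack classes, which we use later in this chapter, automate exactly this kind of gradient-guided perturbation for real models.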
By the end of this chapter, you will have a deeper comprehension of evasion attacks and practical experience in implementing and defending against them, setting a solid foundation for...