What this book covers
Chapter 1, Getting Started with AI, covers key concepts and terms surrounding AI and ML to get us started with adversarial AI.
Chapter 2, Building Our Adversarial Playground, goes through the step-by-step setup of our environment and the creation of some basic models and our sample Image Recognition Service (ImRecS).
Chapter 3, Security and Adversarial AI, discusses how to apply traditional cybersecurity to our sample ImRecS and bypass it with a sample adversarial AI attack.
Chapter 4, Poisoning Attacks, covers data and model poisoning attacks and how to mitigate them, with examples from our ImRecS.
Chapter 5, Model Tampering with Trojan Horses and Model Reprogramming, looks at changing models by embedding code-based Trojan horses and how to defend against them.
Chapter 6, Supply Chain Attacks and Adversarial AI, covers traditional and new AI supply chain risks and mitigations, including building our own private package repository.
Chapter 7, Evasion Attacks against Deployed AI, explores fooling AI systems with evasion attacks and how to defend against them.
Chapter 8, Privacy Attacks – Stealing Models, looks at model extraction attacks to replicate models and how to mitigate these attacks, including watermarking.
Chapter 9, Privacy Attacks – Stealing Data, looks at model inversion and inference attacks to reconstruct or infer sensitive data from model responses.
Chapter 10, Privacy-Preserving AI, discusses techniques for preserving privacy in AI, including anonymization, differential privacy, homomorphic encryption, federated learning, and secure multi-party computation.
Chapter 11, Generative AI – A New Frontier, provides a hands-on introduction to generative AI with a focus on GANs.
Chapter 12, Weaponizing GANs for Deepfakes and Adversarial Attacks, explores how to use GANs to support adversarial attacks, including deepfakes, and how to mitigate these attacks.
Chapter 13, LLM Foundations for Adversarial AI, provides a hands-on introduction to LLMs using the OpenAI API and LangChain to create our sample Foodie AI bot with RAG.
Chapter 14, Adversarial Attacks with Prompts, explores prompt injections against LLMs and how to mitigate them.
Chapter 15, Poisoning Attacks and LLMs, looks at poisoning attacks with RAG, embeddings, and fine-tuning, using Foodie AI as an example, and appropriate defenses.
Chapter 16, Advanced Generative AI Scenarios, looks at poisoning the open source Mistral LLM via fine-tuning on Hugging Face, as well as model lobotomization, model replication, and inversion and inference attacks on LLMs.
Chapter 17, Secure by Design and Trustworthy AI, explores a methodology using standards-based taxonomies, threat modeling, and risk management to build secure AI with a case study combining predictive AI and LLMs.
Chapter 18, AI Security with MLSecOps, looks at MLSecOps patterns with examples of how to apply them using Jenkins, MLflow, and custom Python scripts.
Chapter 19, Maturing AI Security, discusses applying AI security governance and evolving AI security at an enterprise level.