Summary
After reading this chapter, you should understand how attacks can be perpetrated against machine learning models, with a focus on evasion attacks. You should know how to perform FGSM, BIM, PGD, C&W, and AP attacks, and how to defend against them with spatial smoothing, adversarial training, and randomized smoothing. Last but not least, you should know how to evaluate and certify adversarial robustness. The next chapter is the last one, and it outlines some ideas on what's next for machine learning interpretation.
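As a quick refresher on the simplest of the attacks listed above, FGSM perturbs an input by a small step in the direction of the sign of the loss gradient. The sketch below illustrates this on a hand-rolled logistic-regression model; the weights, data, and epsilon value are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step FGSM: x_adv = x + eps * sign(grad_x of the loss)."""
    p = sigmoid(w @ x + b)        # predicted probability of class 1
    grad_x = (p - y) * w          # gradient of binary cross-entropy w.r.t. x
    return x + eps * np.sign(grad_x)

# Illustrative model and input (assumed, not from the chapter)
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, 0.2])          # originally classified as class 1
y = 1.0

x_adv = fgsm(x, y, w, b, eps=0.3)
print(sigmoid(w @ x + b) > 0.5)      # original prediction is class 1
print(sigmoid(w @ x_adv + b) > 0.5)  # adversarial input flips the prediction
```

BIM and PGD extend this same idea by applying many small FGSM-style steps, with PGD also projecting the perturbed input back into an epsilon-ball around the original.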