Deep learning models often have huge numbers of parameters, sometimes tens of millions or more, which makes it difficult for humans to comprehend exactly what the models have learned, beyond the fact that they perform surprisingly well in computer vision (CV) and natural language processing (NLP). If someone around you feels perfectly comfortable using deep learning to solve each and every practical problem without a second thought, what we are about to learn in this chapter will help them realize the potential risks their models are exposed to.
Adversarial examples – attacking deep learning models
What are adversarial examples and how are they created?
Adversarial examples are a kind of sample, often created by modifying real data...