To understand how to perform an adversarial attack on an image, let's first look at how regular predictions are made using transfer learning; then we will figure out how to tweak the input image so that its predicted class changes completely, even though the image itself is barely altered.
Generating images that can fool a neural network using adversarial attack
Getting ready
Let's go through an example where we will try to identify the class of the object within the image:
- Read the image of a cat
- Preprocess the image so that it can then be passed to the Inception v3 network
- Import the pre-trained Inception v3 model
- Predict the class of the object present in the image
- The image will be predicted as a Persian...