Implementing an adversarial attack using the Fast Gradient Sign Method
We often think of highly accurate deep neural networks as robust models, but the Fast Gradient Sign Method (FGSM), proposed by none other than the father of GANs himself, Ian Goodfellow, showed otherwise. In this recipe, we'll perform an FGSM attack on a pre-trained model to see how, by introducing seemingly imperceptible changes, we can completely fool a network.
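At its core, FGSM takes a single step of size epsilon in the direction of the sign of the loss gradient with respect to the input image: x_adv = x + epsilon * sign(∇x J(θ, x, y)), where J is the loss function, θ are the model's weights, x is the input, and y is the true label. Because no pixel changes by more than epsilon, the perturbation is nearly invisible to the human eye, yet it can flip the model's prediction entirely.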
Getting ready
Let's install OpenCV with pip. We'll use it to save the images perturbed by the FGSM attack:
$> pip install opencv-contrib-python
Let's begin.
How to do it
After completing the following steps, you'll have successfully performed an adversarial attack:
- Import the dependencies:
import cv2
import tensorflow as tf
from tensorflow.keras.applications.nasnet import *
from tensorflow.keras.losses import CategoricalCrossentropy
- Define a function to preprocess an image, which entails...
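As a rough sketch of what such a function might look like (the exact implementation in this recipe may differ), preprocessing boils down to reading the image from disk, resizing it to the input shape the network expects, and scaling its pixel values with the preprocess_input() function imported from the nasnet module above. The 224x224 target size below assumes NASNetMobile; NASNetLarge expects 331x331 inputs:

def preprocess(image_path, target_size=(224, 224)):
    # Read and decode the image file (assumes a JPEG input).
    image = tf.io.read_file(image_path)
    image = tf.image.decode_jpeg(image, channels=3)

    # Resize to the input shape the network expects
    # (224x224 assumes NASNetMobile).
    image = tf.image.resize(tf.cast(image, tf.float32), target_size)

    # Scale pixel values to [-1, 1], the range NASNet was trained on.
    image = preprocess_input(image)

    # Add a batch dimension: (1, height, width, 3).
    return image[tf.newaxis, ...]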