Next, we perform a utility search to find the index of the last densely connected layer in the model. We want this layer because it outputs the class probability scores per output category, which we need in order to visualize the saliency on the input image. The layer's name can be found in the model summary (model.summary()). We will pass four specific arguments to the visualize_saliency() function.
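A minimal sketch of how this step might look with keras-vis is shown below. It assumes the standard Keras ResNet50 layer name 'fc1000' for the final dense layer, a list named images holding the six preprocessed 224 x 224 leopard photos, and 288 as the ImageNet class index for 'leopard'; these names and values are illustrative rather than the exact code used here:

```python
import numpy as np
from keras.applications.resnet50 import ResNet50
from vis.utils import utils
from vis.visualization import visualize_saliency

model = ResNet50(weights='imagenet')

# Utility search: look up the index of the last densely connected layer by name.
# 'fc1000' is the default name of ResNet50's final Dense layer (see model.summary()).
layer_idx = utils.find_layer_idx(model, 'fc1000')

# 'images' is assumed to be a list of six preprocessed 224 x 224 leopard images.
# filter_indices=288 targets the ImageNet 'leopard' class (assumed index).
gradients = [
    visualize_saliency(model, layer_idx, filter_indices=288, seed_input=img)
    for img in images
]
```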
This will return the gradients of our output with respect to our input, which intuitively tell us which pixels have the largest effect on our model's prediction. The gradient variable stores six 224 x 224 images (matching the input size of the ResNet50 architecture), one for each of the six input images of leopards. As we noted, these images are generated by the visualize_saliency function, which takes four arguments as input:
- A seed input image...