In the final method, we visualize the overall activations associated with a particular output class, without explicitly passing our model an input image. This method is quite intuitive, and its outputs can be aesthetically striking. For our last experiment, we import yet another pretrained model, the VGG16 network. This deep architecture comes from the 2014 ImageNet challenge, where it placed first in the localization task and second in classification. As in our last example, we swap out the Softmax activation of the final layer for a linear one:
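A minimal sketch of the swap follows. Note that in Keras, reassigning a layer's `activation` attribute only takes effect once the model is re-serialized (keras-vis wraps this step in `utils.apply_modifications`); the helper below, a hypothetical name introduced here for illustration, does the save/reload explicitly. A tiny stand-in model keeps the example self-contained; with the real network you would load `keras.applications.VGG16(weights="imagenet")` instead.

```python
import os
import tempfile

from tensorflow import keras
from tensorflow.keras import activations


def swap_softmax_for_linear(model):
    """Replace the output layer's softmax with a linear activation.

    Setting the attribute alone is not enough: the change only takes
    effect after the model is saved and reloaded.
    """
    model.layers[-1].activation = activations.linear
    path = os.path.join(tempfile.mkdtemp(), "modified_model.h5")
    model.save(path)
    return keras.models.load_model(path)


# Tiny stand-in for VGG16, used so the snippet runs without downloads.
model = keras.Sequential([
    keras.layers.Input(shape=(8,)),
    keras.layers.Dense(4, activation="relu"),
    keras.layers.Dense(3, activation="softmax"),
])
model = swap_softmax_for_linear(model)
```

After the reload, the last layer emits raw class scores rather than softmax probabilities, which gives cleaner gradients for the visualization step.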
Then we import the activation visualizer from the visualization module implemented in keras-vis. We plot the overall activations for the leopard class by passing the visualize_activation function our model, the index of the output layer, and the filter index corresponding to our output...
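Under the hood, this visualization is gradient ascent on the input: start from noise and repeatedly nudge the image so the chosen output unit's (now linear) score grows. The sketch below implements that core loop directly with a `tf.GradientTape`, since keras-vis's `visualize_activation` adds regularizers on top of the same idea; the function name and the tiny stand-in network are assumptions for illustration, and with VGG16 you would pass the ImageNet class index for the leopard instead of 0.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras


def maximize_class_activation(model, class_idx, img_shape,
                              steps=30, lr=10.0):
    """Gradient-ascent sketch of activation maximization.

    Starts from random noise and climbs the gradient of the chosen
    output unit's score with respect to the input image.
    """
    img = tf.Variable(
        np.random.uniform(-0.5, 0.5, (1,) + img_shape).astype("float32"))
    for _ in range(steps):
        with tf.GradientTape() as tape:
            score = model(img)[0, class_idx]
        grad = tape.gradient(score, img)
        grad /= tf.norm(grad) + 1e-8  # normalize for stable step sizes
        img.assign_add(lr * grad)
    return img.numpy()[0]


# Tiny stand-in network with a linear output, as in the text.
toy = keras.Sequential([
    keras.layers.Input(shape=(8, 8, 1)),
    keras.layers.Flatten(),
    keras.layers.Dense(3),
])
result = maximize_class_activation(toy, class_idx=0, img_shape=(8, 8, 1))
```

The returned array is the synthesized input itself, which is what gets plotted: an image the network "wants to see" for that class.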