In this section, we will explore what the convolutional layers learn during training. Understanding what these layers are learning will not only help us generate art, but will also give us useful insight and more opportunities to improve our models.
First, we will see how to visualize a layer's knowledge using the technique from the paper by Zeiler and Fergus (2013), Visualizing and Understanding Convolutional Networks, which does a great job of revealing what the layers are learning. Then we will look at a few examples from the same paper.
Let's look at the VGG-16 architecture, which we studied previously in Chapter 3, Transfer Learning and Deep CNN Architectures. It consists of several convolutional layers followed by fully connected layers and a softmax layer:
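To make that layer stack concrete, the following is a minimal sketch that loads a pre-trained VGG-16 and prints its layers. It assumes Keras/TensorFlow purely for illustration, which may differ from the framework used elsewhere in this book:

```python
# A minimal sketch, assuming the Keras implementation of VGG-16; the
# framework used elsewhere in this book may differ.
from tensorflow.keras.applications import VGG16

# Load VGG-16 pre-trained on ImageNet, including the fully connected
# layers and the final softmax classifier on top.
model = VGG16(weights="imagenet", include_top=True)

# Print the full stack of layers: five convolutional blocks followed by
# the fully connected layers and the softmax output.
model.summary()
```

In Keras, the convolutional layers are named block1_conv1 through block5_conv3; it is from layers like these that we will pick one when we want to inspect what the network has learned.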
To train the model, we have to pick one of the layers; let's consider the second convolutional...