Visualizing the activations
Now we can train a neural network. Great. But what exactly is the network able to see and understand? That's a difficult question to answer, but since convolutional layers output images, we can try to visualize them. Let's display the activations for the first 10 images of the MNIST test dataset:
- First, we need to build a model, derived from our previous one, that reads from the same input but outputs the convolutional layer we want to inspect. Layer names can be taken from the model summary; we will visualize the first convolutional layer, `conv2d_1`:

```python
conv_name = 'conv2d_1'
num_predictions = 10
conv_layer = next(x.output for x in model.layers
                  if x.output.name.startswith(conv_name))
act_model = models.Model(inputs=model.input, outputs=[conv_layer])
activations = act_model.predict(x_test[0:num_predictions, :, :, :])
```
- Now, for each test image, we can take all of its activation maps and chain them together into a single image:
```python
col_act = []
for pred_idx, act in enumerate(activations):
    ...
```
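The loop body is truncated in the text above. One plausible completion is sketched below with NumPy: each test image's activation maps are placed side by side with `hstack`, and the per-image rows are stacked with `vstack`. The variable names `col_act`, `pred_idx`, and `act` come from the fragment; the placeholder `activations` array is an assumption standing in for the real output of `act_model.predict()`.

```python
import numpy as np

# Placeholder data with the shape predict() would return:
# (num_images, height, width, channels). This is an assumption
# standing in for the real activations of conv2d_1.
num_predictions = 10
activations = np.random.rand(num_predictions, 24, 24, 32)

col_act = []
for pred_idx, act in enumerate(activations):
    # act has shape (height, width, channels); place each channel's
    # activation map side by side to form one wide row image.
    row = np.hstack([act[:, :, c] for c in range(act.shape[-1])])
    col_act.append(row)

# Stack the per-image rows vertically: one row per test image.
tiled = np.vstack(col_act)
print(tiled.shape)  # (num_predictions * height, width * channels)
```

The resulting `tiled` array can then be shown with any image-plotting routine, e.g. `plt.imshow(tiled, cmap='gray')`.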