How exactly will we do this? Well, we start by importing the Model class from the Keras functional API, which lets us define a new model. The key difference in our new model is that it can return multiple outputs, namely the outputs of intermediate layers. This is achieved by taking the layer outputs from a trained CNN (such as our smile detector) and feeding them into this new multi-output model. Essentially, our multi-output model will take an input image and return filter-wise activations for each of the eight layers in the smile detector model that we previously trained.
You can also limit the number of layers to visualize by using list slicing on model.layers, as follows:
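The code referred to here is not reproduced in this excerpt, so the following is a minimal sketch of the technique. The architecture of the stand-in model, the 64x64 grayscale input shape, and the variable names (other than activations) are assumptions for illustration; in practice you would load your trained smile detector instead of building a fresh one.

```python
import numpy as np
from tensorflow.keras import Model, layers

# Hypothetical stand-in for the trained smile detector; in the real workflow
# this model would already be trained and loaded from disk.
inputs = layers.Input(shape=(64, 64, 1))
x = layers.Conv2D(32, 3, activation="relu")(inputs)
x = layers.MaxPooling2D(2)(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.MaxPooling2D(2)(x)
x = layers.Conv2D(128, 3, activation="relu")(x)
x = layers.MaxPooling2D(2)(x)
x = layers.Flatten()(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = Model(inputs, outputs)

# Collect one output tensor per layer we want to inspect. Slicing
# model.layers limits how many layers are visualized; [1:] skips the
# InputLayer, leaving the eight computational layers of this sketch.
layer_outputs = [layer.output for layer in model.layers[1:]]

# A single-input, multi-output model: same input as the detector, but it
# returns the activations of every selected intermediate layer.
activation_model = Model(inputs=model.input, outputs=layer_outputs)

# Inference on one image yields a list with one activation array per layer.
img = np.random.rand(1, 64, 64, 1).astype("float32")
activations = activation_model.predict(img)
```

Each element of activations is a NumPy array whose shape matches the corresponding layer's output, so the first entry here would hold the 32 feature maps of the first convolutional layer.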
The last line of the preceding code defines the activations variable by having our multi-output model perform inference on...