Deep learning models are often said to be uninterpretable, but several techniques are being explored to understand what happens inside these models. For images, the features learned by convnets are interpretable. We will explore two popular techniques for understanding convnets.
Understanding what a CNN model learns
Visualizing outputs from intermediate layers
Visualizing the outputs from intermediate layers helps us understand how the input image is transformed across the different layers. The output of each layer is often called an activation. To visualize the activations, we need to extract the outputs of intermediate layers, which can be done in different ways. PyTorch provides a method called register_forward_hook, which lets us attach a function to a layer so that the layer's output is captured while the forward pass runs.
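As a concrete illustration, here is a minimal sketch of this idea, assuming a pretrained VGG16 from torchvision and a random tensor standing in for a real preprocessed image; the LayerActivations helper class and the choice of layer are illustrative, not prescribed by the text above.

```python
import torch
from torchvision import models


class LayerActivations:
    """Stores the output of a given layer during the forward pass."""
    features = None

    def __init__(self, layer):
        # register_forward_hook calls hook_fn every time the layer runs forward
        self.hook = layer.register_forward_hook(self.hook_fn)

    def hook_fn(self, module, input, output):
        # Detach so the stored activation does not keep the autograd graph alive
        self.features = output.detach()

    def remove(self):
        # Remove the hook once we no longer need it
        self.hook.remove()


model = models.vgg16(pretrained=True).eval()

# Attach the hook to the first convolutional layer of VGG16's feature extractor
activations = LayerActivations(model.features[0])

# A dummy image batch stands in for a real preprocessed image here
img = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    model(img)

activations.remove()
print(activations.features.shape)  # e.g. torch.Size([1, 64, 224, 224])
```

The captured tensor can then be plotted channel by channel, for example with matplotlib, to inspect what that layer responds to in the input image.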