Summary
In this chapter, we have briefly explored how to explain or interpret the decisions made by deep learning models using PyTorch. Taking the handwritten digits classification model as an example, we first uncovered the internal workings of a CNN's convolutional layers, demonstrating how to visualize the convolutional filters and the feature maps those layers produce.
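As a refresher, the following is a minimal sketch of that visualization workflow. The model architecture, the layer name `conv1`, and the random `image` tensor are all placeholders standing in for the chapter's trained digits classifier and a real MNIST input:

```python
import torch
import torch.nn as nn
import matplotlib.pyplot as plt

# Hypothetical stand-in for the chapter's trained digits classifier; only
# the first convolutional layer matters for this visualization sketch.
class SmallCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3)  # 16 filters of size 3x3
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3)
        self.fc = nn.Linear(32 * 24 * 24, 10)

    def forward(self, x):
        x = torch.relu(self.conv1(x))
        x = torch.relu(self.conv2(x))
        return self.fc(x.flatten(1))

model = SmallCNN()                 # in the chapter, this would be trained
model.eval()
image = torch.rand(1, 1, 28, 28)   # placeholder for one MNIST digit

# Filters: conv1.weight has shape (out_channels, in_channels, kH, kW);
# for grayscale inputs, in_channels is 1.
filters = model.conv1.weight.detach()
fig, axes = plt.subplots(1, 8)
for i, ax in enumerate(axes):
    ax.imshow(filters[i, 0].cpu().numpy(), cmap="gray")
    ax.axis("off")
plt.show()

# Feature maps: pass the input through the first layer only and plot
# a few of the resulting activation maps.
with torch.no_grad():
    fmaps = model.conv1(image)     # shape: (1, 16, 26, 26)
fig, axes = plt.subplots(1, 8)
for i, ax in enumerate(axes):
    ax.imshow(fmaps[0, i].cpu().numpy(), cmap="gray")
    ax.axis("off")
plt.show()
```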
We then used Captum, a dedicated third-party model interpretability library built on PyTorch. Using its out-of-the-box implementations of feature attribution techniques, such as saliency, integrated gradients, and DeepLift, we demonstrated how a model uses an input to make a prediction and which parts of that input matter most to the prediction.
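The sketch below recaps how those Captum attribution classes are typically invoked. As before, the small model and random input are hypothetical stand-ins for the trained classifier and a real image:

```python
import torch
import torch.nn as nn
from captum.attr import Saliency, IntegratedGradients, DeepLift

# Hypothetical stand-in model and input, as in the previous sketch.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 26 * 26, 10),
)
model.eval()
image = torch.rand(1, 1, 28, 28)

# Explain the class the model itself predicts for this input.
target = model(image).argmax(dim=1).item()

# Saliency: gradient of the target score with respect to the input pixels.
sal_attr = Saliency(model).attribute(image, target=target)

# Integrated gradients: accumulates gradients along a straight-line path
# from a baseline (all zeros by default) to the actual input.
ig_attr = IntegratedGradients(model).attribute(image, target=target)

# DeepLift: compares activations against those for a reference (baseline).
dl_attr = DeepLift(model).attribute(image, target=target)

# Each attribution tensor has the same shape as the input; larger magnitudes
# mark pixels that contribute more to the target prediction.
print(sal_attr.shape, ig_attr.shape, dl_attr.shape)
```

Each attribution map can then be plotted alongside the original digit to see which pixels drive the prediction, as was done in the chapter.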
In the next, and final, chapter of this book, we will learn how to rapidly train and test machine learning models in PyTorch – a skill that is useful for quickly iterating...