It would be useful now to identify what we could improve in our network and to see how our filters are reacting. Intuitively, we can map each input pixel to its contribution toward a given classification. This helps us understand why our model is not working correctly: we can see which parts of the image misled it and where there is room for improvement.
We will now see how this mapping can be used to further improve our network. To make it easier to understand and follow, we will work with the MNIST dataset.
Let's start with a saliency map, which highlights the importance of each pixel to a classification and can be regarded as a form of image segmentation. The map highlights the specific areas of the image that contributed most to the classification, as shown in the following...
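To make the idea concrete, here is a minimal sketch of a gradient-based saliency map for an MNIST classifier. The small dense model and one-epoch training step are placeholders so the example is self-contained, not the exact network discussed in the text; the key part is the `saliency_map` function, which takes the gradient of the winning class score with respect to the input pixels and uses its magnitude as a per-pixel importance score.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Load MNIST and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0

# Placeholder classifier, trained briefly just so the example runs end to end.
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x_train, y_train, epochs=1, batch_size=128, verbose=0)

def saliency_map(model, image):
    """Gradient of the top class score with respect to the input pixels."""
    image = tf.convert_to_tensor(image[np.newaxis, ...])
    with tf.GradientTape() as tape:
        tape.watch(image)                      # track gradients w.r.t. the input
        preds = model(image)
        top_class = int(tf.argmax(preds[0]))   # predicted class for this image
        top_score = preds[0, top_class]
    grads = tape.gradient(top_score, image)
    # Absolute gradient magnitude: how strongly each pixel influences the score.
    return tf.abs(grads)[0].numpy()

sal = saliency_map(model, x_test[0])
print(sal.shape)  # (28, 28) -- one importance value per input pixel
```

Plotting `sal` as a heatmap (for example with `matplotlib.pyplot.imshow`) over the original digit reveals which strokes drove the prediction.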