Understanding visual features
Deep learning models are often criticized for not being interpretable. A neural network-based model is frequently treated as a black box because it is difficult for humans to reason about how it arrives at its output. The transformations an image undergoes across the layers are non-linear because of the activation functions, so they cannot be visualized easily. Several methods have been developed to address this criticism by visualizing the layers of a deep network. In this section, we will look at these attempts to visualize the deep layers in an effort to understand how a model works.
Visualization can be done using the activations and gradients of the model. The activations can be visualized using the following techniques:
- Nearest neighbour: The activation of a layer can be taken for an image, and the images whose activations are nearest to it can be displayed together (see the sketch after this list).
- Dimensionality reduction: The dimension of the activation can be reduced with a technique such as principal component analysis (PCA) or t-SNE, so that the activations can be plotted in two or three dimensions.
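The following is a minimal sketch of both activation-based techniques: extracting one layer's activation with a forward hook, then (a) finding the nearest images in activation space and (b) reducing the activation to two dimensions with t-SNE for plotting. The choice of a pretrained ResNet-18, the batch of random tensors standing in for real images, and the parameter values are illustrative assumptions, not a prescribed implementation.

```python
import torch
import torchvision.models as models
from sklearn.manifold import TSNE
from sklearn.neighbors import NearestNeighbors

# Pretrained network whose intermediate activations we want to inspect.
model = models.resnet18(weights="IMAGENET1K_V1").eval()

# Capture the activation of one layer (the global average pool here) with a hook.
activations = []
def save_activation(module, inputs, output):
    activations.append(output.flatten(start_dim=1).detach())

hook = model.avgpool.register_forward_hook(save_activation)

# Stand-in batch: 16 "images" of random noise shaped like ImageNet inputs.
images = torch.randn(16, 3, 224, 224)
with torch.no_grad():
    model(images)
hook.remove()

features = activations[0].numpy()  # shape: (16, 512)

# (a) Nearest neighbour: which images lie closest to image 0 in activation space?
nn_index = NearestNeighbors(n_neighbors=4).fit(features)
distances, neighbours = nn_index.kneighbors(features[:1])
print("nearest neighbours of image 0:", neighbours[0])

# (b) Dimensionality reduction: project the 512-D activations to 2-D for plotting.
embedding = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(features)
print("2-D embedding shape:", embedding.shape)  # (16, 2)
```

With real images instead of noise, the nearest-neighbour indices reveal which inputs the layer considers similar, and the 2-D embedding can be scattered with class labels as colours to see how well the layer separates them.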