Summary
NN interpretation is a model understanding process that is distinct from explaining the predictions a model makes. Manually discovering real images that strongly activate a chosen neuron and optimizing synthetic images to do the same are two techniques that can be applied together to understand the NN (a minimal sketch of the latter follows below). Practically, interpreting NNs is useful when you want to reveal the visual pattern associated with a particular prediction label or class, gain insight into the factors contributing to the prediction of a specific label in your dataset (or all labels in general), or obtain a detailed breakdown of the reasons behind a prediction.
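As a quick reminder of how the synthetic-image technique works, here is a minimal sketch of activation maximization by gradient ascent. It assumes a PyTorch workflow; the pretrained model, the layer index, and the channel number are illustrative choices only and will differ for your own network and the neuron you want to interpret.

```python
import torch
from torchvision import models

# Any pretrained CNN works; VGG16 is used here purely for illustration.
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the input image will be optimized

# Hypothetical target: channel 42 of a mid-level convolutional layer.
target_layer = model.features[10]
target_channel = 42

# Capture the layer's output with a forward hook.
activation = {}
target_layer.register_forward_hook(
    lambda module, inp, out: activation.update(value=out)
)

# Start from random noise and optimize the image itself.
image = torch.rand(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    model(image)
    # Maximize the mean activation of the chosen channel
    # by minimizing its negative.
    loss = -activation["value"][0, target_channel].mean()
    loss.backward()
    optimizer.step()
    image.data.clamp_(0.0, 1.0)  # keep pixel values in a valid range

# `image` now approximates a synthetic input that strongly
# activates the chosen neuron and can be visualized directly.
```

The same loop can be pointed at a logit of the final layer instead of an intermediate channel when the goal is to visualize a class pattern rather than an individual neuron.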
There might be hiccups when applying the technique to your own use case, so don't be afraid to experiment with the parameters and components introduced in this chapter as you work toward interpreting your NN.
In the next chapter, we will explore a different facet of the insights you can obtain from your data and your model.