What is model interpretability?
Deep learning owes much of its popularity to sophisticated models such as deep neural networks (DNNs) that can perform complex tasks. If trained properly, these models are not only accurate but also generalize well to real-world data. With its ability to extract novel insights through automatic feature extraction and to identify complex relationships in massive datasets, DL has shown superior performance compared to state-of-the-art conventional methods. The promise of these DL models, however, comes with some limitations. Because of their black-box nature, DL models struggle to explain the relationship between their inputs and their predicted outputs; in other words, they lack model interpretability. It is practically impossible for a human to follow the reasoning behind a particular prediction made by such a black-box model. You might be wondering: if DL models perform and generalize well, why would you not trust that the model is making the right decisions? There can...