Understanding limitations of traditional model interpretation methods
In a nutshell, traditional interpretation methods only cover surface-level questions about your models, such as the following:
- In aggregate, do they perform well?
- What changes in hyperparameters may impact predictive performance?
- What latent patterns can you find between the features and their predictive performance?
These questions are very limiting if you are trying to understand not only whether your model works, but also why and how it works.
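To make the contrast concrete, here is a minimal sketch (using scikit-learn, with a hypothetical dataset and model choice not taken from the text) of the surface-level checks those questions correspond to: an aggregate performance metric and a ranked feature-importance list. Note that neither output tells you why any individual prediction was made.

```python
# A hedged sketch of "traditional" interpretation: aggregate metrics and
# impurity-based feature importances. Dataset and model are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

# "In aggregate, does it perform well?" -> a single summary number.
acc = accuracy_score(y_test, model.predict(X_test))
print(f"accuracy: {acc:.3f}")

# "What patterns exist between features and performance?" -> a global
# ranking of features, with no per-prediction explanation.
ranked = sorted(enumerate(model.feature_importances_),
                key=lambda pair: pair[1], reverse=True)
print("top 3 feature indices:", [idx for idx, _ in ranked[:3]])
```

Both outputs are global, aggregate views; they say nothing about how the model arrived at a specific prediction, which is exactly the gap discussed next.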
This gap in understanding can lead to unexpected issues with your model that won't necessarily be immediately apparent. Consider that models, once deployed, are not static but dynamic. They face different challenges than they did in the "lab" when you were training them. They may face not only performance issues but also issues with bias, such as the underrepresentation of minority classes, or with security, such as adversarial attacks. Realizing that...