Model explicability versus catastrophic forgetting
Monitoring model performance is generally a good way to keep track of your models, and it will help you detect that something, somewhere in the model, has gone wrong. In most cases, this alone is a sufficient alerting mechanism and will help you manage your models in production.
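As a simple illustration of this kind of alerting, the sketch below compares a model's current score on fresh data against a score recorded at deployment time. The function name check_performance, the baseline_score argument, and the tolerance threshold are hypothetical choices for demonstration, not part of any particular library:

```python
# A minimal sketch, assuming scikit-learn and a classification model,
# of a performance check that flags when a model degrades in production.
from sklearn.metrics import accuracy_score

def check_performance(model, X_new, y_new, baseline_score, tolerance=0.05):
    """Return True if the model's accuracy on new data has dropped more than
    `tolerance` below the baseline recorded at deployment, signalling that
    something may have gone wrong."""
    current_score = accuracy_score(y_new, model.predict(X_new))
    return (baseline_score - current_score) > tolerance
```

A check like this tells you *that* something changed, but not *what* changed inside the model.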
If you want to understand exactly what has gone wrong, however, you'll need to dig deeper into the model itself. Looking only at performance is a black-box approach; alternatively, we can extract things such as trees, coefficients, variable importance, and the like to see what has actually changed inside the model.
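To make this concrete, the following is a minimal sketch, assuming scikit-learn and a small synthetic dataset, of how such internals can be pulled out of two fitted models: the coefficients of a linear model and the variable importances of a tree ensemble:

```python
# A minimal sketch, assuming scikit-learn, showing how to inspect model
# internals rather than relying only on performance scores.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=0)

# Linear model: the fitted coefficients show how each feature is weighted.
linear = LinearRegression().fit(X, y)
print("coefficients:", linear.coef_)

# Tree ensemble: variable importances show which features drive the splits.
forest = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
print("feature importances:", forest.feature_importances_)
```

Comparing these values between the previously deployed model and a newly retrained one is one way to see where the fit has shifted.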
There is no one-size-fits-all method for diving deep into models. Each model category has its own specific way of fitting the data, so inspecting a model's fit depends strongly on the model itself. In the remainder of this section, however, we will look at two very common structures in machine...