Interpretability
I directed you toward a few interpretability techniques for machine learning models back in Chapter 10, Global Forecasting Models. While some of those, such as SHAP and LIME, can still be applied to deep learning models, none of them considers the temporal aspect by design; they were developed for more general purposes, such as classification and regression. That said, there has been some work on interpretability for DL and time series models. Here, I’ll list a few promising papers that tackle the temporal aspect head-on (a short sketch after the list illustrates why the generic approach falls short):
- TimeSHAP: This is a model-agnostic recurrent explainer that builds upon KernelSHAP and extends it to the time series domain. Research paper: https://dl.acm.org/doi/10.1145/3447548.3467166. GitHub: https://github.com/feedzai/timeshap.
- Instance-wise Feature Importance in Time (FIT): This is an interpretability technique that relies on the distribution shift between the predictive distribution...
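To make the limitation concrete, here is a minimal, self-contained sketch (the toy model, synthetic data, and parameter choices are my own for illustration, not taken from any of the papers above) of applying vanilla KernelSHAP from the `shap` library to a sequence input by flattening the (timestep, feature) pairs into one long feature vector. The explainer treats every flattened position as an unrelated feature, which is exactly the "no temporal awareness by design" issue the techniques above address.

```python
# A minimal sketch: generic KernelSHAP on a flattened sequence input.
# The "model" here is a toy function standing in for a trained forecaster.
import numpy as np
import shap

rng = np.random.default_rng(42)
timesteps, n_features = 12, 3

def model_predict(x_seq: np.ndarray) -> np.ndarray:
    # Toy forecaster: x_seq has shape (batch, timesteps, n_features) and the
    # prediction weights recent timesteps more heavily.
    weights = np.linspace(0.0, 1.0, timesteps)[None, :, None]
    return (x_seq * weights).sum(axis=(1, 2))

def predict_flat(x_flat: np.ndarray) -> np.ndarray:
    # KernelSHAP expects a flat (batch, timesteps * n_features) input, so we
    # reshape back to a sequence before calling the model.
    return model_predict(x_flat.reshape(-1, timesteps, n_features))

X = rng.normal(size=(200, timesteps, n_features))
background = X[:50].reshape(50, -1)  # representative background sample

explainer = shap.KernelExplainer(predict_flat, background)

# Attribute one window, then reshape so each value is the contribution of
# one feature at one timestep.
shap_values = explainer.shap_values(X[100].reshape(1, -1), nsamples=200)
attributions = np.array(shap_values).reshape(timesteps, n_features)
print(attributions.shape)  # (12, 3)
```

The explainer produces per-timestep, per-feature attributions, but only because we reshaped them afterward; the sampling itself ignores the ordering of timesteps. Methods such as TimeSHAP start from the same KernelSHAP machinery but perturb and prune events along the sequence to account for that ordering.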