Chapter 10: Implementing DL Explainability with MLflow
As we learned in the previous chapter, the importance of deep learning (DL) explainability is now well established. To implement DL explainability in a real-world project, it is desirable to log the explainer and its explanations as artifacts in the MLflow server, just like other model artifacts, so that we can easily track and reproduce them. The integration of DL explainability tools such as SHAP (https://github.com/slundberg/shap) with MLflow supports several implementation mechanisms, and it is important to understand which of them fits a given DL explainability scenario. In this chapter, we will explore several ways to integrate SHAP explanations into MLflow using different MLflow capabilities. As both explainability tools and DL models are rapidly evolving, we will also highlight the current limitations of, and workarounds for, using MLflow for DL explainability implementation...