Summary
In this chapter, we first reviewed the existing MLflow APIs that could be used to implement explainability. Two of these APIs, mlflow.shap and mlflow.evaluate, have limitations and thus cannot support the complex DL model and pipeline explainability scenarios we need. We then focused on two main approaches to implementing SHAP explanations and explainers within the MLflow API framework: mlflow.log_artifact for logging explanations, and mlflow.pyfunc.PythonModel for logging a SHAP explainer. The log_artifact API allows us to log Shapley values and explanation plots to the MLflow tracking server. mlflow.pyfunc.PythonModel allows us to log a SHAP explainer as an MLflow pyfunc model, which opens the door to deploying a SHAP explainer as a web service to create an EaaS endpoint. It also allows us to use SHAP explainers through the MLflow pyfunc load_model or spark_udf APIs for large-scale offline batch explanation. This enables us to confidently...