Understanding current MLflow explainability integration
MLflow supports explainability integration in several ways. When implementing explainability, we distinguish between two types of artifacts: explainers and explanations:
- An explainer is an explainability model. A common choice is a SHAP explainer, which comes in several variants, such as TreeExplainer, KernelExplainer, and PartitionExplainer (https://shap.readthedocs.io/en/latest/generated/shap.explainers.Partition.html). For computational efficiency, we usually choose PartitionExplainer for DL models.
- An explanation is an artifact that captures some form of output from the explainer, which could be text, numerical values, or plots. Explanations can be produced offline during training or testing, or online in production. Thus, if we want to know why the model makes certain predictions, we should be able to provide an explainer for offline evaluation or an explainer endpoint for online queries, as sketched right after this list.
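The following is a minimal sketch of how these two artifacts relate in code, assuming `classify` is some callable text classifier (for example, a Hugging Face sentiment pipeline); the artifact paths and the example text are illustrative, not prescribed by MLflow:

```python
import mlflow
import shap

# Assumption: `classify` maps a list of texts to prediction scores,
# e.g. classify = transformers.pipeline("sentiment-analysis")
masker = shap.maskers.Text(r"\W")                        # simple regex-based text masker
explainer = shap.explainers.Partition(classify, masker)  # the explainer (a SHAP model)

texts = ["MLflow makes tracking deep learning experiments easier."]
shap_values = explainer(texts)                           # the explanation for these inputs

with mlflow.start_run():
    # Log the explainer itself so it can be reloaded for offline evaluation
    # or wrapped behind an endpoint for online queries.
    mlflow.shap.log_explainer(explainer, artifact_path="shap_explainer")
    # Log the explanation values as an offline artifact of this run.
    mlflow.log_text(str(shap_values.values), "shap_explanation.txt")
```

In this sketch, the explainer is logged once and can be reused, while explanations are per-input artifacts tied to a specific run.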
Here...