Summary
In this chapter, we explored the boundaries of XAI tools. We found that although an XAI tool may be model-agnostic, it is not dataset-agnostic! An XAI tool might work well for text classification but not for images. It might even work with some text classification datasets and not others.
We first described why a model-agnostic XAI tool cannot also be dataset-agnostic. We used the knowledge gathered in the previous chapters to explain the limitations an XAI tool runs into when it reaches the boundaries of a prediction.
We saw how the interception function developed in Chapter 4, Microsoft Azure Machine Learning Model Interpretability with SHAP, introduced samples containing pseudo-anchors into the IMDb dataset. SHAP then interpreted these values as anchors and produced a SHAP explanation.
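To make the underlying idea concrete, here is a minimal, self-contained sketch of the kind of perturbation-based attribution that SHAP (and LIME) generalize: mask a token, re-score the text, and treat the change in the model's output as that token's contribution. The keyword-based `score` function is a hypothetical stand-in for a trained classifier, not the book's actual interception function or the SHAP library itself.

```python
# Hypothetical toy sentiment scorer standing in for a trained IMDb
# classifier (the chapters use SHAP/LIME against real models).
POSITIVE = {"great", "excellent", "good"}
NEGATIVE = {"bad", "boring", "awful"}

def score(tokens):
    # Positive keyword count minus negative keyword count.
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

def word_attributions(tokens):
    """Crude leave-one-out attribution: how much does removing each
    token change the model's score? Perturbation-based explainers
    such as SHAP and LIME generalize this idea over many masked
    variants of the input."""
    base = score(tokens)
    return {t: base - score([u for u in tokens if u != t]) for t in tokens}

review = "a great film with an excellent cast but a boring ending".split()
attrs = word_attributions(review)
print(attrs["great"], attrs["boring"], attrs["film"])  # 1 -1 0
```

Tokens that push the score up get positive attributions, tokens that push it down get negative ones, and neutral tokens get zero — a dataset-dependent outcome: swap in a vocabulary the scorer was never built for, and the attributions become meaningless, which is the boundary this chapter explores.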
We transformed the prediction process of the model built in Chapter 6, AI Fairness with Google's What-If Tool (WIT), into an anchor explanation.
We also used the LIME...