Reviewing model-agnostic explainability
Model-agnostic XAI methods apply uniformly to any model type. They are typically used for post hoc explainability, after a model has been trained. Their goal is to produce explanations that are faithful to the original model without requiring access to its internal structure, which provides flexibility in model selection. Model-agnostic methods are therefore especially relevant for complex, opaque models whose inner workings are difficult to extract. Because the same method can be applied to many different models, they are also well suited to comparing models against one another.
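The "same method, any model" property above can be sketched with a concrete post hoc technique. The snippet below applies permutation importance (one simple model-agnostic method, chosen here for illustration; the dataset and models are assumptions, not from the text) unchanged to two structurally different trained models, treating each purely as a black box:

```python
# Sketch: post hoc, model-agnostic explanation applied to two different
# models. Dataset and model choices are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (LogisticRegression(max_iter=5000),
              RandomForestClassifier(random_state=0)):
    model.fit(X_train, y_train)  # explanation happens after training
    # permutation_importance needs only the model's predictions, never
    # its weights or tree structure -- hence "model-agnostic"
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=5, random_state=0)
    top = result.importances_mean.argmax()
    print(type(model).__name__, "top feature index:", top)
```

Because the explainer only calls the model's prediction interface, swapping in any other fitted estimator requires no changes to the explanation code, which is what makes such methods convenient for model comparison.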
In Chapter 6 and Chapter 7, we reviewed local interpretable model-agnostic explanations (LIME), a post hoc, perturbation-based technique that builds a local approximation of the model around the instance being explained. In Chapter 3, you learned how to build an NLP multiclass classification model using AutoGluon. This section reviews Kernel SHAP, a popular model-agnostic method that combines LIME's local surrogate approach with Shapley values. You will...
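To make the LIME/Shapley connection concrete before diving in, here is a minimal, self-contained sketch of the Kernel SHAP idea: fit a weighted linear surrogate over feature coalitions, as LIME does, but with sample weights given by the Shapley kernel so the fitted coefficients equal the Shapley values. The tiny additive model `f`, the instance, and the all-zeros baseline are illustrative assumptions; real implementations such as the `shap` library sample coalitions rather than enumerating them.

```python
import numpy as np
from itertools import combinations
from math import comb

# Toy additive model (assumption for illustration); for such a model the
# exact Shapley values are just the coefficients: [2, 1, -3].
def f(x):
    return 2.0 * x[0] + 1.0 * x[1] - 3.0 * x[2]

x = np.array([1.0, 1.0, 1.0])   # instance to explain
baseline = np.zeros(3)          # "absent" features are set to the baseline
M = 3

rows, targets, weights = [], [], []
for size in range(M + 1):
    for on in combinations(range(M), size):
        z = np.zeros(M)
        z[list(on)] = 1.0
        masked = np.where(z == 1, x, baseline)
        rows.append(z)
        targets.append(f(masked))
        if size in (0, M):
            w = 1e6  # approximate the efficiency constraints with big weights
        else:
            # Shapley kernel: (M-1) / (C(M,|z|) * |z| * (M-|z|))
            w = (M - 1) / (comb(M, size) * size * (M - size))
        weights.append(w)

Z = np.column_stack([np.ones(len(rows)), np.array(rows)])  # intercept + coalitions
W = np.diag(weights)
# Weighted least squares, as in LIME: phi = (Z^T W Z)^-1 Z^T W y
phi = np.linalg.solve(Z.T @ W @ Z, Z.T @ W @ np.array(targets))
print("base value :", phi[0])   # ~ f(baseline) = 0
print("shap values:", phi[1:])  # ~ [2, 1, -3] for this additive model
```

The only model-specific ingredient is the call to `f`, so the same procedure works for any predictor; for the AutoGluon text classifier from Chapter 3, `f` would wrap the model's prediction function.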