Chapter 6: Local Model-Agnostic Interpretation Methods
In the previous two chapters, we dealt exclusively with global interpretation methods. This chapter forays into local interpretation methods, which explain why a single prediction, or a group of predictions, was made. It covers how to leverage SHapley Additive exPlanations' (SHAP's) KernelExplainer, as well as another method called Local Interpretable Model-agnostic Explanations (LIME), for local interpretations. We will also explore how to use these methods with both tabular and text data.
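To give a sense of what is ahead, here is a minimal sketch of a local explanation with SHAP's KernelExplainer. The dataset, model, and background-sample size are illustrative assumptions for this sketch, not the chapter's worked example:

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Illustrative setup: any fitted model with a prediction function works,
# because KernelExplainer is model-agnostic.
data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(random_state=0).fit(X, y)

# KernelExplainer needs only a prediction function and a background
# sample representing "typical" inputs (first 50 rows here, arbitrarily).
explainer = shap.KernelExplainer(model.predict_proba, X[:50])

# Explain a single prediction; this is what makes the method "local".
shap_values = explainer.shap_values(X[0])
print(shap_values)  # per-feature contributions to this one prediction
```

Notice that the explainer probes the model only through its prediction function, which is precisely what makes the method model-agnostic.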
These are the main topics we are going to cover in this chapter:
- Leveraging SHAP's KernelExplainer for local interpretations with SHAP values
- Employing LIME
- Using LIME for natural language processing (NLP)
- Trying SHAP for NLP
- Comparing SHAP with LIME
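For comparison with the SHAP sketch above, the following is a minimal, self-contained sketch of a local explanation with LIME on tabular data, under the same illustrative assumptions (a toy scikit-learn classifier standing in for the chapter's worked example):

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Illustrative setup: a toy classifier standing in for any black-box model.
data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(random_state=0).fit(X, y)

# LIME fits a simple surrogate model around one instance to explain it.
explainer = LimeTabularExplainer(
    X,
    mode="classification",
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
)
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print(exp.as_list())  # the top local feature contributions
```

Both sketches explain the same single prediction; the chapter examines how their answers, assumptions, and runtimes differ.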