Summary
In this chapter, we learned how to use SHAP's KernelExplainer, along with its decision and force plots, to conduct local interpretations. We carried out a similar analysis with LIME's instance explainers for both tabular and text data. Lastly, we examined the strengths and weaknesses of SHAP's KernelExplainer and LIME.
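For quick reference, the following is a minimal sketch of that workflow, not the chapter's exact code: the breast cancer dataset, the random forest model, and the single-output predict_fn wrapper are stand-ins chosen for illustration.

```python
# A minimal sketch (illustrative, not the chapter's code): local SHAP and
# LIME explanations for one test instance of a binary classifier.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Wrap predict_proba so KernelExplainer sees a single output
# (the class-1 probability); this keeps shap_values a plain 2D array.
predict_fn = lambda data: model.predict_proba(data)[:, 1]

# KernelExplainer is model-agnostic; a small background sample keeps the
# Shapley value estimation tractable.
background = shap.sample(X_train, 50, random_state=0)
explainer = shap.KernelExplainer(predict_fn, background)
instance = X_test.iloc[[0]]
shap_values = explainer.shap_values(instance)

# The two local interpretation plots covered in the chapter.
shap.force_plot(explainer.expected_value, shap_values[0], instance,
                matplotlib=True)
shap.decision_plot(explainer.expected_value, shap_values[0], instance)

# LIME's tabular instance explainer for the same observation.
lime_explainer = LimeTabularExplainer(
    X_train.values, feature_names=X.columns.tolist(),
    class_names=['malignant', 'benign'], mode='classification')
lime_exp = lime_explainer.explain_instance(
    X_test.iloc[0].values, model.predict_proba, num_features=10)
print(lime_exp.as_list())  # (feature condition, weight) pairs
```

Note that LIME's explain_instance takes the full predict_proba, since it fits a local surrogate over class probabilities, whereas the single-output wrapper passed to KernelExplainer simplifies the shape of the returned SHAP values.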
In the next chapter, we will learn how to create even more human-interpretable explanations of a model's decisions, of the form "if X conditions are met, then Y is the outcome."