Visualize global explanations
Previously, we covered the concept of global explanations and SHAP values. But we didn’t demonstrate the many ways we can visualize them. As you will learn, SHAP values are very versatile and can be used to examine much more than feature importance!
But first, we must initialize a SHAP explainer. In the previous chapter, we generated the SHAP values using shap.TreeExplainer and shap.KernelExplainer. This time, we will use SHAP's newer interface, which simplifies the process by saving the SHAP values and the corresponding data in a single object, and much more! Instead of explicitly defining the type of explainer, you initialize it with shap.Explainer(model), which returns a callable object. Then, you pass your test dataset (X_test) to the callable Explainer, and it returns an Explanation object:
# Initialize an explainer; SHAP infers the right algorithm for the model
cb_explainer = shap.Explainer(cb_mdl)
# Compute SHAP values for the test set, returned as an Explanation object
cb_shap = cb_explainer(X_test)
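As a quick sanity check, you can verify that the Explanation object really does bundle everything in one place. The following is a minimal sketch, assuming the cb_shap object produced above; the exact shapes and values depend on your model and data:

# SHAP values: one row per observation, one column per feature
print(cb_shap.values.shape)
# The corresponding feature values (a copy of X_test)
print(cb_shap.data.shape)
# The expected value(s) the SHAP values are measured against
print(cb_shap.base_values[:5])
# Feature names are carried along, so plots are labeled automatically
print(cb_shap.feature_names)

Because the values, data, and base values travel together, the plotting functions we use later in this chapter can be called with just this one object.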
In case you are wondering how it knew what kind of explainer to...