Interpreting models to ensure fairness
In Chapter 8, Privacy, Debugging, and Launching Your Products, we discussed model interpretability as a debugging method. We used LIME to spot the features that the model is overfitting to.
In this section, we will use a slightly more sophisticated method called SHAP (SHapley Additive exPlanations). SHAP combines several different explanation approaches into one neat method, which lets us generate explanations for individual predictions as well as for entire datasets, helping us understand the model better.
You can find SHAP on GitHub at https://github.com/slundberg/shap and install it locally with pip install shap. Kaggle kernels have SHAP preinstalled.
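To give a flavor of that workflow, the following is a minimal sketch using SHAP's model-agnostic KernelExplainer. It assumes a trained Keras classifier named model and a NumPy feature matrix X; both are hypothetical placeholders rather than objects from this chapter's example, and the exact output shapes can differ between SHAP versions:

```python
import numpy as np
import shap

# Hypothetical placeholders: `model` is a trained Keras classifier and
# `X` is a NumPy array of input features.

# Wrap the prediction so it returns a 1-D array of probabilities
def predict_fn(data):
    return model.predict(data)[:, 0]

# A small background sample keeps the kernel explainer tractable
background = X[np.random.choice(X.shape[0], 100, replace=False)]

# KernelExplainer is model agnostic; it only needs a prediction function
explainer = shap.KernelExplainer(predict_fn, background)

# Explain a single prediction
single_shap_values = explainer.shap_values(X[:1])

# Explain a larger slice of the data to understand the model globally
shap_values = explainer.shap_values(X[:500])
shap.summary_plot(shap_values, X[:500])
```

Because KernelExplainer relies only on a prediction function, the same pattern works for tree-based or linear models as well; SHAP also ships specialized explainers, such as shap.DeepExplainer and shap.TreeExplainer, that are considerably faster for the model families they support.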
Note
The example code given here is from the SHAP example notebooks. You can find a slightly extended version of the notebook on Kaggle:
https://www.kaggle.com/jannesklaas/explaining-income-classification-with-keras
SHAP combines seven model interpretation methods, those being LIME, Shapley sampling...