Creating a SHAP explainer
In this section, we will create a SHAP explainer.
We described SHAP in detail in Chapter 4, Microsoft Azure Machine Learning Model Interpretability with SHAP. If you wish, you can go through that chapter again before moving on.
We pass a subset of our training data to the explainer:
# Create a SHAP explainer by passing a subset of our training data
import shap

sample_size = 500
if sample_size > len(train_data.values):
    sample_size = len(train_data.values)

explainer = shap.DeepExplainer(model,
                               train_data.values[:sample_size])
We will now generate a plot of Shapley values.
The plot of Shapley values
In Chapter 4, Microsoft Azure Machine Learning Model Interpretability with SHAP, we learned that Shapley values measure the marginal contribution of a feature to the output of an ML model, and we plotted them for individual predictions. In this case, the SHAP plot summarizes all of the predictions at once, not just one prediction at a time.
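To make the idea of marginal contribution concrete, here is a small illustrative sketch, separate from the chapter's Azure workflow. It computes exact Shapley values by brute force over all feature coalitions, which is the definition that SHAP's explainers approximate efficiently. The toy linear model and its weights are hypothetical, chosen only so the result is easy to check by hand.

```python
# Illustrative sketch: exact Shapley values for a toy model, computed by
# brute force over all feature coalitions (the quantity SHAP approximates).
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley value of each feature of input x, relative to a baseline."""
    n = len(x)
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for r in range(n):
            for S in combinations(others, r):
                # Weight of this coalition in the Shapley formula
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in S else baseline[j]
                             for j in range(n)]
                # Marginal contribution of feature i given coalition S
                phi += w * (f(with_i) - f(without_i))
        phis.append(phi)
    return phis

# Hypothetical toy linear model: for a linear model, feature i's Shapley
# value reduces to weights[i] * (x[i] - baseline[i]).
weights = [2.0, -1.0, 0.5]
model = lambda z: sum(wj * zj for wj, zj in zip(weights, z))
phis = shapley_values(model, [1.0, 2.0, 3.0], [0.0, 0.0, 0.0])
```

By the efficiency property, the values in `phis` sum to `model(x) - model(baseline)`; each entry is one feature's share of the gap between the prediction and the baseline output, which is exactly what each point in a SHAP plot represents.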
As in Chapter 4, we...