Explaining ML models using SHAP values
SageMaker Clarify also computes model-agnostic feature attributions based on the concept of Shapley values. Shapley values quantify the contribution each feature makes to a model's predictions, which helps explain how the model arrives at its decisions. Having a quantifiable way to describe a model's decision making builds trust in the model, helps meet regulatory requirements, and supports the human decision-making process.
As with the bias analysis jobs we ran with SageMaker Clarify, setting up a model explainability job takes three configurations: a data configuration, a model configuration, and an explainability configuration. Let's follow the next steps in the same notebook:
- Create a data configuration for the (matched) training dataset. This is similar to the data configurations we created earlier. The code is illustrated in the following snippet, and a fuller sketch follows it:
explainability_data_config...
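Since the snippet above is truncated, here is a minimal sketch of what this data configuration might look like using the SageMaker SDK's sagemaker.clarify.DataConfig class. It assumes the matched training dataset has already been uploaded to S3; the bucket paths and column names are placeholders rather than the notebook's actual values:

from sagemaker.clarify import DataConfig

# Placeholder S3 locations -- substitute the paths used in your notebook
train_s3_uri = "s3://<bucket>/<prefix>/train_matched.csv"
explainability_output_path = "s3://<bucket>/<prefix>/clarify-explainability"

explainability_data_config = DataConfig(
    s3_data_input_path=train_s3_uri,            # the matched training dataset
    s3_output_path=explainability_output_path,  # where Clarify writes its analysis
    label="<label_column>",                     # name of the target column
    headers=["<label_column>", "<feature_1>", "<feature_2>"],  # all column names in the CSV
    dataset_type="text/csv",
)

The model configuration and the SHAP-based explainability configuration are created in the next steps.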