Chapter 7: Understanding ML Models
Now that we have built a few models using H2O, the next step before production is to understand how a model makes its decisions. This goal is variously termed machine learning interpretability (MLI), explainable artificial intelligence (XAI), model explainability, and so on. The gist of all these terms is the same: building a model that predicts well is not enough. Deploying a model carries inherent risk, so it should not go to production before it is fully trusted. In this chapter, we outline a set of capabilities within H2O for explaining ML models.
By the end of this chapter, you will be able to do the following:
- Select an appropriate model metric for evaluating your models.
- Explain what Shapley values are and how they can be used.
- Describe the differences between global and local explainability.
- Use multiple diagnostics to build understanding and trust in a model.
- Use global and local explanations along with model performance metrics...
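As a preview of the Shapley values mentioned in the objectives above, here is a minimal, self-contained sketch of the underlying idea: each feature (or "player") is credited with its average marginal contribution across all possible coalitions. This is plain Python for a toy two-player game with made-up payoffs, not the H2O API; H2O computes Shapley-based contributions for you on real models.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values for a cooperative game.

    players: list of player names
    v: function mapping a frozenset of players to that coalition's payoff
    """
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        # Average p's marginal contribution over every coalition S not containing p,
        # weighted by the number of orderings in which S precedes p.
        for k in range(n):
            for S in combinations(others, k):
                S = frozenset(S)
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (v(S | {p}) - v(S))
        phi[p] = total
    return phi

# Hypothetical payoffs for illustration only (not derived from any H2O model):
payoffs = {
    frozenset(): 0,
    frozenset({"A"}): 10,
    frozenset({"B"}): 20,
    frozenset({"A", "B"}): 50,
}
phi = shapley_values(["A", "B"], payoffs.__getitem__)
# Efficiency property: the attributions sum to v({A, B}) = 50
```

Exact enumeration is exponential in the number of players, which is why practical tools use efficient approximations (for tree models, the TreeSHAP-style algorithms) rather than this brute-force form.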