Explaining models built in H2O
Model performance metrics measured on our test data tell us how well and how fast a model predicts. As mentioned in the chapter introduction, knowing that a model predicts well is not, by itself, sufficient reason to put it into production. Performance metrics alone provide no insight into why the model predicts as it does, and if we don't understand why the model predicts well, we have little hope of anticipating the conditions under which it would fail. Explaining a model's reasoning is therefore a critical step before promoting it to production; this process can be described as gaining trust in the model.
Explainability is typically divided into global and local components. Global explainability describes how the model behaves across an entire population, and gaining trust in a model is primarily a matter of determining how it works globally. Local explanations, in contrast, operate on individual rows...
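H2O-3 exposes both levels of explainability directly on a trained model: `model.explain()` generates global artifacts such as variable importance, SHAP summaries, and partial dependence plots, while `model.explain_row()` explains a single prediction. The sketch below illustrates the pattern; the CSV path and the `response` column name are placeholders you would replace with your own data.

```python
import h2o
from h2o.estimators import H2OGradientBoostingEstimator

h2o.init()

# Hypothetical dataset: substitute your own file path and response column
df = h2o.import_file("train.csv")
train, test = df.split_frame(ratios=[0.8], seed=42)

# Train any supervised H2O model; a GBM is used here as an example
model = H2OGradientBoostingEstimator(seed=42)
model.train(y="response", training_frame=train)

# Global explanations: variable importance, SHAP summary, partial dependence, etc.
model.explain(test)

# Local explanation for a single prediction (here, the first row of the test set)
model.explain_row(test, row_index=0)
```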