Summary
In this chapter, we reviewed multiple model performance metrics and learned how to choose an appropriate one for evaluating a model's predictive performance. We introduced Shapley values through simple examples to clarify their purpose and their use in explaining model predictions. Within H2O, we used the explain and explain_row commands to create global and local explanations for a single model, and we learned how to interpret the resulting diagnostics and visualizations to build trust in a model. For AutoML objects and other lists of models, we generated global and local explanations and saw how to use them alongside model performance metrics to weed out inappropriate candidate models. Putting it all together, we can now evaluate the tradeoffs between model performance, scoring speed, and explainability when deciding which model to put into production. Finally, we discussed the importance of model documentation and showed how H2O AutoDoc can automatically generate detailed documentation...