Model building and evaluation
Our approach to model building starts with AutoML. Applying global explainability to the AutoML leaderboard either results in picking a candidate model or yields insights that we feed back into a new round of modified AutoML runs. This cycle can be repeated as long as it produces noticeable improvements in modeling or explainability. If a single model is chosen rather than a stacked ensemble, we show how an additional random grid search around that model can produce better candidates. Then, the final candidate model is evaluated.
The beauty of this approach in H2O-3 is that the modeling heavy lifting is done for us automatically by AutoML. Iterating through this process is straightforward, and the improvement cycle can be repeated as needed until we arrive at a satisfactory final model.
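The loop can be sketched with the H2O-3 Python API roughly as follows. This is a minimal illustration, not our exact setup: the file names, the response column `target`, and all hyperparameter values are assumptions, and the grid search step assumes a GBM on a binary classification problem.

```python
# A minimal sketch of the AutoML -> explain -> random grid search loop.
# File names, the "target" column, and parameter values are illustrative.
import h2o
from h2o.automl import H2OAutoML
from h2o.grid.grid_search import H2OGridSearch
from h2o.estimators.gbm import H2OGradientBoostingEstimator

h2o.init()

train = h2o.import_file("train.csv")   # hypothetical training data
test = h2o.import_file("test.csv")     # hypothetical holdout data
y = "target"                           # assumed response column
x = [c for c in train.columns if c != y]

# 1. Model search and optimization with AutoML.
aml = H2OAutoML(max_models=20, seed=42)
aml.train(x=x, y=y, training_frame=train)
print(aml.leaderboard.head())

# 2. Global explainability across the leaderboard models
#    (variable importance heatmap, model correlation, PDPs, etc.).
aml.explain(test)

# 3. If a single model (here, a GBM) is preferred over a stacked ensemble,
#    a small random grid search around it may surface better candidates.
gbm_grid = H2OGridSearch(
    model=H2OGradientBoostingEstimator(ntrees=200, seed=42),
    hyper_params={
        "max_depth": [4, 6, 8, 10],
        "learn_rate": [0.01, 0.05, 0.1],
        "sample_rate": [0.7, 0.8, 1.0],
    },
    search_criteria={"strategy": "RandomDiscrete", "max_models": 20, "seed": 42},
)
gbm_grid.train(x=x, y=y, training_frame=train)

# 4. Evaluate the final candidate model on the holdout data
#    (sorting by AUC assumes a binary classification target).
best_gbm = gbm_grid.get_grid(sort_by="auc", decreasing=True).models[0]
print(best_gbm.model_performance(test_data=test))
```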
We organize the modeling steps as follows:
- Search for and optimize models with AutoML.
- Investigate global explainability of the AutoML leaderboard models.
- Select a...