H2O MOJO deep dive
All MOJOs are fundamentally similar from a deployment and scoring standpoint. This is true regardless of the MOJO's upstream model-building origin: whichever of H2O's many model-building algorithms (for example, Generalized Linear Model or XGBoost) and techniques (for example, Stacked Ensembles and AutoML) produced the final model, and whatever the size of the training dataset (from GBs to TBs), the resulting MOJO is deployed and scored in the same way.
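To make this concrete, here is a minimal sketch showing that the export call is identical across algorithms. It assumes a binary-classification CSV with a column named target; the file path and column name are hypothetical, and GBM stands in here for any second algorithm.

import h2o
from h2o.estimators import H2OGeneralizedLinearEstimator, H2OGradientBoostingEstimator

h2o.init()

# Hypothetical training data: a CSV with a binary "target" column
data = h2o.import_file("path/to/train.csv")
data["target"] = data["target"].asfactor()
predictors = [c for c in data.columns if c != "target"]

# Two very different algorithms...
glm = H2OGeneralizedLinearEstimator(family="binomial")
glm.train(x=predictors, y="target", training_frame=data)

gbm = H2OGradientBoostingEstimator()
gbm.train(x=predictors, y="target", training_frame=data)

# ...but the same MOJO export call for both
glm.download_mojo(path="mojos/")
gbm.download_mojo(path="mojos/")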
Let's get to know the MOJO in greater detail.
What is a MOJO?
MOJO stands for Model Object, Optimized. A MOJO is exported from your model-building IDE by running the following line of code:
model.download_mojo(path="path/for/my/mojo")
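Note that in the Python API, download_mojo also returns the path of the file it writes; capturing it is handy because the filename is generated from the model ID. A small sketch, with an illustrative filename in the comment:

mojo_path = model.download_mojo(path="path/for/my/mojo")
print(mojo_path)  # e.g. path/for/my/mojo/GBM_model_python_1649_1.zip (illustrative)

# Optionally fetch the h2o-genmodel.jar scoring runtime alongside the MOJO
# model.download_mojo(path="path/for/my/mojo", get_genmodel_jar=True)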
This downloads a uniquely-named .zip file onto the filesystem of your IDE, at the path you specified. This .zip file is the MOJO, and this is what is deployed. You do not unzip it, but if you are curious, it contains a model.ini file that describes the MOJO as well...
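If you do want to peek inside without unzipping anything to disk, Python's standard zipfile module can read the archive in place. A minimal sketch; the MOJO filename is hypothetical, since each download is uniquely named:

import zipfile

mojo_path = "path/for/my/mojo/GBM_model_python_1649_1.zip"  # hypothetical name

with zipfile.ZipFile(mojo_path) as mojo:
    print(mojo.namelist())  # expect 'model.ini' among the entries
    with mojo.open("model.ini") as ini:
        print(ini.read().decode("utf-8"))  # human-readable description of the MOJO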