Using BentoML to serve a model
In this section, we will serve two regression models by passing them to a single service as runners. An ensemble of multiple regression models can yield more accurate predictions than a single model, so we will get predictions from both models and return a combined response to the user; a minimal sketch of such a service follows the steps below:
- First, let’s create the two models and save them with the BentoML API. We start by training a RandomForestRegressor model and saving it using the following code snippet:
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
import bentoml

# Generate a synthetic regression dataset
X, y = make_regression(n_features=4, n_informative=2,
                       random_state=0, shuffle=False)
# Train the random forest regressor
rf = RandomForestRegressor(max_depth=3, random_state=0)
rf.fit(X, y)
# Save the trained model to the BentoML model store under the name "rf"
rf_model = bentoml.sklearn.save_model(
    name="rf",
    model=rf
)
Then, we create an...
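To show where these steps are heading, here is a minimal sketch of a service that wraps both saved models as runners, assuming the BentoML 1.x Service/runner API. The service name regression_ensemble, the tag gbr:latest used for the second model, and simple averaging of the two predictions are illustrative assumptions only:

# service.py -- a minimal sketch of a two-runner ensemble service
import bentoml
import numpy as np
from bentoml.io import NumpyNdarray

# Load the saved models from the model store and wrap them as runners
rf_runner = bentoml.sklearn.get("rf:latest").to_runner()
gbr_runner = bentoml.sklearn.get("gbr:latest").to_runner()  # hypothetical second model

svc = bentoml.Service("regression_ensemble", runners=[rf_runner, gbr_runner])

@svc.api(input=NumpyNdarray(), output=NumpyNdarray())
def predict(input_array: np.ndarray) -> np.ndarray:
    # Get predictions from both runners and combine them (here, a simple average)
    rf_pred = rf_runner.predict.run(input_array)
    gbr_pred = gbr_runner.predict.run(input_array)
    return (rf_pred + gbr_pred) / 2

Saved as service.py, a service like this could be started locally with bentoml serve service:svc.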