Deploying the model as a service
In this section, you will deploy your model as a REST service. Using the approach described in Chapter 7, Model Deployment and Automation, the team can package and deploy the model as a service, which will then be consumed by the users of your model. We highly encourage you to refresh your knowledge of Chapter 7, Model Deployment and Automation, before proceeding with this section.
In Chapter 7, Model Deployment and Automation, you deployed the model with a Predictor class, which exposes the model as a REST service. You will use the same class here; however, in the flight project, you applied categorical encoding to the data before it was used for model training. This means that you will need to apply the same encoding to the input data at inference time. Recall that, earlier in this chapter, you saved the encoder as FlightsDelayOrdinalEncoder.pkl and that it is available in the MLflow repository.
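As a concrete illustration, the following is a minimal sketch of how the saved encoder could be loaded alongside the trained model and applied before every prediction. The class name FlightsDelayPredictor, the artifact keys ("encoder" and "model"), the run-ID placeholder, and the choice of mlflow.pyfunc.PythonModel as the wrapper are assumptions made for this sketch; the actual Predictor class from Chapter 7 may be structured differently, and the model is assumed to be a scikit-learn model logged to MLflow.

```python
import pickle

import mlflow.pyfunc
import mlflow.sklearn
import pandas as pd


class FlightsDelayPredictor(mlflow.pyfunc.PythonModel):
    """Applies the training-time ordinal encoding before delegating
    to the trained model, so REST clients can send raw feature values."""

    def load_context(self, context):
        # Load the fitted encoder saved earlier in this chapter.
        with open(context.artifacts["encoder"], "rb") as f:
            self.encoder = pickle.load(f)
        # Load the trained model (assumed to be a scikit-learn model).
        self.model = mlflow.sklearn.load_model(context.artifacts["model"])

    def predict(self, context, model_input: pd.DataFrame):
        # Encode the incoming request exactly as the training data
        # was encoded, then run the prediction.
        encoded = self.encoder.transform(model_input)
        return self.model.predict(encoded)


# Log the wrapper together with its artifacts so it can be served,
# for example with `mlflow models serve`.
mlflow.pyfunc.log_model(
    artifact_path="flights_delay_service",
    python_model=FlightsDelayPredictor(),
    artifacts={
        "encoder": "FlightsDelayOrdinalEncoder.pkl",
        "model": "runs:/<run_id>/model",  # illustrative placeholder run ID
    },
)
```

Bundling the encoder with the model in this way keeps the service and the training pipeline from drifting apart: clients send raw feature values, and the exact encoder that was fitted during training is applied before each prediction.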
The next step is to write...