Starting up a local model registry
Before working through the following sections in this chapter, you will need to set up a centralized model registry and tracking server. We don't need the whole of the Data Science Workbench; instead, we can go directly to a lighter variant of the workbench to serve the model that we will deploy in the following sections. You should be in the root folder of the code for this chapter, available at https://github.com/PacktPublishing/Machine-Learning-Engineering-with-MLflow/tree/master/Chapter09 .
Next, move to the gradflow directory and start a light version of the environment to serve your model, as follows:
$ cd gradflow
$ export MLFLOW_TRACKING_URI=http://localhost:5000
$ make gradflow-light
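Before moving on, it is worth confirming that the tracking server actually came up and is reachable at the URI you exported. The following is a minimal, standard-library-only sketch that polls the server's health endpoint; it assumes the `/health` route exposed by `mlflow server`, which answers with HTTP 200 when the server is healthy (the helper name `tracking_server_is_up` is ours, not part of MLflow):

```python
import urllib.request
import urllib.error


def tracking_server_is_up(uri: str, timeout: float = 3.0) -> bool:
    """Return True if an MLflow tracking server answers at `uri`.

    Assumes the `/health` endpoint exposed by `mlflow server`,
    which responds with HTTP 200 when the server is healthy.
    """
    try:
        with urllib.request.urlopen(f"{uri}/health", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, or timeout: server not reachable.
        return False


if __name__ == "__main__":
    # Matches the MLFLOW_TRACKING_URI exported in the commands above.
    print(tracking_server_is_up("http://localhost:5000"))
```

If this prints `False`, check the output of `make gradflow-light` before proceeding, since the API deployment steps below depend on the registry being available.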
Having set up our infrastructure for API deployment with MLflow, using the model retrieved from the model registry, we will next move on to the cases where we need to score some batch input data. We will prepare a batch inference job with MLflow for the...