Inference-ready models
We have previously worked on a business problem: predicting the weather at a port. To build a solution for this problem, we performed data processing and ML model training, and then serialized the trained models. In this section, we explore how inference is performed on a serialized model. This section's code is available in the attached Jupyter notebook, in the chapter's corresponding folder in the book's GitHub repository. Here are the instructions for running the code:
- Log in to the Azure portal again.
- From Recent Resources, select the MLOps_WS workspace, and then click on the Launch Studio button. This will direct you to the MLOps_WS workspace.
- In the Manage section, click on the Compute section, and then select the machine created in Chapter 4, Machine Learning Pipelines. Click on the Start button to start the instance. When the VM is ready, click on the JupyterLab link.
- Now, in JupyterLab, navigate to the chapter...
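Before running the notebook, it helps to recall the basic pattern it builds on: a previously serialized model is loaded back from disk and used to score new data. The sketch below illustrates that pattern with only the standard library's `pickle` module; the `WeatherModel` class, its toy prediction rule, and the `model.pkl` filename are all hypothetical stand-ins, not the book's actual artifacts (the notebook may well use scikit-learn with `joblib` instead).

```python
import pickle

class WeatherModel:
    """Hypothetical stand-in for the trained weather-prediction model."""
    def predict(self, features):
        # Toy rule: call 'rain' when the second feature (say, humidity) > 0.7
        return ["rain" if row[1] > 0.7 else "clear" for row in features]

# Serialization step (in the book, this happens at the end of training)
with open("model.pkl", "wb") as f:
    pickle.dump(WeatherModel(), f)

# Inference step: deserialize the model and score unseen data
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

predictions = model.predict([[0.5, 0.9], [0.6, 0.2]])
print(predictions)  # ['rain', 'clear']
```

The key point is that the inference environment only needs the serialized file and the code that defines the model's interface; training data and the training pipeline are not required at scoring time.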