Deploying a model for batch inferencing through the Python SDK
In this section, we are going to deploy an existing model to a batch endpoint for batch inferencing using the Python SDK by following these steps:
- Go to https://ml.azure.com.
- Select your workspace.
- On the left-hand side of the workspace user interface, click Compute:
Figure 7.14 – Compute instance icon
- On the Compute screen, select your compute instance and click Start:
Figure 7.15 – Start compute
Your compute instance's status will change from Stopped to Starting. Once it moves from Starting to Running, the instance is ready to use, so go ahead and clone our repository, which contains some sample notebooks to walk through.
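If you would rather start the instance from code instead of the portal, the following is a minimal sketch, assuming the v2 Python SDK (azure-ai-ml); the subscription, resource group, workspace, and compute instance names are placeholders, not values from this walkthrough:

```python
# Minimal sketch: starting a compute instance with the v2 SDK (azure-ai-ml).
# All identifiers below are placeholders.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# begin_start returns a poller; result() blocks until the operation completes
ml_client.compute.begin_start("<compute-instance-name>").result()
```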
- Click on the Terminal hyperlink under Applications.
This will open the terminal on your compute instance. Note that the directory path will include your user name. Type the following command to clone the repository:
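`git clone <repository-URL>`

Here, `<repository-URL>` is a placeholder; substitute the URL of the repository containing the book's sample notebooks.

The cloned notebooks walk through the deployment itself. As a rough preview of where we are headed, here is a minimal sketch of creating a batch endpoint and attaching a deployment to it, again assuming the v2 Python SDK (azure-ai-ml); the endpoint, deployment, model, and compute cluster names are hypothetical:

```python
# Minimal sketch: batch endpoint plus deployment with the v2 SDK (azure-ai-ml).
# Endpoint, deployment, model, and cluster names are hypothetical.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import BatchEndpoint, BatchDeployment
from azure.ai.ml.constants import BatchDeploymentOutputAction
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Create (or update) the batch endpoint
endpoint = BatchEndpoint(
    name="my-batch-endpoint",
    description="Endpoint for batch inferencing",
)
ml_client.batch_endpoints.begin_create_or_update(endpoint).result()

# Pair a registered model with a compute cluster in a deployment.
# An MLflow model needs no scoring script; other model types also require
# a code_configuration and an environment.
deployment = BatchDeployment(
    name="my-batch-deployment",
    endpoint_name=endpoint.name,
    model="azureml:my-registered-model:1",  # placeholder model reference
    compute="cpu-cluster",                  # placeholder compute cluster
    instance_count=2,
    max_concurrency_per_instance=2,
    mini_batch_size=10,
    output_action=BatchDeploymentOutputAction.APPEND_ROW,
    output_file_name="predictions.csv",
)
ml_client.batch_deployments.begin_create_or_update(deployment).result()
```

Note that batch endpoints score on a compute cluster rather than on the compute instance we just started; the instance is only used to run the notebooks.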