In this section, we will deploy the NTM model, run inference, and interpret the results. Let's get started:
- First, we deploy the trained NTM model as an endpoint, as follows:
ntm_predctr = ntm_estmtr.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
In the preceding code, we call the deploy() method of the SageMaker Estimator object, ntm_estmtr, to create an endpoint, specifying the number and type of instances that will back it. The NTM Docker image is used to create the endpoint, and SageMaker takes a few minutes to provision it. The following screenshot shows the endpoint that was provisioned:
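Once the endpoint is live, each inference request returns one topic-weight vector per input document. As a sketch of how to interpret that output without calling a live endpoint, the snippet below parses a hypothetical response in NTM's JSON format (a "predictions" list, each entry holding "topic_weights") and picks the dominant topic for each document; the sample weights and the helper function name are illustrative assumptions, not output from the model trained above:

```python
import json

# Hypothetical NTM inference response: one "topic_weights" vector per
# input document, giving the document's mixture weights over the topics.
sample_response = json.dumps({
    "predictions": [
        {"topic_weights": [0.02, 0.71, 0.05, 0.12, 0.10]},
        {"topic_weights": [0.40, 0.05, 0.45, 0.05, 0.05]},
    ]
})

def dominant_topics(response_body):
    """Return the index of the highest-weighted topic for each document."""
    predictions = json.loads(response_body)["predictions"]
    return [
        max(range(len(p["topic_weights"])), key=lambda i: p["topic_weights"][i])
        for p in predictions
    ]

print(dominant_topics(sample_response))  # [1, 2]
```

In a real run you would pass the endpoint's response body (for example, from ntm_predctr.predict()) in place of sample_response; the dominant-topic index is a quick first look, while the full weight vector is what you would use for downstream tasks such as clustering or similarity search.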
To view the endpoint you've created, navigate to the SageMaker console, expand the Inference section in the left navigation pane, and click Endpoints.
...