Deployment strategies and best practices
In this section, we will discuss the relevant deployment strategies and best practices when using the SageMaker hosting services. Let's start by talking about the different ways we can invoke an existing SageMaker inference endpoint. The approach we've been using so far involves using the SageMaker Python SDK to invoke an existing endpoint:
from sagemaker.predictor import Predictor
from sagemaker.serializers import JSONSerializer
from sagemaker.deserializers import JSONDeserializer

endpoint_name = "<INSERT NAME OF EXISTING ENDPOINT>"

predictor = Predictor(endpoint_name=endpoint_name)
predictor.serializer = JSONSerializer()
predictor.deserializer = JSONDeserializer()

payload = {
    "text": "I love reading the book MLE on AWS!"
}

predictor.predict(payload)
Here, we initialize a Predictor object and point it to an existing inference endpoint during the initialization step...