Deploying a pre-trained model to a serverless inference endpoint
In the initial chapters of this book, we worked with several serverless services that help us manage and reduce costs. If you are wondering whether there is a serverless option for deploying ML models in SageMaker, the answer is yes. When you are dealing with intermittent and unpredictable traffic, hosting your ML model on a serverless inference endpoint can be the more cost-effective option. Let's say that we can tolerate cold starts (where a request takes longer to process after periods of inactivity) and we only expect a few requests per day – in that case, we can use a serverless inference endpoint instead of the real-time option. Real-time inference endpoints, on the other hand, are best used when traffic is steady enough to keep the endpoint busy: if you expect your endpoint to be utilized most of the time, the real-time option may be the better choice.
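To make this concrete, here is a minimal sketch of how a pre-trained model can be deployed to a serverless inference endpoint using the SageMaker Python SDK. The S3 model artifact path, container image URI, IAM role ARN, and endpoint name below are placeholders that you would replace with your own values:

```python
import sagemaker
from sagemaker.model import Model
from sagemaker.serverless import ServerlessInferenceConfig

session = sagemaker.Session()

# Placeholder values - replace with your own role, image, and model artifact
role = "arn:aws:iam::<ACCOUNT_ID>:role/<SAGEMAKER_EXECUTION_ROLE>"
image_uri = "<INFERENCE_CONTAINER_IMAGE_URI>"
model_data = "s3://<BUCKET>/models/model.tar.gz"

# Wrap the pre-trained model artifact in a SageMaker Model object
model = Model(
    image_uri=image_uri,
    model_data=model_data,
    role=role,
    sagemaker_session=session,
)

# Serverless configuration: memory allocated per invocation and
# the maximum number of concurrent invocations
serverless_config = ServerlessInferenceConfig(
    memory_size_in_mb=2048,
    max_concurrency=5,
)

# Deploy to a serverless inference endpoint instead of a real-time one
predictor = model.deploy(
    serverless_inference_config=serverless_config,
    endpoint_name="my-serverless-endpoint",  # placeholder name
)
```

Note that, unlike a real-time deployment, we do not specify an instance type or instance count here; with a serverless inference endpoint, the underlying compute capacity is provisioned and scaled for us, and we are billed based on the compute used to process each request rather than for an always-on instance.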