Recommended strategies and best practices
Before we end this chapter (and this book), let’s quickly discuss some recommended strategies and best practices for using SageMaker Pipelines to build automated ML workflows. What improvements can we make to the initial version of our pipeline? Here are some upgrades we can implement to make our setup more scalable, more secure, and better equipped to handle a wider range of ML and ML engineering requirements:
- Configure and set up autoscaling (automatic scaling) of the ML inference endpoint upon creation so that the number of instances serving the endpoint adjusts dynamically to the volume of incoming ML inference requests.
- Allow ML models to also be deployed to serverless and asynchronous inference endpoints (depending on the value of an additional pipeline input parameter) to support a wider variety of deployment use cases.
- Add an additional step (or steps) in the pipeline that automatically...
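To make the first upgrade above more concrete, here is a minimal sketch of how target-tracking autoscaling can be attached to a SageMaker real-time endpoint through the Application Auto Scaling service. This is not the book’s pipeline code; the endpoint name, variant name, capacity bounds, target value, and cooldowns are all illustrative assumptions you would tune for your own workload.

```python
# Sketch: build the Application Auto Scaling requests that enable
# target-tracking autoscaling on a SageMaker endpoint variant.
# All names and numeric values below are illustrative placeholders.

def scalable_target_request(endpoint_name, variant_name,
                            min_capacity=1, max_capacity=4):
    """Build the register_scalable_target() kwargs for an endpoint variant."""
    return {
        "ServiceNamespace": "sagemaker",
        "ResourceId": f"endpoint/{endpoint_name}/variant/{variant_name}",
        "ScalableDimension": "sagemaker:variant:DesiredInstanceCount",
        "MinCapacity": min_capacity,
        "MaxCapacity": max_capacity,
    }


def invocations_scaling_policy(endpoint_name, variant_name,
                               target_invocations_per_instance=70.0):
    """Build the put_scaling_policy() kwargs that scale the variant
    based on the average number of invocations per instance."""
    return {
        "PolicyName": f"{endpoint_name}-invocations-target-tracking",
        "ServiceNamespace": "sagemaker",
        "ResourceId": f"endpoint/{endpoint_name}/variant/{variant_name}",
        "ScalableDimension": "sagemaker:variant:DesiredInstanceCount",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingScalingPolicyConfiguration": {
            "TargetValue": target_invocations_per_instance,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance",
            },
            "ScaleInCooldown": 300,  # seconds to wait before scaling in
            "ScaleOutCooldown": 60,  # seconds to wait before scaling out
        },
    }


# Applying these requires AWS credentials and a live endpoint; the
# endpoint and variant names here are placeholders:
# import boto3
# aas = boto3.client("application-autoscaling")
# aas.register_scalable_target(**scalable_target_request("my-endpoint", "AllTraffic"))
# aas.put_scaling_policy(**invocations_scaling_policy("my-endpoint", "AllTraffic"))
```

With this in place, SageMaker adds instances when the per-instance invocation rate rises above the target and removes them when traffic subsides, keeping the endpoint sized to demand.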
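For the second upgrade above, the deployment step could branch on a pipeline parameter and produce the matching endpoint configuration. The sketch below, a hedged illustration rather than the book’s implementation, returns the request fragments in the shape used by the SageMaker `CreateEndpointConfig` API: `ServerlessConfig` belongs inside a production variant, while `AsyncInferenceConfig` sits at the endpoint-config level. The instance type, memory size, concurrency limit, and S3 path are assumed placeholder values.

```python
# Sketch: map a pipeline input parameter (deployment mode) to SageMaker
# CreateEndpointConfig fragments. Values below are illustrative assumptions.

def deployment_config(mode, instance_type="ml.m5.large",
                      async_output_path="s3://<bucket>/async-output/"):
    """Return (production-variant fields, endpoint-config-level fields)
    for the chosen deployment mode."""
    if mode == "serverless":
        # Serverless endpoints need no instance type or count;
        # capacity is defined by memory size and max concurrency.
        return ({"ServerlessConfig": {"MemorySizeInMB": 2048,
                                      "MaxConcurrency": 5}}, {})
    if mode == "async":
        # Asynchronous inference still runs on instances;
        # responses are written to the configured S3 location.
        return ({"InstanceType": instance_type, "InitialInstanceCount": 1},
                {"AsyncInferenceConfig": {
                    "OutputConfig": {"S3OutputPath": async_output_path}}})
    if mode == "realtime":
        return ({"InstanceType": instance_type, "InitialInstanceCount": 1}, {})
    raise ValueError(f"Unsupported deployment mode: {mode!r}")
```

A deployment step would merge the first dictionary into the production variant and the second into the endpoint-config request, letting one pipeline serve real-time, serverless, and asynchronous use cases from a single parameter.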