Summary
In this chapter, we focused on production deployment of ML models, the concepts behind it, and the features that MLflow provides for deploying to multiple environments.
We explained how to prepare deployment-ready Docker images, and how to interact with Kubernetes and AWS SageMaker to deploy models.
In the next chapter and the remaining sections of the book, we will focus on tools that help scale out MLflow workloads and improve the performance of our model infrastructure.