Chapter 9: Deployment and Inference with MLflow
In this chapter, you will learn about an end-to-end deployment infrastructure for our Machine Learning (ML) system, including the inference component, built with MLflow. We will then deploy our model in a cloud-native ML system (AWS SageMaker) and in a hybrid environment with Kubernetes. The main goal of exposing you to these different environments is to equip you with the skills to deploy an ML model under the varying constraints (cloud-native and on-premises) of different projects.
The core of this chapter is deploying the PsyStock model, which you have been working on throughout the book, to predict the price of Bitcoin (BTC/USD) based on the previous 14 days of market behavior. We will deploy it in multiple environments with the aid of a workflow.
Specifically, we will look at the following sections in this chapter:
- Starting up a local model registry
- Setting up a batch inference job...
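As a preview of the first step above, a local MLflow model registry is typically just an MLflow tracking server started with a database-backed store (the registry requires one; plain file stores are not enough). The sketch below is a minimal example, assuming the `mlflow` package is installed and using a hypothetical local SQLite file and artifact directory as the backing stores:

```shell
# Start a local MLflow tracking server whose backend store is a SQLite
# database; the Model Registry needs a database-backed store to work.
# Paths below (mlflow.db, ./mlartifacts) are illustrative choices.
mlflow server \
  --backend-store-uri sqlite:///mlflow.db \
  --default-artifact-root ./mlartifacts \
  --host 127.0.0.1 \
  --port 5000
```

Once the server is running, pointing clients at `http://127.0.0.1:5000` (for example, via the `MLFLOW_TRACKING_URI` environment variable) lets you log runs and register model versions against this local registry, which later sections build on for batch inference and remote deployment.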