Chapter 5: Running DL Pipelines in Different Environments
It is critical to have the flexibility to run a deep learning (DL) pipeline in different execution environments, whether local or remote, on-premises or in the cloud. This is because, at different stages of DL development, there may be different constraints or preferences, either to improve development velocity or to ensure security compliance. For example, small-scale model experimentation is best done in a local or laptop environment, while full hyperparameter tuning needs to run on a cloud-hosted GPU cluster to achieve a quick turnaround time. Given the diversity of execution environments in both hardware and software configurations, achieving this kind of flexibility within a single framework used to be a challenge. MLflow provides an easy-to-use framework for running DL pipelines at scale in different environments. We will learn how to do that in this chapter.
In this chapter, we...