Section 3 – Running Deep Learning Pipelines at Scale
In this section, we will learn how to run deep learning (DL) pipelines in different execution environments and perform hyperparameter tuning, or hyperparameter optimization (HPO), at scale. We will start with an overview of the scenarios and requirements for executing DL pipelines in different environments. We will then learn how to use MLflow's command-line interface (CLI) to run DL pipelines in four different execution scenarios in a distributed environment. From there, we will learn how to choose the best HPO framework for tuning the hyperparameters of a DL pipeline by comparing Ray Tune, Optuna, and HyperOpt. Finally, we will concentrate on how to implement and run HPO for DL at scale using a state-of-the-art HPO framework, Ray Tune, together with MLflow.
This section comprises the following chapters:
- Chapter 5, Running DL Pipelines in Different Environments
- Chapter 6, Running Hyperparameter Tuning at Scale ...