Summary
In this chapter, we covered the fundamentals and challenges of HPO, why it matters for the DL model pipeline, and what a modern HPO framework should support. We compared three popular frameworks – Ray Tune, Optuna, and HyperOpt – and picked Ray Tune as the best choice for running state-of-the-art HPO at scale. We saw how to create HPO-ready DL model code and ran our first HPO experiment with Ray Tune and MLflow. Additionally, we covered how to switch to other search algorithms and schedulers once the HPO code framework is set up, using the Optuna search algorithm and the HyperBand scheduler as examples. What you learned in this chapter will help you competently carry out large-scale HPO experiments in real-life production environments, allowing you to produce high-performance DL models in a cost-effective way. We have also provided many references in the Further reading section at the end of this chapter to encourage further study.
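To make the switching point concrete, here is a minimal sketch of what swapping in a different search algorithm and scheduler looks like with Ray Tune's `tune.run` API. It assumes a Ray 1.x-style installation (the `OptunaSearch`, `MLflowLoggerCallback`, and `tune.report` import paths and APIs moved in later Ray 2.x releases); `train_fn`, the `lr` search space, and the `hpo_demo` experiment name are illustrative placeholders rather than code from this chapter:

```python
from ray import tune
from ray.tune.suggest.optuna import OptunaSearch          # ray.tune.search.optuna in newer Ray
from ray.tune.schedulers import HyperBandScheduler
from ray.tune.integration.mlflow import MLflowLoggerCallback  # path varies by Ray version

def train_fn(config):
    # Stand-in objective: a real trainable would fit a DL model here and
    # report a validation metric each epoch so the scheduler can stop
    # unpromising trials early.
    for epoch in range(10):
        accuracy = 1.0 - (config["lr"] - 0.01) ** 2 + 0.001 * epoch
        tune.report(accuracy=accuracy)

analysis = tune.run(
    train_fn,
    config={"lr": tune.loguniform(1e-4, 1e-1)},
    num_samples=20,
    metric="accuracy",
    mode="max",
    search_alg=OptunaSearch(),        # swap in Optuna's TPE-based search
    scheduler=HyperBandScheduler(),   # swap in HyperBand early stopping
    callbacks=[MLflowLoggerCallback(experiment_name="hpo_demo")],
)
print("Best config found:", analysis.best_config)
```

Note that only the `search_alg` and `scheduler` arguments change; the trainable function and the MLflow logging callback stay the same, which is what makes the HPO code framework portable across search and scheduling strategies.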
In our...