Creating HPO-ready DL models with Ray Tune and MLflow
To use Ray Tune with MLflow for HPO, let's revisit the fine-tuning step of our DL pipeline example from Chapter 5, Running DL Pipelines in Different Environments, to see what needs to be set up and which code changes we need to make. Before we start, let's review a few key concepts that are specifically relevant to our use of Ray Tune:
- Objective function: An objective function either minimizes or maximizes some metric value for a given configuration of hyperparameters. For example, in DL model training and fine-tuning scenarios, we would like to maximize the F1-score of an NLP text classifier. This objective function needs to be wrapped as a trainable function that Ray Tune can run HPO over (a minimal sketch appears after this list). In the following section, we will illustrate how to wrap our NLP text sentiment model.
- Function-based APIs and class-based APIs: A function-based API allows a user to insert Ray Tune statements...
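Before we dive into the real pipeline code, the following minimal sketch shows what a wrapped objective function looks like as a function-based trainable. It assumes Ray Tune 1.x's `tune.report()` API, and `train_and_eval_model` is a hypothetical placeholder for our actual fine-tuning and evaluation step, not the pipeline's real code:

```python
# A minimal sketch of a function-based Ray Tune trainable (assumes Ray Tune 1.x).
from ray import tune


def train_and_eval_model(lr: float, batch_size: int) -> float:
    """Hypothetical placeholder: fine-tune the NLP classifier and return its F1-score."""
    return 0.0  # replace with real fine-tuning and evaluation


def objective(config):
    # Ray Tune calls this trainable once per trial, passing one sampled
    # hyperparameter configuration as `config`.
    f1 = train_and_eval_model(lr=config["lr"], batch_size=config["batch_size"])
    # Reporting the metric hands it back to Ray Tune, which uses it to
    # compare trials and steer the search toward higher F1-scores.
    tune.report(f1=f1)


analysis = tune.run(
    objective,
    config={
        "lr": tune.loguniform(1e-5, 1e-3),
        "batch_size": tune.choice([16, 32, 64]),
    },
    metric="f1",
    mode="max",  # we want to maximize the F1-score
    num_samples=10,
)
print("Best config:", analysis.best_config)
```

Note that the trainable itself never decides which configurations to try; it only evaluates the one it is given and reports the result, leaving the search strategy entirely to Ray Tune.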