Running the first Ray Tune HPO experiment with MLflow
Now that we have set up Ray Tune, MLflow, and created the HPO run function, we can try to run our first Ray Tune HPO experiment, as follows:
python pipeline/hpo_finetuning_model.py
After a couple of seconds, you will see the following screen, Figure 6.2, which shows that all 10 trials (that is, the value we set for num_samples) are running concurrently:
After approximately 12–14 minutes, all the trials will have finished and the best hyperparameters will be printed on the screen, as shown in the following output (your results may vary due to the stochastic nature of the search, the limited number of samples, and the use of grid search, which does not guarantee a global optimum):
Best hyperparameters found were: {'lr': 0.025639008922511797, 'batch_size': 64, 'foundation_model'...
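To make the mechanics behind this output concrete, the following is a minimal, self-contained sketch of what a hyperparameter search over num_samples trials does conceptually: sample a configuration per trial, evaluate each one, and keep the best. It is not the book's actual hpo_finetuning_model.py script; the search space bounds, the objective function, and the helper names here are illustrative assumptions (the real script defines the space with Ray Tune primitives such as tune.loguniform and tune.choice, and logs each trial's metrics to MLflow):

```python
import random

def sample_config():
    # Hypothetical search space mirroring the hyperparameters seen in the
    # printed result (lr and batch_size); bounds are illustrative only.
    return {
        "lr": 10 ** random.uniform(-4, -1),          # log-uniform learning rate
        "batch_size": random.choice([32, 64, 128]),  # discrete choice
    }

def objective(config):
    # Stand-in for the fine-tuning run: the real objective trains the model
    # and reports a validation metric for Ray Tune to minimize or maximize.
    return abs(config["lr"] - 0.02) + 0.001 * config["batch_size"]

num_samples = 10  # matches the num_samples setting used in the experiment
trials = [sample_config() for _ in range(num_samples)]
best = min(trials, key=objective)  # Ray Tune does this via analysis.get_best_config
print("Best hyperparameters found were:", best)
```

Because each trial's configuration is drawn at random and only 10 samples are taken, repeated runs of this sketch (like repeated runs of the experiment itself) will generally print different best configurations.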