Understanding automatic HPO for DL pipelines
Automatic HPO has been studied for over two decades, since the first known paper on the topic was published in 1995 (https://www.sciencedirect.com/science/article/pii/B9781558603776500451). It is widely understood that tuning the hyperparameters of an ML model can improve the model's performance – sometimes dramatically. The rise of DL models in recent years has triggered a new wave of innovation and the development of new frameworks to tackle HPO for DL pipelines, because a DL model pipeline poses new, large-scale optimization challenges that earlier HPO methods cannot easily solve. Note that, in contrast to model parameters, which are learned during the training process, hyperparameters must be set before training begins.
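To make this distinction concrete, here is a minimal sketch (not from any specific HPO framework) in plain Python: the learning rate `lr` is a hyperparameter that must be chosen before training starts, while the weight `w` is a model parameter learned by training itself. The toy data, the `train` function, and the candidate learning rates are all illustrative assumptions.

```python
def train(lr, epochs=100):
    """Fit y = w * x to toy data with plain gradient descent.

    lr is a hyperparameter: it is fixed before training and never updated.
    w  is a model parameter: it is learned during training.
    """
    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # true relationship: y = 2x
    w = 0.0  # model parameter, updated by the training loop below
    for _ in range(epochs):
        # gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad  # step size is scaled by the hyperparameter lr
    return w

# A crude "HPO" loop: try several learning rates and keep the one whose
# trained model lands closest to the true weight of 2.0.
best_lr = min([0.001, 0.01, 0.1], key=lambda lr: abs(train(lr) - 2.0))
print(best_lr, round(train(best_lr), 3))  # → 0.1 2.0
```

Even in this toy setting, a poorly chosen learning rate (0.001) leaves the model far from the true weight after the same number of epochs, which is exactly the kind of gap that systematic HPO is designed to close.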
Difference between HPO and Transfer Learning's Fine-Tuning
In this book, we have been focusing on one successful DL approach called Transfer Learning...