Hyperparameter tuning for foundation models
Foundation models present some unique challenges for hyperparameter tuning. Let’s try to understand them:
- Model size – Possibly the largest obstacle to tuning foundation models is their sheer size. Many of the classic tuning strategies we looked at previously rely on running as many full training trials as possible. When simply holding one copy of the model in memory requires tens of accelerators, the economics of that approach fall apart (see the sketch after this list for a rough estimate).
- Volume of downstream tasks – As we’ve seen throughout the book, the sheer volume of candidate downstream tasks for foundation models is enormous. This makes hyperparameter tuning much more complex, because each task comes with its own objective metric. Picking the right downstream task could itself be a kind of tuning challenge!
- Variety of hyperparameters – At these scales, the relevant hyperparameters aren’t just indicators of the training...
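To make the model-size point concrete, here is a minimal back-of-the-envelope sketch of the accelerator memory a single training copy of a model consumes. The 16-bytes-per-parameter figure (fp16 weights and gradients plus fp32 Adam optimizer state) and the 80 GB accelerator size are illustrative assumptions, not fixed rules; activations and framework overhead would push the real footprint higher.

```python
import math

# Assumed: mixed-precision Adam keeps ~2 bytes each for fp16 weights and
# gradients, plus ~12 bytes of fp32 optimizer state (master weights and two
# moment estimates) per parameter: roughly 16 bytes per parameter in total.
BYTES_PER_PARAM = 16
ACCELERATOR_MEMORY_GB = 80  # assumed: one 80 GB accelerator

def accelerators_needed(num_params: float) -> int:
    """Minimum accelerator count just to hold the model's training state."""
    total_gb = num_params * BYTES_PER_PARAM / 1e9
    return math.ceil(total_gb / ACCELERATOR_MEMORY_GB)

for params in (7e9, 70e9, 175e9):
    print(f"{params / 1e9:>4.0f}B parameters -> at least "
          f"{accelerators_needed(params)} accelerators")
```

Even before activations enter the picture, a 175B-parameter model needs on the order of 35 such accelerators per training copy. Multiply that per-trial footprint by the dozens or hundreds of trials a classic grid or random search would schedule, and the cost problem described above becomes obvious.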