Optimizing hyperparameters with Automatic Model Tuning
Hyperparameters have a huge influence on the training outcome. Just like in chaos theory, tiny variations of a single hyperparameter can cause wild swings in accuracy. In most cases, the "why?" evades us, leaving us perplexed about what to try next.
Over the years, several techniques have been devised to tackle the problem of selecting optimal hyperparameters:
- Manual Search: This means using our best judgment and experience to select the "best" hyperparameters. Let's face it: this doesn't really work, especially with deep learning and its horde of training and network architecture parameters.
- Grid Search: This entails systematically exploring the hyperparameter space, zooming in on hot spots, and repeating the process. This is much better than a manual search. However, it usually requires training hundreds of jobs, because the number of combinations grows exponentially with the number of parameters (see the sketch after this list). Even with scalable infrastructure, the time and dollar budgets can still be significant.
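To see why, here is a minimal sketch of grid search in plain Python. The parameter names and values are illustrative, and `train_and_evaluate` is a hypothetical stand-in for launching a real training job:

```python
import itertools

# Illustrative grid: 3 values for each of 4 hyperparameters already
# yields 3**4 = 81 combinations; one more value per parameter would
# push it to 4**4 = 256 full training runs.
param_grid = {
    "learning_rate": [0.001, 0.01, 0.1],
    "batch_size": [32, 64, 128],
    "num_layers": [2, 3, 4],
    "dropout": [0.0, 0.25, 0.5],
}

def train_and_evaluate(params):
    # Hypothetical stand-in: a real version would launch a full
    # training run with `params` and return its validation metric.
    return -abs(params["learning_rate"] - 0.01) - params["dropout"]

best_score, best_params, jobs = float("-inf"), None, 0
for values in itertools.product(*param_grid.values()):
    params = dict(zip(param_grid, values))
    score = train_and_evaluate(params)
    jobs += 1
    if score > best_score:
        best_score, best_params = score, params

print(f"Ran {jobs} jobs; best parameters: {best_params}")
```

Even this toy grid runs 81 jobs, and each one would be a full training run in practice; zooming in on hot spots and repeating the process only multiplies the count further.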