Hyperparameter Tuning
Previously, you saw how to deal with an overfitting model by using different regularization techniques. These techniques help the model generalize better to unseen data but, as you have seen, applying them too aggressively can also hurt performance and make the model underfit.
With neural networks, data scientists have access to different hyperparameters they can tune to improve the performance of a model. For example, you can try different learning rates to see whether one leads to better results, experiment with the number of units in each hidden layer, or test whether different dropout rates achieve a better trade-off between overfitting and underfitting.
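To make these hyperparameters concrete, here is a minimal sketch of a model-building function that exposes them as arguments. It assumes a Keras binary classifier with a hypothetical input size of 20 features; the function name and default values are illustrative, not a prescribed setup:

```python
from tensorflow import keras

def build_model(learning_rate=1e-3, hidden_units=64, dropout_rate=0.2):
    """Build a small classifier whose key hyperparameters are exposed.

    All three arguments are tunable: the optimizer's learning rate,
    the width of the hidden layer, and the dropout rate.
    """
    model = keras.Sequential([
        keras.Input(shape=(20,)),          # assumed input size, for illustration
        keras.layers.Dense(hidden_units, activation="relu"),
        keras.layers.Dropout(dropout_rate),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(
        optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
        loss="binary_crossentropy",
        metrics=["accuracy"],
    )
    return model
```

Wrapping model construction in a function like this makes it easy to train one candidate model per hyperparameter setting and compare their validation scores.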
However, the choice of one hyperparameter can affect the impact of another, so hyperparameters cannot be tuned reliably in isolation. As the number of hyperparameters and candidate values grows, the number of combinations to be tested increases exponentially, and training and evaluating a model for each combination takes a lot of time.
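To see how quickly the search space grows, consider a minimal sketch of a grid over the three hyperparameters above. The specific candidate values are illustrative; with just three values per hyperparameter, a full grid already requires 3 × 3 × 3 = 27 training runs, and adding one more hyperparameter with three values triples that to 81:

```python
from itertools import product

# Hypothetical search space; the candidate values are illustrative.
search_space = {
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "hidden_units": [32, 64, 128],
    "dropout_rate": [0.0, 0.2, 0.5],
}

# The full grid is the Cartesian product of all candidate values.
combinations = list(product(*search_space.values()))
print(len(combinations))  # 27 candidate models to train and evaluate

for learning_rate, hidden_units, dropout_rate in combinations:
    # Train and evaluate one model per combination (training loop omitted).
    pass
```

This exponential growth is why exhaustive grid search becomes impractical beyond a handful of hyperparameters, motivating more efficient search strategies.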