Summary
You started your journey in this chapter with an introduction to the different scenarios of training a model. A model is overfitting when its performance is much better on the training set than on the test set. An underfitting model is one that performs poorly on both sets because it has not learned the underlying patterns in the data. Finally, a good model achieves good performance on both the training and test sets.
Then, you encountered several regularization techniques that can help prevent a model from overfitting. You first looked at L1 and L2 regularization, which add a penalty component to the cost function. This additional penalty helps to simplify the model by shrinking the weights of some features. Then, you went through two techniques specific to neural networks: dropout and early stopping. Dropout randomly drops some units in the model architecture, forcing it to rely on other features to make predictions. Early stopping is a mechanism that automatically stops the training of a model once its performance on a validation set stops improving.
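The two ideas above can be sketched in a few lines of NumPy. This is a minimal, illustrative sketch rather than the chapter's own code: the function names (`mse_cost`, `l2_cost`, `dropout`), the toy data, and the regularization strength `lam` are all made up for the example.

```python
import numpy as np

def mse_cost(w, X, y):
    # Base cost: mean squared error of a linear model's predictions
    preds = X @ w
    return np.mean((preds - y) ** 2)

def l2_cost(w, X, y, lam):
    # L2 regularization adds lam * sum(w^2) to the base cost,
    # penalizing large weights and pushing the model to stay simple
    return mse_cost(w, X, y) + lam * np.sum(w ** 2)

def dropout(activations, rate, rng):
    # Inverted dropout: randomly zero a fraction `rate` of units
    # and rescale the survivors so the expected activation is unchanged
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

# Toy data: two samples, two features (illustrative values only)
X = np.array([[1.0, 2.0], [3.0, 4.0]])
y = np.array([1.0, 2.0])
w = np.array([0.5, -0.5])

base = mse_cost(w, X, y)
penalized = l2_cost(w, X, y, lam=0.1)
# The penalty can only increase the cost for nonzero weights
assert penalized > base

rng = np.random.default_rng(0)
h = np.ones(8)
dropped = dropout(h, rate=0.5, rng=rng)
# Dropout preserves the shape; each surviving unit is scaled by 1 / (1 - rate)
assert dropped.shape == h.shape
```

During training, L2 discourages any single weight from dominating, while dropout forces the network to spread useful signal across many units; at inference time dropout is switched off, which is why the inverted-dropout rescaling is applied during training.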