Regularizing a neural network with early stopping
Early stopping is a commonly used technique in deep learning to prevent models from overfitting. The idea is simple yet effective: if the model starts to overfit as the training epochs accumulate, we stop training early, before the overfitting sets in. We can apply this technique to the breast cancer dataset.
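To make the idea concrete, here is a minimal sketch of patience-based early stopping on the breast cancer dataset. The network architecture, patience value, and hyperparameters are illustrative assumptions, not the recipe's final implementation:

```python
import torch
import torch.nn as nn
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Load, split, and scale the breast cancer dataset (30 features)
X, y = load_breast_cancer(return_X_y=True)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler()
X_train = torch.tensor(scaler.fit_transform(X_train), dtype=torch.float32)
X_valid = torch.tensor(scaler.transform(X_valid), dtype=torch.float32)
y_train = torch.tensor(y_train, dtype=torch.float32).unsqueeze(1)
y_valid = torch.tensor(y_valid, dtype=torch.float32).unsqueeze(1)

# A small fully connected classifier (illustrative architecture)
model = nn.Sequential(nn.Linear(30, 16), nn.ReLU(), nn.Linear(16, 1))
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

patience = 5  # number of epochs to wait for an improvement before stopping
best_valid_loss = float("inf")
epochs_without_improvement = 0

for epoch in range(200):
    # One training epoch (full-batch for brevity)
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(X_train), y_train)
    loss.backward()
    optimizer.step()

    # Evaluate on the validation set
    model.eval()
    with torch.no_grad():
        valid_loss = criterion(model(X_valid), y_valid).item()

    # Early-stopping check: stop if the validation loss has not improved
    # for `patience` consecutive epochs
    if valid_loss < best_valid_loss:
        best_valid_loss = valid_loss
        epochs_without_improvement = 0
    else:
        epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            print(f"Early stopping at epoch {epoch}")
            break
```

The key design choice is the patience value: a larger patience tolerates temporary plateaus in the validation loss, while a smaller one stops training more aggressively.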
Getting ready
In a perfect world, there would be no need for regularization: the train and validation losses would stay almost perfectly equal, regardless of the number of epochs, as in Figure 7.3.
Figure 7.3 – Example with no overfitting: train and valid losses as a function of the number of epochs
But it’s not always that perfect. In practice, the neural network may learn more and more about the data distribution of the train set at each epoch, at the cost of generalization to new data. This case is depicted...