Neural networks start overfitting when they iterate too many times over the same small set of training samples. Therefore, a straightforward way to prevent this problem is to determine the right number of training epochs: low enough to stop before the network starts overfitting, but high enough for the network to learn all it can from the training set.
Cross-validation is key here to evaluate when training should stop. By providing a validation dataset to our optimizer, we can measure the performance of the model on images the network has not been directly optimized on. If we validate the network after each epoch, for instance, we can check whether training should continue (that is, while the validation accuracy is still increasing) or be stopped (that is, once the validation accuracy stagnates or drops). Stopping on this criterion is called early stopping.
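As a concrete illustration, the following is a minimal TensorFlow/Keras sketch of early stopping. The tiny architecture, the randomly generated stand-in data, and the `patience` value are all illustrative assumptions rather than details from this section:

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in data: 1,000 28x28 grayscale images, 10 classes.
x = np.random.rand(1000, 28, 28).astype('float32')
y = np.random.randint(0, 10, size=(1000,))
x_train, x_val = x[:800], x[800:]   # hold out 20% for validation
y_train, y_val = y[:800], y[800:]

# A small classifier, purely for demonstration purposes.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Early stopping: halt training once validation accuracy stops improving.
early_stopping = tf.keras.callbacks.EarlyStopping(
    monitor='val_accuracy',     # metric checked after every epoch
    patience=5,                 # tolerate 5 epochs without improvement
    restore_best_weights=True)  # roll back to the best epoch's weights

history = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),  # evaluated each epoch
                    epochs=200,                      # generous upper bound
                    callbacks=[early_stopping])
```

Note that `epochs` only sets an upper bound here; the callback decides the actual stopping point. The `patience` parameter trades robustness to noisy validation curves against wasted epochs, and `restore_best_weights=True` ensures the final model corresponds to the best validation score rather than the last epoch.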
In practice, we usually monitor and plot the validation...