When training a network, we must specify the number of epochs in advance, without knowing how many will actually be needed. If we specify too few epochs compared to what is actually required, we may have to train the network again with more. On the other hand, if we specify far more epochs than necessary, the network may overfit and we may have to retrain it with fewer epochs. This trial-and-error approach can be very time-consuming for applications where each epoch takes a long time to complete. In such situations, we can make use of callbacks, which can stop the network's training at a suitable time.
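For instance, Keras provides an EarlyStopping callback that monitors a chosen metric and halts training once it stops improving. The following is a minimal sketch of this idea, not the code we develop in this chapter; it assumes a compiled Keras model named model and training arrays x_train and y_train, names introduced here purely for illustration:

    # Minimal sketch: stop training once validation loss stalls.
    # Assumes `model` is a compiled Keras model and `x_train`/`y_train`
    # are the training data (hypothetical names for illustration).
    from tensorflow.keras.callbacks import EarlyStopping

    # Stop when validation loss has not improved for 10 consecutive
    # epochs, and roll back to the best weights seen so far.
    early_stop = EarlyStopping(monitor='val_loss',
                               patience=10,
                               restore_best_weights=True)

    history = model.fit(x_train, y_train,
                        epochs=200,             # generous upper bound
                        validation_split=0.2,
                        callbacks=[early_stop])

With this setup, we can safely specify a generous upper bound for the number of epochs, since training will end early once the monitored metric stops improving.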
To illustrate this problem, let's develop a classification model with the CTG data from Chapter 2, Deep Neural Networks for Multi...