It is good practice to save the model whenever the validation metric chosen for evaluation improves. For our project, we will track the validation log loss and save the model each time this score improves across epochs. This way, once training finishes, we keep the model weights that achieved the best validation score rather than the final weights from the epoch at which training stopped. Training will continue until the maximum number of epochs is reached, or until the validation log loss has not improved for 10 consecutive epochs. We will also reduce the learning rate when the validation log loss does not improve for 3 epochs. The following code block performs the learning rate reduction and checkpointing:
# Reduce the learning rate when the validation log loss has not improved for 3 epochs
reduce_lr = keras.callbacks.ReduceLROnPlateau(monitor='val_loss', patience=3)
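The checkpointing and early-stopping behavior described above can be added with companion callbacks and passed to model.fit together with reduce_lr. The following is a minimal sketch rather than the project's exact code: it assumes keras has already been imported, and the names model, X_train, y_train, X_val, y_val, and max_epochs are placeholders for the project's compiled model, data, and epoch budget; the weights filename is illustrative:

# Save the model weights only when the validation log loss improves
checkpoint = keras.callbacks.ModelCheckpoint(
    filepath='best_model.weights.h5',   # illustrative filename
    monitor='val_loss',                 # track the validation log loss
    save_best_only=True,                # keep only the best-scoring weights
    save_weights_only=True,
    verbose=1)

# Stop training after 10 consecutive epochs without improvement in validation log loss
early_stop = keras.callbacks.EarlyStopping(
    monitor='val_loss',
    patience=10,
    verbose=1)

# model, X_train, y_train, X_val, y_val, and max_epochs are placeholders for
# the project's compiled model, training/validation data, and epoch budget
history = model.fit(
    X_train, y_train,
    validation_data=(X_val, y_val),
    epochs=max_epochs,
    callbacks=[reduce_lr, checkpoint, early_stop])

With save_best_only=True, the checkpoint callback overwrites the saved weights only when the monitored validation log loss improves, so the file left on disk after training corresponds to the best epoch rather than the last one.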