Before we start training, since our model contains some complex components, we attach a callback that reduces the learning rate whenever the monitored loss plateaus across successive epochs. This is extremely helpful for adjusting the learning rate on the fly without stopping training:
from keras.callbacks import ReduceLROnPlateau

reduce_lr = ReduceLROnPlateau(monitor='loss', factor=0.15,
                              patience=2, min_lr=0.000005)
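To build intuition for what this callback does, here is a minimal pure-Python sketch of the plateau logic, using the same `factor`, `patience`, and `min_lr` values as above. This is an illustration of the idea, not Keras's actual implementation:

```python
class PlateauScheduler:
    """Illustrative sketch: reduce the learning rate when the
    monitored loss stops improving (not the Keras implementation)."""

    def __init__(self, lr, factor=0.15, patience=2, min_lr=0.000005):
        self.lr = lr
        self.factor = factor
        self.patience = patience
        self.min_lr = min_lr
        self.best = float('inf')
        self.wait = 0

    def step(self, loss):
        if loss < self.best:
            # The loss improved: remember it and reset the counter.
            self.best = loss
            self.wait = 0
        else:
            # No improvement this epoch.
            self.wait += 1
            if self.wait >= self.patience:
                # Plateau detected: scale the LR down, bounded by min_lr.
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.wait = 0
        return self.lr


sched = PlateauScheduler(lr=1e-3)
for loss in [0.9, 0.8, 0.8, 0.8]:  # loss stalls for two epochs
    lr = sched.step(loss)
# After the plateau, lr has dropped from 1e-3 to 1e-3 * 0.15 = 1.5e-4
```

With `factor=0.15` each reduction is aggressive (the rate drops to 15% of its previous value), and `min_lr` acts as a floor so the rate never becomes vanishingly small.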
Let's train our model now! We trained it for up to 50 epochs, saving the model once at around 30 epochs and again at 50 epochs:
BATCH_SIZE = 256
EPOCHS = 30
cap_lens = [(cl-1) for cl in tc_tokens_length]
total_size = sum(cap_lens)

history = model.fit_generator(
    dataset_generator(processed_captions...
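The full `fit_generator` call is truncated above, but the key requirement is that the generator it consumes yields `(inputs, targets)` batches indefinitely. As a rough illustration only (the names and batch layout here are hypothetical, not the book's actual `dataset_generator`), such a generator might look like this:

```python
import numpy as np

def batch_generator(captions, image_features, batch_size):
    """Hypothetical sketch of a captioning batch generator.
    Yields ([image_features, caption_prefixes], next_tokens) forever,
    as Keras's fit_generator expects."""
    n = len(captions)
    while True:  # loop endlessly; Keras stops via steps_per_epoch
        for start in range(0, n, batch_size):
            caps = captions[start:start + batch_size]
            feats = image_features[start:start + batch_size]
            # Inputs: the image features and all-but-last caption tokens;
            # targets: the captions shifted by one (next-token prediction).
            inputs = [np.array(feats),
                      np.array([c[:-1] for c in caps])]
            targets = np.array([c[1:] for c in caps])
            yield inputs, targets
```

The infinite `while True` loop is deliberate: Keras decides when an epoch ends via `steps_per_epoch`, so the generator itself must never be exhausted.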