Models tested on data
Once we've got our models ready, we can check them against our dataset and against free-form sentences. During training, both training tools (train_crossent.py and train_scst.py) periodically save the model, which happens in two different situations: when the BLEU score on the test dataset reaches a new maximum, and every 10 epochs. Both kinds of checkpoints have the same format (produced by the torch.save() method) and contain the model's weights. Besides the weights, I also save the token-to-integer-ID mapping, which the tools use to preprocess phrases.
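As a sketch of how such a saved token-to-ID mapping is used to preprocess a phrase, the snippet below maps tokens to integer IDs with an unknown-token fallback. The mapping, the `#UNK` token, and the function name are illustrative assumptions, not necessarily the project's actual vocabulary or API:

```python
# Minimal sketch of preprocessing a phrase with a saved token-to-ID
# mapping. The vocabulary and special token below are toy assumptions.

UNK = "#UNK"

def encode_phrase(phrase, emb_dict):
    """Lower-case, split on whitespace, and map each token to its
    integer ID, falling back to the unknown-token ID for
    out-of-vocabulary words."""
    unk_id = emb_dict[UNK]
    return [emb_dict.get(tok, unk_id) for tok in phrase.lower().split()]

if __name__ == "__main__":
    # A toy mapping standing in for the one stored next to the weights.
    emb_dict = {UNK: 0, "how": 1, "are": 2, "you": 3}
    print(encode_phrase("How are you", emb_dict))  # [1, 2, 3]
    print(encode_phrase("Who are you", emb_dict))  # [0, 2, 3]
```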
To experiment with the models, two utilities exist: data_test.py and use_model.py. data_test.py loads the model, applies it to all phrases from the given genre, and reports the average BLEU score. Before testing, the phrase pairs are grouped by the first phrase. For example, the following is the result for two models, trained on the comedy genre. The first one was trained by the cross-entropy method and the...
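The grouping step can be sketched as follows: each first phrase is collected together with all of its reference replies, so BLEU can be computed against every valid answer at once. The function name and data layout here are my own illustrative choices, not necessarily those used in data_test.py:

```python
from collections import defaultdict

def group_by_first(pairs):
    """Group (first_phrase, reply) pairs by the first phrase, so every
    input phrase maps to the list of all its reference replies."""
    groups = defaultdict(list)
    for first, reply in pairs:
        groups[first].append(reply)
    return dict(groups)

if __name__ == "__main__":
    pairs = [
        ("hi", "hello"),
        ("hi", "hey there"),
        ("how are you", "fine"),
    ]
    # Each first phrase now carries all of its reference replies.
    print(group_by_first(pairs))
```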