Training and performance
The Original Transformer was trained on a 4.5 million sentence-pair English-German dataset and a 36 million sentence-pair English-French dataset.
The datasets come from the Ninth Workshop on Statistical Machine Translation (WMT 2014); you can explore them at http://www.statmt.org/wmt14/.
The training of the Original Transformer base models took 12 hours for 100,000 steps on a machine with 8 NVIDIA P100 GPUs. The big models took 3.5 days for 300,000 steps.
The Original Transformer outperformed all previous machine translation models, achieving a BLEU score of 41.8 on the WMT English-to-French dataset.
BLEU stands for Bilingual Evaluation Understudy. It is an algorithm that evaluates the quality of machine-translated text against one or more human reference translations.
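To see what a BLEU computation looks like in practice, here is a minimal sketch using the NLTK library. It is not the evaluation setup used in the original paper, which scored the full WMT test set; the example sentences below are made up purely for illustration.

```python
# Minimal BLEU sketch with NLTK (illustrative sentences, not WMT data)
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "cat", "sat", "on", "the", "mat"]]   # human reference translation(s)
candidate = ["the", "cat", "is", "on", "the", "mat"]      # machine translation to evaluate

smoothie = SmoothingFunction().method1  # avoids zero scores on short sentences
score = sentence_bleu(reference, candidate, smoothing_function=smoothie)
print(f"BLEU: {score:.4f}")
```

A higher score means the candidate shares more n-grams with the reference; a perfect match scores 1.0.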
The Google Research and Google Brain team applied optimization strategies to improve the performance of the Transformer. For example, the Adam optimizer was used, but the learning rate was varied: it increased linearly over an initial warmup phase and then decreased proportionally to the inverse square root of the step number.
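The schedule described in the paper can be written as lrate = d_model^-0.5 * min(step^-0.5, step * warmup_steps^-1.5). The sketch below uses the paper's base-model values (d_model=512, warmup_steps=4000); the function name transformer_lr is just an illustrative choice.

```python
# Learning rate schedule from the original paper:
# lrate = d_model**-0.5 * min(step**-0.5, step * warmup_steps**-1.5)
# The rate rises linearly for warmup_steps steps, then decays with the
# inverse square root of the step number.
def transformer_lr(step, d_model=512, warmup_steps=4000):
    step = max(step, 1)  # avoid division by zero at step 0
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

# The rate peaks around step 4,000, the warmup boundary
for s in (100, 4000, 100000):
    print(s, f"{transformer_lr(s):.6f}")
```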
Different types of regularization techniques were applied: residual dropout was applied to the output of each sub-layer, and dropout was also applied to the sums of the embeddings and the positional encodings. In addition, the Transformer applies label smoothing to avoid overfitting with overconfident one-hot outputs. Label smoothing makes the training targets less certain, which forces the model to train more and generalize better.
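The paper used a label smoothing value of 0.1. Here is a minimal sketch of how the same idea can be applied today with PyTorch's built-in cross-entropy loss (assuming PyTorch 1.10 or later; the logits and targets below are arbitrary placeholders):

```python
# Label smoothing with PyTorch's built-in cross-entropy loss
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss(label_smoothing=0.1)  # 0.1 as in the paper

logits = torch.randn(4, 10)           # 4 predictions over a 10-token vocabulary
targets = torch.tensor([1, 3, 5, 7])  # correct token indices
loss = criterion(logits, targets)
print(loss.item())
```

With smoothing, the target distribution puts a small amount of probability mass on the wrong classes instead of a pure one-hot vector, which penalizes overconfident predictions.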
Several variations of the Transformer architecture have since led to other models and use cases that we will explore in the subsequent chapters.
Before ending the chapter, let's get a feel for the simplicity of ready-to-use transformer models in Hugging Face, for example.
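As a quick preview of that simplicity, here is a minimal sketch of a translation pipeline from the Hugging Face transformers library. The default model is chosen and downloaded by the library on first use; the input sentence is only an example.

```python
# Minimal preview: a ready-to-use translation pipeline from Hugging Face
from transformers import pipeline

translator = pipeline("translation_en_to_fr")
print(translator("The Transformer outperformed previous translation models."))
```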