The rise of the Transformer: Attention is All You Need
In December 2017, Vaswani et al. (2017), working at Google Brain and Google Research, published their seminal paper, Attention Is All You Need. Throughout this chapter and book, I will refer to the model described in that paper as the “original Transformer model.”
Appendix I, Terminology of Transformer Models, can ease the transition from classical deep learning vocabulary to transformer vocabulary by summarizing how the classical definitions of neural network models have changed.
In this section, we will look at the structure of the Transformer model they built. In the following sections, we will explore what is inside each component of the model.
The original Transformer model is built from stacks of 6 layers each. The output of layer l is the input of layer l+1, and so on until the final prediction is reached. There is a 6-layer encoder stack on the left and...
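The layer-stacking idea can be sketched in a few lines of Python. This is only an illustration of the data flow, where the output of each layer feeds the next; the layer body below is a hypothetical placeholder, not the real attention and feed-forward sublayers described later in this chapter:

```python
def make_layer(index):
    # Hypothetical stand-in for one encoder layer. A real Transformer
    # layer would apply multi-head attention followed by a
    # position-wise feed-forward network.
    def layer(x):
        return [value + 1 for value in x]  # placeholder transformation
    return layer

def encoder_stack(x, n_layers=6):
    # The output of layer l becomes the input of layer l+1.
    for i in range(n_layers):
        x = make_layer(i)(x)
    return x

# Each element of the input passes through all 6 stacked layers.
print(encoder_stack([0, 0, 0]))
```

The point of the sketch is only the composition pattern: the stack is a chain of identical-shaped layers, so the same input and output dimensions are preserved from one layer to the next.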