The rise of the Transformer: Attention Is All You Need
In December 2017, Vaswani et al., working at Google Brain and Google Research, published their seminal paper, Attention Is All You Need. Throughout this chapter and book, I will refer to the model it describes as the "original Transformer model."
In this section, we will look at the Transformer model they built from the outside. In the following sections, we will explore what is inside each component of the model.
The original Transformer model consists of two stacks of 6 layers each. Within a stack, the output of layer l becomes the input of layer l+1 until the final prediction is reached. There is a 6-layer encoder stack on the left and a 6-layer decoder stack on the right:
Figure 1.2: The architecture of the Transformer
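The layer-to-layer flow of a stack can be sketched in a few lines of Python. The layer body below is a hypothetical placeholder, not the real attention and FFN sub-layers; the point is only that each layer's output feeds the next layer:

```python
N_LAYERS = 6  # the original Transformer uses 6 layers per stack

def layer(x):
    # Placeholder transformation standing in for the real sub-layers
    # (attention + FFN); here it simply increments each value.
    return [v + 1 for v in x]

def encoder_stack(x):
    for l in range(N_LAYERS):
        x = layer(x)  # output of layer l becomes input of layer l+1
    return x

print(encoder_stack([0, 0]))  # each element is incremented once per layer
```

Running the sketch on `[0, 0]` yields `[6, 6]`: the input passed through all 6 layers, one transformation per layer.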
On the left, the inputs enter the encoder side of the Transformer through an attention sub-layer and a FeedForward Network (FFN) sub-layer. On the right, the target outputs go...