Summary
In this chapter, we started by examining the mind-blowing long-distance dependencies that transformer architectures can uncover. Transformers can perform transductions from written and oral sequences to meaningful representations as never before in the history of Natural Language Understanding (NLU).
These two dimensions, the expansion of transduction and the simplification of implementation, are taking artificial intelligence to a level never seen before.
We explored the bold approach of removing RNNs, LSTMs, and CNNs from transduction problems and sequence modeling to build the Transformer architecture. The symmetrical design of the encoder and decoder, built on standardized sublayer dimensions, makes the flow from one sublayer to another nearly seamless.
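To make that idea concrete, here is a minimal PyTorch sketch, not the book's code, showing why a constant model dimension matters: every sublayer returns d_model columns, so its output can be added back to its input and passed straight to the next sublayer. The class name SublayerConnection is illustrative, not from any library.

```python
import torch
import torch.nn as nn

d_model = 512  # the constant dimension used throughout the Original Transformer

class SublayerConnection(nn.Module):
    # Residual connection followed by layer normalization:
    # output = LayerNorm(x + Sublayer(x)).
    # The addition only works because every sublayer preserves d_model.
    def __init__(self, d_model):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x, sublayer):
        return self.norm(x + sublayer(x))

# Any sublayer that maps d_model -> d_model can be chained seamlessly:
x = torch.rand(2, 10, d_model)  # (batch, sequence, d_model)
feed_forward = nn.Sequential(
    nn.Linear(d_model, 2048), nn.ReLU(), nn.Linear(2048, d_model)
)
y = SublayerConnection(d_model)(x, feed_forward)  # still (2, 10, d_model)
```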
We saw that, beyond removing recurrent network models, transformers introduce parallelized layers that reduce training time. We also discovered other innovations, such as positional encoding and masked multi-head attention.
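As a quick reminder of what these two innovations compute, here is a short sketch assuming the sinusoidal encoding and the lower-triangular decoder mask of the original paper; the function names are illustrative, not from any library.

```python
import torch

def positional_encoding(max_len, d_model=512):
    # Sinusoidal positional encoding from the original paper:
    #   PE(pos, 2i)   = sin(pos / 10000**(2i / d_model))
    #   PE(pos, 2i+1) = cos(pos / 10000**(2i / d_model))
    position = torch.arange(max_len).unsqueeze(1).float()
    div_term = torch.pow(10000.0, torch.arange(0, d_model, 2).float() / d_model)
    pe = torch.zeros(max_len, d_model)
    pe[:, 0::2] = torch.sin(position / div_term)
    pe[:, 1::2] = torch.cos(position / div_term)
    return pe  # shape (max_len, d_model), added to the input embeddings

def causal_mask(size):
    # The decoder's masked multi-head attention hides future tokens:
    # position i may attend only to positions 0..i.
    return torch.tril(torch.ones(size, size, dtype=torch.bool))

pe = positional_encoding(max_len=50)  # one vector per token position
mask = causal_mask(5)                 # 5 x 5 lower-triangular boolean mask
```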
The flexible Original Transformer architecture provides the basis for many innovative variants that open the way to yet more powerful solutions to transduction problems and language modeling.
We will go deeper into some aspects of the Transformer’s architecture in the following chapters, when describing the many variants of the original model.
The arrival of the Transformer marks the beginning of a new generation of ready-to-use artificial intelligence models. For example, Hugging Face and Google Brain make artificial intelligence easy to implement with a few lines of code.
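For instance, here is a hedged illustration of what "a few lines of code" can look like with the Hugging Face transformers library; the pipeline API is real, but the exact pretrained model it downloads by default depends on the installed version.

```python
# pip install transformers  (plus a backend such as torch)
from transformers import pipeline

# pipeline() wires a pretrained model and its tokenizer together in one call
classifier = pipeline("sentiment-analysis")
print(classifier("Transformers changed sequence modeling."))
# returns a list such as [{'label': 'POSITIVE', 'score': ...}]
```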
Before continuing to the next chapter, make sure you grasp the details of the paradigm shift that the architecture of the Original Transformer represents. You will then be able to tackle any present or future transformer model.
In this chapter, we dived into the architecture of the Original Transformer. Now we will see what these models can do in practice. In Chapter 3, Emergent vs. Downstream Tasks: The Unseen Depths of Transformers, we will explore the wide range of tasks they can perform.