Summary
In this chapter, we saw how T5 transformer models standardized the input of the encoder and decoder stacks of the Original Transformer. The Original Transformer architecture keeps an identical structure for each block (or layer) of its encoder and decoder stacks, but it did not prescribe a standardized input format for NLP tasks.
Raffel et al. (2019) designed a standard input for a wide range of NLP tasks by defining a text-to-text model. They added a prefix to each input sequence, indicating the NLP problem to solve, which led to a standard text-to-text format. The Text-To-Text Transfer Transformer (T5) was born. This deceptively simple evolution made it possible to use the same model and hyperparameters for a wide range of NLP tasks, taking the standardization of transformer models a step further.
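As a reminder of how the prefix convention works in practice, here is a minimal sketch, assuming the Hugging Face transformers library and the public t5-small checkpoint (the checkpoint name and prefixes shown are illustrative, not taken from this chapter's notebook):

```python
# A minimal sketch of T5's text-to-text format, assuming the Hugging Face
# transformers library and the public "t5-small" checkpoint.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# The same model and hyperparameters handle different NLP tasks;
# only the task prefix prepended to the input text changes.
tasks = [
    "summarize: The US has passed the peak on new cases, the president said ...",
    "translate English to German: The house is wonderful.",
    "cola sentence: The course is jumping well.",
]

for text in tasks:
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    outputs = model.generate(**inputs, max_length=50, num_beams=4)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Each prefix tells the single text-to-text model which problem to solve, and the answer always comes back as generated text.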
We then implemented a T5 model that could summarize any text. We tested the model on texts that were not part of ready...