The rise of text-to-text transformer models
Raffel et al. (2019) set out as pioneers with one goal, stated in the title of their paper: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. From the start, the Google team working on this approach emphasized that it would not modify the Original Transformer's fundamental architecture.
At that point, Raffel et al. (2019) wanted to focus on concepts, not techniques. They showed no interest in producing yet another so-called silver-bullet transformer model with n parameters and layers. Instead, the T5 team wanted to find out how good transformers could be at understanding a language.
Humans learn a language and then apply that knowledge to a wide range of tasks through transfer learning. The core concept of the T5 model is to find an abstract model that can do the same across NLP tasks. Remember, though, that transformers learn to reproduce human-like responses through statistical patterns, not through human understanding.
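In practice, T5 expresses every task as text in, text out: a short task prefix added to the input tells the model what to do, and the answer comes back as text. The following minimal sketch illustrates this unified format; it assumes the Hugging Face transformers library and the t5-small checkpoint, and the prefixes and sentences are illustrative examples, not the authors' exact experiments:

```python
# A minimal sketch of T5's unified text-to-text format, assuming the
# Hugging Face transformers library and the t5-small checkpoint.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Different NLP tasks share one model: only the text prefix changes.
tasks = [
    "translate English to German: The house is wonderful.",
    "summarize: Transfer learning lets a model trained on one task "
    "reuse its knowledge on many downstream NLP tasks.",
]

for text in tasks:
    inputs = tokenizer(text, return_tensors="pt")
    output_ids = model.generate(**inputs, max_length=50)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The design choice this sketch highlights is the unification itself: translation, summarization, classification, and regression are all framed as generating a text string, so one architecture and one training objective can transfer across them.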