Uncovering transformer improvements using only the encoder
The first type of transformer-based architectural advancement we will discuss comprises models that use only the encoder part of the original transformer, keeping the same multi-head attention layer. The encoder-only design drops the masked multi-head attention layer because these models are not trained with the next-token-prediction setup. Within this line of improvements, training goals and setups vary across data modalities, and they differ slightly between successive improvements within the same modality. One concept that stays largely constant across modalities, however, is the use of a semi-supervised learning method. In the case of transformers, this means that a form of unsupervised learning is executed first, followed by a straightforward supervised learning stage. Unsupervised learning offers transformers a way to initialize their state...
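To make the architectural difference concrete, the following is a minimal PyTorch sketch of an encoder-only model; the class name `EncoderOnlyModel`, the vocabulary size, and the hyperparameter values are illustrative assumptions rather than the configuration of any specific published model. The key point is that no causal mask is applied, so attention is bidirectional.

```python
import torch
import torch.nn as nn

class EncoderOnlyModel(nn.Module):
    """Minimal encoder-only transformer: token embeddings feed a stack of
    standard (unmasked) encoder layers, so every position can attend to
    every other position in both directions."""

    def __init__(self, vocab_size=30000, d_model=256, nhead=8, num_layers=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)

    def forward(self, token_ids):
        # No causal (look-ahead) mask is passed, which is what distinguishes
        # this setup from a decoder trained on next-token prediction.
        return self.encoder(self.embed(token_ids))

tokens = torch.randint(0, 30000, (2, 16))   # batch of 2 sequences, length 16
hidden_states = EncoderOnlyModel()(tokens)  # shape: (2, 16, 256)
```

Positional information and the unsupervised pretraining objective are omitted here for brevity; the sketch only illustrates the unmasked, encoder-only attention stack described above.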