For those of you who got excited at the title (transformers), this section sadly has nothing to do with Optimus Prime or Bumblebee. In all seriousness now, we have seen that attention mechanisms work well alongside architectures such as RNNs and CNNs, but they are also powerful enough to stand on their own, as Vaswani et al. showed in their 2017 paper Attention Is All You Need.
The transformer model is built entirely around self-attention, performing sequence-to-sequence tasks without any recurrent units. Wait, but how? Let's break down the architecture and find out how this is possible.
RNN-based sequence-to-sequence models encode the input and then decode that encoding to map it to a target output. The transformer differs here: it treats the encoded input as a set of key-value pairs (K, V), both of dimension n, the length of the input sequence.
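To make the key-value picture concrete, here is a minimal sketch of the scaled dot-product attention the transformer builds on, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k))V, written in plain NumPy. The shapes, variable names, and toy inputs are illustrative assumptions, not taken from any reference implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.

    Q: (n_queries, d_k), K: (n, d_k), V: (n, d_v),
    where n is the number of key-value pairs (the input length).
    """
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # (n_queries, n) compatibility scores
    weights = softmax(scores, axis=-1)  # attention weights over the n positions
    return weights @ V                  # weighted sum of values: (n_queries, d_v)

# Toy usage: n = 4 key-value pairs, 2 queries, d_k = d_v = 8.
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (2, 8)
```

Each query scores itself against every key, those scores are turned into weights with a softmax, and the output is the corresponding weighted sum of the values; this is the operation the transformer repeats in place of recurrence.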