ViT – Vision Transformer
Dosovitskiy et al. (2021) summed up the essence of the vision transformer architecture they designed in the title of their paper: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. The title captures the core idea: an image is split into fixed-size patches, 16x16 pixels each, and every patch is treated like a word in a sentence.
Let’s first go through a high-level view of the architecture of ViT before looking into the code.
The basic architecture of ViT
ViT processes an image as a sequence of patch "words." In this section, we will go through the process in three steps:
- Splitting the image into patches with a feature extractor.
- Building a vocabulary of image patches with the feature extractor.
- Feeding the patches to an encoder-only transformer, which embeds the input and produces raw logits that the pipeline functions convert into the final probabilities.
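Before looking at the library code, the first step above, splitting an image into patch "words", can be sketched in a few lines. This is a minimal NumPy illustration, not the actual feature-extractor implementation: the function name `image_to_patches` and the use of a zero-filled dummy image are assumptions for demonstration only.

```python
import numpy as np

def image_to_patches(image: np.ndarray, patch_size: int = 16) -> np.ndarray:
    """Split an (H, W, C) image into flattened patch_size x patch_size patches."""
    h, w, c = image.shape
    assert h % patch_size == 0 and w % patch_size == 0, "image must tile evenly"
    return (
        image
        # carve the height and width axes into (num_patches, patch_size) pairs
        .reshape(h // patch_size, patch_size, w // patch_size, patch_size, c)
        # group the two patch-grid axes together, pixels last
        .transpose(0, 2, 1, 3, 4)
        # flatten: one row per patch, patch_size * patch_size * c values each
        .reshape(-1, patch_size * patch_size * c)
    )

# A 224x224 RGB image yields a 14x14 grid of 16x16 patches:
# 196 patch "words," each flattened to a 768-dimensional vector.
image = np.zeros((224, 224, 3), dtype=np.float32)
patches = image_to_patches(image)
print(patches.shape)  # (196, 768)
```

The resulting sequence of 196 vectors plays the same role for the encoder that a sequence of token embeddings plays in an NLP transformer.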
The first step is to SPLIT the...