Summary
In this chapter, we measured the impact of the tokenization and subsequent data encoding process on transformer models. A transformer model can only attend to the tokens that the embedding and positional encoding sub-layers feed into its stack. This holds whether the model is an encoder-decoder, encoder-only, or decoder-only model, and whether the dataset seems good enough for training.
If the tokenization process fails, even partially, the transformer model we are running will miss critical tokens.
We first saw that for standard language tasks, raw datasets might be enough to train a transformer.
However, we discovered that even if a pretrained tokenizer has been trained on a billion words, it builds a dictionary containing only a small portion of the vocabulary it encounters. Like us, a tokenizer captures the essence of the language it is learning and only "remembers" the most important words, provided those words are also frequently used. This approach works well for...
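The frequency-based vocabulary principle can be sketched as a toy builder (a hypothetical illustration, not a production tokenizer; real subword tokenizers such as BPE or WordPiece fall back to word pieces rather than a single unknown token):

```python
from collections import Counter

def build_vocab(corpus_tokens, max_size):
    # Keep only the most frequent words; everything else maps to [UNK].
    counts = Counter(corpus_tokens)
    vocab = {"[UNK]": 0}
    for word, _ in counts.most_common(max_size - 1):
        vocab[word] = len(vocab)
    return vocab

def encode(tokens, vocab):
    # Words outside the learned dictionary are "forgotten" as [UNK].
    return [vocab.get(t, vocab["[UNK]"]) for t in tokens]

corpus = "the cat sat on the mat the cat ran".split()
vocab = build_vocab(corpus, max_size=4)
print(encode("the cat jumped".split(), vocab))  # "jumped" becomes [UNK] (id 0)
```

Here the rare word "jumped" never made it into the small dictionary, so the model downstream would receive an unknown-token id instead of a critical token.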