Matching Tokenizers and Datasets
When studying transformer models, we tend to focus on the architecture of the models and the datasets used to train them. We have explored the original Transformer, fine-tuned a BERT-like model, trained a RoBERTa model, trained a GPT-2 model, and implemented a T5 model. We have also gone through the main benchmark tasks and datasets.
We trained a RoBERTa tokenizer and used tokenizers to encode data. However, we did not explore the limits of tokenizers or evaluate how well they fit the models we build. Artificial intelligence is data-driven. Raffel et al. (2019), like all of the authors cited in this book, spent time preparing datasets for transformer models.
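To see why a tokenizer's fit matters, consider what happens when a word is missing from its vocabulary. The following is a minimal sketch, not the actual RoBERTa BPE implementation: a greedy longest-match subword splitter with a small, entirely hypothetical vocabulary, showing how a rare domain-specific term shatters into fragments.

```python
# Toy illustration (not the real RoBERTa BPE algorithm): a greedy
# longest-match subword tokenizer over a tiny hypothetical vocabulary.
def subword_tokenize(word, vocab):
    """Split `word` into the longest matching vocabulary pieces, left to right."""
    pieces = []
    i = 0
    while i < len(word):
        # Try the longest remaining substring first.
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                pieces.append(word[i:j])
                i = j
                break
        else:
            # No piece matched: emit an unknown-token placeholder.
            pieces.append("<unk>")
            i += 1
    return pieces

# A general-purpose vocabulary has no entry for a rare medical term,
# so the word breaks into short, low-information fragments.
vocab = {"card", "io", "my", "opathy", "the", "patient", "has"}
print(subword_tokenize("cardiomyopathy", vocab))  # ['card', 'io', 'my', 'opathy']
```

A model never sees "cardiomyopathy" as a single unit here; it must reassemble the meaning from four pieces, which is one way a mismatched tokenizer can degrade downstream task quality.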
In this chapter, we will go through some of the limits of tokenizers that hinder the quality of downstream transformer tasks. Do not take pretrained tokenizers at face value. You might be working with a specific dictionary of words (advanced medical terminology, for example) with words that...