Exploring sentence and WordPiece tokenizers to understand the efficiency of subword tokenizers for transformers
Transformer models commonly use BPE and WordPiece tokenization. In this section, we will see why choosing a subword tokenizer over other tokenizers significantly impacts transformer models. We will first review some of the main word and sentence tokenizers, and then move on to subword tokenizers. First, we will detect whether a tokenizer is a BPE or a WordPiece tokenizer. Then, we'll create a function to display the token-ID mappings. Finally, we'll analyze and control the quality of those mappings. The first step is to review some of the main word and sentence tokenizers.
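Before looking at word and sentence tokenizers, here is a minimal sketch of the first two steps described above: detecting the tokenizer family from its vocabulary, and displaying token-ID mappings. It relies on two surface conventions (WordPiece marks word-internal pieces with a `##` prefix, while GPT-2-style BPE marks word-initial pieces with a leading `Ġ` space marker); the function names and the toy vocabularies are illustrative, not part of any real model file.

```python
def guess_tokenizer_type(vocab):
    """Heuristically label a subword vocabulary as 'wordpiece', 'bpe', or 'unknown'."""
    if any(tok.startswith("##") for tok in vocab):
        return "wordpiece"   # WordPiece continuation prefix
    if any(tok.startswith("Ġ") for tok in vocab):
        return "bpe"         # GPT-2-style BPE space marker
    return "unknown"

def display_token_ids(vocab):
    """Map each token to an ID (its index here) and print the pairs."""
    mapping = {tok: i for i, tok in enumerate(vocab)}
    for tok, idx in mapping.items():
        print(f"{idx:>4}  {tok}")
    return mapping

# Toy vocabularies (illustrative only):
bert_like = ["trans", "##form", "##ers", "token", "##izer"]
gpt2_like = ["Ġtrans", "formers", "Ġtoken", "izer"]

print(guess_tokenizer_type(bert_like))  # wordpiece
print(guess_tokenizer_type(gpt2_like))  # bpe
display_token_ids(bert_like)
```

In practice, the same checks can be run against the vocabulary of a loaded Hugging Face tokenizer rather than a hand-written list.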
Word and sentence tokenizers
Choosing a tokenizer depends on the objectives of the NLP project. Although subword tokenizers are more efficient for transformer models, word and sentence tokenizers provide useful functionality. Sentence and word tokenizers...
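To make the distinction concrete, here is a minimal, rule-based sketch of sentence and word tokenization using only the standard library. The splitting rules are deliberately naive assumptions (split sentences after `.`, `!`, or `?` followed by whitespace; treat punctuation as separate word tokens); a production pipeline would typically use a library tokenizer such as NLTK's `sent_tokenize` and `word_tokenize` instead.

```python
import re

def sentence_tokenize(text):
    # Naive sentence splitter: break after ., !, or ? followed by whitespace.
    return [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]

def word_tokenize(sentence):
    # Naive word tokenizer: runs of word characters, or single punctuation marks.
    return re.findall(r"\w+|[^\w\s]", sentence)

text = "Transformers changed NLP. Tokenizers matter!"
sentences = sentence_tokenize(text)
print(sentences)                               # two sentences
print([word_tokenize(s) for s in sentences])   # words and punctuation per sentence
```

Note how the output keeps whole words intact: unlike a subword tokenizer, this approach cannot break an out-of-vocabulary word into smaller known pieces, which is precisely why subword tokenizers suit transformer models better.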