Performing tokenization
Tokenization is the process of converting text into tokens. Tokens can be paragraphs, sentences, or individual words, although tokenization is most commonly performed at the word level. NLTK comes with a number of tokenizers that will be demonstrated in this recipe.
How to do it
The code for this example is in the 07/02_tokenize.py file. It extends the sentence splitter to demonstrate five different tokenization techniques. Only the first sentence in the file is tokenized, to keep the output to a reasonable length.
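Because first_sentence is used in the steps without being defined in this excerpt, the following is a minimal sketch of the setup those steps assume; the filename sentence1.txt and the variable names are illustrative, not the exact contents of 02_tokenize.py:

import nltk
from nltk.tokenize import sent_tokenize

# sent_tokenize() relies on the punkt model; download it once if needed
nltk.download('punkt', quiet=True)

# 'sentence1.txt' is a hypothetical sample file standing in for the recipe's text
with open('sentence1.txt') as f:
    text = f.read()

# split the text into sentences and keep only the first one
sentences = sent_tokenize(text)
first_sentence = sentences[0]

With first_sentence in hand, the tokenization steps are as follows: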
- The first step is to simply use the built-in Python string .split() method. This results in the following:
print(first_sentence.split())
['We', 'are...
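Note that .split() leaves punctuation attached to words (a trailing period, for example, stays glued to the last token). NLTK's word-level tokenizers separate punctuation into their own tokens. As a hedged sketch of the kind of call the remaining steps demonstrate (not necessarily the exact next step in the file), word_tokenize() can be applied like this:

from nltk.tokenize import word_tokenize

# unlike str.split(), word_tokenize() emits punctuation as separate tokens
print(word_tokenize(first_sentence))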