A peek into Natural Language Processing (NLP)
This section is not strictly about machine learning, but it presents some machine learning applications in the area of Natural Language Processing. Python has many toolkits for processing text data, but the most powerful and complete of them is NLTK, the Natural Language Toolkit.
In the following sections, we'll explore its core functionalities. We will work with English; for other languages, you will first need to download the corresponding language corpora (note that some languages have no free, open source corpora available for NLTK).
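Before running the examples, the required NLTK data has to be fetched once. As a minimal setup sketch (assuming the standard 'punkt' resource, which provides NLTK's pretrained tokenizer models):

```python
import nltk

# One-off setup: fetch the pretrained tokenizer models used below.
# Calling nltk.download() with no argument opens an interactive
# downloader where you can browse all available corpora instead.
nltk.download('punkt')
```

The downloaded data is cached locally, so this only needs to run once per machine.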
Word tokenization
Tokenization is the action of splitting text into words. Splitting on whitespace might seem very easy, but it's not, because text contains punctuation and contractions. Let's start with an example:
In: my_text = "The coolest job in the next 10 years will be statisticians. People think I'm joking, but who would've guessed that computer engineers would've been the coolest job of the 1990s?"
    simple_tokens = my_text.split(' ')
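To see why plain whitespace splitting falls short, here is a minimal sketch of what a real tokenizer must handle, using only the standard library's `re` module (this is an illustration, not NLTK's actual algorithm):

```python
import re

def simple_tokenize(text):
    # Keep contractions such as "I'm" and "would've" as single tokens,
    # but split trailing punctuation off into tokens of its own.
    return re.findall(r"\w+(?:'\w+)*|[^\w\s]", text)

sentence = "People think I'm joking, but who would've guessed?"
print(simple_tokenize(sentence))
# → ['People', 'think', "I'm", 'joking', ',', 'but', 'who',
#    "would've", 'guessed', '?']
```

In contrast, `sentence.split(' ')` would produce tokens like `'joking,'` and `'guessed?'` with the punctuation still attached, which is exactly the problem NLTK's tokenizers solve.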