Text Wrangling and Preprocessing
A large share of NLP work, often estimated at around 80%, is data preprocessing. When we do topic modeling with TF-IDF, LDA, LSA, or similar models, we first need to prepare the texts. Without preprocessing, the quality of the model's output suffers, and latent information may remain buried in the mass of text. The well-known phrase garbage in, garbage out (GIGO) captures exactly this risk. In this chapter, we will learn the key steps in NLP preprocessing: tokenization, lowercase conversion, stop word removal, punctuation removal, stemming, and lemmatization. The first two are very basic, so we will spend more time on the rest. We will learn how to code these steps in spaCy, NLTK, and Gensim. Finally, we will build an NLP preprocessing pipeline that you can reuse in any future preprocessing task.
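As a quick preview, here is a minimal sketch of what these steps can look like in code. The sample sentence is made up for illustration, and the snippet assumes spaCy's small English model has been installed (python -m spacy download en_core_web_sm). Because spaCy does not ship a stemmer, NLTK's PorterStemmer stands in for the stemming step here.

```python
# Minimal preview of the preprocessing steps covered in this chapter.
# Assumes: pip install spacy nltk, and python -m spacy download en_core_web_sm
import spacy
from nltk.stem import PorterStemmer

nlp = spacy.load("en_core_web_sm")
doc = nlp("The strikers were striking and shouting outside the factory gates.")  # illustrative sentence

stemmer = PorterStemmer()
for token in doc:                        # tokenization happens inside nlp()
    if token.is_stop or token.is_punct:  # stop word and punctuation removal
        continue
    word = token.text.lower()            # lowercase conversion
    print(word, stemmer.stem(word), token.lemma_)  # stemmed form vs. lemma
```

Each of these operations is unpacked step by step in the sections that follow, first in spaCy and then again in NLTK and Gensim.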
Specifically, we will cover the following topics:
- Steps in NLP preprocessing
- Coding with spaCy
- Coding with NLTK
- Coding with Gensim
- Building a pipeline with spaCy
By...