Preface
Python is the most widely used language for natural language processing (NLP) thanks to its extensive tools and libraries for analyzing text and extracting computer-usable data. This book will take you through a range of text-processing techniques, from basics such as identifying parts of speech to complex topics such as topic modeling, text classification, and visualization.
Starting with an overview of NLP, the book presents recipes for dividing text into sentences, stemming and lemmatization, removing stopwords, and part-of-speech tagging to help you prepare your data. You will then learn about ways of extracting and representing grammatical information, such as dependency parsing and anaphora resolution; discover different ways of representing semantics using bag of words, TF-IDF, word embeddings, and BERT; and develop skills for text classification using keywords, SVMs, LSTMs, and other techniques.
As you advance, you will also see how to extract information from text, implement unsupervised and supervised techniques for topic modeling, and perform topic modeling of short texts, such as tweets. Additionally, the book covers visualizations of text data.
Finally, this book introduces Transformer-based models and how to use them to perform a new set of NLP tasks. These deep neural network models, built on the encoder-decoder architecture, have been trained on large text corpora and have matched or exceeded the state of the art on a variety of NLP tasks. Especially novel are the decoder-based generative models, which can generate text based on the context provided to them, and some of which have reasoning capabilities built in. These models will take NLP into its next era and make it a part of mainstream technology applications.
By the end of this NLP book, you will have developed the skills to use a powerful set of tools for text processing.