In the previous chapters, we covered basic NLP steps, such as tokenization, stoplist removal, and feature creation, by building a Term Frequency-Inverse Document Frequency (TF-IDF) matrix, which we then used for a supervised learning task: predicting the sentiment of movie reviews. In this chapter, we are going to extend our previous example to include the amazing power of word vectors, popularized by the Google researchers Tomas Mikolov, Ilya Sutskever, and their colleagues in their paper, Distributed Representations of Words and Phrases and their Compositionality.
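As a quick refresher, here is a minimal sketch of that earlier pipeline, using scikit-learn purely for illustration (the vectorizer, classifier, and example reviews are assumptions for this sketch, not the code used in the previous chapters): it tokenizes the text, removes English stop words, builds a TF-IDF matrix, and fits a sentiment classifier on top of it.

```python
# Illustrative sketch only: tokenization, stop-word removal, and TF-IDF
# feature creation feeding a supervised sentiment classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical movie reviews with sentiment labels (1 = positive, 0 = negative)
reviews = [
    "a gripping, beautifully acted film",
    "dull plot and wooden performances",
]
labels = [1, 0]

# TfidfVectorizer tokenizes, drops English stop words, and builds the
# TF-IDF matrix in a single step
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(reviews)

# Fit a simple classifier on the TF-IDF features and score a new review
clf = LogisticRegression().fit(X, labels)
print(clf.predict(vectorizer.transform(["a wonderful film"])))
```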
We will start with a brief overview of the motivation behind word vectors, drawing on our understanding of the previous NLP feature extraction techniques, and we'll then explain the concept behind the family of algorithms that make up the word2vec framework (indeed, word2vec is a family of related algorithms rather than a single algorithm).