Summary
I hope this chapter has provided systematic guidance for understanding Word2Vec. We started with the distributional hypothesis, which states that semantically similar words tend to appear in similar contexts and with similar distributions. Word2Vec can be viewed as a quantification of this hypothesis: it captures the similarity of words and concepts in vector form, and because vectors come with a notion of distance, Word2Vec lets us measure how similar two words or concepts are.
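As a minimal sketch of this distance-based view of similarity, the snippet below uses the gensim library and its downloadable Google News Word2Vec vectors; this setup is an assumption for illustration and may differ from the chapter's own tooling:

```python
import gensim.downloader as api

# Download/load the pretrained Google News Word2Vec vectors
# (large download on first use; the model name is gensim's, but
# using it here is an assumption, not the chapter's exact setup).
model = api.load("word2vec-google-news-300")

# Cosine similarity between word vectors: semantically related
# words score noticeably higher than unrelated ones.
print(model.similarity("coffee", "tea"))    # relatively high
print(model.similarity("coffee", "piano"))  # relatively low
```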
We also learned the advantages of Word2Vec over Bag of Words (BOW) and Term Frequency-Inverse Document Frequency (TF-IDF): Word2Vec can capture compositional relationships between words, and it reduces dimensionality by representing the high-dimensional space of words as lower-dimensional word vectors. We also saw that Word2Vec has been applied in many real-world recommendation systems.
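One hallmark of these compositional relationships is vector arithmetic over word embeddings. As a sketch under the same assumed gensim setup as above, the classic king - man + woman ≈ queen analogy can be checked with KeyedVectors.most_similar:

```python
import gensim.downloader as api

# Load the pretrained vectors (assumed setup, as above).
model = api.load("word2vec-google-news-300")

# vector("king") - vector("man") + vector("woman") should land
# near vector("queen") in the embedding space.
result = model.most_similar(positive=["king", "woman"],
                            negative=["man"], topn=1)
print(result)  # typically [('queen', <similarity score>)]
```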
We learned how to use a pretrained Word2Vec model and...
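For reference, a pretrained Word2Vec model saved in the standard word2vec format can also be loaded from a local file. The sketch below again assumes gensim, and the file path is hypothetical:

```python
from gensim.models import KeyedVectors

# "GoogleNews-vectors-negative300.bin" is a hypothetical local
# path to pretrained vectors in the binary word2vec format.
vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

# Query the nearest neighbors of a word in the embedding space.
print(vectors.most_similar("python", topn=3))
```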