A word embedding is a learned representation in which each word is mapped to a vector in an n-dimensional space. Words with similar meanings should have similar representations, and these representations can also help in identifying synonyms, antonyms, and various other relationships between words. Although we have described embeddings for individual words, the same idea extends to embeddings for sentences, documents, characters, and so on. Word2vec captures relationships in text, so similar words end up with similar representations. Let's try to understand what kind of semantic information Word2vec can actually encapsulate.
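As a quick illustration of the "similar words, similar vectors" idea, here is a minimal sketch that loads pretrained Word2vec vectors through gensim's downloader API and compares a few word pairs by cosine similarity. The model name and the specific word pairs are illustrative choices, not part of the discussion above, and the pretrained vectors are a large download.

```python
# A minimal sketch, assuming gensim is installed and the pretrained
# "word2vec-google-news-300" vectors can be downloaded (~1.6 GB).
import gensim.downloader as api

# Load pretrained Word2vec vectors (each word maps to a 300-dimensional vector).
model = api.load("word2vec-google-news-300")

# Words with similar meanings have similar vectors, so their cosine
# similarity is high; unrelated words score much lower.
print(model.similarity("car", "automobile"))   # relatively high
print(model.similarity("car", "banana"))       # relatively low

# Nearest neighbours in the embedding space are often synonyms or
# otherwise closely related terms.
print(model.most_similar("happy", topn=5))
```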
We will look at a few examples to understand the relationships and analogies a Word2vec model can capture. A frequently used example involves the embeddings of King, Man, Queen, and Woman. Once a Word2vec model is built properly and the embedding from it is...