Word2Vec is one of the most widely used embedding techniques in NLP. The model learns a real-valued vector for each word in the input text by looking at the contexts in which that word appears. Words that occur in similar contexts end up with similar vectors, so the model places them close to each other in the embedding space.
From the statements in the following diagram, the model will learn that the words love and adore share very similar contexts and should therefore be placed very close to each other in the resulting vector space. The context of like is somewhat similar to that of love as well, but like won't end up as close to love as adore does:
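To make this intuition concrete, here is a minimal sketch using gensim's Word2Vec implementation (an assumption; any Word2Vec implementation behaves similarly). The toy corpus and all hyperparameter values below are illustrative, and on a corpus this small the exact similarity scores will be noisy, but the mechanics are the same as on a full corpus: love and adore appear in near-identical contexts, so we would expect their vectors to be more similar to each other than either is to like.

```python
from gensim.models import Word2Vec

# Hypothetical toy corpus: "love" and "adore" share near-identical contexts,
# while "like" appears in a slightly different one.
sentences = [
    ["i", "love", "this", "movie"],
    ["i", "adore", "this", "movie"],
    ["i", "love", "that", "song"],
    ["i", "adore", "that", "song"],
    ["i", "like", "the", "weather"],
]

# vector_size, window, and epochs are illustrative choices, not prescriptions;
# sg=1 selects the skip-gram variant of Word2Vec.
model = Word2Vec(
    sentences,
    vector_size=50,
    window=2,
    min_count=1,
    sg=1,
    epochs=200,
    seed=1,
)

# Cosine similarity between the learned vectors; with enough training data we
# would expect similarity(love, adore) > similarity(love, like).
print(model.wv.similarity("love", "adore"))
print(model.wv.similarity("love", "like"))
```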
The Word2Vec model also relies on semantic features of the input sentences; for example, the two words adore...