Word2vec was developed by Tomas Mikolov et al. at Google in 2013 as a more computationally efficient way to train neural-network-based word embeddings, and it has since become the de facto standard for developing pretrained word embeddings.
Word2vec introduced two different model architectures for learning word embeddings:
- Continuous Bag-of-Words (CBOW): Learns the embedding by predicting the current word from its surrounding context words.
- Continuous Skip-Gram: Learns the embedding by predicting the surrounding context words given the current word (a minimal sketch contrasting the two follows this list).
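To make the contrast concrete, here is a minimal sketch using the gensim library (an assumption on my part; the original does not name an implementation), which selects between the two architectures with a single `sg` flag:

```python
# Minimal sketch using gensim (assumed library; not specified in the original).
# sg=0 selects CBOW, sg=1 selects continuous Skip-Gram.
from gensim.models import Word2Vec

sentences = [
    ["the", "quick", "brown", "fox", "jumps", "over", "the", "lazy", "dog"],
    ["never", "jump", "over", "the", "lazy", "dog", "quickly"],
]

# CBOW: predict the current word from its surrounding context words.
cbow = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=0)

# Continuous Skip-Gram: predict the surrounding words from the current word.
skipgram = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

print(cbow.wv["fox"].shape)                      # (50,)
print(skipgram.wv.most_similar("dog", topn=3))   # nearest neighbors by cosine similarity
```

Both calls produce a vector per vocabulary word; only the training objective differs.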
Both the CBOW and Skip-Gram approaches learn word representations from a word's local usage context, where that context is defined by a window of neighboring words. The window size is a configurable hyperparameter of the model.
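To show how the window defines the context, the following sketch (plain Python, with a hypothetical `context_pairs` helper of my own naming) enumerates the (context, target) pairs that a window of size 2 would produce:

```python
# Hypothetical helper: enumerate (context_words, target_word) pairs for a
# given window size, as consumed during CBOW-style training.
def context_pairs(tokens, window=2):
    pairs = []
    for i, target in enumerate(tokens):
        left = tokens[max(0, i - window):i]      # up to `window` words before
        right = tokens[i + 1:i + 1 + window]     # up to `window` words after
        pairs.append((left + right, target))
    return pairs

tokens = ["the", "quick", "brown", "fox", "jumps"]
for context, target in context_pairs(tokens, window=2):
    print(context, "->", target)
# e.g. ['the', 'quick', 'fox', 'jumps'] -> brown
```

A larger window captures broader topical similarity, while a smaller window emphasizes more syntactic, nearby relationships.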