Working with Skip-Gram embeddings
In the previous recipes, we decided on our textual embeddings before training the model. With neural networks, we can make the embedding values part of the training procedure. The first such method we will explore is called Skip-Gram embedding.
Getting ready
Up to this point, we have not considered the order of words to be relevant when creating word embeddings. In early 2013, Tomas Mikolov and other researchers at Google published a paper describing a method for creating word embeddings that addresses this issue (https://arxiv.org/abs/1301.3781); they named their method word2vec.
The basic idea is to create word embeddings that capture the relational aspect of words. We seek to understand how various words are related to one another.
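To make this concrete, the Skip-Gram formulation trains a model to predict the words that surround a given target word. The following is a minimal, hypothetical sketch (not the code used later in this recipe) of how (target, context) training pairs can be generated from a tokenized sentence; the function name, window size, and example sentence are illustrative assumptions:

def skipgram_pairs(tokens, window_size=2):
    # For each target word, every other word inside the surrounding
    # window becomes a context word that the model learns to predict.
    pairs = []
    for i, target in enumerate(tokens):
        start = max(0, i - window_size)
        end = min(len(tokens), i + window_size + 1)
        for j in range(start, end):
            if j != i:
                pairs.append((target, tokens[j]))  # (input word, context word)
    return pairs

sentence = "the quick brown fox jumps over the lazy dog".split()
print(skipgram_pairs(sentence)[:4])
# [('the', 'quick'), ('the', 'brown'), ('quick', 'the'), ('quick', 'brown')]

Pairs like these are fed to a shallow network, and the weights learned between the one-hot input layer and the hidden layer become the word embeddings once training is complete.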