Word embedding is an NLP technique for representing words and documents with dense vectors, in contrast to bag-of-words techniques, which use large, sparse vectors. Embeddings are a class of NLP methods that aim to project the semantic meaning of words into a geometric space. This is accomplished by associating a numeric vector with each word in a dictionary so that the distance between any two vectors captures part of the semantic relationship between the two associated words. The geometric space formed by these vectors is called an embedding space.
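As a minimal sketch of this idea, the snippet below uses made-up 4-dimensional vectors (real embeddings are learned from large corpora and typically have 50 to 300 dimensions) and measures closeness in the embedding space with cosine similarity:

```python
import numpy as np

# Toy embedding vectors, invented purely for illustration.
embeddings = {
    "king":  np.array([0.80, 0.65, 0.10, 0.05]),
    "queen": np.array([0.78, 0.70, 0.12, 0.08]),
    "apple": np.array([0.05, 0.10, 0.90, 0.70]),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; 1.0 means the same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high: related words
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low: unrelated words
```

Because distances in the space reflect semantic relatedness, related words such as "king" and "queen" end up closer together than unrelated pairs.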
The two most popular techniques for learning word embeddings are Global Vectors for Word Representation (GloVe) and word-to-vector representation (Word2vec).
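The sketch below shows one way to train Word2vec embeddings using the gensim library; the choice of gensim and the tiny three-sentence corpus are assumptions made only for illustration, since real training requires far more text.

```python
from gensim.models import Word2Vec

# A toy corpus of pre-tokenized sentences, used only to demonstrate the API.
corpus = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["the", "apple", "fell", "from", "the", "tree"],
]

# sg=1 selects the skip-gram variant; sg=0 would use CBOW instead.
model = Word2Vec(corpus, vector_size=50, window=2, min_count=1, sg=1, epochs=100)

print(model.wv["king"][:5])                   # first 5 dimensions of the learned vector
print(model.wv.similarity("king", "queen"))   # cosine similarity between two words
```

GloVe, by contrast, is trained from global word co-occurrence counts rather than local context windows, but the resulting vectors are used in the same way.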
In the following sections, we will process sample documents through a neural network with and without an embedding layer...