The Embedding Layer
In Chapter 4, Deep Learning for Text – Embeddings, we discussed that we can't feed raw text directly into a neural network and therefore need a good numerical representation. We saw that embeddings (low-dimensional, dense vectors) are a great way of representing text. To feed these embeddings into the neural network's layers, we use an embedding layer.
The functionality of the embedding layer is two-fold:
- For any input term, perform a lookup and return its word embedding/vector
- During training, learn these word embeddings
The part about looking up is straightforward – the word embeddings are stored as a matrix of dimensionality V × D, where V is the vocabulary size (the number of unique terms considered) and D is the length/dimensionality of each vector. The following figure illustrates the embedding layer. The input term, "life", is passed to the embedding layer, which performs a lookup and returns...
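As a minimal sketch of the lookup step, the following NumPy snippet stores an embedding matrix of shape V × D and maps a term to its row via a vocabulary index. The toy vocabulary, the `<unk>` fallback token, and the random (untrained) weights are assumptions for illustration; in a real network the matrix entries would be learned during training.

```python
import numpy as np

# Hypothetical toy vocabulary: V = 5 unique terms, each mapped to a row index.
vocab = {"life": 0, "is": 1, "beautiful": 2, "short": 3, "<unk>": 4}
V, D = len(vocab), 4  # D = dimensionality of each word vector

rng = np.random.default_rng(0)
# The V x D embedding matrix: one dense D-dimensional vector per term.
# Here the weights are random; during training they would be learned.
embedding_matrix = rng.normal(size=(V, D))

def embed(term: str) -> np.ndarray:
    """Look up a term's embedding, falling back to <unk> for unseen words."""
    index = vocab.get(term, vocab["<unk>"])
    return embedding_matrix[index]

vector = embed("life")
print(vector.shape)  # one D-dimensional vector: (4,)
```

Note that the lookup is just a row-indexing operation, which is why frameworks implement the embedding layer as a trainable matrix indexed by integer token IDs rather than as a matrix multiplication with one-hot vectors.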