Deep learning text classification model architectures commonly consist of the following three components connected in sequence:
- Embedding layer
- Deep representation component
- Fully connected part
We will discuss each of them in the following topics.
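Before looking at each component in detail, the overall flow can be sketched in plain NumPy. This is a minimal, hypothetical illustration: a mean over word vectors stands in for the deep representation component (which in practice would be an RNN or CNN encoder), and all sizes are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, for illustration only.
vocab_size, embed_dim, num_classes = 1000, 16, 2

# 1) Embedding layer: a lookup table mapping word IDs to dense vectors.
embedding = rng.normal(size=(vocab_size, embed_dim))

# 2) Deep representation component: a simple mean over the sequence
#    stands in here for a real encoder such as an RNN or CNN.
def encode(vectors):
    return vectors.mean(axis=0)

# 3) Fully connected part: a single dense layer producing class scores.
W = rng.normal(size=(embed_dim, num_classes))
b = np.zeros(num_classes)

word_ids = np.array([4, 27, 311])   # a toy input sequence of word IDs
vectors = embedding[word_ids]       # shape (3, embed_dim)
logits = encode(vectors) @ W + b    # shape (num_classes,)
```

The three stages connect in sequence: IDs become vectors, vectors become a fixed-size representation, and the fully connected part turns that representation into class scores.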
Given a sequence of word IDs as input, the embedding layer transforms each ID into a dense word vector. These word vectors capture the semantics of the words, as we have seen in Chapter 3, Semantic Embedding using Shallow Models. In deep learning frameworks such as TensorFlow, this part is usually handled by an embedding lookup layer, which stores a lookup table that maps each word ID to its dense vector representation.
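In TensorFlow, this lookup is provided by `tf.keras.layers.Embedding`. The sketch below uses made-up sizes (a vocabulary of 1,000 words and 16-dimensional vectors) purely to show the shape transformation: a batch of ID sequences comes out as a batch of vector sequences.

```python
import tensorflow as tf

# Hypothetical vocabulary size and embedding dimension.
embedding_layer = tf.keras.layers.Embedding(input_dim=1000, output_dim=16)

# One sequence of three word IDs, shaped (batch_size=1, seq_len=3).
word_ids = tf.constant([[4, 27, 311]])

# The layer looks up each ID in its table of dense vectors,
# producing a tensor of shape (batch_size, seq_len, embedding_dim).
word_vectors = embedding_layer(word_ids)
print(word_vectors.shape)  # (1, 3, 16)
```

The lookup table itself is a trainable weight matrix, so the word vectors are refined by backpropagation along with the rest of the model.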