In this section, we will use convolutional and LSTM layers in the same network. This convolutional recurrent network architecture can be captured in a simple flowchart:
Here, we can see that the flowchart contains embedding, 1D convolutional, 1D max-pooling, LSTM, and dense layers. Note that the embedding layer is always the first layer in the network and is commonly used for applications involving text data. The main purpose of the embedding layer is to map each unique word in the vocabulary (500 unique words in our example) to a dense vector of smaller size, which we will specify using output_dim. In the convolutional layer, we will use the relu activation function. Similarly, the activation functions that will be used for the LSTM and dense layers will be tanh and softmax, respectively.
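For concreteness, here is a minimal sketch of this architecture in Keras (Python); the specific values, such as a padded sequence length of 100, an output_dim of 32, and three output classes, are illustrative assumptions rather than the exact settings used in this example:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Input, Embedding, Conv1D,
                                     MaxPooling1D, LSTM, Dense)

model = Sequential([
    # Input sequences padded to length 100 (an assumption for this sketch)
    Input(shape=(100,)),
    # Map each of the 500 unique word indices to a 32-dimensional vector
    Embedding(input_dim=500, output_dim=32),
    # 1D convolution over the sequence of word vectors, with relu activation
    Conv1D(filters=32, kernel_size=3, activation='relu'),
    # Downsample the feature maps with 1D max pooling
    MaxPooling1D(pool_size=2),
    # Recurrent layer with tanh activation (the Keras default for LSTM)
    LSTM(units=32, activation='tanh'),
    # Dense output layer with softmax, e.g. for 3 classes (an assumption)
    Dense(units=3, activation='softmax'),
])
model.summary()
```

One benefit of this ordering is that the convolutional and max-pooling layers shorten the sequence before it reaches the LSTM, which reduces the number of recurrent steps and thus the training time compared with feeding the raw embedded sequence directly into the recurrent layer.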
We can use the following...