In this section, we will look at a new neural network topology that is gaining momentum in the area of NLP.
Schuster and Paliwal introduced Bidirectional Recurrent Neural Networks (BRNNs) in 1997. BRNNs increase the amount of input information available to the network. Multilayer perceptrons (MLPs) and time delay neural networks (TDNNs) are limited in their input data flexibility, as they require their input data to be fixed. Standard RNNs are more flexible, but they also have a restriction: the future input information cannot be reached from the current state. BRNNs, on the contrary, do not require their input data to be fixed, and their future input information is reachable from the current state. The idea of BRNNs is to connect two hidden layers of opposite directions to the same output. With this structure, the output layer receives information from both past and future states simultaneously.
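As a minimal sketch of this idea, the following Keras snippet wraps a simple RNN in a Bidirectional layer, so one copy of the RNN reads the sequence forward and another reads it backward, and both feed the same output layer. It assumes TensorFlow 2.x is installed; the vocabulary size, layer widths, and the binary classification head are illustrative choices, not taken from the text.

```python
# A minimal BRNN sketch in Keras (assumes TensorFlow 2.x).
# Vocabulary size and layer widths are illustrative placeholders.
import tensorflow as tf

model = tf.keras.Sequential([
    # Map token ids (illustrative vocabulary of 10,000) to dense vectors.
    tf.keras.layers.Embedding(input_dim=10_000, output_dim=64),
    # The Bidirectional wrapper runs one RNN forward and one backward
    # over the sequence and concatenates their final hidden states,
    # connecting the two opposite-direction hidden layers to one output.
    tf.keras.layers.Bidirectional(tf.keras.layers.SimpleRNN(32)),
    # A single output layer that sees both past and future context.
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```

Note that the concatenated hidden state is twice the width of each directional RNN (64 here), since both directions contribute to what the output layer sees.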