Recurrent Neural Networks
The second type of neural network is the RNN. RNNs are widely used for sequential data, such as time series and natural language processing (NLP). The concept of an RNN dates back to the 1980s, but it was not until recently that it gained momentum in DL.
In traditional feedforward neural networks such as CNNs, a node processes only the current input and does not memorize the previous inputs. Therefore, such networks cannot handle time series data, which depends on previous inputs. For example, to predict the next word of a sentence, the previous words are required for the inference. RNNs solve this issue by introducing a hidden state, which remembers some information about the sequence seen so far.
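Concretely, a common formulation of the vanilla RNN (the weight and bias names here are the standard textbook notation, not taken from this text) updates the hidden state at each time step from the previous hidden state and the current input:

\[
h_t = \tanh(W_{xh} x_t + W_{hh} h_{t-1} + b_h), \qquad y_t = W_{hy} h_t + b_y
\]

Because $h_t$ depends on $h_{t-1}$, information from earlier inputs can influence later predictions.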
Different from feedforward networks, RNNs are a type of neural network in which the output (hidden state) from the previous step is fed back as an input to the current step; this loop structure, which carries information forward, allows the network to take a sequence as input. As shown in Figure...
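The following is a minimal sketch of this loop in Python with NumPy (the function name, dimensions, and weight names are illustrative assumptions, not from the text): the same cell is applied at every time step, and the hidden state is passed forward from one step to the next.

```python
import numpy as np

def rnn_forward(xs, W_xh, W_hh, W_hy, b_h, b_y):
    """Unroll a vanilla RNN over a sequence of input vectors xs."""
    h = np.zeros(W_hh.shape[0])           # initial hidden state
    outputs = []
    for x in xs:                          # one step per element of the sequence
        # The hidden state mixes the current input with the previous state,
        # so information from earlier steps can influence later outputs.
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
        outputs.append(W_hy @ h + b_y)    # per-step output
    return outputs, h

# Toy usage (illustrative sizes): a sequence of 5 inputs of size 3,
# hidden size 4, output size 2.
rng = np.random.default_rng(0)
xs = [rng.normal(size=3) for _ in range(5)]
W_xh = rng.normal(size=(4, 3))
W_hh = rng.normal(size=(4, 4))
W_hy = rng.normal(size=(2, 4))
b_h, b_y = np.zeros(4), np.zeros(2)
outputs, h_final = rnn_forward(xs, W_xh, W_hh, W_hy, b_h, b_y)
```

Note that the weights are shared across all time steps; only the hidden state changes, which is what lets an RNN handle sequences of arbitrary length.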