Introduction
Neural networks are the building blocks of all deep learning models. In a traditional feedforward neural network, the inputs and outputs are treated as independent of one another. There are many problems, however, where a given output depends on previous outputs of the system. Consider the stock price of a company as an example – the price at the close of any given day is related to the price on the previous day. Similarly, in Natural Language Processing (NLP), the later words in a sentence depend on the words that precede them. A special type of neural network, called a Recurrent Neural Network (RNN), is used to solve problems where the network needs to remember information from previous steps. This chapter introduces and explores the concepts and applications of RNNs, and explains how they differ from standard feedforward neural networks. You will also gain an understanding of the vanishing gradient problem and of the Long Short-Term Memory (LSTM) network. This chapter also...
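To make the idea of "remembering previous steps" concrete, here is a minimal sketch of a vanilla RNN cell in NumPy. All names and sizes below are illustrative assumptions, not taken from the chapter: a single hidden state vector h is updated at every time step, so the state after step t depends on all the inputs that came before it.

```python
import numpy as np

# Illustrative sizes (assumptions, not from the chapter).
input_size, hidden_size, seq_len = 3, 4, 5

rng = np.random.default_rng(0)
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden (the recurrence)
b_h = np.zeros(hidden_size)

xs = rng.normal(size=(seq_len, input_size))  # a toy input sequence
h = np.zeros(hidden_size)                    # initial hidden state

hidden_states = []
for x_t in xs:
    # The same weights are reused at every step, and h feeds back into
    # itself -- this is what lets the network carry information forward.
    h = np.tanh(W_xh @ x_t + W_hh @ h + b_h)
    hidden_states.append(h)

hidden_states = np.stack(hidden_states)
print(hidden_states.shape)  # one hidden state per time step
```

Note that a feedforward network has no counterpart to `W_hh`: removing the `W_hh @ h` term reduces each step to an independent computation on `x_t`, which is exactly the independence assumption the chapter describes.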