Backpropagation through time (BPTT)
You have already learned that the primary task of RNNs is to classify sequential inputs. Backpropagation of error and gradient descent are the main tools used to accomplish this.
In a feedforward neural network, backpropagation moves backward from the final error output through the outputs, weights, and inputs of each hidden layer. Backpropagation assigns each weight its share of responsibility for the error by computing the partial derivative ∂E/∂w, where E denotes the error and w the respective weight. Each derivative is then scaled by the learning rate and subtracted from its weight, w ← w − η ∂E/∂w, so that gradient descent updates the weights to minimize the error.
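As a minimal illustration of this update rule (a sketch, not code from the text; the names w, x, target, and lr are assumptions), the following Python snippet performs gradient descent on a single weight with a squared error:

```python
# A minimal sketch of gradient descent on one weight w for a model y = w * x,
# with squared error E = 0.5 * (y - target)**2. Names are illustrative.

def gradient_step(w, x, target, lr=0.1):
    y = w * x                          # forward pass: prediction
    error = 0.5 * (y - target) ** 2    # error E
    dE_dw = (y - target) * x           # partial derivative dE/dw (chain rule)
    w = w - lr * dE_dw                 # update: w <- w - lr * dE/dw
    return w, error

w = 0.0
for step in range(20):
    w, error = gradient_step(w, x=2.0, target=4.0)
print(w, error)  # w approaches 2.0 and the error approaches 0
```

Each iteration moves the weight a small step against the gradient, which is exactly the update a feedforward network applies to every weight during training.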
However, an RNN does not use backpropagation directly; instead, it uses an extension of it termed backpropagation through time (BPTT). In this section, we will discuss BPTT to explain how training works for RNNs.
Error computation
The backpropagation through time (BPTT) learning algorithm is a natural extension of standard backpropagation: the recurrent network is unrolled through time into a feedforward network with one copy of the weights per time step, and the error gradient is propagated backward through the unrolled copies.
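To make the unrolling concrete, here is a hedged NumPy sketch of BPTT for a tiny vanilla RNN. The architecture (tanh hidden state, squared error on the final output) and all names and sizes are illustrative assumptions, not the book's code:

```python
import numpy as np

# A minimal BPTT sketch: unroll a vanilla RNN for T steps, then propagate
# the error gradient backward through the unrolled copies. Illustrative only.

rng = np.random.default_rng(0)
T, n_in, n_h = 5, 3, 4                        # sequence length, input size, hidden size
Wx = rng.normal(scale=0.1, size=(n_h, n_in))  # input-to-hidden weights
Wh = rng.normal(scale=0.1, size=(n_h, n_h))   # hidden-to-hidden (recurrent) weights
Wy = rng.normal(scale=0.1, size=(1, n_h))     # hidden-to-output weights

xs = rng.normal(size=(T, n_in))               # one input sequence
target = 1.0

# Forward pass: unroll over T time steps, storing every hidden state.
hs = [np.zeros(n_h)]                          # hs[0] is the initial state
for t in range(T):
    hs.append(np.tanh(Wx @ xs[t] + Wh @ hs[-1]))
y = float(Wy @ hs[-1])                        # prediction from the last state
E = 0.5 * (y - target) ** 2                   # error E

# Backward pass: carry dE/dh back through the unrolled time steps.
dWx, dWh = np.zeros_like(Wx), np.zeros_like(Wh)
dWy = (y - target) * hs[-1][None, :]          # dE/dWy
dh = (y - target) * Wy.ravel()                # dE/dh at the final time step
for t in reversed(range(T)):
    dpre = dh * (1.0 - hs[t + 1] ** 2)        # through tanh: dE/d(pre-activation)
    dWx += np.outer(dpre, xs[t])              # accumulate gradient for Wx
    dWh += np.outer(dpre, hs[t])              # accumulate gradient for Wh
    dh = Wh.T @ dpre                          # pass the gradient one step back

# One gradient descent update, exactly as in the feedforward case.
lr = 0.1
Wx -= lr * dWx; Wh -= lr * dWh; Wy -= lr * dWy
```

Note that the gradients for Wx and Wh are summed across all T time steps, because the same weight matrices are reused at every step of the unrolled network; this sharing is what distinguishes BPTT from ordinary backpropagation.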