Thinking about backpropagation and RNNs
As you remember from Chapter 8, Recurrent Neural Networks, the basic equations for an RNN are s_t = tanh(U x_t + W s_{t-1}) and ŷ_t = softmax(V s_t), where ŷ_t is the prediction at step t, y_t is the correct value, and the error E is the cross-entropy. Here, U, V, and W are the learned parameters of the RNN's equations. These equations can be visualized as in Figure 16, where we unroll the recurrence. The core idea is that the total error is just the sum of the errors at each time step.
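The unrolled forward pass can be sketched directly from these equations. This is a minimal illustration, not the book's implementation; the function name and the dimension conventions (U maps inputs to the hidden state, V maps the hidden state to the output) are assumptions for the sake of the example.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a single vector
    e = np.exp(z - z.max())
    return e / e.sum()

def rnn_forward(x, U, V, W):
    """Sketch of the unrolled RNN: s_t = tanh(U x_t + W s_{t-1}),
    y_hat_t = softmax(V s_t). Names are illustrative, not the book's code."""
    T = len(x)
    hidden_size = U.shape[0]
    s = np.zeros((T + 1, hidden_size))  # s[-1] serves as the zero initial state
    y_hat = []
    for t in range(T):
        s[t] = np.tanh(U @ x[t] + W @ s[t - 1])
        y_hat.append(softmax(V @ s[t]))
    return s, np.array(y_hat)
```

Each output row is a probability distribution over the vocabulary, which is what the cross-entropy error E is computed against.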
If we use SGD, we need to sum the errors and the gradients at each time step for one given training example: E = Σ_t E_t(y_t, ŷ_t), and likewise for each parameter, for example ∂E/∂W = Σ_t ∂E_t/∂W.
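The summation of per-step errors can be made concrete with a small numeric example. This is an illustrative sketch (the function name and the toy values are assumptions): the rows of y are one-hot targets, the rows of y_hat are the per-step predicted distributions, and E is just the sum of the per-step cross-entropies.

```python
import numpy as np

def total_error(y_hat, y):
    # y_hat, y: arrays of shape (T, vocab_size); rows of y are one-hot targets.
    per_step = -np.sum(y * np.log(y_hat), axis=1)  # E_t for each time step t
    return per_step.sum()                          # E = sum_t E_t

# Toy example: two time steps, three classes (values are illustrative)
y = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
y_hat = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
E = total_error(y_hat, y)  # -log(0.7) - log(0.8)
```

The gradients accumulate the same way: whatever we derive for ∂E_t/∂W at one step is summed over all steps of the unrolled network.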
Figure 16: Recurrent neural network unrolled with equations
We are not going to write out all the tedious math behind all the gradients, but rather focus only on a few particular cases. For instance, with computations similar to the ones made in the previous chapters, it can be proven by using the chain rule that the gradient for V depends only on the values at the current time step, s_3, y_3, and ŷ_3:

∂E_3/∂V = ∂E_3/∂ŷ_3 ∂ŷ_3/∂V = (ŷ_3 − y_3) ⊗ s_3

where ⊗ denotes the outer product of the two vectors.
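This closed form can be verified numerically. The sketch below (the variable names and sizes are illustrative assumptions, not from the book) computes (ŷ_3 − y_3) ⊗ s_3 analytically and checks it against a central-difference approximation of ∂E_3/∂V, where E_3 = −y_3 · log softmax(V s_3).

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
V = rng.normal(size=(4, 3))         # illustrative sizes: 4 classes, hidden size 3
s3 = rng.normal(size=3)             # hidden state at step 3
y3 = np.array([0.0, 1.0, 0.0, 0.0]) # one-hot correct value at step 3

# Analytic gradient from the chain rule: (y_hat_3 - y_3) outer s_3
y_hat3 = softmax(V @ s3)
analytic = np.outer(y_hat3 - y3, s3)

# Central-difference numerical gradient of E_3 = -y_3 . log(softmax(V s_3))
eps = 1e-6
numeric = np.zeros_like(V)
for i in range(V.shape[0]):
    for j in range(V.shape[1]):
        Vp, Vm = V.copy(), V.copy()
        Vp[i, j] += eps
        Vm[i, j] -= eps
        Ep = -y3 @ np.log(softmax(Vp @ s3))
        Em = -y3 @ np.log(softmax(Vm @ s3))
        numeric[i, j] = (Ep - Em) / (2 * eps)
```

Note that neither s_2 nor any earlier state appears in this gradient, which is exactly why V is the easy case; the gradients for W and U do propagate back through earlier time steps.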
However...