
Training RNNs for Time Series Forecasting

  • 7 min read
  • 22 Nov 2017

*Note: This tutorial has been taken from the book Practical Time Series Analysis by Dr. PKS Prakash and Dr. Avishek Pal.*

RNNs are notoriously difficult to train. Vanilla RNNs suffer from vanishing and exploding gradients, which cause erratic results during training. As a result, RNNs have difficulty learning long-range dependencies. For time series forecasting, this makes it problematic to go too many timesteps back in the past. To address this problem, Long Short Term Memory (LSTM) and Gated Recurrent Unit (GRU), which are special types of RNNs, have been introduced. We will use LSTM and GRU to develop the time series forecasting models.

Before that, let's review how RNNs are trained using Backpropagation Through Time (BPTT), a variant of the backpropagation algorithm. We will find out how vanishing and exploding gradients arise during BPTT.

Let's consider the computational graph of the RNN that we have been using for time series forecasting. The gradient computation is shown in the following figure:

[Figure: Backpropagation Through Time for a deep recurrent neural network with p timesteps]

For weights U, there is one path to compute the partial derivative, which is given as follows:

[Equation: single-path partial derivative of the loss with respect to U]
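Since the original equation image is not reproduced, here is a minimal LaTeX sketch of that single-path gradient, assuming U denotes the weights mapping the final state sp to the prediction ŷ (the notation is assumed, not taken from the source):

```latex
\frac{\partial L}{\partial U}
  = \frac{\partial L}{\partial \hat{y}} \cdot \frac{\partial \hat{y}}{\partial U}
```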

However, due to the sequential structure of the RNN, there are multiple paths connecting the weights and the loss. Hence, the partial derivative is the sum of the partial derivatives along the individual paths that start at the loss node and end at every timestep node in the computational graph:

[Equations: partial derivatives of the loss with respect to the weights, summed over the per-timestep paths]
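As a hedged sketch of what these equations express, the summed gradient for the recurrent weights W of an RNN unrolled over p timesteps can be written in the standard BPTT form (the exact symbols in the book's equations may differ):

```latex
\frac{\partial L}{\partial W}
  = \sum_{i=1}^{p}
    \frac{\partial L}{\partial \hat{y}} \cdot
    \frac{\partial \hat{y}}{\partial s_{p}} \cdot
    \frac{\partial s_{p}}{\partial s_{i}} \cdot
    \frac{\partial s_{i}}{\partial W}
```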

The technique of computing the gradients for the weights by summing over paths connecting the loss node and every timestep node is backpropagation through time, which is a special case of the original backpropagation algorithm. The problem of vanishing gradients in long-range RNNs is due to the multiplicative terms in the BPTT gradient computations. Now let's examine a multiplicative term from one of the preceding equations.

The gradient along the computation path connecting the loss node and the ith timestep is:

[Equation: chain of gradients along the path from the loss node to the ith timestep]
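Expanding the state-to-state term as a product of one-step Jacobians gives the chain for the ith path; again, this is the standard BPTT form with assumed notation, not the book's exact equation:

```latex
\frac{\partial L}{\partial \hat{y}} \cdot
\frac{\partial \hat{y}}{\partial s_{p}} \cdot
\left( \prod_{j=i+1}^{p} \frac{\partial s_{j}}{\partial s_{j-1}} \right) \cdot
\frac{\partial s_{i}}{\partial W}
```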

This chain of gradient multiplications grows ominously long when modeling long-range dependencies, and this is where the problem of vanishing gradients arises.

The activation function for the internal state is either a tanh or a sigmoid. The first derivative of tanh is 1 − tanh²(x), which is bounded in (0, 1]. The first-order derivative of the sigmoid is σ(x)(1 − σ(x)), which is bounded in (0, ¼]. Hence, the gradients ∂si/∂s(i-1) are positive fractions. For long-range timesteps, multiplying these fractional gradients diminishes the final product to zero, and there is no gradient flow from a long-range timestep. Due to the negligibly low values of the gradients, the weights do not update and hence the neurons are said to be saturated.
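To make this concrete, here is a small NumPy sketch (illustrative only; it treats each state-to-state gradient as a scalar factor rather than a Jacobian matrix, and the factor ranges are made up):

```python
import numpy as np

np.random.seed(42)  # hypothetical seed, for reproducibility only
timesteps = 50

# Sigmoid derivatives lie in (0, 1/4]; simulate fractional gradient factors.
fractional_factors = np.random.uniform(0.05, 0.25, size=timesteps)
vanishing_product = np.prod(fractional_factors)

# If the factors exceed 1 instead, the product blows up (exploding gradients).
large_factors = np.random.uniform(1.1, 1.5, size=timesteps)
exploding_product = np.prod(large_factors)

print(f"Product of {timesteps} fractional gradients: {vanishing_product:.3e}")  # ~0
print(f"Product of {timesteps} factors > 1:          {exploding_product:.3e}")  # huge
```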


It is noteworthy that U, ∂si/∂s(i-1), and ∂si/∂W are matrices, and therefore the partial derivatives are computed on matrices. The final output is computed through matrix multiplications and additions. The first derivative on matrices is called the Jacobian. If any one element of the Jacobian matrix is a fraction, then for a long-range RNN, we would see a vanishing gradient. On the other hand, if an element of the Jacobian is greater than one, the training process suffers from exploding gradients.

Solving the long-range dependency problem

We have seen in the previous section that it is difficult for vanilla RNNs to effectively learn long-range dependencies due to vanishing and exploding gradients. To address this issue, the Long Short Term Memory (LSTM) network was developed in 1997 by Sepp Hochreiter and Jürgen Schmidhuber. The Gated Recurrent Unit (GRU), introduced in 2014, is a simpler version of LSTM. Let's review how LSTM and GRU solve the problem of learning long-range dependencies.

Long Short Term Memory (LSTM)

LSTM introduces additional computations in each timestep. However, it can still be treated as a black box unit that, for timestep t, returns a state (ht) and an internal memory (ct), which are forwarded to the next timestep. Internally, however, these vectors are computed differently. LSTM introduces three new gates: the input (it), forget (ft), and output (ot) gates. Every timestep also has an internal hidden state (gt) and an internal memory (ct). These new units are computed as follows:

[Equation: LSTM gates, internal hidden state, internal memory, and output state computations]
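Because the equation image cannot be reproduced here, the following is a sketch of the standard LSTM cell equations under the gate names used above, with W and U as assumed input and recurrent weight matrices and ∘ denoting elementwise multiplication (the book's exact notation may differ):

```latex
\begin{aligned}
i_t &= \sigma\!\left(W^{i} x_t + U^{i} h_{t-1}\right) \\
f_t &= \sigma\!\left(W^{f} x_t + U^{f} h_{t-1}\right) \\
o_t &= \sigma\!\left(W^{o} x_t + U^{o} h_{t-1}\right) \\
g_t &= \tanh\!\left(W^{g} x_t + U^{g} h_{t-1}\right) \\
c_t &= c_{t-1} \circ f_t + g_t \circ i_t \\
h_t &= \tanh(c_t) \circ o_t
\end{aligned}
```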

Now, let's understand the computations. The gates it, ft, and ot are generated through sigmoid activations, which limit their values to fractions between 0 and 1. Hence, these act as gates by letting out only a fraction of the value when multiplied by another variable. The input gate it controls the fraction of the newly computed input to keep. The forget gate ft determines the effect of the previous timestep, and the output gate ot controls how much of the internal state to let out. The internal hidden state gt is calculated from the input to the current timestep and the output of the previous timestep. Note that this is the same as computing the internal state in a vanilla RNN. ct is the internal memory unit of the current timestep; it takes the memory from the previous step, downsized by a fraction through the forget gate ft, and the effect of the internal hidden state, mixed in through the input gate it. Finally, ht is passed to the next timestep and is computed from the current internal memory and the output gate ot. The input, forget, and output gates are used to selectively include the previous memory and the current hidden state, which is computed in the same manner as in vanilla RNNs. The gating mechanism of LSTM allows memory transfer over long-range timesteps.
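As an illustration (not the book's code), here is a minimal NumPy sketch of a single LSTM timestep following the standard equations sketched above; the weight shapes and random initialization are hypothetical:

```python
import numpy as np

def lstm_step(x_t, h_prev, c_prev, params):
    """One LSTM timestep: gates, internal hidden state, memory, and output."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    W, U = params["W"], params["U"]               # input and recurrent weights
    i = sigmoid(W["i"] @ x_t + U["i"] @ h_prev)   # input gate
    f = sigmoid(W["f"] @ x_t + U["f"] @ h_prev)   # forget gate
    o = sigmoid(W["o"] @ x_t + U["o"] @ h_prev)   # output gate
    g = np.tanh(W["g"] @ x_t + U["g"] @ h_prev)   # internal hidden state
    c = c_prev * f + g * i                        # internal memory
    h = np.tanh(c) * o                            # state passed to the next timestep
    return h, c

# Hypothetical sizes: 3 input features, 4 hidden units.
rng = np.random.default_rng(0)
params = {
    "W": {k: rng.normal(size=(4, 3)) for k in "ifog"},
    "U": {k: rng.normal(size=(4, 4)) for k in "ifog"},
}
h, c = lstm_step(rng.normal(size=3), np.zeros(4), np.zeros(4), params)
```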

Gated Recurrent Units (GRU)

GRU is simpler than LSTM and has only two internal gates, namely, the update gate (zt) and the reset gate (rt). The computations of the update and reset gates are as follows:

zt = σ(Wzxt + Uzst-1)

rt = σ(Wrxt + Urst-1)

The state st of timestep t is computed using the input xt, the state st-1 from the previous timestep, the update gate, and the reset gate:

[Equation: GRU candidate state and state update]
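As a sketch of the standard GRU state computation, using the same W/U style as the zt and rt equations above, with ∘ for elementwise multiplication and ht as the candidate state (notation assumed, not taken from the source):

```latex
\begin{aligned}
h_t &= \tanh\!\left(W^{h} x_t + U^{h}\,(r_t \circ s_{t-1})\right) \\
s_t &= (1 - z_t) \circ h_t + z_t \circ s_{t-1}
\end{aligned}
```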

The update gate, being computed by a sigmoid function, determines how much of the previous step's memory is to be retained in the current timestep. The reset gate controls how to combine the previous memory with the current step's input.

Compared to LSTM, which has three gates, GRU has two. It does not have the output gate and the internal memory, both of which are present in LSTM. The update gate in GRU determines how to combine the previous memory with the current memory and combines the functionality achieved by the input and forget gates of LSTM. The reset gate, which combines the effect of the previous memory and the current input, is directly applied to the previous memory. Despite a few differences in how memory is transmitted along the sequence, the gating mechanisms in both LSTM and GRU are meant to learn long-range dependencies in data.

So, which one do we use - LSTM or GRU?

Both LSTM and GRU are capable of handling memory over long-range timesteps. However, a common question is which one to use. LSTM has long been preferred as the first choice for language models, which is evident from its extensive use in language translation, text generation, and sentiment classification. GRU has the distinct advantage of fewer trainable weights as compared to LSTM. It has been applied to tasks where LSTM previously dominated. However, empirical studies show that neither approach outperforms the other in all tasks. Tuning model hyperparameters, such as the dimensionality of the hidden units, improves the predictions of both. A common rule of thumb is to use GRU when less training data is available, as it requires fewer trainable weights. LSTM proves to be effective for large datasets, such as the ones used to develop language translation models.
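To show how little changes in practice, here is a hedged Keras sketch in which the recurrent cell is passed as a constructor argument; the import path, layer sizes, window length, and feature count are assumptions for illustration, not taken from the book:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, GRU, Dense

def make_model(cell, timesteps=12, n_features=1, units=32):
    """Build a simple one-step-ahead forecaster with the given recurrent cell."""
    model = Sequential([
        cell(units, input_shape=(timesteps, n_features)),
        Dense(1),  # one-step-ahead forecast
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

lstm_model = make_model(LSTM)  # more parameters; often preferred with large datasets
gru_model = make_model(GRU)    # fewer trainable weights; handy with less training data
```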

This article is an excerpt from the book Practical Time Series Analysis. For more techniques, grab the book now!