Recurrent networks
All the neural network models that we analyzed in the previous chapter share a common feature: once the training process is completed, the weights are frozen and the output depends only on the current input sample. Clearly, this is the expected behavior of a classifier, but there are many scenarios where a prediction must take into account the history of the input values. A time series is a classic example (review Chapter 10, Introduction to Time-Series Analysis, for further details). Let's suppose that we need to predict the temperature for the next week. If we try to use only the last known value x(t) and an MLP trained to predict x(t + 1), it's impossible to take into account the temporal context, such as the current season, the position within that season, and how past seasons have evolved over the years.
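To make this limitation concrete, here is a minimal sketch on synthetic data (a sine wave standing in for a seasonal temperature series is an assumption, not data from the book): the same current value x(t) occurs on both the rising and the falling phase of the cycle, so any memoryless mapping f(x(t)) → x(t + 1) is forced to produce a single output for inputs that actually have different futures.

```python
import numpy as np

# Synthetic periodic "temperature" series (assumed example data).
t = np.arange(0.0, 4.0 * np.pi, 0.1)
x = np.sin(t)

v = x[:-1]    # current values x(t)
nxt = x[1:]   # next values x(t + 1)

# Collect all successors of (approximately) the same current value.
mask = np.isclose(v, 0.5, atol=0.05)
successors = nxt[mask]

# The same input x(t) ~ 0.5 is followed both by larger values (rising
# phase) and by smaller values (falling phase), so no function of x(t)
# alone can predict x(t + 1) for both cases.
print(np.round(successors, 3))
```

Feeding the model a window of past values, or giving it an internal state as recurrent networks do, is what resolves this ambiguity: the phase of the cycle becomes recoverable from the history.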
The regressor will learn to associate each input with the output that minimizes the average error but, in real-life situations, this isn't enough. The only reasonable...