Introducing recurrent neural networks and LSTMs
When we work with deep learning models, the first thing we typically do, after exploring our data, is choose a network architecture. In Keras and KNIME, we do this by designing, layer by layer, the shape of our network – perhaps starting with an input layer, followed by a few dense layers, and ending with an output layer. When a network architecture also takes in information from previous execution states, it is known as a recurrent architecture.
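To make the layer-by-layer idea concrete, here is a minimal sketch of such a feedforward architecture in Keras, assuming the TensorFlow implementation of Keras is installed; the layer sizes and activations are arbitrary placeholders, not values from a real model:

```python
# A minimal sketch of a layer-by-layer Sequential architecture in Keras.
# The sizes (10 input features, 32 and 16 hidden units, 1 output) are
# illustrative placeholders only.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Input

model = Sequential([
    Input(shape=(10,)),            # input layer: 10 features per sample
    Dense(32, activation="relu"),  # first hidden dense layer
    Dense(16, activation="relu"),  # second hidden dense layer
    Dense(1),                      # output layer: a single predicted value
])
model.compile(optimizer="adam", loss="mse")
```

A purely feedforward stack like this has no memory: each prediction depends only on the current input, which is exactly the limitation that recurrent architectures address.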
In this section, we will recap RNNs, discuss how LSTM units build on them, and show you how to use them to build a forecasting model for demand prediction.
Recapping recurrent neural networks
As you may recall from the previous chapter, as shown in the following diagram, RNNs build on top of simple feedforward architectures by using the previous execution’s output as an additional input. Beyond that additional input, the interior of the standard recurrent neural unit operates...