The basic idea behind recurrent neural networks is the vectorization of data. If you look at the figure Fixed sized inputs of neural networks, which represents a traditional neural network, each node in the network accepts a scalar value and produces another scalar. An equivalent way to view this architecture is to treat each layer as a unit that accepts a vector as its input and produces another vector as its output. The figures Neural network horizontally rolled up and Neural network vertically rolled up illustrate this representation:
![](https://static.packt-cdn.com/products/9781785880360/graphics/assets/94f52fb4-011f-4f5e-8634-d10de1cc115c.png)
Neural network horizontally rolled up
![](https://static.packt-cdn.com/products/9781785880360/graphics/assets/9eec1205-b37a-4a22-8bf9-366e5591dc81.png)
Neural network vertically rolled up
The figure Neural network vertically rolled up shows the simplest RNN representation, a one-to-one RNN: one input is mapped to one output through a single hidden layer.
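As a rough illustration of this rolled-up, vector-to-vector view (this is not the book's code; the layer sizes and random weights below are hypothetical placeholders), the forward pass of such a one-to-one network with a single hidden layer can be sketched in a few lines of NumPy:

```python
import numpy as np

# Hypothetical sizes: a 4-dimensional input vector, a 3-unit hidden layer,
# and a 2-dimensional output vector. The weights are random placeholders.
input_size, hidden_size, output_size = 4, 3, 2

rng = np.random.default_rng(0)
W_xh = rng.standard_normal((hidden_size, input_size))   # input -> hidden weights
b_h = np.zeros(hidden_size)                             # hidden bias
W_hy = rng.standard_normal((output_size, hidden_size))  # hidden -> output weights
b_y = np.zeros(output_size)                             # output bias

def forward(x):
    """Map one input vector to one output vector through one hidden layer."""
    h = np.tanh(W_xh @ x + b_h)   # the entire hidden layer as a single vector
    y = W_hy @ h + b_y            # the entire output layer as a single vector
    return y

x = rng.standard_normal(input_size)  # one input vector
print(forward(x))                    # one output vector
```

Each matrix multiplication here stands in for a whole layer of scalar nodes, which is exactly what the rolled-up figures depict.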