Previous Versions of Neural Networks
Around 40 years ago, it became clear that Feed Forward Neural Networks (FFNNs) could not capture time-variable dependencies in a signal. Modeling such dependencies matters in many applications involving real-world data, such as speech and video, where the signal evolves over time. Moreover, human biological neural networks contain recurrent connections, so adding recurrence was the most obvious direction to take. How could a recurrent relationship be added to existing feedforward networks?
One of the first attempts was made by adding delay elements to the network; the resulting architecture was called the Time-Delay Neural Network, or TDNN for short.
In this network, as the following figure shows, delay elements store past inputs, which are fed to the network alongside the current timestep's input. This gives it a clear advantage over the traditional FFNN, which sees only the current input.
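The idea of feeding delayed inputs alongside the current one can be sketched in a few lines. The snippet below is a minimal illustration, not a faithful TDNN implementation: `delay_window` is a hypothetical helper that stacks each sample with its two predecessors (zero-padded at the start), and the single tanh unit with random weights stands in for a trained feedforward layer.

```python
import numpy as np

def delay_window(signal, n_delays):
    """Stack each sample x_t with its n_delays predecessors.

    Returns an array of shape (len(signal), n_delays + 1) whose row t is
    [x_t, x_{t-1}, ..., x_{t-n_delays}], zero-padded before the signal start.
    """
    padded = np.concatenate([np.zeros(n_delays), signal])
    return np.stack(
        [padded[t:t + n_delays + 1][::-1] for t in range(len(signal))]
    )

rng = np.random.default_rng(0)
signal = rng.standard_normal(10)

# Each timestep now presents a short history window to the network.
X = delay_window(signal, n_delays=2)   # shape (10, 3)

# A single feedforward unit (random, untrained) applied to every window:
W = rng.standard_normal(3)
outputs = np.tanh(X @ W)               # shape (10,)
```

The feedforward unit itself is unchanged; the delay elements simply widen its input so that each output depends on a fixed window of recent history rather than on the current sample alone.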