Recurrent neural networks (RNNs)
In this section, we will discuss the architecture of RNNs. We will look at how a recurrence relation is unfolded over time, and how this unfolded form is used to perform the computation in RNNs.
Unfolding recurrent computations
This section explains how unfolding a recurrence relation converts it into a computational graph, with the same parameters shared across a deep network structure.
Let us consider a simple recurrent form of a dynamical system:

s(t) = f(s(t-1); θ)
In the preceding equation, s(t) represents the state of the system at time t, and θ denotes the parameters, which are shared across all time steps.
This equation is called a recurrence equation because the computation of s(t) requires the value of s(t-1), the value of s(t-1) in turn requires s(t-2), and so on.
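To make the unfolding concrete, here is a minimal sketch in Python. The transition function f and the value of θ below are illustrative choices, not part of the text; the point is that the same f and the same θ are applied at every time step.

```python
def f(s_prev, theta):
    """One step of the dynamical system; here, an arbitrary linear map."""
    return theta * s_prev

def unfold(s0, theta, steps):
    """Unfold the recurrence s(t) = f(s(t-1); theta) for a fixed number of steps."""
    s = s0
    states = [s]
    for _ in range(steps):
        s = f(s, theta)   # the same parameter theta is shared across all steps
        states.append(s)
    return states

print(unfold(1.0, 0.5, 3))  # [1.0, 0.5, 0.25, 0.125]
```

Unfolding for three steps turns the recursive definition into an ordinary feed-forward chain of three applications of f, which is exactly the deep, parameter-shared structure described above.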
This is a simple representation of a dynamical system, given for the purpose of understanding. Let us take one more example, where the dynamical system is driven by an external signal x(t) and produces an output y(t):

s(t) = f(s(t-1), x(t); θ)
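The input-driven system can be sketched as follows. This is not the book's code: the specific update s(t) = tanh(W·s(t-1) + U·x(t)), the output map y(t) = V·s(t), and the weight matrices W, U, V with their dimensions are all assumptions chosen to illustrate the idea.

```python
import numpy as np

rng = np.random.default_rng(0)
state_size, input_size, output_size = 3, 2, 1

# Illustrative parameters theta = (W, U, V), shared across all time steps.
W = rng.normal(size=(state_size, state_size))   # state-to-state weights
U = rng.normal(size=(state_size, input_size))   # input-to-state weights
V = rng.normal(size=(output_size, state_size))  # state-to-output weights

def step(s_prev, x_t):
    """One time step: the state is driven by both s(t-1) and the input x(t)."""
    s_t = np.tanh(W @ s_prev + U @ x_t)
    y_t = V @ s_t          # the output y(t) is read off the current state
    return s_t, y_t

s = np.zeros(state_size)                 # initial state s(0)
xs = rng.normal(size=(4, input_size))    # a short external input sequence
for x_t in xs:
    s, y = step(s, x_t)                  # same W, U, V reused at every step
```

Unlike the first system, the state here depends on an outside signal at every step, so the unfolded computation consumes a sequence of inputs and emits a sequence of outputs; this is the pattern an RNN follows.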
RNNs...