What are RNNs?
Before we look at RNNs, let's refresh our memory and revisit how FNNs and CNNs work. In a typical FNN, you have an input layer, one or more hidden layers, and an output layer. After the data is fed into the input layer, the information passes to the hidden layer. There, the dot product of the input values and the weights of each node is summed up along with the bias term, and the result is passed through an activation function at each of the three nodes (Figure 6.1). The activation function can be binary, sigmoid, ReLU, Leaky ReLU, or something else, as you learned in Chapter 4, Deep Learning for Genomics. Depending on the type of activation function, the value of each node in the hidden layer is output:
Figure 6.1 – A multi-dimensional input type FNN
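The forward pass just described can be sketched in a few lines of NumPy. This is a minimal illustration, not code from the book: the sizes (four input features, three hidden nodes) and the choice of sigmoid activation are assumptions chosen to mirror the description above.

```python
import numpy as np

# Illustrative sizes (assumed): 4 input features, 3 hidden nodes,
# matching the three hidden-layer nodes described in the text.
rng = np.random.default_rng(0)

x = rng.normal(size=4)        # input vector from the input layer
W = rng.normal(size=(3, 4))   # weights: one row per hidden node
b = rng.normal(size=3)        # bias term for each hidden node

# Weighted sum (dot product) of inputs and weights, plus the bias
z = W @ x + b

# Sigmoid activation applied at each hidden node
h = 1.0 / (1.0 + np.exp(-z))

print(h)  # three activation values, one per hidden node
```

The same pattern repeats for every layer: the activations `h` become the inputs to the next layer's weighted sum.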
The number of nodes in the output layer depends on the problem and the required output. For example, if you are trying to classify a DNA sequence based on mutations in each of the 10 different...