Bidirectional RNNs are based on the idea that the output at time t may depend on both previous and future elements in the sequence. To realize this, the outputs of two RNNs must be combined: one processes the sequence in one direction, and the second processes it in the opposite direction.
The network splits the neurons of a regular RNN into two directions: one for the positive time direction (forward states) and one for the negative time direction (backward states). With this structure, the output layer can receive information from both past and future states.
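To make the forward/backward split concrete, here is a minimal NumPy sketch of a bidirectional vanilla RNN over a single sequence. All the function names, dimensions, and the weight initialization are illustrative assumptions, not part of the implementation developed in this chapter; the point is only to show that the output at each step concatenates a forward (past) state with a backward (future) state:

```python
import numpy as np

def rnn_step(x_t, h_prev, W_x, W_h, b):
    # One vanilla RNN step: h_t = tanh(x_t W_x + h_prev W_h + b)
    return np.tanh(x_t @ W_x + h_prev @ W_h + b)

def bidirectional_rnn(X, params_fw, params_bw, hidden):
    # X: (T, input_dim) -- one sequence of T time steps
    T = X.shape[0]
    h_fw = np.zeros(hidden)
    h_bw = np.zeros(hidden)
    fw_states, bw_states = [], []
    # Forward pass over t = 0 .. T-1 (past context)
    for t in range(T):
        h_fw = rnn_step(X[t], h_fw, *params_fw)
        fw_states.append(h_fw)
    # Backward pass over t = T-1 .. 0 (future context)
    for t in reversed(range(T)):
        h_bw = rnn_step(X[t], h_bw, *params_bw)
        bw_states.append(h_bw)
    bw_states.reverse()  # align backward states with forward time order
    # Output at each t mixes past (forward) and future (backward) states
    return np.stack([np.concatenate([f, b])
                     for f, b in zip(fw_states, bw_states)])

# Illustrative sizes (assumed for this sketch)
rng = np.random.default_rng(0)
input_dim, hidden, T = 4, 3, 5

def make_params():
    return (rng.standard_normal((input_dim, hidden)),
            rng.standard_normal((hidden, hidden)),
            np.zeros(hidden))

X = rng.standard_normal((T, input_dim))
outputs = bidirectional_rnn(X, make_params(), make_params(), hidden)
print(outputs.shape)  # (T, 2 * hidden): each step carries both directions
```

Note that each output vector has twice the hidden size, because the forward and backward states are concatenated; this is also why the output layer of a B-RNN sees both past and future information at every time step.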
The unrolled architecture of a B-RNN is depicted in the following figure:
Let's now see how to implement a B-RNN for an image classification problem. We begin by importing the needed libraries; note that rnn is imported from TensorFlow's contrib package:
import tensorflow as tf
from tensorflow.contrib import rnn  # RNN cells and helpers (TensorFlow 1.x contrib API)
import numpy as np
The network...