Understanding how recurrent neural networks work

With the use of a memory state, the RNN architecture is well suited to sequence-based problems. In this section of the chapter, we will go over a full explanation of how this works. You will learn about the general characteristics of a neural network as well as what makes RNNs special. This section emphasizes the theoretical side (including mathematical equations), but I can assure you that once you grasp the fundamentals, any practical example will go smoothly.

To make the explanations understandable, let's discuss the task of generating text and, in particular, producing a new chapter based on one of my favorite book series, The Hunger Games, by Suzanne Collins.

Basic neural network overview

At the highest level, a neural network that solves supervised problems works as follows:

  1. Obtain training data (such as images for image recognition or sentences for generating text)
  2. Encode the data (neural networks work with numbers so a numeric representation of the data is required)
  3. Build the architecture of your neural network model
  4. Train the model until you are satisfied with the results
  5. Evaluate your model by making a fresh new prediction

Let's see how these steps are applied for an RNN.

Obtaining data

For the problem of generating a new book chapter based on the book series The Hunger Games, you can extract the text from all the books in the series (The Hunger Games, Catching Fire, and Mockingjay) by copying and pasting it. To do that, you need to find the books' content online.
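As a rough sketch (not from the book), the following Python snippet assumes you have already saved each book's text as a local plain-text file; the file names are placeholders:

book_files = ["the_hunger_games.txt", "catching_fire.txt", "mockingjay.txt"]   # placeholder file names

corpus_text = ""
for path in book_files:
    with open(path, encoding="utf-8") as f:
        corpus_text += f.read() + "\n"   # concatenate the three books into one string

print(len(corpus_text), "characters collected")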

Encoding the data

We use word embeddings (https://www.analyticsvidhya.com/blog/2017/06/word-embeddings-count-word2veec/) for this purpose. Word embedding is a collective name for techniques in which words or phrases from a vocabulary are mapped to vectors of real numbers. Some methods include one-hot encoding, word2vec, and GloVe. You will learn more about them in the forthcoming chapters.
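As a minimal illustrative sketch (one-hot encoding only; word2vec and GloVe are covered later), the following Python code builds a vocabulary and maps a word to its one-hot vector; the tiny example sentence is just a stand-in for the real corpus:

import numpy as np

corpus_text = "i volunteer as tribute"                      # stand-in for the real corpus
vocabulary = sorted(set(corpus_text.split()))               # unique words, in a fixed order
word_to_index = {word: i for i, word in enumerate(vocabulary)}

def one_hot(word):
    vector = np.zeros(len(vocabulary))                      # all zeros...
    vector[word_to_index[word]] = 1.0                       # ...except a single 1 at the word's index
    return vector

print(one_hot("volunteer"))                                 # [0. 0. 0. 1.] for this tiny vocabulary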

Building the architecture 

Each neural network consists of three sets of layers—input, hidden, and output. There is always one input and one output layer. If the neural network is deep, it has multiple hidden layers:

The difference between an RNN and a standard feedforward network lies in the hidden states, which are cyclical. As seen in the following diagram, this allows data to propagate from one time step to another, making each one of these steps dependent on the previous one:

A common practice is to unfold the preceding diagram for a better and more intuitive understanding. After rotating the illustration vertically and adding some notation and labels, based on the example we picked earlier (generating a new chapter based on The Hunger Games books), we end up with the following diagram:

This is an unfolded RNN with one hidden layer. The identical-looking sets of (input + hidden RNN unit + output) are actually the different time steps (or cycles) in the RNN. For example, the combination of x_{t-1} + RNN + o_{t-1} illustrates what is happening at time step t-1. At each time step, these operations perform as follows:

  1. The network encodes the word at the current time step (for example, t-1) using any of the word embedding techniques and produces a vector x_{t-1} (the produced vector can be x_t or x_{t+1}, depending on the specific time step)
  2. Then, x_{t-1}, the encoded version of the input word I at time step t-1, is plugged into the RNN cell (located in the hidden layer). After several equations (not displayed here but happening inside the RNN cell), the cell produces an output o_{t-1} and a memory state s_{t-1}. The memory state is the result of the input x_{t-1} and the previous value of that memory state, s_{t-2}. For the initial time step, one can assume that s_{t-2} is a zero vector
  3. Producing the actual word (volunteer) at time step t-1 happens after decoding the output o_{t-1} using a text corpus specified at the beginning of the training
  4. Finally, the network moves multiple time steps forward until it reaches the final step, where it predicts the next word in the sequence

You can see how each one of the memory states {…, s_{t-1}, s_t, s_{t+1}, …} holds information about all the previous inputs. This makes RNNs very special and really good at predicting the next unit in a sequence. Let's now see what mathematical equations sit behind the preceding operations.
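Before turning to the equations, here is a toy NumPy sketch (not the book's TensorFlow code) that traces these operations over a short sequence; the tiny vocabulary, the hidden size, and the randomly initialized parameters U, W, V, b_s, and b_o are purely illustrative:

import numpy as np

vocabulary = ["as", "i", "tribute", "volunteer"]
word_to_index = {w: i for i, w in enumerate(vocabulary)}
vocab_size, hidden_size = len(vocabulary), 8

rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(hidden_size, vocab_size))    # input-to-hidden weights
W = rng.normal(scale=0.1, size=(hidden_size, hidden_size))   # hidden-to-hidden weights
V = rng.normal(scale=0.1, size=(vocab_size, hidden_size))    # hidden-to-output weights
b_s, b_o = np.zeros(hidden_size), np.zeros(vocab_size)       # biases

def softmax(z):
    z = z - z.max()                                          # shift for numerical stability
    return np.exp(z) / np.exp(z).sum()

s = np.zeros(hidden_size)                                    # the initial memory state is a zero vector
for word in ["i", "volunteer", "as"]:
    x = np.zeros(vocab_size)
    x[word_to_index[word]] = 1.0                             # step 1: encode the current word
    s = np.tanh(U @ x + W @ s + b_s)                         # step 2: update the memory state
    o = softmax(V @ s + b_o)                                 # step 2: produce the output distribution
    predicted = vocabulary[int(np.argmax(o))]                # step 3: decode the output into a word
    print(word, "->", predicted)

Since the parameters here are untrained, the printed predictions are meaningless; the point is only the flow of x_t, s_t, and o_t from one time step to the next.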

Text corpus—an array of all words in the example vocabulary.

Training the model

All the magic in this model lies behind the RNN cells. In our simple example, each cell applies the same equations, just with a different set of variables. A detailed version of a single cell looks like this:

First, let's explain the new terms that appear in the preceding diagram:

  • Weights (U, V, W): A weight is a matrix (or a number) that represents the strength of the value it is applied to. For example, U determines how much of the input x_t should be considered in the following equations.
    If U consists of high values, then x_t will have a significant influence on the end result. The weight values are often initialized randomly or drawn from a distribution (such as a normal/Gaussian distribution). It is important to note that U, V, W, and the biases are the same for each time step. Using the backpropagation algorithm, they are modified with the aim of producing accurate predictions
  • Biases (b_s, b_o): An offset vector (different for each layer), which adds a change to the value of the output
  • Activation function (tanh): This determines the final value of the current memory state s_t and the output o_t. Basically, activation functions map the resultant values of several equations similar to the following ones into a desired range: (-1, 1) if we are using the tanh function, (0, 1) if we are using the sigmoid function, and (0, +infinity) if we are using ReLU (https://ai.stackexchange.com/questions/5493/what-is-the-purpose-of-an-activation-function-in-neural-networks); see the short activation sketch after this list
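As a quick check of the activation ranges mentioned in the last bullet, the following sketch compares tanh, sigmoid, and ReLU on a few sample values (sigmoid and ReLU are defined by hand here only for illustration):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))           # squashes values into (0, 1)

def relu(z):
    return np.maximum(0.0, z)                 # clips negative values to 0

z = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
print(np.tanh(z))                             # values in (-1, 1)
print(sigmoid(z))                             # values in (0, 1)
print(relu(z))                                # values in [0, +infinity)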

Now, let's go over the process of computing the variables. To calculate s_t and o_t, we can do the following:

s_t = tanh(U·x_t + W·s_{t-1} + b_s)
o_t = softmax(V·s_t + b_o)

As you can see, the memory state s_t is a result of its previous value s_{t-1} and the input x_t. Using this formula helps in retaining information about all the previous states.

The input x_t is a one-hot representation of the word volunteer. Recall from before that one-hot encoding is a type of word embedding. If the text corpus consists of 20,000 unique words and volunteer is the 19th word, then x_t is a 20,000-dimensional vector in which every element is 0 except the one at the 19th position, which has a value of 1. This indicates that we are only taking this particular word into account.
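As a small sketch of this 20,000-word example, the index 18 below is simply a stand-in for the position of volunteer, counting from zero:

import numpy as np

corpus_size = 20000
volunteer_index = 18                      # the 19th word, counting from zero

x_t = np.zeros(corpus_size)
x_t[volunteer_index] = 1.0                # a single 1 marks the word volunteer

print(x_t.sum())                          # 1.0 -- only one active position
print(int(np.argmax(x_t)))                # 18 -- recovers the word's index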

The sum of U·x_t, W·s_{t-1}, and the bias b_s is passed to the tanh activation function, which squashes the result between -1 and 1 using the following formula:

tanh(z) = (e^z - e^(-z)) / (e^z + e^(-z))

In this, e = 2.71828 (Euler's number) and z is any real number.
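To confirm the formula, the following sketch computes tanh by hand and compares it with NumPy's built-in implementation:

import numpy as np

def tanh_manual(z):
    return (np.exp(z) - np.exp(-z)) / (np.exp(z) + np.exp(-z))

for z in [-2.0, 0.0, 0.5, 3.0]:
    print(z, tanh_manual(z), np.tanh(z))  # the two values agree for every z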

The output o_t at time step t is calculated using s_t and the softmax function. This function can be categorized as an activation function, with the exception that its primary usage is at the output layer, when a probability distribution is needed. For example, predicting the correct outcome in a classification problem can be achieved by picking the most probable value from a vector whose elements sum up to 1. Softmax produces this vector, as follows:

softmax(z)_i = e^(z_i) / (e^(z_1) + e^(z_2) + … + e^(z_K))

In this, e = 2.71828 (Euler's number) and z is a K-dimensional vector. The formula calculates the probability of the value at the ith position in the vector z.

After applying the softmax function, o_t becomes a vector of the same dimension as x_t (the corpus size of 20,000), with all of its elements summing to 1. With that in mind, finding the predicted word in the text corpus becomes straightforward.
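The following sketch illustrates this: the scores stand in for V·s_t + b_o over a tiny four-word corpus instead of the full 20,000-word one:

import numpy as np

def softmax(z):
    z = z - z.max()                            # shift for numerical stability
    return np.exp(z) / np.exp(z).sum()

scores = np.array([1.2, 0.3, -0.8, 2.5])       # one score per word in the corpus
o_t = softmax(scores)

print(o_t)                                     # a probability for each word
print(o_t.sum())                               # 1.0 -- the elements sum to one
print(int(np.argmax(o_t)))                     # 3 -- the index of the most probable word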

Evaluating the model

Once an assumption for the next word in the sequence is made, we need to assess how good this prediction is. To do that, we need to compare the predicted word o_t with the actual word from the training data (let's call it y_t). This operation can be accomplished using a loss (cost) function. These types of functions aim to find the error between predicted and actual values. Our choice will be the cross-entropy loss function, which looks like this:

L(y_t, o_t) = -Σ_i y_{t,i} · log(o_{t,i})

Since we are not going to give a detailed explanation of this formula, you can treat it as a black box. If you are curious about how it works, I recommend reading the article Improving the way neural networks learn by Michael Nielsen (http://neuralnetworksanddeeplearning.com/chap3.html#introducing_the_cross-entropy_cost_function). A useful thing to know is that the cross-entropy function performs really well on classification problems.
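As a minimal sketch of the loss in code, the following compares the predicted distribution o_t with the one-hot target y_t over a tiny four-word corpus (the eps term only guards against log(0)):

import numpy as np

def cross_entropy(y_t, o_t, eps=1e-12):
    return -np.sum(y_t * np.log(o_t + eps))   # penalizes low probability on the actual word

y_t = np.array([0.0, 0.0, 0.0, 1.0])          # the actual word sits at index 3
good = np.array([0.05, 0.05, 0.10, 0.80])     # prediction close to the target
bad = np.array([0.70, 0.10, 0.10, 0.10])      # prediction far from the target

print(cross_entropy(y_t, good))               # small loss (about 0.22)
print(cross_entropy(y_t, bad))                # larger loss (about 2.30)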

After computing the error, we come to one of the most complex and, at the same time, most powerful techniques in deep learning, called backpropagation.

In simple terms, the backpropagation algorithm traverses backward through all (or several) time steps while updating the weights and biases of the network. After repeating this procedure for a certain number of training steps, the network learns the correct parameters and is able to yield better predictions.

To clear up any confusion: training steps and time steps are completely different terms. In one time step, we take a single element from the sequence and predict the next one. A training step is composed of multiple time steps, where the number of time steps depends on how long the sequence for that training step is. In addition, time steps are specific to RNNs, while training steps are a general neural network concept.

After each training step, we can see the value of the loss function decrease. Once it crosses a certain threshold, we can state that the network has successfully learned to predict new words in the text.
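To make the distinction concrete, here is a purely structural sketch (the two helper functions are placeholders, not a real implementation of backpropagation through time):

training_sequences = [
    ["i", "volunteer", "as", "tribute"],
    ["may", "the", "odds", "be", "ever", "in", "your", "favor"],
]
loss_threshold = 0.1

def forward_time_step(word, state):
    # placeholder: encode the word, update the memory state, return the new state and a loss
    return state, 1.0

def update_parameters(total_loss):
    # placeholder for backpropagation and the weight/bias updates
    pass

for training_step, sequence in enumerate(training_sequences):
    state, total_loss = None, 0.0
    for time_step, word in enumerate(sequence):   # one time step per word in the sequence
        state, step_loss = forward_time_step(word, state)
        total_loss += step_loss
    update_parameters(total_loss)                 # one parameter update per training step
    print("training step", training_step, "loss", total_loss)
    if total_loss < loss_threshold:
        break                                     # stop once the loss crosses the threshold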

The last step is to generate the new chapter. This can be done by choosing a random word as a start (such as games) and then predicting the following words using the preceding formulas with the pretrained weights and biases. Finally, we should end up with somewhat meaningful text.
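As an illustrative sketch of that generation loop (the parameters here are random, so the output is gibberish; with the real trained weights and biases the same loop would produce more meaningful text):

import numpy as np

vocabulary = ["the", "games", "begin", "tribute", "volunteer"]   # a toy stand-in for the 20,000-word corpus
word_to_index = {w: i for i, w in enumerate(vocabulary)}
vocab_size, hidden_size = len(vocabulary), 8

rng = np.random.default_rng(1)
U = rng.normal(scale=0.1, size=(hidden_size, vocab_size))
W = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
V = rng.normal(scale=0.1, size=(vocab_size, hidden_size))
b_s, b_o = np.zeros(hidden_size), np.zeros(vocab_size)

def softmax(z):
    z = z - z.max()
    return np.exp(z) / np.exp(z).sum()

generated = ["games"]                             # choose a word as the start
s = np.zeros(hidden_size)
for _ in range(10):
    x = np.zeros(vocab_size)
    x[word_to_index[generated[-1]]] = 1.0         # feed the last generated word back in
    s = np.tanh(U @ x + W @ s + b_s)
    o = softmax(V @ s + b_o)
    next_word = vocabulary[int(rng.choice(vocab_size, p=o))]   # sample the next word from o_t
    generated.append(next_word)

print(" ".join(generated))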
