Sentiment analysis

What is the code we used to test Colab? It is an example of sentiment analysis built on top of the IMDb dataset, which contains the text of 50,000 movie reviews from the Internet Movie Database. Each review is either positive or negative (for example, thumbs up or thumbs down), and the dataset is split into 25,000 reviews for training and 25,000 for testing. Our goal is to build a classifier that predicts the binary judgment given the text. IMDb can be loaded directly via tf.keras, with the sequences of words in the reviews already converted to sequences of integers, where each integer represents a specific word in a dictionary. There is also a convenient way of padding sentences to max_len, so that all sentences, whether short or long, can be used as inputs to a neural network with a fixed-size input vector (we will look at this requirement in more detail in Chapter 8, Recurrent Neural Networks):

import tensorflow as tf
from tensorflow.keras import datasets, layers, models, preprocessing

max_len = 200
n_words = 10000
dim_embedding = 256
EPOCHS = 20
BATCH_SIZE = 500

def load_data():
    # Load data, keeping only the n_words most frequent words.
    (X_train, y_train), (X_test, y_test) = datasets.imdb.load_data(num_words=n_words)
    # Pad sequences to max_len.
    X_train = preprocessing.sequence.pad_sequences(X_train, maxlen=max_len)
    X_test = preprocessing.sequence.pad_sequences(X_test, maxlen=max_len)
    return (X_train, y_train), (X_test, y_test)
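
To make the integer encoding concrete, the following is a minimal sketch (our addition, not book code) that maps a review back to words using datasets.imdb.get_word_index(). It assumes the default offsets of load_data(), which reserve index 0 for padding, 1 for start-of-sequence, and 2 for out-of-vocabulary words:

word_index = datasets.imdb.get_word_index()
# Shift by 3 because indices 0-2 are reserved by the load_data() defaults.
reverse_index = {value + 3: key for key, value in word_index.items()}
reverse_index.update({0: "<pad>", 1: "<start>", 2: "<unk>"})
# Decode the first (unpadded) training review and show its label.
(X_raw, y_raw), _ = datasets.imdb.load_data(num_words=n_words)
decoded = " ".join(reverse_index.get(i, "<unk>") for i in X_raw[0])
print(decoded[:100], "->", "positive" if y_raw[0] == 1 else "negative")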

Now let's build a model. We are going to use a few layers that will be explained in detail in Chapter 8, Recurrent Neural Networks. For now, let's assume that the Embedding() layer will map the sparse space of words contained in the reviews into a denser space, which makes computation easier. In addition, we will use a GlobalMaxPooling1D() layer, which takes the maximum value over the time dimension for each of the dim_embedding features. We also have two Dense() layers; the last one consists of a single neuron with a sigmoid activation function that makes the final binary estimation:

def build_model():
    model = models.Sequential()
    # Input: Embedding layer.
    # The model takes as input an integer matrix of size
    # (batch, input_length); the largest integer in the input should be
    # no larger than n_words (the vocabulary size). The layer outputs a
    # tensor of shape (batch, input_length, dim_embedding).
    model.add(layers.Embedding(n_words,
        dim_embedding, input_length=max_len))
    model.add(layers.Dropout(0.3))
    # Takes the maximum value over the time dimension for each of the
    # dim_embedding features.
    model.add(layers.GlobalMaxPooling1D())
    model.add(layers.Dense(128, activation='relu'))
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(1, activation='sigmoid'))
    return model
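
Before training, it is worth sanity-checking the parameter count that model.summary() will report. The following back-of-the-envelope calculation (our addition, not book code) accounts for every trainable weight in the model above:

embedding_params = n_words * dim_embedding   # 10,000 * 256 = 2,560,000
dense1_params = dim_embedding * 128 + 128    # weights + biases = 32,896
dense2_params = 128 * 1 + 1                  # weights + bias = 129
# Dropout and GlobalMaxPooling1D add no trainable parameters.
print(embedding_params + dense1_params + dense2_params)   # 2,593,025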

Now we need to train our model. This piece of code is very similar to what we did with MNIST. Let's see:

(X_train, y_train), (X_test, y_test) = load_data()
model = build_model()
model.summary()
model.compile(optimizer="adam", loss="binary_crossentropy",
    metrics=["accuracy"]
)
history = model.fit(X_train, y_train,
    epochs=EPOCHS,
    batch_size=BATCH_SIZE,
    validation_data=(X_test, y_test)
)
score = model.evaluate(X_test, y_test, batch_size=BATCH_SIZE)
print("\nTest score:", score[0])
print("Test accuracy:", score[1])

Let's take a look at the network and then run a few training iterations:

Figure 36: The results of the network following a few iterations

As shown in the following image, we reach an accuracy of 85%, which is not bad at all for a simple network:

Figure 37: Testing the accuracy of a simple network
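
Once trained, the model can also be used to score individual reviews. The following is a minimal sketch (our addition, not book code) that runs predict() on a few padded test reviews and thresholds the sigmoid output at 0.5 to recover the binary judgment:

probs = model.predict(X_test[:3])
for p, label in zip(probs, y_test[:3]):
    predicted = "positive" if p[0] > 0.5 else "negative"
    actual = "positive" if label == 1 else "negative"
    print(f"predicted: {predicted} (p={p[0]:.2f}), actual: {actual}")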
