Chapter 6: Decoding Images

Activity 17: Predict if an Image Is of a Cat or a Dog

Solution:

  1. If you look at the names of the images in the dataset, you will find that the images of dogs start with dog, followed by '.' and then a number, for example, "dog.123.jpg". Similarly, the images of cats start with cat. So, let's create a function to get the label from the name of the file:

    def get_label(file):
        class_label = file.split('.')[0]
        if class_label == 'dog': label_vector = [1,0]
        elif class_label == 'cat': label_vector = [0,1]
        return label_vector

    Then, create a function to read, resize, and preprocess the images:

    import os
    import numpy as np
    from PIL import Image
    from tqdm import tqdm
    from random import shuffle

    SIZE = 50

    def get_data():
        data = []
        # PATH points to the folder containing the dataset images
        files = os.listdir(PATH)
        for image in tqdm(files):
            label_vector = get_label(image)
            img = Image.open(PATH + image).convert('L')
            img = img.resize((SIZE,SIZE))
            data.append([np.asarray(img), np.array(label_vector)])
        shuffle(data)
        return data

    SIZE here refers to the dimension of the final square image we will input to the model. We resize the image to have the length and breadth equal to SIZE.

    Note

    When running os.listdir(PATH), you will find that all the images of cats come first, followed by images of dogs.

  2. To have the same distribution of both classes in the training and testing sets, we will shuffle the data (a quick way to check the resulting balance is sketched after Figure 6.40).
  3. Define the size of the image and read the data. Split the loaded data into training and testing sets:

    data = get_data()
    train = data[:7000]
    test = data[7000:]
    x_train = [data[0] for data in train]
    y_train = [data[1] for data in train]
    x_test = [data[0] for data in test]
    y_test = [data[1] for data in test]

  4. Transform the lists to numpy arrays and reshape the images to a format that Keras will accept:

    y_train = np.array(y_train)
    y_test = np.array(y_test)
    x_train = np.array(x_train).reshape(-1, SIZE, SIZE, 1)
    x_test = np.array(x_test).reshape(-1, SIZE, SIZE, 1)

  5. Create a CNN model that makes use of regularization to perform training:

    from keras.models import Sequential
    from keras.layers import Dense, Dropout, Conv2D, MaxPool2D, Flatten, BatchNormalization

    model = Sequential()

    Add the convolutional layers:

    model.add(Conv2D(48, (3, 3), activation='relu', padding='same', input_shape=(50,50,1)))
    model.add(Conv2D(48, (3, 3), activation='relu'))

    Add the pooling layer:

    model.add(MaxPool2D(pool_size=(2, 2)))

  6. Add the batch normalization layer along with a dropout layer using the following code:

    model.add(BatchNormalization())
    model.add(Dropout(0.10))

  7. Flatten the 2D matrices into 1D vectors:

    model.add(Flatten())

  8. Use dense layers as the final layers for the model:

    model.add(Dense(512, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(2, activation='softmax'))

  9. Compile the model and then train it using the training data:

    model.compile(loss='categorical_crossentropy',
                  optimizer='adam',
                  metrics=['accuracy'])

    Define the number of epochs you want to train the model for:

    EPOCHS = 10

    model_details = model.fit(x_train, y_train,
                        batch_size=128,
                        epochs=EPOCHS,
                        validation_data=(x_test, y_test),
                        verbose=1)

  10. Print the model's accuracy on the test set:

    score = model.evaluate(x_test, y_test)
    print("Accuracy: {0:.2f}%".format(score[1]*100))

    Figure 6.39: Model accuracy on the test set
  11. Print the model's accuracy on the training set:

    score = model.evaluate(x_train, y_train)
    print("Accuracy: {0:.2f}%".format(score[1]*100))

Figure 6.40: Model accuracy on the train set
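If you want to confirm that the shuffle from step 2 left both classes roughly evenly distributed across the two splits, a quick optional check (not part of the book's solution) is to count the one-hot labels in each set:

# Count dog ([1,0]) and cat ([0,1]) labels in each split
print("Train dogs/cats:", int(y_train[:, 0].sum()), int(y_train[:, 1].sum()))
print("Test dogs/cats:", int(y_test[:, 0].sum()), int(y_test[:, 1].sum()))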

The test set accuracy for this model is 70.4%, while the training set accuracy is much higher, at 96%. This means that the model has started to overfit. Improving the model to get the best possible accuracy is left to you as an exercise (one possible starting point is sketched after Figure 6.41). You can plot the incorrectly predicted images using the code from the previous exercises to get a sense of how well the model performs:

import matplotlib.pyplot as plt

y_pred = model.predict(x_test)
incorrect_indices = np.nonzero(np.argmax(y_pred, axis=1) != np.argmax(y_test, axis=1))[0]

labels = ['dog', 'cat']

image = 5
plt.imshow(x_test[incorrect_indices[image]].reshape(50,50), cmap=plt.get_cmap('gray'))
plt.show()
print("Prediction: {0}".format(labels[np.argmax(y_pred[incorrect_indices[image]])]))

Figure 6.41: Incorrect prediction of a dog by the regularized CNN model
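As a starting point for the exercise above, one common way to fight overfitting is to add a second convolution/pooling block and increase the dropout. The sketch below follows that idea; the filter counts and dropout rates are illustrative assumptions, not the book's prescribed solution:

from keras.models import Sequential
from keras.layers import Dense, Dropout, Conv2D, MaxPool2D, Flatten, BatchNormalization

model = Sequential()

# First convolution/pooling block (as before, with slightly higher dropout)
model.add(Conv2D(48, (3, 3), activation='relu', padding='same', input_shape=(50, 50, 1)))
model.add(Conv2D(48, (3, 3), activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(BatchNormalization())
model.add(Dropout(0.25))

# Second convolution/pooling block (illustrative filter counts)
model.add(Conv2D(96, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(96, (3, 3), activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(BatchNormalization())
model.add(Dropout(0.25))

model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(2, activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

Training this variant with the same model.fit call as in step 9 lets you compare the gap between training and test accuracy against the original model.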

Activity 18: Identifying and Augmenting an Image

Solution:

  1. Create functions to get the images and the labels of the dataset:

    from PIL import Image

    def get_input(file):
        return Image.open(PATH+file)

    def get_output(file):
        class_label = file.split('.')[0]
        if class_label == 'dog': label_vector = [1,0]
        elif class_label == 'cat': label_vector = [0,1]
        return label_vector

  2. Create functions to preprocess and augment images:

    SIZE = 50

    def preprocess_input(image):
        # Data preprocessing
        image = image.convert('L')
        image = image.resize((SIZE,SIZE))

        # Data augmentation (each helper returns a new image, so reassign the result)
        image = random_vertical_shift(image, shift=0.2)
        image = random_horizontal_shift(image, shift=0.2)
        image = random_rotate(image, rot_range=45)
        image = random_horizontal_flip(image)

        return np.array(image).reshape(SIZE,SIZE,1)

  3. Implement the augmentation functions so that, when passed an image, they randomly apply the corresponding augmentation and return the resulting image.

    This is for horizontal flip:

    import random

    def random_horizontal_flip(image):
        toss = random.randint(1, 2)
        if toss == 1:
            return image.transpose(Image.FLIP_LEFT_RIGHT)
        else:
            return image

    This is for rotation:

    def random_rotate(image, rot_range):
        value = random.randint(-rot_range, rot_range)
        return image.rotate(value)

    This is for image shift:

    from PIL import ImageChops

    def random_horizontal_shift(image, shift):
        width, height = image.size
        rand_shift = random.randint(0, int(shift*width))
        image = ImageChops.offset(image, rand_shift, 0)
        image.paste(0, (0, 0, rand_shift, height))
        return image

    def random_vertical_shift(image, shift):
        width, height = image.size
        rand_shift = random.randint(0, int(shift*height))
        image = ImageChops.offset(image, 0, rand_shift)
        image.paste(0, (0, 0, width, rand_shift))
        return image

  4. Finally, create the generator that will produce batches of images to be used to train the model (a quick check of the generator's output shapes is sketched after step 11):

    import numpy as np

    def custom_image_generator(images, batch_size=128):
        while True:
            # Randomly select images for the batch
            batch_images = np.random.choice(images, size=batch_size)
            batch_input = []
            batch_output = []

            # Read each image, perform preprocessing, and get its label
            for file in batch_images:
                # Function that reads and returns the image
                input_image = get_input(file)
                # Function that gets the label of the image
                label = get_output(file)
                # Function that pre-processes and augments the image
                image = preprocess_input(input_image)

                batch_input.append(image)
                batch_output.append(label)

            batch_x = np.array(batch_input)
            batch_y = np.array(batch_output)

            # Return a tuple of (images, labels) to feed the network
            yield (batch_x, batch_y)

  5. Create functions to load the test dataset's images and labels:

    def get_label(file):
        class_label = file.split('.')[0]
        if class_label == 'dog': label_vector = [1,0]
        elif class_label == 'cat': label_vector = [0,1]
        return label_vector

    This get_data function is similar to the one we used in Activity 17. The modification here is that we pass the list of images to be read as an input parameter, and we return a tuple of the images and their labels:

    def get_data(files):
        data_image = []
        labels = []
        for image in tqdm(files):
            label_vector = get_label(image)
            img = Image.open(PATH + image).convert('L')
            img = img.resize((SIZE,SIZE))
            labels.append(label_vector)
            data_image.append(np.asarray(img).reshape(SIZE,SIZE,1))
        data_x = np.array(data_image)
        data_y = np.array(labels)
        return (data_x, data_y)

  6. Now, create the train/test split and load the test dataset:

    import os

    files = os.listdir(PATH)
    random.shuffle(files)
    train = files[:7000]
    test = files[7000:]
    validation_data = get_data(test)

  7. Create the model and perform training:

    from keras.models import Sequential
    from keras.layers import Input, Dense, Dropout, Conv2D, MaxPool2D, Flatten, BatchNormalization

    model = Sequential()

    Add the convolutional layers:

    model.add(Conv2D(32, (3, 3), activation='relu', padding='same', input_shape=(50,50,1)))
    model.add(Conv2D(32, (3, 3), activation='relu'))

    Add the pooling layer:

    model.add(MaxPool2D(pool_size=(2, 2)))

  8. Add the batch normalization layer along with a dropout layer:

    model.add(BatchNormalization())
    model.add(Dropout(0.10))

  9. Flatten the 2D matrices into 1D vectors:

    model.add(Flatten())

  10. Use dense layers as the final layers for the model:

    model.add(Dense(512, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(2, activation='softmax'))

  11. Compile the model and train it using the generator that you created:

    EPOCHS = 10
    BATCH_SIZE = 128

    model.compile(loss='categorical_crossentropy',
                  optimizer='adam',
                  metrics=['accuracy'])

    model_details = model.fit_generator(custom_image_generator(train, batch_size=BATCH_SIZE),
                        steps_per_epoch=len(train) // BATCH_SIZE,
                        epochs=EPOCHS,
                        validation_data=validation_data,
                        verbose=1)
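You can also sanity-check the generator from step 4 by pulling a single batch and inspecting its shapes. This is an optional check, not part of the book's solution; it assumes train holds the list of training filenames as above:

# Draw one batch from the generator and confirm its shapes
batch_x, batch_y = next(custom_image_generator(train, batch_size=32))
print(batch_x.shape)  # expected: (32, 50, 50, 1)
print(batch_y.shape)  # expected: (32, 2)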

The test set accuracy for this model is 72.6%, which is an improvement on the model from Activity 17. You will observe that the training accuracy is really high, at 98%. This means that this model has started to overfit, much like the one in Activity 17. This could be because the data augmentation applied here is still not strong enough. Try changing the data augmentation parameters to see if there is any change in accuracy (one possible variation is sketched after Figure 6.42). Alternatively, you can modify the architecture of the neural network to get better results. You can plot the incorrectly predicted images to get a sense of how well the model performs:

import matplotlib.pyplot as plt

y_pred = model.predict(validation_data[0])
incorrect_indices = np.nonzero(np.argmax(y_pred, axis=1) != np.argmax(validation_data[1], axis=1))[0]

labels = ['dog', 'cat']

image = 7
plt.imshow(validation_data[0][incorrect_indices[image]].reshape(50,50), cmap=plt.get_cmap('gray'))
plt.show()
print("Prediction: {0}".format(labels[np.argmax(y_pred[incorrect_indices[image]])]))

Figure 6.42: Incorrect prediction of a cat by the data augmentation CNN model
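As a starting point for that experimentation, the sketch below widens the shift and rotation ranges and adds a vertical flip. The specific values and the random_vertical_flip helper are illustrative assumptions, not the book's prescribed solution:

def random_vertical_flip(image):
    # Flip the image top-to-bottom with 50% probability (illustrative helper)
    if random.randint(1, 2) == 1:
        return image.transpose(Image.FLIP_TOP_BOTTOM)
    return image

def preprocess_input(image):
    # Same preprocessing as before
    image = image.convert('L')
    image = image.resize((SIZE, SIZE))

    # Stronger augmentation: wider shifts, larger rotations, extra flip
    image = random_vertical_shift(image, shift=0.3)
    image = random_horizontal_shift(image, shift=0.3)
    image = random_rotate(image, rot_range=60)
    image = random_horizontal_flip(image)
    image = random_vertical_flip(image)

    return np.array(image).reshape(SIZE, SIZE, 1)

Re-running the generator-based training from step 11 with this version of preprocess_input lets you see whether the stronger augmentation narrows the gap between training and test accuracy.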