Advanced Deep Learning with TensorFlow 2 and Keras

Apply DL, GANs, VAEs, deep RL, unsupervised learning, object detection and segmentation, and more

Product type: Paperback
Published: Feb 2020
Publisher: Packt
ISBN-13: 9781838821654
Length: 512 pages
Edition: 2nd Edition
Author: Rowel Atienza

Table of Contents (16)

Preface
1. Introducing Advanced Deep Learning with Keras
2. Deep Neural Networks
3. Autoencoders
4. Generative Adversarial Networks (GANs)
5. Improved GANs
6. Disentangled Representation GANs
7. Cross-Domain GANs
8. Variational Autoencoders (VAEs)
9. Deep Reinforcement Learning
10. Policy Gradient Methods
11. Object Detection
12. Semantic Segmentation
13. Unsupervised Learning Using Mutual Information
14. Other Books You May Enjoy
15. Index

To get the most out of this book

  • Deep learning and Python: The reader should have a fundamental knowledge of deep learning and its implementation in Python. While previous experience in using Keras to implement deep learning algorithms is helpful, it is not required. Chapter 1, Introducing Advanced Deep Learning with Keras, offers a review of deep learning concepts and their implementation in tf.keras.
  • Math: The discussions in this book assume that the reader is familiar with calculus, linear algebra, statistics, and probability at the college level.
  • GPU: The majority of the tf.keras implementations in this book require a GPU. Without a GPU, many of the code examples are impractical to execute because of the time involved (many hours to days). The examples in this book use modest amounts of data wherever possible in order to minimize the need for high-performance computers. The reader is expected to have access to at least an NVIDIA GTX 1060.
  • Editor: The example code in this book was edited using vim on Ubuntu Linux 18.04 LTS and macOS Catalina. Any Python-aware text editor is acceptable.
  • TensorFlow 2: The code examples in this book are written using the Keras API of TensorFlow 2 (tf2). Please ensure that the NVIDIA GPU driver and tf2 are both properly installed; a minimal verification sketch follows this list.
  • GitHub: We learn by example and experimentation. Please git pull or fork the code bundle for the book from its GitHub repository. After getting the code, examine it. Run it. Change it. Run it again. Do creative experiments by tweaking the code. It is the only way to appreciate all the theory explained in the chapters. Giving a star on the book's GitHub repository https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras is also highly appreciated.
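
For instance, a minimal sanity check (not a listing from the book, and assuming TensorFlow 2.1 or later) to confirm that tf2 is installed and can see the GPU could look like this:

import tensorflow as tf

# print the installed TensorFlow version; it should start with "2."
print(tf.__version__)

# list the GPUs visible to TensorFlow; an empty list means the
# NVIDIA driver or CUDA libraries are not set up correctly
print(tf.config.list_physical_devices('GPU'))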

Download the example code files

The code bundle for the book is hosted on GitHub at:

https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras
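
For example, assuming git is installed, one way to get a local copy of the code bundle is:

git clone https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras.git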

We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Download the color images

We also provide color images of the figures used in this book. You can download them here: https://static.packt-cdn.com/downloads/9781838821654_ColorImages.pdf.

Conventions used

The code in this book is in Python; more specifically, Python 3.

A block of code is set as follows:

from tensorflow.keras.layers import (Activation, BatchNormalization,
                                     Conv2DTranspose, Dense, Reshape)
from tensorflow.keras.models import Model


def build_generator(inputs, image_size):
    """Build a Generator Model
    Stack of BN-ReLU-Conv2DTranspose layers to generate fake images.
    Output activation is sigmoid instead of tanh in [1].
    Sigmoid converges easily.
    Arguments:
        inputs (Layer): Input layer of the generator
            (the z-vector)
        image_size (tensor): Target size of one side
            (assuming square image)
    Returns:
        generator (Model): Generator Model
    """
    image_resize = image_size // 4
    # network parameters 
    kernel_size = 5
    layer_filters = [128, 64, 32, 1]
    x = Dense(image_resize * image_resize * layer_filters[0])(inputs)
    x = Reshape((image_resize, image_resize, layer_filters[0]))(x)
    for filters in layer_filters:
        # first two convolution layers use strides = 2
        # the last two use strides = 1
        if filters > layer_filters[-2]:
            strides = 2
        else:
            strides = 1
        x = BatchNormalization()(x)
        x = Activation('relu')(x)
        x = Conv2DTranspose(filters=filters,
                            kernel_size=kernel_size,
                            strides=strides,
                            padding='same')(x)
    x = Activation('sigmoid')(x)
    generator = Model(inputs, x, name='generator')
    return generator
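
As a rough usage sketch (not a listing from the book), the generator above could be built from a latent input as follows; the 100-dimensional z-vector and the 28 x 28 MNIST-style image size are illustrative assumptions:

from tensorflow.keras.layers import Input

# illustrative values: a 100-dimensional z-vector and 28 x 28 output images
z = Input(shape=(100,), name='z_input')
generator = build_generator(z, image_size=28)
generator.summary()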

When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:

# generate fake images
fake_images = generator.predict([noise, fake_labels])
# real + fake images = 1 batch of train data
x = np.concatenate((real_images, fake_images))
# real + fake labels = 1 batch of train data labels
labels = np.concatenate((real_labels, fake_labels))

Whenever possible, docstrings are included. At the very least, text comments are used to minimize space usage.

Any command-line code execution is written as follows:

python3 dcgan-mnist-4.2.1.py

The above example has the following filename layout: algorithm-dataset-chapter.section.number.py. The command-line example runs DCGAN on the MNIST dataset from Chapter 4, Generative Adversarial Networks (GANs), second section, first listing. In some cases, the explicit command line to execute is not written, but it is assumed to be:

python3 name-of-the-file-in-listing

The file name of the code example is included in the Listing caption. This book uses Listing to identify code examples in the text.

Bold: Indicates a new term, an important word, or words that you see on the screen, for example, in menus or dialog boxes. For example: StackedGAN has two additional loss functions, Conditional and Entropy.

Warnings or important notes appear like this.
