Deep Learning with TensorFlow 2 and Keras

Regression, ConvNets, GANs, RNNs, NLP, and more with TensorFlow 2 and the Keras API

Product type: Paperback
Published: December 2019
Publisher: Packt
ISBN-13: 9781838823412
Length: 646 pages
Edition: 2nd
Authors (3): Dr. Amita Kapoor, Sujit Pal, Antonio Gulli
Table of Contents (19)

Preface
1. Neural Network Foundations with TensorFlow 2.0
2. TensorFlow 1.x and 2.x
3. Regression
4. Convolutional Neural Networks
5. Advanced Convolutional Neural Networks
6. Generative Adversarial Networks
7. Word Embeddings
8. Recurrent Neural Networks
9. Autoencoders
10. Unsupervised Learning
11. Reinforcement Learning
12. TensorFlow and Cloud
13. TensorFlow for Mobile and IoT and TensorFlow.js
14. An Introduction to AutoML
15. The Math Behind Deep Learning
16. Tensor Processing Unit
17. Other Books You May Enjoy
18. Index

What this book covers

This book discusses the TensorFlow 2.0 features and libraries, presents an overview of supervised and unsupervised machine learning models, and provides a comprehensive analysis of deep learning and machine learning models. Practical usage examples for cloud, mobile, and large production environments are provided throughout.

Chapter 1, Neural Network Foundations with TensorFlow 2.0, provides a step-by-step introduction to neural networks. You will learn how to use tf.keras layers in TensorFlow 2 to build simple neural network models, covering perceptrons, multi-layer perceptrons, activation functions, and dense networks. Finally, the chapter provides an intuitive introduction to backpropagation.
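
As a flavor of what the chapter builds, here is a minimal tf.keras sketch of a dense network for 10-class classification (the layer sizes and hyperparameters are illustrative, not the book's exact values):

    import tensorflow as tf

    # A two-layer dense network: 784 inputs -> 128 hidden units -> 10 classes.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="sgd",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()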

Chapter 2, TensorFlow 1.x and 2.x, compares the TensorFlow 1.x and TensorFlow 2.0 programming models. You will learn how to use the lower-level computational graph API of TensorFlow 1.x and the higher-level tf.keras API. New functionality such as eager execution, AutoGraph, tf.data, and distributed training is covered, along with brief comparisons of tf.keras with Estimators and of tf.keras with Keras.
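
To illustrate the shift from 1.x, here is a small sketch of eager execution and a tf.data input pipeline (the values are illustrative):

    import tensorflow as tf

    # Eager execution: operations run immediately, no Session or graph needed.
    x = tf.constant([[1.0, 2.0]])
    print(tf.matmul(x, tf.transpose(x)))   # tf.Tensor([[5.]], shape=(1, 1), ...)

    # tf.data: a shuffled, batched input pipeline built from in-memory values.
    ds = tf.data.Dataset.from_tensor_slices(tf.range(10)).shuffle(10).batch(4)
    for batch in ds:
        print(batch.numpy())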

Chapter 3, Regression, focuses on one of the most popular ML techniques: regression. You will learn how to use TensorFlow 2.0 Estimators to build simple and multiple regression models, and how to use logistic regression to solve a multi-class classification problem.
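
The chapter itself works with Estimators; as a quick sketch of the same idea in tf.keras, a single Dense unit is exactly a linear regression model (the toy data and hyperparameters below are illustrative):

    import numpy as np
    import tensorflow as tf

    # Toy data generated from y = 2x + 1 plus a little noise.
    x = np.random.rand(100, 1).astype("float32")
    y = 2 * x + 1 + 0.05 * np.random.randn(100, 1).astype("float32")

    # One Dense unit with no activation is a linear regression model.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
    model.compile(optimizer="sgd", loss="mse")
    model.fit(x, y, epochs=200, verbose=0)
    print(model.get_weights())   # weight close to 2, bias close to 1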

Chapter 4, Convolutional Neural Networks, introduces Convolutional Neural Networks (CNNs) and their applications to image processing. You will learn how to use TensorFlow 2.0 to build simple CNNs that recognize handwritten characters in the MNIST dataset and classify CIFAR images. Finally, you will see how to use pretrained networks such as VGG16 and Inception.
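
A minimal LeNet-style CNN for 28x28 grayscale MNIST digits might look like this (filter counts and kernel sizes are illustrative):

    import tensorflow as tf

    model = tf.keras.Sequential([
        # Two convolution + pooling stages extract spatial features...
        tf.keras.layers.Conv2D(32, (3, 3), activation="relu",
                               input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
        tf.keras.layers.MaxPooling2D((2, 2)),
        # ...and a dense head classifies into the 10 digit classes.
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])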

Chapter 5, Advanced Convolutional Neural Networks, discusses advanced applications of CNNs to image, video, audio, and text processing. Examples of image processing (transfer learning, DeepDream), audio processing (WaveNet), and text processing (sentiment analysis, Q&A) are discussed in detail.
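
Transfer learning typically follows one pattern: freeze a pretrained convolutional base and train only a new head. A sketch with VGG16 (the 5-class head is an illustrative assumption):

    import tensorflow as tf

    # Load VGG16 pretrained on ImageNet, without its classification head.
    base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3))
    base.trainable = False   # freeze the convolutional features

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(5, activation="softmax"),  # new task-specific head
    ])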

Chapter 6, Generative Adversarial Networks, focuses on Generative Adversarial Networks (GANs). We start with the first proposed GAN model and use it to forge MNIST characters, then use deep convolutional GANs to create celebrity images. The chapter discusses various GAN architectures, such as SRGAN, InfoGAN, and CycleGAN, and covers a range of cool GAN applications. Finally, it concludes with a TensorFlow 2.0 implementation of CycleGAN to convert between winter and summer images.
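
The adversarial pair at the heart of a GAN can be sketched as two small networks (sizes are illustrative; the book's models are more elaborate):

    import tensorflow as tf

    # Generator: maps a 100-dim noise vector to a flattened 28x28 "image".
    generator = tf.keras.Sequential([
        tf.keras.layers.Dense(256, activation="relu", input_shape=(100,)),
        tf.keras.layers.Dense(28 * 28, activation="tanh"),
    ])

    # Discriminator: scores a flattened image as real (1) or fake (0).
    discriminator = tf.keras.Sequential([
        tf.keras.layers.Dense(256, activation="relu", input_shape=(28 * 28,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])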

Chapter 7, Word Embeddings, describes what word embeddings are, with specific reference to two traditional, popular embeddings: Word2vec and GloVe. It covers the core ideas behind these two embeddings, how to generate them from your own corpus, and how to use them in your own networks for Natural Language Processing (NLP) applications. The chapter then covers various extensions to the basic embedding approach, such as using character n-grams instead of words (fastText), retaining word context by replacing static embeddings with those produced by a neural network (ELMo, Google Universal Sentence Encoder), sentence embeddings (InferSent, SkipThought), and using pretrained language models for embeddings (ULMFiT, BERT).
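
In tf.keras, a trainable embedding is simply a lookup table from token ids to dense vectors; a minimal sketch (vocabulary and dimension sizes are illustrative):

    import tensorflow as tf

    vocab_size, embed_dim = 10000, 100
    embedding = tf.keras.layers.Embedding(vocab_size, embed_dim)

    # Three token ids become three 100-dim vectors, learned during training.
    vectors = embedding(tf.constant([[4, 7, 9]]))
    print(vectors.shape)   # (1, 3, 100)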

Chapter 8, Recurrent Neural Networks, describes the basic architecture of Recurrent Neural Networks (RNNs) and why it is well suited to sequence learning tasks such as those found in NLP. It covers various RNN variants: LSTM, the Gated Recurrent Unit (GRU), peephole LSTM, and bidirectional LSTM. It goes into more depth on how an RNN can be used as a language model, then covers the seq2seq model, a type of RNN-based encoder-decoder architecture originally used in machine translation. It then covers attention mechanisms as a way of enhancing the performance of seq2seq architectures, and finally covers the Transformer architecture (BERT, GPT-2), which is based on the "Attention Is All You Need" paper.
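
A bidirectional LSTM text classifier of the kind the chapter builds can be sketched as follows (sizes are illustrative):

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(10000, 64),                     # token ids -> vectors
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),  # read both directions
        tf.keras.layers.Dense(1, activation="sigmoid"),           # binary label
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])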

Chapter 9, Autoencoders, describes autoencoders, a class of neural networks that attempt to recreate their input as their target. It covers different varieties of autoencoder, such as sparse, convolutional, and denoising autoencoders. The chapter trains a denoising autoencoder to remove noise from input images and demonstrates how autoencoders can be used to create MNIST digits. Finally, it covers the steps involved in building an LSTM autoencoder to generate sentence vectors.
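
A minimal autoencoder sketch for flattened 28x28 images; for the denoising variant, the same model is simply trained on noisy inputs against clean targets (the 32-dim bottleneck is illustrative):

    import tensorflow as tf

    inputs = tf.keras.Input(shape=(784,))
    encoded = tf.keras.layers.Dense(32, activation="relu")(inputs)        # compress
    decoded = tf.keras.layers.Dense(784, activation="sigmoid")(encoded)   # reconstruct
    autoencoder = tf.keras.Model(inputs, decoded)
    autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

    # Denoising: fit noisy inputs to clean targets, e.g.
    # autoencoder.fit(x_noisy, x_clean, epochs=10)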

Chapter 10, Unsupervised Learning, delves into unsupervised learning models. It covers techniques for clustering and dimensionality reduction such as PCA, k-means, and self-organizing maps. It goes into the details of Boltzmann machines and their implementation using TensorFlow, and the concepts covered are extended to build Restricted Boltzmann Machines (RBMs).
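
The chapter implements these techniques in TensorFlow; for a quick feel of dimensionality reduction followed by clustering, here is the same idea in a few lines of scikit-learn (random data, purely illustrative):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA

    X = np.random.rand(200, 10)                    # 200 samples, 10 features
    X2 = PCA(n_components=2).fit_transform(X)      # reduce to 2 dimensions
    labels = KMeans(n_clusters=3, n_init=10).fit_predict(X2)   # cluster
    print(labels[:10])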

Chapter 11, Reinforcement Learning, focuses on reinforcement learning. It starts with the Q-learning algorithm: beginning with the Bellman equation, the chapter covers concepts such as discounted rewards, exploration versus exploitation, and discount factors. It also explains policy-based and model-based reinforcement learning. Finally, a Deep Q-Network (DQN) is built to play an Atari game.
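
The core of tabular Q-learning is a one-line update derived from the Bellman equation; a minimal sketch (state/action counts and hyperparameters are illustrative):

    import numpy as np

    n_states, n_actions = 16, 4
    Q = np.zeros((n_states, n_actions))
    lr, gamma = 0.1, 0.99    # learning rate and discount factor

    def q_update(s, a, r, s_next):
        # Move Q(s, a) toward the observed reward plus the discounted
        # best value achievable from the next state.
        Q[s, a] += lr * (r + gamma * np.max(Q[s_next]) - Q[s, a])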

Chapter 12, TensorFlow and Cloud, discusses the cloud environment and how to utilize it for training and deploying your models. It covers the steps needed to set up Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure for deep learning applications, and surveys cloud services that allow you to run Jupyter Notebooks directly in the cloud. Finally, the chapter concludes with an introduction to TensorFlow Extended (TFX).

Chapter 13, TensorFlow for Mobile and IoT and TensorFlow.js, focuses on developing deep learning applications for the web, mobile devices, and IoT. The chapter discusses TensorFlow Lite and explores how it can be used to deploy models on Android devices. It also discusses in detail federated learning for distributed learning across thousands of mobile devices. Finally, the chapter introduces TensorFlow.js and how it can be used with vanilla JavaScript or Node.js to develop web applications.
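
Deploying to a device with TensorFlow Lite starts by converting a trained Keras model; a minimal sketch (the untrained one-layer model below stands in for a real trained model):

    import tensorflow as tf

    model = tf.keras.Sequential([tf.keras.layers.Dense(10, input_shape=(784,))])

    # Convert the Keras model to the compact TensorFlow Lite format.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    tflite_model = converter.convert()

    # Write the flatbuffer to disk, ready to be bundled with an Android app.
    with open("model.tflite", "wb") as f:
        f.write(tflite_model)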

Chapter 14, An Introduction to AutoML, introduces you to the exciting field of AutoML. It talks about automatic data preparation, automatic feature engineering, and automatic model generation. The chapter also introduces AutoKeras and Google Cloud Platform AutoML, with its multiple solutions for tables, vision, text, translation, and video processing.
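
With AutoKeras, automatic model generation collapses to a few lines; a sketch (max_trials and the commented calls are illustrative, and x_train/y_train are assumed MNIST-style arrays):

    import autokeras as ak

    # Search over candidate image-classification architectures automatically;
    # max_trials bounds how many models the search will try.
    clf = ak.ImageClassifier(max_trials=3)

    # Illustrative usage:
    # clf.fit(x_train, y_train, epochs=10)
    # predictions = clf.predict(x_test)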

Chapter 15, The Math Behind Deep Learning, as the title implies, discusses the math behind deep learning. In this chapter, we get "under the hood" and see what is going on when we perform deep learning. The chapter begins with a brief history of the origins of deep learning programming and backpropagation. Next, it introduces the mathematical tools and derivations that help us understand the concepts to be covered. The remainder of the chapter details backpropagation and some of its applications within CNNs and RNNs.
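
Backpropagation is automatic differentiation applied to a network, and TensorFlow exposes the mechanism directly through tf.GradientTape, as this tiny worked example shows:

    import tensorflow as tf

    # For y = x^2, calculus gives dy/dx = 2x, so the gradient at x = 3 is 6.
    x = tf.Variable(3.0)
    with tf.GradientTape() as tape:
        y = x * x
    print(tape.gradient(y, x))   # tf.Tensor(6.0, shape=(), dtype=float32)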

Chapter 16, Tensor Processing Unit, introduces the Tensor Processing Unit (TPU), a special chip developed at Google for ultra-fast execution of neural network mathematical operations. In this chapter we compare CPUs and GPUs with the three generations of TPUs and with Edge TPUs. The chapter includes code examples of using TPUs.
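
A typical TPU setup in current TensorFlow 2.x looks like the sketch below; it only runs where a TPU is actually attached (e.g. in Colab), and in early TF 2.0 releases the strategy lived under tf.distribute.experimental instead:

    import tensorflow as tf

    # Locate and initialize the attached TPU system.
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)

    # Any model built inside the scope is replicated across the TPU cores.
    with strategy.scope():
        model = tf.keras.Sequential([tf.keras.layers.Dense(10)])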
