What this book covers
Chapter 1, Getting Started with TensorFlow 2.x, covers the main objects and concepts in TensorFlow. We introduce tensors, variables, and placeholders. We also show how to work with matrices and perform various mathematical operations. At the end of the chapter, we show how to access the data sources used in the rest of the book.
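To give a flavor of these basics, here is a minimal TensorFlow 2.x sketch (illustrative only, not code taken from the chapter):

```python
import tensorflow as tf

# A constant tensor and a trainable variable
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
w = tf.Variable(tf.ones((2, 2)))

# Matrix multiplication runs eagerly in TensorFlow 2.x
product = tf.matmul(a, w)
print(product.numpy())
```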
Chapter 2, The TensorFlow Way, establishes how to connect all the algorithm components from Chapter 1, Getting Started with TensorFlow 2.x, into a computational graph in multiple ways to create a simple classifier. Along the way, we cover computational graphs, loss functions, backpropagation, and training with data.
Chapter 3, Keras, focuses on the high-level TensorFlow API named Keras. After introducing layers, the building blocks of Keras models, we cover the Sequential, Functional, and Sub-Classing APIs for creating models.
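As a taste of the Sequential API, a tiny classifier might be sketched as follows (the layer sizes and input shape are made up for illustration, not the chapter's own model):

```python
import tensorflow as tf
from tensorflow import keras

# A small classifier built with the Keras Sequential API
model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```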
Chapter 4, Linear Regression, focuses on using TensorFlow to explore various linear regression techniques, such as Lasso, Ridge, ElasticNet, and logistic regression. We conclude by extending linear models with Wide & Deep. We show how to implement each model using estimators.
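For illustration, a bare-bones estimator-based linear regressor might look like this (the feature name and toy data are invented for the sketch):

```python
import tensorflow as tf

# One numeric feature column and a canned linear regressor
feature_columns = [tf.feature_column.numeric_column("x")]
estimator = tf.estimator.LinearRegressor(feature_columns=feature_columns)

def input_fn():
    # Toy data: y is roughly 2 * x
    features = {"x": [[1.0], [2.0], [3.0], [4.0]]}
    labels = [[2.0], [4.0], [6.0], [8.0]]
    return tf.data.Dataset.from_tensor_slices((features, labels)).repeat().batch(2)

estimator.train(input_fn=input_fn, steps=100)
```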
Chapter 5, Boosted Trees, discusses the TensorFlow implementation of boosted trees – one of the most popular models for tabular data. We demonstrate the functionality by addressing a business problem of predicting hotel booking cancellations.
Chapter 6, Neural Networks, covers how to implement neural networks in TensorFlow, starting with the concepts of operational gates and activation functions. We then show a shallow neural network and how to build up various types of layers. We end the chapter by teaching a TensorFlow neural network to play tic-tac-toe.
Chapter 7, Predicting with Tabular Data, extends the previous chapter by demonstrating how to use TensorFlow for tabular data. We show how to process data by handling missing values and binary, nominal, ordinal, and date features. We also introduce activation functions such as GELU and SELU (particularly effective for deep architectures) and the correct use of cross-validation to validate your architecture and parameters when you do not have enough data available.
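For example, GELU and SELU are available as ordinary Keras activations in recent TensorFlow 2.x releases; a minimal sketch (with placeholder layer sizes and input shape, not the chapter's architecture) could be:

```python
import tensorflow as tf
from tensorflow import keras

# Dense layers using GELU and SELU activations; SELU pairs well
# with the lecun_normal initializer.
model = keras.Sequential([
    keras.layers.Input(shape=(30,)),
    keras.layers.Dense(64, activation="gelu"),
    keras.layers.Dense(64, activation="selu",
                       kernel_initializer="lecun_normal"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```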
Chapter 8, Convolutional Neural Networks, expands our knowledge of neural networks by illustrating how to use images with convolutional layers (and other image layers and functions). We show how to build a compact CNN for MNIST digit recognition and extend it to color images in the CIFAR-10 task. We also illustrate how to extend pre-trained image recognition models for custom tasks. We end the chapter by explaining and demonstrating the StyleNet/neural style and DeepDream algorithms in TensorFlow.
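A compact convolutional network of this kind might be sketched as follows (an illustrative architecture, not the exact one used in the chapter):

```python
import tensorflow as tf
from tensorflow import keras

# A small CNN for 28x28 grayscale digit images
model = keras.Sequential([
    keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(64, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```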
Chapter 9, Recurrent Neural Networks, introduces a powerful architecture type (the RNN) that has been instrumental in achieving state-of-the-art results on different kinds of sequential data; the applications presented include time-series prediction and text sentiment analysis.
Chapter 10, Transformers, is dedicated to Transformers – a new class of deep learning models that have revolutionized the field of Natural Language Processing (NLP). We demonstrate how to leverage their strength for both generative and discriminative tasks.
Chapter 11, Reinforcement Learning with TensorFlow and TF-Agents, presents the TensorFlow library dedicated to reinforcement learning. The structured approach allows us to handle problems ranging from simple games to content personalization in e-commerce.
Chapter 12, Taking TensorFlow to Production, gives tips and examples on moving TensorFlow to a production environment, taking advantage of multiple processing devices (for example, GPUs), and setting up TensorFlow distributed across multiple machines. We also show the various uses of TensorBoard, and how to view computational graph metrics and charts. We end the chapter by showing an example of setting up an RNN model on TensorFlow Serving to serve an API.