Chapter 1, Getting Started with TensorFlow 2.0, provides a quick bird's-eye view of the architectural and API-level changes in TensorFlow 2.0. It covers TensorFlow 2.0 installation and setup, compares it with TensorFlow 1.x (for example, in the Keras and layer APIs), and also introduces rich extensions such as TensorFlow Probability, Tensor2Tensor, Ragged Tensors, and the newly available custom training logic for loss functions.
Chapter 2, Keras Default Integration and Eager Execution, goes deeper into the high-level TensorFlow 2.0 APIs using Keras. It presents a detailed perspective of how graphs are evaluated in TensorFlow 1.x compared to TensorFlow 2.0. It explains lazy evaluation and eager execution, and how they differ in TensorFlow 2.0, and it also shows how to use Keras model subclassing to incorporate TensorFlow 2.0's lower-level APIs in custom-built models.
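The two ideas in this chapter can be sketched in a few lines. This is a minimal, illustrative example (the specific tensor values and the TinyModel class are our own, not from the book): eager execution returns concrete values immediately, and a subclassed tf.keras.Model defines its own forward pass in call().

```python
import tensorflow as tf

# Eager execution is the default in TensorFlow 2.0: operations return
# concrete values immediately, with no Session or explicit graph build.
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
y = tf.matmul(x, x)
product = y.numpy()  # [[ 7. 10.]
                     #  [15. 22.]]

# Model subclassing lets a custom model use lower-level building blocks
# while keeping the familiar Keras interface.
class TinyModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.hidden = tf.keras.layers.Dense(4, activation="relu")
        self.out = tf.keras.layers.Dense(1)

    def call(self, inputs):
        # The forward pass is ordinary Python, traced eagerly by default.
        return self.out(self.hidden(inputs))

model = TinyModel()
prediction = model(tf.ones((1, 3)))  # shape (1, 1)
```

Because the matmul runs eagerly, `product` can be inspected right away in a Python debugger, which is one of the main workflow changes compared with TensorFlow 1.x sessions.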
Chapter 3, Designing and Constructing Input Data Pipelines, gives an overview of how to build complex input data pipelines for ingesting large training and inference datasets in the most common formats, such as CSV, images, and text, using TFRecords and tf.data.Dataset. It gives a general explanation of protocol buffers and protocol messages and how they are implemented using tf.Example. It also explains best practices for using tf.data.Dataset with regard to the shuffling, prefetching, and batching of data, and provides recommendations for building data pipelines.
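A minimal sketch of such a pipeline, using an in-memory range of integers as a stand-in for real data (a production pipeline would read TFRecords, CSV files, or images instead):

```python
import tensorflow as tf

# A toy source dataset; real pipelines would use tf.data.TFRecordDataset
# or similar readers over files.
dataset = tf.data.Dataset.range(10)

# A commonly recommended ordering: shuffle, then batch, then prefetch.
dataset = (dataset
           .shuffle(buffer_size=10)   # randomize example order
           .batch(4)                  # group examples into mini-batches
           .prefetch(tf.data.experimental.AUTOTUNE))  # overlap I/O with training

# Iterating yields batches eagerly; here we just tally what comes through.
num_batches = 0
total = 0
for batch in dataset:
    num_batches += 1
    total += int(tf.reduce_sum(batch))
```

Prefetching with AUTOTUNE lets tf.data pick how many batches to prepare ahead of time, so the input pipeline keeps the accelerator fed during training.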
Chapter 4, Model Training and Use of TensorBoard, covers an overall model training pipeline to enable you to build, train, and validate state-of-the-art models. It talks about how to integrate input data pipelines, create tf.keras models, run training in a distributed manner, and run validations to fine-tune hyperparameters. It explains how to export TensorFlow models for deployment or inference, and it outlines the usage of TensorBoard, the changes to it in TensorFlow 2.0, and how to use it for debugging and profiling a model's speed and performance.
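A compact sketch of this training loop, using synthetic regression data (the data, model size, and log directory are illustrative choices of ours, not the book's): a tf.keras model is compiled, fitted, and wired to TensorBoard via a callback.

```python
import numpy as np
import tensorflow as tf

# Synthetic data for y = 3x + 2; a real pipeline would come from tf.data.
x = np.random.rand(256, 1).astype("float32")
y = 3.0 * x + 2.0

model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, input_shape=(1,))
])
model.compile(optimizer="sgd", loss="mse")

# The TensorBoard callback writes event files that
# `tensorboard --logdir logs` can visualize.
tb = tf.keras.callbacks.TensorBoard(log_dir="logs")
history = model.fit(x, y, epochs=5, batch_size=32,
                    callbacks=[tb], verbose=0)
```

The `history` object records per-epoch metrics, which is often the quickest sanity check before opening TensorBoard itself.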
Chapter 5, Model Inference Pipelines – Multi-platform Deployments, shows us deployment strategies for using trained models to build software applications at scale in a live production environment. Models trained in TensorFlow 2.0 can be deployed on platforms such as servers and web browsers using a variety of programming languages, such as Python and JavaScript.
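Deployment across these platforms typically starts from the SavedModel format. A minimal sketch (the model and directory layout are illustrative; the versioned `my_model/1` subdirectory follows the convention TensorFlow Serving expects):

```python
import os
import tempfile

import tensorflow as tf

# A trivial model standing in for a trained one.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, input_shape=(2,))
])

# Export to the SavedModel format, the common interchange point for
# TensorFlow Serving, TensorFlow Lite, and TensorFlow.js converters.
export_dir = os.path.join(tempfile.mkdtemp(), "my_model", "1")
tf.saved_model.save(model, export_dir)

# The export directory contains the graph definition plus variables.
saved_pb = os.path.join(export_dir, "saved_model.pb")
```

From here, the same artifact can be served over HTTP/gRPC by TensorFlow Serving or converted for browser or mobile targets.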
Chapter 6, AIY Projects and TensorFlow Lite, shows us how to deploy models trained in TensorFlow 2.0 on low-powered embedded systems such as edge devices and mobile systems including Android, iOS, the Raspberry Pi, Edge TPUs, and the NVIDIA Jetson Nano. It also contains details about training and deploying models on Google's AIY kits.
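Getting a model onto these low-powered devices usually goes through TensorFlow Lite. A minimal conversion sketch (the model itself is a throwaway example of ours):

```python
import tensorflow as tf

# A tiny stand-in for a trained Keras model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, input_shape=(4,))
])

# Convert the Keras model to a TensorFlow Lite flatbuffer, which is the
# format deployed to Android, iOS, Raspberry Pi, and microcontrollers.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()  # returns the serialized model as bytes

# The bytes are typically written to a .tflite file and bundled with the app.
```

Further size and latency gains on edge hardware usually come from enabling quantization options on the converter before calling `convert()`.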
Chapter 7, Migrating From TensorFlow 1.x to 2.0, shows us the conceptual differences between TensorFlow 1.x and TensorFlow 2.0, the compatibility criteria between them, and ways to migrate between them, syntactically and semantically. It also shows several examples of syntactic and semantic migration from TensorFlow 1.x to TensorFlow 2.0, and includes references and pointers to further reading.
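The compatibility story can be illustrated with the `tf.compat.v1` shim, which keeps 1.x-style graph-and-session code running inside a 2.0 installation. A minimal sketch (the particular computation is just an example):

```python
import tensorflow as tf

# 1.x-style code continues to work in 2.0 through the compat.v1 namespace,
# provided eager execution is switched off first.
tf.compat.v1.disable_eager_execution()

a = tf.compat.v1.placeholder(tf.float32)  # graph-mode placeholder
b = a * 2.0

with tf.compat.v1.Session() as sess:
    out = sess.run(b, feed_dict={a: 21.0})  # 42.0
```

The idiomatic 2.0 rewrite of the same computation is simply `out = tf.constant(21.0) * 2.0` evaluated eagerly; TensorFlow 2.0 also ships a `tf_upgrade_v2` command-line script that mechanically rewrites much of the 1.x syntax.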