Learn TensorFlow Enterprise

Build, manage, and scale machine learning workloads seamlessly using Google's TensorFlow Enterprise

Product type: Paperback
Published: Nov 2020
Publisher: Packt
ISBN-13: 9781800209145
Length: 314 pages
Edition: 1st Edition
Author: KC Tung
Table of Contents

Preface
Section 1 – TensorFlow Enterprise Services and Features
    Chapter 1: Overview of TensorFlow Enterprise
    Chapter 2: Running TensorFlow Enterprise in Google AI Platform
Section 2 – Data Preprocessing and Modeling
    Chapter 3: Data Preparation and Manipulation Techniques
    Chapter 4: Reusable Models and Scalable Data Pipelines
Section 3 – Scaling and Tuning ML Works
    Chapter 5: Training at Scale
    Chapter 6: Hyperparameter Tuning
Section 4 – Model Optimization and Deployment
    Chapter 7: Model Optimization
    Chapter 8: Best Practices for Model Training and Performance
    Chapter 9: Serving a TensorFlow Model
Other Books You May Enjoy

What this book covers

Chapter 1, Overview of TensorFlow Enterprise, illustrates how to set up and run TensorFlow Enterprise in a Google Cloud Platform (GCP) environment. This will give you initial hands-on experience of how TensorFlow Enterprise integrates with other data services in GCP.

Chapter 2, Running TensorFlow Enterprise in Google AI Platform, describes how to use GCP to set up and run TensorFlow Enterprise. As a differentiated TensorFlow distribution, TensorFlow Enterprise is available on several (but not all) GCP platforms, and it is important to use those platforms to ensure that the correct distribution is provisioned.

Chapter 3, Data Preparation and Manipulation Techniques, illustrates how to take raw data and format it for consumption by a TensorFlow model training process. We will look at a number of essential TensorFlow Enterprise APIs that convert raw data into the Protobuf format for efficient streaming, the recommended workflow for feeding data into a training process.
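
The Protobuf-based streaming workflow described above can be sketched as follows. This is a minimal illustration, not the book's code: the feature names (height, label) and the file name are hypothetical placeholders.

```python
import tensorflow as tf

# Serialize one observation into a tf.train.Example protobuf.
# The feature names ("height", "label") are hypothetical placeholders.
def make_example(height, label):
    feature = {
        "height": tf.train.Feature(float_list=tf.train.FloatList(value=[height])),
        "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))

# Write a few serialized examples to a TFRecord file.
with tf.io.TFRecordWriter("sample.tfrecord") as writer:
    for h, lbl in [(1.7, 0), (1.8, 1)]:
        writer.write(make_example(h, lbl).SerializeToString())

# Read them back as a streaming tf.data pipeline, parsing each record
# against a schema that mirrors the features written above.
schema = {
    "height": tf.io.FixedLenFeature([], tf.float32),
    "label": tf.io.FixedLenFeature([], tf.int64),
}
dataset = tf.data.TFRecordDataset("sample.tfrecord").map(
    lambda raw: tf.io.parse_single_example(raw, schema)
)
```

Because the records are serialized Protobufs on disk, the same file can be read sequentially and in parallel by multiple workers, which is what makes this format suitable for streaming into training.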

Chapter 4, Reusable Models and Scalable Data Pipelines, describes the different ways in which a TensorFlow Enterprise model may be built or reused. These options provide the flexibility to suit different situational requirements for building, training, and deploying TensorFlow models. Equipped with this knowledge, you will be able to make informed choices and understand the trade-offs among different model development strategies.
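
As one illustration of a scalable data pipeline, the tf.data API chains transformations so that input preparation overlaps with training. The toy in-memory tensors below are placeholders; a real pipeline would stream from files such as TFRecords.

```python
import tensorflow as tf

# Toy in-memory features and labels standing in for real training data.
features = tf.constant([[0.1], [0.2], [0.3], [0.4]])
labels = tf.constant([0, 1, 0, 1])

# A scalable input pipeline: shuffle, batch, and prefetch so the
# accelerator does not wait on the host for the next batch.
dataset = (
    tf.data.Dataset.from_tensor_slices((features, labels))
    .shuffle(buffer_size=4)
    .batch(2)
    .prefetch(tf.data.AUTOTUNE)
)
```

The same pipeline definition works unchanged whether the source is four rows in memory or terabytes of sharded files, which is the sense in which it scales.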

Chapter 5, Training at Scale, illustrates the use of TensorFlow Enterprise distributed training strategies to scale your model training to a cluster (either GPU or TPU). This will enable you to build a model development and training process that is robust and takes advantage of all the hardware at your disposal.
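
A minimal sketch of the distributed-strategy pattern, assuming a single machine: tf.distribute.MirroredStrategy replicates training across the visible GPUs, and falls back to a single CPU replica when none are present. The tiny model here is only a placeholder.

```python
import tensorflow as tf

# MirroredStrategy mirrors variables across all visible GPUs
# (or uses a single CPU replica when no GPU is available).
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

# Variables and the optimizer must be created inside the strategy
# scope so that they are replicated correctly.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(4,)),
        tf.keras.layers.Dense(8, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
```

For a multi-worker cluster or a TPU, the same scope pattern applies with a different strategy class (for example, TPUStrategy); the model-building code inside the scope stays the same.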

Chapter 6, Hyperparameter Tuning, focuses on hyperparameter tuning, a necessary part of model training, especially when building your own model. TensorFlow Enterprise now provides high-level APIs for advanced hyperparameter space search algorithms. In this chapter, you will learn how to leverage the distributed computing power at your disposal to reduce the training time required for hyperparameter tuning.
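
The underlying idea can be shown with a plain-Python grid search, independent of the high-level APIs the chapter covers. The objective function and the search space below are hypothetical stand-ins: in practice the objective trains a model with the given hyperparameters and returns a validation metric.

```python
from itertools import product

# Stand-in objective: in practice this trains a model with the given
# hyperparameters and returns a validation loss to minimize.
def objective(learning_rate, num_units):
    return (learning_rate - 0.01) ** 2 + (num_units - 64) ** 2 / 1e4

# Hypothetical hyperparameter search space.
space = {
    "learning_rate": [1e-4, 1e-3, 1e-2, 1e-1],
    "num_units": [16, 32, 64, 128],
}

# Exhaustive grid search. Each trial is independent of the others,
# which is what lets a tuning service fan trials out across a cluster.
best_score, best_config = float("inf"), None
for lr, units in product(space["learning_rate"], space["num_units"]):
    score = objective(lr, units)
    if score < best_score:
        best_score = score
        best_config = {"learning_rate": lr, "num_units": units}
```

The trial independence noted in the comment is the key point: distributed tuning reduces wall-clock time by running many such trials in parallel rather than by speeding up any single trial.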

Chapter 7, Model Optimization, explores how lean and efficient your model can be made. Does your model run as efficiently as possible? If your use case requires the model to run with limited resources (memory, model size, or data type), as on edge or mobile devices, then it's time to consider model runtime optimization. This chapter discusses the latest means of model optimization through the TensorFlow Lite framework. After this chapter, you will be able to optimize a trained TensorFlow Enterprise model to be as lightweight as possible for inferencing.
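
The conversion step can be sketched as follows. The small untrained model is a placeholder for a fully trained one; the conversion and optimization calls are the standard TensorFlow Lite converter API.

```python
import tensorflow as tf

# A small Keras model standing in for a fully trained model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Convert to TensorFlow Lite with the default optimizations
# (post-training quantization), shrinking the model for edge
# or mobile inference.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# The result is a flat byte buffer ready to write to disk and
# ship to a device.
```

The resulting buffer is typically much smaller than the original SavedModel, at the cost of some precision from quantization; whether that trade-off is acceptable depends on the use case.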

Chapter 8, Best Practices for Model Training and Performance, focuses on two aspects of model training that are universal: data ingestion and overfitting. First, it is necessary to build a data ingestion pipeline that works regardless of the size and complexity of the training data. In this chapter, best practices and recommendations for using TensorFlow Enterprise data preprocessing pipelines are explained and demonstrated. Second, to deal with overfitting, standard regularization practices as well as some regularization techniques recently released by the TensorFlow team are discussed.
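
Two of the standard regularization practices mentioned above, L2 weight penalties and dropout, can be applied directly in layer definitions. This is a generic Keras sketch, not the chapter's code; the layer sizes and rates are arbitrary.

```python
import tensorflow as tf

# Two standard defenses against overfitting, applied in the layers:
# an L2 penalty on the weights, and dropout between layers.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(
        32,
        activation="relu",
        kernel_regularizer=tf.keras.regularizers.l2(1e-4)),
    tf.keras.layers.Dropout(0.5),  # zeroes half the activations during training
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```

The L2 term adds a penalty proportional to the squared weights to the loss, while dropout is active only during training and is a no-op at inference time.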

Chapter 9, Serving a TensorFlow Model, describes the fundamentals of model inferencing as a web service. You will learn how to serve a TensorFlow model using TensorFlow Serving by building a Docker image of the model. You will begin by learning how to make use of saved models in your local environment. Then you will build a Docker image of the model using TensorFlow Serving as the base image. Finally, you will serve the model as a web service through the RESTful API exposed by your Docker container.
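
The client side of that RESTful API can be sketched as follows. TensorFlow Serving's REST predict endpoint accepts a JSON body with an "instances" key; the host, port, model name (my_model), and feature values below are hypothetical placeholders.

```python
import json

# TensorFlow Serving exposes a REST predict endpoint of the form
# /v1/models/<name>:predict. The host and model name are placeholders.
url = "http://localhost:8501/v1/models/my_model:predict"

# The request body is JSON with an "instances" list, one entry per
# input row the model should score.
payload = json.dumps({"instances": [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]})

# A client would POST this payload, for example with the requests
# library, and read the "predictions" list from the JSON response:
#   response = requests.post(url, data=payload)
#   predictions = response.json()["predictions"]
```

On the server side, the container built from the tensorflow/serving base image publishes this endpoint (port 8501 by default for REST), so once the Docker container is running, any HTTP client can score the model this way.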
