Learn TensorFlow Enterprise
Build, manage, and scale machine learning workloads seamlessly using Google's TensorFlow Enterprise

Product type: Paperback
Published: Nov 2020
Publisher: Packt
ISBN-13: 9781800209145
Length: 314 pages
Edition: 1st Edition
Author: KC Tung
Table of Contents (15 chapters)

Preface
1. Section 1 – TensorFlow Enterprise Services and Features
2. Chapter 1: Overview of TensorFlow Enterprise
3. Chapter 2: Running TensorFlow Enterprise in Google AI Platform
4. Section 2 – Data Preprocessing and Modeling
5. Chapter 3: Data Preparation and Manipulation Techniques
6. Chapter 4: Reusable Models and Scalable Data Pipelines
7. Section 3 – Scaling and Tuning ML Works
8. Chapter 5: Training at Scale
9. Chapter 6: Hyperparameter Tuning
10. Section 4 – Model Optimization and Deployment
11. Chapter 7: Model Optimization
12. Chapter 8: Best Practices for Model Training and Performance
13. Chapter 9: Serving a TensorFlow Model
14. Other Books You May Enjoy

Using the Google Cloud GPU through AI Platform

Having worked through the previous section on utilizing the Cloud TPU with AI Platform, we are ready to do the same with the GPU. As it turns out, the formats of the training script and the invocation command are very similar: apart from a few additional parameters and slight differences in the distributed strategy definition, everything else remains the same.
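As a rough illustration of the invocation side, a GPU job submission might look like the following. The job name, staging bucket, package path, region, runtime version, machine type, and accelerator settings shown here are placeholder values rather than the book's exact configuration; the CUSTOM scale tier and the master machine/accelerator flags are the GPU-specific additions relative to the TPU case:

# Hypothetical job submission; replace the names and values with your own.
gcloud ai-platform jobs submit training gpu_training_job_01 \
    --staging-bucket=gs://my-staging-bucket \
    --package-path=trainer \
    --module-name=trainer.task \
    --runtime-version=2.3 \
    --python-version=3.7 \
    --region=us-central1 \
    --scale-tier=CUSTOM \
    --master-machine-type=n1-standard-8 \
    --master-accelerator=type=nvidia-tesla-t4,count=1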

There are several distributed strategies (https://www.tensorflow.org/guide/distributed_training#types_of_strategies) currently available. For the TensorFlow Enterprise distribution in Google AI Platform, MirroredStrategy and TPUStrategy are the only two that are fully supported; all the others are experimental. Therefore, in this section's example, we will use MirroredStrategy. This strategy creates a copy of all the model's variables on each GPU, and as these variables are updated at each gradient descent step, the updated values are copied to each GPU synchronously. By default...
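To make the strategy definition concrete, the following is a minimal sketch of how a GPU training script sets up MirroredStrategy and builds a model inside its scope. The model architecture, optimizer, and loss here are illustrative placeholders rather than the book's example:

import tensorflow as tf

# MirroredStrategy replicates the model's variables on every visible GPU
# and keeps them in sync at each gradient descent step.
strategy = tf.distribute.MirroredStrategy()
print('Number of replicas in sync:', strategy.num_replicas_in_sync)

# Variables (layers, optimizer slots, metrics) must be created inside the
# strategy's scope so that they are mirrored across the GPUs.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation='relu', input_shape=(784,)),
        tf.keras.layers.Dense(10, activation='softmax')
    ])
    model.compile(
        optimizer='adam',
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy'])

# model.fit(...) can then be called as usual; each batch is split across
# the replicas and the gradient updates are applied synchronously.

This mirrors the point above: relative to the TPU version of the script, essentially only the strategy definition changes, while the data pipeline and the model.fit() call remain the same.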
