Accelerate Model Training with PyTorch 2.X

You're reading from Accelerate Model Training with PyTorch 2.X: Build more accurate models by boosting the model training process

Product type: Paperback
Published in: Apr 2024
Publisher: Packt
ISBN-13: 9781805120100
Length: 230 pages
Edition: 1st

Author (1): Maicon Melo Alves
Table of Contents (17)

Preface
Part 1: Paving the Way
Chapter 1: Deconstructing the Training Process
Chapter 2: Training Models Faster
Part 2: Going Faster
Chapter 3: Compiling the Model
Chapter 4: Using Specialized Libraries
Chapter 5: Building an Efficient Data Pipeline
Chapter 6: Simplifying the Model
Chapter 7: Adopting Mixed Precision
Part 3: Going Distributed
Chapter 8: Distributed Training at a Glance
Chapter 9: Training with Multiple CPUs
Chapter 10: Training with Multiple GPUs
Chapter 11: Training with Multiple Machines
Index
Other Books You May Enjoy

Training with Multiple Machines

We’ve finally arrived at the last mile of our performance improvement journey. In this final stage, we will broaden our horizons and learn how to distribute the training process across multiple machines or servers. So, instead of using four or eight devices, we can use dozens or hundreds of computing resources to train our models.

An environment composed of multiple connected servers is usually called a computing cluster, or simply a cluster. Such environments are shared among multiple users and have technical particularities, such as a high-bandwidth, low-latency network.

In this chapter, we’ll describe the characteristics of computing clusters that are most relevant to the distributed training process. After that, we will learn how to distribute the training process across multiple machines using Open MPI as the launcher and NCCL as the communication backend.
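
To set the stage, here is a minimal sketch of what that launch pattern typically looks like in PyTorch. This is not the book's own code: the hostname, port, and toy model are illustrative assumptions; only the Open MPI environment variables and the torch.distributed calls are standard.

```python
import os
import torch
import torch.nn.functional as F
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # Open MPI exports these variables to every process it launches.
    rank = int(os.environ["OMPI_COMM_WORLD_RANK"])
    world_size = int(os.environ["OMPI_COMM_WORLD_SIZE"])
    local_rank = int(os.environ["OMPI_COMM_WORLD_LOCAL_RANK"])

    # torch.distributed still needs a rendezvous point; "node01" and
    # the port are placeholder values you would set for your cluster.
    os.environ.setdefault("MASTER_ADDR", "node01")
    os.environ.setdefault("MASTER_PORT", "29500")

    # NCCL is the communication backend; each process drives one GPU.
    dist.init_process_group(backend="nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(local_rank)

    # A toy linear model stands in for a real network.
    model = torch.nn.Linear(10, 1).to(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    # One synthetic training step: during backward(), DDP averages
    # gradients across every process on every machine.
    inputs = torch.randn(32, 10, device=local_rank)
    targets = torch.randn(32, 1, device=local_rank)
    loss = F.mse_loss(model(inputs), targets)
    loss.backward()
    optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

You would then launch one process per GPU across the machines with something like mpirun -np 16 -H node01:8,node02:8 python train.py, where the hostnames and slot counts are hypothetical.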

Here is what you will learn as part of this chapter:

  • The most...