Distributed Machine Learning with Python
Accelerating model training and serving with distributed systems
By Guanhua Wang (Packt, April 2022)

Vanilla model parallelism is inefficient

As noted in numerous academic papers and industry technical reports, vanilla model parallelism is inefficient in terms of both GPU computation and memory utilization. To illustrate why, let's look at the simple NLP model shown in Figure 6.1:

Figure 6.1 – A simple NLP model with three layers

As shown in Figure 6.1, the training input is passed through our three-layer NLP model, with the layers denoted as Layer 1, Layer 2, and Layer 3. After forward propagation, the model generates its output.
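To make the setup concrete, here is a minimal PyTorch sketch of such a three-layer model. The use of nn.Linear layers and the specific dimensions are illustrative assumptions, not the exact architecture from the figure:

```python
import torch
import torch.nn as nn

# A minimal sketch of the three-layer model in Figure 6.1.
# Layer types and sizes are illustrative assumptions.
class ThreeLayerModel(nn.Module):
    def __init__(self, in_dim=512, hidden_dim=512, out_dim=10):
        super().__init__()
        self.layer1 = nn.Linear(in_dim, hidden_dim)
        self.layer2 = nn.Linear(hidden_dim, hidden_dim)
        self.layer3 = nn.Linear(hidden_dim, out_dim)

    def forward(self, x):
        # Forward propagation: input flows through Layer 1 -> Layer 2 -> Layer 3.
        x = torch.relu(self.layer1(x))
        x = torch.relu(self.layer2(x))
        return self.layer3(x)

model = ThreeLayerModel()
output = model(torch.randn(32, 512))  # a batch of 32 training inputs
```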

Now, let's assume we have three GPUs, each holding just one layer of the original model, as shown in Figure 6.2:

Figure 6.2 – Model partition on three GPUs

In Figure 6.2, GPU1 holds Layer 1 of the model. Similarly, GPU2 holds Layer 2 and GPU3 holds Layer 3.
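A rough sketch of this placement in PyTorch might look as follows. It assumes three CUDA devices (cuda:0 through cuda:2, standing in for GPU1 through GPU3) and reuses the illustrative layer sizes from the earlier sketch:

```python
import torch
import torch.nn as nn

# A sketch of vanilla model parallelism: each layer lives on its own GPU,
# and activations are moved between devices during the forward pass.
# Assumes at least three CUDA devices are available.
class ModelParallelThreeLayer(nn.Module):
    def __init__(self, in_dim=512, hidden_dim=512, out_dim=10):
        super().__init__()
        self.layer1 = nn.Linear(in_dim, hidden_dim).to('cuda:0')     # GPU1
        self.layer2 = nn.Linear(hidden_dim, hidden_dim).to('cuda:1')  # GPU2
        self.layer3 = nn.Linear(hidden_dim, out_dim).to('cuda:2')     # GPU3

    def forward(self, x):
        x = torch.relu(self.layer1(x.to('cuda:0')))
        x = torch.relu(self.layer2(x.to('cuda:1')))  # GPU1 is now idle
        return self.layer3(x.to('cuda:2'))           # GPU1 and GPU2 are idle
```

Note the serial dependency this placement creates: while one GPU computes its layer, the other two have nothing to do, which is precisely the kind of low per-GPU utilization this section is concerned with.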

Now, we...
