Deep Learning with PyTorch Lightning
Swiftly build high-performance Artificial Intelligence (AI) models using Python

Product type: Paperback
Published: Apr 2022
Publisher: Packt
ISBN-13: 9781800561618
Length: 366 pages
Edition: 1st
Authors (2): Kunal Sawarkar, Dheeraj Arremsetty
Table of Contents (15)

Preface
1. Section 1: Kickstarting with PyTorch Lightning
2. Chapter 1: PyTorch Lightning Adventure
3. Chapter 2: Getting off the Ground with the First Deep Learning Model
4. Chapter 3: Transfer Learning Using Pre-Trained Models
5. Chapter 4: Ready-to-Cook Models from Lightning Flash
6. Section 2: Solving using PyTorch Lightning
7. Chapter 5: Time Series Models
8. Chapter 6: Deep Generative Models
9. Chapter 7: Semi-Supervised Learning
10. Chapter 8: Self-Supervised Learning
11. Section 3: Advanced Topics
12. Chapter 9: Deploying and Scoring Models
13. Chapter 10: Scaling and Managing Training
14. Other Books You May Enjoy

Scaling up training

Scaling up training means speeding up the training process for large amounts of data and making better use of GPUs and TPUs. In this section, we will cover tips on how to use the provisions in PyTorch Lightning efficiently to accomplish this.

Speeding up model training using multiple workers

How can the PyTorch Lightning framework help speed up model training? One useful parameter to know is num_workers, which comes from PyTorch's DataLoader; PyTorch Lightning builds on top of it by giving advice about the number of workers to use.

Solution

The PyTorch Lightning framework offers a number of provisions for speeding up model training, such as the following:

  • You can set a non-zero value for the num_workers argument of DataLoader to speed up model training, since worker subprocesses then load and prepare batches in parallel with the training loop. The following code snippet provides an example of this:
    import torch.utils.data as data
    ...
    dataloader = data.DataLoader(dataset, num_workers=4, ...)

The optimal num_workers value depends on the batch size and configuration...
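To make the snippet above concrete, here is a minimal, self-contained sketch. The TensorDataset with random features is a hypothetical stand-in for your real dataset; everything else uses the standard PyTorch DataLoader API that the section describes.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical toy dataset: 1,000 samples with 10 features each
features = torch.randn(1000, 10)
labels = torch.randint(0, 2, (1000,))
dataset = TensorDataset(features, labels)

# num_workers=4 spawns four worker subprocesses that load and collate
# batches in parallel, instead of loading everything in the main
# process (num_workers=0, the default)
dataloader = DataLoader(dataset, batch_size=64, num_workers=4)

for batch_features, batch_labels in dataloader:
    pass  # a LightningModule's training_step would consume this batch
```

With a batch size of 64, the 1,000 samples yield 16 batches per epoch; the workers prefetch upcoming batches while the current one is being processed, which is where the speedup comes from.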
