Applied Machine Learning and High-Performance Computing on AWS

You're reading from Applied Machine Learning and High-Performance Computing on AWS: Accelerate the development of machine learning applications following architectural best practices.

Product type: Paperback
Published in: Dec 2022
Publisher: Packt
ISBN-13: 9781803237015
Length: 382 pages
Edition: 1st
Authors (4):

  • Trenton Potgieter
  • Shreyas Subramanian
  • Farooq Sabir
  • Mani Khanuja
Table of Contents (20)

Preface
Part 1: Introducing High-Performance Computing
  Chapter 1: High-Performance Computing Fundamentals (FREE CHAPTER)
  Chapter 2: Data Management and Transfer
  Chapter 3: Compute and Networking
  Chapter 4: Data Storage
Part 2: Applied Modeling
  Chapter 5: Data Analysis
  Chapter 6: Distributed Training of Machine Learning Models
  Chapter 7: Deploying Machine Learning Models at Scale
  Chapter 8: Optimizing and Managing Machine Learning Models for Edge Deployment
  Chapter 9: Performance Optimization for Real-Time Inference
  Chapter 10: Data Visualization
Part 3: Driving Innovation Across Industries
  Chapter 11: Computational Fluid Dynamics
  Chapter 12: Genomics
  Chapter 13: Autonomous Vehicles
  Chapter 14: Numerical Optimization
Index
Other Books You May Enjoy

Reducing the memory footprint of DL models

Once we have trained a model, we need to deploy it to get predictions, which are then used to provide business insights. Sometimes, the model is larger than the memory of any single GPU available on the market today. In that case, we have two options: reduce the memory footprint of the model, or use distributed deployment techniques. In this section, we will discuss the following techniques for reducing the memory footprint of a model:

  • Pruning
  • Quantization
  • Model compilation
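Before looking at each technique in turn, the memory savings at stake are easy to see with quantization: storing weights as 8-bit integers instead of 32-bit floats cuts their storage by 4x. The following is a minimal sketch of symmetric linear quantization in NumPy (illustrative only, not the book's own tooling; the weight tensor here is a hypothetical stand-in for a trained layer):

```python
import numpy as np

# Hypothetical float32 weight tensor standing in for a trained layer
weights = np.random.randn(1024, 1024).astype(np.float32)

# Symmetric linear quantization to int8:
# map [-max|w|, +max|w|] onto the integer range [-127, 127]
scale = np.abs(weights).max() / 127.0
q_weights = np.round(weights / scale).astype(np.int8)

# Dequantize (approximately recover the floats) at inference time
deq = q_weights.astype(np.float32) * scale

print(weights.nbytes // q_weights.nbytes)  # int8 storage is 4x smaller
```

The trade-off is a small rounding error per weight (at most half a quantization step), which in practice often has little effect on model accuracy.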

Let’s dive deeper into each of these techniques, starting with pruning.

Pruning

Pruning is the technique of eliminating weights and parameters within a DL model that have little or no impact on the model's accuracy but a significant impact on its size and inference speed. The idea behind pruning methods is to make the model memory- and power-efficient, reducing...
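One common variant of the idea is magnitude-based pruning: weights whose absolute value falls below a threshold are assumed to contribute little and are zeroed out. The following NumPy sketch illustrates this (an illustrative assumption, not the book's implementation; frameworks provide built-in utilities for this, such as PyTorch's torch.nn.utils.prune):

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float = 0.3) -> np.ndarray:
    """Zero out the fraction `sparsity` of weights with the smallest magnitude."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    # The k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

w = np.random.randn(64, 64).astype(np.float32)
pruned = magnitude_prune(w, sparsity=0.3)
print((pruned == 0).mean())  # roughly 0.3 of the weights are now zero
```

Note that zeroing weights alone only shrinks the model if the deployment stack stores or executes it in a sparse format; otherwise the zeros still occupy memory, which is why pruning is typically paired with sparse storage or structured pruning of whole channels.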
