Decoding Large Language Models

You're reading from Decoding Large Language Models: An exhaustive guide to understanding, implementing, and optimizing LLMs for NLP applications

Product type: Paperback
Published in: Oct 2024
Publisher: Packt
ISBN-13: 9781835084656
Length: 396 pages
Edition: 1st Edition
Author: Irena Cronin
Table of Contents (22 chapters)

Preface
Part 1: The Foundations of Large Language Models (LLMs)
- Chapter 1: LLM Architecture
- Chapter 2: How LLMs Make Decisions
Part 2: Mastering LLM Development
- Chapter 3: The Mechanics of Training LLMs
- Chapter 4: Advanced Training Strategies
- Chapter 5: Fine-Tuning LLMs for Specific Applications
- Chapter 6: Testing and Evaluating LLMs
Part 3: Deployment and Enhancing LLM Performance
- Chapter 7: Deploying LLMs in Production
- Chapter 8: Strategies for Integrating LLMs
- Chapter 9: Optimization Techniques for Performance
- Chapter 10: Advanced Optimization and Efficiency
Part 4: Issues, Practical Insights, and Preparing for the Future
- Chapter 11: LLM Vulnerabilities, Biases, and Legal Implications
- Chapter 12: Case Studies – Business Applications and ROI
- Chapter 13: The Ecosystem of LLM Tools and Frameworks
- Chapter 14: Preparing for GPT-5 and Beyond
- Chapter 15: Conclusion and Looking Forward
Index
Other Books You May Enjoy

Quantization – doing more with less

Quantization is a model optimization technique that converts the numbers used in a model from higher-precision formats, such as 32-bit floating point, to lower-precision formats, such as 8-bit integers. The main goals of quantization are to reduce the model's size and to make it run faster during inference, the process of making predictions with the model.
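To make the idea concrete, here is a minimal sketch (not from the book) of symmetric linear quantization using NumPy: float32 values are mapped to int8 codes via a single scale factor, then approximately recovered by dequantization. The function names and the toy weight matrix are illustrative assumptions.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric linear quantization of float32 values to int8.

    The scale maps the largest-magnitude value onto the int8 range [-127, 127].
    """
    scale = float(np.abs(x).max()) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 values from int8 codes."""
    return q.astype(np.float32) * scale

# A tiny weight matrix standing in for one layer of an LLM.
weights = np.random.default_rng(0).normal(size=(4, 4)).astype(np.float32)
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

print(q.nbytes, weights.nbytes)            # int8 storage is 4x smaller than float32
print(np.max(np.abs(weights - restored)))  # rounding error is at most half a quantization step
```

In practice, LLM quantization schemes apply a scale per tensor, per channel, or per block rather than one global scale, which keeps the rounding error small where weight magnitudes vary.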

When quantizing an LLM, several key benefits and considerations come into play, which we will discuss next.

Model size reduction

Model size reduction via quantization is an essential technique for adapting LLMs to environments with limited storage and memory. The process involves several key aspects:

  • Bit precision: Traditional LLMs often use 32-bit floating-point numbers to represent the weights in their neural networks. Quantization reduces these to lower-precision formats, such as 16-bit, 8-bit, or even fewer bits. The reduction in bit precision...
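The storage saving from reduced bit precision is simple arithmetic: weight memory scales linearly with the number of bits per parameter. A back-of-the-envelope sketch (the 7-billion-parameter count is an arbitrary example, not a figure from the book):

```python
# Approximate weight memory for a hypothetical 7-billion-parameter model
# at different bit widths (ignoring activations, optimizer state, etc.).
PARAMS = 7_000_000_000

for bits in (32, 16, 8, 4):
    gigabytes = PARAMS * bits / 8 / 1e9
    print(f"{bits:>2}-bit weights: {gigabytes:.1f} GB")
# 32-bit weights: 28.0 GB
# 16-bit weights: 14.0 GB
#  8-bit weights: 7.0 GB
#  4-bit weights: 3.5 GB
```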
The rest of the chapter is locked.