Essential Guide to LLMOps

You're reading from Essential Guide to LLMOps: Implementing effective strategies for Large Language Models in deployment and continuous improvement

Product type: Paperback
Published: Jul 2024
Publisher: Packt
ISBN-13: 9781835887509
Length: 190 pages
Edition: 1st Edition
Author: Ryan Doan
Table of Contents (14 chapters)

Preface
Part 1: Foundations of LLMOps
  Chapter 1: Introduction to LLMs and LLMOps
  Chapter 2: Reviewing LLMOps Components
Part 2: Tools and Strategies in LLMOps
  Chapter 3: Processing Data in LLMOps Tools
  Chapter 4: Developing Models via LLMOps
  Chapter 5: LLMOps Review and Compliance
Part 3: Advanced LLMOps Applications and Future Outlook
  Chapter 6: LLMOps Strategies for Inference, Serving, and Scalability
  Chapter 7: LLMOps Monitoring and Continuous Improvement
  Chapter 8: The Future of LLMOps and Emerging Technologies
Index
Other Books You May Enjoy

Inference, serving, and scalability

In the realm of LLMs, the topics of inference, serving, and scalability are crucial for efficient operation and an optimal user experience. These aspects cover how the model computes its outputs (inference), how those outputs are delivered to end users (serving), and how the system adapts to varying loads (scalability).

Online and batch inference

Inference falls into two main categories: online and batch processing. Online inference refers to the real-time processing of individual queries, where responses are generated instantly. Batch inference, on the other hand, processes large volumes of queries at once, which is more efficient for tasks that don't require immediate responses.
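The distinction can be sketched in a few lines of Python. This is a minimal illustration, not a production serving stack: `generate` is a hypothetical stand-in for a call to a deployed LLM, and the two wrapper functions only show the shape of each pattern, trading latency for throughput.

```python
# Hypothetical stand-in for an LLM call; a real deployment would invoke
# a served model (for example, over HTTP or via a local runtime).
def generate(prompt: str) -> str:
    return f"response to: {prompt}"

# Online inference: one query in, one response out, as soon as it arrives.
# Optimizes for low latency on each individual request.
def online_inference(prompt: str) -> str:
    return generate(prompt)

# Batch inference: accumulate many queries and process them together.
# Optimizes for throughput when responses are not needed immediately.
def batch_inference(prompts: list[str]) -> list[str]:
    return [generate(p) for p in prompts]

print(online_inference("What is LLMOps?"))
print(batch_inference(["summarize document 1", "summarize document 2"]))
```

In practice, the batch path would typically be scheduled (e.g., nightly over accumulated requests), while the online path sits behind a request handler that must meet a latency budget.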

Consider, for instance, a conversational AI chatbot used by a large retail company, where online inference plays a crucial role. The chatbot is tasked with interacting with customers in real time, answering their queries, resolving issues, and providing product...
