Essential Guide to LLMOps

You're reading from   Essential Guide to LLMOps Implementing effective strategies for Large Language Models in deployment and continuous improvement

Product type Paperback
Published in Jul 2024
Publisher Packt
ISBN-13 9781835887509
Length 190 pages
Edition 1st Edition
Author (1): Ryan Doan

Table of Contents (14)

Preface
1. Part 1: Foundations of LLMOps
2. Chapter 1: Introduction to LLMs and LLMOps
3. Chapter 2: Reviewing LLMOps Components
4. Part 2: Tools and Strategies in LLMOps
5. Chapter 3: Processing Data in LLMOps Tools
6. Chapter 4: Developing Models via LLMOps
7. Chapter 5: LLMOps Review and Compliance
8. Part 3: Advanced LLMOps Applications and Future Outlook
9. Chapter 6: LLMOps Strategies for Inference, Serving, and Scalability
10. Chapter 7: LLMOps Monitoring and Continuous Improvement
11. Chapter 8: The Future of LLMOps and Emerging Technologies
12. Index
13. Other Books You May Enjoy

Model pre-training and fine-tuning

The processes of pre-training and fine-tuning are fundamental in the life cycle of LLMOps. These steps are pivotal in preparing models, especially transformer-based ones, to understand and generate language effectively.

Pre-training

Let’s run through the pre-training process for the sentence “the recent advancements in AI” in a transformer model. The sentence is first tokenized into ["the", "recent", "advance", "ments", "in", ...] and then looked up in the vocabulary mapping we created previously – that is, {"the": 0, "recent": 1, "advance": 2, "ments": 3, "in": 4, ...}. Each token is converted into its corresponding ID from the vocabulary mapping:

["the", "recent", "advance", "ments", "in", ...] → [0, 1, 2, 3, 4, ...]
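This token-to-ID conversion can be sketched in a few lines of Python. The vocabulary values and the subword split of “advancements” below are the toy examples from the text, not a real tokenizer; the `tokenize` helper is a hypothetical stand-in for a trained subword tokenizer such as BPE.

```python
# Toy vocabulary mapping, as in the chapter's example (illustrative IDs only).
vocab = {"the": 0, "recent": 1, "advance": 2, "ments": 3, "in": 4, "ai": 5}

def tokenize(text):
    """Naive illustration of subword tokenization: lowercase, split on
    whitespace, and split 'advancements' into the two subwords present
    in our toy vocabulary. A real tokenizer (e.g., BPE) learns these
    splits from data."""
    tokens = []
    for word in text.lower().split():
        if word == "advancements":  # hypothetical learned subword split
            tokens.extend(["advance", "ments"])
        else:
            tokens.append(word)
    return tokens

tokens = tokenize("the recent advancements in AI")
ids = [vocab[t] for t in tokens]  # map each token to its vocabulary ID
print(tokens)  # ['the', 'recent', 'advance', 'ments', 'in', 'ai']
print(ids)     # [0, 1, 2, 3, 4, 5]
```

These integer IDs, not the raw strings, are what the transformer's embedding layer consumes during pre-training.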

In models similar to Llama 2, which typically...
