Decoding Large Language Models

You're reading from Decoding Large Language Models: an exhaustive guide to understanding, implementing, and optimizing LLMs for NLP applications.

Product type: Paperback
Published in: Oct 2024
Publisher: Packt
ISBN-13: 9781835084656
Length: 396 pages
Edition: 1st Edition
Author: Irena Cronin
Table of Contents (22)

- Preface
1. Part 1: The Foundations of Large Language Models (LLMs)
2. Chapter 1: LLM Architecture
3. Chapter 2: How LLMs Make Decisions
4. Part 2: Mastering LLM Development
5. Chapter 3: The Mechanics of Training LLMs
6. Chapter 4: Advanced Training Strategies
7. Chapter 5: Fine-Tuning LLMs for Specific Applications
8. Chapter 6: Testing and Evaluating LLMs
9. Part 3: Deployment and Enhancing LLM Performance
10. Chapter 7: Deploying LLMs in Production
11. Chapter 8: Strategies for Integrating LLMs
12. Chapter 9: Optimization Techniques for Performance
13. Chapter 10: Advanced Optimization and Efficiency
14. Part 4: Issues, Practical Insights, and Preparing for the Future
15. Chapter 11: LLM Vulnerabilities, Biases, and Legal Implications
16. Chapter 12: Case Studies – Business Applications and ROI
17. Chapter 13: The Ecosystem of LLM Tools and Frameworks
18. Chapter 14: Preparing for GPT-5 and Beyond
19. Chapter 15: Conclusion and Looking Forward
20. Index
21. Other Books You May Enjoy

Comparative analysis – Transformer versus RNN models

When comparing Transformer models to RNN models, we’re contrasting two fundamentally different approaches to processing sequence data, each with its unique strengths and challenges. This section will provide a comparative analysis of these two types of models:

  • Performance on long sequences: Transformers generally outperform RNNs on tasks involving long sequences because of their ability to attend to all parts of the sequence simultaneously
  • Training speed and efficiency: Transformers can be trained more efficiently on hardware accelerators such as GPUs and TPUs due to their parallelizable architecture
  • Flexibility and adaptability: Transformers have shown greater flexibility and have been successfully applied to a wider range of tasks beyond sequence processing, including image recognition and playing games
  • Data requirements: RNNs can sometimes be more data-efficient, requiring less data to reach good performance
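The parallelism contrast in the first two bullets can be sketched in a few lines of NumPy. This is a toy illustration, not a trained model: all weight matrices are random, and the "RNN" and "attention" below are minimal versions of the real layers. The key difference is structural — the RNN must loop over time steps because each hidden state depends on the previous one, while self-attention computes every token's interaction with every other token in a single matrix product, which is what makes Transformers easy to parallelize on GPUs and TPUs.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 6, 4
x = rng.standard_normal((seq_len, d))   # toy input: 6 tokens, 4-dim embeddings

# --- RNN-style processing: inherently sequential ---
# h_t depends on h_{t-1}, so this loop cannot be parallelized across time.
W_x = rng.standard_normal((d, d)) * 0.1
W_h = rng.standard_normal((d, d)) * 0.1

def rnn_outputs(x):
    h = np.zeros(d)
    outs = []
    for t in range(len(x)):             # one step at a time
        h = np.tanh(x[t] @ W_x + h @ W_h)
        outs.append(h)
    return np.stack(outs)

# --- Self-attention: all positions at once ---
# One (seq_len x seq_len) score matrix captures every pairwise interaction,
# so the whole sequence is processed in a handful of matrix products.
W_q, W_k, W_v = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))

def attention_outputs(x):
    Q, K, V = x @ W_q, x @ W_k, x @ W_v
    scores = Q @ K.T / np.sqrt(d)       # pairwise token interactions
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
    return weights @ V                  # each output mixes all value vectors

print(rnn_outputs(x).shape)        # (6, 4)
print(attention_outputs(x).shape)  # (6, 4)
```

Both layers map a sequence of token vectors to a sequence of the same shape; the difference is that the attention version has no step-to-step dependency, so its matrix products can be batched across the whole sequence — and, with a causal mask, across all training positions at once.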