Decoding Large Language Models
An exhaustive guide to understanding, implementing, and optimizing LLMs for NLP applications

Product type: Paperback
Published in: Oct 2024
Publisher: Packt
ISBN-13: 9781835084656
Length: 396 pages
Edition: 1st Edition
Author: Irena Cronin
Table of Contents (22 chapters)

Preface
Part 1: The Foundations of Large Language Models (LLMs)
  Chapter 1: LLM Architecture
  Chapter 2: How LLMs Make Decisions
Part 2: Mastering LLM Development
  Chapter 3: The Mechanics of Training LLMs
  Chapter 4: Advanced Training Strategies
  Chapter 5: Fine-Tuning LLMs for Specific Applications
  Chapter 6: Testing and Evaluating LLMs
Part 3: Deployment and Enhancing LLM Performance
  Chapter 7: Deploying LLMs in Production
  Chapter 8: Strategies for Integrating LLMs
  Chapter 9: Optimization Techniques for Performance
  Chapter 10: Advanced Optimization and Efficiency
Part 4: Issues, Practical Insights, and Preparing for the Future
  Chapter 11: LLM Vulnerabilities, Biases, and Legal Implications
  Chapter 12: Case Studies – Business Applications and ROI
  Chapter 13: The Ecosystem of LLM Tools and Frameworks
  Chapter 14: Preparing for GPT-5 and Beyond
  Chapter 15: Conclusion and Looking Forward
Index
Other Books You May Enjoy
Recurrent neural networks (RNNs) and their limitations

RNNs are a class of artificial neural networks that were designed to handle sequential data. They are particularly well-suited to tasks where the input data is temporally correlated or has a sequential nature, such as time series analysis, NLP, and speech recognition.

Overview of RNNs

Here are some essential aspects of how RNNs function:

  • Sequence processing: Unlike feedforward neural networks, RNNs have loops in them, allowing information to persist. This is crucial for sequence processing, where the current output depends on both the current input and the previous inputs and outputs.
  • Hidden states: RNNs maintain hidden states that capture temporal information. The hidden state is updated at each step of the input sequence, carrying forward information from previously seen elements in the sequence.
  • Parameter sharing: RNNs share the same parameters across every step of the sequence. This means that they apply the...
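The three properties above can be seen in a minimal sketch of a single RNN cell. This is an illustrative example, not the book's code; the weight names (W_xh, W_hh, b_h), the tanh activation, and the dimensions are assumptions chosen for clarity:

```python
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size = 4, 3

# These weights are shared across all time steps (parameter sharing).
W_xh = rng.standard_normal((hidden_size, input_size)) * 0.1   # input -> hidden
W_hh = rng.standard_normal((hidden_size, hidden_size)) * 0.1  # hidden -> hidden
b_h = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    """One recurrence step: the new hidden state mixes the current input
    with the previous hidden state, so information persists over time."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

# Sequence processing: feed 5 input vectors in order, carrying the
# hidden state forward so each step sees a summary of the past.
h = np.zeros(hidden_size)
for x_t in rng.standard_normal((5, input_size)):
    h = rnn_step(x_t, h)

print(h.shape)  # (3,)
```

Note that the same `rnn_step` function (and the same weights) is applied at every position in the sequence; only the hidden state `h` changes, which is exactly what lets an RNN handle sequences of arbitrary length with a fixed number of parameters.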