Building Data-Driven Applications with LlamaIndex

You're reading from Building Data-Driven Applications with LlamaIndex: A practical guide to retrieval-augmented generation (RAG) to enhance LLM applications.

Product type: Paperback
Published: May 2024
Publisher: Packt
ISBN-13: 9781835089507
Length: 368 pages
Edition: 1st
Author (1): Andrei Gheorghiu
Table of Contents (18 chapters)

Preface
Part 1: Introduction to Generative AI and LlamaIndex
Chapter 1: Understanding Large Language Models
Chapter 2: LlamaIndex: The Hidden Jewel - An Introduction to the LlamaIndex Ecosystem
Part 2: Starting Your First LlamaIndex Project
Chapter 3: Kickstarting Your Journey with LlamaIndex
Chapter 4: Ingesting Data into Our RAG Workflow
Chapter 5: Indexing with LlamaIndex
Part 3: Retrieving and Working with Indexed Data
Chapter 6: Querying Our Data, Part 1 – Context Retrieval
Chapter 7: Querying Our Data, Part 2 – Postprocessing and Response Synthesis
Chapter 8: Building Chatbots and Agents with LlamaIndex
Part 4: Customization, Prompt Engineering, and Final Words
Chapter 9: Customizing and Deploying Our LlamaIndex Project
Chapter 10: Prompt Engineering Guidelines and Best Practices
Chapter 11: Conclusion and Additional Resources
Index
Other Books You May Enjoy

Understanding how LlamaIndex uses prompts

Mechanically, a RAG-based application follows exactly the same rules and principles of interaction that an ordinary user follows in a chat session with an LLM. The major difference is that RAG acts as a prompt engineer on steroids: behind the scenes, for almost every indexing, retrieval, metadata extraction, or final response synthesis operation, the RAG framework programmatically produces prompts, enriches them with context, and then sends them to the LLM.
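To make this concrete, here is a minimal sketch of that behind-the-scenes step: a predefined prompt template is filled in with retrieved context before the call to the LLM. The template text and the `build_rag_prompt` helper below are illustrative stand-ins modeled on this pattern, not the exact defaults LlamaIndex ships with.

```python
# A typical question-answering prompt template. The framework keeps
# placeholders ({context_str}, {query_str}) that are filled in at
# query time with the retrieved context and the user's question.
QA_TEMPLATE = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and not prior knowledge, "
    "answer the query.\n"
    "Query: {query_str}\n"
    "Answer: "
)

def build_rag_prompt(retrieved_chunks: list[str], query: str) -> str:
    """Join the retrieved chunks and inject them into the template."""
    context_str = "\n\n".join(retrieved_chunks)
    return QA_TEMPLATE.format(context_str=context_str, query_str=query)

prompt = build_rag_prompt(
    ["LlamaIndex is a data framework for LLM applications."],
    "What is LlamaIndex?",
)
```

The resulting string is what actually reaches the LLM: the user never sees the template, only the final answer synthesized from it.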

In LlamaIndex, each type of operation that requires an LLM has a default prompt that is used as a template. Consider TitleExtractor, one of the metadata extractors we discussed in Chapter 4, Ingesting Data into Our RAG Workflow. The TitleExtractor class uses two predefined prompt templates to extract titles from the text nodes inside documents. It does this in two steps:

  1. It gets potential titles from...
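The two-step, two-template flow can be sketched as follows. This is a simplified illustration of the mechanism, not LlamaIndex's actual implementation: the template texts, the `extract_title` helper, and the `llm_call` callable are all hypothetical stand-ins.

```python
# Step-1 template: asked once per text node to propose a candidate title.
NODE_TEMPLATE = (
    "Give a title that summarizes the following text:\n"
    "{context_str}\n"
    "Title: "
)

# Step-2 template: combines the per-node candidates into one final title.
COMBINE_TEMPLATE = (
    "Candidate titles: {context_str}\n"
    "What is the best comprehensive title for the document?\n"
    "Title: "
)

def extract_title(node_texts, llm_call):
    """Two LLM-backed passes: per-node candidates, then one combine call."""
    # Step 1: one prompt (and one LLM call) per node.
    candidates = [
        llm_call(NODE_TEMPLATE.format(context_str=text))
        for text in node_texts
    ]
    # Step 2: a single prompt merges all candidates into the final title.
    return llm_call(COMBINE_TEMPLATE.format(context_str=", ".join(candidates)))

# A stub LLM for demonstration; it records the prompts it receives.
calls = []
def fake_llm(prompt):
    calls.append(prompt)
    return "Title X"

title = extract_title(["node one", "node two"], fake_llm)
```

Note the cost implication of this design: extracting one title for a document with N nodes costs N + 1 LLM calls, which is worth keeping in mind when running extractors over large ingestion pipelines.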