Building Data-Driven Applications with LlamaIndex
A practical guide to retrieval-augmented generation (RAG) to enhance LLM applications

Product type: Paperback
Published: May 2024
Publisher: Packt
ISBN-13: 9781835089507
Length: 368 pages
Edition: 1st
Author: Andrei Gheorghiu
Table of Contents (18 chapters)

Preface
Part 1: Introduction to Generative AI and LlamaIndex
Chapter 1: Understanding Large Language Models
Chapter 2: LlamaIndex: The Hidden Jewel - An Introduction to the LlamaIndex Ecosystem
Part 2: Starting Your First LlamaIndex Project
Chapter 3: Kickstarting Your Journey with LlamaIndex
Chapter 4: Ingesting Data into Our RAG Workflow
Chapter 5: Indexing with LlamaIndex
Part 3: Retrieving and Working with Indexed Data
Chapter 6: Querying Our Data, Part 1 – Context Retrieval
Chapter 7: Querying Our Data, Part 2 – Postprocessing and Response Synthesis
Chapter 8: Building Chatbots and Agents with LlamaIndex
Part 4: Customization, Prompt Engineering, and Final Words
Chapter 9: Customizing and Deploying Our LlamaIndex Project
Chapter 10: Prompt Engineering Guidelines and Best Practices
Chapter 11: Conclusion and Additional Resources
Index
Other Books You May Enjoy

Estimating the potential cost of building and querying Indexes

Much like metadata extractors, Indexes raise concerns about costs and data privacy. That is because, as we have seen in this chapter, most Indexes rely on LLMs to some extent – during building, querying, or both.

Repeatedly calling LLMs to process large volumes of text can quickly blow through your budget if you’re not paying attention to your potential costs. For example, if you are building a TreeIndex or a KeywordTableIndex from thousands of documents, the constant LLM invocations during Index construction will carry a significant cost. Embeddings can also rely on calls to external models, so the VectorStoreIndex is another important source of costs. In my experience, prevention and prediction are the best ways to avoid nasty surprises and keep your expenses low.
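One practical way to predict these costs is a dry run: build the Index with mock LLM and embedding models and count the tokens that would have been sent to the provider. The snippet below is a minimal sketch of that idea using LlamaIndex’s MockLLM, MockEmbedding, and TokenCountingHandler; it assumes the newer llama_index.core import paths and a placeholder ./data folder, so adjust the names and paths to your own project.

```python
import tiktoken

from llama_index.core import (
    MockEmbedding,
    Settings,
    SimpleDirectoryReader,
    VectorStoreIndex,
)
from llama_index.core.callbacks import CallbackManager, TokenCountingHandler
from llama_index.core.llms import MockLLM

# Count tokens with the same tokenizer the real model would use
token_counter = TokenCountingHandler(
    tokenizer=tiktoken.encoding_for_model("gpt-3.5-turbo").encode
)

# Mock models make no API calls, so this dry run costs nothing
Settings.llm = MockLLM(max_tokens=256)
Settings.embed_model = MockEmbedding(embed_dim=1536)
Settings.callback_manager = CallbackManager([token_counter])

# Simulate Index construction over our documents (./data is a placeholder path)
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

print("Embedding tokens:", token_counter.total_embedding_token_count)
print("LLM prompt tokens:", token_counter.prompt_llm_token_count)
print("LLM completion tokens:", token_counter.completion_llm_token_count)
```

Multiply the reported token counts by your provider’s per-token pricing to get a rough estimate, and swap VectorStoreIndex for TreeIndex or KeywordTableIndex to compare how much heavier their LLM-driven construction is.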

Just like with metadata extraction, I’d start by observing and applying some best practices:

  • Use Indexes...