Building LLM Powered Applications

You're reading from Building LLM Powered Applications: Create intelligent apps and agents with large language models

Product type Paperback
Published in May 2024
Publisher Packt
ISBN-13 9781835462317
Length 342 pages
Edition 1st Edition
Author (1): Valentina Alto

Table of Contents (16 chapters)

Preface
1. Introduction to Large Language Models
2. LLMs for AI-Powered Applications
3. Choosing an LLM for Your Application
4. Prompt Engineering
5. Embedding LLMs within Your Applications
6. Building Conversational Applications
7. Search and Recommendation Engines with LLMs
8. Using LLMs with Structured Data
9. Working with Code
10. Building Multimodal Applications with LLMs
11. Fine-Tuning Large Language Models
12. Responsible AI
13. Emerging Trends and Innovations
14. Other Books You May Enjoy
15. Index

What this book covers

Chapter 1, Introduction to Large Language Models, provides an introduction to and deep dive into LLMs, a powerful set of deep learning neural networks in the domain of generative AI. It introduces the concept of LLMs, how they differ from classical machine learning models, and the relevant jargon. It also discusses the architecture of the most popular LLMs, before exploring how LLMs are trained and consumed and comparing base LLMs with fine-tuned LLMs. By the end of this chapter, you will understand what LLMs are and where they sit in the landscape of AI, creating the basis for the subsequent chapters.

Chapter 2, LLMs for AI-Powered Applications, explores how LLMs are revolutionizing the world of software development, leading to a new era of AI-powered applications. By the end of this chapter, you will have a clearer picture of how LLMs can be embedded in different application scenarios, with the help of new AI orchestrator frameworks that are currently available in the AI development market.

Chapter 3, Choosing an LLM for Your Application, highlights how different LLMs may have different architectures, sizes, training data, capabilities, and limitations. Choosing the right LLM for your application is not a trivial decision as it can significantly impact the performance, quality, and cost of your solution. In this chapter, we will navigate the process of choosing the right LLM for your application. We will discuss the most promising LLMs in the market, the main criteria and tools to use when comparing LLMs, and the various trade-offs between size and performance. By the end of this chapter, you should have a clear understanding of how to choose the right LLM for your application and how to use it effectively and responsibly.

Chapter 4, Prompt Engineering, explains why prompt engineering is a crucial activity when designing LLM-powered applications: prompts have a massive impact on the performance of LLMs. In fact, several techniques can be implemented not only to refine your LLM’s responses but also to reduce the risks associated with hallucination and biases. In this chapter, we will cover the emerging techniques in the field of prompt engineering, from basic approaches up to advanced frameworks. By the end of this chapter, you will have the foundations to build functional and solid prompts for your LLM-powered applications, which will also be relevant in the upcoming chapters.
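One of the basic techniques in this space is few-shot prompting: prepending a handful of worked examples to the user's query. A minimal sketch in plain Python (the task, examples, and template are illustrative, not taken from the book):

```python
# A minimal sketch of few-shot prompting: an instruction, worked examples,
# and finally the new query, composed into a single prompt string.

def build_few_shot_prompt(task, examples, query):
    """Compose a prompt from an instruction, labeled examples, and the query."""
    lines = [task, ""]
    for text, label in examples:
        lines.append(f"Text: {text}\nSentiment: {label}")
    lines.append(f"Text: {query}\nSentiment:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    task="Classify the sentiment of each text as positive or negative.",
    examples=[("I loved this book!", "positive"),
              ("The plot was dull.", "negative")],
    query="The examples were clear and practical.",
)
print(prompt)
```

The prompt ends with an unfinished `Sentiment:` slot, which nudges the model to continue with a label in the same format as the examples.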

Chapter 5, Embedding LLMs within Your Applications, discusses the new set of components that building applications with LLMs has introduced into the landscape of software development. To make it easier to orchestrate LLMs and their related components in an application flow, several AI frameworks have emerged, of which LangChain is one of the most widely used. In this chapter, we will take a deep dive into LangChain and how to use it, learning how to call open-source LLMs from code via the Hugging Face Hub and how to manage prompt engineering. By the end of this chapter, you will have the technical foundations to start developing your LLM-powered applications using LangChain and open-source Hugging Face models.
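The core pattern such frameworks formalize, a prompt template feeding a model whose output is post-processed by a parser, can be sketched in plain Python. `FakeLLM` below is a hypothetical stand-in for a real Hugging Face model, used so the pipeline shape is visible without any API calls:

```python
# A plain-Python sketch of the prompt -> model -> parser pipeline that
# orchestration frameworks like LangChain formalize. FakeLLM is a stand-in
# for a real model; a real app would call a hosted or local LLM here.

class PromptTemplate:
    def __init__(self, template):
        self.template = template

    def format(self, **kwargs):
        return self.template.format(**kwargs)

class FakeLLM:
    """Returns a canned completion; a real model would generate text here."""
    def invoke(self, prompt):
        return "Answer: LangChain orchestrates LLM components."

def run_chain(template, llm, parser, **inputs):
    """Run the three stages in sequence, as an LLM 'chain' does."""
    return parser(llm.invoke(template.format(**inputs)))

template = PromptTemplate("Question: {question}\nAnswer briefly.")
result = run_chain(template, FakeLLM(), lambda s: s.removeprefix("Answer: "),
                   question="What does LangChain do?")
print(result)  # -> LangChain orchestrates LLM components.
```

Swapping any one stage (a different template, model, or parser) without touching the others is exactly the modularity that makes such frameworks convenient.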

Chapter 6, Building Conversational Applications, marks the start of the hands-on section of this book, with your first concrete implementation of an LLM-powered application. Throughout this chapter, we will cover a step-by-step implementation of a conversational application, using LangChain and its components. We will configure the schema of a simple chatbot, adding a memory component, non-parametric knowledge, and tools to make the chatbot “agentic.” By the end of this chapter, you will be able to set up your own conversational application project with just a few lines of code.
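The memory component mentioned above can be sketched as a sliding window over past turns that is replayed into each new prompt. The window size and message format here are illustrative choices, not the book's exact implementation:

```python
from collections import deque

# A minimal sketch of chatbot "memory": keep the last N turns and replay
# them into each new prompt so the model sees recent context.

class ConversationMemory:
    def __init__(self, max_turns=3):
        self.turns = deque(maxlen=max_turns)  # oldest turns drop off

    def add(self, user, assistant):
        self.turns.append((user, assistant))

    def as_prompt(self, new_message):
        history = "\n".join(f"User: {u}\nAssistant: {a}"
                            for u, a in self.turns)
        return f"{history}\nUser: {new_message}\nAssistant:"

memory = ConversationMemory(max_turns=2)
memory.add("Hi!", "Hello, how can I help?")
memory.add("Recommend a book.", "Try a book on LLMs.")
memory.add("Why that one?", "It matches your interests.")  # evicts "Hi!"
prompt = memory.as_prompt("Thanks!")
print(prompt)
```

Bounding the window keeps the prompt within the model's context limit; richer strategies (summarizing old turns, for instance) build on the same idea.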

Chapter 7, Search and Recommendation Engines with LLMs, explores how LLMs can enhance recommendation systems, using both embeddings and generative models. We will discuss the definition and evolution of recommendation systems, learn how generative AI is impacting this field of research, and understand how to build recommendation systems with LangChain. By the end of this chapter, you will be able to create your own recommendation application and leverage state-of-the-art LLMs using LangChain as the framework.
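The embedding side of such a system reduces to ranking catalog items by vector similarity to the user's query. A toy sketch, where the 3-dimensional vectors are made up for illustration (a real system would get them from an embedding model):

```python
import math

# A toy sketch of embedding-based recommendation: items and the user query
# are represented as vectors, and items are ranked by cosine similarity.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

catalog = {
    "sci-fi novel": [0.9, 0.1, 0.0],
    "cookbook":     [0.0, 0.2, 0.9],
    "space opera":  [0.7, 0.5, 0.2],
}
# Pretend embedding of a query like "I like science fiction":
user_query = [0.85, 0.2, 0.05]

ranked = sorted(catalog,
                key=lambda item: cosine(user_query, catalog[item]),
                reverse=True)
print(ranked[0])  # the most similar item
```

In production the ranking is served by a vector store rather than a linear scan, but the similarity logic is the same.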

Chapter 8, Using LLMs with Structured Data, covers a great capability of LLMs: the ability to handle structured, tabular data. We will see how, with plug-ins and an agentic approach, we can use LLMs as a natural language interface between us and our structured data, reducing the gap between the business user and the structured information. To demonstrate this, we will build a database copilot with LangChain. By the end of this chapter, you will be able to build your own natural language interface for your data estate, combining unstructured with structured sources.
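The copilot pattern described above can be sketched with the standard library: a natural-language question is translated into SQL (hard-coded here; in the chapter, an LLM generates it), executed against the database, and the rows handed back for the model to phrase as an answer. The table and data are illustrative:

```python
import sqlite3

# A sketch of the database-copilot flow: question -> SQL -> rows -> answer.
# The SQL string stands in for model-generated output.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("EMEA", 120.0), ("APAC", 80.0), ("EMEA", 30.0)])

question = "What are total sales per region?"
# In the real flow, an LLM would produce this from the question and schema:
generated_sql = "SELECT region, SUM(amount) FROM sales GROUP BY region"

rows = conn.execute(generated_sql).fetchall()
print(rows)
```

The rows (here, 150.0 for EMEA and 80.0 for APAC) would then be passed back to the LLM to be rendered as a natural-language answer for the business user.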

Chapter 9, Working with Code, covers another great capability of LLMs: working with programming languages. In the previous chapter, we saw a glimpse of this capability when we asked our LLM to generate SQL queries against a SQL database. In this chapter, we are going to examine the other ways LLMs can be used with code, from “simple” code understanding and generation to the building of applications that behave as if they were an algorithm. By the end of this chapter, you will be able to build LLM-powered applications for your coding projects, as well as build LLM-powered applications with natural language interfaces to work with code.
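When an application consumes model-generated code, a cheap first safeguard (an illustrative addition, not the book's method) is checking that the output at least parses before running or shipping it:

```python
import ast

# A small sketch of a sanity check for model-generated Python: verify the
# snippet parses before it is executed or saved. The snippets are made up.

def is_valid_python(source):
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

good = "def add(a, b):\n    return a + b\n"
bad = "def add(a, b)\n    return a + b\n"  # missing colon

print(is_valid_python(good), is_valid_python(bad))  # True False
```

Parsing catches only syntax errors, of course; semantic checks (tests, type checkers, sandboxed execution) are the natural next layers.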

Chapter 10, Building Multimodal Applications with LLMs, goes beyond LLMs, introducing the concept of multi-modality while building agents. We will see the logic behind the combination of foundation models in different AI domains – language, images, audio – into one single agent that can adapt to a variety of tasks. You will learn how to build a multi-modal agent with single-modal LLMs using LangChain. By the end of this chapter, you will be able to build your own multi-modal agent, providing it with the tools and LLMs needed to perform various AI tasks.
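The routing logic at the heart of such an agent can be sketched as a dispatcher that sends each task to the single-modal tool for its modality. The tool functions below are hypothetical stand-ins for real language, image, and audio models:

```python
# A sketch of multi-modal agent routing: one entry point dispatches each
# task to a single-modal tool. Each tool stands in for a real model.

def text_tool(task):
    return f"[text model] {task}"

def image_tool(task):
    return f"[image model] {task}"

def audio_tool(task):
    return f"[audio model] {task}"

TOOLS = {"text": text_tool, "image": image_tool, "audio": audio_tool}

def multimodal_agent(modality, task):
    """Route the task to the registered tool for its modality."""
    tool = TOOLS.get(modality)
    if tool is None:
        raise ValueError(f"No tool for modality: {modality}")
    return tool(task)

print(multimodal_agent("image", "describe this photo"))
```

In an agentic setup, the LLM itself decides which tool to call based on the task description; the dictionary lookup here just makes the dispatch step explicit.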

Chapter 11, Fine-Tuning Large Language Models, covers the technical details of fine-tuning LLMs, from the theory behind it to hands-on implementation with Python and Hugging Face. We will delve into how you can prepare your data to fine-tune a base model, as well as discuss hosting strategies for your fine-tuned model. By the end of this chapter, you will be able to fine-tune an LLM on your own data so that you can build domain-specific applications powered by that LLM.
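The data-preparation step usually means serializing raw examples into a line-delimited training file. A sketch using a prompt/completion layout; the field names and examples are illustrative, since the exact schema depends on the training library you use:

```python
import json

# A sketch of fine-tuning data preparation: raw (input, output) pairs are
# serialized as JSON Lines records in a prompt/completion layout.

raw_examples = [
    ("Summarize: LLMs are large neural networks.",
     "LLMs are big neural nets."),
    ("Summarize: Fine-tuning adapts a base model.",
     "Fine-tuning adapts models."),
]

jsonl = "\n".join(
    json.dumps({"prompt": p, "completion": c}) for p, c in raw_examples
)
print(jsonl)  # one JSON record per line, ready to write to a .jsonl file
```

Keeping one record per line makes the dataset easy to stream, shuffle, and split without loading it all into memory.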

Chapter 12, Responsible AI, introduces the fundamentals of the discipline behind the mitigation of the potential harms of LLMs – and AI models in general – that is, responsible AI. This is important because LLMs open the doors to a new set of risks and biases to be taken into account while developing LLM-powered applications.

We will then move on to the risks associated with LLMs and how to prevent or, at the very least, mitigate them using proper techniques. By the end of this chapter, you will have a deeper understanding of how to prevent LLMs from making your application potentially harmful.

Chapter 13, Emerging Trends and Innovations, explores the latest advancements and future trends in the field of generative AI.
