To get the most out of this book
This book aims to provide a solid theoretical foundation of what LLMs are, their architecture, and why they are revolutionizing the field of AI. It adopts a hands-on approach, providing you with a step-by-step guide to implementing LLM-powered apps for specific tasks, using powerful frameworks like LangChain. Furthermore, each example showcases a different LLM, so that you can appreciate their differences and learn when to use the right model for a given task.
Overall, the book combines theoretical concepts with practical applications, making it an ideal resource for anyone who wants to gain a solid foundation in LLMs and their applications in NLP. The following prerequisites will help you get the most out of this book:
- A basic understanding of the math behind neural networks (linear algebra, neurons and parameters, and loss functions)
- A basic understanding of ML concepts, such as training and test sets, evaluation metrics, and NLP
- A basic understanding of Python
Download the example code files
The code bundle for the book is hosted on GitHub at https://github.com/PacktPublishing/Building-LLM-Powered-Applications. We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!
Download the color images
We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it here: https://packt.link/gbp/9781835462317.
Conventions used
There are a number of text conventions used throughout this book.
CodeInText: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. For example: “I set the two variables system_message and instructions.”
A block of code is set as follows:
# Install the required library first: pip install openai==0.28
import os
import openai
openai.api_key = os.environ.get('OPENAI_API_KEY')
response = openai.ChatCompletion.create(
model="gpt-35-turbo", # engine = "deployment_name".
messages=[
{"role": "system", "content": system_message},
{"role": "user", "content": instructions},
]
)
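In the snippet above, system_message and instructions are assumed to have been defined earlier in the script, and the model's reply can be read from the returned response object. A minimal sketch of both follows; the prompt values shown here are placeholders, not the exact ones used in the book:

# Defined before calling openai.ChatCompletion.create
system_message = "You are a helpful assistant."        # placeholder system prompt
instructions = "Summarize this text in one sentence."  # placeholder user prompt

# After the call, the reply text is available at:
print(response["choices"][0]["message"]["content"])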
Any command-line input or output is written as follows:
{'text': "Terrible movie. Nuff Said.[…]
'label': 0}
Bold: Indicates a new term, an important word, or words that you see on the screen. For instance, words in menus or dialog boxes appear in the text like this. For example: “[…] he found that repeating the main instruction at the end of the prompt can help the model to overcome its inner recency bias.”
Warnings or important notes appear like this.
Tips and tricks appear like this.