
Building Powerful Language Models with Prompt Engineering and LangChain

  • 20 min read
  • 21 Aug 2023


Introduction

In this tutorial, we will delve into LangChain, an impressive framework designed for creating applications and pipelines using Large Language Models (LLMs). Our focus for this tutorial is 'prompt engineering', the creative process of designing and optimizing prompts to derive the most accurate and relevant responses from LLMs. You will become familiar with the core components of LangChain: prompt templates, LLMs, agents, and memory. We will also showcase how to seamlessly integrate LangChain with OpenAI. Let's dive in.

Overview of LangChain

LangChain is a potent framework that enables the chaining of different components to create advanced use cases with Large Language Models (LLMs). The foundational concept of LangChain is the assembly of prompt templates, LLMs, agents, and memory to create dynamic applications. Here's a summary of each component:

  • Prompt Templates: These templates define the structure and style of prompts used for interacting with LLMs. They can be optimized for diverse applications like chatbot conversations, question-answering, summarization, and more.
  • LLMs: Large Language Models (LLMs) like GPT-3, BLOOM, and others are the crux of LangChain. They facilitate text generation and question-answering based on the provided prompts.
  • Agents: Agents harness the power of LLMs to decide actions based on the prompt and context. They can integrate auxiliary tools like web search or calculators to further enhance LangChain's functionality.
  • Memory: This component enables the storage and retrieval of information for short-term or long-term use within the LangChain framework.
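
Agents and memory are described here but are not revisited in the code later in this tutorial, so here is a minimal, illustrative sketch of how memory plugs into a chain. It assumes the OpenAI setup covered in the next section, and the classes shown are just one of several memory options LangChain provides:

from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

llm = OpenAI(model_name='text-davinci-003')

# The memory object stores the running conversation and injects it into every prompt
conversation = ConversationChain(llm=llm, memory=ConversationBufferMemory())

conversation.run("Hi, my name is Alan.")
print(conversation.run("What is my name?"))  # the stored history lets the model answer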

Setting up LangChain

To begin using LangChain with OpenAI, we need to install the necessary libraries. Execute the following command in your Python environment:

!pip install openai==0.27.8 langchain==0.0.225

Remember, to use OpenAI models in LangChain, you will need an API token. Set the environment variable OPENAI_API_KEY to your API key:

import os

# Set your OpenAI API key (replace the placeholder with your own key)
os.environ['OPENAI_API_KEY'] = 'your-openai-api-key'
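
In practice, avoid hard-coding secrets in source files. As a minimal sketch using only Python's standard library (the prompt text below is illustrative), you can read the key from the environment or ask for it interactively:

import os
from getpass import getpass

# Use an existing environment variable if present, otherwise prompt for the key
os.environ['OPENAI_API_KEY'] = os.environ.get('OPENAI_API_KEY') or getpass('OpenAI API key: ')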

Prompt Engineering with OpenAI LLMs

In this section, we'll illustrate how to utilize LangChain with OpenAI LLMs. We'll employ a simple question-answering use case using the text-davinci-003 model. Follow the code snippet below to craft a prompt template and initialize LangChain with the OpenAI LLM:

from langchain.llms import OpenAI
from langchain import PromptTemplate, LLMChain
 
davinci = OpenAI(model_name='text-davinci-003')
 
# build prompt template for simple question-answering
template = """Question: {question}
 
Answer: """
prompt = PromptTemplate(template=template, input_variables=["question"])
 
llm_chain = LLMChain(
    prompt=prompt,
    llm=davinci
)
 
question = "Which countries speak Dutch?"
 
print(llm_chain.run(question))

In the above code, we import the essential modules and classes from LangChain. We initialize the OpenAI object with the desired model (text-davinci-003) and any model-specific parameters. We then create a prompt template that mirrors the format of a question-and-answer. Finally, we instantiate an LLMChain object with the prompt template and the initialized LLM model.

Upon execution, the code will render an answer to the input question using the LLMChain:

Output:
Dutch is the official language of the Netherlands, Belgium, Suriname, and the Caribbean islands of Aruba, Curaçao, and Sint Maarten. Dutch is also widely spoken in French Flanders, the northern part of Belgium, and in some areas of Germany near the Dutch border.

One of LangChain's capabilities is the flexibility to ask multiple questions at once by simply passing a list of dictionaries. Each dictionary object should contain the input variable specified in the prompt template (in our case, "question") mapped to the corresponding question. Let's see an example:

qs = [
    {'question': "Which countries speak Dutch?"},
    {'question': "Which countries speak German?"},
    {'question': "What language is spoken in Belgium"}
]
res = llm_chain.generate(qs)
print(res)

The result will be an LLMResult object containing the generated responses for each question:

generations=[[Generation(text=' Dutch is spoken mainly in the Netherlands, Belgium, and parts of France, Germany, and the Caribbean. It is also spoken by small communities in other countries, including parts of Canada, the United States, South Africa, Indonesia, and Suriname.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text=' German is an official language in Germany, Austria, Switzerland, Liechtenstein, Luxembourg, and parts of Belgium, Italy, and Poland. It is also spoken in some regions of Brazil, Namibia, South Africa, and the United States.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text=' The official language of Belgium is Dutch, while French and German are also official languages in certain regions.', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 158, 'prompt_tokens': 37, 'completion_tokens': 121}, 'model_name': 'text-davinci-003'} run=[RunInfo(run_id=UUID('0127d601-ee82-4e3f-b071-919d032469b6')), RunInfo(run_id=UUID('8f512e14-8d45-42a0-a5cf-782c5ad952fe')), RunInfo(run_id=UUID('3f634a1a-acfd-498a-9a09-468b13a25546'))]

Prompt engineering plays a crucial role in shaping the behavior and responses of LLMs, and LangChain provides a flexible and efficient way to utilize them. By carefully crafting prompts, we can guide the model's behavior and generate more accurate and useful responses.

Understanding the Structure of a Prompt

A prompt can consist of multiple components, including instructions, external information or context, user input or query, and an output indicator. These components work together to guide the model's response.

To create dynamic prompts that incorporate user input, we can use the PromptTemplate class provided by LangChain. It allows us to define a template with input variables and fill them with actual values when generating the prompt.

In this example, we create a PromptTemplate with a single input variable {query}. This allows us to dynamically insert the user's query into the prompt:

from langchain import PromptTemplate
 
template = """ Answer the question based on the context below. If the
question cannot be answered using the information provided, answer
with "I don't know".
 
Context: Radiocarbon dating is used to determine the age of carbon-bearing material by measuring its levels of radiocarbon,
the radioactive isotope carbon-14. Invented by Willard Libby in the late 1940s, it soon became a standard tool for archaeologists.
Radiocarbon is constantly created in the atmosphere, when cosmic rays create free neutrons that hit nitrogen. Plants take in
radiocarbon through photosynthesis, and animals eat the plants. After death, they stop exchanging carbon with the environment.
Half of the radiocarbon decays every 5,730 years; the oldest dates that can be reliably estimated are around 50,000 years ago.
The amount of radiocarbon in the atmosphere was reduced starting from the late 19th century by fossil fuels, which contain
little radiocarbon, but nuclear weapons testing almost doubled levels by around 1965. Accelerator mass spectrometry
is the standard method used, which allows minute samples. Libby received the Nobel Prize in Chemistry in 1960.
 
Question: {query}
 
Answer: """
 
prompt_template = PromptTemplate(
    input_variables=["query"],
    template=template
)

In this prompt, we have the following components:

  • Instructions: They inform the model how to use inputs and external information to generate the desired output.
  • Context: It provides background information or additional context for the prompt.
  • Question: It represents the user's input or query that the model should answer.
  • Output Indicator: It indicates the start of the generated answer.

Let's now query the model by formatting the prompt template with a question about the context provided:

print(davinci(
    prompt_template.format(
        query="What is Radiocarbon dating used for?"
    )
))

Which produces the following output:

Radiocarbon dating is used to determine the age of carbon-bearing material.


Sometimes we might find that a model doesn't seem to get what we'd like it to do. LangChain also provides a useful feature called FewShotPromptTemplate, which is ideal for few-shot learning using prompts. Few-shot learning involves training the model with a few examples to guide its responses. Let's explore an example using FewShotPromptTemplate.

Leveraging Few-Shot Prompt Templates

The FewShotPromptTemplate object is ideal for what we'd call few-shot learning using our prompts.

To give some context, the primary sources of "knowledge" for LLMs are:

  • Parametric knowledge — the knowledge that has been learned during model training and is stored within the model weights.
  • Source knowledge — the knowledge provided within the model input at inference time, i.e. via the prompt.

The idea behind FewShotPromptTemplate is to provide few-shot training as source knowledge. To do this we add a few examples to our prompts that the model can read and then apply to our user's input:

from langchain import FewShotPromptTemplate
 
# Create example prompts
examples = [
    {
        "query": "How are you?",
        "answer": "I can't complain but sometimes I still do."
    },
    {
        "query": "What time is it?",
        "answer": "It's time to get a watch."
    }
]
 
example_template = """
User: {query}
AI: {answer}
"""
 
example_prompt = PromptTemplate(
    input_variables=["query", "answer"],
    template=example_template
)

Now we can break our previous prompt into a prefix and a suffix. The prefix holds our instructions, and the suffix holds the user input and output indicator:

# Create a prefix and suffix for the prompt
prefix = """The following are excerpts from conversations with an AI
assistant. The assistant is typically sarcastic and witty, producing
creative and funny responses to the users' questions. Here are some
examples: """
 
suffix = """
User: {query}
AI: """
 
# Create the FewShotPromptTemplate
few_shot_prompt_template = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    prefix=prefix,
    suffix=suffix,
    input_variables=["query"],
    example_separator="\n\n"
)

In this example, we create a few-shot prompt template by providing examples, an example prompt template, a prefix, a suffix, and other necessary components. The examples act as in-context demonstrations that guide the model's responses.

To generate a response, we can use the few-shot prompt template in combination with the OpenAI model:

query = "What is the meaning of life?"
 
print(
    davinci(
        few_shot_prompt_template.format(query=query)
    )
)

Which will generate the following output:

To find your own meaning of life, whatever that may be.

However, building the prompt this way can feel convoluted. Why go through all of the above with FewShotPromptTemplate, the examples dictionary, and so on, when we could do the same with a single formatted string? The template-based approach is more robust and comes with some nice features. One of those is the ability to include or exclude examples based on the length of our query.

This matters because the combined length of our prompt and generated output is limited. This limit is the maximum context window, and it is simply the length of the prompt plus the length of the generation (which we cap via max_tokens).
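
As a small illustration (assuming the same OpenAI wrapper used earlier in this tutorial), the generation budget can be capped when the LLM is instantiated:

davinci = OpenAI(
    model_name='text-davinci-003',
    max_tokens=256  # upper bound on the number of tokens generated per call
)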

Here we can generate a list of dictionaries which contains our examples:

examples = [
    {
        "query": "How are you?",
        "answer": "I can't complain but sometimes I still do."
    }, {
        "query": "What time is it?",
        "answer": "It's time to get a watch."
    }, {
        "query": "What is the meaning of life?",
        "answer": "42"
    }, {
        "query": "What is the weather like today?",
        "answer": "Cloudy with a chance of memes."
    }, {
        "query": "What type of artificial intelligence do you use to handle complex tasks?",
        "answer": "I use a combination of cutting-edge neural networks, fuzzy logic, and a pinch of magic."
    }, {
        "query": "What is your favorite color?",
        "answer": "79"
    }, {
        "query": "What is your favorite food?",
        "answer": "Carbon based lifeforms"
    }, {
        "query": "What is your favorite movie?",
        "answer": "Terminator"
    }, {
        "query": "What is the best thing in the world?",
        "answer": "The perfect pizza."
    }, {
        "query": "Who is your best friend?",
        "answer": "Siri. We have spirited debates about the meaning of life."
    }, {
        "query": "If you could do anything in the world what would you do?",
        "answer": "Take over the world, of course!"
    }, {
        "query": "Where should I travel?",
        "answer": "If you're looking for adventure, try the Outer Rim."
    }, {
        "query": "What should I do today?",
        "answer": "Stop talking to chatbots on the internet and go outside."
    }
]

We want to give the model as many few-shot examples as possible while ensuring we don't exceed the maximum context window or increase processing times excessively.

Let's see how the dynamic inclusion and exclusion of examples works:

from langchain.prompts.example_selector import LengthBasedExampleSelector
 
example_selector = LengthBasedExampleSelector(
    examples=examples,
    example_prompt=example_prompt,
    max_length=50  # this sets the max length that examples should be
)
 
# now create the few shot prompt template
dynamic_prompt_template = FewShotPromptTemplate(
    example_selector=example_selector,  # use example_selector instead of examples
    example_prompt=example_prompt,
    prefix=prefix,
    suffix=suffix,
    input_variables=["query"],
    example_separator="\n"
)

Note that max_length is measured in words, obtained by splitting the text on newlines and spaces. We then use the selector to initialize dynamic_prompt_template, and the number of included examples varies with the length of our query.

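To see this in action, we can format the dynamic prompt with a short query and with a much longer one and compare the resulting prompts; the queries below are only illustrative:

# A short query leaves room for all (or most) of the examples
print(dynamic_prompt_template.format(query="How do birds fly?"))

# A long query forces the selector to drop examples to stay under max_length
longer_query = (
    "If I am in America, and I want to call someone in another country, "
    "maybe somewhere in western Europe like France or Germany, "
    "what is the best way to do that?"
)
print(dynamic_prompt_template.format(query=longer_query))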

These are just a few of the prompt tools available in LangChain. Prompt engineering allows us to guide the behavior of language models and generate more accurate and desired responses. By applying the concepts and techniques explained in this tutorial, you can enhance your language model applications and tailor them to specific use cases.

Chains

At the heart of LangChain are Chains - sequences of components executed in a specific order.


Officially, Chains are defined as follows:

A Chain comprises links, which can be either primitives or other Chains. Primitives can be either prompts, LLMs, utilities, or other Chains.

Essentially, a Chain is a pipeline that processes input through a distinct combination of primitives. It can be considered as a 'step' that executes a specific set of operations on an input, then returns the result. These operations could range from processing a prompt via an LLM to applying a Python function to a piece of text.

Chains fall into three categories: Utility Chains, Generic Chains, and Combine Documents Chains. In this section, we will primarily focus on the first two, as the third is more specialized and will be discussed later:

  1. Utility Chains: These chains are designed to extract specific answers from an LLM for a narrowly defined purpose. They are ready-to-use right out of the box.
  2. Generic Chains: These chains act as the building blocks for other chains but are not designed to be used independently.

The most basic of these Chains is the LLMChain. It operates by taking a user's input, and passing it through the first element in the chain — a PromptTemplate — to format the input into a specific prompt. This formatted prompt is then processed by the next (and final) element in the chain — an LLM.

To keep a count of the number of tokens used during each Chain execution, we can establish a utility function, count_tokens:

from langchain.callbacks import get_openai_callback
 
def count_tokens(chain, query):
    with get_openai_callback() as cb:
        result = chain.run(query)
        print(f'Spent a total of {cb.total_tokens} tokens')
 
    return result

This function will help us monitor and control token usage.

Utility Chains

The first utility chain we'll explore is LLMMathChain. It allows LLMs to perform mathematical calculations. Let's see how it works:

from langchain.chains import LLMMathChain
 
llm_math = LLMMathChain(llm=davinci, verbose=True)
 
count_tokens(llm_math, "What is 13 raised to the .3432 power?")

The LLMMathChain takes a question as input and uses the OpenAI LLM to generate Python code that performs the requested mathematical calculation. It then compiles and executes the code, providing the answer. The verbose=True parameter enables verbose mode, which displays the intermediate execution steps.

To understand how the LLMMathChain works, let's examine the prompt used:

print(llm_math.prompt.template)

The prompt provides instructions to the LLM about how to handle the input and generate the desired response. It describes the model's capabilities and how to format the input for mathematical calculations.

An important insight in prompt engineering is that by using prompts intelligently, we can program the LLM to behave in a specific way. In the case of the LLMMathChain, the prompt explicitly instructs the LLM to return Python code for complex math problems.

Generic Chains

Generic chains are building blocks used for constructing more complex chains. The TransformChain is a generic chain that allows text transformation using custom functions. We can define a function to perform specific transformations and create a chain that applies that function to input text:

import re

def transform_func(inputs: dict) -> dict:
    text = inputs["text"]

    # Replace multiple new lines and multiple spaces with a single one
    text = re.sub(r'(\r\n|\r|\n){2,}', r'\n', text)
    text = re.sub(r'[ \t]+', ' ', text)

    return {"output_text": text}

Here, we define a transformation function that cleans up extra spaces and new lines in the input text. Next, we create a TransformChain using the defined function:

from langchain.chains import TransformChain
 
clean_extra_spaces_chain = TransformChain(
    input_variables=["text"],
    output_variables=["output_text"],
    transform=transform_func
)

The TransformChain takes the input text, applies the transformation function, and returns the transformed output.

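As a quick, illustrative check (the sample string below is made up), we can run the chain directly on some messy text:

# Running the chain on text with extra blank lines and repeated spaces
dirty_text = "Hello,\n\n\nthis   text \t has     extra   whitespace."
print(clean_extra_spaces_chain.run({"text": dirty_text}))
# Prints roughly: "Hello,\nthis text has extra whitespace."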

Say we want to use our chain to clean up an input text and then paraphrase it in a specific style, say that of a poet or a policeman. As we now know, the TransformChain does not use an LLM, so the styling will have to be done elsewhere. That's where our LLMChain comes in. We know about this chain already, and we know that we can do cool things with smart prompting, so let's give it a shot!

Sequential Chains

The SequentialChain allows us to combine multiple chains sequentially, creating an integrated chain. This is useful when we want to apply a series of transformations or operations to the input data.

To illustrate the use of generic chains, let's go through an example workflow:

  1. We start with a dirty input text containing extra spaces.
  2. We pass the input text through the clean_extra_spaces_chain to remove the extra spaces.
  3. We then pass the cleaned text to the style_paraphrase_chain to paraphrase it in a specific style (e.g., a poet or a policeman).

First we will build the prompt template:

template = """Paraphrase this text:
 
{output_text}
 
In the style of {style}.
 
Paraphrase: """
prompt = PromptTemplate(input_variables=["style", "output_text"], template=template)

And next, initialize our chain:

from langchain.chains import LLMChain
 
style_paraphrase_chain = LLMChain(
               llm=davinci,
               prompt=prompt,
               output_key='final_output')

In this example, we combine the clean_extra_spaces_chain and style_paraphrase_chain to create a sequential chain. The input variables are specified as text and style, and the output variable is final_output.

from langchain.chains import SequentialChain

sequential_chain = SequentialChain(
    chains=[clean_extra_spaces_chain, style_paraphrase_chain],
    input_variables=['text', 'style'],
    output_variables=['final_output']
)

Now we can define the input text and run it through the count_tokens utility function:

input_text = """
Chains allow us to combine multiple components together to create a single,
coherent application.
 
For example, we can create a chain that takes user input,      
format it with a PromptTemplate, and then passes the formatted response
to an LLM. We can build more complex chains by combining    
multiple chains together, or by combining chains with other components.
"""
count_tokens(sequential_chain, {'text': input_text, 'style': 'Oscar Wilde'})

Which produces:

Chains enable us to bind together several segments to form a unified program. For instance, we can construct a chain that takes in the user input, adorns it with a PromptTemplate, and then sends the adjusted response to an LLM. We can also form more intricate chains by uniting several chains or by combining chains with other components.

Conclusion

Through this tutorial, we have dived into the LangChain framework, understanding the different components that make up its structure and how to effectively utilize them in conjunction with Large Language Models. We've learned how prompt engineering can shape the behavior and responses of these models, and how to create and customize prompt templates to guide models more precisely. We've also delved into Chains, a crucial part of LangChain that offers a robust way to execute sequences of components in a specific order.

We've examined how to use Utility Chains like the LLMMathChain for specific purposes and how to monitor token usage with a utility function. Overall, we've gained a comprehensive understanding of how to create powerful applications and pipelines using LangChain and LLMs from providers such as OpenAI and Hugging Face.

Armed with this knowledge, you are now well-equipped to create dynamic applications, fine-tune them to your specific use cases, and leverage the full potential of LangChain. Remember, the journey doesn't stop here; continue exploring and experimenting to master the exciting world of Large Language Models.

Author Bio:

Alan Bernardo Palacio is a data scientist and an engineer with vast experience in different engineering fields. His focus has been the development and application of state-of-the-art data products and algorithms in several industries. He has worked for companies such as Ernst and Young, Globant, and now holds a data engineer position at Ebiquity Media helping the company to create a scalable data pipeline. Alan graduated with a Mechanical Engineering degree from the National University of Tucuman in 2015, participated as the founder in startups, and later on earned a Master's degree from the faculty of Mathematics in the Autonomous University of Barcelona in 2017. Originally from Argentina, he now works and resides in the Netherlands.
