
Hands-On tutorial on how to use Pinecone with LangChain

  • 17 min read
  • 21 Aug 2023


A vector database stores high-dimensional vectors, the mathematical representations of attributes or features. Each vector can have anywhere from tens to thousands of dimensions, depending on the richness of the data. A vector database operationalizes embedding models and supports application development with resource management, security, scalability, and query efficiency. Pinecone, a vector database, enables fast semantic search over vectors. Integrating OpenAI’s LLMs with Pinecone combines deep learning-based embedding generation with efficient storage and retrieval, enabling real-time recommendation and search systems. Pinecone acts as long-term memory for large language models like OpenAI’s GPT-4.

Introduction

This tutorial will guide you through the process of integrating Pinecone, a high-performance vector database, with LangChain, a framework for building applications powered by large language models (LLMs). Pinecone enables developers to build scalable, real-time recommendation and search systems based on vector similarity search.

Prerequisites

Before you begin this tutorial, you should have the following:

  • A Pinecone account and API key
  • An OpenAI account and API key
  • A basic understanding of Python

Pinecone basics

To start, we will get familiar with Pinecone by exploring its basic functionality. Remember to have your Pinecone API key at hand.

Here is a step-by-step guide on how to set up and use Pinecone, a cloud-native vector database that provides long-term memory for AI applications, especially those involving large language models, generative AI, and semantic search.

Initialize Pinecone client

We will use the Pinecone Python client; this step is only necessary if you don’t already have it installed.

pip install pinecone-client

To use Pinecone, you must have an API key. You can find your API key in the Pinecone console under the "API Keys" section. Note both your API key and your environment. To verify that your Pinecone API key works, use the following command:

import pinecone
pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")

If you don't receive an error message, then your API key is valid. This will also initialize the Pinecone session.

Creating and retrieving indexes

The command below creates an index named "quickstart" that performs approximate nearest-neighbor search using the Euclidean distance metric over 8-dimensional vectors.

pinecone.create_index("quickstart", dimension=8, metric="euclidean")

Index creation takes roughly a minute.
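If you prefer to wait programmatically rather than guess, you can poll the index status until it reports ready (a small sketch using the client's describe_index call):

import time

# Block until the new index reports that it is ready to serve requests.
while not pinecone.describe_index("quickstart").status['ready']:
    time.sleep(1)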

Once your index is created, its name appears in the index list. Use the following command to return a list of your indexes.

pinecone.list_indexes()

Before you can query your index, you must connect to the index.

index = pinecone.Index("quickstart")

Now that you have created your index, you can start to insert data into it.

Insert the data

To ingest vectors into your index, use the upsert operation, which inserts a new vector into the index or updates an existing vector if one with the same ID is already present. The following command upserts five 8-dimensional vectors into your index.

index.upsert([
    ("A", [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]),
    ("B", [0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2]),
    ("C", [0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3]),
    ("D", [0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4]),
    ("E", [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5])
])

You can get statistics about your index, such as its dimension, fullness, and vector count. Use the following command to return statistics about the contents of your index.

index.describe_index_stats()

This will return a dictionary with information about your index:

[Screenshot: index statistics, including the dimension and total vector count]

Now that you have created an index and inserted data into it, we can query the database to retrieve vectors based on their similarity.

Query the index and get similar vectors

The following example queries the index for the three vectors that are most similar to an example 8-dimensional vector using the Euclidean distance metric specified above.

index.query(
  vector=[0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3],
  top_k=3,
  include_values=True
)

This command returns the three vectors in the index with the lowest Euclidean distance to the query vector:

[Screenshot: the three closest vectors with their IDs, scores, and values]

Once you no longer need the index, use the delete_index operation to delete it.

pinecone.delete_index("quickstart")

By following these steps, you can set up a Pinecone vector database in just a few minutes. This will help you provide long-term memory for your high-performance AI applications without any infrastructure hassles.

Now, let’s take a look at a bit more complex example, in which we embed text data and insert it into Pinecone.

Preparing and Processing the Data

In this section, we will create a context for large language models (LLMs) using the OpenAI API. We will walk through the different parts of a Python script, understanding the purpose and function of each code block. The ultimate aim is to combine the data into larger chunks of around 500 tokens while keeping the dataset in sequential order.
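The chunking step itself is not shown in the article's code, but a minimal sketch of what it could look like is below; chunk_texts is a hypothetical helper, and it assumes the GPT-2 tokenizer that we load later in this tutorial:

# Hypothetical helper (not from the original article): groups consecutive
# texts into chunks of roughly max_tokens tokens using a HuggingFace tokenizer.
def chunk_texts(texts, tokenizer, max_tokens=500):
    chunks, current, current_len = [], [], 0
    for text in texts:
        n_tokens = len(tokenizer.encode(text))
        if current and current_len + n_tokens > max_tokens:
            chunks.append(" ".join(current))
            current, current_len = [], 0
        current.append(text)
        current_len += n_tokens
    if current:
        chunks.append(" ".join(current))
    return chunks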

Setup

First, we install the necessary libraries for our script. We're going to use OpenAI for AI models, pandas for data manipulation, and transformers for tokenization.

!pip install openai pandas transformers

After the installations, we import the necessary modules for our script.

import pandas as pd
import openai

Before you can interact with OpenAI, you need to provide your API key. Make sure to replace <<YOUR_API_KEY>> with your actual API key.

openai.api_key = '<<YOUR_API_KEY>>'

Now we are ready to start processing the data to be embedded and stored in Pinecone.

Data transformation

We use pandas to load JSON data files related to different technologies (HuggingFace, PyTorch, TensorFlow, Streamlit). These files contain questions and answers on their respective topics and are based on the data in the Pinecone documentation. First, we concatenate these data frames into one for easier manipulation.

hf = pd.read_json('data/huggingface-qa.jsonl', lines=True)
pt = pd.read_json('data/pytorch-qa.jsonl', lines=True)
tf = pd.read_json('data/tensorflow-qa.jsonl', lines=True)
sl = pd.read_json('data/streamlit-qa.jsonl', lines=True)
df = pd.concat([hf, pt, tf, sl], ignore_index=True)
df.head()

We can see the data here:

[Screenshot: the first rows of the combined question-and-answer DataFrame]

Next, we define a function to remove new lines and unnecessary spaces in our text data. The function remove_newlines takes a pandas Series object and performs several replace operations to clean the text.

def remove_newlines(serie):
    # Replace newline characters and escaped newline sequences with spaces,
    # then collapse the double spaces left behind.
    serie = serie.str.replace('\n', ' ', regex=False)
    serie = serie.str.replace('\\n', ' ', regex=False)
    serie = serie.str.replace('  ', ' ', regex=False)
    serie = serie.str.replace('  ', ' ', regex=False)
    return serie

We transform the text in our dataframe into a single string format combining the 'docs', 'category', 'thread', 'question', and 'context' columns.

df['text'] = "Topic: " + df.docs + " - " + df.category + "; Question: " + df.thread + " - " + df.question + "; Answer: " + df.context
df['text'] = remove_newlines(df.text)


Tokenization

We use the HuggingFace transformers library to tokenize our text. The GPT2 tokenizer is used, and the number of tokens for each text string is stored in a new column 'n_tokens'.

from transformers import GPT2TokenizerFast
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
df['n_tokens'] = df.text.apply(lambda x: len(tokenizer.encode(x)))

We filter out rows in our data frame where the number of tokens exceeds 2000.

df = df[df.n_tokens < 2000]

Now we can finally embed the data using the OpenAI API.

from openai.embeddings_utils import get_embedding
size = 'curie'
df['embeddings'] = df.text.apply(lambda x: get_embedding(x, engine=f'text-search-{size}-doc-001'))
df.head()

We will use the text-search-curie-doc-001 OpenAI engine to create the embeddings; Curie is very capable while being faster and lower cost than Davinci:

[Screenshot: the DataFrame with the new embeddings column]

So far, we have prepared our data and generated embeddings for it using the OpenAI API. Next, we will initialize a Pinecone index and insert these embeddings into it.

Initializing the Index and Uploading Data to Pinecone

The second part of the tutorial takes the data prepared previously and uploads it to the Pinecone vector database. This allows the embeddings to be queried for similarity, providing a means to use contextual information from a larger set of data than an LLM can handle at once.

Checking for Large Text Data

Pinecone limits metadata to 5KB per vector, so we check whether any 'text' field items are larger than this.

from sys import getsizeof
too_big = []
for text in df['text'].tolist():
    if getsizeof(text) > 5000:
        too_big.append((text, getsizeof(text)))
print(f"{len(too_big)} / {len(df)} records are too big")

This check only reports how many entries exceed the limit; it does not remove them. Because several records have text data larger than the Pinecone metadata limit, we will not store the full text as metadata. Instead, we assign a unique ID to each record in the DataFrame and keep a separate ID-to-text mapping.

df['id'] = [str(i) for i in range(len(df))]
df.head()

This ID can be used to retrieve the original text later:

[Screenshot: the DataFrame with the new id column]

Now we can start with the initialization of the index in Pinecone and insert the data.

Pinecone Initialization and Index Creation

Next, Pinecone is initialized with the API key, and an index is created if it doesn't already exist. The name of the index is 'beyond-search-openai', and its dimension matches the length of the embeddings. The metric used for similarity search is cosine.

import pinecone
pinecone.init(
    api_key='PINECONE_API_KEY',
    environment="YOUR_ENV"
)
index_name = 'beyond-search-openai'
if index_name not in pinecone.list_indexes():
    pinecone.create_index(
        index_name, dimension=len(df['embeddings'].tolist()[0]),
        metric='cosine'
    )
index = pinecone.Index(index_name)

Now that we have created the index, we can proceed to insert the data. The index will be populated in batches of 32. Relevant metadata (like 'docs', 'category', 'thread', 'href', and 'n_tokens') is also included with each item. We will use tqdm to display a progress bar for the insertion.

from tqdm.auto import tqdm
batch_size = 32
for i in tqdm(range(0, len(df), batch_size)):
    i_end = min(i+batch_size, len(df))
    df_slice = df.iloc[i:i_end]
    to_upsert = [
        (
            row['id'],
            row['embeddings'],
            {
                'docs': row['docs'],
                'category': row['category'],
                'thread': row['thread'],
                'href': row['href'],
                'n_tokens': row['n_tokens']
            }
        ) for _, row in df_slice.iterrows()
    ]
    index.upsert(vectors=to_upsert)

This will insert the records into the database to be used later on in the process:

[Screenshot: tqdm progress bar for the batch upserts]
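Once the loop finishes, a quick check (not part of the original walkthrough) confirms that the number of vectors in the index matches the DataFrame:

# The total vector count reported by Pinecone should equal len(df).
print(index.describe_index_stats())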

Finally, the ID-to-text mappings are saved into a JSON file. This would allow us to retrieve the original text associated with an ID later on.

mappings = {row['id']: row['text'] for _, row in df[['id', 'text']].iterrows()}
import json
with open('data/mapping.json', 'w') as fp:
    json.dump(mappings, fp)
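In a later session, you can reload this mapping and look up the original text for any vector ID (a small sketch; '0' is simply the first ID assigned above):

# Reload the ID-to-text mapping saved above and preview one record.
with open('data/mapping.json') as fp:
    mappings = json.load(fp)
print(mappings['0'][:200])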

The Pinecone vector database should now be populated and ready for querying. Next, we will use this information to provide context to a question-answering LLM.

Querying and Answering Questions

The final part of the tutorial involves querying the Pinecone vector database with questions, retrieving the most relevant context embeddings, and using OpenAI's API to generate an answer to the question based on the retrieved contexts.

OpenAI Embedding Generation

The OpenAI API is used to create embeddings for the question.

from openai.embeddings_utils import get_embedding
q_embeddings = get_embedding(
    'how to use gradient tape in tensorflow',
    engine=f'text-search-curie-query-001'
)

A function create_context is defined to use the OpenAI API to create a query embedding, retrieve the most relevant context embeddings from Pinecone, and combine these contexts into a single string ready to feed into OpenAI's next generation step.

from openai.embeddings_utils import get_embedding

def create_context(question, index, max_len=3750, size="curie"):
    # Embed the question and retrieve the five most similar records from Pinecone.
    q_embed = get_embedding(question, engine=f'text-search-{size}-query-001')
    res = index.query(q_embed, top_k=5, include_metadata=True)

    cur_len = 0
    contexts = []

    # Keep appending contexts until the token budget (max_len) is nearly used up.
    for row in res['matches']:
        text = mappings[row['id']]
        cur_len += row['metadata']['n_tokens'] + 4
        if cur_len < max_len:
            contexts.append(text)
        else:
            cur_len -= row['metadata']['n_tokens'] + 4
            if max_len - cur_len < 200:
                break
    return "\n\n###\n\n".join(contexts)

 

We can now use this function to retrieve the context needed for a given question. The question is embedded and the most relevant contexts are retrieved from the Pinecone database:

[Screenshot: the context string retrieved for the example question]
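For reference, a call along these lines (illustrative; the screenshot may use a different question) produces such a context:

# Illustrative call using the question embedded earlier in this section.
context = create_context(
    'how to use gradient tape in tensorflow',
    index,
    max_len=3750,
    size='curie'
)
print(context[:500])  # preview the beginning of the combined context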

Now we are ready to start passing the context to a question-answering model.

Querying and Answering

We start by defining the parameters we will use during the query: the model, the maximum token length, and a few other settings. We also define an instruction for the model, which constrains the answers it can give.

fine_tuned_qa_model="text-davinci-002"
instruction="""
  Answer the question based on the context below,
  and if the question can't be answered based on the context,
  say \\"I don't know\\"\\n\\nContext:\\n{0}\\n\\n---\\n\\nQuestion: {1}\\nAnswer:"""

max_len=3550
size="curie"
max_tokens=400
stop_sequence=None
domains=["huggingface", "tensorflow", "streamlit", "pytorch"]

Different instruction formats can be defined. We will now ask some simple questions and see what the results look like.

question="What is Tensorflow"

context = create_context(
    question,
    index,
    max_len=max_len,
    size=size,
)

try:
    # fine-tuned models require the model parameter, whereas other models require the engine parameter
    model_param = (
        {"model": fine_tuned_qa_model}
        if ":" in fine_tuned_qa_model
        and fine_tuned_qa_model.split(":")[1].startswith("ft")
        else {"engine": fine_tuned_qa_model}
    )
    #print(instruction.format(context, question))
    response = openai.Completion.create(
        prompt=instruction.format(context, question),
        temperature=0,
        max_tokens=max_tokens,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0,
        stop=stop_sequence,
        **model_param,
    )
    print(response["choices"][0]["text"].strip())
except Exception as e:
    print(e)

We can see that it gives us the proper results using the context retrieved from Pinecone:

[Screenshot: the model's answer describing TensorFlow]

We can also ask about PyTorch:

question="What is Pytorch"

context = create_context(
    question,
    index,
    max_len=max_len,
    size=size,
)

try:
    # fine-tuned models require the model parameter, whereas other models require the engine parameter
    model_param = (
        {"model": fine_tuned_qa_model}
        if ":" in fine_tuned_qa_model
        and fine_tuned_qa_model.split(":")[1].startswith("ft")
        else {"engine": fine_tuned_qa_model}
    )
    #print(instruction.format(context, question))
    response = openai.Completion.create(
        prompt=instruction.format(context, question),
        temperature=0,
        max_tokens=max_tokens,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0,
        stop=stop_sequence,
        **model_param,
    )
    print(response["choices"][0]["text"].strip())
except Exception as e:
    print(e)

The results remain consistent with the context provided:

[Screenshot: the model's answer describing PyTorch]

Now we can push the boundaries a bit and ask something that goes beyond the provided context.

question="Am I allowed to publish model outputs to Twitter, without a human review?"

context = create_context(
    question,
    index,
    max_len=max_len,
    size=size,
)

try:
    # fine-tuned models require the model parameter, whereas other models require the engine parameter
    model_param = (
        {"model": fine_tuned_qa_model}
        if ":" in fine_tuned_qa_model
        and fine_tuned_qa_model.split(":")[1].startswith("ft")
        else {"engine": fine_tuned_qa_model}
    )
    #print(instruction.format(context, question))
    response = openai.Completion.create(
        prompt=instruction.format(context, question),
        temperature=0,
        max_tokens=max_tokens,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0,
        stop=stop_sequence,
        **model_param,
    )
    print(response["choices"][0]["text"].strip())
except Exception as e:
    print(e)

We can see in the results that the model follows the instructions provided, since we have no context about Twitter:

[Screenshot: the model responds "I don't know"]
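Since the same completion block is repeated for every question, you may prefer to wrap it in a small helper. The sketch below simply refactors the code above; the name answer_question is ours, not from the original article:

# Refactoring sketch of the repeated block above (not part of the original article).
def answer_question(question, index, model=fine_tuned_qa_model, max_len=3550,
                    size="curie", max_tokens=400, stop_sequence=None):
    context = create_context(question, index, max_len=max_len, size=size)
    # Fine-tuned models require the model parameter; base models use engine.
    model_param = (
        {"model": model}
        if ":" in model and model.split(":")[1].startswith("ft")
        else {"engine": model}
    )
    response = openai.Completion.create(
        prompt=instruction.format(context, question),
        temperature=0,
        max_tokens=max_tokens,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0,
        stop=stop_sequence,
        **model_param,
    )
    return response["choices"][0]["text"].strip()

print(answer_question("What is TensorFlow?", index))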

Lastly, the Pinecone index is deleted to free up resources.

pinecone.delete_index(index_name)

Conclusion

This tutorial provided a comprehensive guide to harnessing Pinecone, OpenAI's language models, and HuggingFace's transformers library for advanced question answering. We introduced Pinecone's vector search engine, walked through data preparation, embedding generation, and data uploading, and concluded by building a question-answering flow with OpenAI's API. The tutorial showcased how the combination of vector search, language models, and text processing can transform information retrieval. This approach holds potential for AI-powered applications in many domains, from customer service chatbots to research assistants and beyond.

Author Bio:

Alan Bernardo Palacio is a data scientist and an engineer with vast experience in different engineering fields. His focus has been the development and application of state-of-the-art data products and algorithms in several industries. He has worked for companies such as Ernst & Young and Globant, and now holds a data engineer position at Ebiquity Media, helping the company create a scalable data pipeline. Alan graduated with a Mechanical Engineering degree from the National University of Tucuman in 2015, founded startups, and later earned a Master's degree from the Faculty of Mathematics at the Autonomous University of Barcelona in 2017. Originally from Argentina, he now works and resides in the Netherlands.

LinkedIn