Building a Containerized LLM Chatbot Application

  • 19 min read
  • 21 Aug 2023


In this hands-on tutorial, we will build a containerized LLM-powered chatbot application that uses examples to create a custom chatbot capable of answering deep philosophical questions and responding with profound questions in return. We will use Streamlit as the web application framework, PostgreSQL as the database to store examples, and OpenAI's text-davinci-003 model (part of the GPT-3.5 family) for language processing.

The application allows users to input philosophical questions, and the AI-powered chatbot will respond with insightful answers based on the provided examples. Additionally, the chatbot will ask thought-provoking questions in response to user input, simulating the behavior of philosophical minds like Socrates and Nietzsche.

We'll break down the implementation into several files, each serving a specific purpose:

  1. Dockerfile: This file defines the Docker image for our application, specifying the required dependencies and configurations.
  2. docker-compose.yml: This file orchestrates the Docker containers for our application, including the web application (Streamlit) and the PostgreSQL database.
  3. setup.sql: This file contains the SQL commands to set up the PostgreSQL database and insert example data.
  4. streamlit_app.py: This file defines the Streamlit web application and its user interface.
  5. utils.py: This file contains utility functions to interact with the database, create the Da Vinci LLM model, and generate responses.
  6. requirements.txt: This file lists the Python dependencies required for our application.

The Dockerfile

The Dockerfile is used to build the Docker image for our application. It specifies the base image, sets up the working directory, installs the required dependencies, and defines the command to run the Streamlit application:

FROM python:3
WORKDIR /app
 
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
 
COPY . .
 
CMD ["streamlit", "run", "streamlit_app.py"]

In the Dockerfile, we set the base image to Python 3 using FROM python:3, which gives us Python and its package tooling. Next, we set the working directory inside the container to /app, where we will copy our application files. To ensure all required Python packages are installed, we copy the requirements.txt file, which lists the dependencies, into the working directory, and then run pip install --no-cache-dir -r requirements.txt to install them. We then copy all the files from the current directory (containing our application files) into the container's /app directory using COPY . .. Finally, we define the command to run when the container starts using CMD ["streamlit", "run", "streamlit_app.py"]. This starts the Streamlit app, enabling users to interact with the philosophical AI assistant through their web browsers once the container is up and running.
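
Although Docker Compose will build this image for us in a later step, it can be useful to build it on its own to verify that the Dockerfile is correct. A minimal sketch (the image tag philosophy-chatbot is just an illustrative name):

docker build -t philosophy-chatbot ./app

Note that running this image by itself would fail at startup, since the application expects the PostgreSQL database and the OPENAI_API_KEY environment variable to be available; we will wire both up with Docker Compose next.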

The requirements.txt file lists the Python dependencies required for our application:

streamlit
streamlit-chat
streamlit-extras
psycopg2-binary
openai==0.27.8
langchain==0.0.225

The requirements file includes the following packages:

  • streamlit: The Streamlit library for creating web applications.
  • streamlit-chat: Streamlit Chat library for adding chat interfaces to Streamlit apps.
  • streamlit-extras: Streamlit Extras library for adding custom components to Streamlit apps.
  • psycopg2-binary: PostgreSQL adapter for Python.
  • openai==0.27.8: The OpenAI Python library for accessing the GPT-3.5 model.
  • langchain==0.0.225: LangChain library for working with language models and prompts.

Next, we will define the Docker Compose file, which will also handle the deployment of the PostgreSQL database where we will store our examples.

Creating the docker-compose.yml

The docker-compose.yml file orchestrates the Docker containers for our application: the Streamlit web application and the PostgreSQL database:

version: '3'
services:
  app:
    build:
      context: ./app
    ports:
      - 8501:8501
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    depends_on:
      - db
 
  db:
    image: postgres:13
    environment:
      - POSTGRES_USER=your_username
      - POSTGRES_PASSWORD=your_password
      - POSTGRES_DB=chatbot_db
      - POSTGRES_HOST_AUTH_METHOD=trust
    volumes:
      - ./db/setup.sql:/docker-entrypoint-initdb.d/setup.sql

The docker-compose.yml file orchestrates the deployment of our LLM-powered chatbot application and defines the services, i.e., the containers, it needs.

In the services section, we have two distinct services defined: app and db. The app service corresponds to our Streamlit web application, which will serve as the user interface for interacting with the philosophical AI assistant. To build the Docker image for this service, we specify the build context as ./app, where the necessary application files, including the Dockerfile, reside.

To ensure seamless communication between the host machine and the app container, we use the ports option to map port 8501 from the host to the corresponding port inside the container. This allows users to access the web application through their web browsers.

For the application to function effectively, the environment variable OPENAI_API_KEY must be set, providing the necessary authentication for our LLM model to operate. This is done using the environment section, where we define this variable.

One of the critical components of our application is the integration of a PostgreSQL database to store the philosophical question-answer pairs. The db service sets up the PostgreSQL database using the postgres:13 image. We configure the required environment variables, such as the username, password, and database name, to establish the necessary connection.

To initialize the database with our predefined examples, we leverage the volumes option to mount the setup.sql file from the host machine into the container's /docker-entrypoint-initdb.d directory. This SQL script contains the commands to create the examples table and insert the example data. By doing so, our PostgreSQL database is ready to handle the profound philosophical interactions with the AI assistant.
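
Once the stack is running, a quick sanity check (assuming the credentials from the Compose file above) confirms that the examples were loaded:

docker-compose exec db psql -U your_username -d chatbot_db -c "SELECT COUNT(*) FROM examples;"

This should report twelve rows, one per question-answer pair in setup.sql.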

In conclusion, the docker-compose.yml file provides a streamlined and efficient way to deploy our language model service alongside a PostgreSQL database, creating a cohesive environment for our philosophical AI assistant application.

Setting up examples

The setup.sql file contains the SQL commands to set up the PostgreSQL database and insert example data. We use this file in the volumes section of the docker-compose.yml file to initialize the database when the container starts:

-- Create the examples table
CREATE TABLE IF NOT EXISTS examples (
  id SERIAL PRIMARY KEY,
  query TEXT,
  answer TEXT
);
 
-- Insert the examples
INSERT INTO examples (query, answer)
VALUES
('What is the nature of truth?', 'Truth is a mirror reflecting the depths of our souls.'),
('Is there an objective reality?', 'Reality is an ever-shifting kaleidoscope, molded by our perceptions.'),
('What is the role of reason in human understanding?', 'Reason illuminates the path of knowledge, guiding us towards self-awareness.'),
('What is the nature of good and evil?', 'Good and evil are intertwined forces, dancing in the eternal cosmic tango.'),
('Is there a purpose to suffering?', 'Suffering unveils the canvas of resilience, painting a masterpiece of human spirit.'),
('What is the significance of morality?', 'Morality is the compass that navigates the vast ocean of human conscience.'),
('What is the essence of human existence?', 'Human existence is a riddle wrapped in the enigma of consciousness.'),
('How can we find meaning in a chaotic world?', 'Meaning sprouts from the fertile soil of introspection, blooming in the garden of wisdom.'),
('What is the nature of love and its transformative power?', 'Love is an alchemist, transmuting the mundane into the divine.'),
('What is the relationship between individuality and society?', 'Individuality dances in the grand symphony of society, playing a unique melody of self-expression.'),
('What is the pursuit of knowledge and its impact on the human journey?', 'Knowledge is the guiding star, illuminating the path of human evolution.'),
('What is the essence of human freedom?', 'Freedom is the soaring eagle, embracing the vast expanse of human potential.');

The setup.sql script plays a crucial role in setting up the PostgreSQL database for our LLM-powered chatbot application. The SQL commands within this script are responsible for creating the examples table with the necessary columns and adding the example data to this table.

In the context of our LLM application, these examples are of great importance, as they serve as the foundation for the assistant's responses. The examples table can be thought of as a collection of question-answer pairs the AI assistant has learned from past interactions. Each row in the table represents a specific question (query) and its corresponding insightful answer (answer).

When a user interacts with the chatbot and enters a new question, the application leverages these examples to create a custom prompt for the LLM model. By selecting examples that fit within a length budget based on the user's question, the application constructs a few-shot prompt that incorporates both the user's query and examples from the database.

The LLM model uses this customized prompt, containing the user's input and relevant examples, to generate a thoughtful and profound response that aligns with the philosophical nature of the AI assistant. The inclusion of examples in the prompt ensures that the chatbot's responses resonate with the same level of wisdom and depth found in the example interactions stored in the database.
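
To make this concrete, a prompt assembled this way would look roughly like the following sketch (the exact examples included depend on the length of the user's query, and the final question here is just an illustration):

The following are excerpts from conversations with a philosophical AI assistant.
The assistant is a seeker of truth and wisdom, responding with profound questions to know yourself
in a way that Socrates, Nietzsche, and other great minds would do. Here are some examples:

User: What is the nature of truth?
AI: Truth is a mirror reflecting the depths of our souls.

User: Is there a purpose to suffering?
AI: Suffering unveils the canvas of resilience, painting a masterpiece of human spirit.

User: What makes a life worth living?
AI: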

By learning from past examples and incorporating them into the prompts, our LLM-powered chatbot can emulate the thought processes of philosophical giants like Socrates and Nietzsche. Ultimately, these examples become the building blocks that empower the AI assistant to engage in the profound realms of philosophical discourse with the users.

The Streamlit Application

The streamlit_app.py file defines the Streamlit web application and its user interface. It is the main file where we build the web app and interact with the LLM model:

import streamlit as st
from streamlit_chat import message
from streamlit_extras.colored_header import colored_header
from streamlit_extras.add_vertical_space import add_vertical_space
from utils import *
 
# Define database credentials here
DB_HOST = "db"
DB_PORT = 5432
DB_NAME = "chatbot_db"
DB_USER = "your_username"
DB_PASSWORD = "your_password"
 
# Connect to the PostgreSQL database and retrieve examples
examples = get_database_examples(DB_HOST, DB_PORT, DB_NAME, DB_USER, DB_PASSWORD)
 
# Create the Da Vinci LLM model
davinci = create_davinci_model()
 
# Create the example selector and few shot prompt template
example_selector = create_example_selector(examples)
dynamic_prompt_template = create_few_shot_prompt_template(example_selector)
 
# Now the Streamlit app
 
# Sidebar contents
with st.sidebar:
    st.title('The AI seeker of truth and wisdom')
    st.markdown('''
    ## About
    This app is an LLM-powered chatbot built using:
    - Streamlit
    - Open AI Davinci LLM Model
    - LangChain
    - Philosophy
 
    ''')
    add_vertical_space(5)
    st.write('Running in Docker!')
 
# Generate empty lists for generated and past.
## generated stores AI generated responses
if 'generated' not in st.session_state:
    st.session_state['generated'] = ["Hi, what questions do you have today?"]
## past stores User's questions
if 'past' not in st.session_state:
    st.session_state['past'] = ['Hi!']
 
# Layout of input/response containers
input_container = st.container()
colored_header(label='', description='', color_name='blue-30')
response_container = st.container()
 
# User input
## Function for taking user provided prompt as input
def get_text():
    input_text = st.text_input("You: ", "", key="input")
    return input_text
 
## Applying the user input box
with input_container:
    user_input = get_text()
 
# Response output
## Function for taking user prompt as input followed by producing AI generated responses
def generate_response(prompt):
    response = davinci(
        dynamic_prompt_template.format(query=prompt)
    )
    return response
 
## Conditional display of AI generated responses as a function of user provided prompts
with response_container:
    if user_input:
        response = generate_response(user_input)
        st.session_state.past.append(user_input)
        st.session_state.generated.append(response)
 
    if st.session_state['generated']:
        for i in range(len(st.session_state['generated'])):
            message(st.session_state['past'][i], is_user=True, key=str(i) + '_user', avatar_style='identicon', seed=123)
            message(st.session_state["generated"][i], key=str(i), avatar_style='icons', seed=123)

In this part of the code, we set up the core components of our LLM-powered chatbot application. We begin by importing the necessary libraries, including Streamlit, Streamlit Chat, and Streamlit Extras, along with utility functions from the utils.py file. Next, we define the database credentials (DB_HOST, DB_PORT, DB_NAME, DB_USER, DB_PASSWORD) required for connecting to the PostgreSQL database.


The application then establishes a connection to the database using the get_database_examples function from the utils.py file. This crucial step retrieves profound philosophical question-answer pairs stored in the examples table. These examples are essential as they serve as a knowledge base for the AI assistant and provide the context and wisdom needed to generate meaningful responses.
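
Concretely, the value returned by get_database_examples is just a list of dictionaries, one per row, in the shape the prompt-building code expects:

examples = [
    {"query": "What is the nature of truth?",
     "answer": "Truth is a mirror reflecting the depths of our souls."},
    # ... one dictionary per row in the examples table
]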

To leverage the OpenAI Da Vinci LLM model, we create the model instance using the create_davinci_model function from utils.py. This model acts as the core engine of our chatbot, enabling it to produce thoughtful and profound responses.

In order to create custom prompts for the LLM model, we utilize the create_example_selector and create_few_shot_prompt_template functions from the utils.py file. These functions help select relevant examples based on the length of the user's input and construct dynamic prompts that combine the user's query with relevant examples.

The Streamlit web app's sidebar is then set up, providing users with information about the application's purpose and inspiration. Within the application's session state, two lists (generated and past) are initialized to store AI-generated responses and user questions, respectively.

To ensure an organized layout, we define two containers (input_container and response_container). The input_container houses the text input box where users can enter their questions. The get_text function is responsible for capturing the user's input.

For generating AI responses, the generate_response function takes the user's prompt, processes it through the Da Vinci LLM model, and produces insightful replies. The AI-generated responses are displayed in the response_container using the message function from the Streamlit Chat library, allowing users to engage in profound philosophical dialogues with the AI assistant. Overall, this setup lays the groundwork for an intellectually stimulating and philosophical chatbot experience.

Creating the utils file

The utils.py file contains utility functions for our application, including connecting to the database, creating the Da Vinci LLM model, and generating responses:

from langchain import PromptTemplate, FewShotPromptTemplate
from langchain.prompts.example_selector import LengthBasedExampleSelector
from langchain.llms import OpenAI
import psycopg2
 
def get_database_examples(host, port, dbname, user, password):
    try:
        conn = psycopg2.connect(
            host=host,
            port=port,
            dbname=dbname,
            user=user,
            password=password
        )
        cursor = conn.cursor()
        cursor.execute("SELECT query, answer FROM examples")
        rows = cursor.fetchall()
        examples = [{"query": row[0], "answer": row[1]} for row in rows]
        cursor.close()
        conn.close()
        return examples
    except psycopg2.Error as e:
        raise Exception(f"Error connecting to the database: {e}")
 
def create_davinci_model():
    return OpenAI(model_name='text-davinci-003')
 
def create_example_selector(examples):
    example_template = """
    User: {query}
    AI: {answer}
    """
 
    example_prompt = PromptTemplate(
        input_variables=["query", "answer"],
        template=example_template
    )
 
    if not examples:
        raise Exception("No examples found in the database.")
 
    return LengthBasedExampleSelector(
        examples=examples,
        example_prompt=example_prompt,
        max_length=50
    )
 
def create_few_shot_prompt_template(example_selector):
    prefix = """The following are excerpts from conversations with a philosophical AI assistant.
    The assistant is a seeker of truth and wisdom, responding with profound questions to know yourself
    in a way that Socrates, Nietzsche, and other great minds would do. Here are some examples:"""
 
    suffix = """
    User: {query}
    AI: """
 
    return FewShotPromptTemplate(
        example_selector=example_selector,
        example_prompt=example_selector.example_prompt,
        prefix=prefix,
        suffix=suffix,
        input_variables=["query"],
        example_separator="\n"
    )
 
def generate_response(davinci, dynamic_prompt_template, prompt):
    response = davinci(dynamic_prompt_template.format(query=prompt))
    return response

The get_database_examples function is responsible for establishing a connection to the PostgreSQL database using the provided credentials (host, port, dbname, user, password). Through this connection, the function executes a query to retrieve the question-answer pairs stored in the examples table. The function then organizes this data into a list of dictionaries, with each dictionary representing an example containing the query (question) and its corresponding answer.

The create_davinci_model function is straightforward, as it initializes and returns the Da Vinci LLM model.
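
If you want to tune the style of the answers, the LangChain OpenAI wrapper also accepts sampling parameters. For instance, a variation (not part of the original code) that raises the temperature for more varied responses:

def create_davinci_model():
    # temperature=0.9 makes the completions more varied and creative
    return OpenAI(model_name='text-davinci-003', temperature=0.9)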

To handle the selection of relevant examples for constructing dynamic prompts, the create_example_selector function plays a crucial role. It takes the list of examples as input and creates an example selector. This selector helps choose relevant examples based on the length of the user's query. By using this selector, the AI assistant can incorporate diverse examples that align with the user's input, leading to more coherent and contextually appropriate responses.
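
To illustrate the selector's behaviour in isolation (a sketch, not part of the application code): max_length=50 caps the combined length, measured in words, of the formatted examples plus the incoming query, so shorter queries leave room for more examples:

selector = create_example_selector(examples)

# A short query leaves room for several examples under max_length...
short = selector.select_examples({"query": "What is truth?"})

# ...while a long query forces the selector to drop examples.
long_query = "I have been wondering for a very long time now " * 5
long = selector.select_examples({"query": long_query})

print(len(short), len(long))  # the first count will be >= the second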

The create_few_shot_prompt_template function is responsible for building the few-shot prompt template. This template includes a custom prefix and suffix to set the tone and style of the philosophical AI assistant. The prefix emphasizes the assistant's role as a "seeker of truth and wisdom" while the suffix provides the formatting for the user's query and AI-generated response. The custom template ensures that the AI assistant's interactions are profound and engaging, resembling the thought-provoking dialogues of historical philosophers like Socrates and Nietzsche.

Finally, the generate_response function is designed to generate the AI's response based on the user's prompt. It takes the Da Vinci LLM model, dynamic prompt template, and the user's input as input parameters. The function uses the LLM model to process the dynamic prompt, blending the user's query with the selected examples, and returns the AI-generated response.

Starting the application

To launch our philosophical AI assistant application with all its components integrated seamlessly, we can use Docker Compose. By executing the command docker-compose --env-file .env up, the Docker Compose tool will orchestrate the entire application deployment process.

The --env-file .env option allows us to specify the environment variables from the .env file, which holds sensitive credentials and configuration details. This ensures that the necessary environment variables, such as the OpenAI API key, are accessible to the application without being hard-coded into the codebase (the database credentials in this example live directly in docker-compose.yml, but they could be moved to the .env file in the same way).
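
A minimal .env file for this setup might look like the following (a placeholder value; keep this file out of version control):

OPENAI_API_KEY=sk-your-key-here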

When the docker-compose up command is initiated, Docker Compose will first build the application's Docker image using the Dockerfile defined in the ./app directory. This image will contain all the required dependencies and configurations for our Streamlit web application and the integration with the Da Vinci LLM model.

Next, Docker Compose will create two services: the app service, which represents our Streamlit web application, and the db service, representing the PostgreSQL database. The app service is configured to run on port 8501, making it accessible through http://localhost:8501 in the browser.

Once the services are up and running, the Streamlit web application will be fully operational, and users can interact with the philosophical AI assistant through the user-friendly interface. When a user enters a philosophical question, the application will use the Da Vinci LLM model, together with the selected examples, to generate insightful and profound responses in the style of great philosophers:

[Image: the chatbot's Streamlit interface answering a philosophical question]

With Docker Compose, our entire application, including the web server, LLM model, and database, will be containerized, enabling seamless deployment across different environments. This approach ensures that the application is easily scalable and portable, allowing users to experience the intellectual exchange with the philosophical AI assistant effortlessly.

Conclusion

In this tutorial, we've built a containerized LLM-powered chatbot application capable of answering deep philosophical questions and responding with profound questions, inspired by philosophers like Socrates and Nietzsche. We used Streamlit as the web application framework, PostgreSQL as the database, and OpenAI's GPT-3.5 model for language processing.

By combining Streamlit, PostgreSQL, and OpenAI's GPT-3.5 model, you've crafted an intellectually stimulating user experience. Your chatbot can answer philosophical inquiries with deep insights and thought-provoking questions, providing users with a unique and engaging interaction.

Feel free to experiment further with the chatbot, add more examples to the database, or explore different prompts for the LLM model to enrich the user experience. As you continue to develop your AI assistant, remember the immense potential these technologies hold for solving real-world challenges and fostering intelligent conversations.
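
For instance, extending the assistant's knowledge base is a single INSERT against the running database (the new pair below is only an illustration):

INSERT INTO examples (query, answer)
VALUES ('What is happiness?', 'Happiness is the quiet hum of a soul at peace with its questions.');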

Author Bio:

Alan Bernardo Palacio is a data scientist and an engineer with vast experience in different engineering fields. His focus has been the development and application of state-of-the-art data products and algorithms in several industries. He has worked for companies such as Ernst & Young and Globant, and now holds a data engineer position at Ebiquity Media, helping the company create a scalable data pipeline. Alan graduated with a Mechanical Engineering degree from the National University of Tucuman in 2015, founded startups, and later earned a Master's degree from the Faculty of Mathematics at the Autonomous University of Barcelona in 2017. Originally from Argentina, he now works and resides in the Netherlands.
