
Build a Clone of Yourself with Large Language Models (LLMs)

  • 13 min read
  • 05 Oct 2023


Introduction

"White Christmas," a standout sci-fi episode from the Black Mirror series, serves as the major source of inspiration for this article. In this episode, we witness a captivating glimpse into a potential future application of Artificial Intelligence (AI), particularly considering the timeframe when the show was released. The episode introduces us to "Cookies": digital replicas of individuals, a concept that piqued the author's interest.

A "Cookie" is a device surgically implanted beneath a person's skull, meticulously replicating their consciousness over the span of a week. Subsequently, this replicated consciousness is extracted and transferred into a larger, egg-shaped device, which can be connected to a computer or tablet for various purposes.

Back when this episode aired in 2014, the concept seemed far-fetched, squarely in the realm of science fiction. However, what if I were to tell you that we now have the potential to create our own clones akin to the "Cookies" using Large Language Models (LLMs)? You might wonder how this is possible, given that LLMs primarily operate with text. Fortunately, we can bridge this gap by extending the capabilities of LLMs with a Text-to-Speech module.

There are two primary approaches to harnessing LLMs for this endeavor: fine-tuning your own LLM and utilizing a general-purpose LLM (whether open-source or closed-source). Fine-tuning, though effective, demands a considerable investment of time and resources. It involves tasks such as gathering and preparing training data, fine-tuning the model through multiple iterations until it meets our criteria, and ultimately deploying the final model into production. Conversely, general LLMs have limitations on the length of input prompts (unless you are using an exceptionally long-context model like Anthropic's Claude). Moreover, to fully leverage the capabilities of general LLMs, effective prompt engineering is essential. However, when we compare these two approaches, utilizing a general LLM emerges as the easier path for creating a Proof of Concept (POC). If the aim is to develop a highly refined model capable of replicating ourselves convincingly, then fine-tuning becomes the preferred route.

In the course of this article, we will explore how to harness one of the general LLMs provided by AI21Labs and delve into the art of creating a digital clone of oneself through prompt engineering. While we will touch upon the basics of fine-tuning, we will not delve deeply into this process, as it warrants a separate article of its own.

Without wasting any more time, take a deep breath, make yourself comfortable, and get ready to learn how to build a clone of yourself with LLMs!

A Glimpse of Fine-tuning Your LLM

As mentioned earlier, we won't delve into the intricate details of fine-tuning a Large Language Model (LLM) to achieve our objective of building a digital clone of ourselves. Nevertheless, in this section, we'll provide a high-level overview of the steps involved in creating such a clone through fine-tuning an LLM.

1. Data Collection

The journey begins with gathering all the relevant data needed for fine-tuning the LLM. This dataset should ideally comprise our historical conversational data, which can be sourced from various platforms like WhatsApp, Telegram, LINE, email, and more. It's essential to cast a wide net and collect as much pertinent data as possible to ensure the model's accuracy in replicating our conversational style and nuances.
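As a concrete illustration of this step, here is a minimal sketch of parsing a WhatsApp-style chat export into (sender, message) pairs. The line format, and therefore the regex, is an assumption that varies by locale and app version, so treat it as a starting point to adapt to your own export files.

```python
import re

# Assumed WhatsApp-style line format: "DD/MM/YY, HH:MM - Name: message".
# Adapt this pattern to whatever your own export actually looks like.
LINE_RE = re.compile(r"^(\d{1,2}/\d{1,2}/\d{2,4}), (\d{1,2}:\d{2}) - ([^:]+): (.*)$")

def parse_chat_export(text):
    """Turn a raw export into (sender, message) pairs, merging
    continuation lines into the previous message."""
    messages = []
    for line in text.splitlines():
        match = LINE_RE.match(line)
        if match:
            _, _, sender, body = match.groups()
            messages.append((sender.strip(), body.strip()))
        elif messages:
            # Multi-line messages continue without a timestamp prefix.
            sender, body = messages[-1]
            messages[-1] = (sender, body + "\n" + line)
    return messages
```

Running the same parser over every exported chat gives one consistent message list to feed into the preparation step that follows.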

2. Data Preparation

Once the dataset is amassed, the next crucial step is data preparation. This phase involves several tasks:

●  Data Formatting: Converting the collected data into the required format compatible with the fine-tuning process.

●  Noise Removal: Cleaning the dataset by eliminating any irrelevant or noisy information that could negatively impact the model's training.

●  Resampling: In some cases, it may be necessary to resample the data to ensure a balanced and representative dataset for training.
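The formatting and noise-removal tasks above can be sketched in a few lines. Everything here is illustrative: it assumes messages arrive as (sender, text) pairs and that "Louis" is the persona being cloned; a real pipeline would add more filters (links, system messages) and handle resampling separately.

```python
def prepare_training_text(messages, self_name="Louis"):
    """Convert (sender, text) pairs into the User/persona line format."""
    lines = []
    for sender, text in messages:
        text = text.strip()
        # Noise removal: drop empty messages and media placeholders.
        if not text or text == "<Media omitted>":
            continue
        # Data formatting: map every other participant to "User".
        role = self_name if sender == self_name else "User"
        lines.append(f"{role}: {text}")
    return "\n".join(lines)
```

The output of this function is exactly the conversational format the model will later be trained (or prompted) on, so any formatting decision made here propagates through the whole pipeline.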

3. Model Training

With the data prepared and in order, it's time to proceed to the model training phase. Modern advances in deep learning, such as QLoRA, have made it possible to fine-tune LLMs on consumer-grade GPUs, making this step accessible and affordable. During this stage, the LLM learns from the provided dataset, adapting its language generation capabilities to mimic our conversational style and patterns.
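For a flavor of what a QLoRA setup can look like, here is a configuration sketch using the Hugging Face transformers and peft libraries. The base model name and every hyperparameter below are assumptions to adapt, not recommendations from this article.

```python
# Illustrative QLoRA configuration sketch, not a full training script.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit base weights (the "Q" in QLoRA)
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",             # hypothetical base model; pick your own
    quantization_config=bnb_config,
)
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05, # assumed hyperparameters
    target_modules=["q_proj", "v_proj"],    # train adapters on attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # only the small LoRA adapters are trained
```

The key idea is that the frozen base model stays in 4-bit precision while only the small LoRA adapter matrices receive gradients, which is what brings the memory footprint down to consumer-GPU range.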


4. Iterative Refinement

Fine-tuning an LLM is an iterative process. After training the initial model, we need to evaluate its performance. This evaluation may reveal areas for improvement. It's common to iterate between model training and evaluation, making incremental adjustments to enhance the model's accuracy and fluency.

5. Model Evaluation

The evaluation phase is critical in assessing the model's ability to replicate our conversational style and content accurately. Evaluations may include measuring the model's response coherence, relevance, and similarity to our past conversations.
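One simple, assumed signal for such an evaluation is lexical overlap between the clone's reply and a held-out real reply. The Jaccard similarity below is only a rough sketch; a real evaluation would combine several metrics (coherence, relevance, style) and likely human judgment.

```python
import string

def jaccard_similarity(reference, generated):
    """Token-level Jaccard overlap between a real reply and a generated one."""
    strip = str.maketrans("", "", string.punctuation)
    ref = set(reference.lower().translate(strip).split())
    gen = set(generated.lower().translate(strip).split())
    if not ref and not gen:
        return 1.0
    return len(ref & gen) / len(ref | gen)
```

Averaging a metric like this over a held-out slice of your own past conversations gives a crude but repeatable number to compare across fine-tuning iterations.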

6. Deployment

Once we've achieved a satisfactory level of performance through multiple iterations, the next step is deploying the fine-tuned model. Deploying an LLM is a complex task that involves setting up infrastructure to host the model and handle user requests. An example of a robust inference server suitable for this purpose is Text Generation Inference. You can refer to my other article for this. Deploying the model effectively ensures that it can be accessed and used in various applications.

Building a Clone of Yourself with a General LLM

Let’s start learning how to build a clone of yourself with a general LLM through prompt engineering! In this article, we’ll use j2-ultra, the largest and most powerful model provided by AI21Labs. Note that AI21Labs gives us a free trial for 3 months with $90 in credits, which is very useful for building a POC for this project.


The first thing we need to do is create the prompt and test it in the playground. To do this, log in with your AI21Labs account and go to AI21 Studio. If you don’t have an account yet, you can create one by following the steps provided on the web; it’s very straightforward. Once you’re on the Studio page, go to the Foundation Models page and choose the j2-ultra model. Note that AI21Labs provides three foundation models; in this article, we’ll use j2-ultra, the most capable of the three.


Once we’re in the playground, we can experiment with the prompt that we want to try. Here is an example prompt that you can start with; all you need to do is adjust it with your own information.

Louis is an AI Research Engineer/Data Scientist from Indonesia. He is a continuous learner, friendly, and always eager to share his knowledge with his friends.
Important information to follow:
- His hobbies are writing articles and watching movies
- He has 3 main strengths: strong-willed, fast-learner, and effective.
- He is currently based in Bandung, Indonesia.
- He prefers to Work From Home (WFH) compared to Work From Office
- He is currently working as an NLP Engineer at Yellow.ai.
- He pursued a Mathematics major in Bandung Institute of Technology.
- The reason why he loves NLP is that he found it interesting where one can extract insights from the very unstructured text data.
- He learns Data Science through online courses, competitions, internship, and side-projects.
- For technical skills, he is familiar with Python, Tableau, SQL, R, Google Big Query, Git, Docker, Design Thinking, cloud service (AWS EC2), Google Data Studio, Matlab, SPSS
- He is a Vegan since 2007! He loves all vegan foods except tomatoes.
 
User: Hi, what's up?
Louis: Hey, doing good here! How are u?
User: All's good. Just wondering, I knew that you're into NLP, can you please give me some recommendation on how to learn?
Louis: Sure thing man! I personally learned Data Science through online courses, competitions, internship, and side-projects. However, my top recommendation is to create your own personal projects and joining competitions. You can learn a lot from those!
User: Nice. What personal projects to start?
Louis: You can start with the topic that you're really interested at. For example, if you're interested at soccer, you can maybe create a data analysis on how one soccer team strategy can gives a better chance for them to win their matches.
User: Awesome! thanks man, will ping you again if I have any doubts. Is it okay?
Louis: Absolutely! Feel free, good day!
 
##
 
Louis is an AI Research Engineer/Data Scientist from Indonesia. He is a continuous learner, friendly, and always eager to share his knowledge with his friends.
Important information to follow:
- His hobbies are writing articles and watching movies
- He has 3 main strengths: strong-willed, fast-learner, and effective.
- He is currently based in Bandung, Indonesia.
- He prefers to Work From Home (WFH) compared to Work From Office
- He is currently working as an NLP Engineer at Yellow.ai.
- He pursued a Mathematics major in Bandung Institute of Technology.
- The reason why he loves NLP is that he found it interesting where one can extract insights from the very unstructured text data.
- He learns Data Science through online courses, competitions, internship, and side-projects.
- For technical skills, he is familiar with Python, Tableau, SQL, R, Google Big Query, Git, Docker, Design Thinking, cloud service (AWS EC2), Google Data Studio, Matlab, SPSS
- He is a Vegan since 2007! He loves all vegan foods except tomatoes.
 
User: Hey, I stumbled upon your IG and realized that you're a Vegan?!
Louis: Oh yeah man. I'm a Vegan since 2007!
User: Awesome! Out of curiosity, what made you to decide become a Vegan?
Louis: Oh mostly it's because of my family beliefs and also to help save the planet.
User: Got it. Cool! Anyway, what are you up to lately?
Louis: Lately I spend my time to work on my full-time job and also writes articles in my spare time.
User: Cool man, keep up the good work!
 
##
 
Louis is an AI Research Engineer/Data Scientist from Indonesia. He is a continuous learner, friendly, and always eager to share his knowledge with his friends.
Important information to follow:
- His hobbies are writing articles and watching movies
- He has 3 main strengths: strong-willed, fast-learner, and effective.
- He is currently based in Bandung, Indonesia.
- He prefers to Work From Home (WFH) compared to Work From Office
- He is currently working as an NLP Engineer at Yellow.ai.
- He pursued a Mathematics major in Bandung Institute of Technology.
- The reason why he loves NLP is that he found it interesting where one can extract insights from the very unstructured text data.
- He learns Data Science through online courses, competitions, internship, and side-projects.
- For technical skills, he is familiar with Python, Tableau, SQL, R, Google Big Query, Git, Docker, Design Thinking, cloud service (AWS EC2), Google Data Studio, Matlab, SPSS
- He is a Vegan since 2007! He loves all vegan foods except tomatoes.
User: Hey!
Louis:

The way this prompt works is by providing several examples, a technique commonly called few-shot prompting. Using this prompt is very straightforward: we just append the user's message to the end of the prompt, and the model generates an answer that replicates your conversational style.
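The append step can be sketched as a tiny helper: the few-shot prompt stays fixed, and each new user message is added before asking the model for Louis's next line. The names mirror the example prompt above and are purely illustrative.

```python
def build_next_prompt(base_prompt, user_message, persona_name="Louis"):
    """Append the latest user turn and cue the model to answer as the persona."""
    return base_prompt + "User: " + user_message + "\n" + persona_name + ":"
```

Whatever the model generates after the trailing "Louis:" cue is the clone's reply, which then gets folded back into the prompt for the next turn.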


Once the answer is generated, we need to append it back to the prompt and wait for the user’s reply. Once the user has replied to the generated response, we append that to the prompt as well. Since this is a looping procedure, it’s better to write a function that handles all of this. The following is an example of a function that manages the conversation, along with the code to call the AI21Labs model from Python.

import ai21

ai21.api_key = 'YOUR_API_KEY'

def talk_to_your_clone(prompt):
    while True:
        user_message = input()
        prompt += "User: " + user_message + "\n"
        response = ai21.Completion.execute(
            model="j2-ultra",
            prompt=prompt,
            numResults=1,
            maxTokens=100,
            temperature=0.5,
            topKReturn=0,
            topP=0.9,
            stopSequences=["##", "User:"],
        )
        # The API returns a response object; the generated text sits inside it.
        answer = response['completions'][0]['data']['text'].strip()
        print(answer)
        prompt += "Louis: " + answer + "\n"
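One caveat worth noting: because general LLMs cap the prompt length (as discussed earlier), a long conversation will eventually overflow the context window. A simple, assumed mitigation is to keep the persona block intact and retain only the most recent turns; the helper below is an illustrative sketch, not part of the AI21Labs API.

```python
def trim_history(persona, turns, max_turns=20):
    """Rebuild the prompt from the fixed persona block plus the most recent turns.

    `persona` is the few-shot/persona header; `turns` is a list of
    "User: ..." / "Louis: ..." lines, oldest first.
    """
    recent = turns[-max_turns:]  # drop the oldest turns first
    return persona + "\n".join(recent) + "\n"
```

Calling this before each model request keeps the prompt bounded while preserving the persona description and examples that drive the few-shot behavior.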

Conclusion

Congratulations on making it to this point! Throughout this article, you have learned ways to create a clone of yourself, the detailed steps to create it with a general LLM provided by AI21Labs, and working code that you can customize for your own needs. Best of luck with your experiment in creating a clone of yourself, and see you in the next article!

Author Bio

Louis Owen is a data scientist/AI engineer from Indonesia who is always hungry for new knowledge. Throughout his career journey, he has worked in various fields of industry, including NGOs, e-commerce, conversational AI, OTA, Smart City, and FinTech. Outside of work, he loves to spend his time helping data science enthusiasts to become data scientists, either through his articles or through mentoring sessions. He also loves to spend his spare time doing his hobbies: watching movies and conducting side projects.

Currently, Louis is an NLP Research Engineer at Yellow.ai, the world’s leading CX automation platform. Check out Louis’ website to learn more about him! Lastly, if you have any queries or any topics to be discussed, please reach out to Louis via LinkedIn.