OpenAI API Cookbook

Using the Chat Log to modify the model’s behavior

In this recipe, we will learn how to modify the Chat Log and how doing so affects the completion response we receive from the model. This matters because developers often find it the easiest way to adjust a model's behavior, much like fine-tuning, without actually needing to create a new model. It also follows a prompt engineering best practice: providing the model with suitable examples.

How to do it…

We can add examples of prompts and responses to the Chat Log to modify the model’s behavior. Let’s observe this with the following steps:

  1. Navigate to the OpenAI Playground. If you already have messages populated, refresh the page to start afresh.
  2. In the System Message, type in the following: You are an assistant that creates marketing slogans based on descriptions of companies. Here, we are clearly instructing the model about its role and context.
  3. In the Chat Log, populate the USER message with the following: A company that makes ice cream.

    Select the Add message button located underneath the USER label to add a new message. Ensure that the label of the message says ASSISTANT. If it does not, select the label to toggle between USER and ASSISTANT.

    Now, type the following into the ASSISTANT message: Sham - the ice cream that never melts!

  4. Select the Add message button and ensure that the label of the message now says USER. Type the following into the USER message: A company that produces comedy movies.
  5. Select the Add message button and ensure that the label of the message says ASSISTANT. Type the following into the ASSISTANT message: Sham - the best way to tickle your funny bone!
  6. Repeat steps 4 and 5 once more with the following USER and ASSISTANT messages, respectively: A company that provides legal assistance to businesses and Sham - we know business law! At this point, you should see the following:
Figure 1.5 – The OpenAI Playground with Chat Logs populated

  7. Finally, select the Add message button and create a USER message with the following: A company that writes engaging mystery novels.
  8. Select the Submit button at the bottom of the page.
  9. You should now see a completion response from OpenAI. In my case (Figure 1.6), the response is as follows:
    Sham – unravel the secrets with our captivating mysteries!

    Yours may be different, but the response you see will almost certainly start with "Sham –" and end with an exclamation point. In this way, we have, in effect, trained the model to give us completion responses only in that format.

Figure 1.6 – The OpenAI Playground with completion, after changing the Chat Log
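
For readers who like to see the equivalent in code, the following is a minimal sketch of the same Chat Log expressed as a list of messages sent to the Chat Completions endpoint. It assumes the openai Python package (v1.x), an OPENAI_API_KEY environment variable, and gpt-3.5-turbo as an illustrative model name; none of these are prescribed by this recipe, and the API itself is covered properly in later chapters.

    # Minimal sketch: the System Message and the few-shot USER/ASSISTANT pairs
    # from the Playground, expressed as the messages list of a chat completion.
    # Assumes the openai v1.x package and an OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    messages = [
        {"role": "system", "content": "You are an assistant that creates marketing slogans based on descriptions of companies."},
        # Few-shot examples, copied from the Chat Log built in the steps above
        {"role": "user", "content": "A company that makes ice cream."},
        {"role": "assistant", "content": "Sham - the ice cream that never melts!"},
        {"role": "user", "content": "A company that produces comedy movies."},
        {"role": "assistant", "content": "Sham - the best way to tickle your funny bone!"},
        {"role": "user", "content": "A company that provides legal assistance to businesses."},
        {"role": "assistant", "content": "Sham - we know business law!"},
        # The new prompt we want a slogan for
        {"role": "user", "content": "A company that writes engaging mystery novels."},
    ]

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name, for illustration only
        messages=messages,
    )
    print(response.choices[0].message.content)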

How it works…

As we learned in the Running a completion request in the OpenAI Playground recipe, ChatGPT and the underlying GPT models are built on a transformer architecture, which processes input and generates responses based solely on the chat history it is given. The model does not have an ongoing memory of past interactions or a stored understanding of context outside the immediate conversation. As a result, the Chat Log has a significant impact on the model's completions: when the model receives a prompt, it takes into account the System Message, all the preceding messages in the Chat Log, and the most recent prompt.

We can observe this in the Playground by providing our own sets of User and Assistant messages, and then see how the model changes its completion, as we did in the preceding steps.

In particular, the model detected two patterns in the Chat Log and generated its completion to follow them:

  • The model detected that all the manually entered Assistant completions begin with the word Sham, so it added that prefix to its own completion
  • The model identified that all the slogans end with an exclamation point, so it ended its completion with an exclamation point as well

Overall, the Chat Log can be used to guide the model toward the types of completions that the user wants. In addition, the Chat Log helps the model understand and maintain the context of the broader conversation.

For example, if you added a User message with What is an airplane? and followed it up with another User message of How do they fly?, the model would understand that they refers to the airplane because of the Chat Log.
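
As a rough sketch of the same idea in code, under the same assumptions as the earlier sketch (and with an illustrative assistant reply that is not from the book), keeping the earlier question and answer in the messages list is what allows the model to resolve they:

    # Sketch of a follow-up question that relies on the Chat Log for context.
    # The assistant's first reply below is illustrative, not a real completion.
    from openai import OpenAI

    client = OpenAI()

    messages = [
        {"role": "user", "content": "What is an airplane?"},
        {"role": "assistant", "content": "An airplane is a powered, fixed-wing aircraft..."},
        {"role": "user", "content": "How do they fly?"},  # "they" resolves to airplanes via the Chat Log
    ]

    response = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    print(response.choices[0].message.content)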

Prompt engineering

The Chat Log plays a pivotal role in influencing the model’s completions, and this observation is a glimpse into the broader realm of prompt engineering. Prompt engineering is a technique where the input or prompt given to a model is carefully crafted to guide the model towards producing a desired output.

Within the sphere of prompt engineering, there are a few notable concepts, as follows:

  • Zero-shot prompting: Here, the model is given a task that it hasn’t been explicitly trained on. It relies entirely on its pre-existing knowledge and training to generate a relevant response. In essence, it’s like asking the model to perform a task cold, without any prior examples.
  • Few-shot prompting: This involves providing the model with a small number of examples related to the desired task. The aim is to nudge the model into recognizing the pattern or context and then generating a relevant completion based on the few examples given, just as we did with the slogan examples; see the sketch after this list.
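
To make the contrast concrete, here is a rough sketch of both styles applied to the slogan task, under the same assumptions as the earlier sketches (openai v1.x client, gpt-3.5-turbo as an illustrative model name):

    # Zero-shot: only the instruction and the task, with no examples.
    from openai import OpenAI

    client = OpenAI()
    system = "You are an assistant that creates marketing slogans based on descriptions of companies."

    zero_shot = [
        {"role": "system", "content": system},
        {"role": "user", "content": "A company that writes engaging mystery novels."},
    ]

    # Few-shot: the same task, but the Chat Log is seeded with example pairs
    # that demonstrate the desired "Sham - ...!" format.
    few_shot = [
        {"role": "system", "content": system},
        {"role": "user", "content": "A company that makes ice cream."},
        {"role": "assistant", "content": "Sham - the ice cream that never melts!"},
        {"role": "user", "content": "A company that produces comedy movies."},
        {"role": "assistant", "content": "Sham - the best way to tickle your funny bone!"},
        {"role": "user", "content": "A company that writes engaging mystery novels."},
    ]

    for name, messages in [("zero-shot", zero_shot), ("few-shot", few_shot)]:
        response = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
        print(name, "->", response.choices[0].message.content)

In practice, the few-shot version is far more likely to follow the Sham - ...! pattern, while the zero-shot version is free to choose any slogan style.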

Understanding these nuances in how prompts can be engineered allows users to leverage ChatGPT’s capabilities more effectively, tailoring interactions to their specific needs.

Overall, the Chat Log (and the System Message, as we learned in the earlier recipe) is a great low-touch method of aligning the completion responses from OpenAI to a desired target, without needing to fine-tune the model itself. Now that we’ve used the Playground to test prompts and completions, it’s time to use the actual OpenAI API.
