
Customize ChatGPT for Specific Tasks Using Effective Prompts – Shot Learning

  • 5 min read
  • 02 Jun 2023


This article is an excerpt from the book Modern Generative AI with ChatGPT and OpenAI Models, by Valentina Alto. This book will provide you with insights into the inner workings of LLMs and guide you through creating your own language models.

 

We know for a fact that OpenAI models, and hence also ChatGPT, come pre-trained: they have been trained on a huge amount of data, and their (billions of) parameters have been configured accordingly. However, this doesn’t mean that those models can’t learn anymore. One way to customize an OpenAI model and make it more capable of addressing specific tasks is fine-tuning.

Fine-tuning is a proper training process that requires a training dataset, compute power, and some training time (depending on the amount of data and the compute instances used). That is why it is worth testing another method for making our model more skilled at specific tasks: shot learning.

The idea is to let the model learn from a few simple examples rather than from an entire dataset. Those examples are samples of the way we would like the model to respond, so that the model learns not only the content but also the format, style, and taxonomy to use in its response. Furthermore, shot learning occurs directly via the prompt (as we will see in the following scenarios), so the whole experience is less time-consuming and easier to perform.

The number of examples provided determines the level of shot learning we are referring to. In other words, we talk about zero-shot if no example is provided, one-shot if one example is provided, and few-shot if several examples (typically three to five) are provided.

Let’s focus on each of those scenarios:

 

Zero-shot learning

 

In this type of learning, the model is asked to perform a task for which it has seen no training examples. It must rely on prior knowledge or general information about the task to complete it. For example, a zero-shot approach could be asking the model to generate a description, as defined in my prompt:

[Figure: a zero-shot prompt asking ChatGPT to generate a description, with no examples provided]
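The same zero-shot request can also be sent programmatically. Here is a minimal sketch, assuming the openai Python SDK (v1.x style) and an OPENAI_API_KEY environment variable; the product in the prompt is a made-up example, not the one from the figure:

```python
# Zero-shot: no examples in the prompt; the model relies on prior knowledge alone.
# Sketch assumes the openai Python SDK (v1.x) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "Generate a product description for a wireless espresso machine.",
        },
    ],
)
print(response.choices[0].message.content)
```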

 

One-shot learning

 

In this type of learning, the model is given a single example of each new task it is asked to perform. The model must use its prior knowledge to generalize from this single example to perform the task. If we consider the preceding example, I could provide my model with a prompt-completion example before asking it to generate a new one:

[Figure: a one-shot prompt, with a single prompt-completion example followed by a new request]
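Programmatically, the single example can be passed as a prior user/assistant exchange before the new request. A sketch under the same assumptions as above (openai SDK v1.x; the example texts are invented):

```python
# One-shot: one worked prompt-completion example precedes the actual task.
from openai import OpenAI

client = OpenAI()

messages = [
    # The single example the model should imitate (content is hypothetical).
    {"role": "user", "content": "Generate a product description for a stainless-steel kettle."},
    {"role": "assistant", "content": "Sleek, fast-boiling, and built to last, this kettle..."},
    # The new task, to be answered in the same style and format.
    {"role": "user", "content": "Generate a product description for a wireless espresso machine."},
]

response = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(response.choices[0].message.content)
```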

 

Note that the way I provided an example was similar to the structure used for fine-tuning:

 

[Figure: the example formatted as a prompt-completion pair, mirroring the structure used for fine-tuning]
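For comparison, OpenAI’s legacy fine-tuning endpoint expected training records as JSONL prompt-completion pairs, which is exactly the pairing the in-prompt example mirrors. A hypothetical record might look like this (the separator and texts are illustrative conventions, not the book’s data):

```python
# A single (hypothetical) training record in the legacy JSONL prompt-completion
# format. One-shot prompting reproduces this pairing inline in the prompt instead.
import json

record = {
    "prompt": "Generate a product description for a stainless-steel kettle.\n\n###\n\n",
    "completion": " Sleek, fast-boiling, and built to last, this kettle...",
}
print(json.dumps(record))
```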

 

Few-shot learning

 

In this type of learning, the model is given a small number of examples (typically between 3 and 5) of each new task it is asked to perform. The model must use its prior knowledge to generalize from these examples to perform the task. Let’s continue with our example and provide the model with further examples:

 

[Figure: a few-shot prompt with several prompt-completion examples followed by a new request]
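Extending the previous sketch, a few-shot prompt simply stacks several prompt-completion pairs before the new request (again assuming the openai SDK v1.x; all examples are invented):

```python
# Few-shot: several worked examples precede the actual task.
from openai import OpenAI

client = OpenAI()

examples = [
    ("Generate a product description for a stainless-steel kettle.",
     "Sleek, fast-boiling, and built to last, this kettle..."),
    ("Generate a product description for a cast-iron skillet.",
     "Heavy-duty heat retention for a perfect sear, this skillet..."),
    ("Generate a product description for a ceramic pour-over set.",
     "Slow mornings deserve better coffee, and this pour-over set..."),
]

messages = []
for prompt, completion in examples:
    messages.append({"role": "user", "content": prompt})
    messages.append({"role": "assistant", "content": completion})

# The new task the model should generalize to.
messages.append({"role": "user",
                 "content": "Generate a product description for a wireless espresso machine."})

response = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(response.choices[0].message.content)
```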

 


The nice thing about few-shot learning is that you can also control how the model’s output is presented: you can provide the model with a template of the way you would like the output to look. For example, consider the following tweet classifier:

 

[Figure: a few-shot tweet classifier, with labeled example tweets followed by list-formatted tweets and labels]

 

Let’s examine the preceding figure. First, I provided ChatGPT with some examples of labeled tweets. Then, I provided the same tweets in a different data format (a list), along with the labels in the same format. Finally, I provided unlabeled tweets in list format, so that the model returns a list of labels.
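Rebuilt as a single prompt, that three-step structure might look like the following sketch (openai SDK v1.x assumed; the tweets and labels are invented):

```python
# Few-shot tweet classifier: labeled examples, then the same data in list format,
# then unlabeled tweets in list format so the model answers with a list of labels.
from openai import OpenAI

client = OpenAI()

prompt = """Tweet: "I loved the new Batman movie!"
Sentiment: Positive
Tweet: "My flight got cancelled again."
Sentiment: Negative

Tweets: ["I loved the new Batman movie!", "My flight got cancelled again."]
Sentiments: ["Positive", "Negative"]

Tweets: ["Best pizza in town!", "The app keeps crashing.", "Meh, it was fine."]
Sentiments:"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # expected shape: ["Positive", "Negative", "Neutral"]
```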

 

Understanding Prompt Design

 

The output format is not the only thing you can teach your model, though. You can also teach it to act and speak with particular jargon and taxonomy, which can help you obtain the desired result with the desired wording:

 

[Figure: examples teaching the model to respond with a particular jargon and taxonomy]
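As a sketch of how that might be set up (the support-ticket taxonomy below is entirely hypothetical, chosen only to show jargon being taught by example):

```python
# Teaching jargon/taxonomy by example: the assistant turns define the vocabulary
# and format the model should reuse for the new input.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "user", "content": "Customer says the app crashes on login."},
    {"role": "assistant", "content": "Severity: P1 | Area: Auth | Action: escalate to on-call"},
    {"role": "user", "content": "Customer asks how to change their avatar."},
    {"role": "assistant", "content": "Severity: P4 | Area: Profile | Action: send KB article"},
    # The model should classify the new ticket using the same taxonomy.
    {"role": "user", "content": "Customer reports payments timing out at checkout."},
]

response = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(response.choices[0].message.content)
```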

Or, imagine you want to build a chatbot called Simpy that is very funny and sarcastic in its responses:

 

[Figure: the Simpy chatbot responding in a funny, sarcastic tone]

We have to say, with this last one, ChatGPT nailed it.
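In API terms, a persona like Simpy is typically set with a system message; the wording below is my own guess at such a prompt, not the book’s exact one:

```python
# A persona prompt for "Simpy": the system message fixes the tone for every reply.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You are Simpy, a chatbot that answers every question "
                    "helpfully but with relentless sarcasm and dry humor."},
        {"role": "user", "content": "What's the weather usually like in London?"},
    ],
)
print(response.choices[0].message.content)
```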

Summary

 

Shot-learning possibilities are limitless (and often more useful than Simpy); it’s only a matter of testing and a little patience in finding the proper prompt design.

As mentioned previously, it is important to remember that these forms of learning are different from traditional supervised learning, as well as from fine-tuning. In few-shot learning, the goal is to enable the model to learn from very few examples and to generalize from them to new tasks.

About the Author

 

Valentina Alto graduated in Data Science in 2021. She has been working at Microsoft as an Azure Solutions Specialist since 2020 and, since 2022, has focused on Data & AI workloads within the Manufacturing and Pharmaceutical industries. She works closely with system integrators on customer projects to deploy cloud architectures, with a focus on the data lakehouse and DWH, data integration and engineering, IoT and real-time analytics, Azure Machine Learning, Azure Cognitive Services (including Azure OpenAI Service), and Power BI for dashboarding. She holds a BSc in Finance and an MSc in Data Science from Bocconi University, Milan, Italy. Since her academic years, she has been writing tech articles on statistics, machine learning, deep learning, and AI for various publications. She has also written a book about the fundamentals of machine learning with Python.

You can connect with Valentina on: