
Automatic Prompt Engineering with Prompt-Royale

  • 8 min read
  • 18 Sep 2023



Introduction

AI has never been more accessible than it is now, since the launch of ChatGPT. With generative AI, people can build their own AI engine simply by giving commands in natural language. There is no need to know how to code, no need to prepare training data, and no need to tune model hyperparameters. All we need to do to build our own AI system is give commands, a practice more widely known as prompt engineering.

Prompt engineering is more of an art than a science, and there are many ways to do it. The simplest form is called zero-shot prompting, where the user directly gives a command to the Large Language Model (LLM). For example: “Write an acrostic poem in Hindi” or “Write a 7-day itinerary for Bali”.
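
If you want to try this programmatically, here is a minimal zero-shot sketch using the OpenAI Python SDK (v1.x); the SDK and model name are my own choices for illustration, not something the technique requires:

```
# Minimal zero-shot prompting sketch; assumes OPENAI_API_KEY is set
# in the environment and the openai (v1.x) package is installed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write a 7-day itinerary for Bali."}],
)
print(response.choices[0].message.content)
```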

Another prompting technique is called few-shot prompting, where we give several examples of the expected output inside the prompt itself. Let's say we want to use an LLM to do sentiment analysis. We can write a prompt like the following:

You are an expert in performing sentiment analysis. You can only return the output with 3 options: “negative”, “neutral”, and “positive”.
Example 1: I love this product! It works perfectly.
Sentiment: positive
Example 2: The weather today is terrible. It's raining non-stop.
Sentiment: negative
Example 3: I’m feeling sleepy
Sentiment: neutral
Text: Attending the concert last night was a dream come true. The music was incredible!
Sentiment:
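
With a chat-style API, the same few-shot pattern can be expressed as alternating user/assistant turns. A minimal sketch, again assuming the OpenAI Python SDK and gpt-3.5-turbo:

```
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Few-shot examples are passed as alternating user/assistant turns so the
# model can infer the expected output format.
messages = [
    {"role": "system", "content": 'You are an expert in performing sentiment analysis. You can only return the output with 3 options: "negative", "neutral", and "positive".'},
    {"role": "user", "content": "I love this product! It works perfectly."},
    {"role": "assistant", "content": "positive"},
    {"role": "user", "content": "The weather today is terrible. It's raining non-stop."},
    {"role": "assistant", "content": "negative"},
    {"role": "user", "content": "I'm feeling sleepy"},
    {"role": "assistant", "content": "neutral"},
    {"role": "user", "content": "Attending the concert last night was a dream come true. The music was incredible!"},
]

response = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(response.choices[0].message.content)  # expected: "positive"
```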

A more sophisticated way to do prompt engineering is Chain-of-Thought (CoT) prompting. In this technique, we prompt the LLM to give a step-by-step explanation of how it arrives at the final answer. The technique is widely adopted by the AI community since it gives better output in many cases. The drawback is that it increases the number of generated tokens, which in turn increases latency.
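
A popular zero-shot variant of CoT simply appends a cue such as “Let's think step by step” to the question. The example below is illustrative:

```
question = (
    "A cafe sells coffee for $3 and tea for $2. "
    "If I buy 2 coffees and 3 teas, how much do I pay?"
)
# The CoT cue asks the model to reason before answering
# (here the expected reasoning is 2 * 3 + 3 * 2 = 12 dollars).
cot_prompt = question + "\nLet's think step by step, then state the final answer."
print(cot_prompt)
```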

There are many more prompting techniques out there. Choosing the right technique, or even doing the prompt engineering itself, is not an easy task: we usually need many iterations before we find the best prompt for our use case.

In this article, I’ll guide you through automatic prompt engineering, which can save a lot of the time we spend crafting the best prompt for our use case. We’ll discuss two popular automatic prompt engineering frameworks, GPT-Prompt-Engineer and Prompts-Royale, and dive deeper into Prompts-Royale. Finally, there is a dedicated section on how to install and use prompts-royale.

Without wasting any more time, let’s take a deep breath, make yourselves comfortable, and be ready to learn how to perform automatic prompt engineering!

Automatic Prompt Generator Frameworks

The two most popular automatic prompt generator frameworks are GPT-Prompt-Engineer and Prompts-Royale. Let’s start with the first one.

GPT-Prompt-Engineer [github] is a well-known GitHub repository with more than 4.5k stars. It can be used to automatically generate the best prompt: you simply input a task description along with several test cases, and the system generates, tests, and ranks several prompt variations with the goal of finding the best among them. The steps to use this framework are very straightforward:

  1. Define your use case and test cases.
  2. Choose how many prompts to generate.
  3. The system generates a list of potential prompts, then tests and rates their performance.
  4. The final evaluation scores are printed in a table.

There is no UI available for this package, so it might not be very appealing to non-coders. However, there are two ready-to-use Google Colab notebooks. The first notebook can be used for general tasks other than classification; an Elo rating is used to pick the best prompt among the candidates. The second notebook is created specifically for classification tasks, where evaluation is conducted against the available ground truth.
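
For intuition, this is the kind of Elo update that such pairwise prompt battles rely on; a minimal sketch in Python, not code taken from the repository:

```
# Illustrative Elo update after one prompt-vs-prompt battle.
# This is a generic sketch, not code from the GPT-Prompt-Engineer repository.
def elo_update(r_winner: float, r_loser: float, k: float = 32.0) -> tuple[float, float]:
    # Expected score of the winner before the battle.
    expected_win = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))
    # The bigger the upset, the bigger the rating swing.
    r_winner += k * (1.0 - expected_win)
    r_loser -= k * (1.0 - expected_win)
    return r_winner, r_loser

# Two prompts start at 1200; prompt A wins a head-to-head comparison.
print(elo_update(1200.0, 1200.0))  # -> (1216.0, 1184.0)
```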

[Screenshot: GPT-Prompt-Engineer]

Another framework, which is relatively new and an “improved” version of GPT-Prompt-Engineer, is Prompts-Royale [github]. Similar to GPT-Prompt-Engineer, it is very straightforward to use: you just give a description of the task along with a couple of example scenarios and expected outputs, and the system does the rest.

There are indeed several plus points offered by this framework:

  • Automatic test case generation: the system automatically creates test cases from the description; we just provide several examples and it generates more on its own.
  • Monte Carlo matchmaking: on top of the Elo rating used in GPT-Prompt-Engineer, Prompts-Royale uses the Monte Carlo method for matchmaking to extract as much information as possible from the fewest iterations (see the sketch after this list).
  • User interface: unlike GPT-Prompt-Engineer, Prompts-Royale offers a nice UI where users can provide all of the inputs and read all of the outputs in one place.
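
To illustrate the matchmaking idea, here is a hypothetical sketch (not Prompts-Royale's actual code): simulate battle outcomes from the current ratings and schedule the pairing whose outcome is closest to a coin flip, since that battle carries the most information:

```
import itertools
import random

# Hypothetical sketch of Monte Carlo matchmaking (illustration only).
def pick_battle(ratings: dict[str, float], n_sim: int = 1000) -> tuple[str, str]:
    def win_prob(r_a: float, r_b: float) -> float:
        # Elo-style probability that the first prompt beats the second.
        return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

    best_pair, best_uncertainty = None, -1.0
    for a, b in itertools.combinations(ratings, 2):
        # Simulate n_sim battles between the two prompts.
        wins = sum(random.random() < win_prob(ratings[a], ratings[b]) for _ in range(n_sim))
        p = wins / n_sim
        uncertainty = 1.0 - abs(p - 0.5) * 2.0  # 1.0 when the match is a coin flip
        if uncertainty > best_uncertainty:
            best_pair, best_uncertainty = (a, b), uncertainty
    return best_pair

print(pick_battle({"prompt A": 1230.0, "prompt B": 1190.0, "prompt C": 1210.0}))
```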

[Screenshot: Prompts-Royale UI]

Since Prompts-Royale offers additional benefits over the famous GPT-Prompt-Engineer framework, it is the framework we’ll dive deeper into in this article. Now, let’s see Prompts-Royale in action!

Prompts-Royale in Action!

Installation

To use Prompts-Royale, you can visit promptsroyale.com directly, or you can clone the repository and run it locally. To run it locally, you just need to do the following:

1. Clone the repository

```
git clone git@github.com:meistrari/prompts-royale.git
```

2. Install all dependencies with Bun

```
bun i
```
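
If Bun isn't installed yet, it can be added with the official installer script from bun.sh (check the Bun documentation for your platform):

```
curl -fsSL https://bun.sh/install | bash
```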

3. Run prompts-royale as a server in your local machine

```
bun run dev
```

This is the page that you will see once the server is up.

[Screenshot: Prompts-Royale landing page]

Using Prompts-Royale

To use prompts-royale, we first need to input an OpenAI API key; we can use GPT-3.5 or GPT-4. You can find the key in your OpenAI account. If you don’t have an account yet, you can easily sign up on the OpenAI website.

[Screenshot: OpenAI API key input]

 

Once you’ve inserted the API key, you can start providing the necessary inputs in the form: a task description and several test cases. The task description can be something like “Write a prompt that creates a headline for a website.” For each test case, we provide a scenario and the expected output, just as in the few-shot prompting technique.

[Screenshot: task description and test case form]

 

Next, we let the system generate several prompt candidates by clicking the “Generate prompts” button. Note that we can also add our own hand-written prompts to the list of candidates.

[Screenshot: generated prompt candidates]

Finally, once we have the list of prompt candidates, we let the system choose the best one. To do that, we input the number of battles the system will execute. A “battle” is a head-to-head comparison between 2 prompts, and the series of battles is how the best prompt is selected from all candidates. Remember, the higher the number of battles, the higher the cost of finding the best prompt. By default, prompts-royale runs 60 battles.

[Screenshot: number-of-battles input]

 

The results of the battles will be shown at the bottom of the page. There’s a chart of ratings over iterations and the battle log.

[Screenshot: ratings chart and battle log]

The final prompt ranking can be seen on the right side of the page, as follows. You can of course click each prompt button to see the generated prompt.

[Screenshot: final prompt ranking]

 

Conclusion

Congratulations on making it to this point! Throughout this article, you have learned how to do automatic prompt engineering with the help of prompts-royale. You’ve also learned several prompting techniques and about another automatic prompt engineering framework called GPT-Prompt-Engineer. See you in the next article!

Author Bio

Louis Owen is a data scientist/AI engineer from Indonesia who is always hungry for new knowledge. Throughout his career journey, he has worked in various fields of industry, including NGOs, e-commerce, conversational AI, OTA, Smart City, and FinTech. Outside of work, he loves to spend his time helping data science enthusiasts to become data scientists, either through his articles or through mentoring sessions. He also loves to spend his spare time doing his hobbies: watching movies and conducting side projects.

Currently, Louis is an NLP Research Engineer at Yellow.ai, the world’s leading CX automation platform. Check out Louis’ website to learn more about him! Lastly, if you have any queries or topics to discuss, please reach out to Louis via LinkedIn.