
Prompt Engineering Best Practices

  • 11 min read
  • 18 Sep 2023



Introduction

Prompt Engineering isn't just about throwing questions at a machine and hoping for a brilliant answer. Oh no, it's a meticulous dance of semantics and syntax. Think of it as the secret sauce that turns raw data into Michelin-star outputs. It's the act of finessing questions, statements, and other inputs in such a way that our ever-so-complex language models (yes, like those GPT variants you've heard so much about) know exactly what performance we're expecting.


To put it cheekily: If you've ever tried to get a diva to perform without a rehearsal, you'd know the importance of Prompt Engineering. It's not merely about the questions we ask but the elegance and intent with which we pose them. The spotlight's on, let the show begin!

Why is Prompt Engineering Important?

Step into any grand opera house, and you'll immediately grasp the importance of a well-directed performance. Similarly, in the vast concert hall of machine learning, Prompt Engineering is the esteemed conductor orchestrating every note and crescendo. So, what makes prompt engineering indispensable? The answers unfold in the visual guide below.

[Image: visual guide to why Prompt Engineering matters]

In essence, while having a cutting-edge language model is like owning a Stradivarius violin, it's the skill of the violinist—or in our case, the precision of the prompt—that elicits the true magic. So, if you're looking to harness the full symphony of capabilities from your AI, mastering the art of Prompt Engineering isn't just recommended; it's indispensable.

Types of Prompts

Mastering the art of prompting is akin to being a maestro of a symphony, understanding when each section of the orchestra needs to play to produce harmonious music. Each type of prompt offers a different angle to solicit the desired response from a language model. Here's your guide to their nuances:

Patterned Prompts

Patterned prompts serve as the scaffold for models, guiding them in producing outputs of a specified type and structure. These prompts leverage templates or distinct patterns to tailor the generation process, ensuring the output not only aligns with a desired format but also exhibits originality and creativity.

Suppose you want a quatrain about the night sky. Your patterned prompt might be:

Write a quatrain with an ABAB rhyme scheme about the night sky.

The LLM might then generate:

The stars above in splendor gleam (A)
Amidst the vast and dark expanse (B)
Moonlight casts a gentle beam (A)
In this nightly, cosmic dance (B)
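In code, a patterned prompt is little more than a template with the structural constraints baked in. Here is a minimal sketch in plain Python; the function name and slots are illustrative and not tied to any particular LLM library:

```python
def patterned_prompt(form: str, rhyme_scheme: str, topic: str) -> str:
    """Build a prompt that fixes the output's form and structure up front."""
    return f"Write a {form} with an {rhyme_scheme} rhyme scheme about {topic}."

# The template pins down the shape of the output; only the slots vary.
prompt = patterned_prompt("quatrain", "ABAB", "the night sky")
# prompt == "Write a quatrain with an ABAB rhyme scheme about the night sky."
```

Because the structure lives in the template rather than the topic, you can reuse the same pattern across many subjects and get consistently shaped outputs.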

Persona-based Prompts

Persona-based prompts harness the power of fictional character profiles, or personas, to steer the LLM toward responses tailored to a particular user type or demographic. These profiles encapsulate key traits such as age, interests, objectives, and challenges, providing a backdrop against which the model can craft a more relevant and empathetic output. Before writing persona-based prompts, one must first craft a comprehensive persona. Here is an example from a marketing use case:

Marketing: Generate promotional materials that echo the sentiments and inclinations of potential customers.

Alex is a 30-year-old fitness instructor from Miami with a penchant for high-intensity workouts. He's a massive fan of HIIT sessions, often referring to the methods of Jillian Michaels. Creating content for Alex, draft a promotional email to introduce a new line of high-intensity workout gear targeting fitness enthusiasts like him.
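One way to operationalize this is to keep the persona as structured data and render it into the prompt. A minimal sketch, assuming nothing beyond the standard library; `Persona` and `persona_prompt` are hypothetical helpers, not part of any LLM SDK:

```python
from dataclasses import dataclass


@dataclass
class Persona:
    name: str
    age: int
    occupation: str
    interests: str


def persona_prompt(persona: Persona, task: str) -> str:
    """Prefix the task with a short persona profile so the model writes for that reader."""
    profile = (f"{persona.name} is a {persona.age}-year-old {persona.occupation} "
               f"with a penchant for {persona.interests}.")
    return f"{profile} Creating content for {persona.name}, {task}"


alex = Persona("Alex", 30, "fitness instructor from Miami", "high-intensity workouts")
prompt = persona_prompt(alex, "draft a promotional email introducing a new line "
                              "of high-intensity workout gear.")
```

Keeping the persona as data means the same profile can be reused across many prompts (emails, ads, landing pages) without retyping it.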

Zero-shot Prompts

Zero-shot prompts empower language models to generate responses without needing prior task-specific training. They draw upon the vast knowledge and patterns that large language models (LLMs) have assimilated during their extensive general training. Simple in nature, these prompts can be as straightforward as:

Pen a poem on autumn

or

Render this phrase into French: Good morning.

The strength of zero-shot prompts lies in the extensive and diverse training data of LLMs. By recognizing myriad linguistic structures, relations, and nuances, LLMs can often tackle tasks with remarkable accuracy, even without having seen a direct example before.

Few-shot Prompts

Few-shot prompts are a type of prompt that provides the large language model (LLM) with a few examples of the desired output before asking the main question. By doing this, you're helping the model to "warm up" and better understand the task at hand. Consider prompting the LLM to write a poem about a dog. You could use the following few-shot prompt:

Write a poem about a dog.

Example 1:

Loyal and ever by my side,

With a tail that won't stop its wag,

You bark, you play, and you glide,

My faithful friend, more than just a tag.

Example 2:

Golden fur, eyes shining bright,

Chasing balls, every single throw,

Your joy and bark, light up the night,


A bond with you, forever to grow.

Based on these two examples, the LLM is now primed to pick up the rhythm, style, and thematic elements of poems about dogs.
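Assembling a few-shot prompt programmatically is straightforward: lead with the instruction, then number the worked examples beneath it. The helper below is an illustrative sketch, not tied to any LLM library:

```python
def few_shot_prompt(instruction: str, examples: list[str]) -> str:
    """Lead with the instruction, then number the worked examples beneath it."""
    parts = [instruction]
    for i, example in enumerate(examples, start=1):
        parts.append(f"Example {i}:\n{example}")
    return "\n\n".join(parts)


prompt = few_shot_prompt(
    "Write a poem about a dog.",
    ["Loyal and ever by my side, ...", "Golden fur, eyes shining bright, ..."],
)
```

Holding the examples in a list makes it easy to experiment with how many shots the task actually needs.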


Chain-of-Thought Prompts

Like a mastermind planning a heist, these prompts break down a challenge into digestible bits. It's all about guiding the model step by step. They are ideal for complex tasks that benefit from being broken down into structured, smaller tasks.

Let's say we want the LLM to design a lesson plan for teaching basic physics to high school students. We guide the LLM by asking questions in a step-by-step manner, as shown below:

[Image: the step-by-step chain of prompts for the physics lesson plan]

As the LLM progresses through each prompt in the chain, it is guided through a structured thought process, ensuring that the end result is comprehensive and aligned with the desired objective.
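The plumbing behind such a chain can be sketched in a few lines: each answer is folded back into the context before the next prompt is sent. In this sketch, `ask` is a hypothetical stand-in for whatever LLM client you use, not a real API:

```python
def run_chain(steps: list[str], ask) -> list[str]:
    """Run prompts in sequence, carrying each answer forward as context."""
    answers = []
    context = ""
    for step in steps:
        prompt = (context + "\n" + step).strip()
        answer = ask(prompt)  # ask() wraps your LLM call of choice
        answers.append(answer)
        context = prompt + "\n" + answer  # accumulate the transcript
    return answers


steps = [
    "List the core topics a basic high-school physics lesson should cover.",
    "For each topic above, suggest one hands-on demonstration.",
    "Assemble the topics and demonstrations into a 50-minute lesson plan.",
]
```

Because every prompt carries the transcript so far, the final step sees the full chain of reasoning rather than an isolated question.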

Context-Aware Prompts

Context-aware prompts are designed to give the large language model (LLM) essential background or situational information related to the task at hand. By grounding the LLM in a specific context, these prompts aid in generating outputs that are more relevant and nuanced. Whether it's the historical backdrop, cultural nuances, or the specific setting in which a piece of information will be used, context-aware prompts equip the LLM to tailor its response appropriately to the given circumstances. Here is an example:

Context: You are writing for an audience of young adults aged 18-25 who are passionate about environmental conservation and are considering veganism for ecological reasons.

Prompt: Given the context, provide a brief article discussing the environmental benefits of adopting a vegan lifestyle.
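A context-aware prompt like the one above is easy to assemble programmatically; the helper name below is illustrative:

```python
def context_aware_prompt(context: str, task: str) -> str:
    """Ground the task in explicit background so the output fits the audience."""
    return f"Context: {context}\n\nPrompt: Given the context, {task}"


audience = ("You are writing for young adults aged 18-25 who are passionate about "
            "environmental conservation and are considering veganism.")
prompt = context_aware_prompt(audience, "provide a brief article discussing the "
                              "environmental benefits of a vegan lifestyle.")
```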

Tree of Thought Prompts

These prompts aren't just a series; they're a hierarchical orchestration. Imagine guiding the LLM with a blueprint, where every level delves deeper into the task. This tree-based approach organizes complex tasks by breaking them into branches and leaves, each representing different stages of the task. They shine brightest when handling intricate tasks that demand meticulous planning, reasoning, or a structured approach. By breaking a task into nodes and leaves, it transforms abstract assignments into navigable journeys. Example:

Objective: Plan a unique birthday celebration.

Depth 1 - Idea Generation

ToT Prompt 1: "Provide 5 unique ideas for a birthday celebration."

Potential Responses:

     An underwater-themed party at a local aquarium.
     A time-travel costume party where guests dress from different eras.
     Renting out a cinema for a private movie screening with friends.
     Hosting a cooking competition with birthday-related recipes.
     A mystery-solving escape room challenge tailored for the birthday person.

Depth 2 - Evaluation

ToT Prompt for Voting: "Analyse the choices below, then conclude which is the most promising for a memorable birthday celebration."

Selected Response: "A time-travel costume party where guests dress from different eras."

Depth 3 - Detailing Based on Chosen Idea

ToT Prompt 2: "Provide 5 detailed activities or features for a time-travel costume party."

Potential Responses:

     Setting up photo booths from different eras, like the Roaring 20s, the Renaissance, and the Future.
     Time-specific games or challenges, such as jousting or a 60s dance-off.
     A time-travel passport that gets stamped as guests move through different eras.
     Food and drinks menu tailored to different time periods.
     Prizes for the best costume from each era.

Depth 4 - Evaluation

ToT Prompt for Voting: "Analyse the choices below, then decide which feature will be most engaging for guests at a time-travel costume party."

Selected Response: "Setting up photo booths from different eras, like the Roaring 20s, the Renaissance, and the Future."

By using the Tree of Thought Prompts, the decision-making process becomes structured, allowing the exploration of various options at each stage and refining the choices based on evaluations.
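The generate-evaluate-recurse loop above can be sketched as follows. Here `propose` and `evaluate` are placeholders for LLM calls that generate candidates and vote among them; the control flow, not the stubs, is the point:

```python
def tree_of_thought(objective: str, propose, evaluate,
                    depth: int = 2, branching: int = 5) -> list[str]:
    """At each depth: propose `branching` candidates, vote for the best, recurse on it."""
    chosen = objective
    trail = []
    for _ in range(depth):
        candidates = propose(chosen, branching)  # e.g. "Provide 5 ideas for ..."
        chosen = evaluate(candidates)            # e.g. "Which is most promising?"
        trail.append(chosen)
    return trail
```

This sketch explores only the single best branch at each level; a fuller Tree of Thought search would keep several branches alive and prune as it goes.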

Retrieval Augmented Generation (RAG) Prompts

This technique marries the power of retrieval with the flair of generation. In the vast ocean of data, the RAG ensures the LLM fishes out the most relevant pearls of information from a knowledge base and then weaves them into coherent, insightful narratives. RAG is the hero you want when detailed, factual accuracy is paramount. Think of tasks where there's a need to dive deep into databases or sources, such as Wikipedia, for question answering, translating texts, summarization, or even adding a factual touch to creative compositions.

Although ChatGPT, Bard, Bing, and other LLMs aren't RAG models in the conventional sense, when supplied with links or equipped with plugins granting access to specific documents, they can harness their expansive knowledge banks. This enables them to yield outputs that are not only detailed and insightful but also precise and factually accurate.
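A minimal sketch of the retrieve-then-generate pattern follows, with a toy keyword-overlap retriever standing in for a real vector index; all names here are illustrative:

```python
import string


def _words(text: str) -> set:
    """Lower-case and strip punctuation so 'France?' matches 'France.'."""
    return {w.strip(string.punctuation) for w in text.lower().split()}


def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Toy keyword-overlap retriever; a real system would use embeddings."""
    def overlap(doc: str) -> int:
        return len(_words(query) & _words(doc))
    return sorted(corpus, key=overlap, reverse=True)[:k]


def rag_prompt(query: str, corpus: list[str], k: int = 2) -> str:
    """Stuff the top-k retrieved passages into the prompt to ground the answer."""
    passages = "\n".join(f"- {doc}" for doc in retrieve(query, corpus, k))
    return (f"Answer the question using only the passages below.\n\n"
            f"Passages:\n{passages}\n\nQuestion: {query}")
```

The "using only the passages below" instruction is what nudges the model to stay grounded in the retrieved facts instead of free-associating.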

Prompting Tips and Best Practices

Navigating the world of prompts with a Large Language Model (LLM) is a tad like tango dancing. The clearer your moves (prompts), the better the performance. To ensure you and your LLM dance in perfect harmony, consider these golden rules of prompting:

  • Precision is Key: Always aim for laser-like specificity in your prompts. The less room there is for guesswork, the more aligned the response will be to your desired outcome.
  • Clarity Over Complexity: A well-phrased question is half the answer! Opt for clear, concise language, ensuring your prompts are easily decipherable.
  • Skip the Gibberish: While using industry-specific jargon might make you sound smart at conferences, your LLM prefers simplicity. Sidestep any ambiguous terms or jargon that might lead to misinterpretation.
  • Bite-Sized is Right-Sized: Complex tasks can be daunting, not just for humans but for LLMs too. Break them down into digestible, smaller tasks. It's easier to tackle a pie slice-by-slice than in one go.
  • Context is King: The more you feed the LLM in terms of background, the richer and more tailored its output will be. Context sets the stage for relevance.
  • The Prompts Playground: There's no one-size-fits-all in prompting. It's an art as much as it is a science. So, roll up your sleeves and experiment. Different tasks might resonate with different types of prompts. Keep tinkering until you strike gold!

Remember, the magic happens when you communicate with your LLM effectively. These best practices are your playbook to unlocking its full potential and ensuring a harmonious tango dance every time.

Conclusion

In the ever-evolving realm of AI and LLMs, the art of prompt engineering is akin to fine-tuning a musical instrument. With the right notes, or in this case, the right prompts, the symphony you create can be both harmonious and impactful. Whether you're dabbling in zero-shot prompts or diving deep into Chain-of-Thought (CoT) prompting, remember that the essence lies in clear communication. By embracing best practices and staying adaptable, we not only harness the true prowess of these models but also pave the way for AI-human collaborations that are more seamless and productive. As we continue this dance with AI, may our prompts always lead, guide, and inspire.

Author Bio

Amita Kapoor is an accomplished AI consultant and educator with over 25 years of experience. She has received international recognition for her work, including the DAAD fellowship and the Intel Developer Mesh AI Innovator Award. She is a highly respected scholar with over 100 research papers and several best-selling books on deep learning and AI. After teaching for 25 years at the University of Delhi, Amita retired early and turned her focus to democratizing AI education. She currently serves as a member of the Board of Directors for the non-profit Neuromatch Academy, fostering greater accessibility to knowledge and resources in the field. After her retirement, Amita founded NePeur, a company providing data analytics and AI consultancy services. In addition, she shares her expertise with a global audience by teaching online classes on data science and AI at the University of Oxford.