Prompt Engineering
Prompt engineering for an enterprise takes a somewhat different approach than interacting with ChatGPT or any other LLM for personal use. Prompt engineering helps ensure that when a customer messages the LLM, a set of instructions is already in place to set them up for success. When prompts are built to generate a recommendation or complete some backend analysis, the recommendation team creates the prompt directly. The job is either to frame the instructions that give context to the customer's messages (together, these form the prompt) or to create prompts that request a result directly from the LLM. First, we will focus on prompt engineering before continuing with fine-tuning in the next chapter, an inevitable next step for enterprise solutions.
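As a minimal sketch of this framing, the fixed enterprise instructions can be sent as a system message and the customer's text as a user message before the payload goes to the LLM. The company name, prompt wording, and function name below are illustrative, not taken from any specific product:

```python
# Sketch of enterprise prompt framing: the system message carries the
# standing instructions, and each incoming customer message is wrapped
# before being sent to the LLM. All names and text are illustrative.

SYSTEM_PROMPT = (
    "You are a product recommendation assistant for Acme Retail. "
    "Recommend only items from the provided catalog, and explain "
    "each recommendation in one sentence."
)

def build_messages(customer_message: str) -> list[dict]:
    """Frame the customer's message with the enterprise system prompt."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": customer_message},
    ]

messages = build_messages("I need a waterproof jacket under $100.")
print(messages[0]["role"])  # the standing instructions come first
```

In a chat-style API, this message list would be passed to the model on every turn, so the customer never has to supply the context themselves.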
None of the tools discussed here should be considered in isolation. Any enterprise solution will combine Retrieval-Augmented Generation (RAG), prompt engineering, fine-tuning, and other approaches. Each can support different capabilities...