Mastering the Fundamentals of Prompt Engineering
In Chapter 5, we briefly evaluated a fine-tuned Large Language Model (LLM) against a general-purpose model using in-context learning, also known as the few-shot prompting approach. In this chapter, we revisit prompting techniques to examine how well we can adapt a general-purpose LLM without fine-tuning, exploring strategies that leverage the model’s inherent capabilities to produce targeted, contextually relevant outputs. We will start by examining the shift toward prompt-based language models. We will then revisit zero- and few-shot methods, explain prompt chaining, and discuss various strategies, including more advanced techniques such as Retrieval-Augmented Generation (RAG). At the end of the chapter, we will apply what we have learned and design a prompting strategy aimed at reliably eliciting factual, accurate, and consistent responses that accomplish a specific business task.
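As a quick refresher before we dive in, the sketch below shows the basic shape of few-shot prompting: a handful of worked examples are placed in the prompt ahead of the new input so the model can infer the task from context alone, with no parameter updates. The sentiment-classification task, the example reviews, and the `build_few_shot_prompt` helper are illustrative assumptions rather than code carried over from Chapter 5, and the resulting prompt string can be sent to whichever model endpoint you are using.

```python
# Minimal few-shot prompting sketch (illustrative; task and examples are assumptions).
# A few labeled examples are prepended to the new input so a general-purpose LLM
# can infer the task pattern purely from context, without any fine-tuning.

FEW_SHOT_EXAMPLES = [
    ("The delivery arrived two days late and the box was damaged.", "negative"),
    ("Setup took five minutes and support answered right away.", "positive"),
]

def build_few_shot_prompt(new_review: str) -> str:
    """Assemble a prompt from an instruction, worked examples, and the new input."""
    lines = ["Classify the sentiment of each customer review as positive or negative.", ""]
    for review, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Review: {review}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {new_review}")
    lines.append("Sentiment:")  # the model completes this line
    return "\n".join(lines)

if __name__ == "__main__":
    # The assembled string can be passed to any chat or completion endpoint.
    print(build_few_shot_prompt("The interface is confusing, but the battery life is excellent."))
```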
Before...