Introduction
In the previous chapter, we gave historical context on how AI has developed over the years: how we've gone from natural language processing (NLP) to large language models (LLMs), and how the latter serve as the underlying machine learning models in AI assistants. You interact with these AI assistants using natural language prompts. To prompt efficiently, so that you get what you want, you need a strategy, and that's what this chapter aims to give you.
Prompting efficiently is commonly known in the industry as having a "prompt strategy" or practicing "prompt engineering." It's not an engineering practice in the usual sense of the word but rather an art form in which practitioners of AI assistants have discovered patterns and practices that seem to work well. We, the authors of this book, build upon those discovered practices and aim to describe...