Summary
In this chapter, we covered many aspects of prompt engineering, a core activity for improving the performance of LLMs within your application and for customizing their behavior to your scenario. We started with an introduction to the concept of prompt engineering and why it matters, and then moved on to the basic principles, including writing clear instructions and asking the model to justify its answers. From there, we explored more advanced techniques that shape the reasoning approach of the LLM: few-shot learning, chain-of-thought (CoT) prompting, and ReAct.

Prompt engineering is an emerging discipline that is paving the way for a new category of LLM-infused applications. In the next chapters, we will see these techniques in action as we build real-world applications with LLMs.
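
As a quick refresher before moving on, the following is a minimal sketch, not taken from the chapter itself, of how few-shot examples and a chain-of-thought instruction can be combined into a single chat prompt; the `build_prompt` helper and the example questions are purely illustrative, and the resulting messages can be sent to any chat-completion API you prefer:

```python
# A minimal sketch combining two techniques from this chapter:
# few-shot examples plus a chain-of-thought (CoT) instruction.
# All names and example content here are illustrative, not from the chapter.

def build_prompt(question: str) -> list[dict]:
    """Assemble a few-shot, chain-of-thought prompt as a list of chat messages."""
    # CoT instruction: ask the model to reason step by step before answering.
    system = (
        "You are a helpful assistant. "
        "Think through the problem step by step before giving the final answer."
    )

    # Few-shot example: a user/assistant pair showing the reasoning style to imitate.
    few_shot = [
        {
            "role": "user",
            "content": "If a pen costs 2 USD and a notebook costs 3 USD, "
                       "how much do 2 pens and 1 notebook cost?",
        },
        {
            "role": "assistant",
            "content": "Step 1: 2 pens cost 2 * 2 = 4 USD. "
                       "Step 2: adding the notebook gives 4 + 3 = 7 USD. "
                       "Final answer: 7 USD.",
        },
    ]

    return [{"role": "system", "content": system}, *few_shot,
            {"role": "user", "content": question}]


if __name__ == "__main__":
    messages = build_prompt("How much do 3 pens and 2 notebooks cost?")
    for message in messages:
        print(f"{message['role']}: {message['content']}")
    # `messages` is now ready to be passed to the chat-completion endpoint of your choice.
```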