Summary
In this chapter, we explored how RAG and AI orchestration address the inherent limitations of LLMs.
We saw how AI orchestration frameworks such as Semantic Kernel connect a core kernel with specialized plugins and functions that can provide personalities, custom prompts, data from external sources, and knowledge from semantic memory.
We also saw how to create and invoke our own functions and plugins and how functions can work together to address complex requests.
This concludes Part 3 of this book, which focused on exploring AI through Polyglot Notebooks.
In the final part of this book, we’ll wrap everything up by showing how Polyglot Notebooks can integrate into the workflows of software engineering and data science teams.