Summary
In this chapter, you learned:
- How LLMs work and how to obtain access to one
- About the core concepts of Semantic Kernel
- How to extend an LLM with functions
- How to add session memory and stream results (both recapped in the sketch after this list)
- About Hugging Face and local models
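
To tie these ideas together, here is a minimal C# sketch using Semantic Kernel's .NET API: it registers a plugin function the model can call, keeps session memory in a `ChatHistory`, and streams the reply. The model id, the `TimePlugin` class, and the `OPENAI_API_KEY` environment variable are illustrative assumptions rather than code from this chapter, and the sketch assumes the `Microsoft.SemanticKernel` package with the OpenAI connector:

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI;

var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion(
    modelId: "gpt-4o-mini",                                      // illustrative model id
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!);
builder.Plugins.AddFromType<TimePlugin>();                       // expose our function to the model
var kernel = builder.Build();

// Session memory: the history accumulates turns across calls.
var history = new ChatHistory();
history.AddUserMessage("What time is it right now?");

// Allow the model to invoke registered kernel functions automatically.
var settings = new OpenAIPromptExecutionSettings
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};

var chat = kernel.GetRequiredService<IChatCompletionService>();

// Stream the reply as it is generated instead of waiting for the full message.
await foreach (var chunk in chat.GetStreamingChatMessageContentsAsync(history, settings, kernel))
{
    Console.Write(chunk.Content);
}

// A hypothetical plugin whose annotated methods the model may call as functions.
public class TimePlugin
{
    [KernelFunction, Description("Returns the current local time.")]
    public string GetTime() => DateTime.Now.ToString("t");
}
```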
In the next chapter, you will learn about dependency injection (DI) containers, which automate the injection of dependencies and the management of service lifetimes.