Part 3: Developing, Operationalizing, and Scaling Generative AI Applications
In this part, we will explore important concepts such as agents, copilots, and autonomous agents, and discuss popular application development frameworks such as Semantic Kernel and LangChain, as well as the agent collaboration framework AutoGen. This discussion aims to guide you in building robust autonomous generative AI applications. We will also focus on strategies for deploying these generative AI applications to a live production environment and scaling them efficiently for enterprise-wide scenarios, taking into account the rate limits of Large Language Model (LLM) APIs.
This part contains the following chapters:
- Chapter 6, Developing and Operationalizing LLM-Based Cloud Applications: Exploring Dev Frameworks and LLMOps
- Chapter 7, Deploying ChatGPT in the Cloud: Architecture Design and Scaling Strategies