Understanding limits
Any large-scale cloud deployment needs to be “enterprise-ready,” ensuring both that the end-user experience is acceptable and that business objectives and requirements are met. “Acceptable” is a loose term that varies by user and workload. To understand how to scale to meet user or business requirements as appetite for a service grows, we must first understand the basic limits, such as token limits. We covered these limits for most of the common generative AI GPT models in Chapter 5; however, we will quickly revisit them here.
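As a quick refresher, a sketch like the following illustrates how a token limit constrains a single prompt+completion request. The model names, context-window sizes, and the characters-per-token heuristic below are illustrative assumptions, not exact figures; production code should use the model's actual tokenizer (for example, tiktoken) and the limits published for your deployment.

```python
# Illustrative sketch: does a prompt + requested completion fit a model's
# context window? Limits and the ~4-chars-per-token heuristic are assumptions.
MODEL_CONTEXT_LIMITS = {
    "gpt-35-turbo": 4096,  # assumed context window, in tokens
    "gpt-4": 8192,         # assumed context window, in tokens
}

def estimate_tokens(text: str) -> int:
    """Crude approximation: roughly 4 characters per token for English text."""
    return max(1, len(text) // 4)

def fits_in_context(model: str, prompt: str, max_completion_tokens: int) -> bool:
    """Prompt tokens plus requested completion tokens must stay within the limit."""
    limit = MODEL_CONTEXT_LIMITS[model]
    return estimate_tokens(prompt) + max_completion_tokens <= limit

# A short prompt with a 500-token completion budget fits comfortably:
print(fits_in_context("gpt-35-turbo", "Summarize our quarterly report.", 500))
```

The key point is that the prompt and the completion draw from the same budget: asking for a larger completion leaves less room for input text, and vice versa.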
As organizations scale up an enterprise-ready service such as Azure OpenAI, they encounter rate limits on how quickly tokens are processed across prompt+completion requests. Each model also has a token limit that caps how much text can be consumed in a single prompt+completion, which in turn constrains how many text prompts can be sent. It is important to note that the overall token count used for rate limiting includes...