Deploying ML models in Azure
Broadly speaking, there are two common approaches to deploying ML models: as synchronous real-time web services and as asynchronous batch-scoring services. Note that the same model can be deployed as two different services serving different use cases. The deployment type depends heavily on the batch size and response-time requirements of the model's scoring pattern: small batch sizes with fast responses call for a horizontally scalable real-time web service, whereas large batch sizes and slower response times call for a batch service that scales both horizontally and vertically.
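The contrast between the two scoring patterns can be sketched in plain Python. This is a minimal illustration, not Azure-specific code: `predict`, `score_realtime`, and `score_batch` are hypothetical names, and the toy word-counting model stands in for a trained one.

```python
# Hypothetical toy model standing in for a trained sentiment classifier.
def predict(text: str) -> float:
    """Return the fraction of words considered positive."""
    positive = {"great", "good", "love"}
    words = text.lower().split()
    return sum(w in positive for w in words) / max(len(words), 1)

# Real-time pattern: score a single request synchronously.
# Optimized for low latency; scaled horizontally behind a load balancer.
def score_realtime(request: dict) -> dict:
    return {"sentiment": predict(request["text"])}

# Batch pattern: score many records asynchronously.
# Optimized for throughput; scaled horizontally and vertically.
def score_batch(records: list[dict]) -> list[dict]:
    return [{"id": r["id"], "sentiment": predict(r["text"])} for r in records]
```

In a real deployment, `score_realtime` would sit behind an HTTP endpoint serving one request at a time, while `score_batch` would run inside a scheduled pipeline step that reads and writes bulk data.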
The deployment of a text-understanding model (for example, entity recognition or sentiment analysis) could include a real-time web service that evaluates the model whenever a new comment is posted to an app, as well as a batch scorer in another ML pipeline that extracts relevant features from training data. With the former, we want to serve each request as quickly...