Summary
In this chapter, you learned how to take a trained model and deploy it as a managed service in Azure with a few lines of code. You saw that an Azure Machine Learning deployment is composed of several components: a binary model that is registered, versioned, and stored in blob storage; a deployment environment based on Docker and Conda that is registered, versioned, and stored in a container registry; a scoring file, which together with the environment defines the inference config; and a compute target with resource requirements, which defines the deployment config.
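The scoring file follows a simple contract: an `init()` function that runs once when the service container starts and loads the model, and a `run()` function that handles each scoring request. Below is a minimal sketch; the stub model and the sum-based prediction are placeholders for illustration only — a real `score.py` would load the registered model via `azureml.core.model.Model.get_model_path()` inside `init()`:

```python
import json


class StubModel:
    """Placeholder standing in for a real trained model (assumption:
    in production, init() would deserialize the registered binary model)."""

    def predict(self, rows):
        # Dummy prediction: sum of each feature row.
        return [sum(row) for row in rows]


model = None


def init():
    # Called once at container startup; a real scoring file would load
    # the model from Model.get_model_path("<model-name>") here.
    global model
    model = StubModel()


def run(raw_data):
    # Called per request with the raw JSON payload of the HTTP body.
    data = json.loads(raw_data)["data"]
    return model.predict(data)


init()
print(run(json.dumps({"data": [[1, 2], [3, 4]]})))  # [3, 7]
```

The same two-function shape works for any framework, since the inference config only needs the script path and the environment it should run in.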
While this gives you great flexibility to configure every detail of your environment and deployment target, you can also use no-code deployments for specific frameworks (such as scikit-learn, TensorFlow, and ONNX). A no-code deployment takes your registered model and deploys it using an out-of-the-box default environment and deployment target. When specifying a custom compute target, you need to trade off scalability, flexibility, cost, and operational expense for each supported...
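For a no-code deployment, the key step is registering the model with its framework metadata; Azure ML can then supply the environment and deployment target itself. The sketch below uses the azureml-core v1 SDK and requires an existing Azure ML workspace to run; the workspace config, model name, path, and framework version are placeholder assumptions:

```python
from azureml.core import Workspace
from azureml.core.model import Model

# Assumes a config.json describing an existing workspace is present.
ws = Workspace.from_config()

# Registering with framework metadata is what enables no-code deployment
# (name, path, and version here are illustrative placeholders).
model = Model.register(workspace=ws,
                       model_name="my-sklearn-model",
                       model_path="outputs/model.pkl",
                       model_framework=Model.Framework.SCIKITLEARN,
                       model_framework_version="0.24.2")

# No inference config and no deployment config: Azure ML falls back to a
# default environment and deployment target for the framework.
service = Model.deploy(ws, "my-service", [model])
service.wait_for_deployment(show_output=True)
```

Note the contrast with the custom path: no scoring file, environment, or compute target is specified, which is convenient for quick tests but gives up the fine-grained control described above.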