Summary
In this chapter, we learned how to convert ML models into a portable, executable format using ONNX, what an FPGA is, and how to deploy a DNN featurizer to an FPGA-enabled VM through Azure Machine Learning. In addition, we learned how to integrate our ML models with other Azure services, such as Azure IoT Edge and Power BI.
This concludes our discussion, spanning the previous two chapters, of the various options for deploying ML models for batch or real-time inferencing.
In the next chapter, we will bring together everything we have learned so far to understand and build an end-to-end MLOps pipeline, enabling us to create an enterprise-ready, automated environment for any process that incorporates ML.