Machine Learning Workload and Model Deployment
Having covered Edge computing concepts and techniques in the last few chapters, and gained hands-on experience managing microservices on Edge nodes, we are now ready to build a modern, AI-enabled application. As discussed in earlier chapters, an efficient Edge application may need to process data where that data is generated. For an AI-enabled application, this means that machine learning (ML)-based inferencing must be performed locally.
In the last few years, ML-based inferencing at the Edge has become an integral part of the overall Edge application. It is therefore imperative that an Edge computing platform inherently provide the capability to deploy and manage ML models on Edge nodes, and that this capability work seamlessly within the same paradigm used to manage services.
Open Horizon, in addition to all the capabilities discussed earlier, provides just that: the same policy-based schemes that manage services...
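To make the "same paradigm" idea concrete, the sketch below shows the general shape of an Open Horizon deployment policy, which matches services to nodes via properties and constraints. The specific organization, service name, and constraint expression here are illustrative placeholders, not values from this book; consult the Open Horizon documentation for the authoritative policy schema.

```json
{
  "label": "Example deployment policy for an ML inferencing service",
  "service": {
    "name": "example-ml-inference",
    "org": "examaple-org",
    "arch": "*",
    "serviceVersions": [
      { "version": "1.0.0" }
    ]
  },
  "properties": [
    { "name": "purpose", "value": "ml-inferencing" }
  ],
  "constraints": [
    "gpu == true"
  ]
}
```

Because ML model deployment in Open Horizon is expressed through the same kind of property-and-constraint matching, operators can roll out both the inferencing service and its models without introducing a separate management workflow.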