Deploying your first ML model
Now that you are familiar with remote deployments and loading resources from the cloud, it is time to deploy your first ML-powered capability to the edge! After all, a component that uses ML models is much like the other components we have deployed: it is a combination of dependencies, runtime code, and static resources hosted in the cloud.
Reviewing the ML use case
In this case, the dependencies are the packages and libraries for OpenCV (an open source library for computer vision (CV) use cases) and the Deep Learning Runtime (DLR); the runtime code is a preconfigured sample of inference code that uses DLR; and the static resources are a preconfigured model store for image classification and some sample images. The components deployed in this example are all provided and managed by AWS.
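To make the division of labor concrete, here is a minimal sketch of what DLR-based inference with OpenCV preprocessing looks like. It is not the AWS-managed component's actual code: the paths, the input tensor name ("data"), and the 224x224 NCHW input shape are assumptions that depend on the specific model store deployed to the device.

```python
# Minimal sketch of DLR image classification with OpenCV preprocessing.
# Paths, the input name "data", and the 224x224 NCHW shape are assumptions.
import cv2
import numpy as np
from dlr import DLRModel

MODEL_DIR = "/greengrass/v2/work/model-store"              # hypothetical path
IMAGE_PATH = "/greengrass/v2/work/sample_images/cat.jpeg"  # hypothetical path

def preprocess(image_path: str) -> np.ndarray:
    """Load an image with OpenCV and shape it for the classifier."""
    img = cv2.imread(image_path)                 # BGR, HxWxC
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)   # convert to RGB
    img = cv2.resize(img, (224, 224))            # assumed model input size
    img = img.astype(np.float32) / 255.0         # scale pixels to [0, 1]
    img = np.transpose(img, (2, 0, 1))           # HWC -> CHW
    return np.expand_dims(img, axis=0)           # add batch dim -> NCHW

def classify(image_path: str) -> int:
    """Run the compiled model with DLR and return the top class index."""
    model = DLRModel(MODEL_DIR, dev_type="cpu")
    outputs = model.run({"data": preprocess(image_path)})   # "data" is assumed
    scores = np.squeeze(outputs[0])
    return int(np.argmax(scores))

if __name__ == "__main__":
    print(f"Predicted class index: {classify(IMAGE_PATH)}")
```

In the deployment you are about to perform, you do not write any of this yourself; the AWS-provided components bundle the equivalent runtime code, the DLR and OpenCV dependencies, and the model artifacts for you.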
The solution that you will deploy simulates the use case for our HBS device hub that performs a simple image classification as part of a home security...