Hands-on with ML architecture
In this section, you will deploy a solution on a connected HBS hub. This requires you to build and train ML models in the cloud and then deploy them to the edge for inference. The following screenshot shows the architecture of the lab, with the steps (1-5) that you will complete highlighted:
Your objectives include the following, which are highlighted as distinct steps in the preceding architecture:
- Build the ML workflow using Amazon SageMaker
- Deploy the ML model from the cloud to the edge using AWS IoT Greengrass (see the sketch after this list)
- Perform ML inference at the edge and visualize the results
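To illustrate the deployment step, the following minimal sketch uses the boto3 `greengrassv2` client to push a model-serving component to the hub. The thing group ARN, account ID, and the component name `com.hbs.hub.MLInference` are hypothetical placeholders for illustration, not the lab's actual resources.

```python
# Sketch: deploying an ML inference component to an edge device with
# AWS IoT Greengrass v2. Names and ARNs below are placeholders.
import boto3

greengrass = boto3.client("greengrassv2", region_name="us-west-2")

response = greengrass.create_deployment(
    # Hypothetical thing group containing the HBS hub core device
    targetArn="arn:aws:iot:us-west-2:123456789012:thinggroup/hbs-hub-group",
    deploymentName="hbs-ml-inference-deployment",
    components={
        # Hypothetical custom component that wraps the trained model artifact
        "com.hbs.hub.MLInference": {"componentVersion": "1.0.0"},
    },
)
print("Deployment ID:", response["deploymentId"])
```

Greengrass applies a deployment to every core device in the target thing group, so the same model component can be rolled out to a single hub or to a whole fleet with the same API call.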
The following table lists the components you will use during the lab:
Building the ML workflow
In this section, you will build, train, and test the ML model using Amazon SageMaker Studio...
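To give a concrete feel for this step, here is a minimal sketch of launching a training job with the SageMaker Python SDK from a Studio notebook. The choice of the built-in XGBoost algorithm, the S3 prefixes, and the hyperparameters are assumptions for illustration; the lab's notebook may use a different algorithm and settings.

```python
# Sketch: training a model with the SageMaker Python SDK (assumed setup).
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()
role = sagemaker.get_execution_role()   # resolves the role when run inside SageMaker Studio
bucket = session.default_bucket()       # default S3 bucket for training data and artifacts

# Resolve the container image for the built-in XGBoost algorithm
image_uri = sagemaker.image_uris.retrieve(
    "xgboost", session.boto_region_name, version="1.5-1"
)

estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path=f"s3://{bucket}/hbs-lab/models",   # hypothetical output prefix
    sagemaker_session=session,
)
estimator.set_hyperparameters(objective="reg:squarederror", num_round=100)

# Launch the training job against data previously uploaded to S3 (hypothetical prefix)
estimator.fit(
    {"train": TrainingInput(f"s3://{bucket}/hbs-lab/train", content_type="text/csv")}
)
```

After the job completes, `estimator.model_data` points to the model artifact in S3, which is the artifact you would later package for deployment to the edge.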