Summary
This chapter taught you the key differences in deploying components remotely to your Greengrass devices, how to accelerate your solution development using the managed components provided by AWS, and how to get your first ML workload running on your prototype hub device. At this point, you have all the basics you need to start writing your own components and designing edge solutions. You can even extend the managed ML components to get started with some basic CV projects. If your business already uses trained ML models and inference code packaged as containers, you can start deploying them to the edge today as custom components.
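For example, a minimal component recipe for a containerized inference service might look like the following sketch. This is only an illustration, assuming the AWS-provided aws.greengrass.DockerApplicationManager component is included in the same deployment; the component name and container image URI are hypothetical placeholders you would replace with your own.

RecipeFormatVersion: '2020-01-25'
ComponentName: com.example.InferenceContainer        # hypothetical component name
ComponentVersion: '1.0.0'
ComponentDescription: Runs a containerized ML inference service at the edge.
ComponentPublisher: Example
ComponentDependencies:
  aws.greengrass.DockerApplicationManager:           # managed component that pulls Docker images
    VersionRequirement: ~2.0.0
Manifests:
  - Platform:
      os: linux
    Artifacts:
      - URI: docker:registry.example.com/inference-image:latest   # hypothetical image URI
    Lifecycle:
      Run: docker run --rm registry.example.com/inference-image:latest

With a recipe like this registered as a custom component, your existing container image becomes deployable to the edge through the same remote deployment workflow covered in this chapter.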
In the next chapter, Chapter 5, Ingesting and Streaming Data from the Edge, you will learn more about how data moves through the edge using prescriptive structures, models, and transformations. Handling data properly at the edge is important for adding efficiency, resilience, and security to your edge ML solutions.