In Part III, we focused on how SageMaker can be leveraged to train and deploy ML models, both built-in and custom, to solve business problems that cannot be readily solved using AWS AI services.
We started with Chapter 7, Working with Amazon SageMaker, where we learned how to process large datasets, conduct training, and optimize hyperparameters in SageMaker.
Additionally, we looked at how SageMaker makes it seamless to run multiple experiments and deploy the best-performing model for inference. We also illustrated how bringing your own model and container to SageMaker allows you to readily leverage capabilities such as model training, deployment, and inference at scale.
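To recall the core idea behind SageMaker's automatic model tuning described above, here is a minimal, hedged sketch of hyperparameter search using plain Python and a toy objective. The objective function, hyperparameter names, and search ranges are all made up for illustration; in SageMaker itself, a Hyperparameter Tuning job launches real training jobs and compares their reported metrics.

```python
import random

# Toy stand-in for "train a model and report its validation error".
# The optimum of this made-up objective is learning_rate=0.1, batch_size=64.
def validation_error(learning_rate, batch_size):
    return (learning_rate - 0.1) ** 2 + (batch_size - 64) ** 2 / 10_000

random.seed(0)  # deterministic for reproducibility
best = None
for _ in range(50):  # 50 random-search trials
    candidate = {
        "learning_rate": random.uniform(0.001, 1.0),
        "batch_size": random.choice([16, 32, 64, 128, 256]),
    }
    score = validation_error(**candidate)
    if best is None or score < best[0]:
        best = (score, candidate)

print(best[1])  # hyperparameters of the best trial found
```

SageMaker's tuner applies the same pattern at scale: you declare the tunable ranges and an objective metric, and the service runs the trials (with random search or Bayesian optimization) and surfaces the best training job.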
In Chapter 8, Creating Machine Learning Inference Pipelines, we learned how to conduct data preprocessing with AWS Glue, a serverless ETL service. A machine learning...