In the last few chapters, you learned about readily available Machine Learning (ML) APIs that solve business challenges. In this chapter, we will take a deep dive into Amazon SageMaker—the service used to build, train, and deploy models seamlessly when the ML APIs do not fully meet your requirements. SageMaker increases the productivity of data scientists and machine learning engineers by abstracting away the complexity of provisioning compute and storage.
This is what we will cover in this chapter:
- Processing big data with Spark on EMR
- Conducting training in Amazon SageMaker
- Deploying trained models and running inference
- Running hyperparameter optimization
- Understanding the SageMaker experimentation service
- Bringing your own model – SageMaker, MXNet, and Gluon
- Bringing your own container – R model