Training and deploying models with built-in algorithms
Amazon SageMaker lets you train and deploy models in many different configurations. Although it encourages best practices, it is a modular service that leaves you free to do things your own way.
In this section, we first look at a typical end-to-end workflow, where we use SageMaker from data upload all the way to model deployment. Then, we discuss alternative workflows and how you can cherry-pick the features that you need. Finally, we take a look under the hood and see what happens from an infrastructure perspective when we train and deploy.
Understanding the end-to-end workflow
Let's look at a typical SageMaker workflow. You'll see it again and again in our examples, as well as in the AWS notebooks available on GitHub (https://github.com/awslabs/amazon-sagemaker-examples/):
- Make your dataset available in Amazon S3: In most examples, we'll download a dataset from the internet, or load a local copy...
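As a quick illustration of this first step, here is a minimal sketch that uploads a local copy of a dataset to S3 with the SageMaker Python SDK. It assumes that AWS credentials and a default region are already configured; the local path and the S3 prefix are hypothetical names chosen for the example.

```python
# Minimal sketch: upload a local dataset to S3 with the SageMaker Python SDK.
# Assumes AWS credentials and a default region are already configured.
import sagemaker

session = sagemaker.Session()

# 'my-dataset/train.csv' and the 'demo-dataset' prefix are hypothetical.
# upload_data() copies the file (or directory) to the default SageMaker
# bucket for your account and region, and returns the resulting S3 URI.
training_data_uri = session.upload_data(
    path="my-dataset/train.csv",   # local file or directory to upload
    key_prefix="demo-dataset"      # S3 prefix inside the default bucket
)

print(training_data_uri)
# e.g. s3://sagemaker-<region>-<account-id>/demo-dataset/train.csv
```

The returned S3 URI is what you would pass to a SageMaker estimator later in the workflow; you can also point `upload_data()` at a bucket of your own with the `bucket` parameter instead of using the default one.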