Organizing and tracking training jobs with SageMaker Experiments
A key challenge ML practitioners face is keeping track of the myriad ML experiments that must be run before a model achieves the desired results. For a single ML project, data scientists routinely train many different models in search of improved accuracy, and HPT adds even more training jobs to these experiments. Each experiment carries many details worth tracking: hyperparameters, model architectures, training algorithms, custom scripts, metrics, result artifacts, and more.
In this section, we will discuss Amazon SageMaker Experiments, which allows you to organize, track, visualize, and compare ML models across all phases of the ML lifecycle, including feature engineering, model training, model tuning, and model deployment. SageMaker Experiments also tracks model lineage, allowing you to troubleshoot production issues and audit your models to meet compliance requirements.
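To give a taste of how this works in code, the following is a minimal sketch using the sagemaker and sagemaker-experiments Python SDKs, where an experiment groups related training jobs and each trial captures one run. The experiment name, trial name, and S3 URI here are hypothetical placeholders, not values from this book's examples:

```python
import sagemaker
from sagemaker.estimator import Estimator
from smexperiments.experiment import Experiment
from smexperiments.trial import Trial

session = sagemaker.Session()
role = sagemaker.get_execution_role()

# An experiment groups all the training jobs for one project
experiment = Experiment.create(
    experiment_name="customer-churn",  # hypothetical name
    description="Compare hyperparameter settings for churn prediction",
)

# A trial captures one run, for example one hyperparameter combination
trial = Trial.create(
    trial_name="xgboost-eta-0-3",  # hypothetical name
    experiment_name=experiment.experiment_name,
)

# A built-in XGBoost estimator as an illustrative training job
image_uri = sagemaker.image_uris.retrieve(
    "xgboost", session.boto_region_name, version="1.5-1"
)
estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    hyperparameters={"objective": "binary:logistic",
                     "num_round": "100", "eta": "0.3"},
)

# Passing experiment_config links the job to the trial, so its
# hyperparameters, metrics, and artifacts are tracked automatically
estimator.fit(
    inputs="s3://my-bucket/churn/train",  # placeholder S3 URI
    experiment_config={
        "ExperimentName": experiment.experiment_name,
        "TrialName": trial.trial_name,
        "TrialComponentDisplayName": "Training",
    },
)
```

Once jobs are linked this way, their trial components appear under the experiment in SageMaker Studio, where runs can be sorted and compared side by side by metric or hyperparameter.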