Amazon SageMaker Best Practices
Proven tips and tricks to build successful machine learning solutions on Amazon SageMaker
By Muppala, Eigenbrode, and DeFauw
4.9 (8 Ratings)
eBook | Sep 2021 | 348 pages | 1st Edition

eBook: €8.99 (€29.99)
Paperback: €36.99
Subscription: Free Trial (renews at €18.99 p/m)

What do you get with eBook?

  • Instant access to your digital eBook purchase
  • Download this book in EPUB and PDF formats
  • Access this title in our online reader with advanced features
  • DRM FREE: read whenever, wherever, and however you want
  • AI Assistant (beta) to help accelerate your learning

Amazon SageMaker Best Practices

Chapter 1: Amazon SageMaker Overview

This chapter provides a high-level overview of the Amazon SageMaker capabilities that map to the various phases of the machine learning (ML) process. This sets the foundation for the best-practices discussion of using SageMaker capabilities to handle various data science challenges.

In this chapter, we're going to cover the following main topics:

  • Preparing, building, training and tuning, deploying, and managing ML models
  • Discussion of data preparation capabilities
  • Feature tour of model-building capabilities
  • Feature tour of training and tuning capabilities
  • Feature tour of model management and deployment capabilities

Technical requirements

All notebooks with coding exercises will be available at the following GitHub link:

https://github.com/PacktPublishing/Amazon-SageMaker-Best-Practices

Preparing, building, training and tuning, deploying, and managing ML models

First, let's review the ML life cycle. By the end of this section, you should understand how SageMaker's capabilities map to the key phases of the ML life cycle. The following diagram shows you what the ML life cycle looks like:

Figure 1.1 – Machine learning life cycle

As you can see, there are three phases of the ML life cycle at a high level:

  • In the Data Preparation phase, you collect and explore data, label a ground truth dataset, and prepare your features. Feature engineering, in turn, has several steps, including data normalization, encoding, and calculating embeddings, depending on the ML algorithm you choose.
  • In the Model Training phase, you build your model and tune it until you achieve a reasonable validation score that aligns with your business objective.
  • In the Operations phase, you test how well your model performs against real-world data, deploy it, and monitor how well it performs. We will cover model monitoring in more detail in Chapter 11, Monitoring Production Models with Amazon SageMaker Model Monitor and Clarify.

This diagram is purposely simplified; in reality, each phase may have multiple smaller steps, and the whole life cycle is iterative. You're never really done with ML; as you gather data on how your model performs in production, you'll likely try to improve it by collecting more data, changing your features, or tuning the model.

So how do SageMaker capabilities map to the ML life cycle? Before we answer that question, let's take a look at the SageMaker console (Figure 1.2):

Figure 1.2 – Navigation pane in the SageMaker console

The console's appearance changes frequently; the preceding screenshot shows how it looked at the time of writing.

These capability groups align to the ML life cycle, shown as follows:

Figure 1.3 – Mapping of SageMaker capabilities to the ML life cycle

SageMaker Studio is not shown here, as it is an integrated workbench that provides a user interface for many SageMaker capabilities. The marketplace provides both data and algorithms that can be used across the life cycle.

Now that we have had a look at the console, let's dive deeper into the individual capabilities of SageMaker in each life cycle phase.

Discussion of data preparation capabilities

In this section, we'll dive into SageMaker's data preparation and feature engineering capabilities. By the end of this section, you should understand when to use SageMaker Ground Truth, Data Wrangler, Processing, Feature Store, and Clarify.

SageMaker Ground Truth

Obtaining labeled data for classification, regression, and other tasks is often the biggest barrier to ML projects. Many companies have a lot of data but have not explicitly labeled it according to business properties such as anomalous or high lifetime value. SageMaker Ground Truth helps you systematically label data by defining a labeling workflow and assigning labeling tasks to a human workforce.

Over time, Ground Truth can learn how to label data automatically, while still sending low-confidence results to humans for review. For advanced datasets such as 3D point clouds, which represent data points like shape coordinates, Ground Truth offers assistive labeling features, such as adding bounding boxes to the middle frames of a sequence once you label the start and end frames. The following diagram shows an example of labels applied to a dataset:

Figure 1.4 – SageMaker Ground Truth showing the labels applied to sentiment reviews

The data is sourced from the UCI Machine Learning Repository (https://archive.ics.uci.edu/ml/datasets/Sentiment+Labelled+Sentences). To counteract individual worker bias or error, a data object can be sent to multiple workers. In this example, we only have one worker, so the confidence score is not used.

Note that you can also use Ground Truth in other phases of the ML life cycle; for example, you may use it to check the labels generated by a production model.
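
For illustration, here is a minimal sketch (not taken from this book's companion notebooks) of creating a text-classification labeling job with boto3. The S3 URIs, role, and workteam are placeholders, and the two Lambda ARNs are the built-in text-classification functions for us-east-1; adjust them for your Region. Note NumberOfHumanWorkersPerDataObject, which sends each data object to several workers to counteract individual bias:

```python
import boto3

sm = boto3.client("sagemaker")

sm.create_labeling_job(
    LabelingJobName="sentiment-labels",
    LabelAttributeName="sentiment",
    InputConfig={
        "DataSource": {
            "S3DataSource": {"ManifestS3Uri": "s3://my-bucket/input.manifest"}
        }
    },
    OutputConfig={"S3OutputPath": "s3://my-bucket/labels/"},
    RoleArn="arn:aws:iam::123456789012:role/GroundTruthRole",
    HumanTaskConfig={
        "WorkteamArn": "arn:aws:sagemaker:us-east-1:123456789012:workteam/private-crowd/my-team",
        "UiConfig": {"UiTemplateS3Uri": "s3://my-bucket/template.liquid"},
        "PreHumanTaskLambdaArn": "arn:aws:lambda:us-east-1:432418664414:function:PRE-TextMultiClass",
        "AnnotationConsolidationConfig": {
            "AnnotationConsolidationLambdaArn": "arn:aws:lambda:us-east-1:432418664414:function:ACS-TextMultiClass"
        },
        "TaskTitle": "Classify review sentiment",
        "TaskDescription": "Label each review as positive or negative",
        "NumberOfHumanWorkersPerDataObject": 3,  # multiple workers per object
        "TaskTimeLimitInSeconds": 300,
    },
)
```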

SageMaker Data Wrangler

Data Wrangler helps you understand your data and perform feature engineering. Data Wrangler works with data stored in S3 (optionally accessed via Athena) and Redshift and performs typical visualization and transformations, such as correlation plots and categorical encoding. You can combine a series of transformations into a data flow and export that flow into an MLOps pipeline. The following screenshot shows an example of Data Wrangler information for a dataset:

Figure 1.5 – Data Wrangler displaying summary table information regarding a dataset

You may also use Data Wrangler in the operations phase of the ML life cycle if you want to analyze the data coming into an ML model for production inference.

SageMaker Processing

SageMaker Processing jobs help you run data processing and feature engineering tasks on your datasets. By providing your own Docker image containing your code, or using a pre-built Spark or sklearn container, you can normalize and transform data to prepare your features. The following diagram shows the logical flow of a SageMaker Processing job:

Figure 1.6 – Conceptual overview of a Spark processing job. Spark jobs are particularly handy for processing larger datasets

You may also use processing jobs to evaluate the performance of ML models during the Model Training phase and to check data and model quality in the Model Operations phase.
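
As a rough sketch of the pre-built container route (the script name, role, and S3 paths below are placeholders, not from this book's notebooks), a Processing job with the sklearn container looks like this in the SageMaker Python SDK:

```python
from sagemaker.sklearn.processing import SKLearnProcessor
from sagemaker.processing import ProcessingInput, ProcessingOutput

# Pre-built sklearn container; SageMaker provisions the instances,
# stages the S3 data, runs the script, and uploads the outputs.
processor = SKLearnProcessor(
    framework_version="0.23-1",
    role="arn:aws:iam::123456789012:role/SageMakerRole",
    instance_type="ml.m5.xlarge",
    instance_count=1,
)

processor.run(
    code="preprocess.py",  # your feature engineering script
    inputs=[ProcessingInput(
        source="s3://my-bucket/raw/",
        destination="/opt/ml/processing/input",
    )],
    outputs=[ProcessingOutput(
        source="/opt/ml/processing/output",
        destination="s3://my-bucket/features/",
    )],
)
```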

SageMaker Feature Store

SageMaker Feature Store helps you organize and share your prepared features. Using a feature store improves quality and saves time by letting you reuse features rather than duplicate complex feature engineering code and computations that have already been done. Feature Store supports both batch and stream storage and retrieval. The following screenshot shows an example of feature group information:

Figure 1.7 – Feature Store showing a feature group with a set of related features

Feature Store also helps during the Model Operations phase, as you can quickly look up complex feature vectors to help obtain real-time predictions.
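
To make this concrete, here is a hedged sketch (with a hypothetical feature group and placeholder names) of creating a feature group, ingesting a DataFrame, and then looking up a single feature vector through the online store:

```python
import boto3
import pandas as pd
from sagemaker.feature_store.feature_group import FeatureGroup
from sagemaker.session import Session

session = Session()

# Hypothetical customer features; every record needs an identifier
# and an event-time column.
df = pd.DataFrame({
    "customer_id": [12345],
    "lifetime_value": [820.50],
    "event_time": [1632480000.0],
})

fg = FeatureGroup(name="customer-features", sagemaker_session=session)
fg.load_feature_definitions(data_frame=df)  # infer feature types from dtypes
fg.create(
    s3_uri="s3://my-bucket/feature-store/",   # offline store for batch access
    record_identifier_name="customer_id",
    event_time_feature_name="event_time",
    role_arn="arn:aws:iam::123456789012:role/SageMakerRole",
    enable_online_store=True,                 # low-latency lookups at inference time
)
fg.ingest(data_frame=df, max_workers=1, wait=True)

# Real-time retrieval of a feature vector during inference
runtime = boto3.client("sagemaker-featurestore-runtime")
record = runtime.get_record(
    FeatureGroupName="customer-features",
    RecordIdentifierValueAsString="12345",
)
```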

SageMaker Clarify

SageMaker Clarify helps you understand model behavior and calculate bias metrics from your model. It checks for imbalance in the dataset, models that give different results based on certain attributes, and bias that appears due to data drift. It can also use leading explainability algorithms such as SHAP to help you explain individual predictions to get a sense of which features drive model behavior. The following figure shows an example of class imbalance scores for a dataset, where we have many more samples from the Gift Card category than the other categories:

Figure 1.8 – Clarify showing class imbalance scores in a dataset. Class imbalance can lead to biased results in an ML model

Clarify can be used throughout the entire ML life cycle, but consider using it early in the life cycle to detect imbalanced data (datasets that have many examples of one class but few of another).
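
A pre-training bias check like the one in Figure 1.8 can be run with the SDK's clarify module. The sketch below uses placeholder paths, headers, and a hypothetical facet column; CI is the class imbalance metric:

```python
from sagemaker import Session, clarify

session = Session()

processor = clarify.SageMakerClarifyProcessor(
    role="arn:aws:iam::123456789012:role/SageMakerRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

data_config = clarify.DataConfig(
    s3_data_input_path="s3://my-bucket/train.csv",
    s3_output_path="s3://my-bucket/clarify-output/",
    label="label",
    headers=["label", "category", "price"],
    dataset_type="text/csv",
)

bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],
    facet_name="category",   # check balance across product categories
)

# CI = class imbalance, one of the pre-training bias metrics
processor.run_pre_training_bias(
    data_config=data_config,
    data_bias_config=bias_config,
    methods=["CI"],
)
```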

Now that we've introduced several SageMaker capabilities for data preparation, let's move on to model-building capabilities.

Feature tour of model-building capabilities

In this section, we'll dive into SageMaker's model-building capabilities. By the end of this section, you should understand when to use SageMaker Studio or SageMaker notebook instances, and how to choose between SageMaker's built-in algorithms, frameworks, and libraries, versus a bring your own (BYO) approach.

SageMaker Studio

SageMaker Studio is an integrated development environment (IDE) for ML. It brings together Jupyter notebooks, experiment management, and other tools into a unified user interface. You can easily share notebooks and notebook snapshots with other team members using Git or a shared filesystem. The following screenshot shows an example of one of SageMaker Studio's built-in visualizations:

Figure 1.9 – SageMaker Studio showing an experiment graph

SageMaker Studio can be used in all phases of the ML life cycle.

SageMaker notebook instances

If you prefer a more traditional Jupyter or JupyterLab experience, and you don't need the additional integrations and collaboration tools that Studio provides, you can use a regular SageMaker notebook instance. You choose the notebook instance's compute capacity (that is, whether you want GPUs and how much storage you need), and SageMaker provisions the environment with Jupyter Notebook, JupyterLab, and several common ML frameworks and libraries installed.

The notebook instance also supports Docker in case you want to build and test containers with ML code locally. Best of all, the notebook instances come bundled with over 100 example notebooks. The following figure shows an example of the JupyterLab interface in a notebook:

Figure 1.10 – JupyterLab interface in a SageMaker notebook, showing a list of example notebooks

Similar to SageMaker Studio, you can perform almost any part of the ML life cycle in a notebook instance.

SageMaker algorithms

SageMaker bundles open source and proprietary algorithms for many common ML use cases. These algorithms are a good starting point as they are tuned for performance, often supporting distributed training. The following table lists the SageMaker algorithms provided for different types of ML problems:

Figure 1.11 – SageMaker algorithms for various ML scenarios

BYO algorithms and scripts

If you prefer to write your own training and inference code, you can work with a supported ML, graph, or RL framework, or bundle your own code into a Docker image. The BYO approach works well if you already have a library of model code, or if you need to build a model for a use case where a pre-built algorithm doesn't work well. Data scientists who use R often take this approach. SageMaker supports the following frameworks (a short script-mode sketch follows the list):

  • Supported machine learning frameworks: XGBoost, sklearn
  • Supported deep learning frameworks: TensorFlow, PyTorch, MXNet, Chainer
  • Supported reinforcement learning frameworks: Ray RLLib, Coach
  • Supported graph frameworks: Deep Graph Library
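
In script mode, you supply only the training script and SageMaker runs it inside the managed framework container. The following sketch uses PyTorch with a placeholder script, role, and version pins (this is an illustration, not code from this book's repository):

```python
from sagemaker.pytorch import PyTorch

# SageMaker runs train.py inside the managed PyTorch container.
estimator = PyTorch(
    entry_point="train.py",   # your own training script
    role="arn:aws:iam::123456789012:role/SageMakerRole",
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    framework_version="1.8.1",
    py_version="py36",
    hyperparameters={"epochs": 10, "lr": 0.001},
)
estimator.fit({"train": "s3://my-bucket/train/"})
```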

Now that we've introduced several SageMaker capabilities for model building, let's move on to training and tuning capabilities.

Feature tour of training and tuning capabilities

In this section, we'll dive into SageMaker's model training capabilities. By the end of this section, you should understand the basics of SageMaker training jobs, Autopilot and Hyperparameter Optimization (HPO), SageMaker Debugger, and SageMaker Experiments.

SageMaker training jobs

When you launch a model training job, SageMaker manages a series of steps for you. It launches one or more training instances, transfers training data from S3 or other supported storage systems to the instances, gets your training code from a Docker image repository, and starts the job. It monitors job progress and collects model artifacts and metrics from the job. The following screenshot shows an example of the hyperparameters tracked in a training job:

Figure 1.12 – SageMaker training jobs capture data such as input hyperparameter values

For larger training datasets, SageMaker manages distributed training. It will distribute subsets of data from storage to different training instances and manage the inter-node communication during the training job. The specifics vary based on the ML framework you're using, but note that most of the supported frameworks and several of the SageMaker built-in algorithms support distributed training.
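
As a hedged example using the built-in XGBoost algorithm (bucket, role, and hyperparameters are placeholders), a training job is launched through an Estimator; setting instance_count above 1 requests distributed training:

```python
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()
region = session.boto_region_name

# Resolve the built-in XGBoost container image for this Region
image_uri = sagemaker.image_uris.retrieve("xgboost", region, version="1.3-1")

estimator = Estimator(
    image_uri=image_uri,
    role="arn:aws:iam::123456789012:role/SageMakerRole",
    instance_count=2,                      # >1 enables distributed training
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/models/",  # where model artifacts are collected
    sagemaker_session=session,
)
estimator.set_hyperparameters(objective="binary:logistic", num_round=100)

estimator.fit({
    "train": TrainingInput("s3://my-bucket/train/", content_type="text/csv"),
    "validation": TrainingInput("s3://my-bucket/val/", content_type="text/csv"),
})
```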

Autopilot

If you are working with tabular data and solving regression or classification problems, you may find that you're performing a lot of repetitive work. You may have settled on XGBoost as a high-performing algorithm, and you may always one-hot encode low-cardinality categorical features, normalize numeric features, and so on. Autopilot performs many of these routine steps for you. In the following diagram, you can see the logical steps for an Autopilot job:

Figure 1.13 – Autopilot process

Autopilot saves you time by automating a lot of that routine process. It will run normal feature preparation tasks, try the three supported algorithms (Linear Learner, XGBoost, and a multilayer perceptron), and run hyperparameter tuning. Autopilot is a great place to start even if you end up needing to refine the output, as it generates a notebook with the code used for the entire process.
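
A minimal sketch of launching such a job from the SDK might look like the following; the target column, input path, and job name are hypothetical:

```python
from sagemaker.automl.automl import AutoML

# Autopilot explores feature preparation, candidate algorithms,
# and hyperparameters automatically for a tabular dataset.
automl = AutoML(
    role="arn:aws:iam::123456789012:role/SageMakerRole",
    target_attribute_name="churn",   # column to predict
    max_candidates=50,               # cap on candidate pipelines to evaluate
)
automl.fit(inputs="s3://my-bucket/train.csv", job_name="churn-autopilot")

# Inspect the winning candidate once the job completes
best = automl.describe_auto_ml_job()["BestCandidate"]
```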

HPO

Some ML algorithms accept dozens of hyperparameters as inputs, and tuning these by hand is time-consuming. Hyperparameter Optimization (HPO) simplifies that process by letting you define the hyperparameters you want to experiment with, the ranges to work over, and the metric you want to optimize. The following screenshot shows example output for an HPO job:

Figure 1.14 – Hyperparameter tuning jobs showing the objective metric of interest
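
As a sketch of those three inputs in code, the tuner below explores two XGBoost hyperparameters against validation AUC; `estimator` is assumed to be the XGBoost Estimator from the earlier training sketch, and the S3 paths are placeholders:

```python
from sagemaker.tuner import (
    HyperparameterTuner, ContinuousParameter, IntegerParameter,
)
from sagemaker.inputs import TrainingInput

tuner = HyperparameterTuner(
    estimator=estimator,                       # from the training-job sketch
    objective_metric_name="validation:auc",    # metric to optimize
    objective_type="Maximize",
    hyperparameter_ranges={                    # ranges to explore
        "eta": ContinuousParameter(0.01, 0.3),
        "max_depth": IntegerParameter(3, 10),
    },
    max_jobs=20,          # total training jobs across the search
    max_parallel_jobs=4,  # concurrency per round
)

tuner.fit({
    "train": TrainingInput("s3://my-bucket/train/", content_type="text/csv"),
    "validation": TrainingInput("s3://my-bucket/val/", content_type="text/csv"),
})
```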

SageMaker Debugger

SageMaker Debugger helps you debug and, depending on your ML framework, profile your training jobs. While making training jobs run faster is always helpful, debugging is particularly useful if you are writing your own deep learning code with neural networks. Problems such as exploding gradients or mysterious NaN values in your tensors are quite tough to track down, particularly in distributed training jobs. Debugger effectively lets you set breakpoints to see where things are going wrong. The following figure shows an example of the training and validation loss captured by SageMaker Debugger:

Figure 1.15 – Visualization of tensors captured by SageMaker Debugger
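
Debugger's built-in rules are attached to a training job at estimator construction time. A minimal sketch (the rule choice is illustrative):

```python
from sagemaker.debugger import Rule, rule_configs

# Built-in rules that SageMaker evaluates against captured tensors
# while the training job runs.
rules = [
    Rule.sagemaker(rule_configs.loss_not_decreasing()),
    Rule.sagemaker(rule_configs.exploding_tensor()),
]

# Pass rules=rules when constructing any framework estimator, for example:
# estimator = PyTorch(..., rules=rules)
```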

SageMaker Experiments

ML is an iterative process. When you're tuning a model, you may try several variations of hyperparameters, features, and even algorithms. It's important to track that work systematically so you can reproduce your results later on. That's where SageMaker Experiments comes into the picture. It helps you track, organize, and compare different trials. The following screenshot shows an example of SageMaker Experiments information:

Figure 1.16 – Trial results in SageMaker Experiments
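
A hedged sketch of that tracking, using the standalone sagemaker-experiments package (current when this book was written) with hypothetical names; the experiment_config ties a training job to a trial:

```python
from smexperiments.experiment import Experiment
from smexperiments.trial import Trial

# Group related training runs under one experiment
experiment = Experiment.create(
    experiment_name="churn-prediction",
    description="Compare algorithms and feature sets",
)
trial = Trial.create(
    trial_name="xgboost-baseline",
    experiment_name=experiment.experiment_name,
)

# Associate a training job with the trial; `estimator` is the one
# defined in the earlier training sketch.
estimator.fit(
    inputs={"train": "s3://my-bucket/train/"},
    experiment_config={
        "ExperimentName": experiment.experiment_name,
        "TrialName": trial.trial_name,
        "TrialComponentDisplayName": "training",
    },
)
```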

Now that we've introduced several SageMaker capabilities for training and tuning, let's move on to model management and deployment capabilities.

Feature tour of model management and deployment capabilities

In this section, we'll dive into SageMaker's model hosting and monitoring capabilities. By the end of this section, you should understand the basics of SageMaker model endpoints along with the use of SageMaker Model Monitor. You'll also learn about deploying models on edge devices with SageMaker Edge Manager.

Model Monitor 

In some organizations, the gap between the ML team and the operations team causes real problems. Operations teams may not understand how to monitor an ML system in production, and ML teams don't always have deep operational expertise.

Model Monitor tries to solve that problem: it will instrument a model endpoint and collect data about the inputs to, and outputs from, an ML model used for inference. It can then analyze that data for data drift and other quality problems, as well as model accuracy or quality problems. The following diagram shows an example of model monitoring data captured for an inference endpoint:

Figure 1.17 – Model Monitor checking data quality on inference inputs
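
The usual pattern is to baseline the training data and then schedule recurring checks against it. A minimal sketch, with placeholder paths and an endpoint name that assumes data capture is already enabled:

```python
from sagemaker.model_monitor import DefaultModelMonitor, CronExpressionGenerator
from sagemaker.model_monitor.dataset_format import DatasetFormat

monitor = DefaultModelMonitor(
    role="arn:aws:iam::123456789012:role/SageMakerRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
)

# Profile the training data to produce baseline statistics and constraints
monitor.suggest_baseline(
    baseline_dataset="s3://my-bucket/train.csv",
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri="s3://my-bucket/baseline/",
)

# Check hourly that captured inference inputs still match the baseline
monitor.create_monitoring_schedule(
    monitor_schedule_name="hourly-data-quality",
    endpoint_input="my-endpoint",    # endpoint with data capture enabled
    output_s3_uri="s3://my-bucket/monitor-reports/",
    statistics=monitor.baseline_statistics(),
    constraints=monitor.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
)
```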

Model endpoints

In some cases, you need to get a large number of inferences at once, in which case SageMaker provides a batch inference capability. But if you need to get inferences closer to real time, you can host your model in a SageMaker managed endpoint. SageMaker handles the deployment and scaling of your endpoints. Just as important, SageMaker lets you host multiple models in a single endpoint. That's useful both for A/B testing (that is, you can direct some percentage of traffic to a newer model) and for hosting multiple models that are tuned for different traffic segments.

You can also host an inference pipeline with multiple containers chained together, which is convenient if you need to preprocess inputs before performing inference. The following screenshot shows a model endpoint with two models serving different percentages of traffic:

Figure 1.18 – Multiple models configured behind a single inference endpoint
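
The traffic split in Figure 1.18 is expressed as weighted production variants. A hedged boto3 sketch of the A/B pattern, assuming two models already registered with CreateModel (all names are placeholders):

```python
import boto3

sm = boto3.client("sagemaker")

# Two models behind one endpoint, with 90/10 traffic weighting
sm.create_endpoint_config(
    EndpointConfigName="churn-ab-config",
    ProductionVariants=[
        {
            "VariantName": "model-a",
            "ModelName": "churn-model-v1",
            "InstanceType": "ml.m5.large",
            "InitialInstanceCount": 1,
            "InitialVariantWeight": 0.9,   # 90% of traffic
        },
        {
            "VariantName": "model-b",
            "ModelName": "churn-model-v2",
            "InstanceType": "ml.m5.large",
            "InitialInstanceCount": 1,
            "InitialVariantWeight": 0.1,   # 10% to the challenger
        },
    ],
)
sm.create_endpoint(
    EndpointName="churn-endpoint",
    EndpointConfigName="churn-ab-config",
)
```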

Edge Manager

In some cases, you need to get model inferences on a device rather than from the cloud. You may need a lower response time that doesn't allow for an API call to the cloud, or you may have intermittent network connectivity. In video use cases, it's not always feasible to stream data to the cloud for inference. In such cases, Edge Manager and related tools such as SageMaker Neo help you compile models optimized to run on devices, deploy them, manage them, and get operational metrics back to the cloud. The following screenshot shows an example of a virtual device managed by Edge Manager:

Figure 1.19 – A device registered to an Edge Manager device fleet
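
Edge deployments start with a Neo compilation job that optimizes the trained model for the target hardware. A small boto3 sketch, with a placeholder model artifact, framework, and target device:

```python
import boto3

sm = boto3.client("sagemaker")

# Compile a trained model for a specific edge device with Neo
sm.create_compilation_job(
    CompilationJobName="resnet-jetson",
    RoleArn="arn:aws:iam::123456789012:role/SageMakerRole",
    InputConfig={
        "S3Uri": "s3://my-bucket/models/model.tar.gz",
        "DataInputConfig": '{"input": [1, 3, 224, 224]}',  # expected input shape
        "Framework": "PYTORCH",
    },
    OutputConfig={
        "S3OutputLocation": "s3://my-bucket/compiled/",
        "TargetDevice": "jetson_xavier",
    },
    StoppingCondition={"MaxRuntimeInSeconds": 900},
)
```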

Before we conclude with the summary, let's have a recap of the SageMaker capabilities provided for the following primary ML phases:

  • For data preparation:
Figure 1.20 – SageMaker capabilities for data preparation

  • For operations:
Figure 1.21 – SageMaker capabilities for operations

  • For model training:
Figure 1.22 – SageMaker capabilities for model training

With this, we have come to the end of this chapter.

Summary

In this chapter, you saw how to map SageMaker capabilities to different phases of the ML life cycle. You got a quick look at important SageMaker capabilities. In the next chapter, you will learn about the technical requirements and the use case that will be used throughout. You'll also learn about setting up managed data science environments for scaling model-building activities.


Key benefits

  • Learn best practices for all phases of building machine learning solutions, from data preparation to monitoring models in production
  • Automate end-to-end machine learning workflows with Amazon SageMaker and related AWS services
  • Design, architect, and operate machine learning workloads in the AWS Cloud

Description

Amazon SageMaker is a fully managed AWS service that provides the ability to build, train, deploy, and monitor machine learning models. The book begins with a high-level overview of Amazon SageMaker capabilities that map to the various phases of the machine learning process to help set the right foundation. You'll learn efficient tactics to address data science challenges such as processing data at scale, data preparation, connecting to big data pipelines, identifying data bias, running A/B tests, and model explainability using Amazon SageMaker. As you advance, you'll understand how you can tackle the challenge of training at scale, including how to use large datasets while saving costs, monitoring training resources to identify bottlenecks, speeding up long training jobs, and tracking multiple models trained for a common goal. Moving ahead, you'll find out how you can integrate Amazon SageMaker with other AWS services to build reliable, cost-optimized, and automated machine learning applications. In addition to this, you'll build ML pipelines integrated with MLOps principles and apply best practices to build secure and performant solutions. By the end of the book, you'll confidently be able to apply Amazon SageMaker's wide range of capabilities to the full spectrum of machine learning workflows.

Who is this book for?

This book is for expert data scientists responsible for building machine learning applications using Amazon SageMaker. Working knowledge of Amazon SageMaker, machine learning, deep learning, and experience using Jupyter Notebooks and Python is expected. Basic knowledge of AWS related to data, security, and monitoring will help you make the most of the book.

What you will learn

  • Perform data bias detection with AWS Data Wrangler and SageMaker Clarify
  • Speed up data processing with SageMaker Feature Store
  • Overcome labeling bias with SageMaker Ground Truth
  • Improve training time with the monitoring and profiling capabilities of SageMaker Debugger
  • Address the challenge of model deployment automation with CI/CD using the SageMaker model registry
  • Explore SageMaker Neo for model optimization
  • Implement data and model quality monitoring with Amazon SageMaker Model Monitor
  • Improve training time and reduce costs with SageMaker data and model parallelism

Product Details

Publication date: Sep 24, 2021
Length: 348 pages
Edition: 1st
Language: English
ISBN-13: 9781801077767




Table of Contents

19 Chapters

Section 1: Processing Data at Scale
Chapter 1: Amazon SageMaker Overview
Chapter 2: Data Science Environments
Chapter 3: Data Labeling with Amazon SageMaker Ground Truth
Chapter 4: Data Preparation at Scale Using Amazon SageMaker Data Wrangler and Processing
Chapter 5: Centralized Feature Repository with Amazon SageMaker Feature Store
Section 2: Model Training Challenges
Chapter 6: Training and Tuning at Scale
Chapter 7: Profile Training Jobs with Amazon SageMaker Debugger
Section 3: Manage and Monitor Models
Chapter 8: Managing Models at Scale Using a Model Registry
Chapter 9: Updating Production Models Using Amazon SageMaker Endpoint Production Variants
Chapter 10: Optimizing Model Hosting and Inference Costs
Chapter 11: Monitoring Production Models with Amazon SageMaker Model Monitor and Clarify
Section 4: Automate and Operationalize Machine Learning
Chapter 12: Machine Learning Automated Workflows
Chapter 13: Well-Architected Machine Learning with Amazon SageMaker
Chapter 14: Managing SageMaker Features across Accounts
Other Books You May Enjoy

Customer reviews

Rating distribution: 4.9 (8 Ratings)
5 star: 87.5%
4 star: 12.5%
3 star: 0%
2 star: 0%
1 star: 0%

Wesley Pasfield, Sep 24, 2021 (5 stars)
Overall this is an excellent and practical overview of the full breadth of the SageMaker platform. I'd recommend this to anyone who is using SageMaker on AWS for full end-to-end machine learning workflows. There are an overwhelming number of products within the SageMaker platform, and this book does a great job of clearly explaining the benefit of each product and placing it within the broader AWS ecosystem. Often with machine learning platforms and tutorials there's an over-emphasis on the modeling portion of the process, but this book covers the full machine learning lifecycle including model management, versioning, and deployment, and I really appreciated the book's practical focus. I especially enjoyed the focus on integration with other AWS services, notably the sections with detail on CloudFormation orchestration (particularly the chapter on setting up data science environments), and would have liked to see more CloudFormation focus in the deployment section. It also would have been nice to see the pros and cons of some of the new SageMaker features vs. their prior AWS service predecessors to better understand the value proposition (e.g., Online Feature Store vs. DynamoDB). I thought the comparison of Model Registry options (SageMaker, AWS Custom, and Open Source) was especially strong and helpful given the vast number of architecture decisions that are possible with AWS.
Amazon Verified review
J. Wu, Dec 18, 2021 (5 stars)
As a data scientist I have done many ML projects in Jupyter Notebook, and I have always been intrigued by AWS SageMaker. The book gave a great overall view of what SageMaker does. This is what I learned from the book: SageMaker is not magic. For mature algorithms, it has prebuilt scripts; for more customized projects you need to write your own code. The main point of SageMaker is to automate and scale. It can potentially automate labor-intensive labeling, and it can facilitate ETL and the whole ML pipeline and scale to big data. Regarding how to do so, this book provides examples of good practices. Overall this book is definitely valuable for someone who is able to dive into AWS SageMaker.
Amazon Verified review
Amrita, Oct 28, 2021 (5 stars)
The book is a great addition to the documentation around AWS SageMaker. From my personal experience, SageMaker is a very powerful MLOps tool, but the official documentation/examples/tutorials are definitely too sparse on many details. This book would be a great resource for both beginners and advanced users. I think one thing the book could have covered in more detail is examples of how SageMaker can be adapted to existing ML workflows, since that is not always obvious, as the book covers each SageMaker feature separately.
Amazon Verified review
laura, Feb 21, 2022 (5 stars)
This book contains a detailed overview of the full machine learning lifecycle. Inside you will learn how to process and prepare data, use big data pipelines, and run A/B tests. What is unique and great about the book is its use of example code files/CloudFormation templates provided on GitHub, which allows the reader to follow each chapter and test the features described throughout the book. To get the most out of this book, the reader needs to have a working knowledge of Amazon SageMaker and experience with the AWS console. I particularly enjoyed the fact that the AWS Well-Architected Framework is taken into account by including an overview of the AWS services necessary to stay within the well-architected guidelines.
Amazon Verified review
Jim, Oct 28, 2021 (5 stars)
This is a comprehensive book on applying Amazon SageMaker to the whole lifecycle of a machine learning process. I highly recommend it to anyone who uses Amazon SageMaker to develop machine learning applications, especially systems with large-scale datasets. The book provides much advice, with examples, on how to address data science challenges with large-scale datasets, such as data pre-processing and preparation at scale and model training at scale using big data pipelines. In addition, the book also shows how to integrate machine learning pipelines with MLOps principles to automate the development of a secure and performant machine learning application system.
Amazon Verified review

FAQs

How do I buy and download an eBook?

Where there is an eBook version of a title available, you can buy it from the book details for that title. Add either the standalone eBook or the eBook and print book bundle to your shopping cart. Your eBook will show in your cart as a product on its own. After completing checkout and payment in the normal way, you will receive your receipt on the screen containing a link to a personalised PDF download file. This link will remain active for 30 days. You can download backup copies of the file by logging in to your account at any time.

If you already have Adobe reader installed, then clicking on the link will download and open the PDF file directly. If you don't, then save the PDF file on your machine and download the Reader to view it.

Please Note: Packt eBooks are non-returnable and non-refundable.

Packt eBook and Licensing: When you buy an eBook from Packt Publishing, completing your purchase means you accept the terms of our licence agreement. Please read the full text of the agreement. In it we have tried to balance the need for the eBook to be usable for you, the reader, with our need to protect our rights as Publishers and those of our authors. In summary, the agreement says:

  • You may make copies of your eBook for your own use onto any machine
  • You may not pass copies of the eBook on to anyone else

How can I make a purchase on your website?

If you want to purchase a video course, eBook, or Bundle (Print+eBook), please follow the steps below:

  1. Register on our website using your email address and a password.
  2. Search for the title by name or ISBN using the search option.
  3. Select the title you want to purchase.
  4. Choose the format you wish to purchase the title in; if you order the Print Book, you get a free eBook copy of the same title.
  5. Proceed with the checkout process (payment to be made using Credit Card, Debit Card, or PayPal).
Where can I access support around an eBook?

  • If you experience a problem with using or installing Adobe Reader, then contact Adobe directly.
  • To view the errata for the book, see www.packtpub.com/support and view the pages for the title you have.
  • To view your account details or to download a new copy of the book, go to www.packtpub.com/account
  • To contact us directly if a problem is not resolved, use www.packtpub.com/contact-us
What eBook formats do Packt support?

Our eBooks are currently available in a variety of formats such as PDF and ePub. In the future, this may well change with trends and developments in technology, but please note that our PDFs are not Adobe eBook Reader format, which has greater restrictions on security.

You will need to use Adobe Reader v9 or later in order to read Packt's PDF eBooks.

What are the benefits of eBooks?
  • You can get the information you need immediately
  • You can easily take them with you on a laptop
  • You can download them an unlimited number of times
  • You can print them out
  • They are copy-paste enabled
  • They are searchable
  • There is no password protection
  • They are lower priced than print
  • They save resources and space
What is an eBook?

Packt eBooks are a complete electronic version of the print edition, available in PDF and ePub formats. Every piece of content down to the page numbering is the same. Because we save the costs of printing and shipping the book to you, we are able to offer eBooks at a lower cost than print editions.

When you have purchased an eBook, simply log in to your account and click on the link in Your Download Area. We recommend saving the file to your hard drive before opening it.

For optimal viewing of our eBooks, we recommend you download and install the free Adobe Reader version 9.