Monitoring prediction quality with Amazon SageMaker Model Monitor
SageMaker Model Monitor has two main features, outlined here:
- Capturing data sent to an endpoint, as well as predictions returned by the endpoint. This is useful for further analysis, or for replaying real-life traffic when developing and testing new models.
- Comparing incoming traffic to a baseline built from the training set, as well as sending alerts about data quality issues, such as missing features, mistyped features, and differences in statistical properties (also known as "data drift").
We'll use the Linear Learner example from Chapter 4, Training Machine Learning Models, where we trained a model on the Boston Housing dataset. First, we'll add data capture to the endpoint. Then, we'll build a baseline and set up a monitoring schedule to periodically compare the incoming data to that baseline.
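Before we walk through each step, here's a minimal preview of what building a baseline and scheduling monitoring can look like with the SageMaker SDK's `DefaultModelMonitor`. The bucket paths, endpoint name, and schedule name below are placeholders, and the exact parameters we'll use later may differ.

```python
from sagemaker import get_execution_role
from sagemaker.model_monitor import DefaultModelMonitor, CronExpressionGenerator
from sagemaker.model_monitor.dataset_format import DatasetFormat

role = get_execution_role()

# Monitor backed by the built-in data quality container
monitor = DefaultModelMonitor(
    role=role,
    instance_count=1,
    instance_type='ml.m5.xlarge',
    volume_size_in_gb=20,
    max_runtime_in_seconds=1800
)

# Build a baseline (statistics and constraints) from the training set
monitor.suggest_baseline(
    baseline_dataset='s3://my-bucket/housing/training-dataset.csv',  # placeholder
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri='s3://my-bucket/housing/baseline',                 # placeholder
    wait=True
)

# Compare captured traffic to the baseline every hour
monitor.create_monitoring_schedule(
    monitor_schedule_name='housing-data-quality',                    # placeholder
    endpoint_input='my-endpoint-name',                               # placeholder
    output_s3_uri='s3://my-bucket/housing/reports',                  # placeholder
    statistics=monitor.baseline_statistics(),
    constraints=monitor.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
    enable_cloudwatch_metrics=True
)
```

Each scheduled run compares the data captured during that period to the baseline and writes its findings to the output location, which is where we'll look for data quality violations.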
Capturing data
We can set up the data-capture process when we deploy...
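As a quick sketch of that deployment-time configuration, the SageMaker SDK provides a `DataCaptureConfig` object that we pass to `deploy()`. The bucket path, instance type, and the `ll_estimator` name (standing in for the Linear Learner estimator trained in Chapter 4) are placeholders, and may differ from the values we'll actually use.

```python
from sagemaker.model_monitor import DataCaptureConfig

# Capture 100% of requests and responses to S3 (placeholder bucket/prefix)
capture_config = DataCaptureConfig(
    enable_capture=True,
    sampling_percentage=100,
    destination_s3_uri='s3://my-bucket/housing/capture'
)

# Pass the capture configuration when deploying the trained estimator
# (ll_estimator is a placeholder for the Linear Learner estimator from Chapter 4)
predictor = ll_estimator.deploy(
    initial_instance_count=1,
    instance_type='ml.m5.large',
    data_capture_config=capture_config
)
```

For an endpoint that is already running, the SDK also exposes `Predictor.update_data_capture_config()`, which applies the same configuration without redeploying.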