Deploying an endpoint from a model and enabling data capture with SageMaker Model Monitor
In this recipe, we will deploy the model we trained in the Detecting post-training bias with SageMaker Clarify recipe to an inference endpoint. It is important to remember that the machine learning process does not end once a model has been deployed to production: we only learn the deployed model's true performance when it is exposed to data it has never seen before. For that reason, we must capture the request and response pairs whenever the inference endpoint is invoked. Capturing these pairs gives us the ability to analyze whether there are issues in the deployed model, or in the data being passed as the payload to the inference endpoint.
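To give a feel for how this looks in practice, here is a minimal sketch of deploying a model with data capture enabled using the SageMaker Python SDK (v2). The `model` object, bucket name, instance type, and sample payload are assumptions for illustration only; the recipe steps use the actual values produced in the preceding recipes:

```python
from sagemaker.model_monitor import DataCaptureConfig
from sagemaker.serializers import CSVSerializer

# Assumed names: `model` is the SageMaker Model object trained in the
# earlier recipe, and s3://<bucket>/datacapture is a writable S3 prefix.
capture_config = DataCaptureConfig(
    enable_capture=True,
    sampling_percentage=100,           # capture every request/response pair
    destination_s3_uri="s3://<bucket>/datacapture",
)

# Passing data_capture_config at deploy time tells SageMaker to persist
# the sampled request/response pairs as JSON Lines files in S3.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",      # assumed instance type
    data_capture_config=capture_config,
)

# Each invocation is now captured under
# s3://<bucket>/datacapture/<endpoint-name>/<variant>/yyyy/mm/dd/hh/
predictor.serializer = CSVSerializer()
predictor.predict("22,1,0,0,1")        # hypothetical CSV payload
```

Setting `sampling_percentage` to 100 captures every request and response; on high-traffic endpoints, a lower value trades completeness for storage cost.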
The great thing about using Amazon SageMaker is that we do not need to build this capability ourselves, since these challenges and potential issues are already handled by SageMaker Model Monitor. Finally, we will demonstrate how to use the SageMaker...