Hands-on lab – detecting bias, explaining models, training privacy-preserving models, and simulating adversarial attacks
Building a comprehensive system for ML governance is a complex initiative. In this hands-on lab, you will learn to use some of SageMaker’s built-in functionalities to support certain aspects of ML governance.
Problem statement
As an ML solutions architect, you have been assigned to identify technology solutions to support a project that has regulatory implications. Specifically, you need to determine the technical approaches for data bias detection, model explainability, and privacy-preserving model training. Follow these steps to get started.
Detecting bias in the training dataset
- Launch the SageMaker Studio environment:
  - Launch the same SageMaker Studio environment that you have been using.
  - Create a new folder called Chapter13. This will be our working directory for this lab. Create a new Jupyter notebook and...
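Inside that notebook, pre-training bias detection is typically driven by SageMaker Clarify. The following is a minimal sketch, not the lab's exact code: the S3 paths, column names (target, gender, and so on), the facet values, and the chosen metrics are illustrative assumptions you would replace with your own dataset's details.

```python
# Minimal sketch (assumed values): run a SageMaker Clarify pre-training
# bias job against a CSV training set stored in S3.
import sagemaker
from sagemaker import clarify

session = sagemaker.Session()
role = sagemaker.get_execution_role()   # Studio execution role
bucket = session.default_bucket()

train_s3_uri = f"s3://{bucket}/chapter13/train.csv"           # assumed input path
bias_report_s3_uri = f"s3://{bucket}/chapter13/bias-report"   # assumed output path

clarify_processor = clarify.SageMakerClarifyProcessor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

data_config = clarify.DataConfig(
    s3_data_input_path=train_s3_uri,
    s3_output_path=bias_report_s3_uri,
    label="target",                                  # assumed label column
    headers=["target", "gender", "age", "income"],   # assumed column names
    dataset_type="text/csv",
)

# Check bias with respect to an assumed sensitive attribute ("gender").
bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],           # favorable label value
    facet_name="gender",
    facet_values_or_threshold=["female"],    # group to evaluate for bias
)

# Compute pre-training metrics such as class imbalance (CI) and
# difference in proportions of labels (DPL).
clarify_processor.run_pre_training_bias(
    data_config=data_config,
    data_bias_config=bias_config,
    methods=["CI", "DPL"],
)
```

When the processing job finishes, Clarify writes the bias analysis to the S3 output path you configured, which you can then review to decide whether the training data needs rebalancing before model training.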