Detecting pre-training bias with SageMaker Clarify
As we work through more real-world examples, we will start to encounter requirements that involve detecting and managing ML bias. For example, a deployed machine learning model may reject applications from disfavored or underrepresented groups simply because the data it was trained on was already biased against those groups. This reduces opportunities for the affected groups and, in turn, reinforces the very bias the model learned in the first place. Once we recognize the importance of ensuring fairness in machine learning, we will start looking for solutions that help us handle the legal, ethical, and technical considerations involved. The good news is that SageMaker Clarify can help us detect ML bias in both our data and our models!
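To give a sense of what detecting pre-training bias with SageMaker Clarify looks like in practice, here is a minimal sketch using the SageMaker Python SDK. The S3 paths, the IAM role, and the column names (`approved`, `gender`) are placeholders for illustration only, assuming a CSV dataset with a binary label column:

```python
from sagemaker import Session, clarify

session = Session()

# A processing job runs the bias analysis on a managed instance.
# NOTE: the role ARN below is a placeholder.
processor = clarify.SageMakerClarifyProcessor(
    role="<YOUR-SAGEMAKER-EXECUTION-ROLE-ARN>",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

# Describe where the (hypothetical) CSV training data lives
# and which column holds the target label.
data_config = clarify.DataConfig(
    s3_data_input_path="s3://<BUCKET>/input/training_data.csv",
    s3_output_path="s3://<BUCKET>/clarify-output",
    label="approved",                                 # hypothetical label column
    headers=["approved", "gender", "income", "age"],  # hypothetical columns
    dataset_type="text/csv",
)

# Identify the sensitive attribute (facet) to analyze for bias.
bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],  # the "positive" outcome value
    facet_name="gender",            # hypothetical sensitive column
)

# Run a pre-training bias analysis, computing, for example,
# Class Imbalance (CI) and Difference in Proportions of Labels (DPL).
processor.run_pre_training_bias(
    data_config=data_config,
    data_bias_config=bias_config,
    methods=["CI", "DPL"],
)
```

Setting `methods="all"` instead would compute every available pre-training bias metric; the resulting bias report is written to the `s3_output_path` specified in the `DataConfig`.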
AI and ML bias may be present at specific stages of the machine learning pipeline: before, during, and after training. In this recipe, we will use...