The mission
Algorithmic fairness has massive societal implications, from the allocation of welfare resources, to the prioritization of life-saving surgeries, to the screening of job applications. Machine learning algorithms can determine a person's livelihood, or even their life, and it's often the most marginalized and vulnerable populations that are treated worst by them, because the algorithms perpetuate systemic biases learned from the data. As a result, it's poorer families that get misclassified for child abuse risk; it's racial minorities that get underprioritized for medical treatment; and it's women who get screened out of high-paying tech jobs. Even in cases involving less immediate and individualized risks, such as online searches, Twitter bots, and social media profiles, societal prejudices such as elitism, racism, sexism, and ageism are reinforced.
This chapter will continue on the mission from Chapter 7, Anchor and Counterfactual...