The mission
The issue of algorithmic fairness has massive social implications, from the allocation of welfare resources to the prioritization of life-saving surgeries to the screening of job applications. The machine learning algorithms used in these domains can determine a person's livelihood, or even their life, and it's often the most marginalized and vulnerable populations that receive the worst treatment, because these algorithms perpetuate systemic biases learned from the data. Therefore, it's poorer families that get misclassified for child abuse; it's racial-minority patients who get underprioritized for medical treatment; and it's women who get screened out of high-paying tech jobs. Even in cases involving less immediate and individualized risks, such as online searches, Twitter/X bots, and social media profiles, societal prejudices such as elitism, racism, sexism, and ageism are reinforced.
This chapter will continue the mission from Chapter 6, Anchors and Counterfactual...