Addressing model bias and fairness
A key characteristic of ML is that it learns from the past to predict the future, which means future predictions are shaped by historical data. Some training datasets are structured in ways that introduce bias into ML models. These biases stem from unspoken unfairness already evident in human systems. Bias perpetuates prejudice and unfairness that predate the models and can lead to unintended consequences. An AI system that cannot recognize human bias mirrors, if not exacerbates, the bias present in its training dataset. This is why women are more likely than their male counterparts to receive lower salary predictions from ML models trained on historical pay data. Similarly, credit card companies relying on ML models driven by historical data could be steered into offering higher rates to individuals from minority backgrounds. Such unwarranted associations are caused by the human bias inherent in the training dataset. It is unfair to...
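To make this kind of disparity concrete, one simple diagnostic is to compare a model's favorable-prediction rates across groups, a measure commonly called the demographic parity difference. The sketch below is a minimal, hypothetical example: the predictions, group labels, and data are invented purely for illustration, not drawn from any real system.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in favorable-prediction rates between two groups.

    A value near 0 suggests both groups receive favorable predictions
    at similar rates; larger values indicate disparity.
    """
    rate_a = y_pred[group == 0].mean()  # favorable-outcome rate, group 0
    rate_b = y_pred[group == 1].mean()  # favorable-outcome rate, group 1
    return abs(rate_a - rate_b)

# Hypothetical binary predictions (1 = favorable outcome, e.g., loan approved)
# and group membership (0/1, e.g., a demographic attribute).
y_pred = np.array([1, 1, 0, 1, 1, 0, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

print(f"Demographic parity difference: "
      f"{demographic_parity_difference(y_pred, group):.2f}")  # prints 0.60
```

In this toy data, group 0 receives the favorable outcome 80% of the time versus 20% for group 1, yielding a difference of 0.60. Libraries such as Fairlearn and AIF360 provide production-grade implementations of this and related fairness metrics.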