Monitoring bias in ML models
At this point in the book, especially if you are new to the field, you are probably starting to realize that we are just at the tip of the iceberg when it comes to identifying and solving bias problems. The implications range from poor model performance to real harm to people, especially in domains such as hiring, criminal justice, and financial services. These are among the concerns Cathy O’Neil raised in her 2016 book, Weapons of Math Destruction (8), in which she argues that while ML models can be useful, they can also be quite harmful to humans when designed and implemented carelessly.
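To make the idea of monitoring bias a little more concrete, here is a minimal sketch, not drawn from any particular library, of what one such check might look like over a batch of model predictions. The column names, toy data, and alert threshold are all illustrative assumptions; the sketch tracks a single group-fairness measure, the demographic parity gap.

```python
import pandas as pd

def demographic_parity_gap(predictions: pd.Series, groups: pd.Series) -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    positive_rates = predictions.groupby(groups).mean()
    return float(positive_rates.max() - positive_rates.min())

# Hypothetical batch of scored hiring applications (column names are illustrative).
batch = pd.DataFrame({
    "predicted_hire": [1, 0, 1, 1, 0, 0, 1, 0],
    "gender":         ["F", "F", "M", "M", "F", "M", "M", "F"],
})

gap = demographic_parity_gap(batch["predicted_hire"], batch["gender"])

# Alert threshold chosen purely for illustration; in practice, the threshold
# is a policy decision, not a statistical constant.
if gap > 0.2:
    print(f"WARNING: demographic parity gap of {gap:.2f} exceeds 0.2")
```

In a real system, you would compute checks like this continuously over production data, and demographic parity is only one of many fairness measures; which metrics matter depends on the domain and the harms at stake.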
These concerns raise core issues about ML-driven innovation. How good is good enough in a world full of biases? As an ML practitioner who is passionate about large-scale innovation, and as a woman who is on the negative end of some biases while certainly on the positive end of others, I grapple with these questions a lot.
Personally...