Exploring Bias and Fairness
A biased machine learning model produces or amplifies unfair, discriminatory predictions against certain groups, and those predictions can lead to real harms such as social or economic inequality. Fortunately, many countries have discrimination and equality laws that protect minority groups against unfavorable treatment. One of the worst scenarios a machine learning practitioner, or anyone who deploys a biased model, can face is legal action: a notice imposing a heavy fine, or a letter from a lawyer announcing a lawsuit and forcing them to shut down the deployed model. Here are a few examples of such situations:
- The ride-hailing app Uber faced legal action from two unions in the UK over its facial verification system, which showed racial bias: it produced more frequent verification errors for dark-skinned people, impeding their work as Uber drivers (https://www.bbc.com/news/technology-58831373).
- Creators...