Understanding ML fairness
Aside from ethical concerns, such as ensuring that your dataset is free of obvious bias, it's important that the dataset and its associated model deliver a fair result. A dataset can lack any sort of PII, features that could be linked to particular groups, and unnecessary features, and yet the resulting model can still behave unfairly. One of the most controversial and well-known examples of ML unfairness involves the models used to assess the recidivism risk of individuals seeking release from prison. Fairness in Machine Learning – The Case of Juvenile Criminal Justice in Catalonia (https://blog.re-work.co/using-machine-learning-for-criminal-justice/) describes just one such incident. The problem is extremely widespread, leading many to ask whether ML is capable of being fair in this scenario at all. The following sections explore ML fairness in more detail.
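One way to see how a model can be unfair even without group-linked features is to compare outcome rates across groups after the fact. The following minimal sketch computes a demographic parity gap: the difference in favorable-prediction rates between groups. The function name, the toy predictions, and the group labels are all illustrative assumptions, not part of any particular library.

```python
# Hypothetical sketch: a model trained without group labels can still produce
# different favorable-outcome rates per group. Demographic parity is one
# simple check. All names and data below are illustrative.

def demographic_parity_gap(predictions, groups):
    """Return the largest gap in favorable-outcome (1) rates between groups."""
    counts = {}
    for pred, group in zip(predictions, groups):
        total, favorable = counts.get(group, (0, 0))
        counts[group] = (total + 1, favorable + (1 if pred == 1 else 0))
    rates = [favorable / total for total, favorable in counts.values()]
    return max(rates) - min(rates)

# Example: 1 = "low recidivism risk" (the favorable outcome) for two groups.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(f"Demographic parity gap: {demographic_parity_gap(preds, groups):.2f}")
# Group a receives the favorable outcome at 0.75, group b at 0.25 - a 0.50 gap.
```

A gap near zero suggests the groups receive favorable predictions at similar rates; a large gap, as here, flags a disparity worth investigating even though no group feature appears in the model's inputs.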
Determining what fairness means
The term fair isn’t actually well understood in most contexts and is...