Impartial modeling and fairness in machine learning
Machine learning models make mistakes. But their mistakes are not always harmless or random; they can reflect systematic biases, as in the COMPAS example provided in Chapter 1, Beyond Code Debugging. We need to investigate our models for such biases and revise them to eliminate any we find. Let's go through more examples to clarify the importance of investigating our data and models for bias.
Recruiting is a challenging process for any company: it must identify the most suitable candidates to interview from among hundreds of applicants who have submitted resumes and cover letters. In 2014, Amazon started developing a machine learning tool to screen job applicants and select the best candidates to pursue based on the information in their resumes. It was a text-processing model that used the text of resumes to identify key information and rank the top candidates. But eventually, Amazon abandoned the tool after discovering that it discriminated against women: trained on a decade of resumes that came mostly from men, the model had learned to penalize resumes containing terms such as "women's."
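As a minimal sketch of what such an investigation can look like in practice, the following Python snippet compares false positive rates across groups of a sensitive attribute using plain pandas. The toy data and the column names (y_true, y_pred, and group) are hypothetical placeholders for your own model's predictions on a held-out set:

import pandas as pd

# Hypothetical example data: true outcomes, model predictions, and a
# sensitive attribute for each individual
df = pd.DataFrame({
    "y_true": [0, 0, 1, 1, 0, 1, 0, 0],
    "y_pred": [0, 1, 1, 0, 1, 1, 0, 1],
    "group":  ["A", "A", "A", "A", "B", "B", "B", "B"],
})

def false_positive_rate(sub):
    # FPR: the share of true negatives that the model flagged as positive
    negatives = sub[sub["y_true"] == 0]
    return (negatives["y_pred"] == 1).mean()

# A large gap in FPR between groups is one signal of systematically
# biased mistakes, as in the COMPAS case
for name, sub in df.groupby("group"):
    print(name, false_positive_rate(sub))

In a real project, you would examine several such metrics (false negative rates, selection rates, and so on) across all sensitive groups rather than relying on a single number.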