Data + AI: Weapons of math destruction
In a world increasingly driven by AI models, data scientists cannot, on their own, effectively ascertain the costs associated with the unintended consequences of false positives and false negatives. Mitigating those consequences requires collaboration across a diverse set of stakeholders to identify the metrics against which the AI utility function will optimize.
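To make that collaboration concrete, here is a minimal sketch in Python of how stakeholder-supplied costs for false positives and false negatives might be folded into a single expected-cost metric for comparing candidate models. The cost figures, counts, and function names are hypothetical, purely for illustration.

```python
# A minimal sketch of a stakeholder-informed cost metric.
# The cost figures below are hypothetical; in practice they would be
# negotiated with HR, legal, finance, and other stakeholders.

COST_FALSE_POSITIVE = 25_000   # e.g., cost of a bad hire (onboarding, attrition)
COST_FALSE_NEGATIVE = 10_000   # e.g., cost of passing on a good candidate

def expected_cost(false_positives: int, false_negatives: int) -> float:
    """Total expected cost of a model's misclassifications."""
    return (false_positives * COST_FALSE_POSITIVE
            + false_negatives * COST_FALSE_NEGATIVE)

# Compare two candidate models evaluated on the same validation set
model_a = expected_cost(false_positives=12, false_negatives=40)
model_b = expected_cost(false_positives=30, false_negatives=8)
print(f"Model A cost: ${model_a:,.0f}")  # $700,000
print(f"Model B cost: ${model_b:,.0f}")  # $830,000
```

The point of the sketch is that neither model is "best" until the stakeholders have agreed on the relative costs: swap the two cost figures and the ranking of the models flips with them.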
As Cathy O'Neil documented in her book Weapons of Math Destruction, the biases built into many AI models used to approve loans and mortgages, screen job applicants, and decide university admissions yield unintended consequences that severely impact individuals and society.
For example, AI has become a decisive component of the job applicant hiring process[5]. In 2018, about 67% of hiring managers and recruiters[6] used AI to pre-screen job applicants. By 2020, that percentage had increased to 88%[7]. We should all be concerned that these AI models introduce bias, lack accountability and transparency, and aren't even guaranteed to be accurate. AI-based hiring models may reject highly qualified candidates whose resumes and job experience don't match the background qualifications, behavioral characteristics, and operational assumptions embedded in the employee performance data used to train them.
The good news is that this problem is solvable. The data science team can construct a feedback loop to measure the effectiveness of the AI model's predictions. That measurement would cover not only the false positives (hiring people whom the model predicted would succeed but who did not) but also the false negatives (not hiring people whom the model predicted would fail but who ultimately would have succeeded).
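As a rough illustration, here is a minimal sketch in Python of the core of such a feedback loop: a confusion-matrix tally that compares each hiring prediction with the outcome observed later. The record structure, field names, and data are hypothetical, not a prescribed schema.

```python
from collections import Counter

# Each record pairs the model's original hiring prediction with the
# outcome observed later (e.g., a performance review after 12 months).
# Field names and data are hypothetical.
records = [
    {"predicted_success": True,  "actual_success": False},  # false positive
    {"predicted_success": True,  "actual_success": True},   # true positive
    {"predicted_success": False, "actual_success": True},   # false negative
    {"predicted_success": False, "actual_success": False},  # true negative
]

def tally_outcomes(records):
    """Build a confusion-matrix tally from prediction/outcome pairs."""
    counts = Counter()
    for r in records:
        if r["predicted_success"] and not r["actual_success"]:
            counts["false_positive"] += 1   # hired, but did not succeed
        elif not r["predicted_success"] and r["actual_success"]:
            counts["false_negative"] += 1   # not hired, but would have succeeded
        elif r["predicted_success"]:
            counts["true_positive"] += 1
        else:
            counts["true_negative"] += 1
    return counts

print(tally_outcomes(records))
```

One practical caveat: the false negatives are the hardest cell of the matrix to fill in, since an organization rarely learns how the candidates it rejected go on to perform.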
In Chapter 6, we will take a deep dive into how data science teams can create a feedback loop that learns from the AI model's false positives and false negatives and adjusts the model accordingly.
So far, we’ve presented a simplified explanation of what AI is and reviewed many of the challenges and risks associated with the design, development, deployment, and management of AI. We’ve discussed how the US government is trying to mandate the responsible and ethical deployment of AI through the introduction of the AI Bill of Rights. But AI usage is growing exponentially, and we cannot rely on the government alone to stay abreast of these massive AI advancements. It’s more critical than ever that we, as citizens, understand the role we must play in ensuring the responsible and ethical usage of AI. And that starts with AI and data literacy.