AI Fairness with Google's What-If Tool (WIT)
Google's PAIR (People + AI Research) team created the What-If Tool (WIT), a visualization tool designed to investigate the fairness of an AI model. Using WIT leads us directly to a critical ethical and moral question: what do we consider fair? WIT provides the tools to represent what we view as bias so that we can build the most impartial AI systems possible.
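To make this concrete, here is a minimal sketch of launching WIT inside a Jupyter notebook with the witwidget package. The toy dataset, the feature names (age, income), and the stand-in predict_fn are hypothetical placeholders for illustration; in practice, you would supply your own test examples and model:

```python
# A minimal sketch of launching the What-If Tool in a Jupyter notebook.
# The data and scoring function below are hypothetical stand-ins.
import numpy as np
import tensorflow as tf
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

def make_example(age, income, label):
    # Encode one record as a tf.Example proto, the input format WIT expects.
    ex = tf.train.Example()
    ex.features.feature['age'].float_list.value.append(age)
    ex.features.feature['income'].float_list.value.append(income)
    ex.features.feature['label'].int64_list.value.append(label)
    return ex

# Hypothetical toy test set; replace with your own examples.
examples = [make_example(25.0, 40000.0, 0),
            make_example(52.0, 90000.0, 1)]

def predict_fn(examples_to_score):
    # Stand-in scorer returning [P(class 0), P(class 1)] per example;
    # replace with calls to your real model.
    scores = []
    for ex in examples_to_score:
        age = ex.features.feature['age'].float_list.value[0]
        p1 = 1.0 / (1.0 + np.exp(-(age - 40.0) / 10.0))
        scores.append([1.0 - p1, p1])
    return scores

# Configure and display the WIT widget in the notebook.
config_builder = (WitConfigBuilder(examples)
                  .set_custom_predict_fn(predict_fn)
                  .set_label_vocab(['denied', 'approved']))
WitWidget(config_builder, height=720)
```

Once the widget renders, WIT lets us slice the examples by a feature such as age, edit individual datapoints, and compare outcomes across groups, which is how it surfaces potential bias.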
Developers of machine learning (ML) systems focus on accuracy and fairness. WIT takes us directly to the center of pragmatic human-machine decision-making. In Chapter 2, White Box XAI for AI Bias and Ethics, we discovered the difficulty each person experiences when trying to make the right decision. MIT's Moral Machine experiment brought us to the core of ML ethics, where we represented life-and-death decision-making by both humans and autonomous vehicles.
The PAIR team is dedicated to people-centric AI systems. Human-centered AI systems will take us beyond...