Google’s new What-if tool to analyze Machine Learning models and assess fairness without any coding

  • 3 min read
  • 12 Sep 2018


Google’s PAIR (People + AI Research) team has come out with a new tool called “What-If”. It is a new feature of the open-source TensorBoard web application that allows users to analyze an ML model without writing any code, and it provides an interactive visual interface for exploring the model’s results.

The “What-If” tool comes packed with two major features: counterfactual analysis, and performance and algorithmic fairness analysis.

Let’s have a look at these two features.

Counterfactuals


The What-If tool allows you to compare a datapoint to the most similar point for which your model predicts a different result. Such points are known as “counterfactuals”.

It lets you edit a datapoint by hand and explore how the model’s prediction changes. In the figure below, the What-If tool is used on a binary classification model that predicts whether a person’s income is more than $50k, based on public census data from the UCI census dataset.

[Figure: Comparing counterfactuals]

This is a prediction task ML researchers commonly use when analyzing algorithmic fairness. Here, the model predicted that the selected person’s income is more than $50k. The tool then automatically locates the most similar person in the dataset for whom the model predicted earnings of less than $50k, and compares the two cases side by side.
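The counterfactual search itself is conceptually simple: among all datapoints the model classifies differently, pick the one closest to the selected point. A minimal sketch in plain Python/NumPy, assuming a fitted classifier with a scikit-learn-style predict method and a numeric feature matrix (hypothetical stand-ins, not the What-If tool’s internals):

```python
import numpy as np

def find_counterfactual(model, X, idx):
    """Return the index of the datapoint most similar to X[idx]
    for which the model predicts a different class."""
    preds = model.predict(X)            # predicted class for every point
    others = np.where(preds != preds[idx])[0]
    if others.size == 0:
        return None                     # no counterfactual exists
    # L1 distance to the selected datapoint over all features.
    dists = np.abs(X[others] - X[idx]).sum(axis=1)
    return others[np.argmin(dists)]
```

For the census model above, calling find_counterfactual on a person predicted to earn more than $50k would surface the most similar person predicted to earn less, which is exactly the side-by-side comparison shown in the figure.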



Performance and Algorithmic Fairness Analysis


With the What-If tool, exploring the effects of different classification thresholds is also possible, taking into account constraints such as numerical fairness criteria. The figure below presents the results of a smile-detector model trained on the open-source CelebA dataset, which comprises annotated face images of celebrities.

[Figure: Comparing the performance of two slices of data in a smile detection model]


In the figure above, the dataset has been divided by whether the people have brown hair. Each of the two groups has a ROC curve and a confusion matrix for its predictions, along with sliders for setting how confident the model must be before determining that a face is smiling. Here, the What-If tool automatically sets the confidence thresholds for the two groups to optimize for equal opportunity.
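Equal opportunity here means equalizing the true positive rate across the two groups. A minimal sketch of that idea, assuming per-group prediction scores and ground-truth labels as NumPy arrays (hypothetical data; this is not the tool’s actual optimizer):

```python
import numpy as np

def rates(scores, labels, threshold):
    """True positive rate and accuracy at a given score threshold."""
    pred = scores >= threshold
    pos = labels == 1
    tpr = (pred & pos).sum() / max(pos.sum(), 1)
    acc = (pred == pos).mean()
    return tpr, acc

def equal_opportunity_thresholds(sa, la, sb, lb, tol=0.02):
    """Among threshold pairs whose per-group TPRs differ by at most
    `tol`, pick the pair with the best combined accuracy."""
    grid = np.linspace(0.0, 1.0, 101)
    rates_b = [rates(sb, lb, t) for t in grid]
    best, best_acc = (0.5, 0.5), -1.0
    for ta in grid:
        tpr_a, acc_a = rates(sa, la, ta)
        for tb, (tpr_b, acc_b) in zip(grid, rates_b):
            if abs(tpr_a - tpr_b) <= tol and acc_a + acc_b > best_acc:
                best, best_acc = (ta, tb), acc_a + acc_b
    return best
```

Sliding one group’s threshold changes its true positive rate; the search keeps the two rates within tolerance while preferring the most accurate pair, which is what the per-group threshold sliders let you explore interactively.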

Apart from these major features, the What-If tool also offers features such as visualizing your dataset directly using Facets, manually editing examples from your dataset, and automatically generating partial dependence plots (which show how the model’s predictions change as any single feature is varied).
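Partial dependence is simple to compute by hand, which makes the automatic generation a convenience rather than magic. A minimal sketch, assuming a fitted model with a predict method and a NumPy feature matrix (hypothetical names):

```python
import numpy as np

def partial_dependence(model, X, feature_idx, grid):
    """For each value in `grid`, overwrite one feature for every row
    and average the model's predictions: the partial dependence curve."""
    curve = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature_idx] = value   # vary only this feature
        curve.append(model.predict(X_mod).mean())
    return np.array(curve)

# Example: sweep feature 3 over its observed range.
# grid = np.linspace(X[:, 3].min(), X[:, 3].max(), 20)
# curve = partial_dependence(model, X, 3, grid)
```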

Additionally, Google’s PAIR team released a set of demos that use pre-trained models to illustrate the capabilities of the What-If Tool. These include detecting misclassifications (a multiclass classification model), assessing fairness in binary classification (an image classification model), and investigating model performance across different subgroups (a regression model).

“We look forward to people inside and outside of Google using this tool to better understand ML models and to begin assessing fairness,” says the PAIR team.

For more information on the What-If tool, be sure to check out the official Google AI blog.
