Machine learning has brought with it a vast new ecosystem of techniques and infrastructure, and we are only beginning to understand its full capabilities. Alongside these exciting innovations, however, some deeply concerning problems are emerging: forms of bias, stereotyping, and unfair treatment have been found in computer vision and object recognition systems, as well as in natural language processing models and word embeddings.
The Conference on Fairness, Accountability, and Transparency (FAT), scheduled for February 23 and 24 this year in New York, is an annual conference dedicated to bringing together the theory and practice of fair and interpretable Machine Learning, Information Retrieval, NLP, Computer Vision, Recommender Systems, and other technical disciplines. This year's program includes 17 peer-reviewed papers and 6 tutorials from leading experts in the field. The two-day conference is organized into multiple sessions; Session 3, on Saturday, February 24, focuses on fairness in computer vision and NLP. In this article, we give our readers a peek into the three papers that have been selected for presentation in Session 3.
You can also check out Session 1 and Session 2, in case you’ve missed them.
What is the paper about?
The paper examines substantial disparities in the accuracy of gender classification systems when classifying darker-skinned and lighter-skinned females and males. The authors evaluate the bias present in automated facial analysis algorithms and datasets with respect to phenotypic subgroups. Using the dermatologist-approved Fitzpatrick Skin Type classification system, they characterize the gender and skin type distribution of two facial analysis benchmarks, IJB-A and Adience. They also evaluate three commercial gender classification systems using a new benchmark balanced by gender and skin type.
Key takeaways
- The paper measures the accuracy of three commercial gender classification algorithms, from Microsoft, IBM, and Face++, on the new Pilot Parliaments Benchmark, which is balanced by gender and skin type.
- On annotating the dataset with the Fitzpatrick skin type classification system and testing gender classification performance on four intersectional subgroups (a rough sketch of this kind of per-subgroup audit follows this list), they found:
- All classifiers perform better on male faces than on female faces (8.1% − 20.6% difference in error rate)
- All classifiers perform better on lighter faces than darker faces (11.8% − 19.2% difference in error rate)
- All classifiers perform worst on darker female faces (20.8% − 34.7% error rate)
- Microsoft and IBM classifiers perform best on lighter male faces (error rates of 0.0% and 0.3% respectively)
- Face++ classifiers perform best on darker male faces (0.7% error rate)
- The maximum difference in error rate between the best and worst classified groups is 34.4%
- They encourage further work to see if the substantial error rate gaps on the basis of gender, skin type and intersectional subgroup revealed in this study of gender classification persist in other human-based computer vision tasks as well.
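To make the evaluation setup concrete, here is a minimal sketch of how such an intersectional error-rate audit could be computed, assuming you already have per-image gender labels, binned Fitzpatrick skin types, and classifier predictions. The column names and toy data below are purely illustrative and are not the authors' code or results.

```python
import pandas as pd

# Illustrative records: each row is one face image with its true gender,
# its binned Fitzpatrick skin type (lighter = types I-III, darker = IV-VI),
# and the gender predicted by a commercial classifier. Toy data only.
records = pd.DataFrame({
    "true_gender": ["female", "female", "male", "male", "female", "male"],
    "skin_type":   ["darker", "lighter", "darker", "lighter", "darker", "lighter"],
    "predicted":   ["male",   "female",  "male",   "male",    "male",   "male"],
})

# Error rate for each intersectional subgroup (skin type x gender).
records["error"] = records["true_gender"] != records["predicted"]
subgroup_error = (
    records.groupby(["skin_type", "true_gender"])["error"]
    .mean()
    .rename("error_rate")
)
print(subgroup_error)

# Gap between the best- and worst-classified subgroups.
print("max error-rate gap:", subgroup_error.max() - subgroup_error.min())
```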
What is the paper about?
The paper studies gender stereotypes and cases of bias in the Hindi movie industry (Bollywood) and proposes an algorithm to remove these stereotypes from text. The authors analyze movie plots and posters for all movies released since 1970. Gender bias is detected by semantic modeling of plots at the sentence and intra-sentence level. Different features, such as occupations, introductions, associated actions, and descriptions, are captured to show the pervasiveness of gender bias and stereotyping in movies. The authors then develop an algorithm to generate debiased stories: it extracts gender-biased graphs from unstructured text in movie plots and de-biases these graphs to generate plausible unbiased stories.
Key takeaways
- The analysis is performed at the sentence and multi-sentence level, and uses word embeddings augmented with a context vector to study the bias in the data.
- Observation of the data showed that, when analyzing occupations, higher-level roles are assigned to male characters while lower-level roles are assigned to female characters. A similar trend was observed for centrality: female characters were less central to the plot than their male counterparts.
- When predicting gender from context word vectors, very high accuracy on the test data was achieved even with very little training data, reflecting the substantial amount of bias present in the data.
- The authors also present an algorithm to remove such bias from text. They show that interchanging the gender of a high-centrality male character with that of a high-centrality female character in the plot text leaves the story unchanged while removing the bias (a simplified sketch of this gender-interchange idea follows this list).
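The paper's algorithm operates on knowledge graphs extracted from plot text, so the snippet below is only a rough, simplified sketch of the underlying gender-interchange idea applied directly to raw text. The character names and the hand-written swap dictionary are hypothetical.

```python
import re

# Hand-written swap dictionary (illustrative only). Note that naive pronoun
# swapping is ambiguous -- e.g. "her" can map to "him" or "his" -- which is
# one reason the paper works with extracted graphs rather than raw strings.
swap_map = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "his": "her",
    "actor": "actress", "actress": "actor",
    "rahul": "Pooja", "pooja": "Rahul",  # hypothetical high-centrality characters
}

def swap_gender_terms(text: str) -> str:
    """Replace each gendered token with its counterpart, preserving capitalization."""
    pattern = r"\b(" + "|".join(re.escape(word) for word in swap_map) + r")\b"

    def replace(match):
        word = match.group(0)
        swapped = swap_map[word.lower()]
        return swapped.capitalize() if word[0].isupper() else swapped

    return re.sub(pattern, replace, text, flags=re.IGNORECASE)

plot = "Rahul is a surgeon. He mentors Pooja, who dreams of assisting him."
print(swap_gender_terms(plot))
# -> Pooja is a surgeon. She mentors Rahul, who dreams of assisting her.
```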
What is the paper about?
This paper highlights the knowledge gap that exists between data scientists studying NLP and policymakers advocating for the wide adoption of automated social media analysis and moderation. It urges policymakers to understand the capabilities and limits of NLP before endorsing or adopting automated content analysis tools, particularly for making decisions that affect fundamental rights or access to government benefits. Drawing on existing research, it explains the capabilities and limitations of text classifiers for social media posts and other online content. The paper aims to help researchers and technical experts address the gaps in policymakers' knowledge about what is possible with automated text analysis.
Key takeaways
The authors have provided an overview of how NLP classifiers work and identified five key limitations of these tools that must be communicated to policymakers:
- NLP classifiers require domain-specific training and cannot be applied with the same reliability across different domains (illustrated in the sketch after this list).
- NLP tools can amplify social bias reflected in language and are likely to have lower accuracy for minority groups.
- Accurate text classification requires clear, consistent definitions of the type of speech to be identified. Policy debates around content moderation and social media mining tend to lack such precise definitions.
- The accuracy achieved in NLP studies does not warrant widespread application of these tools to social media content analysis and moderation.
- Text filters remain easy to evade and fall far short of humans' ability to parse meaning from text.
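As a toy illustration of the first limitation, the scikit-learn sketch below trains a sentiment classifier on movie-review-style sentences and then applies it to social media posts. The data is invented purely for illustration and is not from the paper; the point is simply that accuracy measured in one domain says little about behavior in another.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, hand-made "movie review" training set (illustrative only).
train_texts = [
    "a brilliant, moving film", "terrible plot and wooden acting",
    "the director delivers a masterpiece", "a dull, forgettable movie",
]
train_labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

# Out-of-domain inputs: slang and platform-specific vocabulary share almost
# no features with the training data, so these predictions are essentially
# guesses -- exactly the domain-transfer problem the paper warns about.
social_posts = ["this update is fire ngl", "smh my feed is broken again"]
print(clf.predict(social_posts))
```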
The paper concludes with recommendations for NLP researchers to bridge the knowledge gap between technical experts and policymakers, including:
- Clearly describe the domain limitations of NLP tools.
- Increase development of non-English training resources.
- Provide more detail and context for accuracy measures (see the reporting sketch after this list).
- Publish more information about definitions and instructions provided to annotators.
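On the accuracy-reporting recommendation, a per-class breakdown conveys much more than a single overall number. The sketch below uses scikit-learn's classification_report on invented labels for a hypothetical "harassment" classifier; it is not taken from the paper.

```python
from sklearn.metrics import classification_report

# Invented ground truth and predictions for a hypothetical harassment
# classifier. Overall accuracy is 80%, but the rare positive class is
# recalled only half the time -- the kind of detail a single accuracy
# figure hides from policymakers.
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 0, 0, 1, 0, 1]

print(classification_report(
    y_true, y_pred,
    target_names=["not harassment", "harassment"],
))
```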
Don’t miss our coverage of Session 4 and Session 5 on Fair Classification, FAT recommenders, and more.