Tailoring bias and fairness measures across use cases
The process of choosing bias and fairness metrics for a use case can follow the same flow as choosing general model performance evaluation metrics, as introduced in Chapter 10, Exploring Model Evaluation Methods, in the Engineering the base model evaluation metric section. So, be sure to check that topic out! However, bias and fairness have unique aspects that call for additional heuristic recommendations. Earlier, we explored recommendations for metrics that belong to the same metric group. Now, let's look at general recommendations that span the four metric groups:
- Equal representation is always desired when a sensitive or protected attribute is present. So, when you see such attributes, be sure to apply equal representation-based metrics to both your data and your model's predictions, as shown in the sketch below. Examples include race, gender, religion, sexual orientation, disability, age, socioeconomic status, and political affiliation.
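
To make the data-level and model-level checks concrete, here is a minimal sketch of an equal representation check. It assumes a pandas DataFrame with a hypothetical sensitive attribute column (`gender`) and a hypothetical binary prediction column (`prediction`); the column names, toy data, and function names are illustrative only, not prescribed by any particular fairness library:

```python
# A minimal sketch of equal representation checks on data and model outputs.
# Column names ("gender", "prediction") and the toy data are assumptions
# made for illustration.
import pandas as pd


def representation_rates(df: pd.DataFrame, attribute: str) -> pd.Series:
    """Share of each group in the data; equal representation means
    these shares are roughly uniform across groups."""
    return df[attribute].value_counts(normalize=True)


def positive_prediction_rates(
    df: pd.DataFrame, attribute: str, prediction_col: str = "prediction"
) -> pd.Series:
    """Rate of positive model predictions per group; large gaps between
    groups signal unequal representation in the model's behavior."""
    return df.groupby(attribute)[prediction_col].mean()


# Toy data for illustration only.
df = pd.DataFrame({
    "gender": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "prediction": [1, 0, 1, 0, 0, 1, 0, 0],
})

print(representation_rates(df, "gender"))       # data-level check
print(positive_prediction_rates(df, "gender"))  # model-level check
```

On the toy data, the data-level check flags that group B makes up a larger share of the dataset, while the model-level check shows group A receiving positive predictions at a much higher rate, so both views are worth inspecting whenever a sensitive attribute is present.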