Creating fair models
Fairness is a complex concept, often used to mean the equal treatment of different groups: we want a trained model to perform equally well for each group. Working toward fairness starts with assessing whether a model treats groups unfairly. If it does, we can retrain the model with constraints that push its performance to be more equal across groups.
Identifying unfairness in models
Imagine that we train a model to predict whether high school students will succeed if they go on to study at a university. The model's predictions may influence a student's decision to apply. Because this is an important life decision, we want the model to predict equally accurately for female and male students. We may also want to assess whether it predicts equally accurately for students from different minority groups.
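To make this concrete, the sketch below evaluates a candidate model per group rather than only in aggregate. It assumes the open-source Fairlearn library, which is one common choice for this kind of assessment (the text above doesn't prescribe a specific tool), and an invented toy dataset: the feature names gpa and test, the label admitted, and the sensitive feature sex are all hypothetical.

```python
# Minimal sketch of per-group model evaluation, assuming the Fairlearn
# library and a hypothetical student dataset (all names illustrative).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

# Hypothetical data: two features, a binary label, and a sensitive feature.
data = pd.DataFrame({
    "gpa":      [3.9, 2.1, 3.5, 2.8, 3.2, 1.9, 3.7, 2.4],
    "test":     [88, 52, 79, 65, 71, 48, 90, 58],
    "sex":      ["F", "F", "M", "M", "F", "M", "F", "M"],
    "admitted": [1, 0, 1, 0, 1, 0, 1, 0],
})
X, y, sex = data[["gpa", "test"]], data["admitted"], data["sex"]

model = LogisticRegression().fit(X, y)
y_pred = model.predict(X)

# Compute the same metrics per group instead of only overall.
mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y,
    y_pred=y_pred,
    sensitive_features=sex,
)
print(mf.overall)                              # aggregate performance
print(mf.by_group)                             # performance per sex
print(mf.difference(method="between_groups"))  # largest gap between groups
```

A large gap in per-group accuracy or selection rate between female and male students would flag the kind of unfairness described above and suggest retraining the model with fairness constraints.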
Depending on the use case and the features you include when training a model, you may wish to identify so...