Evaluating the bias and fairness of a deep learning model
In this practical example, we will explore the infamous real-world use case of face recognition; the next section builds on it for the practical implementation of bias mitigation. The basis of face recognition is to generate feature vectors that can be used for KNN-based classification, so that new faces don't require additional network training. In this example, we will train a classification model and evaluate it using traditional classification accuracy-based metrics; we won't demonstrate the recognition part of the use case, which is what allows unknown facial identity classes to be handled.
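To make the mechanism concrete, here is a minimal sketch of KNN-based classification over face embeddings. The embeddings below are synthetic stand-ins for the feature vectors a trained network would produce; the shapes and identity counts are illustrative assumptions, not part of this example's actual pipeline.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in: each identity's embeddings cluster around
# an identity-specific center in a 128-dimensional feature space.
n_identities, samples_per_id, dim = 5, 20, 128
centers = rng.normal(size=(n_identities, dim))
X = np.vstack([c + 0.1 * rng.normal(size=(samples_per_id, dim)) for c in centers])
y = np.repeat(np.arange(n_identities), samples_per_id)

# Fit a KNN classifier on the gallery of known embeddings...
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X, y)

# ...then classify a new face's embedding without retraining any network:
probe = centers[2] + 0.1 * rng.normal(size=dim)
pred = knn.predict(probe.reshape(1, -1))[0]
print(pred)
```

The key point is that once the network produces well-separated embeddings, enrolling a new identity only means adding its vectors to the KNN gallery, which is why the recognition variant of the task can handle identity classes the network never trained on.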
The goal here is to ensure that the resulting facial classification model has low gender bias. We will be using a publicly available facial dataset called BUPT-CBFace-50, which offers diverse coverage of facial images spanning different facial expressions, poses...