Mitigating bias in vision and language models
Now that you’ve learned about detecting bias in your vision and language models, let’s explore methods to mitigate it. Generally, this revolves around updating your dataset in various ways, whether through sampling, augmentation, or generative methods. We’ll also look at techniques to apply during the training process itself, such as fair loss functions. One simple starting point, group-balanced sampling, is sketched right after this paragraph.
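Here is a minimal sketch of the resampling idea in Python, using PyTorch's WeightedRandomSampler to draw underrepresented groups more often so that each group contributes roughly equally to a training epoch. The toy tensors, group counts, and batch size are hypothetical placeholders for your own data, not part of any specific study or library recipe.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Hypothetical toy dataset: features, task labels, and a group label per example
features = torch.randn(1000, 16)
labels = torch.randint(0, 2, (1000,))
groups = torch.randint(0, 3, (1000,))  # e.g., 3 demographic groups, often imbalanced

# Weight each example by the inverse frequency of its group,
# so underrepresented groups are sampled more often
group_counts = torch.bincount(groups).float()
example_weights = 1.0 / group_counts[groups]

sampler = WeightedRandomSampler(
    weights=example_weights,
    num_samples=len(example_weights),
    replacement=True,  # oversample small groups rather than truncating large ones
)

dataset = TensorDataset(features, labels, groups)
loader = DataLoader(dataset, batch_size=32, sampler=sampler)

# Each batch now contains roughly balanced group membership in expectation
for x, y, g in loader:
    pass  # your training step goes here
```

The same inverse-frequency weights can instead be passed to a per-example loss (a simple form of a fair loss function), trading resampling for reweighting; both push the optimizer to stop underfitting the smaller groups.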
As you are well aware by now, there are two key training phases to stay on top of: the first is pretraining, and the second is fine-tuning, or transfer learning (TL). In terms of bias, a critical question is how much bias transfer your models exhibit. That is to say, if your pretrained model was built on a biased dataset, does that bias carry over into your new model after fine-tuning?
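One way to make the bias-transfer question concrete is to compute the same group fairness metric twice: once on predictions from a probe over the frozen pretrained features, and once on the fine-tuned model. If the gap persists after fine-tuning on balanced data, bias has transferred. The helper below is a hypothetical sketch; the accuracy-gap metric and the variable names in the usage comments are our own illustrative choices, not taken from any particular study.

```python
import torch

def group_accuracy_gap(preds: torch.Tensor,
                       labels: torch.Tensor,
                       groups: torch.Tensor) -> float:
    """Max difference in accuracy across groups (0.0 = perfectly even)."""
    accs = []
    for g in groups.unique():
        mask = groups == g
        accs.append((preds[mask] == labels[mask]).float().mean().item())
    return max(accs) - min(accs)

# Hypothetical usage, comparing the gap before and after fine-tuning:
#   gap_pre  = group_accuracy_gap(pretrained_probe_preds, labels, groups)
#   gap_post = group_accuracy_gap(finetuned_preds, labels, groups)
# If gap_post stays close to gap_pre despite a balanced fine-tuning set,
# that is evidence of bias transferring from the pretraining data.
```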
A research team out of MIT published an interesting study on the effects of bias transfer in...