Measuring and monitoring for bias
Let’s look at one of these frameworks – IBM AI Fairness 360 (https://github.com/Trusted-AI/AIF360). The framework is built around designating protected attributes – variables, such as gender, that can be linked to bias – and then measuring how outcomes differ between the groups those attributes define. So, let’s dive into an example of how to calculate bias for a dataset. Since bias is often associated with gender or similar attributes, we need a dataset that contains such an attribute. None of the datasets used so far in this book contains one, so we need to find another.
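Before turning to the framework itself, the underlying idea can be sketched with plain pandas: group the data by the protected attribute and compare the rate of the favorable outcome in each group. The passenger records below are hypothetical values invented for illustration; the ratio of the two rates is one common bias measure (disparate impact), where 1.0 means no disparity.

```python
import pandas as pd

# Hypothetical toy records: "sex" is the protected attribute,
# "survived" is the outcome (1 = favorable).
df = pd.DataFrame({
    "sex":      ["female"] * 4 + ["male"] * 6,
    "survived": [1, 1, 1, 0] + [1, 0, 0, 0, 0, 0],
})

# Favorable-outcome rate per group of the protected attribute.
rates = df.groupby("sex")["survived"].mean()

# Disparate impact: ratio of the unprivileged group's rate to the
# privileged group's rate; 1.0 would indicate no disparity.
disparate_impact = rates["male"] / rates["female"]
print(rates)
print(f"disparate impact: {disparate_impact:.3f}")
```

This is only a minimal sketch of the concept; AI Fairness 360 packages this kind of comparison (and many more metrics) behind a consistent API.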
Let’s take the Titanic survival dataset and check whether there was any bias in survival rates between male and female passengers. First, we need to install the IBM AI Fairness 360 framework:
pip install aif360
Then, we can start creating a program that will check for bias. We need to import the appropriate libraries and create the data. In this example, we’ll create...