Summary
In this chapter, you learned more about the concept of fairness in the machine learning era, along with the metrics, definitions, and challenges involved in assessing it. We discussed example proxies for sensitive attributes such as sex and race, and possible sources of bias, such as data collection and model training. You also learned how to use Python libraries for model explainability and fairness to assess and improve fairness in your models, and to avoid biases that would not only be unethical but could also have legal and financial consequences for your organization.
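As a quick recap of fairness assessment, the sketch below (illustrative only, not taken from the chapter's code) computes one of the simplest group fairness metrics, the demographic parity difference, for a hypothetical classifier's predictions; the function name, predictions, and group labels are all assumptions for the example.

```python
# Illustrative sketch: demographic parity difference for binary predictions.
# A value near 0 means the positive-prediction rate is similar across groups.

def demographic_parity_difference(y_pred, sensitive):
    """Largest gap in positive-prediction rates between sensitive groups."""
    rates = []
    for group in set(sensitive):
        preds = [p for p, s in zip(y_pred, sensitive) if s == group]
        rates.append(sum(preds) / len(preds))  # positive rate for this group
    return max(rates) - min(rates)

# Hypothetical predictions and a binary sensitive attribute (e.g., sex):
y_pred    = [1, 0, 1, 1, 0, 1, 0, 0]
sensitive = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity_difference(y_pred, sensitive))  # 0.75 - 0.25 = 0.5
```

Libraries such as Fairlearn and AIF360, mentioned in this context, provide production-ready versions of this and many other fairness metrics.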
In the next chapter, you will learn about test-driven development and concepts such as unit and differential testing. We will also talk about machine learning experiment tracking and how it helps us avoid issues during model training, testing, and selection.