Bias, Explainability, Fairness, and Lineage
Now that we have learned the steps required to build and deploy models in Google Cloud and to automate the entire machine learning (ML) model development life cycle, it’s time to dive into more advanced concepts that are fundamental to developing and maintaining high-quality models.
In addition to providing predictions that are as accurate as possible for a given use case, our models must produce predictions that are as fair as possible and do not exhibit bias or prejudice against any individual or demographic group.
The topics of bias, fairness, and explainability are at the forefront of ML research today. This chapter discusses these concepts in detail and explains how to effectively incorporate these concepts into our ML workloads. Specifically, we will cover the following topics in this chapter:
- An overview of bias, fairness, and explainability in artificial intelligence...