Summary
By using the Responsible AI Toolbox SDK, we can analyze data for fairness and errors and inspect decision trees to understand how a model makes its decisions. Note that this field is still maturing: the SDK is under active development, so its functionality will change and new features will be added over time. At the time of writing, we tested fairness assessment with LightGBM, XGBoost, and PyTorch models. The Toolbox lets us open up black-box models to see how their decisions are made, and helps us produce output that is fair and unbiased.
In the next chapter, we will learn how to productionize ML models.