Establishing trust – model interpretability and transparency in automated ML
Establishing trust in the model trained by automated ML can seem like a challenging value proposition. Explaining to the business leaders, auditors, and stakeholders responsible for automated decision management that they can trust an algorithm to train and build a model for a potentially mission-critical system requires treating that model no differently from a manually built, "man-made" ML model. Model monitoring and observability requirements do not change based on the technique used to build the model. Reproducible model training and quality measurements, such as validating data, component integration, model quality, bias, and fairness, are also required as part of any ML development life cycle.
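To make one of these quality gates concrete, here is a minimal sketch of a reproducibility check: train the same pipeline twice with a pinned seed and verify that the quality metric is identical. It assumes scikit-learn; the dataset and estimator are illustrative placeholders, not part of any specific automated ML toolchain:

```python
# A minimal sketch of a reproducibility gate: two training runs with the
# same seed must yield the same score. The dataset and estimator below
# are placeholders for whatever an automated ML run produces.
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

def train_and_score(seed: int) -> float:
    """Train with a pinned seed and return mean cross-validated accuracy."""
    model = GradientBoostingClassifier(random_state=seed)
    return cross_val_score(model, X, y, cv=5).mean()

# A mismatch here signals hidden nondeterminism in the training pipeline,
# which would undermine any audit of the resulting model.
assert train_and_score(seed=42) == train_and_score(seed=42)
```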
Let's explore some of the approaches and techniques we can use to build trust in automated ML models and put governance measures in place.
Feature importance
Feature importance...
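As a minimal, hedged illustration of the idea, feature importance can be estimated model-agnostically with permutation importance: shuffle one feature at a time and measure how much the validation score degrades. The sketch below assumes scikit-learn; the dataset and model are placeholders standing in for the output of an automated ML run:

```python
# A model-agnostic sketch of surfacing feature importance, assuming
# scikit-learn. The dataset and model are placeholders, not the output
# of any specific automated ML service.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a sample dataset and hold out a validation split.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Stand-in for a model produced by an automated ML run.
model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how
# much the validation score drops; larger drops mean more importance.
result = permutation_importance(
    model, X_val, y_val, n_repeats=10, random_state=42
)

# Report the ten features ranked highest by mean importance.
for idx in result.importances_mean.argsort()[::-1][:10]:
    print(f"{X.columns[idx]}: "
          f"{result.importances_mean[idx]:.4f} "
          f"+/- {result.importances_std[idx]:.4f}")
```

Because permutation importance only needs predictions on held-out data, it works regardless of which algorithm the automated ML process selected, which makes it a convenient starting point for explaining such models to stakeholders.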