Explaining AutoML results to your business
To realize business value, your AutoML models must be implemented and used by the business. A common obstacle to implementation is a lack of trust that stems from not understanding how ML works. At the same time, explaining the ins and outs of individual ML algorithms is a poor way to build trust: throwing math symbols and complicated statistics at end users will not work unless they already have a deep background in mathematics.
Instead, use AutoML's built-in explainability. As long as you enable explainability when training models, you can say exactly which features AutoML is using to generate predictions. In general, it's good practice to do the following four things:
- Always enable explainability when training any AutoML model.
- When presenting results to the business, first show performance, then show explainability.
- Rank the features in order of most to least important.
- Drop any unimportant features.
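The ranking step above can be sketched in code. This is a minimal, hypothetical illustration using scikit-learn's permutation importance as a stand-in; a real AutoML platform exposes its own explainability output once explainability is enabled at training time, and the dataset, model, and feature names here are placeholders.

```python
# Hypothetical sketch: rank features from most to least important so they
# can be presented to the business after the performance numbers.
# Uses scikit-learn permutation importance as a stand-in for an AutoML
# platform's explainability output; all names below are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in data: 5 features, of which 3 carry signal.
X, y = make_classification(
    n_samples=200, n_features=5, n_informative=3, random_state=0
)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)

feature_names = [f"feature_{i}" for i in range(X.shape[1])]
ranked = sorted(
    zip(feature_names, result.importances_mean),
    key=lambda pair: pair[1],
    reverse=True,  # most important first
)
for name, score in ranked:
    print(f"{name}: {score:.3f}")
```

Presenting a sorted list like this, rather than raw importance scores in arbitrary order, makes it easy for a business audience to see at a glance which inputs drive the predictions and which contribute little.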