Introducing model explicability
When models learn in an online fashion, they relearn repeatedly. This relearning happens automatically, and it is often impossible for a human user to keep an eye on the models continuously. Continuous monitoring would also go against the main goal of ML, which is to let machines (or models) take over rather than requiring constant human intervention.
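To make the setting concrete, here is a minimal sketch of automatic relearning on a stream, assuming scikit-learn's SGDClassifier as a stand-in for any incrementally trainable model; the synthetic data and the batch loop are purely illustrative:

```python
# Minimal sketch: a model that relearns automatically on each incoming batch,
# with no human inspecting it between updates (illustrative data, not from the text).
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(random_state=42)
classes = np.array([0, 1])  # partial_fit needs the full label set up front

rng = np.random.default_rng(42)
for batch in range(100):
    # Each new batch of observations triggers an incremental relearning step.
    X = rng.normal(size=(32, 5))
    y = (X[:, 0] + rng.normal(scale=0.1, size=32) > 0).astype(int)
    model.partial_fit(X, y, classes=classes)
```

After a few hundred such updates, the model a user would need to explain is no longer the model that was originally inspected and deployed.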
When models learn or relearn, data scientists generally work through programmatic model-building interfaces. Imagine a random forest in which hundreds of decision trees act at the same time to predict a target variable for a new observation. Even printing out and inspecting all of those decisions would be an enormous undertaking.
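As a rough illustration of that scale, the following sketch fits a scikit-learn RandomForestClassifier on synthetic data (the dataset and hyperparameters are assumptions, chosen only for illustration) and counts how many decision nodes a human would have to read through:

```python
# Illustration of why manually inspecting a forest is impractical:
# count the decision nodes across all trees of a fitted random forest.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

total_nodes = sum(tree.tree_.node_count for tree in forest.estimators_)
print(f"{len(forest.estimators_)} trees, {total_nodes} decision nodes in total")
```

A forest like this easily contains tens of thousands of nodes, which is why explicability tooling, rather than manual inspection, is needed.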
Model explicability has become a major topic in recent advances in ML. When black-box models are thrown at data science use cases without scrutiny, serious mistakes can occur. One example is self-driving cars that were trained on a biased sample containing...