Understanding the key principles of monitoring an ML system
Building trust into AI systems is vital, given the growing demand for products that are data-driven and that adapt to changing environments and regulatory frameworks. One reason ML projects fail to bring value to businesses is a lack of trust and transparency in their decision making. Many black-box models are good at reaching high accuracy, but they fall short when it comes to explaining the reasons behind the decisions they make. At the time of writing, news stories keep surfacing that raise these concerns about trust and explainability, as shown in the following figure:
This image showcases these concerns in important areas of real life. Let's look at how they translate into some key aspects of monitoring an ML system: model drift, model bias, model transparency, and model compliance...
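Before we go deeper, it may help to make the first of these aspects, model drift, concrete. The following is a minimal sketch of one common detection approach: comparing a feature's training-time distribution against its production distribution with a two-sample Kolmogorov-Smirnov test. The function name, threshold, and synthetic data here are illustrative assumptions, not part of this chapter's tooling.

```python
# A minimal sketch of data-drift detection (illustrative only):
# flag drift when a feature's live distribution diverges from its
# training-time reference distribution.
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference: np.ndarray, current: np.ndarray,
                 alpha: float = 0.05) -> bool:
    """Return True if the two samples likely come from different
    distributions, per a two-sample Kolmogorov-Smirnov test."""
    statistic, p_value = ks_2samp(reference, current)
    return p_value < alpha  # small p-value -> distributions likely differ

# Hypothetical example: training-time feature values vs. production values
rng = np.random.default_rng(seed=42)
train_feature = rng.normal(loc=0.0, scale=1.0, size=1_000)  # reference window
live_feature = rng.normal(loc=0.5, scale=1.0, size=1_000)   # shifted in production

print("Drift detected:", detect_drift(train_feature, live_feature))
```

In practice, a check like this would run on a schedule over sliding windows of production data, with alerts feeding into the broader monitoring workflow discussed in this section.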