Model evaluation and interpretability metrics
Acquiring data and training ML models is a good start toward creating business value. After training, it is vital to measure a model's performance and to understand why and how it makes the predictions it does. Hence, model evaluation and interpretability are essential parts of the MLOps workflow: they enable us to understand and validate ML models and to determine the business value they will produce. Because there are several types of ML models, there is a correspondingly wide range of evaluation techniques.
Looking back at Chapter 2, Characterizing Your Machine Learning Problem, where we studied various types of models categorized as learning models, hybrid models, statistical models, and HITL (human-in-the-loop) models, we will now discuss metrics for evaluating these models. Figure 5.1 shows some of the key model evaluation and interpretability techniques. These have become standard in research...
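As a minimal, illustrative sketch (not an example from this chapter), the snippet below shows how standard evaluation metrics for a classifier might be computed with scikit-learn; the synthetic dataset and logistic regression model are placeholder assumptions used only to demonstrate the metric calls.

```python
# Illustrative sketch: computing common classification evaluation metrics.
# The dataset and model below are hypothetical placeholders.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Synthetic binary classification data stands in for a real dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Train a simple baseline model and predict on the held-out test set.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = model.predict(X_test)

# Standard evaluation metrics computed on the test set.
print("Accuracy :", accuracy_score(y_test, y_pred))
print("Precision:", precision_score(y_test, y_pred))
print("Recall   :", recall_score(y_test, y_pred))
print("F1 score :", f1_score(y_test, y_pred))
```

Evaluating on a held-out test set, as sketched here, keeps the performance estimate independent of the data the model was trained on; the sections that follow cover the individual metrics and interpretability techniques in more detail.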