Evaluating a model continuously
We can monitor model performance in multiple ways, including:
- Computing evaluation metrics from the model's predictions and the corresponding ground truths, and watching for performance drops
- Comparing the input feature and output distributions of the training dataset with the distributions observed at prediction time
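One common way to carry out the second approach, comparing training-time and prediction-time distributions, is the Population Stability Index (PSI), which bins a feature and measures how much the serving distribution has shifted away from the training one. The following is a minimal sketch in pure Python; the function name, bin count, and the conventional PSI thresholds (0.1 for "no significant shift", 0.25 for "significant shift") are illustrative assumptions, not part of the text above.

```python
import math
from typing import Sequence

def psi(train: Sequence[float], live: Sequence[float], bins: int = 10) -> float:
    """Population Stability Index between training and serving samples of one feature."""
    lo, hi = min(train), max(train)
    width = (hi - lo) / bins or 1.0

    def fractions(sample: Sequence[float]) -> list:
        counts = [0] * bins
        for x in sample:
            # clamp out-of-range serving values into the edge bins
            i = min(max(int((x - lo) / width), 0), bins - 1)
            counts[i] += 1
        n = len(sample)
        # small floor avoids log(0) for empty bins
        return [max(c / n, 1e-6) for c in counts]

    p, q = fractions(train), fractions(live)
    return sum((a - b) * math.log(a / b) for a, b in zip(p, q))

train = [0.1 * i for i in range(100)]            # training-time feature values
live_ok = [0.1 * i + 0.05 for i in range(100)]   # similar serving distribution
live_drift = [5 + 0.1 * i for i in range(100)]   # shifted serving distribution

print(psi(train, live_ok) < 0.1)      # conventionally: no significant shift
print(psi(train, live_drift) > 0.25)  # conventionally: significant shift
```

In practice the same check would run per feature on a schedule (say, hourly batches of serving data), with the PSI values plotted alongside the performance metrics.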
As a demonstration, we will assess model performance by comparing the predictions against the ground truths using evaluation metrics. The main challenge with this approach is obtaining the ground truth, so collecting it is a major step in continuous evaluation. After a model has been deployed, we take the following steps to continuously evaluate its performance:
- Collect the ground truth.
- Plot the metrics on a live dashboard.
- Select the threshold for the metric.
- If the metric value crosses the threshold, raise an alert so the model can be investigated and, if necessary, retrained.
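The steps above can be sketched as a small evaluator that pairs each incoming ground truth with its prediction, maintains a rolling metric (accuracy here, for simplicity), and flags a threshold breach. The class name, window size, and threshold below are hypothetical choices for illustration; a real system would push the metric to a dashboard instead of printing it.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class ContinuousEvaluator:
    """Rolling accuracy over the most recent labelled predictions (illustrative sketch)."""
    threshold: float   # alert when accuracy falls below this value
    window: int = 100  # number of recent samples to evaluate over
    _outcomes: list = field(default_factory=list)

    def record(self, prediction, ground_truth) -> None:
        # Step 1: collect the ground truth as it arrives and pair it with the prediction.
        self._outcomes.append(1.0 if prediction == ground_truth else 0.0)
        self._outcomes = self._outcomes[-self.window:]

    def accuracy(self) -> float:
        # Step 2: this is the value that would feed a live dashboard.
        return mean(self._outcomes) if self._outcomes else 1.0

    def breached(self) -> bool:
        # Steps 3-4: compare the current metric against the chosen threshold.
        return self.accuracy() < self.threshold

ev = ContinuousEvaluator(threshold=0.8)
for pred, truth in [(1, 1), (0, 0), (1, 0), (1, 1), (0, 1)]:
    ev.record(pred, truth)
print(ev.accuracy())  # 3 correct out of 5 -> 0.6
print(ev.breached())  # True: 0.6 < 0.8, time to alert and investigate
```

The rolling window keeps the metric sensitive to recent behaviour rather than diluting a drop across the model's whole history; its size trades off noise against detection latency.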