Monitoring and validating live performance
We can use monitoring and logging mechanisms during deployment to track the model's performance and detect potential issues. By regularly evaluating the deployed model, we can make sure that it still meets the performance criteria, as well as any other criteria, such as being unbiased, that we defined for it. The information coming out of model monitoring also tells us when the model needs to be updated or retrained (a minimal monitoring sketch appears after the list below). Here are three important concepts that capture how modeling before deployment differs from modeling in production:
- Data variance: The data used for model training and testing goes through data wrangling, with all the cleaning and reformatting that entails. However, the data given to the deployed model – that is, the data coming from the user to the model – might not go through the same processing steps, which causes variation in the model's results in production (see the pipeline sketch after this list).
- Data drift: Data drift occurs when the statistical properties of the data arriving in production change over time, so that its distribution no longer matches the distribution of the data the model was trained on. Drifted inputs make the model's predictions less reliable and are a common trigger for retraining (see the drift-check sketch after this list).
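To make the monitoring idea concrete, here is a minimal sketch in Python that tracks accuracy over a sliding window of recent predictions and flags degradation. The class name, window size, and threshold are hypothetical choices for illustration; a real deployment would feed this from the serving logs and wire the flag to an alerting system:

```python
from collections import deque

import numpy as np

class PerformanceMonitor:
    """Tracks live accuracy over a sliding window and flags degradation."""

    def __init__(self, window_size=500, min_accuracy=0.85):
        # Keeps only the most recent correctness results.
        self.window = deque(maxlen=window_size)
        self.min_accuracy = min_accuracy

    def log(self, prediction, true_label):
        """Record one prediction once its ground-truth label becomes available."""
        self.window.append(prediction == true_label)

    def check(self):
        """Return the current windowed accuracy and whether it breaches the threshold."""
        if not self.window:
            return None, False
        accuracy = float(np.mean(self.window))
        return accuracy, accuracy < self.min_accuracy

monitor = PerformanceMonitor(window_size=500, min_accuracy=0.85)
# In the serving loop, once labels arrive:
#   monitor.log(pred, label)
#   acc, degraded = monitor.check()
#   if degraded: trigger an alert or a retraining job
```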
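One common way to reduce the data variance problem is to package the preprocessing steps and the model together, so the deployed model never receives data that skipped the wrangling it was trained with. The following sketch uses scikit-learn's Pipeline and joblib for illustration; the synthetic data and file name are placeholders:

```python
import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Training time: bundle preprocessing and the model into one pipeline,
# so the transformations are learned and stored together with the model.
X_train = np.random.rand(200, 3)
y_train = np.random.randint(0, 2, size=200)

pipeline = Pipeline([
    ("scaler", StandardScaler()),   # the wrangling step
    ("model", LogisticRegression()),
])
pipeline.fit(X_train, y_train)
joblib.dump(pipeline, "model_pipeline.joblib")

# Serving time: reload the whole pipeline so raw user data passes
# through exactly the same preprocessing before prediction.
serving_pipeline = joblib.load("model_pipeline.joblib")
raw_user_input = np.array([[0.2, 0.7, 0.1]])
prediction = serving_pipeline.predict(raw_user_input)
```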
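Data drift can be checked by comparing the distribution of each feature in production against the training data. Here is a minimal sketch, assuming a continuous feature and using SciPy's two-sample Kolmogorov-Smirnov test; the significance threshold of 0.05 is an arbitrary illustrative choice:

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(train_feature, live_feature, alpha=0.05):
    """Two-sample Kolmogorov-Smirnov test: flags drift when the live
    distribution differs significantly from the training distribution."""
    statistic, p_value = ks_2samp(train_feature, live_feature)
    return p_value < alpha, statistic, p_value

# Synthetic example: the live data has a shifted mean relative to training.
rng = np.random.default_rng(0)
train_values = rng.normal(loc=0.0, scale=1.0, size=1000)
live_values = rng.normal(loc=0.5, scale=1.0, size=1000)

drifted, stat, p = detect_drift(train_values, live_values)
print(f"drift detected: {drifted} (KS statistic={stat:.3f}, p={p:.4f})")
```

In practice such a check would run on a schedule over each input feature, with drift flags feeding back into the retraining decision discussed above.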