Online evaluation
When we do cross-validation, we perform offline evaluation of our model: we train the model on past data, hold out a portion of it, and use that portion only for testing. This is very important, but often not enough to know whether the model will perform well for actual users. This is why we need to constantly monitor the performance of our models online--when the users actually use them. It can happen that a model which looks very good during offline testing does not perform well during online evaluation. There could be many reasons for that--overfitting, poor cross-validation, using the test set too often for checking the performance, and so on.
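As a quick illustration of what offline evaluation looks like in practice, here is a minimal sketch using scikit-learn; the dataset, model, and metric are placeholder assumptions, not specifics from this text:

```python
# A minimal offline-evaluation sketch (assumes scikit-learn is installed;
# the dataset, model, and AUC metric are illustrative choices).
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import roc_auc_score

X, y = load_breast_cancer(return_X_y=True)

# Hold out part of the past data and use it only for the final test.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000)

# Cross-validation on the training part gives the offline performance estimate.
cv_scores = cross_val_score(model, X_train, y_train, cv=5, scoring='roc_auc')
print('offline CV AUC: %.3f' % cv_scores.mean())

# The held-out test set is touched only once, at the very end.
model.fit(X_train, y_train)
y_pred = model.predict_proba(X_test)[:, 1]
print('held-out AUC: %.3f' % roc_auc_score(y_test, y_pred))
```

Note that checking the held-out set repeatedly while tuning the model is exactly the "using the test set too often" problem mentioned above: the estimate stops being an honest proxy for online performance.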
Thus, when we come up with a new model, we cannot just assume it will be better because its offline performance is better; we need to test it on real users.
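One common way to do this is to split incoming users between the old and the new model and compare how they behave. The sketch below shows a deterministic assignment of users to a control or treatment group; the hashing scheme, the experiment name, and the 50/50 split are illustrative assumptions, not a prescribed setup:

```python
# A sketch of assigning real users to the old model (control) or the new
# model (treatment) for an online test. The hash-based bucketing and the
# 50/50 split are assumptions for illustration.
import hashlib

def assign_group(user_id, experiment='new-model-test', treatment_share=0.5):
    """Deterministically map a user to 'control' or 'treatment'."""
    key = ('%s:%s' % (experiment, user_id)).encode('utf-8')
    bucket = int(hashlib.md5(key).hexdigest(), 16) % 1000
    return 'treatment' if bucket < treatment_share * 1000 else 'control'

# The same user always lands in the same group, so their experience is consistent.
for user_id in ['user-1', 'user-2', 'user-3']:
    print(user_id, assign_group(user_id))
```

Because the assignment depends only on the user ID and the experiment name, a returning user keeps seeing the same model, which keeps the comparison between the two groups clean.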
For testing models online, we usually need to come up with a sensible way of measuring performance. There are a lot of metrics we can capture, including simple ones such...