Testing the recommendations
Finally, our machine learning-based recommender system is ready, and it should provide a significant boost in user experience for any bookshop. But before we start advertising it, we should make sure that it's reliable. Remember that we put aside 10% of our dataset for testing purposes. The idea is to compare the recommendations against the actual ratings from the test data and see how closely the two agree; that is, how many of the actual ratings from the dataset were in fact recommended. Depending on the data used for training, you may want to test not only that correct recommendations are made, but also that bad recommendations are not included (that is, that the recommender does not suggest items that received low ratings, indicating a dislike). Since we only used ratings of 8, 9, and 10, we won't check whether low-rated items were recommended. We'll just focus on checking how many of the recommendations are actually part of the user's test data.
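The following is a minimal sketch of this evaluation loop, under a couple of assumptions: a hypothetical `recommend(user_id, n)` function that returns the top-n recommended book IDs for a user, and a `test_ratings` mapping of user IDs to their held-out ratings built from the 10% test split. Adjust the names to match your own pipeline.

```python
def hit_rate(test_ratings, recommend, n=10):
    """Return the fraction of held-out books that appear among the
    top-n recommendations for their user.

    test_ratings: dict mapping user_id -> {book_id: rating}, built from
        the 10% hold-out split (hypothetical structure for illustration).
    recommend: callable (user_id, n) -> iterable of recommended book IDs
        (hypothetical interface standing in for the trained recommender).
    """
    hits = 0
    total = 0
    for user_id, rated_books in test_ratings.items():
        recommended = set(recommend(user_id, n))
        # Only ratings of 8, 9, and 10 were kept, so every held-out book
        # counts as a "liked" item the recommender should ideally surface.
        for book_id in rated_books:
            total += 1
            if book_id in recommended:
                hits += 1
    return hits / total if total else 0.0


# Example usage with made-up data:
# test_ratings = {"user_1": {"book_42": 9, "book_7": 10}}
# print(f"Hit rate: {hit_rate(test_ratings, recommend):.2%}")
```

The resulting metric is a simple hit rate: the closer it is to 1.0, the more of the user's actual high ratings were covered by the recommendations.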