Evaluating accuracy using cross-validation metrics
Cross-validation is an important concept in machine learning. In the previous recipe, we split the data into training and testing datasets. However, to make the evaluation more robust, we need to repeat this process with several different subsets of the data. If we fine-tune the model for only one particular subset, we may end up overfitting it. Overfitting refers to a situation where we tune a model so closely to a dataset that it fails to perform well on unknown data, and we want our machine learning model to perform well on unknown data. In this recipe, we will learn how to evaluate model accuracy using cross-validation metrics.
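A common way to repeat the train/test split over different subsets is k-fold cross-validation, where the data is divided into k folds and each fold takes a turn as the test set while the model is trained on the remaining folds. The following is a minimal sketch of the idea, assuming scikit-learn is available; the Iris dataset, the Gaussian Naive Bayes classifier, and the choice of five folds are illustrative placeholders rather than the exact setup used later in this recipe:

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Load a small example dataset (Iris is only a stand-in here)
X, y = load_iris(return_X_y=True)

# Any classifier could be used; Gaussian Naive Bayes is chosen for brevity
classifier = GaussianNB()

# 5-fold cross-validation: the data is split into five subsets, and each
# subset takes a turn as the test set while the model is trained on the rest
scores = cross_val_score(classifier, X, y, scoring='accuracy', cv=5)

print("Accuracy of each fold:", scores)
print("Mean accuracy: {:.2f}%".format(100 * scores.mean()))

Averaging the per-fold accuracies gives a more stable estimate of how the model is likely to perform on unknown data than a single train/test split would.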
Getting ready
When we are dealing with machine learning models, we...