The type of cross-validation most commonly used is known as k-fold cross-validation, and it involves randomly dividing the sample dataset into k folds of equal (or nearly equal) size.
The learning process is performed iteratively, with different compositions of folds used as the training and testing datasets. In this way, each fold serves in turn as the testing dataset, while the remaining folds form the training dataset:
In practice, the randomly generated folds alternate between the roles of training and testing data, and the iterative process ends when all k folds have been used for both training and testing.
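As a minimal sketch of this procedure, the following example uses scikit-learn's KFold and cross_val_score; the dataset and estimator chosen here are illustrative assumptions, not prescribed by the text:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

# Illustrative dataset and estimator (assumptions for this sketch)
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# Randomly split the samples into k = 5 folds of (roughly) equal size
kfold = KFold(n_splits=5, shuffle=True, random_state=42)

# Each fold is used once as the testing set, while the remaining
# k - 1 folds form the training set; one score is returned per iteration
scores = cross_val_score(model, X, y, cv=kfold)

print("Per-fold accuracy:", scores)
print("Mean accuracy:", scores.mean())
```

Averaging the per-fold scores, as in the last line, gives a single estimate of the generalization error that is less sensitive to any one particular train/test split.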
Since the generalization error obtained at each iteration is different (as the algorithm is trained and tested with different...