In holdout cross-validation, we hold out a percentage of the observations, giving us two datasets: one called the training dataset and the other called the testing dataset. We train the model on the training dataset and use the testing dataset to calculate our evaluation metrics. This is the process of holdout cross-validation.
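As a minimal sketch of this process, assuming scikit-learn is available, we can hold out 30% of the observations with `train_test_split`, fit a model on the remaining 70%, and compute the evaluation metric on the held-out portion (the dataset, model, and split ratio here are illustrative choices, not prescribed by the text):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Hold out 30% of the observations as the testing dataset.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Train on the training dataset only.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Evaluate on the held-out testing dataset.
accuracy = accuracy_score(y_test, model.predict(X_test))
```

The single `accuracy` value is the one evaluation-metric estimate that holdout cross-validation produces.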
The main advantage of holdout cross-validation is that it is very easy to implement and very intuitive.
The problem with this kind of cross-validation is that it provides a single estimate of the model's evaluation metric. This is problematic because the result depends on randomness, both in which observations happen to land in the test set and in models that rely on randomness internally. So in principle, the evaluation metric calculated on the test set can vary a lot from one run to another purely because of random chance. This instability of the single estimate is the main problem with holdout cross-validation.
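To see this instability concretely, one can repeat the holdout procedure with different random splits and compare the resulting metrics; a sketch assuming scikit-learn, with the dataset, model, and number of repetitions chosen for illustration:

```python
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Repeat the holdout split with different random seeds and record
# the test accuracy each time.
scores = []
for seed in range(10):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=seed
    )
    model = DecisionTreeClassifier(random_state=seed).fit(X_tr, y_tr)
    scores.append(accuracy_score(y_te, model.predict(X_te)))

# The spread between the best and worst run shows how much a single
# holdout estimate can vary due to random chance alone.
spread = max(scores) - min(scores)
```

Each entry in `scores` is a valid holdout estimate, yet they generally differ, which is exactly why a single holdout split can be misleading.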