K-Fold Cross-Validation
This technique is one of the most widely recommended approaches for model evaluation. We partition the data into k groups, use k-1 groups for training, and hold out the remaining group for validation. The process is repeated k times, with a different group serving as the validation set in each successive iteration, so that every group is used for validation exactly once. The overall result is the average of the error estimates across the k iterations.
K-fold cross-validation therefore overcomes the drawbacks of the holdout technique by mitigating the risk associated with any single split, since each data point is validated exactly once over the course of the k iterations. The variance of the error estimate is reduced as the value of k increases. The most common values used for k are 5 or 10. The major drawback of this technique is that it trains the model k times (once per iteration), so the total compute time required to train and validate the model is approximately k times that of the holdout technique.
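The procedure described above can be sketched in pure Python. The dataset and the mean-predictor "model" below are illustrative assumptions; any learner with fit/predict semantics could be substituted.

```python
from statistics import mean

def k_fold_cv(xs, ys, k=5):
    """Return the average squared-error estimate across k folds."""
    n = len(xs)
    indices = list(range(n))
    # Split n points into k contiguous groups of near-equal size.
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    errors = []
    start = 0
    for size in fold_sizes:
        val_idx = indices[start:start + size]          # held-out group
        train_idx = indices[:start] + indices[start + size:]  # remaining k-1 groups
        start += size
        # "Train": this toy model just predicts the mean of the training targets.
        prediction = mean(ys[i] for i in train_idx)
        # Validate on the held-out fold with mean squared error.
        fold_error = mean((ys[i] - prediction) ** 2 for i in val_idx)
        errors.append(fold_error)
    # The overall estimate is the average error across the k iterations.
    return mean(errors)

xs = list(range(10))
ys = [float(i + 1) for i in range(10)]
print(k_fold_cv(xs, ys, k=5))  # → 12.75
```

Note that this sketch uses contiguous folds for clarity; in practice the data is usually shuffled before splitting so that each fold is representative of the whole.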