Cross-Validation
Cross-validation is a model validation technique for assessing how well a machine learning model performs on, and generalizes to, an independent dataset. It is also called rotation estimation, because over several repetitions it rotates which portion of the data is used for training and which for validation, with both drawn from the same distribution.
Cross-validation helps us:
Evaluate the robustness of the model on unseen data.
Estimate a realistic range for desired performance metrics.
Mitigate overfitting and underfitting of models.
The general principle of cross-validation is to test the model on the entire dataset over several iterations: the data is partitioned into groups (folds), the majority of the folds are used for training, and the remaining fold is held out for testing. Rotating the held-out fold across iterations ensures the model is eventually tested on every available observation. The final performance metrics are aggregated and summarized from the results of all rotations.
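As a concrete illustration, the sketch below rotates a held-out fold across five splits with scikit-learn's KFold and records one accuracy score per rotation. The iris dataset, the logistic regression model, and the choice of five folds are illustrative assumptions rather than anything prescribed here.

```python
# A minimal k-fold cross-validation sketch (the dataset, model, and
# number of folds are assumptions chosen only for illustration).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold

X, y = load_iris(return_X_y=True)
kfold = KFold(n_splits=5, shuffle=True, random_state=42)

scores = []
for train_idx, test_idx in kfold.split(X):
    # The majority of folds train the model; the held-out fold tests it.
    model = LogisticRegression(max_iter=1000)
    model.fit(X[train_idx], y[train_idx])
    preds = model.predict(X[test_idx])
    scores.append(accuracy_score(y[test_idx], preds))

# Aggregate the per-rotation metrics into a summary.
print("per-fold accuracy:", np.round(scores, 3))
print("mean accuracy:", np.mean(scores))
```

Shuffling before splitting is a common default so that each fold is roughly representative of the full dataset.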
To study whether the model has high bias, we can check the mean (average) of the performance metric across all rotations: a mean that is consistently low signals that the model is underfitting.
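A short follow-up sketch, reusing the assumed setup from the previous example, shows how that aggregated mean can be inspected; cross_val_score is used here purely as a convenience wrapper for the rotation loop above.

```python
# Summarize the rotation results to check for high bias (a sketch that
# reuses the assumed iris dataset and logistic regression model).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)

# A consistently low mean across all rotations suggests underfitting (high bias).
print("mean score across rotations:", scores.mean())
```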