Splitting data for validation or cross-validation and testing
Splitting data into training, validation, and test sets is the accepted standard for model building when the dataset is sufficiently large. The idea behind validation is simple: most algorithms naturally overfit the training data. Here, overfitting means that some of what the model learns reflects idiosyncrasies of that specific dataset (for instance, noise) rather than patterns representative of the population as a whole. So, how do you correct for this? You create a holdout sample, called a validation set, which is scored against during the model-building process to determine whether what is being modeled is signal or noise. This enables hyperparameter tuning, model regularization, early stopping, and more.
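In practice, the split can be as simple as two successive holdout splits. The following is a minimal sketch using scikit-learn's train_test_split; the 60/20/20 ratios, the random seed, and the toy data are illustrative assumptions, not prescriptions:

```python
# Minimal sketch of a train/validation/test split with scikit-learn.
# Ratios (60/20/20), seed, and toy data are illustrative choices.
import numpy as np
from sklearn.model_selection import train_test_split

# Toy data: 1,000 samples, 10 features (stand-ins for a real dataset).
X = np.random.rand(1000, 10)
y = np.random.randint(0, 2, size=1000)

# First, split off the test set (20%), held out until the very end.
X_temp, X_test, y_temp, y_test = train_test_split(
    X, y, test_size=0.20, random_state=42
)

# Then carve a validation set out of the remainder:
# 0.25 of the remaining 80% equals 20% of the original data.
X_train, X_val, y_train, y_val = train_test_split(
    X_temp, y_temp, test_size=0.25, random_state=42
)

print(len(X_train), len(X_val), len(X_test))  # 600 200 200
```

Splitting the test set off first keeps it untouched by any decision made during model building, which is exactly the property the next paragraph relies on.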
The test dataset is an additional holdout, used only at the end of model building, to estimate the model's true performance on unseen data. Having holdout test data is critical for...