Splitting the Dataset
A common mistake when assessing how well a model performs is to calculate the prediction error on the same data the model was trained on and to conclude, from a high accuracy on that training set, that the model performs really well.
Doing so tests the model on data it has already seen: the model has already learned the behavior of the training data through exposure to it, so if asked to predict that same data again it will, unsurprisingly, do well. Worse, the better the performance on the training data, the greater the chance that the model knows the data too well, to the point of having learned the noise and the behavior of outliers as well.
As we saw in the previous chapter, a model that fits its training data this closely has high variance. To get an unbiased estimate of the model's performance, we need to measure its prediction accuracy on data it has not already seen.
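The simplest way to do this is to hold back a portion of the data before training and use it only for evaluation. The snippet below is a minimal sketch of the idea; the use of scikit-learn, the breast-cancer toy dataset, the decision-tree classifier, and the 25% split are illustrative assumptions rather than details from this chapter.

```python
# Minimal sketch (assumes scikit-learn is installed): hold out part of the
# data so accuracy can be measured on examples the model has never seen.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Reserve 25% of the rows as a held-out test set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# An unconstrained decision tree can effectively memorize the training data.
model = DecisionTreeClassifier(random_state=42)
model.fit(X_train, y_train)

print("Training accuracy:", model.score(X_train, y_train))  # optimistic
print("Test accuracy:    ", model.score(X_test, y_test))    # unbiased estimate
```

With an unconstrained tree, the training accuracy is typically at or near 100% while the test accuracy is noticeably lower; that gap is exactly what the held-out set is meant to expose.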