A final general warning – training versus test datasets
I know I said we were going to move on to estimating new models, but let me clarify a concept before that—the difference between the training and the test dataset.
When you estimate a model, you usually have at least two different datasets:
- The training dataset: This is the one on which you actually estimate the model. To be clear, the one to which you apply the `lm()` function, or whatever algorithm you want to employ.
- The testing dataset: This is a separate dataset you use to validate your model's performance. It can also be a new dataset that becomes available after you first estimate your model.
Why are there two different datasets, and why do we need a separate dataset to test our model? The reason is the danger of overfitting: fitting a model that performs very well on the dataset it was estimated on, but underperforms when applied to new data.
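To make the split concrete, here is a minimal sketch in R. It uses simulated data (the text does not name a dataset), a hypothetical 70/30 random split, and root mean squared error as the performance measure; all of these are illustrative choices, not prescriptions.

```r
# Minimal sketch: split data into training and test sets,
# fit with lm() on the training set, then compare errors on both.
set.seed(42)  # for reproducibility of the random split

# Simulated data (purely illustrative)
n <- 200
x <- runif(n)
y <- 2 + 3 * x + rnorm(n, sd = 0.5)
dat <- data.frame(x = x, y = y)

# Random 70/30 split; the exact ratio is a judgment call
train_idx <- sample(seq_len(n), size = 0.7 * n)
train <- dat[train_idx, ]
test  <- dat[-train_idx, ]

# Estimate the model on the training set only
fit <- lm(y ~ x, data = train)

# Root mean squared error on each set
rmse <- function(actual, predicted) sqrt(mean((actual - predicted)^2))
rmse_train <- rmse(train$y, predict(fit, newdata = train))
rmse_test  <- rmse(test$y,  predict(fit, newdata = test))

c(train = rmse_train, test = rmse_test)
```

If the test error is dramatically worse than the training error, that gap is the signature of overfitting: the model has learned quirks of the training data rather than the underlying relationship.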
To understand it, you can think of your high school or university tests and...