A common issue in machine learning is overfitting. The term generally refers to the phenomenon where a model performs better on the data used to train it than on data it was not trained on (holdout data, future real-world use, and so on). Overfitting occurs when a model memorizes part of the training data, fitting what is essentially noise. Accuracy on the training data is high, but because the noise changes from one dataset to the next, that accuracy does not carry over to unseen data; in other words, the model does not generalize well.
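As a concrete illustration, here is a minimal sketch (assuming Python with NumPy and scikit-learn; the linear ground truth, the noise level, the `make_data` helper, and the degree-15 polynomial are illustrative choices, not from the text) in which a flexible model memorizes the noise in a small training set:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

def make_data(n):
    # True relationship is linear; everything beyond that is noise.
    x = rng.uniform(-1, 1, size=(n, 1))
    y = 2.0 * x[:, 0] + rng.normal(scale=0.5, size=n)
    return x, y

x_train, y_train = make_data(20)
x_test, y_test = make_data(1000)

# A degree-15 polynomial has enough parameters to chase the noise
# in 20 training points.
poly = PolynomialFeatures(degree=15)
model = LinearRegression().fit(poly.fit_transform(x_train), y_train)

train_mse = mean_squared_error(y_train, model.predict(poly.transform(x_train)))
test_mse = mean_squared_error(y_test, model.predict(poly.transform(x_test)))
print(f"train MSE: {train_mse:.3f}")  # low: the model has chased the noise
print(f"test MSE:  {test_mse:.3f}")   # much higher: the fit does not generalize
```

The training error will typically come out far below the test error, which is exactly the train/holdout gap described above.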
Overfitting can occur at any time, but it tends to become more severe as the ratio of parameters to information increases. Usually this can be thought of as the ratio of parameters to observations, but not always...
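To see that ratio at work, one sketch (same assumptions as above: NumPy and scikit-learn, an illustrative linear ground truth) holds the number of observations fixed while growing the parameter count:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
n = 30  # fixed number of observations

x_train = rng.uniform(-1, 1, size=(n, 1))
y_train = 2.0 * x_train[:, 0] + rng.normal(scale=0.5, size=n)
x_test = rng.uniform(-1, 1, size=(1000, 1))
y_test = 2.0 * x_test[:, 0] + rng.normal(scale=0.5, size=1000)

# Hold n fixed and grow the parameter count: the train/test gap
# widens as the number of coefficients approaches n.
for degree in (1, 5, 15, 25):
    poly = PolynomialFeatures(degree=degree)
    model = LinearRegression().fit(poly.fit_transform(x_train), y_train)
    train_mse = mean_squared_error(y_train, model.predict(poly.transform(x_train)))
    test_mse = mean_squared_error(y_test, model.predict(poly.transform(x_test)))
    print(f"degree {degree:2d} ({degree + 1} parameters / {n} observations): "
          f"train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```

As the number of polynomial coefficients approaches the number of observations, the training error is driven toward zero while the test error grows, mirroring the parameters-to-observations ratio described above.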