Shrinkage methods
The bias-variance trade-off is a balance every statistics and machine learning practitioner must strike when modeling: too much of either bias or variance renders results useless. To catch these problems, we examine test results and the residuals. For example, assuming a useful set of features and an appropriate model have been selected, a model that performs well on validation data but poorly on a test set may be suffering from too much variance; conversely, a model that fails to perform well at all may have too much bias. In either case, the model fails to generalize. However, while high bias can be identified through poor performance from the start, high variance can be notoriously deceptive: depending on the data, a high-variance model may perform very well during training and even during validation. High-variance models frequently assign coefficients unnecessarily large values when very similar results can be obtained from...
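The link between high variance and inflated coefficients can be seen numerically. The following is a minimal sketch (the data, polynomial degree, and penalty strength are illustrative choices, not taken from the text): an ordinary least-squares fit on an ill-conditioned polynomial design produces much larger coefficients than a ridge fit, which shrinks them via an L2 penalty.

```python
import numpy as np

# Illustrative data: noisy sine samples (assumed setup, not from the text).
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 25))
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, 25)

# Degree-10 polynomial design matrix: columns 1, x, x^2, ..., x^10.
degree = 10
X = np.vander(x, degree + 1, increasing=True)

# OLS: minimize ||y - Xb||^2. The near-collinear columns make the
# unpenalized solution use very large, offsetting coefficients.
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Ridge: minimize ||y - Xb||^2 + lam * ||b||^2, via the closed form
# b = (X^T X + lam I)^{-1} X^T y. The penalty shrinks the coefficients.
lam = 1.0
b_ridge = np.linalg.solve(X.T @ X + lam * np.eye(degree + 1), X.T @ y)

print("max |OLS coef|:  ", np.abs(b_ols).max())
print("max |ridge coef|:", np.abs(b_ridge).max())
```

The ridge fit trades a small increase in bias for a large reduction in coefficient magnitude, and hence in variance, which is the essence of the shrinkage methods this section introduces.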