Regularizing priors
Using informative and weakly informative priors is a way of introducing bias into a model; if done properly, this can be a good thing, because bias helps to prevent overfitting.
The regularization idea is so powerful and useful that it has been discovered several times, including outside the Bayesian framework. In some fields, this idea is known as Tikhonov regularization. In non-Bayesian statistics, it takes the form of two modifications of the least squares method, known as ridge regression and Lasso regression. From the Bayesian point of view, ridge regression can be interpreted as placing normal priors on the beta coefficients (of a linear model) with a small standard deviation that pushes the coefficients towards zero, while Lasso regression can be interpreted as using Laplace priors instead of Gaussian ones for the beta coefficients. The standard versions of ridge and Lasso regression correspond to single point estimates, while the Bayesian versions yield full posterior distributions.
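To see why this correspondence holds, note that minimizing the ridge objective, the sum of squared errors plus a penalty term proportional to the sum of squared coefficients, is equivalent to maximizing the log-posterior of a linear model with independent zero-mean Normal priors on the coefficients; replacing the squared penalty with an absolute-value penalty (Lasso) corresponds to Laplace priors. The following is a minimal sketch of the two regularizing priors side by side, assuming PyMC as the probabilistic programming library; the synthetic data and variable names are illustrative, not taken from the text:

```python
import numpy as np
import pymc as pm

# Synthetic data (hypothetical): 100 observations, 5 predictors,
# only the first two predictors carry real signal.
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 5))
true_beta = np.array([2.0, -1.0, 0.0, 0.0, 0.0])
y = X @ true_beta + rng.normal(scale=1.0, size=100)

# Ridge-like model: Normal priors with a small standard deviation
# shrink the beta coefficients towards zero.
with pm.Model() as ridge_like:
    alpha = pm.Normal("alpha", mu=0, sigma=10)
    beta = pm.Normal("beta", mu=0, sigma=1, shape=5)
    sigma = pm.HalfNormal("sigma", sigma=1)
    mu = alpha + pm.math.dot(X, beta)
    pm.Normal("y_obs", mu=mu, sigma=sigma, observed=y)
    idata_ridge = pm.sample()

# Lasso-like model: identical except that the coefficients get
# Laplace priors, which concentrate more mass near zero.
with pm.Model() as lasso_like:
    alpha = pm.Normal("alpha", mu=0, sigma=10)
    beta = pm.Laplace("beta", mu=0, b=1, shape=5)
    sigma = pm.HalfNormal("sigma", sigma=1)
    mu = alpha + pm.math.dot(X, beta)
    pm.Normal("y_obs", mu=mu, sigma=sigma, observed=y)
    idata_lasso = pm.sample()
```

Because the Laplace prior places more probability mass near zero than a Normal of comparable scale (while keeping heavier tails), the posteriors for the truly irrelevant coefficients shrink more aggressively in the second model, mirroring the sparsity-inducing behavior of Lasso. Unlike their point-estimate counterparts, both models return full posterior distributions, so the amount of shrinkage comes with a measure of uncertainty.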