Using informative and weakly informative priors is a way of introducing bias into a model and, if done properly, this can be a really good thing, because bias helps prevent overfitting and thus contributes to the model making predictions that generalize well. This idea of adding bias to reduce the generalization error, without compromising the model's ability to adequately fit the data used to train it, is known as regularization. Regularization often takes the form of penalizing larger values of a model's parameters. This limits the amount of information the model can represent and thus reduces the chance that it captures noise instead of signal.
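As a concrete illustration, here is a minimal sketch of this idea using PyMC (the library choice, the variable names, and the synthetic data are assumptions for the example, not taken from the text). A linear regression is given a weakly informative Normal(0, 1) prior on its coefficients; this prior plays the role of the penalty described above, pulling the coefficients toward zero unless the data provide strong evidence for larger values.

```python
import numpy as np
import pymc as pm

# Synthetic data: 20 observations, 5 predictors, only the first two matter.
rng = np.random.default_rng(42)
X = rng.normal(size=(20, 5))
true_beta = np.array([2.0, -1.0, 0.0, 0.0, 0.0])
y = X @ true_beta + rng.normal(scale=0.5, size=20)

with pm.Model() as regularized_model:
    # Weakly informative prior: large coefficient values are penalized,
    # shrinking the estimates toward zero unless the data say otherwise.
    beta = pm.Normal("beta", mu=0, sigma=1, shape=5)
    sigma = pm.HalfNormal("sigma", sigma=1)
    mu = pm.math.dot(X, beta)
    pm.Normal("y_obs", mu=mu, sigma=sigma, observed=y)
    idata = pm.sample(1000, tune=1000, random_seed=42)
```

Replacing `sigma=1` in the prior for `beta` with a very large value (say, 100) approximates a flat prior; with only 20 observations, the posteriors for the irrelevant coefficients then spread out much more, which is exactly the kind of overfitting the regularizing prior guards against.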
The regularization idea is so powerful and useful that it has been discovered several times, including outside the Bayesian framework. In some fields, this idea is known...