A common alternative for evaluating and comparing models in the Bayesian world (at least in some of its countries) are Bayes factors. To understand what Bayes factors are, let's write Bayes' theorem one more time (we have not done so for a while!):

$$p(\theta \mid y) = \frac{p(y \mid \theta)\, p(\theta)}{p(y)}$$
Here, $y$ represents the data. We can make the dependency of the inference on a given model, $M_k$, explicit and write:

$$p(\theta \mid y, M_k) = \frac{p(y \mid \theta, M_k)\, p(\theta \mid M_k)}{p(y \mid M_k)}$$
The term in the denominator, $p(y \mid M_k)$, is known as the marginal likelihood (or evidence), as you may remember from the first chapter. When doing inference, we do not need to compute this normalizing constant, so in practice we often compute the posterior only up to a constant factor. However, for model comparison and model averaging, the marginal likelihood is an important quantity. If our main objective is to choose only one model, the best one, from a set of models, we can just choose the one with the largest marginal likelihood, $p(y \mid M_k)$. As a general...
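To make this concrete, here is a minimal sketch of comparing two models via their marginal likelihoods. It uses a coin-flipping (beta-binomial) setup, for which the marginal likelihood has a closed form, $p(y \mid M) = \binom{n}{h} \frac{B(h+\alpha,\, n-h+\beta)}{B(\alpha,\, \beta)}$. The data and the two priors are hypothetical choices for illustration, not taken from the text:

```python
from math import comb, exp, lgamma, log

def betaln(a, b):
    # Log of the Beta function, B(a, b) = Gamma(a) Gamma(b) / Gamma(a + b)
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def log_marginal_likelihood(heads, n, alpha, beta):
    # Closed-form marginal likelihood of a beta-binomial model:
    # p(y | M) = C(n, heads) * B(heads + alpha, n - heads + beta) / B(alpha, beta)
    return (log(comb(n, heads))
            + betaln(heads + alpha, n - heads + beta)
            - betaln(alpha, beta))

# Hypothetical data: 9 heads in 12 flips
heads, n = 9, 12

# Model 0: uniform prior Beta(1, 1); Model 1: prior concentrated around
# fairness, Beta(30, 30) -- both priors are illustrative choices
log_m0 = log_marginal_likelihood(heads, n, 1, 1)
log_m1 = log_marginal_likelihood(heads, n, 30, 30)

# Ratio of marginal likelihoods: the Bayes factor of model 0 over model 1
bf_01 = exp(log_m0 - log_m1)
print(f"BF_01 = {bf_01:.3f}")
```

Whichever model has the larger marginal likelihood is favored; the ratio of the two marginal likelihoods (computed here as `bf_01`) is exactly the Bayes factor discussed next. Note that with a uniform Beta(1, 1) prior, the marginal likelihood reduces to $1/(n+1)$, which is a handy sanity check for the implementation.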