Multivariate Bernoulli classification
So far, our investigation of Naïve Bayes has focused on features that are essentially binary {UP=1, DOWN=0}. The mean value is computed as the ratio of the number of observations for which xi = UP to the total number of observations.
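The mean of a binary feature reduces to this ratio; a one-line illustration (the sample values are made up):

```python
xs = [1, 0, 1, 1]  # binary feature: UP = 1, DOWN = 0

# Mean = (number of observations with xi = UP) / (total observations)
mean = sum(x == 1 for x in xs) / len(xs)
```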
As stated in the first section, the Gaussian distribution is more appropriate for continuous features, or for binary features only when the labeled dataset is very large. This example, with binary features, is therefore a perfect candidate for the Bernoulli model.
Model
The Bernoulli model differs from the Naïve Bayes classifier in that it penalizes features x that have no observations, whereas the Naïve Bayes classifier simply ignores them [5:10].
Note
The Bernoulli mixture model
M8: For a feature function fk, with fk = 1 if the feature is observed and 0 otherwise, and p(xk | Cj) the probability that the observed feature xk belongs to the class Cj, the posterior probability is computed as follows:

p(Cj | x) ∝ p(Cj) Πk [fk·p(xk | Cj) + (1 − fk)·(1 − p(xk | Cj))]
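The per-class term of the M8 product can be evaluated in log space to avoid numerical underflow. A minimal sketch, assuming binary features and per-feature Bernoulli parameters (the function name is ours):

```python
import math

def bernoulli_log_likelihood(x, p):
    """Log-likelihood of a binary feature vector x for one class,
    given per-feature probabilities p of observing that feature.
    Each term is log(fk * pk + (1 - fk) * (1 - pk)), with fk = xk."""
    return sum(math.log(pk if fk == 1 else 1.0 - pk)
               for fk, pk in zip(x, p))
```

The posterior score for a class is then its log-prior plus this log-likelihood; the class with the highest score wins.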
Implementation
The implementation of the Bernoulli model consists of...
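As a rough guide to what such an implementation involves, here is a minimal self-contained sketch in Python (the class and method names are our assumptions, not the book's code). Laplace smoothing plays the role of penalizing, rather than ignoring, features with no observations:

```python
import math

class BernoulliNaiveBayes:
    """Illustrative multivariate Bernoulli classifier.
    Features are binary (1 = UP, 0 = DOWN); Laplace smoothing keeps
    unobserved features from zeroing out the likelihood."""

    def fit(self, X, y):
        classes = sorted(set(y))
        n = len(y)
        self.log_prior = {}
        self.p = {}  # p[c][k] = P(xk = 1 | class c)
        for c in classes:
            rows = [x for x, label in zip(X, y) if label == c]
            self.log_prior[c] = math.log(len(rows) / n)
            # Laplace-smoothed per-feature probability of observing UP
            self.p[c] = [(sum(col) + 1) / (len(rows) + 2)
                         for col in zip(*rows)]
        return self

    def predict(self, x):
        def score(c):
            s = self.log_prior[c]
            for fk, pk in zip(x, self.p[c]):
                # M8 term fk*p + (1 - fk)*(1 - p), accumulated in log space
                s += math.log(pk if fk == 1 else 1.0 - pk)
            return s
        return max(self.p, key=score)
```

A usage example on a toy dataset:

```python
model = BernoulliNaiveBayes().fit(
    X=[[1, 1], [1, 0], [0, 0], [0, 1]],
    y=[1, 1, 0, 0])
label = model.predict([1, 1])
```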