We start by understanding the magic behind the algorithm: how naive Bayes works. Given a data sample x with n features x1, x2, ..., xn (x represents a feature vector and x = (x1, x2, ..., xn)), the goal of naive Bayes is to determine the probabilities that this sample belongs to each of K possible classes y1, y2, ..., yK, that is, P(yk | x) or P(yk | x1, x2, ..., xn), where k = 1, 2, ..., K. This looks no different from what we have just dealt with: x, or x1, x2, ..., xn, is the joint event that the sample has features with values x1, x2, ..., xn respectively, and yk is the event that the sample belongs to class k. We can apply Bayes' theorem right away:

P(yk | x) = P(x | yk) P(yk) / P(x)
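To make the pieces of the theorem concrete before we discuss each term, here is a minimal sketch that turns it into arithmetic. The priors and likelihoods below are made-up numbers for a hypothetical three-class problem, not values from any real dataset:

```python
import numpy as np

# P(yk): hypothetical prior probability of each of the 3 classes
prior = np.array([0.5, 0.3, 0.2])

# P(x | yk): hypothetical likelihood of observing the feature vector x
# under each class
likelihood = np.array([0.01, 0.05, 0.02])

# P(x | yk) * P(yk): joint probability of x and each class
joint = likelihood * prior

# P(x): the evidence, obtained by summing the joint over all classes
evidence = joint.sum()

# P(yk | x): posterior probability of each class given x
posterior = joint / evidence
print(posterior)           # approximately [0.208 0.625 0.167]
print(posterior.argmax())  # index of the most probable class, here 1
```

The posterior probabilities sum to one, and the predicted class is simply the one with the largest posterior.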
P(yk) portrays how the classes are distributed, provided no further knowledge of observation features. Thus, it is also called the prior in Bayesian probability terminology. The prior can be either predetermined (usually in a uniform manner where each class has an equal chance of occurrence...