Naive Bayes is based on Bayes' theorem, which applies conditional probability: P(A|B) = P(B|A) · P(A) / P(B). Its terms are defined as follows:

- P(A|B) is a posterior probability, the probability of A after having observed some events (B). It is also a conditional probability: the likelihood of A happening given B has already happened.
- P(B|A) is the probability of B given the prior observations A. It is also a conditional probability: the likelihood of B happening given A has already happened.
- P(A) is the probability of A prior to the observations.
- P(B) is the probability of the observations B on their own, regardless of class. It acts as the evidence, normalizing the posterior.
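To make the terms above concrete, here is a minimal sketch of Bayes' theorem in Python. The numbers (a 1% prior, a 90% true-positive rate, and a 5% false-positive rate for a hypothetical diagnostic test) are illustrative assumptions, not values from the text:

```python
# Hypothetical diagnostic-test example (all probabilities are assumed values).
p_a = 0.01          # P(A): prior probability of the condition
p_b_given_a = 0.9   # P(B|A): probability of a positive test given the condition
p_b_given_not_a = 0.05  # assumed false-positive rate

# P(B): total probability of observing a positive test (the evidence)
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
p_a_given_b = p_b_given_a * p_a / p_b
print(round(p_a_given_b, 3))  # → 0.154
```

Note how the posterior P(A|B) is much smaller than the likelihood P(B|A): the low prior P(A) pulls it down, which is exactly the correction Bayes' theorem provides.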
Naive Bayes, although based on Bayes' theorem, assumes that the features within a class are independent of one another. In many cases this assumption makes predictions far more practical to compute: each feature contributes to the prediction independently, whether or not the features are actually related. As long...
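The independence assumption can be sketched as a tiny classifier that multiplies per-feature likelihoods (in log space) within each class. The toy spam/ham data, vocabulary, and Laplace smoothing below are illustrative assumptions, not part of the original text:

```python
import math
from collections import defaultdict

# Hypothetical toy data: each sample is a set of word features plus a label.
train = [
    ({"free", "win"}, "spam"),
    ({"free", "meeting"}, "spam"),
    ({"meeting", "agenda"}, "ham"),
    ({"agenda", "notes"}, "ham"),
]
vocab = {"free", "win", "meeting", "agenda", "notes"}

# Count classes and per-class feature occurrences.
class_counts = defaultdict(int)
feature_counts = defaultdict(lambda: defaultdict(int))
for features, label in train:
    class_counts[label] += 1
    for f in features:
        feature_counts[label][f] += 1

def predict(features):
    total = sum(class_counts.values())
    best, best_score = None, float("-inf")
    for label, count in class_counts.items():
        # log P(class) + sum of log P(feature|class): the features are
        # treated as conditionally independent given the class (the
        # "naive" step), so their likelihoods simply multiply.
        score = math.log(count / total)
        for f in features:
            # Laplace smoothing avoids log(0) for unseen features.
            score += math.log((feature_counts[label][f] + 1) / (count + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

print(predict({"free", "win"}))  # → spam
```

Because each P(feature|class) is estimated separately, correlated features are double-counted; in practice Naive Bayes still predicts well despite this, since only the relative ordering of class scores matters.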