Building a naïve Bayes classifier from scratch
In this section, we will study one of the most classic and important classification algorithms: naïve Bayes classification. We covered Bayes' theorem several times in previous chapters, but now is a good time to revisit its form.
Suppose A and B are two random events; the following relationship holds as long as P(B) ≠ 0:

P(A|B) = P(B|A) P(A) / P(B)
Some terminology to review: P(A|B) is called the posterior probability, as it is the probability of event A after knowing the outcome of event B. P(A), on the other hand, is called the prior probability because it contains no information about event B. The remaining two terms are P(B|A), called the likelihood, and P(B), called the evidence.
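To make the posterior/prior distinction concrete, here is a small numerical sketch of Bayes' theorem. The scenario and all numbers (a rare condition with prevalence 1%, a test with a 95% true-positive rate and a 5% false-positive rate) are hypothetical, chosen only to illustrate the formula:

```python
# Hypothetical numbers, for illustration only
p_a = 0.01            # prior P(A): prevalence of the condition
p_b_given_a = 0.95    # likelihood P(B|A): test is positive given condition
p_b_given_not_a = 0.05  # false-positive rate P(B|not A)

# Evidence P(B) via the law of total probability
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Posterior P(A|B) = P(B|A) * P(A) / P(B)
posterior = p_b_given_a * p_a / p_b
print(round(posterior, 4))  # → 0.161
```

Note how the posterior (about 16%) differs sharply from the likelihood (95%): the prior matters, which is exactly what the Bayes classifier exploits.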
Simply put, the idea of the Bayes classifier is to set the classification category variable as our A and the features (there can be many of them) as our B. We predict the classification results as posterior probabilities.
Then why the naïve Bayes classifier? The name comes from its key simplifying assumption: the naïve Bayes classifier assumes that different features are conditionally independent of each other given the class label, so the likelihood of the whole feature vector factorizes into a product of per-feature likelihoods.
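Putting the pieces together, the classifier picks the class that maximizes the (log) posterior: the log prior plus the sum of per-feature log likelihoods. The following is a minimal from-scratch sketch for categorical features with Laplace smoothing; the class name, toy data, and smoothing choice are illustrative assumptions, not a prescribed implementation:

```python
import math
from collections import Counter, defaultdict


class NaiveBayes:
    """Minimal categorical naive Bayes with Laplace smoothing (illustrative sketch)."""

    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.class_totals = Counter(y)
        # Log priors: log P(class)
        self.log_priors = {c: math.log(self.class_totals[c] / len(y))
                           for c in self.classes}
        # Per-(class, feature-index) counts of observed feature values
        self.counts = defaultdict(Counter)
        # Distinct values seen for each feature, used for smoothing
        self.values = [set() for _ in range(len(X[0]))]
        for xi, yi in zip(X, y):
            for j, v in enumerate(xi):
                self.counts[(yi, j)][v] += 1
                self.values[j].add(v)
        return self

    def predict(self, x):
        best, best_lp = None, float("-inf")
        for c in self.classes:
            lp = self.log_priors[c]
            for j, v in enumerate(x):
                # Laplace smoothing: add 1 to every count so unseen
                # values never yield a zero probability
                num = self.counts[(c, j)][v] + 1
                den = self.class_totals[c] + len(self.values[j])
                lp += math.log(num / den)
            if lp > best_lp:
                best, best_lp = c, lp
        return best


# Hypothetical toy data: features (weather, temperature), label "play"
X = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "cool")]
y = ["no", "no", "yes", "yes"]
clf = NaiveBayes().fit(X, y)
print(clf.predict(("rain", "mild")))  # → yes
```

Working in log space avoids numerical underflow: multiplying many small probabilities directly would quickly round to zero, while summing their logarithms stays well-conditioned.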