Learning to use Naïve Bayes classifiers
Learning from examples can be hard, even for humans. Given a list of examples relating two sets of values, it is not always easy to see the connection between them. One way of tackling this problem is to classify one set of values and then test the predictions, and that is where classifier algorithms come in handy.
Naïve Bayes classifiers are prediction algorithms that assign labels to problem instances; they apply Bayes' theorem under a strong independence assumption between the variables being analyzed. One of the key advantages of Naïve Bayes classifiers is their scalability.
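To make the idea concrete, here is a minimal sketch of how Bayes' theorem combines with the independence assumption: the score for each label is its prior probability multiplied by the per-feature likelihoods, and the label with the highest normalized score wins. The priors and likelihood values below are invented purely for illustration and are not part of the recipe:

```java
// A hypothetical sketch of the Naïve Bayes scoring rule.
// All probability values here are made-up illustration numbers.
public class NaiveBayesSketch {
    public static void main(String[] args) {
        // Prior probabilities P(class), assumed equal here.
        double priorPos = 0.5, priorNeg = 0.5;

        // Per-feature likelihoods P(feature | class); the naive
        // assumption lets us treat each feature independently.
        double[] likePos = {0.8, 0.6};
        double[] likeNeg = {0.3, 0.4};

        // Score = prior times the product of the likelihoods.
        double scorePos = priorPos, scoreNeg = priorNeg;
        for (double l : likePos) scorePos *= l;
        for (double l : likeNeg) scoreNeg *= l;

        // Normalize so the two posteriors sum to 1.
        double total = scorePos + scoreNeg;
        System.out.printf("P(positive | features) = %.3f%n", scorePos / total);
        System.out.printf("P(negative | features) = %.3f%n", scoreNeg / total);
    }
}
```

With these numbers the positive score is 0.5 × 0.8 × 0.6 = 0.24 and the negative score is 0.5 × 0.3 × 0.4 = 0.06, so the classifier would predict the positive label.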
Getting ready…
Since it is hard to build a general classifier, we will build ours assuming that the inputs are positive- and negative-labeled examples. So, the first thing we need to address is defining the labels that our classifier will handle, using an enum data structure called NBCLabel:
public enum NBCLabel { POSITIVE, NEGATIVE }
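As a quick sanity check, the enum can be exercised on its own; the surrounding class and the example values below are hypothetical and only show how a label might be attached to a training instance:

```java
// Hypothetical demo of the NBCLabel enum from the recipe.
public class NBCLabelDemo {
    public enum NBCLabel { POSITIVE, NEGATIVE }

    public static void main(String[] args) {
        // Attach a label to an example (the example itself is imaginary).
        NBCLabel label = NBCLabel.POSITIVE;

        // Enums give type-safe comparison with ==.
        if (label == NBCLabel.POSITIVE) {
            System.out.println("This example is labeled POSITIVE");
        }
    }
}
```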
How to do it…
The classifier...