Naive Bayes is a classification method based on Bayes' theorem, which is named after the statistician Thomas Bayes. It is a fast, robust technique that is easy to understand and interpret, and it remains efficient even on large datasets. Naive Bayes is widely used in text mining applications such as document classification, sentiment analysis of customer reviews, and spam filtering.
The naive Bayes classifier is called "naive" because it assumes class conditional independence: given the class label, each feature is treated as independent of all the other features. For example, whether a person has diabetes depends on their eating habits, exercise routine, profession, and lifestyle, and these factors are clearly related to one another. Even when features are correlated like this, naive Bayes still treats them as independent, as the small sketch below illustrates.
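The following is a minimal sketch, not part of the tutorial's code, that shows what the independence assumption looks like in practice. It uses a tiny made-up dataset with two hypothetical features (eating habits and exercise) and a diabetes label, and scores each class by multiplying the per-feature likelihoods as if they were independent.

```python
# A minimal sketch of class conditional independence.
# The data below is invented purely for illustration.
rows = [
    # (eating, exercise, diabetes)
    ("unhealthy", "rarely", "yes"),
    ("unhealthy", "rarely", "yes"),
    ("unhealthy", "daily",  "no"),
    ("healthy",   "daily",  "no"),
    ("healthy",   "rarely", "no"),
    ("healthy",   "daily",  "no"),
]

def posterior(eating, exercise):
    """Score each class as P(class) * P(eating|class) * P(exercise|class).

    The two likelihoods are simply multiplied, i.e. the features are
    treated as independent given the class -- the "naive" assumption.
    """
    scores = {}
    labels = {r[2] for r in rows}
    for c in labels:
        in_class = [r for r in rows if r[2] == c]
        prior = len(in_class) / len(rows)
        p_eat = sum(r[0] == eating for r in in_class) / len(in_class)
        p_exe = sum(r[1] == exercise for r in in_class) / len(in_class)
        scores[c] = prior * p_eat * p_exe
    # Normalize so the class scores sum to 1 (the predicted class is unchanged).
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

print(posterior("unhealthy", "rarely"))  # the "yes" class gets the higher score
```

With that assumption in mind, let's look at the Bayes' theorem formula.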