Summary
Naïve Bayes is a great algorithm to add to our regular toolkit for solving classification problems. Because of its strong feature-independence assumption, it is rarely the approach that will produce predictions with the least bias. The flip side is that it carries less risk of overfitting, particularly when working with continuous features. It is also quite efficient, scaling well to a large number of observations and a large feature space.
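As a minimal sketch of the kind of workflow described above, the following fits a Gaussian Naïve Bayes classifier to a small dataset with continuous features. It assumes scikit-learn is available and uses the built-in iris data purely for illustration; the dataset and parameter choices are not from this chapter.

```python
# Hedged sketch: Gaussian Naïve Bayes on a toy dataset with continuous features.
# Assumes scikit-learn is installed; iris is used only as a stand-in dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# Load four continuous features and a three-class target
X, y = load_iris(return_X_y=True)

# Hold out 30% of observations for evaluation
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Fit the Naïve Bayes model and score it on the held-out data
nb = GaussianNB().fit(X_train, y_train)
acc = accuracy_score(y_test, nb.predict(X_test))
print(f"Test accuracy: {acc:.3f}")
```

Fitting is fast because the model only estimates a per-class mean and variance for each feature, which is why it scales well to large datasets.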
The next two chapters of this book will explore unsupervised learning algorithms – those where we do not have a target to predict. In the next chapter, we will examine principal component analysis, followed by K-means clustering in the chapter after that.