Summary
In this chapter, you have learned about the k-nearest neighbors (KNN) and Naive Bayes techniques, both of which require relatively little computational power. KNN is called a lazy learner because it does not build a model during training; it simply compares a new observation with the stored training data points in order to assign it to a class. You have also seen how to tune the k value using the grid search technique. The Naive Bayes classifier has been explained alongside NLP examples covering the most common text-processing techniques, to give you a concise flavor of this field. In text processing, either Naive Bayes or SVM is typically used, because both techniques handle high-dimensional data well, which is very relevant in NLP, where word vectors are both high-dimensional and sparse.
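As a quick recap of the tuning step mentioned above, the following is a minimal sketch (not the chapter's exact code) of tuning KNN's k value with grid search, assuming scikit-learn and a synthetic dataset in place of the chapter's data:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic data stands in for the chapter's example dataset
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# Search over a range of k values with 5-fold cross-validation
param_grid = {'n_neighbors': [1, 3, 5, 7, 9, 11]}
grid = GridSearchCV(KNeighborsClassifier(), param_grid, cv=5)
grid.fit(X_train, y_train)

print("Best k:", grid.best_params_['n_neighbors'])
print("Test accuracy:", grid.score(X_test, y_test))
```

Similarly, the point about Naive Bayes coping with high-dimensional, sparse word vectors can be illustrated with a small sketch, assuming scikit-learn and a toy spam/ham corpus rather than the chapter's NLP example:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy documents and labels stand in for the chapter's NLP data
docs = ["free offer click now", "meeting agenda attached",
        "win a free prize", "project status update"]
labels = ["spam", "ham", "spam", "ham"]

# TF-IDF yields a sparse matrix with one column per word; MultinomialNB
# works directly on this high-dimensional, sparse representation
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(docs, labels)
print(model.predict(["free project prize"]))
```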
In the next chapter, we will cover unsupervised learning in detail, more precisely clustering and principal component analysis models.