Summary
In this chapter we covered several powerful and widely used classification models. We began by using linear regression as a classifier, then observed a significant performance increase when we switched to the logistic regression classifier. We then moved on to memorizing models such as K-NN, which, while simple to fit, can form complex non-linear decision boundaries, even when images are used as the model's input. We finished our introduction to classification with decision trees and the ID3 algorithm, and saw how decision trees, like K-NN models, memorize the training data, in their case encoding it as rules and decision gates that make predictions with a high degree of accuracy.
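To make the K-NN point concrete, here is a minimal majority-vote K-NN sketch in pure Python (the function name `knn_predict` and the toy data are illustrative, not from the chapter). The inner cluster is surrounded by the outer class, so no single straight line separates them, yet K-NN classifies both queries correctly:

```python
from collections import Counter
import math

def knn_predict(train_points, train_labels, query, k=3):
    """Predict a label for `query` by majority vote among the
    k nearest training points (Euclidean distance)."""
    dists = sorted(
        (math.dist(p, query), label)
        for p, label in zip(train_points, train_labels)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy data: class "a" clustered near the origin, class "b" around it,
# a layout no linear classifier can separate.
points = [(0, 0), (0.5, 0.5), (-0.5, 0.3),          # class "a"
          (3, 0), (2, 2), (-3, 0), (0, 3), (0, -3)]  # class "b"
labels = ["a", "a", "a", "b", "b", "b", "b", "b"]

print(knn_predict(points, labels, (0.2, -0.1)))  # near the inner cluster -> "a"
print(knn_predict(points, labels, (2.5, 0.4)))   # near the outer ring -> "b"
```

Note that "fitting" here is just storing the training points; all the work happens at prediction time, which is what makes K-NN a memorizing model.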
In the next chapter, we will extend what we have learned here, covering ensemble techniques, including boosting and the highly effective random forest method.