We have already used and implemented some classification algorithms in the previous chapters: decision tree learning, random forest, and KNN are all well suited for this task. However, as Boromir used to say, "One does not simply walk into neural networks without knowing about logistic regression". So, as a reminder: classification is almost the same as regression, except that the response variable y is not continuous (a float) but takes values from some set of discrete values (an enum). In this chapter, we're primarily concerned with binary classification, where y can be either true or false, one or zero, belonging to a positive or a negative class.
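To make the regression-versus-classification distinction concrete, here is a minimal sketch of the idea behind logistic regression: compute a linear combination of the inputs exactly as in linear regression, squash it through a sigmoid into a probability, and threshold that probability to get a discrete class label. The weights, bias, and inputs below are made-up toy values, purely for illustration:

```python
import math

def sigmoid(z):
    # Squash a real-valued score into (0, 1), interpretable as P(y = 1)
    return 1.0 / (1.0 + math.exp(-z))

def predict(weights, bias, x, threshold=0.5):
    # The linear combination is the same as in linear regression...
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    # ...but the output is a discrete class label, not a float
    return 1 if sigmoid(z) >= threshold else 0

# Toy weights and inputs: score = 2*1 - 1*3 + 0.5 = -0.5, sigmoid(-0.5) < 0.5
print(predict([2.0, -1.0], 0.5, [1.0, 3.0]))  # → 0 (negative class)
```

Finding good values for the weights and bias is what training is about; the sketch above only covers the prediction side.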
That said, if you think about it for a moment, it's not too hard to build a multiclass classifier out of several binary classifiers by chaining them one after the other. In the classification...