We made it! From a very noisy dataset, we built two classifiers that solve part of our goal. Of course, we had to be pragmatic and adapt our initial goal to what was achievable. But on the way, we learned about the strengths and weaknesses of nearest-neighbor and logistic regression, and got an introduction to simple classification with neural networks. We learned how to extract features, such as LinkCount, NumTextTokens, NumCodeLines, AvgSentLen, AvgWordLen, NumAllCaps, and NumExclams, and how to analyze their impact on the classifier's performance.
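As a compact recap, the feature extraction summarized above might look like the sketch below. The exact definitions (e.g., counting `<pre>` blocks as code, `<a>` tags as links) are assumptions for illustration, not the book's actual code:

```python
import re

def extract_features(text):
    """Sketch of the features recapped in this chapter; the exact
    definitions here are illustrative assumptions."""
    # Assume code appears inside <pre>...</pre> blocks.
    code_lines = sum(len(block.splitlines())
                     for block in re.findall(r"<pre>(.*?)</pre>", text, re.DOTALL))
    # Strip code blocks before computing the text-based features.
    prose = re.sub(r"<pre>.*?</pre>", " ", text, flags=re.DOTALL)
    link_count = len(re.findall(r"<a\s", prose))
    prose = re.sub(r"<[^>]+>", " ", prose)  # drop remaining HTML tags
    tokens = prose.split()
    sentences = [s for s in re.split(r"[.!?]+", prose) if s.strip()]
    return {
        "LinkCount": link_count,
        "NumTextTokens": len(tokens),
        "NumCodeLines": code_lines,
        "AvgSentLen": (sum(len(s.split()) for s in sentences) / len(sentences)
                       if sentences else 0.0),
        "AvgWordLen": (sum(len(t) for t in tokens) / len(tokens)
                       if tokens else 0.0),
        "NumAllCaps": sum(1 for t in tokens if t.isupper() and len(t) > 1),
        "NumExclams": text.count("!"),
    }
```

Such a dictionary of per-post features is what would then be fed, row by row, into the classifiers discussed in this chapter.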
Even more valuable, we learned an informed way of debugging poorly performing classifiers, which will help us produce usable systems much faster in the future.
After having looked into nearest-neighbor and logistic regression, in Chapter 5, Dimensionality Reduction, we will get familiar with yet another...