Evaluation
It is generally never a good idea to base an assessment on a single number. The f-score is usually more robust than simpler metrics to tricks that give good scores despite not being useful. An example of such a metric is accuracy: as we saw in the previous chapter, a spam classifier could predict everything as being spam and achieve over 80 percent accuracy, even though that solution is not useful at all. For that reason, it is usually worth going more in-depth on the results.
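To make this concrete, here is a minimal sketch, using hypothetical labels (80 spam messages and 20 non-spam messages) and scikit-learn's metric functions, of how a classifier that predicts everything as spam scores well on accuracy while the f-score on the non-spam class exposes it as useless:

from sklearn.metrics import accuracy_score, f1_score

# Hypothetical labels: 1 means spam, 0 means not spam
y_true = [1] * 80 + [0] * 20
# A degenerate classifier that predicts everything as spam
y_pred = [1] * 100

print(accuracy_score(y_true, y_pred))           # 0.8 -- looks good
print(f1_score(y_true, y_pred, pos_label=0))    # 0.0 -- the non-spam class is never found
print(f1_score(y_true, y_pred, average='macro'))  # roughly 0.44 -- averaged over both classes

The labels and counts here are only illustrative; the point is that averaging precision and recall per class makes the degenerate strategy much harder to disguise.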
To start with, we will look at the confusion matrix, as we did in Chapter 8, Beating CAPTCHAs with Neural Networks. Before we can do that, we need predictions for a testing set. The previous code uses cross_val_score, which doesn't actually give us a trained model we can use, so we will need to refit one. To do that, we need separate training and testing subsets:
from sklearn.model_selection import train_test_split

training_documents, testing_documents, y_train, y_test = \
    train_test_split(documents, classes, random_state=14)
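As a rough sketch of where this is heading, the refit and confusion matrix step might look like the following. The pipeline here is only a stand-in, assuming the documents are raw strings and substituting a simple CountVectorizer with MultinomialNB for whichever model this chapter actually builds:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
from sklearn.metrics import confusion_matrix

# Hypothetical stand-in for the chapter's real pipeline
pipeline = Pipeline([('vectorizer', CountVectorizer()),
                     ('classifier', MultinomialNB())])

# Refit on the training split, then predict the held-out documents
pipeline.fit(training_documents, y_train)
y_pred = pipeline.predict(testing_documents)

# Rows are the true classes, columns are the predicted classes
print(confusion_matrix(y_test, y_pred))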
Next, we...