Now that we have our data ready, we can predict the digits using the nearest neighbors classifier, as follows:
from sklearn.neighbors import KNeighborsClassifier
# Classify each sample by a majority vote among its 11 nearest
# training samples, measured with the Manhattan (L1) distance
clf = KNeighborsClassifier(n_neighbors=11, metric='manhattan')
clf.fit(x_train, y_train)
y_test_pred = clf.predict(x_test)
For this example, I set n_neighbors to 11 and metric to manhattan, meaning that at prediction time we compare each new sample to its 11 nearest training samples, using the Manhattan distance (the sum of the absolute differences between the features of two samples) to evaluate how near they are. More on these parameters in a bit. This model made predictions with an accuracy of 96.4% on the test set. That might sound reasonable, but I'm sorry to break it to you: this isn't a fantastic score for this particular dataset. Anyway, let's keep dissecting the model's performance.
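If you want to check that accuracy figure yourself, a minimal sketch using scikit-learn's accuracy_score could look like the following, assuming y_test holds the true labels from the earlier train/test split; the exact number you get will depend on how the data was split:
from sklearn.metrics import accuracy_score
# Fraction of test samples whose predicted label matches the true label
accuracy = accuracy_score(y_test, y_test_pred)
print(f'Accuracy: {accuracy:.1%}')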
Using a confusion matrix to understand the model's mistakes
When dealing with a dataset with 10 class labels...