Chapter 12: K-Nearest Neighbors for Classification
K-nearest neighbors (KNN) is a good choice of classification model when the dataset has relatively few observations and features, and predicting class membership does not need to be especially fast. It is a lazy learner: fitting amounts to little more than storing the training data, so it trains more quickly than most classification algorithms but is considerably slower at classifying new observations, because the distance computations are deferred until prediction time. It can also yield less accurate predictions at the extremes of the feature space, where training observations are sparse, though this can be mitigated by choosing k appropriately. We will weigh these trade-offs carefully in the model we develop in this chapter.
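As a concrete illustration of this trade-off, the sketch below fits a KNN classifier and scores it on held-out data. It assumes scikit-learn and a synthetic dataset; both are illustrative choices, not fixed by this chapter. Notice that "fitting" merely stores the training data, while the expensive distance computations happen at prediction time.

```python
# A minimal sketch of KNN's fit/predict trade-off, assuming scikit-learn
# and a synthetic dataset (both are illustrative assumptions).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=1_000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)             # "fitting" essentially just stores the training data
accuracy = knn.score(X_test, y_test)  # distances are computed here, at prediction time
print(f"Test accuracy: {accuracy:.3f}")
```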
KNN is perhaps the most straightforward non-parametric algorithm we could select, which makes it a useful diagnostic tool. Because it is non-parametric, no assumptions need to be made about the distribution of the features or their relationship with the target. There are few hyperparameters to tune, and the two key ones, the number of nearest neighbors (k) and the distance metric, are easy to interpret.
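To make those two hyperparameters concrete, here is a hedged sketch of tuning them with a grid search, again assuming scikit-learn; the candidate values for k and the distance metrics are illustrative, not prescribed by this chapter.

```python
# A sketch of tuning KNN's two key hyperparameters with a grid search,
# assuming scikit-learn; the grid values shown are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=1_000, n_features=10, random_state=42)

param_grid = {
    "n_neighbors": [1, 3, 5, 11, 21],      # k: how many neighbors vote on the class
    "metric": ["euclidean", "manhattan"],  # how "nearness" is measured
}
search = GridSearchCV(KNeighborsClassifier(), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_)
```

Both hyperparameters map directly onto the geometry of the problem: a larger k smooths the decision boundary, while the metric determines which observations count as close, which is what makes them easy to reason about when diagnosing a model's behavior.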
KNN...