In this chapter, we learned that the k-nearest neighbor algorithm is a classification algorithm that assigns to a given data point the majority class among its k nearest neighbors. The distance between two points is measured by a metric. We covered examples of distance metrics, including the Euclidean distance, Manhattan distance, tangent distance, and cosine distance. We also discussed how experiments with various parameters and cross-validation can help to establish which value of k and which metric should be used.
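The core idea can be sketched in a few lines of plain Python. This is a minimal illustration, not the chapter's implementation: the function names and the tiny training set are invented for the example, and the Euclidean distance is used as the metric.

```python
import math
from collections import Counter

def euclidean(a, b):
    # Euclidean distance between two points given as coordinate tuples
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_classify(training, point, k):
    # training: list of (coordinates, class_label) pairs
    # Sort the training points by distance and keep the k nearest
    neighbors = sorted(training, key=lambda item: euclidean(item[0], point))[:k]
    # Assign the majority class among those k neighbors
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

training = [((1, 1), 'A'), ((2, 1), 'A'), ((8, 8), 'B'), ((9, 7), 'B')]
print(knn_classify(training, (2, 2), 3))  # → A
```

Swapping in another metric, such as the Manhattan distance, only requires changing the `key` function; this is where cross-validation over k and the metric would plug in.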
We also learned that the dimensionality and position of a data point are determined by its features. A large number of dimensions can result in low accuracy of the k-NN algorithm. Removing the dimensions that correspond to less important features can increase accuracy. Similarly, to increase accuracy further, each dimension's contribution to the distance should be scaled according to the importance of the feature in that dimension.
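One simple way to scale dimensions by importance is a weighted Euclidean distance. The sketch below is an illustration with invented weights and points, not a method from the chapter: a heavy-but-unimportant feature would otherwise dominate the distance, and downweighting it restores the influence of the important feature.

```python
import math

def weighted_euclidean(a, b, weights):
    # Scale each dimension's squared difference by that feature's
    # importance weight before summing
    return math.sqrt(sum(w * (x - y) ** 2 for x, y, w in zip(a, b, weights)))

# Hypothetical points: feature 0 is important, feature 1 is noisy but large
p, q = (1.0, 100.0), (2.0, 150.0)

# Unweighted: the distance is dominated by the large second feature
print(weighted_euclidean(p, q, (1.0, 1.0)))

# Downweighting the second feature lets the first one drive the distance
print(weighted_euclidean(p, q, (1.0, 0.0001)))
```

Setting a weight to zero is equivalent to removing that dimension entirely, so dimension reduction can be seen as the extreme case of this scaling.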
In the next chapter, we will look at the Naive Bayes algorithm, which classifies an element using probabilistic methods based on Bayes' theorem.