Nearest-neighbor methods are rooted in a simple distance-based idea: we treat the training set itself as the model, and make predictions on new points based on how close they are to points in the training set. The most naive method is to assign a new point the same class as its single closest training point. But since most datasets contain some degree of noise, a more robust method is to aggregate over a set of k nearest neighbors, for example by taking a weighted average of their targets. This method is called k-nearest neighbors (k-NN).
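The naive single-neighbor idea above can be sketched as follows; this is a minimal illustration (the function name and the toy data are our own, not from any particular library), assuming Euclidean distance and a majority vote over the k nearest points:

```python
import numpy as np

def knn_predict(train_x, train_y, z, k=3):
    # Illustrative sketch: classify point z by a majority vote of the
    # classes of its k nearest training points (Euclidean distance).
    dists = np.linalg.norm(train_x - z, axis=1)   # distance from z to every training point
    nearest = np.argsort(dists)[:k]               # indices of the k closest points
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]              # most common class among the neighbors

# Toy data: two clusters, one per class
train_x = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
train_y = np.array([0, 0, 1, 1])
print(knn_predict(train_x, train_y, np.array([4.8, 5.1])))  # a point near the class-1 cluster
```

Setting k=1 recovers the naive closest-point rule; larger k averages out label noise at the cost of blurring class boundaries.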
Given a training dataset (x1, x2, ..., xn) with corresponding targets (y1, y2, ..., yn), we can make a prediction on a new point, z, by looking at a set of its nearest neighbors. The exact method of prediction depends on whether we are performing regression (continuous targets) or classification (discrete targets).
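For the continuous (regression) case, one common choice, sketched below under the assumption of inverse-distance weighting (the function name and weighting scheme are illustrative, not prescribed by the text), is to return a weighted average of the k nearest targets:

```python
import numpy as np

def knn_regress(train_x, train_y, z, k=3):
    # Illustrative sketch: predict a continuous target for z as the
    # inverse-distance-weighted average of its k nearest neighbors' targets.
    dists = np.linalg.norm(train_x - z, axis=1)   # distance from z to every training point
    nearest = np.argsort(dists)[:k]               # indices of the k closest points
    weights = 1.0 / (dists[nearest] + 1e-8)       # closer neighbors count more; avoid div-by-zero
    return np.sum(weights * train_y[nearest]) / np.sum(weights)

# Toy data: targets roughly track position
train_x = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
train_y = np.array([1.0, 1.2, 3.0, 2.8])
print(knn_regress(train_x, train_y, np.array([5.1, 5.0]), k=2))
```

With uniform weights this reduces to a plain mean over the k neighbors; inverse-distance weights instead let the closest points dominate the prediction.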
For discrete classification targets, the prediction can be given by a maximum voting scheme, weighted...