K-nearest neighbors with dynamic time warping
K-nearest neighbors (kNN) is a well-known machine learning method, sometimes also described as a form of case-based reasoning. In kNN, we use a distance measure to find the k data points most similar to a query point. We then take the known labels of these nearest neighbors and aggregate them into an output, typically by majority vote for classification or by averaging for regression.
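As a quick sketch of this mechanism, here is generic kNN classification with scikit-learn on toy tabular data; the feature vectors and labels are made up for illustration:

# A minimal sketch of kNN classification with scikit-learn.
# The training points and query are a toy example.
from sklearn.neighbors import KNeighborsClassifier

X_train = [[0.0, 0.1], [0.2, 0.0], [1.0, 1.1], [0.9, 1.0]]
y_train = [0, 0, 1, 1]

# With k=3, each prediction is the majority vote among the
# three nearest training points under Euclidean distance.
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)
print(knn.predict([[0.1, 0.2]]))  # -> [0]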
Figure 7.3 illustrates the basic idea of kNN for classification (source: Wikimedia Commons, https://commons.wikimedia.org/wiki/File:KnnClassification.svg):
Figure 7.3: K-nearest neighbor for classification
We already know a few data points. In the preceding illustration, these points are drawn as squares and triangles, representing two different classes. Given a new data point, indicated by a circle, we find the known data points closest to it. In this example, most of the new point's nearest neighbors are triangles, so we might assume that the new point belongs to the triangle class as well.
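For time series, the distance measure of choice in this chapter is dynamic time warping. The following is a minimal sketch of a DTW-based kNN classifier; the helper names dtw_distance and knn_classify and the toy series are illustrative, not taken from any particular library:

# A hand-rolled DTW distance plus a kNN classifier over series.
# All names and data here are hypothetical, for illustration only.
import numpy as np

def dtw_distance(s1, s2):
    """Dynamic time warping distance between two 1-D series."""
    n, m = len(s1), len(s2)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(s1[i - 1] - s2[j - 1])
            # Each cell extends the cheapest of the three allowed
            # alignment moves: insertion, deletion, or match.
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m]

def knn_classify(query, train_series, train_labels, k=3):
    """Label a query series by majority vote among its k DTW-nearest neighbors."""
    dists = [dtw_distance(query, s) for s in train_series]
    nearest = np.argsort(dists)[:k]
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Toy example: two classes of short series (rising vs. falling).
train = [np.array([1, 2, 3, 4]), np.array([1, 2, 2, 4]),
         np.array([4, 3, 2, 1]), np.array([4, 3, 3, 1])]
labels = ["rising", "rising", "falling", "falling"]
print(knn_classify(np.array([1, 1, 3, 4]), train, labels, k=3))  # rising

The quadratic DTW loop above is written for clarity rather than speed; in practice, libraries such as tslearn ship optimized DTW implementations and a ready-made nearest-neighbor time-series classifier built on top of them.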