Estimating missing data with nearest neighbors
Imputation with K-Nearest Neighbors (KNN) estimates missing values in a dataset from the values of each observation’s nearest neighbors, where similarity between data points is determined by a distance metric, such as the Euclidean distance. The missing value is replaced with the average of the neighbors’ values, optionally weighted so that closer neighbors contribute more.
Consider the following dataset containing 4 variables (columns) and 11 observations (rows). We want to impute the dark value in the fifth row of the second variable. First, we find the row’s k-nearest neighbors, where k=3 in our example; they are highlighted by the rectangular boxes (middle panel). Next, we take the average of the values shown by the three closest neighbors for variable 2.
Figure 1.11 – Diagram showing a value to impute (dark box), the three closest rows to the value to impute (square boxes), and the values considered to take the average for the imputation
The value for the imputation is given by (value1 × w1 + value2 × w2 + value3 × w3) / (w1 + w2 + w3), where w1, w2, and w3 are inversely proportional to each neighbor’s distance to the observation to impute, so closer neighbors carry more weight.
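The formula above can be sketched with toy numbers (these neighbor values and distances are illustrative, not taken from the figure):

```python
import numpy as np

# Hypothetical values of the three nearest neighbors for variable 2,
# and their distances to the observation with the missing value.
neighbor_values = np.array([2.0, 3.0, 7.0])
distances = np.array([1.0, 2.0, 4.0])

# Weights are inversely proportional to the distances,
# so the closest neighbor contributes most.
weights = 1.0 / distances

# Weighted average: sum(value_i * w_i) / sum(w_i).
imputed = np.sum(neighbor_values * weights) / np.sum(weights)
print(imputed)  # → 3.0
```

Note that the unweighted mean of the three values would be 4.0; the inverse-distance weighting pulls the estimate toward the closest neighbors.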
In this recipe, we will perform KNN imputation using scikit-learn.
How to do it...
To proceed with the recipe, let’s import the required libraries and prepare the data:
- Let’s import the required libraries, classes, and functions:
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.impute import KNNImputer
- Let’s load the dataset described in the Technical requirements section (only some numerical variables):
variables = [
    "A2", "A3", "A8", "A11", "A14", "A15", "target"]
data = pd.read_csv(
    "credit_approval_uci.csv",
    usecols=variables,
)
- Let’s divide the data into train and test sets:
X_train, X_test, y_train, y_test = train_test_split(
    data.drop("target", axis=1),
    data["target"],
    test_size=0.3,
    random_state=0,
)
- Let’s set up the imputer to replace missing data with the weighted mean of its closest five neighbors:
imputer = KNNImputer(
    n_neighbors=5,
    weights="distance",
).set_output(transform="pandas")
Note
The replacement values can be calculated as the uniform mean of the k-nearest neighbors’ values, by setting weights to "uniform", or as the weighted average, as we do in this recipe, where each weight is based on the distance of the neighbor to the observation to impute. The nearest neighbors carry more weight.
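The difference between the two weighting schemes can be seen on a tiny toy matrix (made up for illustration, not the recipe’s dataset):

```python
import numpy as np
from sklearn.impute import KNNImputer

# Toy matrix with one missing value in the second column.
X = np.array([
    [1.0, 2.0],
    [2.0, 4.0],
    [3.0, 6.0],
    [1.2, np.nan],
])

# weights="uniform": the plain mean of the 2 nearest neighbors' values.
uniform = KNNImputer(n_neighbors=2, weights="uniform").fit_transform(X)

# weights="distance": neighbors are weighted by inverse distance.
distance = KNNImputer(n_neighbors=2, weights="distance").fit_transform(X)

# The last row's 2 nearest neighbors are the first two rows,
# with values 2.0 and 4.0 for the second column.
print(round(uniform[3, 1], 2))   # → 3.0
print(round(distance[3, 1], 2))  # → 2.4
```

With uniform weights the estimate is the midpoint of 2.0 and 4.0; with distance weights it is pulled toward 2.0, the value of the much closer first row.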
- Find the nearest neighbors:
imputer.fit(X_train)
- Replace the missing values with the weighted mean of the values shown by the neighbors:
X_train_t = imputer.transform(X_train)
X_test_t = imputer.transform(X_test)
The result is a pandas DataFrame with the missing data replaced.
How it works...
In this recipe, we replaced missing data with the average of the values shown by each observation’s k-nearest neighbors. We set up KNNImputer() to find each observation’s five closest neighbors based on the Euclidean distance. The replacement values were estimated as the distance-weighted average of the values shown by the five closest neighbors for the variable to impute. With transform(), the imputer calculated the replacement values and replaced the missing data.
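The fit/transform split matters here: fit() stores the training data, and transform() searches for neighbors in that stored training set, so test rows are imputed using training observations only. A minimal sketch with toy arrays (not the recipe’s data):

```python
import numpy as np
from sklearn.impute import KNNImputer

# Toy train and test sets.
X_train = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
X_test = np.array([[2.0, np.nan]])

imputer = KNNImputer(n_neighbors=1)
imputer.fit(X_train)  # stores the training data

# The test row's single nearest training neighbor is [2.0, 20.0],
# so the missing value is filled with that neighbor's value, 20.0.
X_test_t = imputer.transform(X_test)
print(X_test_t[0, 1])  # → 20.0
```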