Missing value imputation with K-nearest neighbor
KNN is a popular machine learning technique because it is intuitive, easy to run, and yields good results when the number of features (variables) and observations is not large. For the same reasons, it is often used to impute missing values. As its name suggests, KNN identifies the k observations whose features are most similar to a given observation. When used to impute missing values, KNN fills each missing value based on the values of that observation's nearest neighbors.
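The idea can be illustrated with a small sketch in plain NumPy (the data values, row and column indices, and choice of k here are all made up for illustration): to fill a missing feature for one row, we find the k rows most similar to it on the features it does have, then average their values for the missing feature.

```python
import numpy as np

# Toy data: rows are observations, columns are features;
# np.nan marks the missing value we want to fill.
X = np.array([
    [1.0, 2.0, np.nan],
    [1.5, 2.5, 4.0],
    [9.0, 9.0, 9.0],
    [1.2, 2.2, 3.0],
])

k = 2               # number of neighbors to use
target = 0          # row with the missing value
missing_col = 2     # column that is missing in that row
observed = [0, 1]   # columns observed for the target row

# Distance from the target row to every other row,
# computed only over the observed columns
dists = [
    (np.linalg.norm(X[target, observed] - X[i, observed]), i)
    for i in range(len(X)) if i != target
]

# The k nearest rows, then the mean of their values
# for the missing feature becomes the fill value
neighbors = [i for _, i in sorted(dists)[:k]]
fill = X[neighbors, missing_col].mean()
print(fill)  # → 3.5 (rows 3 and 1 are nearest; mean of 3.0 and 4.0)
```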
Getting ready
We will work with the National Longitudinal Survey data again in this recipe, and then try to impute reasonable values for the same school record data that we worked with in the preceding recipe.
You will need scikit-learn to run the code in this recipe. You can install it by entering pip install scikit-learn
in a Terminal or Windows PowerShell.
How to do it…
In this recipe, we will use scikit-learn's KNNImputer
class to fill in missing values...
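Before walking through the recipe, a minimal sketch of the core call may help (the array values and the choice of n_neighbors here are illustrative, not the recipe's actual survey data):

```python
import numpy as np
from sklearn.impute import KNNImputer

# Hypothetical data: rows are observations, columns are features;
# np.nan marks missing values.
X = np.array([
    [1.0, 2.0, np.nan],
    [3.0, 4.0, 3.0],
    [np.nan, 6.0, 5.0],
    [8.0, 8.0, 7.0],
])

# Each missing value is replaced by an average over that feature's
# values in the 2 nearest rows (distance is computed over the
# features present in both rows).
imputer = KNNImputer(n_neighbors=2)
X_filled = imputer.fit_transform(X)
print(X_filled)
```

By default KNNImputer uses a NaN-aware Euclidean distance and a uniform average over the neighbors; both can be adjusted via its `metric` and `weights` parameters.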