Finding the Best Hyperparameterization
The best hyperparameterization depends on your overall objective in building a machine learning model in the first place. In most cases, this is to find the model with the highest predictive performance on unseen data, as measured by its ability to correctly label data points (classification) or to predict a numerical value (regression).
Prediction on unseen data can be simulated using hold-out test sets or cross-validation; the former is the method used in this chapter. Performance is evaluated with a metric that matches the task, for instance, Mean Squared Error (MSE) for regression and accuracy for classification. We therefore seek to minimize the MSE or maximize the accuracy of our predictions.
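As a concrete illustration of this workflow, the following is a minimal sketch of hold-out evaluation for a regression model, assuming scikit-learn and its bundled diabetes dataset; the split size and random seed are arbitrary choices, not values prescribed by the chapter.

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Load a small regression dataset bundled with scikit-learn (illustrative choice)
X, y = load_diabetes(return_X_y=True)

# Hold out 30% of the data to simulate unseen observations
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# Fit on the training portion only
model = LinearRegression()
model.fit(X_train, y_train)

# Evaluate on the held-out data; a lower MSE indicates better predictions
mse = mean_squared_error(y_test, model.predict(X_test))
print(f"Hold-out MSE: {mse:.2f}")
```

For classification, the same pattern applies, with accuracy (to be maximized) taking the place of MSE (to be minimized).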
Let's implement manual hyperparameter tuning in the following exercise.
Exercise 8.01: Manual Hyperparameter Tuning for a k-NN Classifier
In this exercise, we will manually tune a k-NN classifier, which was covered in Chapter 7, The Generalization of Machine...
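Before working through the exercise steps, the sketch below gives a hedged outline of the idea: loop over candidate values of the n_neighbors hyperparameter, evaluate each on a hold-out set, and keep the value with the highest accuracy. The dataset, candidate grid, and split shown here are illustrative assumptions, not the exercise's prescribed setup.

```python
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Illustrative data and hold-out split
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# Manually try each candidate value of k and record its hold-out accuracy
best_k, best_accuracy = None, 0.0
for k in [1, 3, 5, 7, 9, 11]:
    model = KNeighborsClassifier(n_neighbors=k)
    model.fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))
    print(f"k={k}: accuracy={accuracy:.3f}")
    if accuracy > best_accuracy:
        best_k, best_accuracy = k, accuracy

print(f"Best k: {best_k} (accuracy={best_accuracy:.3f})")
```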