Hyperparameter optimization with search methods
We've already seen some hyperparameters for the few ML models we've examined so far, for example, the regularization strength hyperparameters for linear and logistic regression, or the value of k (for the number of nearest neighbors) in KNN.
Remember that hyperparameters are settings for a model that we choose, while parameters are values the model learns from the data (like the coefficients for linear or logistic regression). Note that this is a different usage of the word than in programming, where arguments provided to functions are also called parameters, as we've seen previously.
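To make the distinction concrete, here is a minimal sketch (using sklearn with a synthetic dataset) showing hyperparameters we set before training versus parameters the model learns during fitting:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

# synthetic data just for illustration
X, y = make_classification(n_samples=200, n_features=5, random_state=42)

# Hyperparameters: values WE choose before training.
lr = LogisticRegression(C=0.1)             # C = regularization strength
knn = KNeighborsClassifier(n_neighbors=7)  # k = number of neighbors

# Parameters: values the model LEARNS from the data during fitting.
lr.fit(X, y)
print(lr.coef_)       # learned coefficients
print(lr.intercept_)  # learned intercept
```

KNN is a useful contrast here: it has the k hyperparameter but learns no coefficients at all, since it simply stores the training data.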
We've also seen that sklearn provides cross-validation (CV) classes for linear and logistic regression that allow us to optimize the C or alpha hyperparameter for regularization strength. However, these built-in CV methods search over only a single hyperparameter and cannot tune multiple hyperparameters at once. When we introduce more than one hyperparameter, we need more general search methods.