Introduction
In previous chapters, we discussed several methods to arrive at a model that performs well. These include transforming the data through preprocessing, feature engineering, and scaling, as well as choosing an appropriate estimator (algorithm) from the large set that scikit-learn makes available.
Depending on which estimator you eventually select, there may be settings that can be adjusted to improve overall predictive performance. These settings are known as hyperparameters, and the process of finding the best ones is known as tuning or optimization. Properly tuned hyperparameters can yield performance improvements well into the double-digit percentages, so tuning is worth the effort in any modeling exercise.
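As a concrete illustration, here is a minimal sketch of what setting hyperparameters looks like in scikit-learn. The estimator (RandomForestClassifier), the toy iris dataset, and the particular values chosen are illustrative assumptions, not settings recommended by this chapter:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# Each keyword argument below is a hyperparameter: a setting chosen by the
# user before fitting, rather than a parameter learned from the data.
default_model = RandomForestClassifier(random_state=0)
adjusted_model = RandomForestClassifier(n_estimators=300, max_depth=5, random_state=0)

for name, model in [("default", default_model), ("adjusted", adjusted_model)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean cross-validated accuracy = {scores.mean():.3f}")

# get_params() lists every hyperparameter an estimator exposes, which is a
# useful starting point when deciding what to tune.
print(sorted(adjusted_model.get_params()))
```

The values passed here are simply hand-picked guesses; the strategies in the rest of this chapter are about searching for good values systematically rather than by trial and error.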
This chapter will discuss the concept of hyperparameter tuning and will present some simple strategies that you can use to help find the best hyperparameters for your estimators.