Summary
In this chapter, we covered three strategies for hyperparameter tuning, each of which searches for the estimator hyperparameterization that yields the best performance.
The manual search is the most hands-on of the three, but it gives you a unique feel for the process. It is suitable for situations where the estimator in question is simple, that is, has only a small number of hyperparameters.
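As a point of reference, a minimal sketch of a manual search might look like the following. This assumes a scikit-learn estimator; the classifier, the hyperparameter values tried, and the dataset are illustrative placeholders:

```python
# A minimal sketch of a manual search: try a few hand-picked values
# for one hyperparameter and compare cross-validated scores.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

for max_depth in (2, 3, 5, None):
    model = DecisionTreeClassifier(max_depth=max_depth, random_state=0)
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"max_depth={max_depth}: mean CV accuracy={score:.3f}")
```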
The grid search is an automated method and the most systematic of the three, but it can become very computationally expensive as the number of candidate hyperparameter combinations grows.
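A minimal sketch of a grid search, assuming scikit-learn's GridSearchCV, is shown below; the estimator, grid values, and dataset are illustrative placeholders:

```python
# A minimal sketch of a grid search: every combination in the grid is
# evaluated with cross-validation, so the cost grows multiplicatively
# with each hyperparameter added to the grid.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

param_grid = {
    "C": [0.1, 1, 10, 100],
    "gamma": ["scale", 0.01, 0.1, 1],
}
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```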
The random search, while the most complicated of the three to set up, samples hyperparameters from distributions. This lets you widen the search range and increases your chances of discovering a good solution that the grid or manual search might miss.
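A minimal sketch of a random search, assuming scikit-learn's RandomizedSearchCV with SciPy distributions, is shown below; again, the estimator, distributions, and dataset are illustrative placeholders:

```python
# A minimal sketch of a random search: hyperparameters are sampled from
# distributions rather than a fixed grid, so the search range can be
# widened without exploding the number of model fits.
from scipy.stats import loguniform
from sklearn.datasets import load_iris
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

param_distributions = {
    "C": loguniform(1e-2, 1e3),
    "gamma": loguniform(1e-4, 1e1),
}
search = RandomizedSearchCV(
    SVC(), param_distributions, n_iter=20, cv=5, random_state=0
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

In the next chapter, we will be looking at how to visualize results, summarize models, and articulate feature importance and weights.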