The simplest way to perform hyperparameter tuning is grid search. We define the values we would like to try for each hyperparameter; for example, if we are training decision trees, we may want to try depths of 5, 10, and 15, and at the same time check whether the better impurity measure is information gain or Gini. This yields 3 × 2 = 6 combinations that have to be tested for accuracy. As you might anticipate, the number of combinations grows exponentially with the number of hyperparameters being considered, so other techniques are used to avoid testing every combination exhaustively. A simple alternative is random search: instead of enumerating the full grid, we sample combinations at random. Some combinations will be missed, but the ones that are tried are spread across the search space without bias toward any particular region. Both approaches are sketched below.
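As a concrete illustration, here is a minimal sketch of both approaches using scikit-learn (an assumption on my part; the source does not name a specific library). It reproduces the six-combination grid from the example above, then runs a randomized search that samples only a subset of it; the dataset and all parameter values are illustrative.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.tree import DecisionTreeClassifier

# Illustrative dataset; any classification data would do.
X, y = load_iris(return_X_y=True)

# The grid from the text: 3 depths x 2 impurity measures = 6 combinations.
param_grid = {
    "max_depth": [5, 10, 15],
    "criterion": ["entropy", "gini"],  # information gain vs. Gini
}

# Grid search: every combination is cross-validated.
grid = GridSearchCV(DecisionTreeClassifier(random_state=0), param_grid, cv=5)
grid.fit(X, y)
print("Grid search best:", grid.best_params_, grid.best_score_)

# Random search: sample a fixed number of combinations instead of all of them.
rand = RandomizedSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_distributions=param_grid,
    n_iter=4,  # try only 4 of the 6 combinations
    cv=5,
    random_state=0,
)
rand.fit(X, y)
print("Random search best:", rand.best_params_, rand.best_score_)
```

With only two hyperparameters the full grid is cheap, but the same pattern applies when the grid has thousands of combinations, which is where capping the budget with `n_iter` pays off.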
AWS SageMaker provides a service for hyperparameter tuning that is...