Hyperparameter tuning using HyperDrive
In Chapter 8, Experimenting with Python Code, you trained a LassoLars model that accepted the alpha parameter. To avoid overfitting to the training dataset, the LassoLars model uses a technique called regularization, which introduces a penalty term into the model's optimization formula. You can think of the linear regression you are trying to fit as a normal linear function fitted with the least-squares objective, plus this penalty term. The alpha parameter specifies how much weight this penalty term carries, which directly impacts the training outcome. Parameters that affect the training process are referred to as hyperparameters. To better understand what a hyperparameter is, we are going to explore the hyperparameters of a decision tree. In a decision tree classifier model, such as the DecisionTreeClassifier
class located in the scikit-learn
library, you can define...
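The effect of alpha described above can be sketched with a short experiment. This is a minimal illustration using a synthetic dataset (an assumption for this sketch, not the dataset used in Chapter 8): as alpha grows, the regularization penalty pushes more coefficients toward zero.

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoLars

# Synthetic regression data for illustration only
X, y = make_regression(n_samples=200, n_features=10, noise=10.0, random_state=42)

# Larger alpha values strengthen the penalty term, shrinking more
# coefficients toward (or exactly to) zero
for alpha in (0.001, 0.1, 10.0):
    model = LassoLars(alpha=alpha).fit(X, y)
    nonzero = int((model.coef_ != 0).sum())
    print(f"alpha={alpha}: {nonzero} non-zero coefficients")
```

Choosing among such alpha values by hand is exactly the kind of repetitive search that hyperparameter tuning with HyperDrive automates.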