While optimizing the network architecture configuration—the hidden layer parameters—we have been using the default parameters of the MLP classifier. However, as we saw in the previous chapter, tuning the various hyperparameters has the potential to increase the classifier's performance. Can we incorporate hyperparameter tuning into our optimization? As you may have guessed, the answer is yes. But first, let's take a look at the hyperparameters we would like to optimize.
The sklearn implementation of the MLP classifier contains numerous tunable hyperparameters. For our demonstration, we will concentrate on the following hyperparameters:
| Name | Type | Description | Default value |
|------|------|-------------|---------------|
| activation | {'tanh', 'relu', 'logistic'} | Activation function for the hidden layers | 'relu' |
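As a quick illustration of how this hyperparameter can be varied, the sketch below compares the three candidate activation functions on a small dataset. The dataset (Iris), the cross-validation setup, and the specific parameter values are illustrative assumptions, not part of the optimization described in this chapter:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)

# Try each candidate activation function, keeping all other
# MLPClassifier hyperparameters at their sklearn defaults:
for activation in ('tanh', 'relu', 'logistic'):
    clf = MLPClassifier(activation=activation,
                        max_iter=1000,     # raised so training can converge
                        random_state=42)   # fixed seed for repeatability
    score = cross_val_score(clf, X, y, cv=3).mean()
    print(f"{activation}: mean CV accuracy = {score:.3f}")
```

Note that when no `activation` argument is passed, sklearn falls back to the default value of `'relu'`, which is what we have implicitly been using so far.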