We're now going to optimize the weight initialization applied to each of these neurons:
- To do this, we will first copy the code from the cell we ran in the previous Reducing overfitting using dropout regularization section and paste it into a new one. In this section, we won't be changing the general structure of the code; instead, we will modify some parameters and the set of hyperparameters being searched.
- We now know the best learn_rate and dropout_rate, so we will hardcode these values and remove them from the search grid. We will also remove the Dropout layers we added in the previous section, and we will set the learning rate of the Adam optimizer to 0.001, as this is the best value that we found.
- Since we are now trying to optimize the activation and init variables, we will define them as parameters of the create_model() function so that the grid search can vary them (see the sketch after this list).
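The following is a minimal sketch of what the modified cell might look like. It assumes the legacy keras.wrappers.scikit_learn.KerasClassifier wrapper and a small binary-classification network; the layer sizes, input_dim, epochs, batch_size, and the candidate lists for activation and init are illustrative placeholders, not values from the original code:

```python
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV

def create_model(activation='relu', init='glorot_uniform'):
    # Dropout layers removed; architecture and input_dim are placeholders --
    # substitute the network from the previous section.
    model = Sequential()
    model.add(Dense(16, input_dim=8, kernel_initializer=init,
                    activation=activation))
    model.add(Dense(8, kernel_initializer=init, activation=activation))
    model.add(Dense(1, kernel_initializer=init, activation='sigmoid'))
    # Learning rate hardcoded to the best value found earlier
    model.compile(loss='binary_crossentropy',
                  optimizer=Adam(lr=0.001),
                  metrics=['accuracy'])
    return model

model = KerasClassifier(build_fn=create_model,
                        epochs=100, batch_size=20, verbose=0)

# Candidate values for the search; adjust to the options you want to try
activation = ['relu', 'tanh', 'sigmoid', 'linear']
init = ['uniform', 'normal', 'glorot_uniform', 'he_uniform']
param_grid = dict(activation=activation, init=init)

grid = GridSearchCV(estimator=model, param_grid=param_grid, cv=3)
# grid_result = grid.fit(X, y)  # X, y: your training data
# print(grid_result.best_score_, grid_result.best_params_)
```

Because activation and init are arguments of create_model(), GridSearchCV can pass each candidate combination through the wrapper automatically, so learn_rate and dropout_rate no longer need to appear in param_grid.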