In this section, we'll explore hyperparameters: parameters that can't be learned by the solver itself.
We'll also cover trainable parameters (the parameters that are learned by the solver), non-trainable parameters (additional parameters in the model that don't require training), and, finally, hyperparameters (parameters that aren't learned by a traditional solver).
In our Model summary output screenshot, pay attention to the number of trainable parameters highlighted at the bottom of the screenshot. That is the number of individual floating-point values contained inside our model that the Adam optimizer, in conjunction with our categorical cross-entropy loss function, will adjust in order to find the best parameter values possible. So, this trainable parameter number is the only set of...
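To make the distinction concrete, here is a minimal sketch (not the model from the screenshot, just an illustrative one) of how Keras reports these counts. The specific layer sizes are assumptions for the example; the BatchNormalization layer is included because it contributes non-trainable parameters (its moving mean and variance), while the Dense layers contribute only trainable ones:

```python
import tensorflow as tf

# A small illustrative model: 4 input features, 3 output classes.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(10, activation="relu"),       # 4*10 + 10 = 50 trainable
    tf.keras.layers.BatchNormalization(),               # 20 trainable (gamma, beta),
                                                        # 20 non-trainable (moving stats)
    tf.keras.layers.Dense(3, activation="softmax"),     # 10*3 + 3 = 33 trainable
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

# model.summary() prints the Trainable params and Non-trainable params lines
# shown in the screenshot; the same counts are available programmatically:
trainable = sum(int(tf.size(w)) for w in model.trainable_weights)
non_trainable = sum(int(tf.size(w)) for w in model.non_trainable_weights)
print(trainable, non_trainable)  # 103 20
```

Note what is and isn't in these counts: the weights are trainable, the batch-norm moving statistics are non-trainable, and choices such as the layer width of 10 or the use of Adam are hyperparameters, which appear nowhere in the parameter totals.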