So far, we have been minimizing the loss using the Adam optimizer. In this section, we will do the following:
- Modify the optimizer so that it becomes a Stochastic Gradient Descent (SGD) optimizer
- Revert to a batch size of 32 while fetching data in the DataLoader
- Increase the number of epochs to 10 (so that we can compare the performance of SGD and Adam over a larger number of epochs)
With these changes, only one step from the Batch size of 32 section needs to change (the batch size there is already 32): we will swap the optimizer for SGD. A sketch of the overall setup follows.
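For reference, here is a minimal sketch of the data-loading and epoch settings listed above. The names `train_dataset`, `get_model`, and `train_batch` are placeholders standing in for the objects defined in the Batch size of 32 section; they are assumptions for illustration, not the notebook's exact code.

```python
import torch
from torch.utils.data import DataLoader

# Assumed to exist from the Batch size of 32 section (hypothetical names):
# train_dataset - the training Dataset
# get_model     - returns (model, loss_fn, optimizer)
# train_batch   - runs one forward/backward pass and returns the batch loss
trn_dl = DataLoader(train_dataset, batch_size=32, shuffle=True)

model, loss_fn, optimizer = get_model()
for epoch in range(10):  # 10 epochs, so SGD and Adam can be compared over a longer run
    for x, y in trn_dl:
        batch_loss = train_batch(x, y, model, optimizer, loss_fn)
```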
Let's modify the get_model function in step 4 of the Batch size of 32 section so that it uses the SGD optimizer instead, as follows:
The following code is available as Varying_loss_optimizer.ipynb in the Chapter03 folder of this book's GitHub repository - https://tinyurl...
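A minimal sketch of the modified get_model function is shown here. The layer sizes, learning rate, and device handling are assumptions based on a typical fully connected setup from earlier in the chapter, not necessarily the exact code in the notebook; the key point is that only the optimizer line changes.

```python
import torch
from torch import nn
from torch.optim import SGD

device = 'cuda' if torch.cuda.is_available() else 'cpu'

def get_model():
    # A simple fully connected network, assumed to match the one defined
    # in the Batch size of 32 section (exact layer sizes may differ).
    model = nn.Sequential(
        nn.Linear(28 * 28, 1000),
        nn.ReLU(),
        nn.Linear(1000, 10)
    ).to(device)
    loss_fn = nn.CrossEntropyLoss()
    # The only change from before: SGD in place of Adam
    # (the learning rate here is an assumed value).
    optimizer = SGD(model.parameters(), lr=1e-2)
    return model, loss_fn, optimizer
```

Every other step (data loading, the training loop over 10 epochs, and the loss/accuracy plots) stays the same as in the Batch size of 32 section.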