Implementing a Training Algorithm
So far, we have covered the basics of neural network training. Important keywords such as "loss function", "mini-batch", "gradient", and "gradient descent" have appeared one after another. Here, by way of review, let's go over the procedure of neural network training.
Presupposition
A neural network has adaptable weights and biases. Adjusting them so that they fit the training data is called "training." Neural network training consists of four steps.
Step 1 (mini-batch)
Randomly select a portion of the data from the training data. The selected data is called a mini-batch. Our goal here is to reduce the value of the loss function computed on the mini-batch.
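As a minimal sketch of this step, assuming the training data are held in NumPy arrays x_train and t_train (the names and shapes below are illustrative, not taken from the text), a mini-batch could be drawn like this:

```python
import numpy as np

# Hypothetical training set: 60,000 examples with 784 input features
# and 10-dimensional labels (shapes are illustrative only).
x_train = np.random.rand(60000, 784)
t_train = np.random.rand(60000, 10)

train_size = x_train.shape[0]
batch_size = 100

# Draw batch_size indices at random and slice out the mini-batch.
batch_mask = np.random.choice(train_size, batch_size)
x_batch = x_train[batch_mask]
t_batch = t_train[batch_mask]
```

Each pass through this selection yields a different random subset, so over many iterations the network effectively sees the whole training set.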
Step 2 (calculating gradients)
To reduce the value of the loss function for the mini-batch, calculate the gradient with respect to each weight parameter. The gradient indicates the direction in which the value of the loss function decreases the most.
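As a sketch of this step, the gradient can be estimated numerically with central differences. The function below is an illustrative implementation; the loss function f passed to it (here a toy quadratic) stands in for the mini-batch loss of an actual network, which is not defined in this excerpt.

```python
import numpy as np

def numerical_gradient(f, x):
    """Estimate the gradient of f at x using central differences.

    f : callable taking the parameter array and returning the scalar loss
    x : NumPy array of weight parameters (temporarily modified, then restored)
    """
    h = 1e-4
    grad = np.zeros_like(x)

    it = np.nditer(x, flags=['multi_index'])
    while not it.finished:
        idx = it.multi_index
        original = x[idx]

        x[idx] = original + h
        fxh1 = f(x)          # f(x + h)

        x[idx] = original - h
        fxh2 = f(x)          # f(x - h)

        grad[idx] = (fxh1 - fxh2) / (2 * h)
        x[idx] = original    # restore the parameter
        it.iternext()

    return grad

# Illustrative use on a toy quadratic loss; a real network would pass a
# function that computes the mini-batch loss for the given weights.
w = np.array([[1.0, -2.0], [0.5, 3.0]])
grad = numerical_gradient(lambda p: np.sum(p ** 2), w)
print(grad)  # points in the direction of steepest increase; step the opposite way to descend
```

Because the gradient points toward increasing loss, the update in the following step moves the parameters a small amount in the opposite direction.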