Dropout Regularization
In this section, you will learn how dropout regularization works, how it helps reduce overfitting, and how to implement it using Keras. Finally, you will have the chance to practice what you have learned by completing an activity involving a real-life dataset.
Principles of Dropout Regularization
Dropout regularization works by randomly removing nodes from a neural network during training. More precisely, dropout assigns each node a probability that determines the chance of that node being excluded from the network at each iteration of the learning algorithm. Imagine a large neural network in which each node is assigned a dropout probability of 0.5. Then, at each iteration, the learning algorithm flips a coin for each node to decide whether that node is removed from the network. An illustration of this process is shown in the following figure. This process is repeated at each iteration; this means that at each iteration...
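The coin-flip mechanism described above can be sketched in a few lines. The following is a minimal NumPy illustration (not the Keras layer itself, which is introduced later): each node's activation is kept or zeroed out according to the dropout probability, and the survivors are rescaled by 1/(1 - rate), a common convention known as inverted dropout, so that the expected activation is unchanged. The function name and the rescaling choice are illustrative assumptions, not part of the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_dropout(activations, rate=0.5):
    # Flip a coin for each node: keep it with probability (1 - rate).
    keep = rng.random(activations.shape) >= rate
    # Zero out the dropped nodes and rescale the survivors by
    # 1 / (1 - rate) ("inverted dropout") so the expected value
    # of each activation stays the same during training.
    return activations * keep / (1.0 - rate)

# With rate=0.5, roughly half the nodes are silenced on each call,
# and the surviving activations are doubled.
layer_output = np.ones(8)
print(apply_dropout(layer_output, rate=0.5))
```

Note that this masking is applied only during training; at prediction time all nodes are used, which is exactly what the Keras implementation handles for you.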