Dropout Regularization
In this section, you will learn how dropout regularization works, how it helps reduce overfitting, and how to implement it using Keras. Lastly, you will practice what you have learned about dropout by completing an activity involving a real-life dataset.
Principles of Dropout Regularization
Dropout regularization works by randomly removing nodes from a neural network during training. More precisely, dropout assigns a probability to each node. This probability, often called the dropout rate, is the chance that the node is removed from the network at each iteration of the learning algorithm. Imagine we have a large neural network where a dropout rate of 0.5 is assigned to each node. In such a case, at each iteration, the learning algorithm flips a coin for each node to decide whether that node will be removed from the network or not. An illustration of such a process can be seen in the following diagram:
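The coin-flip mechanism described above can be sketched in a few lines of NumPy. This is a minimal illustration of the idea, not the Keras layer itself; the function name dropout and the use of inverted dropout (scaling the kept activations by 1 / (1 - rate) so their expected sum is unchanged) are choices made here for illustration:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def dropout(activations, rate=0.5):
    # Flip a coin for each node: a node is kept with probability (1 - rate).
    mask = rng.random(activations.shape) >= rate
    # Inverted dropout: scale the surviving activations so the expected
    # total activation matches the network without dropout.
    return activations * mask / (1.0 - rate)

# With rate=0.5, roughly half the nodes are zeroed out at each call,
# and the kept nodes are scaled up to 2.0.
print(dropout(np.ones(8), rate=0.5))
```

Each call to the function simulates one training iteration: a fresh random mask is drawn every time, so a different subset of nodes is removed at each step, which is exactly the behavior the diagram below illustrates.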