Regularization is a way to deal with the problem of overfitting: it modifies the learning algorithm, or the model itself, so that the model performs well not just on the training data but also on new inputs.
One of the most widely used solutions to the overfitting problem—and probably one of the simplest to understand and analyze—is known as dropout.
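As a rough illustration of the idea, here is a minimal sketch of inverted dropout in NumPy: during training, each unit's activation is zeroed with probability `p`, and the survivors are scaled by `1/(1-p)` so the expected activation is unchanged; at test time the layer is an identity. The function name `dropout` and the parameter `p` are illustrative choices, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p=0.5, training=True):
    """Inverted dropout (illustrative sketch).

    During training, each element of x is zeroed with probability p,
    and the surviving elements are scaled by 1 / (1 - p) so that the
    expected value of the output matches the input. At test time the
    input is returned unchanged.
    """
    if not training or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p  # each unit kept with probability 1 - p
    return x * mask / (1.0 - p)

h = np.ones(10_000)
print(dropout(h, p=0.5).mean())          # close to 1.0 in expectation
print(dropout(h, training=False).mean()) # exactly 1.0 at test time
```

Because the scaling is applied during training rather than at inference, the network needs no special handling when it is deployed: the same forward pass works with dropout simply switched off.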