Neural networks have started to reach human-level accuracy on a number of tasks, and on some they appear to have even surpassed humans. But have they really surpassed us, or does it only seem that way? In production environments, we often have to deal with noisy data, which can cause our model to make incorrect predictions. So, we will now learn about another very important regularization method: adversarial training.
Before we get into the what and the how of adversarial training, let's take a look at the following diagram:
What we have done in the preceding diagram is add a negligible amount of Gaussian noise to the pixels of the original image. To us, the image looks exactly the same, but to a convolutional neural network it looks entirely different. This is a problem, and it occurs even when our models are perfectly trained...
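To make this concrete, here is a minimal sketch of the effect just described: it adds low-variance Gaussian noise to an image and compares the classifier's prediction before and after the perturbation. The pretrained ResNet-18, the noise scale of 0.02, and the input file cat.jpg are illustrative assumptions, not part of the original example; with a well-trained model the prediction will not always flip, but on borderline inputs it can.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained ImageNet classifier (illustrative choice of architecture).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

to_tensor = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

# "cat.jpg" is a hypothetical input image; pixel values end up in [0, 1].
image = to_tensor(Image.open("cat.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    clean_pred = model(normalize(image)).argmax(dim=1).item()

    # Add low-variance Gaussian noise in pixel space. To a human the image
    # looks unchanged, yet the predicted class can change.
    noisy_image = (image + 0.02 * torch.randn_like(image)).clamp(0.0, 1.0)
    noisy_pred = model(normalize(noisy_image)).argmax(dim=1).item()

print(f"clean prediction:   {clean_pred}")
print(f"noisy prediction:   {noisy_pred}")
print(f"prediction changed: {clean_pred != noisy_pred}")
```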