From VGG16 to VGG19, the number of layers increased, and in general, the deeper a neural network, the better its accuracy. However, if merely adding layers were the trick, we could keep stacking more of them (while taking care to avoid overfitting) and obtain ever more accurate models.
Unfortunately, that does not turn out to be true; instead, the vanishing gradient problem appears. As the number of layers grows, the gradient shrinks with each layer it traverses during backpropagation, until it becomes too small to meaningfully update the weights of the early layers, and network performance deteriorates.
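You can observe this effect directly. Below is a minimal PyTorch sketch (the depth of 30 layers, the width of 64, and the sigmoid activation are illustrative choices, not from the original) that builds a deep "plain" stack and prints the gradient norm of each layer after one backward pass; the norms shrink sharply as we move from the last layer back toward the first:

```python
import torch
import torch.nn as nn

# A deep "plain" stack: 30 small linear layers with sigmoid activations.
# Sigmoid saturates easily, so each layer multiplies the backpropagated
# gradient by a factor well below 1.
layers = []
for _ in range(30):
    layers += [nn.Linear(64, 64), nn.Sigmoid()]
net = nn.Sequential(*layers)

x = torch.randn(8, 64)
loss = net(x).sum()
loss.backward()

# Gradient norms shrink as we move from the last layer back to the first,
# so the earliest layers receive almost no learning signal.
for i, layer in enumerate(net):
    if isinstance(layer, nn.Linear):
        print(f"layer {i:2d}  grad norm: {layer.weight.grad.norm():.2e}")
```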
ResNet was designed to address exactly this problem.
Imagine a scenario where a convolutional layer does nothing but pass the output of the previous layer on to the next layer...
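To make this identity-mapping idea concrete, here is a minimal sketch of a basic residual block, assuming PyTorch (the class name, channel count, and two-convolution structure are illustrative choices, not taken from the original text). The key point is the shortcut: the block computes F(x) and adds the unmodified input x back in, so if the stacked convolutions contribute nothing, the block still passes its input through unchanged:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicResidualBlock(nn.Module):
    """Two 3x3 convolutions plus an identity shortcut: out = F(x) + x.

    If training drives F(x) toward zero, the block reduces to the
    identity mapping described above, so adding such blocks should not
    make the network worse than its shallower counterpart.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)  # skip connection: add the input back in


block = BasicResidualBlock(channels=64)
y = block(torch.randn(1, 64, 32, 32))  # shape is preserved: (1, 64, 32, 32)
```

The shortcut also matters during backpropagation: because the addition passes gradients straight through to earlier layers, it provides a direct path that counteracts the vanishing gradient effect described above.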