The residual network (ResNet) is an architecture that, through a new type of building block (the residual block) and the concept of residual learning, has allowed researchers to train networks at depths that were unthinkable with the classic feedforward model, which suffers from the degradation problem and vanishing gradients as depth grows.
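As a minimal sketch of the idea, the following Keras code builds a basic residual block: two convolutions form the main path, and a skip connection adds the block's input back to its output, so the layers only need to learn a residual correction. The layer sizes and input shape here are illustrative choices, not values from the text.

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters):
    """A basic residual block: y = ReLU(F(x) + x)."""
    shortcut = x
    # Main path: two 3x3 convolutions with batch normalization.
    y = layers.Conv2D(filters, 3, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    # Skip connection: add the input back, so the block learns a residual.
    y = layers.Add()([y, shortcut])
    return layers.ReLU()(y)

# Illustrative input shape; the skip connection requires the output of the
# main path to match the input's shape.
inputs = tf.keras.Input(shape=(32, 32, 64))
outputs = residual_block(inputs, 64)
model = tf.keras.Model(inputs, outputs)
```

Because the identity shortcut is a simple addition, the gradient can flow through it unchanged during backpropagation, which is what makes very deep stacks of such blocks trainable.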
Pretrained models are trained on large datasets, which allows them to achieve excellent performance. We can therefore adopt a pretrained model for a problem similar to the one we want to solve, mitigating a lack of training data. Because training such models is computationally expensive, they are made available in ready-to-use form. For example, the Keras library offers several models, such as Xception, VGG16, VGG19, ResNet, ResNetV2, ResNeXt, InceptionV3, and InceptionResNetV2, among others.
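As an illustration, a pretrained model from `keras.applications` can be loaded in a single call. In this sketch the weights argument is set to `None` so the snippet runs without downloading anything; passing `weights="imagenet"` instead fetches the pretrained ImageNet weights, which is the typical transfer-learning setup.

```python
from tensorflow.keras.applications import VGG16

# weights="imagenet" would download the pretrained ImageNet weights;
# weights=None builds the same architecture with random initialization.
# include_top=False drops the classifier head so the convolutional base
# can be reused as a feature extractor for a new task.
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the base when fine-tuning a new head
```

A new classifier head can then be stacked on top of `base` and trained on the target dataset while the pretrained feature extractor stays frozen.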