Exploring GoogLeNet and Inception v3
In tracing the progression of CNN models from LeNet to VGG, we have so far observed the sequential stacking of ever more convolutional and fully connected layers, which produced deep networks with a large number of trainable parameters. GoogLeNet emerged as a radically different CNN architecture, built around a module of parallel convolutional layers called the inception module. For this reason, GoogLeNet is also known as Inception v1 (v1 marking the first version, as more versions came along later). Some of the drastically new elements introduced in GoogLeNet were the following:
- The inception module – a module of several parallel convolutional layers
- Using 1x1 convolutions to reduce the number of model parameters
- Global average pooling instead of a fully connected layer – reduces overfitting
- Using auxiliary classifiers during training – for regularization and gradient stability
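The first two ideas above can be sketched in a few lines of PyTorch. The module below runs four branches in parallel and concatenates their outputs along the channel dimension; the 1x1 convolutions in front of the 3x3 and 5x5 branches reduce the channel count, and therefore the parameter count, before the expensive convolutions. The channel sizes here are illustrative choices, not the exact values from the GoogLeNet paper:

```python
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    """A simplified inception module: four parallel convolutional
    branches whose outputs are concatenated along the channel axis.
    Channel sizes are illustrative, not the paper's exact values."""
    def __init__(self, in_channels):
        super().__init__()
        # Branch 1: plain 1x1 convolution
        self.b1 = nn.Conv2d(in_channels, 16, kernel_size=1)
        # Branch 2: 1x1 reduction followed by a 3x3 convolution
        self.b2 = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=1),
            nn.Conv2d(16, 24, kernel_size=3, padding=1),
        )
        # Branch 3: 1x1 reduction followed by a 5x5 convolution
        self.b3 = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=1),
            nn.Conv2d(16, 24, kernel_size=5, padding=2),
        )
        # Branch 4: 3x3 max pooling followed by a 1x1 convolution
        self.b4 = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_channels, 16, kernel_size=1),
        )

    def forward(self, x):
        # Every branch preserves the spatial size, so the outputs
        # can be concatenated along dim=1 (the channel dimension)
        return torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], dim=1)

x = torch.randn(1, 3, 32, 32)
out = InceptionModule(3)(x)
print(out.shape)  # channels: 16 + 24 + 24 + 16 = 80
```

To see why the 1x1 reduction saves parameters, compare the 5x5 branch with and without it for, say, 192 input channels: a direct 5x5 convolution to 24 channels needs 5·5·192·24 ≈ 115K weights, whereas reducing to 16 channels first needs 192·16 + 5·5·16·24 ≈ 13K.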
GoogLeNet has...