Summary
This chapter presented new techniques for achieving state-of-the-art classification results, such as batch normalization, global average pooling, residual connections, and dense blocks.
These techniques have led to the building of residual networks and densely connected networks.
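As a quick refresher, here is a minimal sketch of a residual block combining batch normalization with a shortcut connection. It assumes a Keras-style functional API, which may differ from the library used in this chapter; the function name, `filters` parameter, and layer arrangement are illustrative, and the input is assumed to already have `filters` channels so the addition shapes match:

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters=64):
    """A minimal residual block sketch: two batch-normalized 3x3
    convolutions whose output is added back to the block input."""
    shortcut = x  # assumes x already has `filters` channels
    y = layers.Conv2D(filters, 3, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.add([y, shortcut])  # the residual (skip) connection
    return layers.Activation("relu")(y)
```

Stacking such blocks is what allows residual networks to grow very deep: the shortcut gives gradients a direct path back through the network.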
The use of multiple GPUs helps with training image classification networks, which have numerous convolutional layers and large receptive fields, and whose batched image inputs are heavy in memory usage.
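As one illustration of multi-GPU training, the sketch below assumes TensorFlow/Keras (not necessarily the library used in this chapter); `MirroredStrategy` replicates the model across all visible GPUs and splits each batch among them:

```python
import tensorflow as tf

# Replicate the model on every visible GPU; each replica processes
# a slice of the batch and gradients are averaged across devices.
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = tf.keras.applications.ResNet50(weights=None, classes=10)
    model.compile(optimizer="sgd", loss="categorical_crossentropy")
# model.fit(...) then trains with the batch split across the GPUs.
```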
Lastly, we looked at how data augmentation techniques effectively enlarge the dataset, reducing the risk of overfitting and helping the network learn more robust weights.
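A minimal augmentation sketch, again assuming Keras as an illustration: random rotations, shifts, and flips generate new training samples on the fly from the original images (the parameter values and the `x_train`/`y_train` arrays are hypothetical):

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=15,       # rotate up to +/-15 degrees
    width_shift_range=0.1,   # shift horizontally by up to 10%
    height_shift_range=0.1,  # shift vertically by up to 10%
    horizontal_flip=True,    # randomly mirror images left/right
)
# x_train, y_train are assumed to be NumPy arrays of images and labels:
# model.fit(augmenter.flow(x_train, y_train, batch_size=32), epochs=10)
```

Because the transformations are applied randomly at training time, the network rarely sees the exact same image twice, which is what drives the regularizing effect.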
In the next chapter, we'll see how to use the early layers of these networks as feature extractors to build encoder networks, as well as how to reverse convolutions to reconstruct an output image and perform pixel-wise predictions.