We started this chapter with a quick recap of CNNs and discussed transposed, depthwise separable, and dilated convolutions. We then looked at ways to improve the performance of CNNs, either by representing the convolution as a matrix multiplication or by using the Winograd convolution algorithm. Next, we focused on visualizing CNNs with the help of guided backpropagation and Grad-CAM. We also discussed the most popular regularization techniques. Finally, we learned about transfer learning and implemented the same transfer learning task with both PyTorch and TensorFlow as a way to compare the two libraries.
In the next chapter, we'll discuss some of the most popular advanced CNN architectures.