In the previous two chapters, we learned extensively about the various convolutional and recurrent network architectures available, along with their implementations in PyTorch. In this chapter, we will look at some other deep learning model architectures that have proven successful on various machine learning tasks and are neither purely convolutional nor recurrent in nature. We will continue from where we left off in both Chapter 3, Deep CNN Architectures, and Chapter 4, Deep Recurrent Model Architectures.
First, we will explore transformers, which, as we learned toward the end of Chapter 4, Deep Recurrent Model Architectures, have outperformed recurrent architectures on various sequential tasks. Then, we will pick up from the EfficientNets discussion at the end of Chapter 3, Deep CNN Architectures, and explore the idea of generating randomly wired neural networks, also known as RandWireNNs.
With...