Summary
MLPs are a foundational building block of deep learning: they do more than process tabular data, and they are more than an old architecture that was simply superseded. MLPs are commonly used as sub-components in many advanced neural network architectures today, whether to perform more automatic feature engineering, reduce the dimensionality of large feature vectors, or reshape features into the form required for the target predictions. Look out for MLPs, or more precisely the fully connected layer, in the architectures introduced in the next few chapters!
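As a concrete illustration of the sub-component role described above, the sketch below uses a small fully connected block to project high-dimensional features down to a lower dimension. All names, layer sizes, and the NumPy implementation are illustrative assumptions, not taken from the chapter:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Element-wise ReLU non-linearity
    return np.maximum(x, 0.0)

class MLPProjection:
    """Hypothetical two-layer fully connected block: 256-d features -> 32-d."""
    def __init__(self, in_dim=256, hidden_dim=64, out_dim=32):
        self.W1 = rng.normal(0.0, 0.02, (in_dim, hidden_dim))
        self.b1 = np.zeros(hidden_dim)
        self.W2 = rng.normal(0.0, 0.02, (hidden_dim, out_dim))
        self.b2 = np.zeros(out_dim)

    def __call__(self, x):
        h = relu(x @ self.W1 + self.b1)  # non-linear feature transformation
        return h @ self.W2 + self.b2     # dimensionality reduction to out_dim

# e.g. a batch of 8 feature vectors produced by some other module
features = rng.normal(size=(8, 256))
proj = MLPProjection()
out = proj(features)
print(out.shape)  # (8, 32)
```

The same pattern, a fully connected block tucked inside a larger model, appears repeatedly in the architectures of the coming chapters.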
The automatic gradient computation provided by deep learning frameworks simplifies the implementation of backpropagation and lets us focus on designing new neural networks. It is essential that the mathematical functions used in these networks are differentiable, although this is often already taken care of when adopting successful research findings. And that’s the beauty of...
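One way to see why differentiability matters is to compare an analytic derivative against a numerical finite-difference estimate; this is essentially the sanity check that automatic differentiation relies on each primitive operation passing. A minimal sketch (the choice of the sigmoid function here is purely illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    # Analytic derivative: sigma'(x) = sigma(x) * (1 - sigma(x))
    s = sigmoid(x)
    return s * (1.0 - s)

def numerical_grad(f, x, eps=1e-6):
    # Central finite difference: a standard way to check a gradient
    return (f(x + eps) - f(x - eps)) / (2.0 * eps)

x = 0.7
error = abs(sigmoid_grad(x) - numerical_grad(sigmoid, x))
print(error < 1e-8)  # True: the analytic and numerical gradients agree
```

Because sigmoid is smooth everywhere, the two estimates agree closely; a non-differentiable function would have no such well-defined derivative for backpropagation to propagate.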