In general, deeper model architectures are assumed to provide greater representational power, allowing abstract representations to be organized hierarchically for predictive tasks.
However, deeper architectures are also prone to overfitting and can be challenging to train, demanding careful attention to aspects such as regularization (as seen with the strategies explored in Chapter 3, Signal Processing - Data Analysis with Neural Networks). How do we decide how many layers to initialize, how many neurons each layer should contain, and which regularization strategies to apply? Given the complexity involved in designing the right architecture, experimenting with different model hyperparameters to find the right network specification for the task at hand can be very time consuming.
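To make this concrete, the sketch below runs a small grid search over exactly the choices mentioned above: the number of hidden layers, the neurons per layer, and the L2 regularization strength. It uses scikit-learn's `MLPClassifier` and `GridSearchCV` on a synthetic dataset; the particular grid values are illustrative assumptions, not recommendations.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for a real dataset (assumption: binary classification).
X, y = make_classification(n_samples=400, n_features=20, random_state=0)

# Each hidden_layer_sizes tuple is one candidate architecture:
# its length is the number of hidden layers, its entries the neurons per layer.
param_grid = {
    "hidden_layer_sizes": [(32,), (64,), (32, 32)],
    "alpha": [1e-4, 1e-2],  # L2 regularization strength
}

search = GridSearchCV(
    MLPClassifier(max_iter=500, random_state=0),
    param_grid,
    cv=3,  # 3-fold cross-validation per candidate
)
search.fit(X, y)
print(search.best_params_)
```

Even this tiny grid requires 6 candidate configurations times 3 cross-validation folds, i.e., 18 separate training runs, which illustrates why exhaustive manual experimentation scales poorly.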
While we have discussed general...