Creating global deep learning forecasting models
In Chapter 10, Global Forecasting Models, we discussed in detail why a global model makes sense. We covered its benefits: the increased sample size, cross-learning, multi-task learning and the regularization effect that comes with it, and reduced engineering complexity. All of these are just as relevant for a deep learning model. Sample size and engineering complexity become even more important because deep learning models are data-hungry and demand considerably more engineering effort and training time than other machine learning models. I would go so far as to say that in the deep learning context, in most practical cases where we have to forecast at scale, global models are the only paradigm that makes sense.
So, why did we spend all that time looking at individual models? Well, it’s easier to grasp the concepts at that level, and the skills and knowledge we gained there are very easily...