Understanding when and how to scale
Before we dive into scaling techniques, let's first discuss the monitoring information to consider when deciding whether we need to scale, and how to go about it.
Understanding what scaling means
The training log tells us how long the job took, but on its own that number isn't very useful. How long is too long? This feels very subjective, doesn't it? Furthermore, even when training on the same dataset and infrastructure, changing a single hyperparameter can significantly impact training time. Batch size is one example, and there are many more.
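To make the batch-size example concrete, here is a minimal sketch (the dataset size and batch sizes are made up for illustration) of how batch size alone changes the number of optimization steps per epoch, one of the factors driving wall-clock training time:

```python
import math

def steps_per_epoch(dataset_size: int, batch_size: int) -> int:
    """Number of gradient updates needed to see the whole dataset once."""
    return math.ceil(dataset_size / batch_size)

# Assumed dataset size, for illustration only.
dataset_size = 100_000

for batch_size in (32, 128, 512):
    print(f"batch_size={batch_size:>4} -> "
          f"{steps_per_epoch(dataset_size, batch_size)} steps per epoch")
```

Fewer steps per epoch usually means less per-step overhead, but larger batches also change memory usage and can affect convergence, so the impact on total training time is rarely a simple proportion.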
When we're concerned about training time, I think we're really trying to answer three questions:
- Is the training time compatible with our business requirements?
- Are we making good use of the infrastructure we're paying for, or did we under- or overprovision?
- Could we train faster without spending more money?