Scaling out machine learning
In the previous sections, we learned that ML is a set of algorithms that, instead of being explicitly programmed, automatically learn patterns hidden within data. Thus, an ML algorithm exposed to more data can often produce a better-performing model. However, traditional ML algorithms were designed to be trained on a limited data sample, on a single machine at a time, which means that most existing ML libraries are not inherently scalable. One workaround is to down-sample a larger dataset so that it fits in the memory of a single machine, but this comes at a cost: the resulting models may not be as accurate as they could have been with the full dataset.
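To make the down-sampling workaround concrete, here is a minimal sketch in Python using pandas. The file name and sampling fraction are illustrative assumptions, not from any particular library discussed here; the idea is simply to stream a dataset that is too large for memory and keep only a random fraction of it:

```python
import pandas as pd

# Hypothetical file; assume "events.csv" is far larger than available RAM.
SAMPLE_FRACTION = 0.01  # keep 1% of rows so the sample fits in memory

# Stream the file in chunks and keep a random fraction of each chunk,
# so the full dataset is never materialized at once.
sample_parts = []
for chunk in pd.read_csv("events.csv", chunksize=100_000):
    sample_parts.append(chunk.sample(frac=SAMPLE_FRACTION, random_state=42))

sample = pd.concat(sample_parts, ignore_index=True)
# `sample` can now be fed to a traditional single-machine ML library,
# at the cost of discarding ~99% of the available data.
```

The trade-off is exactly the one described above: the sample trains quickly on one machine, but any patterns present only in the discarded rows are lost to the model.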
Also, several ML models are typically built on the same dataset by varying the parameters supplied to the algorithm, and the best-performing of these candidate models is then chosen for production. This process is known as hyperparameter tuning. Building several models on a single machine, one after another, is very time-consuming, since each candidate model must be trained sequentially before the next one can begin.
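The following is a minimal sketch of hyperparameter tuning using scikit-learn on a single machine; the toy dataset and parameter grid are assumptions for illustration. Note how each parameter combination produces a separate training run, which is where the single-machine bottleneck appears:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Toy stand-in for the training data.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)

# Each combination below yields one candidate model; with 3-fold
# cross-validation, 4 combinations mean 12 separate training runs.
param_grid = {"C": [0.01, 0.1, 1.0, 10.0]}

search = GridSearchCV(
    LogisticRegression(max_iter=1_000),
    param_grid,
    cv=3,
    n_jobs=1,  # one model at a time: the single-machine bottleneck
)
search.fit(X, y)

# The best parameter combination and its cross-validated score.
print(search.best_params_, search.best_score_)
```

Since the candidate models are independent of one another, this workload is a natural fit for scaling out: each combination could, in principle, be trained on a different machine in parallel.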