Understanding key concepts – data and model parallelism
Some of my most vivid memories of working with ML infrastructure came from graduate school. I’ll always remember the stress of a new homework assignment, usually some large dataset I needed to analyze. However, more often than not, the dataset wouldn’t fit on my laptop! I’d have to clear out all of my previous assignments just to start the download. Then, the download would take a long time, and it was often interrupted by my spotty café network. Once I managed to download the dataset, I’d realize to my dismay that it was too large to fit into memory! On a good day, the Python library pandas, which you were introduced to in Chapter 2, had a function built to read that file type that could limit the read to just a few objects at a time (a pattern sketched below). On a bad day, I needed to build a streaming reader myself. After I managed to run some analysis, I would pick a handful of models I thought would be relevant and well suited. However...
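If you haven’t run into that chunked-reading pattern before, here is a minimal sketch using pandas. The file name `large_dataset.csv` is hypothetical, and the `chunksize` argument simply caps how many rows are held in memory at any one time:

```python
import pandas as pd

# Hypothetical path to a dataset too large to load in one go
CSV_PATH = "large_dataset.csv"

# read_csv with chunksize returns an iterator of DataFrames,
# so only one chunk lives in memory at a time
total_rows = 0
for chunk in pd.read_csv(CSV_PATH, chunksize=100_000):
    # Run your per-chunk analysis here, e.g. incremental aggregation
    total_rows += len(chunk)

print(f"Processed {total_rows} rows without loading the full file")
```

The same idea generalizes beyond CSV files: process the data incrementally, keeping only the running statistics you care about, rather than the whole dataset, in memory.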