Model training using embarrassingly parallel computing
As you learned previously, Apache Spark follows the data parallel processing paradigm of distributed computing, in which the data processing code is moved to where the data resides. In contrast, traditional computing models, such as those used by standard Python and single-node ML libraries, process data on a single machine and expect the data to be present locally. Algorithms designed for single-node computing can still achieve some degree of parallelism by exploiting the multiprocessing and multithreading capabilities of the local CPUs. However, such algorithms are not inherently distributable; they need to be rewritten entirely before they can run in a distributed computing environment. The Spark ML library is an example of traditional ML algorithms that have been completely redesigned to work in a distributed setting. However, redesigning...
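To make the single-node parallelism described above concrete, the following minimal sketch trains several scikit-learn models in parallel across local CPU cores using Python's built-in multiprocessing module. The synthetic dataset, the RandomForestClassifier, and the parameter values are illustrative assumptions, not part of the original text:

# A minimal sketch of single-node parallel model training: one model per
# hyperparameter value, spread across local CPU processes. Dataset and
# parameter grid are hypothetical placeholders.
from multiprocessing import Pool

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic classification data, deterministic for reproducibility
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

def train_and_score(n_estimators):
    """Train a model with the given setting and return its mean CV accuracy."""
    model = RandomForestClassifier(n_estimators=n_estimators, random_state=42)
    return n_estimators, cross_val_score(model, X, y, cv=3).mean()

if __name__ == "__main__":
    # Each parameter value is trained in a separate local process;
    # parallelism is bounded by the CPUs of this one machine.
    with Pool(processes=4) as pool:
        results = pool.map(train_and_score, [50, 100, 200])
    print(results)

Note that the parallelism here never extends beyond the cores of a single machine, which is exactly the limitation that the distributed approaches discussed in this section are meant to overcome.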