Chapter 10: Scaling Out Single-Node Machine Learning Using PySpark
In Chapter 5, Scalable Machine Learning with PySpark, you learned how to use the power of Apache Spark's distributed computing framework to train and score machine learning (ML) models at scale. Spark's native ML library provides good coverage of the standard tasks that data scientists typically perform; however, a wide variety of functionality is provided by standard single-node Python libraries that were never designed to work in a distributed manner. This chapter deals with techniques for horizontally scaling out standard Python data processing and ML libraries such as pandas, scikit-learn, and XGBoost. It covers scaling out typical data science tasks such as exploratory data analysis (EDA), model training, and model inferencing, and, finally, introduces a scalable Python library named Koalas that lets you effortlessly write PySpark code using the familiar and easy-to-use pandas-like API.
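To give a flavor of what that last point looks like in practice, here is a minimal sketch of the Koalas pandas-like API running on Spark. It assumes the koalas package is installed alongside PySpark; the file name sales.csv and the region and revenue column names are hypothetical placeholders, not examples from this chapter.

```python
# A minimal sketch, assuming the koalas package is installed
# (pip install koalas) and a Spark session is available.
import databricks.koalas as ks

# Read a CSV into a Koalas DataFrame with pandas-style syntax;
# the data is partitioned and processed by Spark under the hood.
# sales.csv is a hypothetical file used only for illustration.
kdf = ks.read_csv("sales.csv")

# Familiar pandas operations are translated into distributed Spark jobs.
print(kdf.head())
print(kdf.groupby("region")["revenue"].mean())
```

The appeal of this approach is that code written against the pandas API can scale beyond a single machine's memory with little or no modification, which is exactly the theme this chapter develops.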