Preface
Apache Spark is a unified data analytics engine designed to process huge volumes of data quickly and efficiently. PySpark is Apache Spark's Python language API, which provides Python developers with an easy-to-use, scalable data analytics framework.
Essential PySpark for Scalable Data Analytics starts by exploring the distributed computing paradigm and provides a high-level overview of Apache Spark. You'll then begin your data analytics journey with the data engineering process, learning to perform data ingestion, data cleansing, and data integration at scale.
This book will also help you build real-time analytics pipelines that enable you to gain insights much faster. It presents techniques for building cloud-based data lakes, along with Delta Lake, which brings reliability and performance to them.
The book then introduces a newly emerging paradigm called the Data Lakehouse, which combines the structure and performance of a data warehouse with the scalability of cloud-based data lakes. You'll learn how to perform scalable data science and machine learning using PySpark, covering data preparation, feature engineering, model training, and model productionization techniques. Finally, techniques for scaling out standard Python machine learning libraries are presented, along with Koalas, a pandas-like API built on top of PySpark.