Introduction
The last chapter introduced Spark, one of the most popular distributed platforms for processing big data.
In this chapter, we will learn how to work with Spark and Spark DataFrames using its Python API, PySpark. PySpark not only gives us the capability to process petabyte-scale data, but also to implement machine learning (ML) algorithms at that scale. This chapter focuses on the data processing side, using Spark DataFrames in PySpark.
Note
We will use the term DataFrame frequently in this chapter. Unless mentioned otherwise, it always refers to the Spark DataFrame; please do not confuse it with the pandas DataFrame.
A Spark DataFrame is a distributed collection of data organized into named columns. It is inspired by the DataFrames of R and Python (pandas) and is backed by sophisticated optimizations that make it fast and scalable.
The DataFrame API was developed as part of Project Tungsten and is designed to improve the performance and scalability of Spark applications by making more efficient use of memory and CPU.
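To make this concrete, here is a minimal sketch of creating a Spark DataFrame in PySpark; the application name and the sample rows are illustrative, not taken from the text:

```python
# A minimal sketch of creating a Spark DataFrame in PySpark.
from pyspark.sql import SparkSession

# SparkSession is the entry point to the DataFrame API; the app name
# "intro-example" is just an illustrative label.
spark = SparkSession.builder.appName("intro-example").getOrCreate()

# Build a small DataFrame from local data: each tuple is a row,
# and the list supplies the column names.
df = spark.createDataFrame(
    [("Alice", 34), ("Bob", 29)],
    ["name", "age"],
)

df.printSchema()  # the named, typed columns of the DataFrame
df.show()         # the data, rendered as a table
```

Because a DataFrame carries a schema of named, typed columns, Spark can apply its backend optimizations when executing queries against it.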