Ingesting partitioned data
The practice of partitioning data is not new. Databases have long used it to distribute data across multiple disks or tables, and data warehouses partition data according to the purpose and use of the data inside. You can read more here: https://www.tutorialspoint.com/dwh/dwh_partitioning_strategy.htm.
In our case, partitioning refers to how our data is split into small chunks that can be processed independently.
In this recipe, we will learn how to ingest data that is already partitioned and how it can affect the performance of our code.
Getting ready
This recipe requires an initialized SparkSession. You can create your own or use the code provided at the beginning of this chapter.
The data required to complete the steps can be found here: https://github.com/PacktPublishing/Data-Ingestion-with-Python-Cookbook/tree/main/Chapter_7/ingesting_partitioned_data.
You can use a Jupyter notebook or a PySpark shell session to execute the...