Learning column predicate pushdown
Column predicate pushdown is an optimization technique in which filters are pushed down to the data source level to reduce the amount of data that gets scanned. This can greatly improve job performance, as Spark reads only the data needed for its operations. For example, when reading from a Postgres database, we can push a filter down to the database so that Spark reads only the required rows. The same applies to Parquet and Delta files: when writing them to the storage account, we can partition them by one or more columns, and when reading, we can push down a filter so that only the required partitions are read.
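To illustrate the Postgres case, here is a minimal sketch of a pushed-down filter on a JDBC source; the connection URL, database, table name, and credentials are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical Postgres connection details, for illustration only
# (running this also requires the Postgres JDBC driver on the classpath)
jdbc_df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://localhost:5432/flights")
    .option("dbtable", "airlines")
    .option("user", "spark_user")
    .option("password", "secret")
    .load()
)

# Spark translates this filter into a WHERE clause that runs inside
# Postgres, so only the matching rows are transferred to Spark
filtered = jdbc_df.filter(jdbc_df.Year == 2008)

# The PushedFilters entry in the physical plan confirms the pushdown
filtered.explain()
```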
In the following steps, we will look at an example of column predicate pushdown with Parquet files:
- To get started, we will re-create our airlines DataFrame in a new cell:
```python
from pyspark.sql.types import *

manual_schema = StructType([
    StructField('Year', IntegerType(), True),
    StructField...
```
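Once the DataFrame is in place, the write-then-read pattern these steps work toward looks like the following sketch. It assumes the DataFrame created in the previous cell is named `airlines_df` and that `/tmp/airlines_parquet` is a writable path; both names are assumptions, not taken from the original steps:

```python
# Assumed: airlines_df is the DataFrame created in the previous cell,
# and /tmp/airlines_parquet is a writable location (hypothetical path)
(airlines_df.write
    .mode("overwrite")
    .partitionBy("Year")        # one directory per Year value
    .parquet("/tmp/airlines_parquet"))

# Filtering on the partition column lets Spark prune the scan to the
# Year=2008 directory instead of reading every partition
pruned = spark.read.parquet("/tmp/airlines_parquet").filter("Year = 2008")

# The PartitionFilters entry in the physical plan confirms the pruning
pruned.explain()
```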