Ingesting data from a JDBC database using SQL
With the connection tested and the SparkSession configured, the next step is to ingest the data from PostgreSQL, filter it, and save it in Parquet, a columnar format well suited to analytics. Don’t worry about how Parquet files work for now; we will cover them in the following chapters.
This recipe aims to use the connection we created to our JDBC database and ingest the data from the world_population table.
Getting ready
This recipe uses the same dataset and code as the Configuring a JDBC connection recipe and continues from where it left off: we will now learn how to ingest the data inside the PostgreSQL database. Ensure your Docker container is running or your PostgreSQL server is up.
How to do it…
Following on from our previous code, let’s read the data from our database as follows:
- Creating our DataFrame: Using the...
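As a sketch of where this recipe is headed, the read-and-save step can look like the following. The host, port, database name, and credentials here are placeholders; substitute the connection details you set up in the Configuring a JDBC connection recipe.

```python
def jdbc_url(host: str, port: int, database: str) -> str:
    """Build a PostgreSQL JDBC URL in the form Spark's JDBC reader expects."""
    return f"jdbc:postgresql://{host}:{port}/{database}"


def ingest_world_population(spark, url: str, user: str, password: str):
    """Read the world_population table over JDBC and save it as Parquet.

    `spark` is an active SparkSession with the PostgreSQL JDBC driver on
    its classpath (see the Configuring a JDBC connection recipe).
    """
    df = (
        spark.read.format("jdbc")
        .option("url", url)
        .option("dbtable", "world_population")
        .option("user", user)          # hypothetical credentials
        .option("password", password)
        .option("driver", "org.postgresql.Driver")
        .load()
    )
    # Save in Parquet; mode("overwrite") replaces any previous output.
    df.write.mode("overwrite").parquet("world_population.parquet")
    return df


# Hypothetical connection details; substitute your own.
url = jdbc_url("localhost", 5432, "my_database")
# ingest_world_population(spark, url, "postgres", "postgres")
```

Keeping the connection details in one place, as `jdbc_url` does here, makes it easy to point the same ingestion code at a different server later.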