Scheduling notebooks to process data incrementally
Consider the following scenario. Data is loaded daily into the data lake in the form of CSV files. The task is to create a scheduled batch job that processes each day's files, performs basic checks, and loads the data into a Delta table in the lake database. This recipe addresses the scenario by covering the following tasks:
- Only reading the new CSV files that are loaded into the data lake daily using Spark pools and notebooks
- Processing the data, performing upserts (updating a row if it exists, inserting it if it doesn't), and loading the data into the Delta Lake table using notebooks (see the sketch after this list)
- Scheduling the notebook to operationalize the solution
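The following is a minimal PySpark sketch of the incremental pattern these tasks describe: read only the current day's CSV drop, apply basic checks, and merge the result into a Delta table. The storage account, container, folder layout, and column names (transaction_id, amount, load_date) are assumptions for illustration, not the recipe's actual dataset; the recipe walks through its own versions of these steps later.

```python
# A sketch of the daily incremental load: read today's files, check, upsert.
# Paths and column names below are illustrative assumptions.
from datetime import date

from delta.tables import DeltaTable
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Read only today's CSV files, assuming they land in a dated folder
# such as .../transactions/2023/09/15/*.csv (hypothetical layout).
today = date.today()
source_path = (
    "abfss://datalake@mystorageaccount.dfs.core.windows.net/"
    f"transactions/{today:%Y/%m/%d}/*.csv"
)
new_rows = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv(source_path)
)

# Basic checks: drop rows missing the key, keep only positive amounts,
# and stamp each row with the load date.
clean_rows = (
    new_rows
    .dropna(subset=["transaction_id"])
    .filter(F.col("amount") > 0)
    .withColumn("load_date", F.lit(str(today)))
)

# Upsert into the Delta table: update rows whose key already exists,
# insert the rest.
target = DeltaTable.forPath(
    spark,
    "abfss://datalake@mystorageaccount.dfs.core.windows.net/delta/transactions",
)
(
    target.alias("t")
    .merge(clean_rows.alias("s"), "t.transaction_id = s.transaction_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```

Keying the source path on the current date is one simple way to read only new files; once this notebook logic is in place, scheduling it daily (the last task above) completes the solution.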
Getting ready
Create a Synapse Analytics workspace, as explained in the Provisioning an Azure Synapse Analytics workspace recipe in this chapter.
Create a Spark pool, as explained in the Provisioning and configuring Spark pools recipe in this chapter.
Download the TransDtls...