Handling schema changes dynamically in data flows using schema drift
A common challenge in extract, transform, and load (ETL) projects is that when the schema changes at the source, the pipelines that read the data from the source, transform it, and ingest it into the destination start to fail. Schema drift, a feature of data flows, addresses this problem by letting us define column mappings dynamically in transformations. In this recipe, we will make some changes to the schema of a data source, use schema drift to detect those changes, and handle them gracefully without any manual intervention.
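For context, schema drift is a per-transformation setting: when enabled on a source or sink, it surfaces in the data flow's underlying script as the `allowSchemaDrift` property. The following is a minimal sketch, not the exact script this recipe produces; the transformation names `source1` and `sink1` are illustrative:

```
source(allowSchemaDrift: true,
    validateSchema: false) ~> source1
source1 sink(allowSchemaDrift: true,
    validateSchema: false) ~> sink1
```

With `allowSchemaDrift: true`, columns that are not declared in the source projection are still read and passed through to the sink, which is what allows the pipeline to keep running after the source schema changes.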
Getting ready
Create a Synapse Analytics workspace as explained in the Provisioning an Azure Synapse Analytics workspace recipe in Chapter 8, Processing Data Using Azure Synapse Analytics.
Complete the Copying data using a Synapse data flow recipe in this chapter.
How to do it…
In this recipe, we will be using the Copy_CSV_to_Parquet data flow...