Creating event-based pipeline triggers

Often, it is convenient to run a data movement pipeline in response to an event. One of the most common scenarios is triggering a pipeline run in response to the addition or deletion of blobs in a monitored storage account. Azure Data Factory supports this functionality.

In this recipe, we shall create an event-based trigger that will invoke a pipeline whenever new backup files are added to a monitored folder. The pipeline will move backup files to another folder.

Getting ready

  • To demonstrate the trigger in action, we shall use the pipeline from the Using parameters and built-in functions recipe. If you did not follow the recipe, do so now.
  • We shall be creating a pipeline that is similar to the pipeline in the Using the ForEach and Filter activities recipe. If you did not follow that recipe, do so now.
  • In the storage account (see the Technical requirements section), create another container called backups.
  • Following steps 1 to 3 of the Using the Copy activity with parameterized datasets recipe, create a new dataset and point it to the backups container. Call it Backups.
  • Register the Microsoft.EventGrid resource provider with your subscription:
    1. In the Azure portal, search for Subscriptions and click on your subscription name.
    2. In the Subscription blade, look for Resource providers.
    3. Find Microsoft.EventGrid in the list and hit the Register button. Wait until the status shows Registered (an indication that the registration succeeded).

How to do it…

First, we create the pipeline that will be triggered when a new blob is created:

  1. Clone the pipeline from the Using the ForEach and Filter activities recipe. Rename the clone to pl_orchestration_recipe_7_trigger.
  2. Rename the FilterOnCSV activity to Filter for Backup. In the Settings tab, change Condition to @endswith(item().name, '.backup'):

    Figure 2.36: Configuring the Filter for Backup activity
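
    If you open the pipeline's JSON code view, the reconfigured Filter activity should look roughly like the sketch below. The name of the Get Metadata activity feeding the items expression comes from the cloned pipeline, so Metadata1 is a placeholder; only the condition expression is prescribed by this step:

    {
        "name": "Filter for Backup",
        "type": "Filter",
        "typeProperties": {
            "items": {
                "value": "@activity('Metadata1').output.childItems",
                "type": "Expression"
            },
            "condition": {
                "value": "@endswith(item().name, '.backup')",
                "type": "Expression"
            }
        }
    }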

  3. In the ForEach activity, change the Items value to @activity('Filter for Backup').output.Value in the Settings tab:

    Figure 2.37: Updating the ForEach activity
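
    In the JSON code view, the updated ForEach activity looks roughly as follows (ForEach1 stands in for whatever your cloned activity is called); the Copy and Delete activities added in the next steps will appear inside its activities array:

    {
        "name": "ForEach1",
        "type": "ForEach",
        "typeProperties": {
            "items": {
                "value": "@activity('Filter for Backup').output.Value",
                "type": "Expression"
            },
            "activities": []
        }
    }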

  4. In the ForEach activity canvas, remove the Metadata and Stored Procedure activities. Add a Copy activity to the ForEach canvas and configure it in the following way (a JSON sketch follows this list):
    • Name: Copy from Data to Backup
    • Source Dataset: CsvData (the parameterized dataset created in the first recipe)
    • Filename: @item().name
    • Sink Dataset: The Backups dataset
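
    A minimal JSON sketch of this Copy activity (which lives inside the ForEach activity's activities array) is shown below. The dataset parameter name (filename) and the DelimitedText source and sink types are assumptions based on the parameterized CSV dataset from the first recipe; check your dataset's JSON for the actual parameter name:

    {
        "name": "Copy from Data to Backup",
        "type": "Copy",
        "inputs": [
            {
                "referenceName": "CsvData",
                "type": "DatasetReference",
                "parameters": { "filename": "@item().name" }
            }
        ],
        "outputs": [
            { "referenceName": "Backups", "type": "DatasetReference" }
        ],
        "typeProperties": {
            "source": { "type": "DelimitedTextSource" },
            "sink": { "type": "DelimitedTextSink" }
        }
    }
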
  5. In the same ForEach canvas, add a Delete activity. Leave the default name (Delete1). Configure it in the following way.

    In the Source tab, specify Source Dataset as CsvData. In the Filename field, enter @item().name.

    In the Logging Settings tab, uncheck the Enable Logging checkbox.

    NOTE

    In this tutorial, we do not need to keep track of the files we deleted. However, in a production environment, you will want to evaluate your requirements very carefully: it might be necessary to set up a logging store and enable logging for your Delete activity.

  6. Connect the Copy from Data to Backup activity to the Delete1 activity:

    Figure 2.38: The ForEach activity canvas and configurations for the Delete activity
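
    In the JSON code view, the connection you have just drawn shows up as a dependsOn entry on the Delete activity. Below is a sketch of the resulting Delete1 definition, using the same assumed dataset parameter name as in the Copy activity sketch above:

    {
        "name": "Delete1",
        "type": "Delete",
        "dependsOn": [
            {
                "activity": "Copy from Data to Backup",
                "dependencyConditions": [ "Succeeded" ]
            }
        ],
        "typeProperties": {
            "dataset": {
                "referenceName": "CsvData",
                "type": "DatasetReference",
                "parameters": { "filename": "@item().name" }
            },
            "enableLogging": false
        }
    }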

  7. Configure the event trigger. In the Manage tab, select Triggers and click on the New button to create a new trigger. In the New trigger blade, configure it as shown in Figure 2.39. Make sure to select the Started radio button in the Status section:

    Figure 2.39: Trigger configuration

    After you select Continue, you will see the Data Preview blade. Click OK to finish creating the trigger.

    We have created a pipeline and a trigger, but we did not assign the trigger to the pipeline. Let’s do so now.

  8. In the Author tab, select the pipeline we created in step 1 (pl_orchestration_recipe_7_trigger). Click the Add Trigger button and select the New/Edit option.

    In the Add trigger blade, select the newly created trigger_blob_added trigger. Review the configurations in the Edit trigger and Data preview blades, and hit OK to assign the trigger to the pipeline:

    Figure 2.40: Assigning a trigger to the pipeline
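
    After publishing, you can inspect the trigger's JSON under Manage > Triggers. It should look roughly like the sketch below; the scope value is a placeholder for your own subscription, resource group, and storage account, and the path filters depend on what you entered in Figure 2.39:

    {
        "name": "trigger_blob_added",
        "properties": {
            "type": "BlobEventsTrigger",
            "typeProperties": {
                "blobPathBeginsWith": "/data/blobs/",
                "blobPathEndsWith": ".backup",
                "ignoreEmptyBlobs": true,
                "scope": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>",
                "events": [ "Microsoft.Storage.BlobCreated" ]
            },
            "pipelines": [
                {
                    "pipelineReference": {
                        "referenceName": "pl_orchestration_recipe_7_trigger",
                        "type": "PipelineReference"
                    }
                }
            ]
        }
    }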

  9. Publish all your changes.
  10. Run the pl_orchestration_recipe_1 pipeline. That should create the backup files in the data container. The trigger we designed will invoke the pl_orchestration_recipe_7_trigger pipeline and move the files from the data container to the backups container.

How it works…

Under the hood, Azure Data Factory uses a service called Event Grid to detect changes in blob storage (that is why we had to register the Microsoft.EventGrid provider before starting with the recipe). Event Grid is a Microsoft service that routes events from a source to a destination. At the time of writing, only blob creation and deletion events are integrated with ADF triggers.

The trigger configuration options offer us fine-grained control over what files we want to monitor. In the recipe, we specified that the pipeline should be triggered when a new file with the .backup extension is created in the data container in our storage account. We can monitor the following, for example (a JSON sketch follows this list):

  • Subfolders within a container: The trigger will be invoked whenever a file is created within a subfolder. To do this, specify a particular folder within the container by providing values for the container (that is, data) and the folder path in the Blob path begins with field (for example, airlines/).
  • .backup files within any container: To accomplish this, select all containers in the container field and leave .backup in the Blob path ends with field.
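
In the trigger's JSON, the Container, Blob path begins with, and Blob path ends with fields are folded into the blobPathBeginsWith and blobPathEndsWith properties, with the path taking the /<container>/blobs/<folder> form. A rough sketch covering both examples above:

    "typeProperties": {
        "blobPathBeginsWith": "/data/blobs/airlines/",
        "blobPathEndsWith": ".backup",
        "events": [ "Microsoft.Storage.BlobCreated" ]
    }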

To find out other ways to configure the trigger to monitor files in a way that fulfills your business needs, please refer to the documentation listed in the See also section.

There’s more…

In the recipe, we worked with event triggers. The types of events that ADF supports are currently limited to blob creation and deletion; however, this selection may be expanded in the future. If you need to have your pipeline triggered by another type of event, the way to do it is by creating and configuring another Azure service (for example, a function app) to monitor your events and start a pipeline run when an event of interest happens. You will learn more about ADF integration with other services in Chapter 7, Extending Azure Data Factory with Logic Apps and Azure Functions.

ADF also offers two other kinds of triggers: a scheduled trigger and a tumbling window trigger.

A scheduled trigger invokes the pipeline at regular intervals. ADF offers rich configuration options: apart from the recurrence (every N minutes, hours, days, weeks, or months), you can configure start and end dates, as well as more granular controls such as the hour and minute of the run for a daily trigger, the day of the week for weekly triggers, and the day(s) of the month for monthly triggers.
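
As an illustration only (the trigger name, dates, and schedule below are made up), a scheduled trigger that runs a pipeline every Monday and Friday at 06:30 UTC would look roughly like this in JSON:

    {
        "name": "trigger_weekly_example",
        "properties": {
            "type": "ScheduleTrigger",
            "typeProperties": {
                "recurrence": {
                    "frequency": "Week",
                    "interval": 1,
                    "startTime": "2024-03-01T00:00:00Z",
                    "timeZone": "UTC",
                    "schedule": {
                        "weekDays": [ "Monday", "Friday" ],
                        "hours": [ 6 ],
                        "minutes": [ 30 ]
                    }
                }
            },
            "pipelines": [
                {
                    "pipelineReference": {
                        "referenceName": "pl_orchestration_recipe_7_trigger",
                        "type": "PipelineReference"
                    }
                }
            ]
        }
    }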

A tumbling window trigger bears many similarities to the scheduled trigger (it will invoke the pipeline at regular intervals), but it has several features that make it well suited to collecting and processing historical data:

  • A tumbling window trigger can have a start date in the past.
  • A tumbling window trigger allows pipelines to run concurrently (in parallel), which considerably speeds up historical data processing.
  • A tumbling window trigger provides access to two system variables (see the sketch after this list):
    trigger().outputs.windowStartTime
    trigger().outputs.windowEndTime
    These may be used to easily filter the range of the data being processed, for both past and current data.
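
A rough JSON sketch of a daily tumbling window trigger that backfills data from the start of 2023 and passes the window boundaries to a pipeline as parameters; the trigger name, pipeline name, and parameter names (windowStart, windowEnd) are hypothetical:

    {
        "name": "trigger_tumbling_daily",
        "properties": {
            "type": "TumblingWindowTrigger",
            "typeProperties": {
                "frequency": "Hour",
                "interval": 24,
                "startTime": "2023-01-01T00:00:00Z",
                "maxConcurrency": 4,
                "delay": "00:00:00",
                "retryPolicy": { "count": 2, "intervalInSeconds": 30 }
            },
            "pipeline": {
                "pipelineReference": {
                    "referenceName": "pl_process_historical_data",
                    "type": "PipelineReference"
                },
                "parameters": {
                    "windowStart": "@trigger().outputs.windowStartTime",
                    "windowEnd": "@trigger().outputs.windowEndTime"
                }
            }
        }
    }

Note that, unlike the other trigger types, the definition contains a single pipeline property rather than a pipelines array, which reflects the one-to-one relationship described below.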

A tumbling window trigger also offers the ability to specify a dependency between pipelines. This feature allows users to design complex workflows that reuse existing pipelines.

Both event-based and scheduled triggers have a many-to-many relationship with pipelines: one trigger may be assigned to many pipelines, and a pipeline may have more than one trigger. A tumbling window trigger is pipeline-specific: it may only be assigned to one pipeline, and a pipeline may only have one tumbling window trigger.

See also

To learn more about all three types of ADF triggers, start with the official Azure Data Factory documentation on pipeline execution and triggers.
