Azure Data Factory Cookbook, Second Edition
Build ETL, Hybrid ETL, and ELT pipelines using ADF, Synapse Analytics, Fabric and Databricks

Product type: Paperback | Published: February 2024 | Publisher: Packt | ISBN-13: 9781803246598 | Length: 532 pages

Authors: Tonya Chernyshova, Xenia Ireton, Dmitry Foshin, Dmitry Anoshin

Table of Contents

Preface
1. Getting Started with ADF
2. Orchestration and Control Flow
3. Setting Up Synapse Analytics
4. Working with Data Lake and Spark Pools
5. Working with Big Data and Databricks
6. Data Migration – Azure Data Factory and Other Cloud Services
7. Extending Azure Data Factory with Logic Apps and Azure Functions
8. Microsoft Fabric and Power BI, Azure ML, and Cognitive Services
9. Managing Deployment Processes with Azure DevOps
10. Monitoring and Troubleshooting Data Pipelines
11. Working with Azure Data Explorer
12. The Best Practices of Working with ADF
13. Other Books You May Enjoy
14. Index

Chaining and branching activities within a pipeline

In this recipe, we shall build a pipeline that extracts data from CSV files in Azure Blob Storage, loads it into an Azure SQL table, and records a log message with the status of the job. The status message will depend on whether the extract and load succeeded or failed.

Getting ready

We shall be using all the Azure services mentioned in the Technical requirements section at the beginning of the chapter, as well as the PipelineLog table and the InsertLogRecord stored procedure. If you have not yet created the table and the stored procedure in your Azure SQL database, please do so now.

How to do it…

  1. In this recipe, we shall reuse portions of the pipeline from the Using parameters and built-in functions recipe. If you completed that recipe, just create a clone of that pipeline and name it pl_orchestration_recipe_4. If you did not, go through steps 1-10 of that recipe and create a parameterized pipeline.
  2. Observe that, by default, each activity has a little green check mark on its right edge. This is the success output: an activity connected to it runs only when the preceding activity completes successfully. However, activities sometimes fail, and we want an action to take place when the Copy From Blob To Azure SQL activity fails. To denote failure, each activity also has a red cross on its right edge.

    Figure 2.23: Possible activity outcomes

  3. From the Activities pane on the left, drag two Stored Procedure activities onto the canvas. Connect one of them to the green check mark of the Copy From Blob To Azure SQL activity and the other one to the red cross.
  4. First, configure the Stored Procedure activity that is connected to the green check mark in the following way:
    1. In the General tab, rename it On Success.
    2. In the Settings tab, specify AzureSQLTables as the linked service and [dbo].[InsertPipelineLog] as the Stored Procedure name. Click on Test Connection to verify that you can connect to the Azure SQL database.
    3. Click on the Import Parameters button and fill in the values as follows:
      • PipelineID: @pipeline().Pipeline
      • RunID: @pipeline().RunId
      • Status: Success
      • UpdatedAt: @utcnow()

        NOTE

        You can also use the Add dynamic content functionality to fill in the values. For each one, put your cursor into the field and then click on the little blue Add dynamic content link that appears underneath the field. You will see a blade that gives you a selection of system variables, functions, and activity outputs to choose from.

  5. Now, select the Stored Procedure activity that is connected to the red cross of the Copy From Blob To Azure SQL activity. Configure it in a similar way to the previous step, but name it On Failure and enter Failure for the Status parameter (a JSON sketch of the resulting activity is shown after this procedure):

    Figure 2.24: A full pipeline with On Success and On Failure branches

  6. It is time to test the pipeline. Run it in Debug mode and verify that, when your pipeline succeeds, a corresponding entry appears in the PipelineLog table.
  7. Now, in order to see the branching in action, let's simulate a failure of our pipeline. Edit your Copy From Blob To Azure SQL activity: in the Sink tab below the canvas, replace the value in the tableName textbox with a random string that does not match any existing table.
  8. Run your pipeline in Debug mode again. You will see that the Copy From Blob To Azure SQL activity now fails and the On Failure stored procedure is invoked. Verify that the PipelineLog table in the Azure SQL database has a new record:

    Figure 2.25: Entries in PipelineLog after successful and failed pipeline runs

  9. Publish your changes to save them.
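
Behind the authoring canvas, every connector you draw between activities becomes a dependency condition in the pipeline's JSON definition. The following is a minimal, simplified sketch of how the On Failure activity from this recipe might look if you open the pipeline's JSON view; it reuses the names configured above (Copy From Blob To Azure SQL, AzureSQLTables, [dbo].[InsertPipelineLog]), and the exact shape of the exported JSON may differ slightly depending on your Data Factory version:

    {
      "name": "On Failure",
      "type": "SqlServerStoredProcedure",
      "dependsOn": [
        {
          "activity": "Copy From Blob To Azure SQL",
          "dependencyConditions": [ "Failed" ]
        }
      ],
      "linkedServiceName": {
        "referenceName": "AzureSQLTables",
        "type": "LinkedServiceReference"
      },
      "typeProperties": {
        "storedProcedureName": "[dbo].[InsertPipelineLog]",
        "storedProcedureParameters": {
          "PipelineID": {
            "value": { "value": "@pipeline().Pipeline", "type": "Expression" },
            "type": "String"
          },
          "RunID": {
            "value": { "value": "@pipeline().RunId", "type": "Expression" },
            "type": "String"
          },
          "Status": { "value": "Failure", "type": "String" },
          "UpdatedAt": {
            "value": { "value": "@utcnow()", "type": "Expression" },
            "type": "DateTime"
          }
        }
      }
    }

The On Success activity is identical except that its dependencyConditions entry is Succeeded and its Status parameter is Success. Besides Succeeded and Failed, ADF also supports the Skipped and Completed dependency conditions; Completed runs the downstream activity regardless of whether the previous one succeeded or failed.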

There’s more…

ADF offers another option for branching out on a condition during pipeline execution: the If Condition activity. This activity is another example of a compound activity (like the ForEach activity in the previous recipe): it contains two activity subgroups and a condition. Only one of the activity subgroups is executed, based on whether the condition is true or false.

The use case for the If Condition activity is different from the approach we illustrated in this recipe. While the recipe branches on the outcome (success or failure) of the previous activity, the condition you design in the If Condition activity branches on the output of the previous activity. For example, let's suppose that we want to retrieve metadata about a file and execute one stored procedure if the file is a CSV file and another stored procedure if the file is of a different type.

Here is how we would configure an If Condition activity to accomplish this:

Figure 2.26: Configuring the If Condition activity

The full formula used in the Expression field is @not(endswith(activity('CsvDataFolder Metadata').output.itemName, 'csv')). It evaluates to true when the retrieved file name does not end in csv, so the activities in the True branch run for non-CSV files, while the activities in the False branch run for CSV files.
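
For comparison, here is a simplified sketch of how an If Condition activity configured this way might appear in the pipeline's JSON definition. The CsvDataFolder Metadata activity name and the expression are taken from the example above, while the activity names and stored procedure names inside the two branches ([dbo].[HandleOtherFileType] and [dbo].[HandleCsvFile]) are hypothetical placeholders for whatever you want to run in each case:

    {
      "name": "If File Is Not CSV",
      "type": "IfCondition",
      "dependsOn": [
        {
          "activity": "CsvDataFolder Metadata",
          "dependencyConditions": [ "Succeeded" ]
        }
      ],
      "typeProperties": {
        "expression": {
          "value": "@not(endswith(activity('CsvDataFolder Metadata').output.itemName, 'csv'))",
          "type": "Expression"
        },
        "ifTrueActivities": [
          {
            "name": "Handle Other File Type",
            "type": "SqlServerStoredProcedure",
            "linkedServiceName": {
              "referenceName": "AzureSQLTables",
              "type": "LinkedServiceReference"
            },
            "typeProperties": {
              "storedProcedureName": "[dbo].[HandleOtherFileType]"
            }
          }
        ],
        "ifFalseActivities": [
          {
            "name": "Handle Csv File",
            "type": "SqlServerStoredProcedure",
            "linkedServiceName": {
              "referenceName": "AzureSQLTables",
              "type": "LinkedServiceReference"
            },
            "typeProperties": {
              "storedProcedureName": "[dbo].[HandleCsvFile]"
            }
          }
        ]
      }
    }

On any given run, only one of the two subgroups (ifTrueActivities or ifFalseActivities) is executed, and activities nested inside the branches cannot be chained to activities outside the If Condition activity.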
