Azure Data Factory Cookbook
Build and manage ETL and ELT pipelines with Microsoft Azure's serverless data integration service

Product type: Paperback
Published in: Dec 2020
Publisher: Packt
ISBN-13: 9781800565296
Length: 382 pages
Edition: 1st Edition
Authors (4): Dmitry Anoshin, Roman Storchak, Xenia Ireton, Dmitry Foshin
Table of Contents (12)

Preface
Chapter 1: Getting Started with ADF
Chapter 2: Orchestration and Control Flow
Chapter 3: Setting Up a Cloud Data Warehouse
Chapter 4: Working with Azure Data Lake
Chapter 5: Working with Big Data – HDInsight and Databricks
Chapter 6: Integration with MS SSIS
Chapter 7: Data Migration – Azure Data Factory and Other Cloud Services
Chapter 8: Working with Azure Services Integration
Chapter 9: Managing Deployment Processes with Azure DevOps
Chapter 10: Monitoring and Troubleshooting Data Pipelines
Other Books You May Enjoy

Using Metadata and Stored Procedure activities

In this recipe, we shall create a pipeline that fetches some metadata from an Azure storage container and stores it in an Azure SQL database table. You will work with two frequently used activities: the Get Metadata activity and the Stored Procedure activity.

Getting ready

  • In the first recipe, we created two datasets and two linked services. We shall be using the AzureSqlDatabase and OrchestrationAzureBlobStorage linked services in this recipe as well, so if you did not create them before, please go through the necessary steps in the previous recipe.
  • We shall be using AzureSqlDatabase. If you haven't done so already, create the FileMetadata table and the stored procedure to insert the data, as described in the Technical requirements section of this chapter (a sketch of these objects follows this list).
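
If you need a quick reference, here is a minimal T-SQL sketch of what those objects could look like. The parameter names (FileName, ModifiedAt, and UpdatedAt) come from this recipe, but the column types and the procedure body here are assumptions; defer to the Technical requirements section for the exact definitions:

    CREATE TABLE [dbo].[FileMetadata] (
        FileName   NVARCHAR(512),  -- item name returned by the Get Metadata activity
        ModifiedAt DATETIME2,      -- last modified timestamp of the storage folder
        UpdatedAt  DATETIME2       -- time at which the record was inserted
    );
    GO

    -- Assumed signature: the parameter names match those used in step 5 of this recipe
    CREATE PROCEDURE [dbo].[InsertFileMetadata]
        @FileName   NVARCHAR(512),
        @ModifiedAt DATETIME2,
        @UpdatedAt  DATETIME2
    AS
    BEGIN
        INSERT INTO [dbo].[FileMetadata] (FileName, ModifiedAt, UpdatedAt)
        VALUES (@FileName, @ModifiedAt, @UpdatedAt);
    END
    GO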

How to do it…

  1. Create a new pipeline in the Author tab, and call it pl_orchestration_recipe_2.
  2. Create a new dataset named CsvDataFolder, pointing to the Azure Storage container (adforchestrationstorage) we specified in the Technical requirements section. Use the delimited text file format. This time, do not specify the filename; leave it pointing to the data container itself. Use the same linked service for Azure Blob Storage as we used in the previous recipe.
  3. From the Activities pane on the left, find the Get Metadata activity (under the General tab) and drag it onto the pipeline canvas. Using the configuration tabs at the bottom, configure it in the following way (a JSON sketch of the finished pipeline follows these steps):

    (a) In the General tab, rename this Metadata activity CsvDataFolder Metadata.

    (b) In the Source tab, pick the CsvDataFolder dataset. In the same tab, under Field list, use the New button to add two fields, and select Item Name and Last Modified as the values for those fields:

    Figure 2.16 – Get Metadata activity configuration

  4. In the Activities pane, find the Stored Procedure activity (under the General tab), and drag it onto the canvas. In the pipeline canvas, connect the CsvDataFolder Metadata activity to the Stored Procedure activity.
  5. Configure the Stored Procedure activity in the following way:

    (a) In the General tab, change the activity name to Insert Metadata.

    (b) In the Settings tab, specify the linked service (AzureSqlDatabase) and the name of the stored procedure: [dbo].[InsertFileMetadata].

    (c) In the same Settings tab, click on Import Parameters to display the text fields to specify the parameters for the Stored Procedure activity. Use the following values:

    FileName: @activity('CsvDataFolder Metadata').output.itemName

    ModifiedAt: @convertFromUtc(activity('CsvDataFolder Metadata').output.lastModified, 'Pacific Standard Time')

    UpdatedAt: @convertFromUtc(utcnow(), 'Pacific Standard Time')

    Figure 2.17 – Stored Procedure activity configuration

  6. Run your pipeline in Debug mode. After the run is done, go to AzureSqlDatabase and verify that the FileMetadata table is populated with one record, containing the name and the last modified date of the folder where we keep the .csv files.
  7. Do not forget to publish your pipeline in order to save your changes.
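
For reference, the following is a minimal sketch of how the finished pipeline could look in the pipeline's Code (JSON) view, assembled from the names used in this recipe. The JSON that your own Data Factory instance generates may differ in details (for example, the activity policy blocks are omitted here):

    {
        "name": "pl_orchestration_recipe_2",
        "properties": {
            "activities": [
                {
                    "name": "CsvDataFolder Metadata",
                    "type": "GetMetadata",
                    "typeProperties": {
                        "dataset": {
                            "referenceName": "CsvDataFolder",
                            "type": "DatasetReference"
                        },
                        "fieldList": [ "itemName", "lastModified" ]
                    }
                },
                {
                    "name": "Insert Metadata",
                    "type": "SqlServerStoredProcedure",
                    "dependsOn": [
                        {
                            "activity": "CsvDataFolder Metadata",
                            "dependencyConditions": [ "Succeeded" ]
                        }
                    ],
                    "linkedServiceName": {
                        "referenceName": "AzureSqlDatabase",
                        "type": "LinkedServiceReference"
                    },
                    "typeProperties": {
                        "storedProcedureName": "[dbo].[InsertFileMetadata]",
                        "storedProcedureParameters": {
                            "FileName": {
                                "value": {
                                    "value": "@activity('CsvDataFolder Metadata').output.itemName",
                                    "type": "Expression"
                                },
                                "type": "String"
                            },
                            "ModifiedAt": {
                                "value": {
                                    "value": "@convertFromUtc(activity('CsvDataFolder Metadata').output.lastModified, 'Pacific Standard Time')",
                                    "type": "Expression"
                                },
                                "type": "DateTime"
                            },
                            "UpdatedAt": {
                                "value": {
                                    "value": "@convertFromUtc(utcnow(), 'Pacific Standard Time')",
                                    "type": "Expression"
                                },
                                "type": "DateTime"
                            }
                        }
                    }
                }
            ]
        }
    }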

How it works…

In this simple recipe, we introduced two new activities. In steps 2 and 3, we used the Get Metadata activity with a dataset representing a folder in our container; we were only interested in the item name and the last modified date of that folder. In step 4, we added a Stored Procedure activity, which allows us to invoke a stored procedure in the remote database directly. To configure it in step 5, we needed to supply values for the stored procedure's parameters (FileName, ModifiedAt, and UpdatedAt). The expressions used there (such as @activity('CsvDataFolder Metadata').output.itemName) define which activity the value comes from (the CsvDataFolder Metadata activity) and which part of that activity's output is required (output.itemName). We used the built-in conversion function convertFromUtc to present the time in a specific time zone (Pacific Standard Time, in our case).
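
To make these expressions concrete, consider a hypothetical output of the CsvDataFolder Metadata activity run (you can inspect the real one in the Output pane of a Debug run; the values below are invented):

    {
        "itemName": "data",
        "lastModified": "2020-12-01T18:30:45Z"
    }

Against this output, @activity('CsvDataFolder Metadata').output.itemName evaluates to data, and @convertFromUtc(activity('CsvDataFolder Metadata').output.lastModified, 'Pacific Standard Time') shifts the UTC timestamp to 2020-12-01T10:30:45 in Pacific Standard Time before it is passed to the stored procedure.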

There's more…

In this recipe, we only specified the itemName and lastModified fields as the metadata outputs. However, the Metadata activity supports many more options. Here is the list of currently supported options from the Data Factory documentation at https://docs.microsoft.com/azure/data-factory/control-flow-get-metadata-activity#capabilities:

Figure 2.18 – Metadata activity options

The metadata options available to you will depend on the dataset: for example, the contentMD5 option is only available for files, while childItems is only available for folders.
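
For instance, to collect folder-level metadata, you could request additional fields in the activity's fieldList. This hypothetical fragment assumes the CsvDataFolder dataset from this recipe:

    "typeProperties": {
        "dataset": {
            "referenceName": "CsvDataFolder",
            "type": "DatasetReference"
        },
        "fieldList": [ "itemName", "itemType", "lastModified", "childItems", "exists" ]
    }

Here, childItems returns an array describing each file in the folder (its name and type), and exists lets the pipeline branch on whether the folder is present at all.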
