Azure Data Factory Cookbook: Build and manage ETL and ELT pipelines with Microsoft Azure's serverless data integration service

By Dmitry Anoshin, Dmitry Foshin, Roman Storchak, Xenia Ireton
4.2 (13 ratings)
Paperback | Dec 2020 | 382 pages | 1st Edition



Chapter 2: Orchestration and Control Flow

Azure Data Factory is an excellent tool for designing and orchestrating your Extract, Transform, Load (ETL) processes. In this chapter, we introduce several fundamental Data Factory concepts and guide you through the creation and scheduling of increasingly complex data-driven workflows. All the work in this chapter is done in the Microsoft Data Factory online portal. You'll learn how to create and configure linked services and datasets, take advantage of built-in expressions and functions, and, most importantly, understand how and when to use the most popular Data Factory activities.

This chapter covers the following topics:

  • Using parameters and built-in functions
  • Using the Metadata and Stored Procedure activities
  • Using the ForEach and Filter activities
  • Chaining and branching activities within a pipeline
  • Using the Lookup, Web, and Execute Pipeline activities
  • Creating event-based pipeline triggers

Technical requirements

Note

To fully understand the recipes, we make naming suggestions for the accounts, pipelines, and so on throughout the chapter. Many services, such as Azure Storage and SQL Server, require that the names you assign are unique. Follow your own preferred naming conventions, making appropriate substitutions as you follow the recipes. For the Azure resource naming rules, refer to the documentation at https://docs.microsoft.com/azure/azure-resource-manager/management/resource-name-rules.

In addition to Azure Data Factory, we shall be using three other Azure Services: Logic Apps, Blob Storage, and Azure SQL Database. You will need to have Azure Blob Storage and Azure SQL Database accounts set up to follow the recipes. The following steps describe the necessary preparation:

  • Create an Azure Blob Storage account and name it adforchestrationstorage. When creating the storage account, select the same region (for example, East US) as you selected when you created the Data Factory instance; keeping both in one region reduces data movement costs.
  • Create a container named data within this storage account, and upload two CSV files to it: airlines.csv and countries.csv (the files can be found on GitHub at https://github.com/PacktPublishing/Azure-Data-Factory-Cookbook/tree/master/data). A short upload sketch follows this list.
  • Create an Azure SQL Database instance and name it AzureSQLDatabase. When you create the Azure SQL Database instance, you will have the option of creating a server on which the SQL database will be hosted. Create that server, and take note of the credentials you entered. You will need these credentials later when you log in to your database.

    Choose the basic configuration for your SQL server to save on costs. Once your instance is up and running, configure the firewall and network settings for the SQL server: make sure that you set the Allow Azure services and resources to access this server option to Yes. You may also need to create a rule to allow your own IP address to access the database:

    Figure 2.1 – Firewall configuration

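
If you prefer to script this part of the preparation rather than use the portal, the following minimal sketch creates the data container and uploads the two CSV files with the azure-storage-blob package. It is an illustration only: the storage key is a placeholder, and it assumes the CSV files downloaded from the book's GitHub repository sit in the current working directory.

from azure.core.exceptions import ResourceExistsError
from azure.storage.blob import BlobServiceClient

# Connect to the adforchestrationstorage account (the key is a placeholder).
service = BlobServiceClient(
    account_url="https://adforchestrationstorage.blob.core.windows.net",
    credential="<storage-account-key>")

# Create the "data" container if it does not already exist.
container = service.get_container_client("data")
try:
    container.create_container()
except ResourceExistsError:
    pass  # the container was already created in the portal

# Upload the two CSV files used throughout this chapter.
for name in ("airlines.csv", "countries.csv"):
    with open(name, "rb") as f:
        container.upload_blob(name=name, data=f, overwrite=True)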

Download the SQL scripts from GitHub at https://github.com/PacktPublishing/Azure-Data-Factory-Cookbook/tree/master/Chapter02/sql-scripts:

  • CreateAirlineTable.sql and CreateCountryTable.sql: These scripts will add two tables, Country and Airline, which are used in several recipes, including the first one.
  • CreateMetadataTable.sql: This will create the FileMetadata table and a stored procedure to insert data into that table. This table is necessary for the Using Metadata and Stored Procedure activities and Filtering your data and looping through your files recipes.
  • CreateActivityLogsTable.sql: This will create the PipelineLog table and a stored procedure to insert data into that table. This table is necessary for the Chaining and branching activities within your pipeline recipe.
  • CreateEmailRecipients.sql: This script will create the EmailRecipients table and populate it with a record. This table is used in the Using the Lookup, Web, and Execute Pipeline activities recipe. You will need to edit it to enter email recipient information.

Using parameters and built-in functions

In this recipe, we shall demonstrate the power and versatility of ADF by performing a common task: importing data from several files (blobs) from a storage container into tables in Azure SQL Database. We shall create a pipeline, define datasets, and use a Copy activity to tie all the pieces together and transfer the data. We shall also see how easy it is to back up data with a quick modification to the pipeline.

Getting ready

In this recipe, we shall be using most of the services that were mentioned in the Technical requirements section of this chapter. Make sure that you have access to Azure SQL Database (with the AzureSQLDatabase instance we created) and the Azure storage account with the necessary .csv files already uploaded.

How to do it…

First, open your Azure Data Factory instance in the Azure portal and go to the Author and Monitor interface. Here, we shall define the datasets for input files and database tables, and the linked services (for Azure Blob Storage and Azure SQL Database):

  1. Start by creating linked services for the Azure storage account and AzureSQLDatabase.
  2. Create the linked service for the adforchestrationstorage storage account:

    (a) In the Manage tab, select Linked Services and click on the New button. On the New linked service blade, select Azure Blob Storage:

    Figure 2.2 – The New linked service blade


    (b) On the next screen, configure the linked service connection properties as shown in the following screenshot:

    Figure 2.3 – Connection configurations for Azure Blob Storage


    Name your linked service according to your naming convention (in our example, we named it OrchestrationAzureBlobStorage1).

    Select the appropriate subscription and enter the name of your storage account (where you store the .csv files).

    For Integration Runtime, select AutoResolveIntegrationRuntime.

    For Authentication method, select Account Key.

    Note:

    In this recipe, we are using Account Key authentication to access our storage account. However, in your work environment, it is recommended to authenticate using Managed Identity, taking advantage of the Azure Active Directory service. This is more secure and allows you to avoid using credentials in your code. Find the references for more information about using Managed Identity with Azure Data Factory in the See also section of this recipe.

    (c) Click the Test Connection button at the bottom and verify that you can connect to the storage account.

    (d) Finally, click on the Create button and wait for the linked service to be created.

  3. Create the second linked service for AzureSQLDatabase:
    Figure 2.4 – Connection properties for Azure SQL Database


    (a) In the Manage tab, create a new linked service, but this time select Azure SQL from the choices in the New linked service blade. You can enter Azure SQL into the search field to find it easily.

    (b) Select the subscription information and the SQL server name (the dropdown will present you with choices). Once you have selected the SQL server name, you can select your database (AzureSQLDatabase) from the dropdown in the Database Name section.

    (c) Select SQL Authentication for Authentication Type. Enter the username and password for your database. 

    (d) Make sure to test the connection. If the connection fails, ensure that you configured access correctly in Firewall and Network Settings. Once you have successfully tested the connection, click on Create to save your linked service.

    Now, we shall create two datasets, one for each linked service.

  4. In the Author tab, define the dataset for Azure Storage as shown in the following screenshot:
    Figure 2.5 – Create a new dataset


    (a) Go to Datasets and click on New dataset. Select Azure Blob Storage from the choices and click Continue.

    (b) In the Select Format blade, select Delimited Text and hit Continue.

    (c) Call your new dataset CsvData and select OrchestrationAzureBlobStorage in the Linked Service dropdown.

    (d) With the help of the folder button, navigate to the data container and select any file from it to specify the file path:

    Figure 2.6 – Dataset properties


    (e) Check the First Row as Header checkbox and click on Create.

  5. In the same Author tab, create a dataset for the Azure SQL table:

    (a) Go to Datasets and click on New dataset.

    (b) Select Azure SQL from the choices in the New Dataset blade.

    (c) Name your dataset AzureSQLTables.

    (d) In the Linked Service dropdown, select AzureSQLDatabase. For the table name, select Country from the dropdown.

    (e) Click on Create.

  6. Parameterize the AzureSQLTables dataset:

    (a) In the Parameters tab, enter the name of your new parameter, tableName:

    Figure 2.7 – Parameterizing the dataset


    (b) Next, in the Connection tab, click on the Edit checkbox and enter dbo as schema and @dataset().tableName in the table text field, as shown in the following screenshot:

    Figure 2.8 – Specifying a value for the dataset parameter


  7. In the same way, parameterize and add dynamic content in the Connection tab for the CsvData dataset:

    (a) Select your dataset, open the Parameters tab, and create a parameter named filename.

    (b) In the Connection tab, in the File Path section, click inside the File text box, then click on the Add Dynamic Content link. This will bring up the Dynamic Content interface. In that interface, find the Parameters section and click on filename. This will generate the correct code to refer to the dataset's filename parameter in the dynamic content text box:

    Figure 2.9 – Dynamic content interface


    Click on the Finish button to finalize your choice.

    Verify that you can see both datasets under the Datasets tab:

    Figure 2.10 – Datasets in the Author tab


  8. We are now ready to design the pipeline.

    In the Author tab, create a new pipeline. Change its name to pl_orchestration_recipe_1.

  9. From the Move and Transform menu in the Activities pane (on the left), drag a Copy activity onto the canvas:
    Figure 2.11 – Pipeline canvas with a Copy activity


    At the bottom of the canvas, you will see several tabs: General, Source, Sink, and so on. Use them to configure your Copy activity.

    In the General tab, you can configure the name for your activity. Call it Copy From Blob to Azure SQL.

    In the Source tab, select the CsvData dataset and specify countries.csv in the filename textbox.

    In the Sink tab, select the AzureSQLTables dataset and specify Country in the tableName text field.

  10. We are ready to run the pipeline in Debug mode:

    Note

    You will learn more about using the debug capabilities of Azure Data Factory in Chapter 10, Monitoring and Troubleshooting Data Pipelines. In this recipe, we introduce you to the Output pane, which will help you understand the design and function of this pipeline.

    (a) Click the Debug button in the top panel. This will run your pipeline.

    (b) Put your cursor anywhere on the pipeline canvas. You will see the report with the status of the activities in the bottom panel in the Output tab:

    Figure 2.12 – Debug output


    Hover your cursor over the row representing the activity to see the inputs and outputs buttons. We shall make use of these in later chapters.

    After your pipeline has run, you should see that the dbo.Country table in your Azure SQL database has been populated with the countries data:

    Figure 2.13 – Contents of the Country table in Azure SQL Database


    We have copied the contents of the countries.csv file into the database. In the next steps, we shall demonstrate how parameterizing the datasets gives us the flexibility to define which file we want to copy and which SQL table we want as the destination, without redesigning the pipeline.

  11. Edit the pipeline: click on the Copy From Blob to Azure SQL activity to select it, and specify airlines.csv for the filename in the Source tab and Airline for the table name in the Sink tab. Run your pipeline again (in Debug mode), and you should see that the second table is populated with the data – using the same pipeline!
  12. Now, let's say we want to back up the contents of the tables in an Azure SQL database before overwriting them with data from .csv files. We can easily enhance the existing pipeline to accomplish this.
  13. Drag another instance of the Copy activity from the Activities pane, name it Backup Copy Activity, and configure it in the following way:

    (a) In the Source tab, select the AzureSQLTables dataset and enter Airline in the tableName text box.

    (b) In the Sink tab, select the CsvData dataset and enter the following expression in the filename text box: @concat('Airlines-', utcnow(), '.backup').

    (c) Connect Backup Copy Activity to the Copy From Blob to Azure SQL activity:

    Figure 2.14 – Adding backup functionality to the pipeline


  14. Run the pipeline in debug mode. After the run is complete, you should see the backup file in your storage account.
  15. We have created two linked services and two datasets, and we have a functioning pipeline. Click on the Publish All button at the top to save your work. The sketches that follow these steps show how the same resources could be defined programmatically.
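
Everything configured in the portal above is stored by Data Factory as resource definitions, and the same objects can be created programmatically. The following is a minimal sketch (not the book's own code) using the azure-mgmt-datafactory Python SDK; the subscription ID, resource group, factory name, and connection strings are placeholders you would replace with your own values.

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    AzureBlobStorageLinkedService, AzureSqlDatabaseLinkedService,
    LinkedServiceResource, LinkedServiceReference, SecureString,
    DatasetResource, DelimitedTextDataset, AzureSqlTableDataset,
    AzureBlobStorageLocation, ParameterSpecification)

adf = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")
rg, factory = "<resource-group>", "<data-factory-name>"

# Linked services: connections to the storage account and to Azure SQL Database.
adf.linked_services.create_or_update(
    rg, factory, "OrchestrationAzureBlobStorage1",
    LinkedServiceResource(properties=AzureBlobStorageLinkedService(
        connection_string=SecureString(value="<storage-connection-string>"))))
adf.linked_services.create_or_update(
    rg, factory, "AzureSQLDatabase",
    LinkedServiceResource(properties=AzureSqlDatabaseLinkedService(
        connection_string=SecureString(value="<sql-connection-string>"))))

# CsvData: a delimited-text dataset whose file name is a dataset parameter.
adf.datasets.create_or_update(
    rg, factory, "CsvData",
    DatasetResource(properties=DelimitedTextDataset(
        linked_service_name=LinkedServiceReference(
            type="LinkedServiceReference",
            reference_name="OrchestrationAzureBlobStorage1"),
        parameters={"filename": ParameterSpecification(type="String")},
        location=AzureBlobStorageLocation(
            container="data",
            file_name={"value": "@dataset().filename", "type": "Expression"}),
        first_row_as_header=True)))

# AzureSQLTables: an Azure SQL dataset whose table name is a dataset parameter.
adf.datasets.create_or_update(
    rg, factory, "AzureSQLTables",
    DatasetResource(properties=AzureSqlTableDataset(
        linked_service_name=LinkedServiceReference(
            type="LinkedServiceReference", reference_name="AzureSQLDatabase"),
        parameters={"tableName": ParameterSpecification(type="String")},
        schema_type_properties_schema="dbo",
        table={"value": "@dataset().tableName", "type": "Expression"})))

Each create_or_update call writes the live (published) resource, which is the programmatic counterpart of clicking Publish All.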
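
Continuing the same sketch, the pipeline itself is simply a list of activities. The dependency arrow drawn on the canvas in step 13 becomes a depends_on entry with the Succeeded condition, and the backup file name is the same @concat/utcnow() expression, wrapped in the {"value": ..., "type": "Expression"} form that the portal uses for dynamic content. All names remain placeholders.

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    PipelineResource, CopyActivity, DatasetReference, ActivityDependency,
    DelimitedTextSource, DelimitedTextSink, AzureSqlSource, AzureSqlSink,
    AzureBlobStorageWriteSettings, DelimitedTextWriteSettings)

adf = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")
rg, factory = "<resource-group>", "<data-factory-name>"

# Main copy: airlines.csv from Blob Storage into the dbo.Airline table.
copy_to_sql = CopyActivity(
    name="Copy From Blob to Azure SQL",
    inputs=[DatasetReference(type="DatasetReference", reference_name="CsvData",
                             parameters={"filename": "airlines.csv"})],
    outputs=[DatasetReference(type="DatasetReference", reference_name="AzureSQLTables",
                              parameters={"tableName": "Airline"})],
    source=DelimitedTextSource(),
    sink=AzureSqlSink())

# Backup copy: export the Airline table to a timestamped blob, but only after
# the first copy succeeds.
backup_copy = CopyActivity(
    name="Backup Copy Activity",
    depends_on=[ActivityDependency(activity="Copy From Blob to Azure SQL",
                                   dependency_conditions=["Succeeded"])],
    inputs=[DatasetReference(type="DatasetReference", reference_name="AzureSQLTables",
                             parameters={"tableName": "Airline"})],
    outputs=[DatasetReference(
        type="DatasetReference", reference_name="CsvData",
        parameters={"filename": {
            "value": "@concat('Airlines-', utcnow(), '.backup')",
            "type": "Expression"}})],
    source=AzureSqlSource(),
    sink=DelimitedTextSink(
        store_settings=AzureBlobStorageWriteSettings(),
        format_settings=DelimitedTextWriteSettings(file_extension=".backup")))

adf.pipelines.create_or_update(
    rg, factory, "pl_orchestration_recipe_1",
    PipelineResource(activities=[copy_to_sql, backup_copy]))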

Let's look at how this works!

How it works…

In this recipe, we became familiar with all the major components of an Azure Data Factory pipeline: linked services, datasets, and activities:

  • Linked services represent configured connections between your Data Factory instance and the service that you want to use.
  • Datasets are more granular: they represent the specific view of the data that your activities will use as input and output.
  • Activities represent the actions that are performed on the data. Many activities require you to specify where the data is extracted from and where it is loaded to. The ADF terms for these entities are source and sink.

Every pipeline that you design will have those components.

In steps 1 to 3, we created the linked services to connect to Azure Blob Storage and Azure SQL Database. Then, in steps 4 and 5, we created datasets that connected to those linked services and referred to specific files or tables. In steps 6 and 7, we created parameters that represented the data we referred to, and this allowed us to change which files we wanted to load into which tables without creating additional pipelines. In the remaining steps, we worked with instances of the Copy activity, specifying the inputs and outputs (sources and sinks) for the data.
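
In the recipe we ran the pipeline interactively with the Debug button. Once a pipeline has been published, a run can also be triggered and inspected programmatically. The hedged sketch below (azure-mgmt-datafactory SDK again, with placeholder names) retrieves roughly the same per-activity status information that the Output pane displays:

from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import RunFilterParameters

adf = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")
rg, factory = "<resource-group>", "<data-factory-name>"

# Trigger a run of the published pipeline and check its overall status.
run = adf.pipelines.create_run(rg, factory, "pl_orchestration_recipe_1")
print(adf.pipeline_runs.get(rg, factory, run.run_id).status)

# List per-activity results, similar to the Output pane in the portal.
now = datetime.now(timezone.utc)
activity_runs = adf.activity_runs.query_by_pipeline_run(
    rg, factory, run.run_id,
    RunFilterParameters(last_updated_after=now - timedelta(hours=1),
                        last_updated_before=now + timedelta(hours=1)))
for activity in activity_runs.value:
    print(activity.activity_name, activity.status)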

There's more…

We used a built-in function for generating UTC timestamps in step 13. Data Factory provides many convenient built-in functions and expressions, as well as system variables, for your use. To see them, click on the Backup Copy Activity in your pipeline and go to the Source tab below it. Put your cursor inside the tableName text field. You will see an Add dynamic content link appear underneath. Click on it, and you will see the Add dynamic content blade:

Figure 2.15 – Data Factory functions and system variables


This blade lists many useful functions and system variables to explore. We will use some of them in later recipes.
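
To give a flavour of what the blade offers, here is a small, purely illustrative selection of expressions you might paste into a dynamic content field, collected as plain strings in a Python snippet:

# Commonly used Data Factory expressions and system variables (illustrative).
adf_expressions = {
    "timestamped file name": "@concat('Airlines-', utcnow(), '.backup')",
    "current pipeline run ID": "@pipeline().RunId",
    "pipeline name": "@pipeline().Pipeline",
    "trigger start time": "@trigger().startTime",
    "date-only stamp": "@formatDateTime(utcnow(), 'yyyy-MM-dd')",
    "dataset parameter reference": "@dataset().tableName",
}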

See also

Microsoft maintains extensive documentation on Data Factory. For a more detailed explanation of the concepts used in this recipe, refer to the official Azure Data Factory documentation, in particular the pages on linked services, datasets, pipelines and activities, and the use of Managed Identity for authentication.

Key benefits

  • Learn how to load and transform data from various sources, both on-premises and in the cloud
  • Use Azure Data Factory’s visual environment to build and manage hybrid ETL pipelines
  • Discover how to prepare, transform, process, and enrich data to generate key insights

Description

Azure Data Factory (ADF) is a modern data integration tool available on Microsoft Azure. This Azure Data Factory Cookbook helps you get up and running by showing you how to create and execute your first job in ADF. You'll learn how to branch and chain activities, create custom activities, and schedule pipelines. This book will help you to discover the benefits of cloud data warehousing, Azure Synapse Analytics, and Azure Data Lake Storage Gen2, which are frequently used for big data analytics. With practical recipes, you'll learn how to actively engage with analytical tools from Azure Data Services and leverage your on-premises infrastructure with cloud-native tools to get relevant business insights. As you advance, you'll be able to integrate the most commonly used Azure services into ADF and understand how Azure services can be useful in designing ETL pipelines. The book will take you through the common errors that you may encounter while working with ADF and show you how to use the Azure portal to monitor pipelines. You'll also understand error messages and resolve problems in connectors and data flows with the debugging capabilities of ADF. By the end of this book, you'll be able to use ADF as the main ETL and orchestration tool for your data warehouse or data platform projects.

Who is this book for?

This book is for ETL developers, data warehouse and ETL architects, software professionals, and anyone who wants to learn about the common and not-so-common challenges faced while developing traditional and hybrid ETL solutions using Microsoft's Azure Data Factory. You’ll also find this book useful if you are looking for recipes to improve or enhance your existing ETL pipelines. Basic knowledge of data warehousing is expected.

What you will learn

  • Create an orchestration and transformation job in ADF
  • Develop, execute, and monitor data flows using Azure Synapse
  • Create big data pipelines using Azure Data Lake and ADF
  • Build a machine learning app with Apache Spark and ADF
  • Migrate on-premises SSIS jobs to ADF
  • Integrate ADF with commonly used Azure services such as Azure ML, Azure Logic Apps, and Azure Functions
  • Run big data compute jobs within HDInsight and Azure Databricks
  • Copy data from AWS S3 and Google Cloud Storage to Azure Storage using ADF's built-in connectors

Product Details

Publication date : Dec 24, 2020
Length : 382 pages
Edition : 1st
Language : English
ISBN-13 : 9781800565296
Vendor : Microsoft


Table of Contents

Chapter 1: Getting Started with ADF
Chapter 2: Orchestration and Control Flow
Chapter 3: Setting Up a Cloud Data Warehouse
Chapter 4: Working with Azure Data Lake
Chapter 5: Working with Big Data – HDInsight and Databricks
Chapter 6: Integration with MS SSIS
Chapter 7: Data Migration – Azure Data Factory and Other Cloud Services
Chapter 8: Working with Azure Services Integration
Chapter 9: Managing Deployment Processes with Azure DevOps
Chapter 10: Monitoring and Troubleshooting Data Pipelines
Other Books You May Enjoy

Customer reviews

Rating distribution: 4.2 (13 ratings)
5 star: 38.5%
4 star: 53.8%
3 star: 0%
2 star: 0%
1 star: 7.7%

Botta | Feb 09, 2021 | 5 stars | Amazon Verified review
I have read the whole book and I liked the way the author approached the topics. It has all the basics a beginner needs, starting from creating linked services, datasets, integration runtimes, triggers, data flows, and pipelines. Having gone through the whole book, you will find almost every scenario-based example you would need in your day-to-day job duties. It also covers all the various activities that we have in ADF today, along with the different types of triggers and their usages. Just go for it. Thank you, I really liked it.
Naga Pakalapati | Jan 16, 2021 | 5 stars | Amazon Verified review
DISCLAIMER: I received this book from the publisher for review. My opinion is completely unbiased and solely from a developer/reader point of view.

Overall review: I found this book to be quite resourceful for quickly introducing you to all the important concepts around ADF with hands-on examples and scenarios. This will be very useful to someone starting new with ADF, and also to those who are experienced, to get familiar with new concepts and new ways to do things. (I myself used some of the concepts to improve some of the pipelines at work.)

What I like:
- Neatly structured for easy reading and understanding. Exactly how a cookbook should be.
- Liked the way the technical requirements are well laid out before each chapter to make things easy.
- Probably the best thing about this book is that, along with covering important how-to concepts of ADF, it also gets you familiar with ADF integration with other Azure services like Synapse, ML, Databricks, ADLS, etc.
- Provides links to helpful resources if you want to dig deep into the concepts introduced.

What I don't like: There isn't much that is totally off with this book, but I personally felt that in places where the book says to "use this piece of code", a better explanation of how the code works was needed. Though every chapter has its own "How it works" section to explain what was done, additional explanation would be helpful. One other thing is that in Chapter 3, following the instructions as-is may not give you the expected results straight away; I had to make minor adjustments based on the errors received. (This is probably the best way to learn for someone who likes to debug issues, but it will be frustrating for those who are new or like things to work as described.)

What I would like to see: As mentioned earlier, this book is resourceful in many ways, but I would have liked to see some information on the following points:
- A section collating all roles and permissions with access levels. (This is explained as and when required in some places, but a dedicated section would be more informative.)
- Features that are missing or not available out of the box (but achievable with workarounds) in ADF compared to traditional ETL/ELT tools. This would be useful to those who are experienced in other integration/orchestration tools and are trying out ADF. (This may not be a need-to-have in a cookbook, but it is nice to have.)
AG | Apr 15, 2021 | 5 stars | Amazon Verified review
Overview: This book is targeted at people with some basic understanding of data warehousing who are looking to enhance their understanding of the modern data warehousing tools provided by Microsoft Azure.

What I like about this book: I really like how this book is laid out. There is a brief overview of the topics, listed in bullet points at the beginning of each chapter, which sets your expectations for what will be covered in the upcoming pages. I also like how this book exhibits some examples before going into too much detail, which is very helpful in understanding the concepts. Wherever there is a process or concept, the book shows how to do it in several ways: doing it in Azure, using the Copy Data tool, and using Python, PowerShell, and templates. The writers have tried their best to explain the concepts using diagrams and screenshots.

This is a cookbook and hence it covers various recipes. In the first few chapters, the author starts with a very basic recipe for creating an account on Microsoft Azure; GitHub links to scripts are also provided so the reader can execute all the steps in their own MS Azure account. There are recipes for creating and executing your first job in ADF, designing ETL processes, and the features of cloud data warehousing and Azure Synapse Analytics. As the book advances, it dives into Azure Data Lake concepts: connecting to Azure Data Lake, loading data, and so on. The book also provides good content on when businesses need to move data between different cloud providers, including AWS and Google Cloud. It ends nicely by covering how to manage deployment processes with Azure DevOps, and how to monitor and troubleshoot data pipelines.

What I disliked: If I really have to say what one thing I would add to this book, it would be more screenshots. Other than that, it's a very well written book.
Amazon Verified review Amazon
Sathishkumar Chinnadurai | Jan 31, 2022 | 5 stars | Amazon Verified review
Good
Rachael F. | Jul 29, 2021 | 5 stars | Amazon Verified review
Learned enough about ADF to get me comfortable at work.

FAQs

What is included in a Packt subscription?

A subscription provides you with full access to view all Packt and licensed content online, including exclusive access to Early Access titles. Depending on the tier chosen, you can also earn credits and discounts to use towards owning content.

How can I cancel my subscription?

To cancel your subscription, go to the account page, found in the top right of the page or at https://subscription.packtpub.com/my-account/subscription. From there, you will see the 'cancel subscription' button in the grey box containing your subscription information.

What are credits?

Credits can be earned by reading 40 sections of any title within the payment cycle, which is a month starting from the day of subscription payment. You also earn a credit every month if you subscribe to our annual or 18-month plans. Credits can be used to buy books DRM-free, the same way that you would pay for a book. Your credits can be found on the subscription homepage, subscription.packtpub.com, by clicking on the 'My Library' dropdown and selecting 'credits'.

What happens if an Early Access Course is cancelled?

Projects are rarely cancelled, but sometimes it's unavoidable. If an Early Access course is cancelled or excessively delayed, you can exchange your purchase for another course. For further details, please contact us here.

Where can I send feedback about an Early Access title?

If you have any feedback about the product you're reading, or Early Access in general, then please fill out a contact form here and we'll make sure the feedback gets to the right team. 

Can I download the code files for Early Access titles?

We try to ensure that all books in Early Access have code available to use, download, and fork on GitHub. This helps us be more agile in the development of the book, and helps keep the often changing code base of new versions and new technologies as up to date as possible. Unfortunately, however, there will be rare cases when it is not possible for us to have downloadable code samples available until publication.

When we publish the book, the code files will also be available to download from the Packt website.

How accurate is the publication date?

The publication date is as accurate as we can be at any point in the project. Unfortunately, delays can happen. Often those delays are out of our control, such as changes to the technology code base or delays in the tech release. We do our best to give you an accurate estimate of the publication date at any given time, and as more chapters are delivered, the more accurate the delivery date will become.

How will I know when new chapters are ready?

We'll let you know every time there has been an update to a course that you've bought in Early Access. You'll get an email to let you know there has been a new chapter, or a change to a previous chapter. The new chapters are automatically added to your account, so you can also check back there any time you're ready and download or read them online.

I am a Packt subscriber, do I get Early Access?

Yes, all Early Access content is fully available through your subscription. You will need to have a paid or active trial subscription in order to access all titles.

How is Early Access delivered?

Early Access is currently only available as a PDF or through our online reader. As we make changes or add new chapters, the files in your Packt account will be updated so you can download them again or view them online immediately.

How do I buy Early Access content?

Early Access is a way of us getting our content to you quicker, but the method of buying the Early Access course is still the same. Just find the course you want to buy, go through the check-out steps, and you’ll get a confirmation email from us with information and a link to the relevant Early Access courses.

What is Early Access?

Keeping up to date with the latest technology is difficult; new versions, new frameworks, new techniques. This feature gives you a head start on our content as it's being created. With Early Access you'll receive each chapter as it's written, and get regular updates throughout the product's development, as well as the final course as soon as it's ready.

We created Early Access as a means of giving you the information you need as soon as it's available. As we go through the process of developing a course, 99% of it can be ready but we can't publish until that last 1% falls into place. Early Access helps to unlock the potential of our content early, to help you start your learning when you need it most. You not only get access to every chapter as it's delivered, edited, and updated, but you'll also get the finalized, DRM-free product to download in any format you want when it's published. As a member of Packt, you'll also be eligible for our exclusive offers, including a free course every day, and discounts on new and popular titles.