Hands-On Data Warehousing with Azure Data Factory: ETL techniques to load and transform data from various sources, both on-premises and on cloud

By Cote, Michelle Gutzait, Giuseppe Ciaburro

eBook | May 2018 | 284 pages | 1st Edition | Rating: 2.8 (10 ratings)


The Modern Data Warehouse

Azure Data Factory (ADF) is a service available in the Microsoft Azure ecosystem. It orchestrates different data loads and transfers in Azure.

Back in 2014, there were hardly any easy ways to schedule data transfers in Azure. A few open source solutions were available, such as Apache Falcon and Oozie, but nothing was easily available as a service in Azure. Microsoft introduced ADF in public preview in October 2014, and the service reached general availability in July 2015.

The service allows the following actions:

  • Copying data between various sources and destinations
  • Calling various compute services, such as HDInsight and Azure SQL Data Warehouse, to transform data
  • Orchestrating the preceding activities using time slices, and retrying the activities when an error occurs (see the sketch below)

All these activities were initially available via the Azure portal, and in Visual Studio 2013 before general availability (GA).
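To make the time-slice model concrete, here is a minimal sketch of a V1-style dataset definition and activity policy, written as Python dictionaries that mirror the JSON ADF V1 expected. All names (SalesInputBlob, SalesBlobStore) are placeholders for illustration, not artifacts from this book.

```python
# Sketch of an ADF V1 dataset and activity policy, shown as Python dicts
# mirroring the JSON that V1 expected. All names are placeholders.

# The "availability" section declares hourly time slices; V1 used these
# slices to schedule each run that produced this dataset.
v1_dataset = {
    "name": "SalesInputBlob",
    "properties": {
        "type": "AzureBlob",
        "linkedServiceName": "SalesBlobStore",
        "typeProperties": {"folderPath": "sales/{Year}/{Month}/{Day}/"},
        "availability": {"frequency": "Hour", "interval": 1},
    },
}

# The activity "policy" controlled what happened when a slice failed:
# here, retry up to 3 times, with a 1-hour timeout per attempt.
v1_activity_policy = {"timeout": "01:00:00", "retry": 3, "concurrency": 1}
```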

The need for a data warehouse

A data warehouse is a repository of enterprise data used for reporting and analysis. There have been three waves of data warehouses so far, which we will cover in the upcoming subsections.

Driven by IT

This is the first wave of business intelligence (BI). IT needed to separate operational data and databases from their origins for the following reasons:

  • To keep a history of data changes, since some operational applications purge their data after a while.
  • To protect performance: when users reported on an application's data directly, they often affected the performance of the system. IT replicated the operational data to another server to avoid any performance impact on applications.
  • Things got more complex when users wanted to analyze and report on data from multiple enterprise applications. IT had to replicate all the needed systems and make them work together. This meant that new structures had to be built, and new patterns emerged from there: star schemas, decision support systems (DSS), OLAP cubes, and so on.

Self-service BI

Analysts and users always need data warehouses to evolve at a faster pace. This is the second wave of BI, and it happened when major BI players such as Microsoft and Qlik came out with tools that enabled users to merge data with or without a data warehouse. In many enterprises, these tools are used as a temporary source of analytics or for proofs of concept. On the other hand, not all data could fit in data warehouses at that time. Many ad hoc reports were, and still are, built with self-service BI tools. Here is a short list of such tools:

  • Microsoft Power Pivot
  • Microsoft Power BI
  • Qlik

Cloud-based BI – big data and artificial intelligence

This is the third wave of BI. Cloud capabilities enable enterprises to perform more accurate analyses. Big data technologies allow users to base their analyses on much larger data volumes. This helps them derive patterns from the data and use technologies that incorporate and refine these patterns, which leads to artificial intelligence (AI).

The technologies used in big data are not that new. They were used by many search engines, such as Yahoo! and Google, in the early 21st century. They have also been used extensively by research departments in various enterprises. The third wave of BI broadened the usage of these technologies, and vendors such as Microsoft, Amazon, and Google now make them available to almost everyone through their cloud offerings.

The modern data warehouse

Microsoft, as well as many other service providers, has outlined the concept of the modern data warehouse. Here are some of the many features a modern data warehouse should have:

  • Integration of relational as well as non-relational sources: The data warehouse should be able to ingest data that is not easily integrated into a traditional data warehouse, such as big data and other non-relational data.
  • Hybrid deployment: It should be possible to extend the data warehouse from on-premises storage to the cloud.
  • Advanced analytics: The data warehouse should be able to analyze data from all kinds of datasets using different modern machine learning tools.
  • In-database analytics: The data warehouse should be able to use Microsoft software that integrates powerful open analytics tools, such as R and Python, directly in the database. Also, with PolyBase integration, a SQL Server-based data warehouse can integrate even more data sources.

Main components of a data warehouse

This section will discuss the various parts of a data warehouse.

Staging area

In a classic data warehouse, this zone is usually a database, and/or a schema within one, that is used to hold a copy of the data from the source systems. The staging area is necessary because, most of the time, data sources are not stored on the same server as the data warehouse. Even if they are on the same server, we prefer to work with a copy of the data for the following reasons:

  • To preserve data integrity: all data is copied over as of a specific point in time, which ensures consistency between tables.
  • We might need specific indexes that we could not create in the source system. When we query the data, we're not necessarily making the same links (joins) as the source system does, so we might have to create indexes to increase query performance.
  • Querying the source might have an impact on the performance of the source application. Usually, the staging area is used to bring over just the changes from the source systems, which avoids processing too much data from the data source.

The data source might also be files: CSV, XML, and so on. It is much easier to work with their content once it has been brought into relational tables. From a modern data warehouse perspective, this can mean storing the files in HDFS and partitioning them by date.

In a modern data warehouse, if we're in the cloud only, relational data can still be stored in databases. The only difference might be the location of the databases. In Azure, we can use Azure SQL Database tables or Azure SQL Data Warehouse.
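To illustrate the "bring just the changes" approach described above, here is a hedged Python sketch of a watermark-based incremental load into a staging database, using pyodbc. The tables (Sales.Orders, stg.Orders, stg.Watermark), columns, and connection strings are hypothetical, not objects defined in this book.

```python
# Watermark-based incremental load into a staging area: a sketch only.
# All table and column names are hypothetical placeholders.
import pyodbc

SOURCE_CONN_STR = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=<source>;DATABASE=Sales;Trusted_Connection=yes"
STAGING_CONN_STR = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=<staging>;DATABASE=Staging;Trusted_Connection=yes"

src = pyodbc.connect(SOURCE_CONN_STR)    # operational source system
stg = pyodbc.connect(STAGING_CONN_STR)   # staging database (e.g., Azure SQL)

# 1. Read the watermark recorded by the previous load.
last_wm = stg.execute(
    "SELECT WatermarkValue FROM stg.Watermark WHERE TableName = ?", "Sales.Orders"
).fetchone()[0]

# 2. Pull only the rows changed since then, limiting the impact on the source.
rows = src.execute(
    "SELECT OrderID, CustomerID, Amount, ModifiedDate "
    "FROM Sales.Orders WHERE ModifiedDate > ?", last_wm
).fetchall()

# 3. Land the delta in staging and advance the watermark in one transaction.
if rows:
    cur = stg.cursor()
    cur.executemany(
        "INSERT INTO stg.Orders (OrderID, CustomerID, Amount, ModifiedDate) "
        "VALUES (?, ?, ?, ?)", [tuple(r) for r in rows]
    )
    cur.execute(
        "UPDATE stg.Watermark SET WatermarkValue = ? WHERE TableName = ?",
        max(r.ModifiedDate for r in rows), "Sales.Orders",
    )
    stg.commit()
```

Advancing the watermark to the maximum ModifiedDate actually loaded, rather than to the current time, avoids silently skipping rows committed while the extract was running.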

Data warehouse

This is where the data is copied over from the staging area. There are several schools of thought that define the data warehouse:

  • Kimball group data warehouse bus: Ralph Kimball was a pioneer in data warehousing. He and his colleagues wrote many books and articles on their method. It consists of conformed dimensions that can be used by many business processes. For example, if we have a dimension named DimCustomer, we should link it to all fact tables that store customer-related facts. We should not create another dimension that redefines our customers. The following link gives more information on the Kimball group method: https://www.kimballgroup.com.
  • Inmon CIF: Bill Inmon and his colleagues defined the corporate information factory (CIF) at the end of the 1990s. It consists of modeling the source systems, commonly using the third normal form. All the data in the tables is dated, which means that any change in the data sources is inserted into the data warehouse tables. The following link gives more information on CIF: http://www.inmoncif.com.
  • Data Vault: Created by Dan Linstedt in the 21st century, this is the latest and most efficient modeling method in data warehousing. It consists of breaking down the source data into many different entities, which gives a lot of flexibility when the data is consumed. We have to reconstruct the data and use the necessary pieces for our analysis. Here is a link that gives more information on Data Vault: http://learndatavault.com.

Cubes

In addition to the relational data warehouse, we might have a cube, such as one built with SQL Server Analysis Services. Cubes don't replace the relational data warehouse; they extend it. They can also connect to parts of the warehouse that are not necessarily stored in a relational database. By doing this, they become a semantic layer that can be used by the consumption layer described next.

Consumption layer – BI and analytics

This area is where the data is consumed from the data warehouse and/or the data lake. This book has a chapter dedicated to the data lake. In short, the data lake is composed of several areas (data ponds) that classify the data inside it. The data warehouse is a part of the data lake; it contains the certified data. The data outside the data warehouse in the data lake is, most of the time, non-certified. It is used for ad hoc analysis or data discovery.

The BI part can be stored in relational databases, analytic cubes, or models. It can also consist of views on top of the data warehouse when the data is suitable for it.

What is Azure Data Factory?

Azure data factories are composed of the following components:

  • Linked services: Connectors to the various storage and compute services. For example, we can have a pipeline that uses the following artifacts:
    • HDInsight cluster on demand: Access to the HDInsight compute service to run a Hive script that uses HDFS external storage
    • Azure Blob storage/Azure SQL Database: Once the Hive job has run, this retrieves the data from Azure and copies it to an Azure SQL database
  • Datasets: These are layers over the data used in pipelines. A dataset uses a linked service.
  • Pipeline: The pipeline is the link between all datasets. It contains the activities that initiate data movements and transformations. It is the engine of the factory; without pipelines, nothing moves in the factory. The sketch below shows how these three components reference each other.
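Here is a minimal sketch of the three component types and how they reference each other, written as Python dictionaries mirroring the JSON that ADF stores. All names (SalesBlobStorage, SalesInputBlob, SalesSqlTable, and so on) are placeholders, not artifacts built later in this book.

```python
# How linked services, datasets, and pipelines reference each other in ADF,
# sketched as Python dicts mirroring the stored JSON. Names are placeholders.

# A linked service: the connection to a storage service.
blob_linked_service = {
    "name": "SalesBlobStorage",
    "properties": {
        "type": "AzureStorage",
        "typeProperties": {"connectionString": "<storage connection string>"},
    },
}

# A dataset: a named layer over data, bound to the linked service above.
input_dataset = {
    "name": "SalesInputBlob",
    "properties": {
        "type": "AzureBlob",
        "linkedServiceName": {"referenceName": "SalesBlobStorage",
                              "type": "LinkedServiceReference"},
        "typeProperties": {"folderPath": "sales/incoming/"},
    },
}

# A pipeline: activities that move data between datasets.
pipeline = {
    "name": "CopySalesPipeline",
    "properties": {
        "activities": [{
            "name": "CopyBlobToSql",
            "type": "Copy",
            "inputs": [{"referenceName": "SalesInputBlob", "type": "DatasetReference"}],
            "outputs": [{"referenceName": "SalesSqlTable", "type": "DatasetReference"}],
            "typeProperties": {"source": {"type": "BlobSource"},
                               "sink": {"type": "SqlSink"}},
        }]
    },
}
```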

Limitations of ADF V1.0

As good as ADF was, and although a lot of features were added to it after its GA in 2015, it had a few limitations. At first, we relied quite a lot on JSON to define the various ADF artifacts, and the number of supported data stores and compute capabilities was quite limited.

The development experience was also very different from V2.0. We could use the Author and Deploy capability in the portal, but it only gave us JSON templates to edit.

As we will see later in this book, the new V2.0 factory has a much better development experience.

When it came to source control, we had to rely on Visual Studio integration. From Visual Studio, we could create or import an existing factory and therefore, use the source control of our choice to version it.

What's new in V2.0?

ADF has been completely overhauled in V2. This section describes the main novelties of ADF V2.

Integration runtime

This is one of the main features of version 2.0. An integration runtime represents the compute infrastructure that performs data integration across networks. Here are some of the capabilities it provides:

  • Data movement between public and private networks, either on-premises or through a virtual private network (VPN). These runtimes were known as data management gateways in V1 and in Power BI.
    • Public: These are used by Azure and other cloud connections. A default integration runtime comes with ADF.
    • Private: These are used to connect private compute resources, such as an on-premises SQL Server, to ADF. We need to install a service on a Windows machine in the private network. That machine can connect to the enterprise resources and send the data to ADF via the service installed on it (see the sketch after this list).
  • SSIS package execution: running and managing SSIS packages in Azure. This is one of the main topics of this book. Chapter 3, SSIS Lift and Shift, is completely dedicated to this feature.
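As a hedged sketch, the self-hosted (private) runtime can be declared through the azure-mgmt-datafactory Python SDK, assuming that package and azure-identity are installed and authenticated. The subscription, resource group, factory, and runtime names are placeholders.

```python
# Hedged sketch: declaring a self-hosted integration runtime with the
# azure-mgmt-datafactory SDK. All resource names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    IntegrationRuntimeResource,
    SelfHostedIntegrationRuntime,
)

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Declare the runtime on the factory side.
ir = IntegrationRuntimeResource(
    properties=SelfHostedIntegrationRuntime(
        description="Bridge between on-premises SQL Server and ADF"
    )
)
adf_client.integration_runtimes.create_or_update(
    "myResourceGroup", "myDataFactory", "OnPremIR", ir
)
# The Windows machine inside the private network then installs the
# self-hosted IR service and registers itself with "OnPremIR" using an
# authentication key obtained from the Azure portal.
```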

Linked services

Linked services now have a connectVia property so that they can use the integration runtimes mentioned earlier in this chapter. They can also connect to many more data stores than was possible before.
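For example, here is a hedged sketch of a V2 linked service that reaches an on-premises SQL Server through the self-hosted integration runtime declared earlier, expressed as a Python dict mirroring the stored JSON. The server, database, and runtime names are placeholders.

```python
# A V2 linked service routed through a self-hosted integration runtime
# via the connectVia property. All names are placeholders.
on_prem_sql_linked_service = {
    "name": "OnPremSqlServer",
    "properties": {
        "type": "SqlServer",
        "typeProperties": {
            "connectionString": "Server=myserver;Database=Sales;Integrated Security=True"
        },
        # Without connectVia, ADF would try to reach the server from the
        # default (public) runtime and fail for a private network.
        "connectVia": {
            "referenceName": "OnPremIR",
            "type": "IntegrationRuntimeReference",
        },
    },
}
```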

Datasets

Datasets are the same as they were in V1, but we no longer need to define availability schedules in them, which gives them much more flexibility in their usage. In conjunction with linked services, datasets now have access to a whole range of new data stores, both as sources and as destinations.

Pipelines

Pipelines have been modified quite a lot in V2. They no longer have windows of execution with start times and end times. Pipelines can now be executed using the following techniques (both trigger kinds are sketched after this list):

  • On demand via .NET, PowerShell, the REST API, or Python
  • Triggers:
    • Schedule trigger: This trigger uses a wall-clock schedule; for example, a pipeline can be executed on a weekly basis, every Tuesday and Thursday at 10:00 AM
    • Tumbling window trigger: This works on a periodic interval; for example, every 15 minutes between two specific dates
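Here are hedged sketches of the two trigger kinds, as Python dicts mirroring the trigger JSON. The pipeline name and dates are placeholders.

```python
# Wall-clock schedule: every Tuesday and Thursday at 10:00 AM.
schedule_trigger = {
    "name": "WeeklyTrigger",
    "properties": {
        "type": "ScheduleTrigger",
        "typeProperties": {
            "recurrence": {
                "frequency": "Week",
                "interval": 1,
                "startTime": "2018-06-01T00:00:00Z",
                "timeZone": "UTC",
                "schedule": {"weekDays": ["Tuesday", "Thursday"],
                             "hours": [10], "minutes": [0]},
            }
        },
        "pipelines": [{"pipelineReference": {
            "referenceName": "CopySalesPipeline", "type": "PipelineReference"}}],
    },
}

# Tumbling window: fires every 15 minutes between two fixed dates.
tumbling_window_trigger = {
    "name": "FifteenMinuteWindows",
    "properties": {
        "type": "TumblingWindowTrigger",
        "typeProperties": {
            "frequency": "Minute",
            "interval": 15,
            "startTime": "2018-06-01T00:00:00Z",
            "endTime": "2018-06-30T00:00:00Z",
        },
        "pipeline": {"pipelineReference": {
            "referenceName": "CopySalesPipeline", "type": "PipelineReference"}},
    },
}
```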

Activities

Pipelines now have the following control activities:

  • Execute pipeline activity: Calls another pipeline in the same factory.
  • ForEach activity: Executes activities in a loop, similar to a for each loop in structured programming languages (see the sketch after this list).
  • Web activity: Used to call custom REST endpoints.
  • Lookup activity: Gets a record, or a set of records, from an external data source. The output can be used by subsequent activities.
  • Get metadata activity: Gets the metadata of data used in ADF, such as a dataset's size or structure.
  • Until activity: Loops the execution of a set of activities until an associated condition evaluates to true.
  • If condition activity: This is like an if statement in standard programming languages.
  • Wait activity: Pauses the pipeline for a specified time before resuming the execution of subsequent activities.
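As a hedged sketch of how these control activities chain together, here is a Lookup that fetches a list of table names and a ForEach that copies each one, in the dict form of the pipeline JSON. The dataset names and query are placeholders.

```python
# A Lookup feeding a ForEach: dict form of the activity JSON.
# Dataset names and the control query are placeholders.
lookup_activity = {
    "name": "GetTableList",
    "type": "Lookup",
    "typeProperties": {
        "source": {"type": "SqlSource",
                   "sqlReaderQuery": "SELECT name FROM sys.tables"},
        "dataset": {"referenceName": "ControlDataset", "type": "DatasetReference"},
        "firstRowOnly": False,   # return the whole record set, not one row
    },
}

foreach_activity = {
    "name": "CopyEachTable",
    "type": "ForEach",
    # Run only after the lookup has succeeded, iterating over its output.
    "dependsOn": [{"activity": "GetTableList",
                   "dependencyConditions": ["Succeeded"]}],
    "typeProperties": {
        "items": {"value": "@activity('GetTableList').output.value",
                  "type": "Expression"},
        "activities": [
            # One copy per item; inside the loop, @item().name is the table name.
            {"name": "CopyOneTable", "type": "Copy",
             "inputs": [{"referenceName": "SourceTableDataset",
                         "type": "DatasetReference"}],
             "outputs": [{"referenceName": "StagingTableDataset",
                          "type": "DatasetReference"}],
             "typeProperties": {"source": {"type": "SqlSource"},
                                "sink": {"type": "SqlSink"}}},
        ],
    },
}
```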

Parameters

Parameters can now be used in pipelines. They are read-only values that are passed in when the pipeline is executed manually or by a trigger.

Expressions

In V1, functions could be used to filter dataset queries. In V2, expressions can be used almost anywhere in JSON-defined factory objects, as shown in the sketch below.
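The following hedged sketch combines the two features: the pipeline declares a read-only parameter, an activity consumes it through an expression, and the run is started on demand with a concrete value through the Python SDK (reusing the adf_client from the integration runtime sketch). All names are placeholders.

```python
# A parameterized pipeline whose copy activity builds its source query
# from an expression. Dict form of the JSON; names are placeholders.
parameterized_pipeline = {
    "name": "LoadByDate",
    "properties": {
        "parameters": {"windowStart": {"type": "String"}},
        "activities": [{
            "name": "CopyRecentOrders",
            "type": "Copy",
            "inputs": [{"referenceName": "OrdersSource", "type": "DatasetReference"}],
            "outputs": [{"referenceName": "OrdersStaging", "type": "DatasetReference"}],
            "typeProperties": {
                "source": {
                    "type": "SqlSource",
                    # Evaluated at run time; '' escapes a quote inside an
                    # ADF expression string literal.
                    "sqlReaderQuery": {
                        "value": "@concat('SELECT * FROM Sales.Orders WHERE OrderDate >= ''', pipeline().parameters.windowStart, '''')",
                        "type": "Expression",
                    },
                },
                "sink": {"type": "SqlSink"},
            },
        }],
    },
}

# On-demand execution, passing the parameter value.
run = adf_client.pipelines.create_run(
    "myResourceGroup", "myDataFactory", "LoadByDate",
    parameters={"windowStart": "2018-05-01"},
)
print(run.run_id)
```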

Controlling the flow of activities

Calling activities is much more flexible in V2 than it was in V1. As listed in the Activities section, there are many new control activities, such as ForEach, If condition, Until, Lookup, and so on.

SSIS package deployment in Azure

There is now a new Azure-SSIS integration runtime that completely manages clusters of Azure VMs dedicated to running SSIS in the cloud. With this runtime, packages are deployed in the same manner as they are deployed on-premises. SQL Server Data Tools (SSDT) or SQL Server Management Studio (SSMS) can be used to deploy SSIS packages.

Spark cluster data store

There are many more data stores available now.

Spark clusters are now available in V2. Since Spark is very performant and keeps integrating more functionality, it has become an almost essential player in the big data world. In the previous version of ADF, Spark clusters were only reachable via MapReduce custom activities. In this version, Spark is a first-class citizen, so there are no more headaches when it comes to integrating it into our data flows.
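As a hedged illustration of Spark as a first-class citizen, here is a sketch of an HDInsight Spark activity in a V2 pipeline, pointing at a cluster linked service and a Python script held in blob storage. Dict form of the activity JSON; the cluster, storage, and script names are placeholders.

```python
# Sketch of a first-class Spark activity in a V2 pipeline.
# Cluster, storage, and script names are placeholders.
spark_activity = {
    "name": "TransformSalesWithSpark",
    "type": "HDInsightSpark",
    # The compute linked service that points at the HDInsight cluster.
    "linkedServiceName": {"referenceName": "MyHDInsightCluster",
                          "type": "LinkedServiceReference"},
    "typeProperties": {
        # Root folder and entry-point script in the job's storage account.
        "rootPath": "adfspark",
        "entryFilePath": "scripts/transform_sales.py",
        "sparkJobLinkedService": {"referenceName": "SalesBlobStorage",
                                  "type": "LinkedServiceReference"},
    },
}
```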

Summary

In this chapter, we looked at the features of a modern data warehouse, as well as the new features added in version 2.0 of ADF.

In the next chapter, we will use the data factory to move data from Azure SQL to Azure storage.

 


Key benefits

  • Combine the power of Azure Data Factory v2 and SQL Server Integration Services
  • Design and enhance performance and scalability of a modern ETL hybrid solution
  • Interact with the loaded data in data warehouse and data lake using Power BI

Description

ETL is one of the essential techniques in data processing. Given that data is everywhere, ETL will always remain the vital process for handling data from different sources. Hands-On Data Warehousing with Azure Data Factory starts with the basic concepts of data warehousing and the ETL process. You will learn how Azure Data Factory and SSIS can be used to understand the key components of an ETL solution. You will go through the different services offered by Azure that can be used by ADF and SSIS, such as Azure Data Lake Analytics, Machine Learning, and Databricks Spark, with the help of practical examples. You will explore how to design and implement ETL hybrid solutions using different integration services with a step-by-step approach. Once you get to grips with all this, you will use Power BI to interact with data coming from different sources in order to reveal valuable insights. By the end of this book, you will not only know how to build your own ETL solutions but also be able to address the key challenges that are faced while building them.

Who is this book for?

This book is for you if you are a software professional who develops and implements ETL solutions using Microsoft SQL Server or Azure cloud. It will be an added advantage if you are a software engineer, DW/ETL architect, or ETL developer, and know how to create a new ETL implementation or enhance an existing one with ADF or SSIS.

What you will learn

  • Understand the key components of an ETL solution using Azure Data Factory and Integration Services
  • Design the architecture of a modern ETL hybrid solution
  • Implement ETL solutions for both on-premises and Azure data
  • Improve the performance and scalability of your ETL solution
  • Gain thorough knowledge of new capabilities and features added to Azure Data Factory and Integration Services

Product Details

Publication date: May 31, 2018
Length: 284 pages
Edition: 1st
Language: English
ISBN-13: 9781789130096





Table of Contents

7 Chapters
1. The Modern Data Warehouse
2. Getting Started with Our First Data Factory
3. SSIS Lift and Shift
4. Azure Data Lake
5. Machine Learning on the Cloud
6. Introduction to Azure Databricks
7. Reporting on the Modern Data Warehouse

Customer reviews

Overall rating: 2.8 out of 5 (10 ratings)
5 star: 10% | 4 star: 30% | 3 star: 10% | 2 star: 30% | 1 star: 20%

Amazon Customer, Dec 22, 2018. Rating: 5/5 (Amazon verified review)
good
Marie Conti, Sep 27, 2020. Rating: 4/5 (Amazon verified review)
As a hands-on title, the practical examples with screenshots let you dive right in. The color graphics can be downloaded from the book's website. One small drawback: in some places I would have liked more depth, and for more advanced topics you cannot get by without Microsoft's documentation. But if you want a quick, practice-oriented introduction, this is the right book.
Mario Anzaldua, Aug 23, 2018. Rating: 4/5 (Amazon verified review)
Yes, this book is written for the Azure Data Factory (ADF) beginner. It is filled with detailed instructions on how to work with the various Azure modules/services to perform data warehousing activities. The book contains ~275 pages of DW background and setup information across the introduction and 6 project chapters. Each chapter covers nearly every step necessary to complete a specific project, but I would recommend that readers have a minimum level of competence with Azure. Readers should be comfortable navigating the various Azure services, or at least Azure SQL DB, Storage, and Data Factory. You might want to work through a few tutorials so that you can follow along with the book examples. The book contains numerous screenshots that duplicate the detailed configurations in each of the Azure service blades. Chapter 5 contains a nice intro to Azure Machine Learning that stands on its own. I was happy to find that every exercise in the book succeeded. Following the instructions in each chapter, I ran my jobs to success. Now, that said, I did run into a few challenges. Chapter 4, Azure Data Lake, requires a Service Principal ID and authentication. The book points readers to Microsoft documentation for this. I spent a day online before getting this to work. I also ran into a couple of hurdles with delimiters in the sample data. One dataset is .csv and has a City, ST field that needs to be worked around for proper processing. You will have to tweak the Databricks queries a bit. Easy enough to fix, but beware that your solution must preserve this column intact, as you will be joining on this field in the final Power BI report for Chapter 7. Hands-On Data Warehousing is full of useful tips/tricks/steps that get you going with ADF v2. It was great for me as a beginner and saved many hours (days) of online tutorials and research. I finished the book in 1 week. If you need a jump-start on Data Factory, this does the job, and the few hurdles keep you aware of what you are trying to achieve. The last chapter brings all your effort together for a nice finish. 4 stars.
KingDragonfly, Apr 16, 2019. Rating: 4/5 (Amazon verified review)
Like a lot of Azure books, this one's a little shorter than I'd like. It was let down by its machine learning section, which somehow was too broad and too specific at the same time; its target audience seems to be mathematicians. This section should probably have been removed and expanded into its own book. For those wanting to learn machine learning, search Google for "Machine learning algorithm cheat sheet for Microsoft Azure Machine Learning Studio." Otherwise the rest of the book was good, in particular its mention of how to use existing SSIS packages.
Amazon Verified review Amazon
Sean Forgatch, Aug 11, 2018. Rating: 3/5 (Amazon verified review)
This book provides a good overview of some data warehousing related tools in Azure; it does not go in depth with any particular tool and provides a 100-level overview. If you are just starting out, this is a good choice for you; if you're at all experienced with Azure, it will not bring much value.