Chapter 5: Working with Big Data – HDInsight and Databricks
Azure Data Factory (ADF) integrates closely with Azure's big data services, letting you build fast, scalable ETL/ELT pipelines and manage petabytes of data with ease. Setting up a production-ready cluster for data engineering jobs has traditionally been a daunting task, and estimating workloads and planning autoscaling capacity can be tricky. Azure HDInsight clusters and Azure Databricks remove much of this burden: any Azure practitioner can now spin up an Apache Hive, Apache Spark, or Apache Kafka cluster in minutes.
In this chapter, we are going to cover the following recipes, which will help you build your ETL infrastructure:
- Setting up an HDInsight cluster
- Processing data from Azure Data Lake with HDInsight and Hive
- Processing big data with Apache Spark
- Building a machine learning app with Databricks and Azure Data Lake Storage
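To give a sense of how quickly a cluster can be provisioned, the following is a minimal sketch using the Azure CLI. All resource names, the password, the region, and the node count are placeholders chosen for illustration, not values from this chapter's recipes; the full walkthrough (including the portal-based setup) follows in the recipes themselves.

```shell
# Sketch: provision a resource group, a storage account, and a Spark
# HDInsight cluster with the Azure CLI. Names and password are placeholders.
az group create --name bigdata-rg --location eastus

az storage account create --name bigdatastore123 \
    --resource-group bigdata-rg \
    --location eastus \
    --sku Standard_LRS

az hdinsight create --name my-spark-cluster \
    --resource-group bigdata-rg \
    --type spark \
    --http-password 'ChangeMe123!' \
    --workernode-count 2 \
    --storage-account bigdatastore123
```

Once the command completes, the cluster is ready for Hive or Spark jobs; deleting the resource group (`az group delete --name bigdata-rg`) tears everything down and stops billing.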