Data Engineering with Databricks Cookbook: Build effective data and AI solutions using Apache Spark, Databricks, and Delta Lake

By Pulkit Chadha

4.4 (7 Ratings) | eBook | May 2024 | 438 pages | 1st Edition

eBook: zł39.99 (discounted from zł161.99)
Paperback: zł201.99

What do you get with eBook?

  • Instant access to your Digital eBook purchase
  • Download this book in EPUB and PDF formats
  • Access this title in our online reader with advanced features
  • DRM FREE - Read whenever, wherever and however you want

Data Engineering with Databricks Cookbook

Data Ingestion and Data Extraction with Apache Spark

Apache Spark is a powerful distributed computing framework that can handle large-scale data processing tasks. One of the most common tasks when working with data is loading it from various sources and writing it into various formats. In this hands-on chapter, you will learn how to load and write data files with Apache Spark using Python.

In this chapter, we’re going to cover the following recipes:

  • Reading CSV data with Apache Spark
  • Reading JSON data with Apache Spark
  • Reading Parquet data with Apache Spark
  • Parsing XML data with Apache Spark
  • Working with nested data structures in Apache Spark
  • Processing text data in Apache Spark
  • Writing data with Apache Spark

By the end of this chapter, you will have learned how to read, write, parse, and manipulate data in CSV, JSON, Parquet, and XML formats. You will have also learned how to analyze text data with natural language processing (NLP)...

Technical requirements

Before starting, make sure that your docker-compose images are up and running, and open the JupyterLab server running on localhost (http://127.0.0.1:8888/lab). Also, ensure that you have cloned the Git repo for this book and have access to the notebook and data used in this chapter.

Remember to stop all services defined in the docker-compose file for this book when you are done running the code examples. You can do this by executing this command:

$ docker-compose stop

You can find the notebooks and data for this chapter at https://github.com/PacktPublishing/Data-Engineering-with-Databricks-Cookbook/tree/main/Chapter01.

Reading CSV data with Apache Spark

Reading CSV data is a common task in data engineering and analysis, and Apache Spark provides a powerful and efficient way to process it. Spark supports various file formats, including CSV, and offers many options for reading and parsing them. In this recipe, we will learn how to read CSV data with Apache Spark using Python.

How to do it...

  1. Import libraries: Import the required libraries and create a SparkSession object:
    from pyspark.sql import SparkSession
    spark = (SparkSession.builder
        .appName("read-csv-data")
        .master("spark://spark-master:7077")
        .config("spark.executor.memory", "512m")
        .getOrCreate())
    spark.sparkContext.setLogLevel("ERROR")
  2. Read the CSV data with an inferred schema: Read the CSV file using the read method of SparkSession. In the following code, we specify...
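
The preview cuts off here, so the options used in the book's snippet are not shown. The following is only a minimal sketch of what an inferred-schema read typically looks like, reusing the SparkSession from step 1; the file path is a placeholder, not the chapter's actual data file:

    df = (spark.read.format("csv")
        .option("header", "true")        # treat the first row as column names
        .option("inferSchema", "true")   # let Spark sample the file and guess column types
        .load("path/to/data.csv"))       # placeholder path; point it at the chapter's CSV file
    df.printSchema()
    df.show(5)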

Reading JSON data with Apache Spark

In this recipe, we will learn how to ingest and load JSON data with Apache Spark. We will also cover some common data engineering tasks involving JSON data.

How to do it...

  1. Import libraries: Import the required libraries and create a SparkSession object:
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import *
    spark = (SparkSession.builder
        .appName("read-json-data")
        .master("spark://spark-master:7077")
        .config("spark.executor.memory", "512m")
        .getOrCreate())
    spark.sparkContext.setLogLevel("ERROR")
  2. Load the JSON data into a Spark DataFrame: The read method of the SparkSession object can be used to load JSON data from a file or a directory. The multiLine option is set to true to parse records that span multiple lines. We need to pass the path to the JSON file as a parameter:
    df = ...
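
The assignment above is truncated in this preview. As a minimal sketch of a multi-line JSON read, assuming the SparkSession from step 1 and a placeholder file path:

    df = (spark.read.format("json")
        .option("multiLine", "true")   # parse records that span multiple lines
        .load("path/to/data.json"))    # placeholder path; use the chapter's JSON file
    df.printSchema()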

Reading Parquet data with Apache Spark

Apache Parquet is a columnar storage format designed to handle large datasets. It is optimized for the efficient compression and encoding of complex data types. Apache Spark, on the other hand, is a fast and general-purpose cluster computing system that is designed for large-scale data processing.

In this recipe, we will explore how to read Parquet data with Apache Spark using Python.

How to do it...

  1. Import libraries: Import the required libraries and create a SparkSession object:
    from pyspark.sql import SparkSession
    spark = (SparkSession.builder
        .appName("read-parquet-data")
        .master("spark://spark-master:7077")
        .config("spark.executor.memory", "512m")
        .getOrCreate())
    spark.sparkContext.setLogLevel("ERROR")
  2. Load the Parquet data: We use the spark.read.format("parquet") method to...
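
The preview stops mid-sentence here. A minimal sketch of a Parquet read, assuming the SparkSession from step 1 and a placeholder path:

    df = (spark.read.format("parquet")
        .load("path/to/data.parquet"))   # placeholder path; Parquet files carry their own schema
    df.printSchema()
    df.show(5)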

Parsing XML data with Apache Spark

Reading XML data is a common task in big data processing, and Apache Spark provides several options for reading and processing XML data. In this recipe, we will explore how to read XML data with Apache Spark using the spark-xml data source. We will also cover some common issues faced while working with XML data and how to solve them. Finally, we will cover some common tasks in data engineering with XML data.

Note

We also need to install the spark-xml package on our cluster. The spark-xml package is a third-party library for Apache Spark released by Databricks. The package enables the processing of XML data in Spark applications and provides the ability to read and write XML files using the Spark DataFrame API, which makes it easy to integrate with other Spark components and perform complex data analysis tasks. We can install the package by running the following command:

$SPARK_HOME/bin/spark-shell --packages com.databricks:spark...
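
The command above is cut off in this preview, so the exact package coordinates the book uses are not shown. The following sketch shows one common way to attach the package from PySpark instead; the coordinates and version are assumptions (check the spark-xml releases for the build that matches your Spark and Scala versions), and the row tag and path are placeholders:

    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
        .appName("read-xml-data")
        .master("spark://spark-master:7077")
        .config("spark.jars.packages",
                "com.databricks:spark-xml_2.12:0.17.0")   # assumed coordinates/version
        .getOrCreate())

    df = (spark.read.format("xml")
        .option("rowTag", "record")    # placeholder; set it to the repeating element in your file
        .load("path/to/data.xml"))     # placeholder path
    df.printSchema()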

Working with nested data structures in Apache Spark

In this recipe, we will walk you through the step-by-step process of handling nested data structures such as arrays, maps, and so on with Apache Spark. This recipe will equip you with the essential knowledge and practical skills needed to work with complex data types using Apache Spark’s distributed computing capabilities.

How to do it…

  1. Import libraries: Import the required libraries and create a SparkSession object: SparkSession is a unified entry point for Spark applications. It provides a simplified way to interact with various Spark functionalities, such as resilient distributed datasets (RDDs), DataFrames, datasets, SQL queries, streaming, and more. You can create a SparkSession object using the builder method, which allows you to configure the application name, master URL, and other options. We will also define SparkContext, which is the entry point to any Spark functionality. It represents the connection...
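
The step is truncated here in the preview. Below is a minimal, self-contained sketch of the kind of nested-data handling this recipe covers; the tiny in-line dataset and column names are illustrative, not the book's data:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import explode, col

    # Add .master(...) and other config as in the earlier recipes if you run on the book's cluster.
    spark = SparkSession.builder.appName("nested-data").getOrCreate()

    # Illustrative data: one order with an array of nested line items.
    data = [("order-1", [("A1", 2), ("B7", 1)])]
    df = spark.createDataFrame(
        data, "order_id string, items array<struct<sku:string, qty:int>>")

    # explode() turns each array element into its own row;
    # dot notation (item.sku) reads fields out of the nested struct.
    flat = (df.select("order_id", explode("items").alias("item"))
              .select("order_id",
                      col("item.sku").alias("sku"),
                      col("item.qty").alias("qty")))
    flat.show()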

Processing text data in Apache Spark

In this recipe, we will walk you through the step-by-step process of leveraging the power of Spark to handle and manipulate textual information efficiently. This recipe will equip you with the essential knowledge and practical skills needed to tackle text-based challenges using Apache Spark’s distributed computing capabilities.

How to do it…

  1. Import libraries: Import the required libraries and create a SparkSession object:
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import *
    spark = (SparkSession.builder
        .appName("text-processing")
        .master("spark://spark-master:7077")
        .config("spark.executor.memory", "512m")
        .getOrCreate())
    spark.sparkContext.setLogLevel("ERROR")
  2. Load the data: We use the spark.read.format("csv") method to load the CSV data into a Spark...
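
The load call above is truncated in the preview. A minimal sketch of the word-count style processing this recipe leads into, assuming the SparkSession from step 1, a placeholder path, and a hypothetical free-text column named review:

    from pyspark.sql.functions import col, lower, regexp_replace, split, explode

    df = (spark.read.format("csv")
        .option("header", "true")
        .load("path/to/text_data.csv"))   # placeholder path

    words = (df
        .withColumn("clean", regexp_replace(lower(col("review")), "[^a-z\\s]", ""))  # lowercase and strip punctuation
        .withColumn("word", explode(split(col("clean"), "\\s+")))                    # one row per token
        .groupBy("word").count()
        .orderBy(col("count").desc()))
    words.show(10)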

Writing data with Apache Spark

In this recipe, we will walk you through the step-by-step process of leveraging the power of Spark to write data in various formats. This recipe will equip you with the essential knowledge and practical skills needed to write data using Apache Spark’s distributed computing capabilities.

How to do it…

  1. Import libraries: Import the required libraries and create a SparkSession object:
    from pyspark.sql import SparkSession
    spark = (SparkSession.builder
        .appName("write-data")
        .master("spark://spark-master:7077")
        .config("spark.executor.memory", "512m")
        .getOrCreate())
    spark.sparkContext.setLogLevel("ERROR")
  2. Read a CSV file using the read method of SparkSession:
    from pyspark.sql.types import StructType, StructField, StringType, IntegerType, DateType
    df = (spark.read.format("csv")
     ...
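
The read shown above is cut off in this preview. As a minimal sketch of the write side, assuming df is the DataFrame read in step 2 and using placeholder output paths and a hypothetical partition column:

    # mode("overwrite") replaces any existing output at the target path.
    (df.write.format("parquet")
        .mode("overwrite")
        .partitionBy("year")              # hypothetical partition column
        .save("path/to/output/parquet"))  # placeholder path

    (df.write.format("csv")
        .mode("overwrite")
        .option("header", "true")
        .save("path/to/output/csv"))      # placeholder path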

Key benefits

  • Learn data ingestion, data transformation, and data management techniques using Apache Spark and Delta Lake
  • Gain practical guidance on using Delta Lake tables and orchestrating data pipelines
  • Implement reliable DataOps and DevOps practices, and enforce data governance policies on Databricks
  • Purchase of the print or Kindle book includes a free PDF eBook

Description

Written by a Senior Solutions Architect at Databricks, Data Engineering with Databricks Cookbook will show you how to effectively use Apache Spark, Delta Lake, and Databricks for data engineering, starting with a comprehensive introduction to data ingestion and loading with Apache Spark. What makes this book unique is its recipe-based approach, which will help you put your knowledge to use straight away and tackle common problems. You'll be introduced to various data manipulation and data transformation solutions, find out how to manage and optimize Delta tables, and get to grips with ingesting and processing streaming data. The book will also show you how to address performance problems in Apache Spark apps and Delta Lake. Advanced recipes later in the book will teach you how to use Databricks to implement DataOps and DevOps practices, as well as how to orchestrate and schedule data pipelines using Databricks Workflows. You'll also go through the full process of setting up and configuring Unity Catalog for data governance. By the end of this book, you'll be well-versed in building reliable and scalable data pipelines using modern data engineering technologies.

Who is this book for?

This book is for data engineers, data scientists, and data practitioners who want to learn how to build efficient and scalable data pipelines using Apache Spark, Delta Lake, and Databricks. To get the most out of this book, you should have basic knowledge of data architecture, SQL, and Python programming.

What you will learn

  • Perform data loading, ingestion, and processing with Apache Spark
  • Discover data transformation techniques and custom user-defined functions (UDFs) in Apache Spark
  • Manage and optimize Delta tables with Apache Spark and Delta Lake APIs
  • Use Spark Structured Streaming for real-time data processing
  • Optimize Apache Spark application and Delta table query performance
  • Implement DataOps and DevOps practices on Databricks
  • Orchestrate data pipelines with Delta Live Tables and Databricks Workflows
  • Implement data governance policies with Unity Catalog

Product Details

Publication date: May 31, 2024
Length: 438 pages
Edition: 1st
Language: English
ISBN-13: 9781837632060


Frequently bought together

  • Learn Microsoft Fabric: zł144.99 (zł181.99)
  • Data Engineering with Databricks Cookbook: zł201.99
  • Databricks Certified Associate Developer for Apache Spark Using Python: zł141.99

Total: zł488.97 (zł525.97 before discount, zł37.00 saved)

Table of Contents

Part 1 – Working with Apache Spark and Delta Lake
Chapter 1: Data Ingestion and Data Extraction with Apache Spark
Chapter 2: Data Transformation and Data Manipulation with Apache Spark
Chapter 3: Data Management with Delta Lake
Chapter 4: Ingesting Streaming Data
Chapter 5: Processing Streaming Data
Chapter 6: Performance Tuning with Apache Spark
Chapter 7: Performance Tuning in Delta Lake
Part 2 – Data Engineering Capabilities within Databricks
Chapter 8: Orchestration and Scheduling Data Pipeline with Databricks Workflows
Chapter 9: Building Data Pipelines with Delta Live Tables
Chapter 10: Data Governance with Unity Catalog
Chapter 11: Implementing DataOps and DevOps on Databricks
Index
Other Books You May Enjoy

Customer reviews

Rating distribution: 4.4 out of 5 (7 Ratings)
5 star: 85.7%
4 star: 0%
3 star: 0%
2 star: 0%
1 star: 14.3%

Kieran O'Driscoll Jun 25, 2024
5 stars
The lakehouse can get confusing, but the author breaks it down and explains how to stitch everything together. Highly recommend for anyone looking to optimize their pipelines and improve data engineering efficiency.
Amazon Verified review
Amazon Customer Aug 05, 2024
5 stars
Data engineering continues to expand into every data persona's required skillset, whether you know it or not. Data Scientists need to preprocess their data to serve their Machine Learning use cases, Data Analysts need to further clean their data for their Data Warehousing needs. This skillset will continue to become more and more valuable as more enterprises rely on their data to produce actionable insights. If you're looking for example code and reference frameworks for basic to intermediate data engineering tasks, this book is for you. The author does a great job balancing between open source tools (Apache Spark and Delta Lake) and more managed technologies (i.e., Spark on Databricks, Unity Catalog). It even goes through some CI/CD concepts and orchestration best practices which help you take the basic skills you learn in the early chapters into production. Overall, I find myself coming back to this book any time I need to quickly ingest some new data or optimize my pipelines. I'd recommend this book to anyone looking to get a strong foundation in data engineering.
Amazon Verified review
Sivanagaraju Gadiparthi Jul 31, 2024
5 stars
"Data Engineering with Databricks Cookbook" by Pulkit Chadha is an essential read for both novice and seasoned data engineers. The book offers a pragmatic approach to mastering the Databricks Lakehouse Platform through a series of well-structured recipes. Chadha's extensive experience is evident in the practical insights and real-world examples that cover the full data engineering lifecycle, from data ingestion and transformation to data management and performance tuning. Each recipe is detailed, providing step-by-step instructions, code snippets, and explanations that make complex concepts accessible. Technically, the book excels with its thorough coverage of Apache Spark, Delta Lake, and Databricks. It addresses the intricacies of data ingestion, transformation, streaming, and performance optimization with clarity. The vocabulary is precise, avoiding unnecessary jargon, which makes the content approachable without diluting its technical depth. The book's organization allows readers to easily navigate through different sections based on their immediate needs. In summary, this cookbook is a valuable resource, offering a blend of foundational knowledge and advanced techniques. It stands out for its clarity, practical focus, and the depth of expertise shared by Chadha.
Amazon Verified review
Jewell Jul 31, 2024
5 stars
Consider this book a back-to-basics. It covers all of your data engineering fundamentals while walking you through the best approach to building systems on Databricks. You'll be able to build and solidify your data engineering foundation with this cookbook. I highly recommend this book to anyone new to the Databricks platform or someone who needs a quick refresher on best practices.
Amazon Verified review
Satish Nadendla Aug 05, 2024
5 stars
Pulkit Chadha has crafted "Data Engineering with Databricks Cookbook" with remarkable clarity and precision. The book includes comprehensive guides on implementing Delta Lake for ACID transactions, designing ETL pipelines with Apache Spark, optimizing Databricks clusters for performance, and managing real-time data streams using Structured Streaming. It breaks down complex data engineering tasks into easily digestible recipes, making advanced topics accessible even for those new to the field. Chadha's articulation of complex concepts in a concise and understandable manner makes this book incredibly user-friendly. From data ingestion to performance tuning, each chapter provides practical, step-by-step guidance. The integration of Apache Spark, Databricks, and Delta Lake is seamlessly explained, offering best practices and optimization techniques that are immediately applicable. This book is an essential addition to any data engineer's library and a must-have for mastering the Databricks Lakehouse Platform.
Amazon Verified review

FAQs

How do I buy and download an eBook?

Where there is an eBook version of a title available, you can buy it from the book details for that title. Add either the standalone eBook or the eBook and print book bundle to your shopping cart. Your eBook will show in your cart as a product on its own. After completing checkout and payment in the normal way, you will receive your receipt on the screen containing a link to a personalised PDF download file. This link will remain active for 30 days. You can download backup copies of the file by logging in to your account at any time.

If you already have Adobe Reader installed, then clicking on the link will download and open the PDF file directly. If you don't, then save the PDF file on your machine and download the Reader to view it.

Please Note: Packt eBooks are non-returnable and non-refundable.

Packt eBook and Licensing When you buy an eBook from Packt Publishing, completing your purchase means you accept the terms of our licence agreement. Please read the full text of the agreement. In it we have tried to balance the need for the ebook to be usable for you the reader with our needs to protect the rights of us as Publishers and of our authors. In summary, the agreement says:

  • You may make copies of your eBook for your own use onto any machine
  • You may not pass copies of the eBook on to anyone else
How can I make a purchase on your website?

If you want to purchase a video course, eBook, or Bundle (Print+eBook), please follow the steps below:

  1. Register on our website using your email address and a password.
  2. Search for the title by name or ISBN using the search option.
  3. Select the title you want to purchase.
  4. Choose the format you wish to purchase the title in; if you order the Print Book, you get a free eBook copy of the same title.
  5. Proceed with the checkout process (payment to be made using Credit Card, Debit Card, or PayPal).
Where can I access support around an eBook?
  • If you experience a problem with using or installing Adobe Reader, then contact Adobe directly.
  • To view the errata for the book, see www.packtpub.com/support and view the pages for the title you have.
  • To view your account details or to download a new copy of the book go to www.packtpub.com/account
  • To contact us directly if a problem is not resolved, use www.packtpub.com/contact-us
What eBook formats does Packt support?

Our eBooks are currently available in a variety of formats such as PDF and ePub. In the future, this may well change with trends and developments in technology, but please note that our PDFs are not Adobe eBook Reader format, which has greater restrictions on security.

You will need to use Adobe Reader v9 or later in order to read Packt's PDF eBooks.

What are the benefits of eBooks?
  • You can get the information you need immediately
  • You can easily take them with you on a laptop
  • You can download them an unlimited number of times
  • You can print them out
  • They are copy-paste enabled
  • They are searchable
  • There is no password protection
  • They are priced lower than print
  • They save resources and space
What is an eBook?

Packt eBooks are a complete electronic version of the print edition, available in PDF and ePub formats. Every piece of content down to the page numbering is the same. Because we save the costs of printing and shipping the book to you, we are able to offer eBooks at a lower cost than print editions.

When you have purchased an eBook, simply log in to your account and click on the link in Your Download Area. We recommend saving the file to your hard drive before opening it.

For optimal viewing of our eBooks, we recommend you download and install the free Adobe Reader version 9.