Data Ingestion with Python Cookbook: A practical guide to ingesting, monitoring, and identifying errors in the data ingestion process

Gláucia Esppenchutz
4.5 (4 Ratings)
Paperback | May 2023 | 414 pages | 1st Edition
eBook: $27.99 (list price $31.99)
Paperback: $39.99
Subscription: free trial, then renews at $19.99 per month

What do you get with a Packt Subscription?

Free for the first 7 days. $19.99 p/m after that. Cancel any time!
  • Unlimited ad-free access to the largest independent learning library in tech. Access this title and thousands more!
  • 50+ new titles added per month, including many first-to-market concepts and exclusive early access to books as they are being written.
  • Innovative learning tools, including AI book assistants, code context explainers, and text-to-speech.
  • Thousands of reference materials covering every tech concept you need to stay up to date.
Subscribe now
View plans & pricing

Data Ingestion with Python Cookbook

Introduction to Data Ingestion

Welcome to the fantastic world of data! Are you ready to embark on a thrilling journey into data ingestion? If so, this is the perfect book to start with! Ingesting data is the first step into the big data world.

Data ingestion is the process of gathering and importing data, and storing it properly so that the subsequent extract, transform, and load (ETL) pipeline can use it. To make this happen, we must be careful about which tools we use and how we configure them.
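To make the idea concrete, here is a minimal, illustrative sketch in plain Python: raw records are validated and the good ones are stored as JSON lines in a landing folder, ready for a downstream ETL step. The record layout, field names, and landing-folder name are assumptions for this example only, not part of any pipeline defined later in the book:

```python
import json
from pathlib import Path

def ingest(records, landing_dir="landing"):
    """Validate raw records and store the accepted ones as JSON lines."""
    Path(landing_dir).mkdir(exist_ok=True)
    out_path = Path(landing_dir) / "events.jsonl"
    accepted, rejected = 0, 0
    with out_path.open("w", encoding="utf-8") as out:
        for record in records:
            # Reject records that are not dicts or lack the expected key
            if not isinstance(record, dict) or "id" not in record:
                rejected += 1
                continue
            out.write(json.dumps(record) + "\n")
            accepted += 1
    return accepted, rejected

# Two valid records and one malformed record
raw = [{"id": 1, "value": "a"}, {"id": 2, "value": "b"}, {"value": "no id"}]
print(ingest(raw))  # (2, 1)
```

Even in a sketch this small, the two core concerns of ingestion show up: deciding which records are acceptable, and storing them in a predictable place and format for the next stage.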

In our book journey, we will use Python and PySpark to retrieve data from different data sources and learn how to store it properly. To orchestrate all this, we will implement the basic concepts of Airflow, along with efficient monitoring to guarantee that our pipelines are covered.

This chapter will introduce some basic concepts about data ingestion and how to set up your environment to start the tasks.

In this chapter, you will build and learn the following recipes:

  • Setting up Python and the environment
  • Installing PySpark
  • Configuring Docker for MongoDB
  • Configuring Docker for Airflow
  • Logging libraries
  • Creating schemas
  • Applying data governance in ingestion
  • Implementing data replication

Technical requirements

The commands inside the recipes of this chapter use Linux syntax. If you don’t use a Linux-based system, you may need to adapt the commands:

  • Docker or Docker Desktop
  • The SQL client of your choice (we recommend DBeaver, since it has a free community edition)

You can find the code from this chapter in this GitHub repository: https://github.com/PacktPublishing/Data-Ingestion-with-Python-Cookbook.

Note

Windows users might get an error message such as Docker Desktop requires a newer WSL kernel version. This can be fixed by following the steps here: https://docs.docker.com/desktop/windows/wsl/.

Setting up Python and its environment

In the data world, languages such as Java, Scala, or Python are commonly used. The first two languages are used due to their compatibility with the big data tools environment, such as Hadoop and Spark, the central core of which runs on a Java Virtual Machine (JVM). However, in the past few years, the use of Python for data engineering and data science has increased significantly due to the language’s versatility, ease of understanding, and many open source libraries built by the community.

Getting ready

Let’s create a folder for our project:

  1. First, open your system command line. Since I use the Windows Subsystem for Linux (WSL), I will open the WSL application.
  2. Go to your home directory and create a folder as follows:
    $ mkdir my-project
  3. Go inside this folder:
    $ cd my-project
  4. Check your Python version on your operating system as follows:
    $ python --version

Depending on your operating system, you might or might not have output here – for example, users of Ubuntu 20.04 on WSL might have the following output:

Command 'python' not found, did you mean:
 command 'python3' from deb python3
 command 'python' from deb python-is-python3

If your Python path is configured to use the python command, you will see output similar to this:

Python 3.9.0

Sometimes, your Python path might be configured to be invoked using python3. You can try it using the following command:

$ python3 --version

The output will be similar to the python command, as follows:

Python 3.9.0
  5. Now, let’s check our pip version. This check is essential, since some operating systems have more than one Python version installed:
    $ pip --version

You should see similar output:

pip 20.0.2 from /usr/lib/python3/dist-packages/pip (python 3.9)

If your operating system (OS) uses a Python version below 3.8 or doesn’t have the language installed, proceed to the How to do it… steps; otherwise, you are ready to start the following Installing PySpark recipe.
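The version check above can also be performed from inside Python itself. This is an optional sketch using only the standard sys module; the helper function name is illustrative:

```python
import sys

def python_is_supported(minimum=(3, 8)):
    """Return True when the running interpreter meets the given minimum version."""
    return sys.version_info[:2] >= minimum

print(sys.version.split()[0])   # e.g. 3.9.0
print(python_is_supported())    # True on any 3.8+ interpreter
```

This is handy when a script must fail fast with a clear message instead of crashing later on a syntax feature the older interpreter does not understand.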

How to do it…

We are going to use the official installer from Python.org. You can find the link for it here: https://www.python.org/downloads/:

Note

For Windows users, it is important to check your OS version, since Python 3.10 may not yet be compatible with Windows 7, as well as your processor type (32-bit or 64-bit).

  1. Download one of the stable versions.

At the time of writing, the stable recommended versions compatible with the tools and resources presented here are 3.8, 3.9, and 3.10. I will use the 3.9 version and download it using the following link: https://www.python.org/downloads/release/python-390/. Scrolling down the page, you will find a list of links to Python installers according to OS, as shown in the following screenshot.

Figure 1.1 – Python.org download files for version 3.9

  2. After downloading the installation file, double-click it and follow the instructions in the wizard window. To avoid complexity, choose the recommended settings displayed.

The following screenshot shows how it looks on Windows:

Figure 1.2 – The Python Installer for Windows

  3. If you are a Linux user, you can install it from the source using the following commands:
    $ wget https://www.python.org/ftp/python/3.9.1/Python-3.9.1.tgz
    $ tar -xf Python-3.9.1.tgz
    $ cd Python-3.9.1
    $ ./configure --enable-optimizations
    $ make -j 9
    $ sudo make altinstall

After installing Python, you should be able to execute the pip command. If not, refer to the pip official documentation page here: https://pip.pypa.io/en/stable/installation/.

How it works…

Python is an interpreted language, and its interpreter can be extended with functions written in C or C++. The language package also comes with several built-in libraries and, of course, the interpreter.

The interpreter works like a Unix shell and can usually be found in the /usr/local/bin directory: https://docs.python.org/3/tutorial/interpreter.html.
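If you want to confirm where the interpreter actually lives on your machine (the exact path varies by OS and installation method), a quick sketch using only the standard library:

```python
import shutil
import sys

# Absolute path of the interpreter currently running this script
print(sys.executable)

# Where the `python3` command on your PATH resolves to (None if not found)
print(shutil.which("python3"))
```

Comparing the two outputs is a quick way to spot when the `python3` on your PATH is not the interpreter you think you are running, a common source of confusion on systems with several Python versions installed.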

Lastly, note that many Python third-party packages in this book require the pip command to be installed. This is because pip (an acronym for Pip Installs Packages) is the default package manager for Python; therefore, it is used to install, upgrade, and manage the Python packages and dependencies from the Python Package Index (PyPI).
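Besides the pip command line, the standard library (Python 3.8+) can report installed package versions programmatically via importlib.metadata. A small sketch, where the package names are only examples:

```python
from importlib.metadata import PackageNotFoundError, version

def installed_version(package):
    """Return the installed version of a distribution, or None if it is absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

print(installed_version("pip"))                   # a version string, if pip is installed
print(installed_version("surely-not-installed"))  # None
```

This can be useful in setup scripts that need to verify a dependency is present before the rest of a pipeline runs.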

There’s more…

Even if you don’t have any Python version on your machine, you can still install one using the command line or Homebrew (for macOS users). Windows users can also download it from the Microsoft Store.

Note

If you choose to download Python from the Windows Store, ensure you use an application made by the Python Software Foundation.

See also

You can use pip to install convenient third-party applications, such as Jupyter. This is an open source, web-based, interactive (and user-friendly) computing platform, often used by data scientists and data engineers. You can install it from the official website here: https://jupyter.org/install.


Key benefits

  • Harness best practices to create a Python and PySpark data ingestion pipeline
  • Seamlessly automate and orchestrate your data pipelines using Apache Airflow
  • Build a monitoring framework by integrating the concept of data observability into your pipelines

Description

Data Ingestion with Python Cookbook offers a practical approach to designing and implementing data ingestion pipelines. It presents real-world examples with the most widely recognized open source tools on the market to answer commonly asked questions and overcome challenges. You’ll be introduced to designing and working with or without data schemas, as well as creating monitored pipelines with Airflow and data observability principles, all while following industry best practices. The book also addresses challenges associated with reading different data sources and data formats. As you progress through the book, you’ll gain a broader understanding of error logging best practices, troubleshooting techniques, data orchestration, monitoring, and storing logs for further consultation. By the end of the book, you’ll have a fully automated setup that enables you to start ingesting and monitoring your data pipeline effortlessly, facilitating seamless integration with subsequent stages of the ETL process.

Who is this book for?

This book is for data engineers and data enthusiasts seeking a comprehensive understanding of the data ingestion process using popular tools in the open source community. For more advanced learners, this book takes on the theoretical pillars of data governance while providing practical examples of real-world scenarios commonly encountered by data engineers.

What you will learn

  • Implement data observability using monitoring tools
  • Automate your data ingestion pipeline
  • Read analytical and partitioned data, whether schema or non-schema based
  • Debug and prevent data loss through efficient data monitoring and logging
  • Establish data access policies using a data governance framework
  • Construct a data orchestration framework to improve data quality

Product Details

Publication date : May 31, 2023
Length: 414 pages
Edition : 1st
Language : English
ISBN-13 : 9781837632602


Packt Subscriptions

See our plans and pricing

$19.99 billed monthly
  • Unlimited access to Packt's library of 7,000+ practical books and videos
  • Constantly refreshed with 50+ new titles a month
  • Exclusive early access to books as they're written
  • Solve problems while you work with advanced search and reference features
  • Offline reading on the mobile app
  • Simple pricing, no contract

$199.99 billed annually
  • Unlimited access to Packt's library of 7,000+ practical books and videos
  • Constantly refreshed with 50+ new titles a month
  • Exclusive early access to books as they're written
  • Solve problems while you work with advanced search and reference features
  • Offline reading on the mobile app
  • Choose a DRM-free eBook or video every month to keep
  • PLUS own as many other DRM-free eBooks or videos as you like for just $5 each
  • Exclusive print discounts

$279.99 billed in 18 months
  • Unlimited access to Packt's library of 7,000+ practical books and videos
  • Constantly refreshed with 50+ new titles a month
  • Exclusive early access to books as they're written
  • Solve problems while you work with advanced search and reference features
  • Offline reading on the mobile app
  • Choose a DRM-free eBook or video every month to keep
  • PLUS own as many other DRM-free eBooks or videos as you like for just $5 each
  • Exclusive print discounts

Frequently bought together

Exploratory Data Analysis with Python Cookbook: $49.99
Data Ingestion with Python Cookbook: $39.99
Building ETL Pipelines with Python: $34.99
Total: $124.97

Table of Contents

16 Chapters
Part 1: Fundamentals of Data Ingestion
Chapter 1: Introduction to Data Ingestion
Chapter 2: Principals of Data Access – Accessing Your Data
Chapter 3: Data Discovery – Understanding Our Data before Ingesting It
Chapter 4: Reading CSV and JSON Files and Solving Problems
Chapter 5: Ingesting Data from Structured and Unstructured Databases
Chapter 6: Using PySpark with Defined and Non-Defined Schemas
Chapter 7: Ingesting Analytical Data
Part 2: Structuring the Ingestion Pipeline
Chapter 8: Designing Monitored Data Workflows
Chapter 9: Putting Everything Together with Airflow
Chapter 10: Logging and Monitoring Your Data Ingest in Airflow
Chapter 11: Automating Your Data Ingestion Pipelines
Chapter 12: Using Data Observability for Debugging, Error Handling, and Preventing Downtime
Index
Other Books You May Enjoy

Customer reviews

Rating distribution
4.5 (4 Ratings)
5 star: 50%
4 star: 50%
3 star: 0%
2 star: 0%
1 star: 0%
Lincoln Nascimento, Jun 26, 2023 (5/5)
The book is amazing, it helped me through daily situations and also gave me new perspectives. The mention to OpenMetadata helped to solve a problem I was facing in my current company. I recommend for new and seasoned Data Engineers.
Amazon Verified review
Om S, Jul 10, 2023 (5/5)
"Data Ingestion with Python Cookbook" is a practical and comprehensive guide that equips data engineers and enthusiasts with the knowledge and skills to design and implement efficient data ingestion pipelines. The book seamlessly blends industry best practices with real-world examples, utilizing popular open-source tools to overcome common challenges in the field. From data schema design to creating monitored pipelines with Apache Airflow, the book covers a wide range of topics essential for building robust data ingestion processes. Readers learn to integrate data observability principles into their pipelines, ensuring data quality and facilitating troubleshooting. The book also addresses the complexities of reading various data sources and formats, offering practical solutions and techniques. Throughout the book, readers gain insights into error logging best practices, data orchestration, monitoring, and establishing data access policies through a data governance framework. With a focus on automation and efficiency, the book empowers readers to effortlessly ingest and monitor their data pipelines, laying the foundation for seamless integration with subsequent stages of the ETL process. "Data Ingestion with Python Cookbook" is a valuable resource for data engineers and enthusiasts seeking a comprehensive understanding of data ingestion using popular open-source tools. Whether you're a beginner or an advanced learner, this book provides practical examples and theoretical pillars of data governance to address real-world scenarios encountered in data engineering projects.
Amazon Verified review
Vinoth, Jun 28, 2023 (4/5)
The book covers the life cycle of the data ingestion process (ingesting, monitoring & errors). The author goes in depth about Data Discovery, Ingesting Data from Structured and Unstructured Databases, PySpark with Defined and Non-Defined Schemas, Designing Monitored Data Workflows, Airflow, Logging and Monitoring Data Ingest in Airflow, Automating Data Ingestion Pipelines, and Data Observability for Debugging, Error Handling, and Preventing Downtime. Throughout the book there are lots of examples with code. Highly recommend this book for entry level and experienced data engineers.
Amazon Verified review
Rashi Garg, Aug 07, 2023 (4/5)
I just finished reading this book last week. Having worked in the data engineering field for more than 10 years now, I found this book a great resource for someone who wants to build skills as a hands-on data engineer. It offers a practical explanation of the data ingestion lifecycle, covering aspects like data discovery, ingestion from various databases, PySpark usage with different schemas, monitored workflow design, Airflow integration, and error handling. The book's focus on open-source tools and real-world scenarios equips readers with skills to build efficient pipelines, ensure data quality, and troubleshoot effectively. I really liked the last section on data observability in the book, where it covers the StatsD, Prometheus, and Grafana setup. Though I would have loved more details on another popular data engineering stack (dbt, Snowflake), the fundamentals of building scalable data pipelines will remain the same regardless of the tech stack. I would highly recommend this book for data engineers seeking practical guidance and understanding of data ingestion challenges and solutions.
Amazon Verified review

FAQs

What is included in a Packt subscription?

A subscription provides you with full access to view all Packt and licensed content online; this includes exclusive access to Early Access titles. Depending on the tier chosen, you can also earn credits and discounts to use for owning content.

How can I cancel my subscription?

To cancel your subscription, simply go to the account page (found in the top right of the page, or at https://subscription.packtpub.com/my-account/subscription). From there, you will see the ‘cancel subscription’ button in the grey box with your subscription information.

What are credits?

Credits can be earned by reading 40 sections of any title within the payment cycle – a month starting from the day of subscription payment. You also earn a credit every month if you subscribe to our annual or 18-month plans. Credits can be used to buy books DRM-free, the same way that you would pay for a book. Your credits can be found on the subscription homepage – subscription.packtpub.com – by clicking on the ‘My Library’ dropdown and selecting ‘Credits’.

What happens if an Early Access course is cancelled?

Projects are rarely cancelled, but sometimes it's unavoidable. If an Early Access course is cancelled or excessively delayed, you can exchange your purchase for another course. For further details, please contact us here.

Where can I send feedback about an Early Access title?

If you have any feedback about the product you're reading, or Early Access in general, then please fill out a contact form here and we'll make sure the feedback gets to the right team. 

Can I download the code files for Early Access titles?

We try to ensure that all books in Early Access have code available to use, download, and fork on GitHub. This helps us be more agile in the development of the book, and helps keep the often changing code base of new versions and new technologies as up to date as possible. Unfortunately, however, there will be rare cases when it is not possible for us to have downloadable code samples available until publication.

When we publish the book, the code files will also be available to download from the Packt website.

How accurate is the publication date?

The publication date is as accurate as we can be at any point in the project. Unfortunately, delays can happen. Often those delays are out of our control, such as changes to the technology code base or delays in the tech release. We do our best to give you an accurate estimate of the publication date at any given time, and as more chapters are delivered, the more accurate the delivery date will become.

How will I know when new chapters are ready?

We'll let you know every time there has been an update to a course that you've bought in Early Access. You'll get an email to let you know there has been a new chapter, or a change to a previous chapter. The new chapters are automatically added to your account, so you can also check back there any time you're ready and download or read them online.

I am a Packt subscriber, do I get Early Access?

Yes, all Early Access content is fully available through your subscription. You will need to have a paid for or active trial subscription in order to access all titles.

How is Early Access delivered?

Early Access is currently only available as a PDF or through our online reader. As we make changes or add new chapters, the files in your Packt account will be updated so you can download them again or view them online immediately.

How do I buy Early Access content?

Early Access is a way of us getting our content to you quicker, but the method of buying the Early Access course is still the same. Just find the course you want to buy, go through the check-out steps, and you’ll get a confirmation email from us with information and a link to the relevant Early Access courses.

What is Early Access?

Keeping up to date with the latest technology is difficult; new versions, new frameworks, new techniques. This feature gives you a head start on our content, as it's being created. With Early Access you'll receive each chapter as it's written, and get regular updates throughout the product's development, as well as the final course as soon as it's ready.

We created Early Access as a means of giving you the information you need, as soon as it's available. As we go through the process of developing a course, 99% of it can be ready but we can't publish until that last 1% falls into place. Early Access helps to unlock the potential of our content early, to help you start your learning when you need it most. You not only get access to every chapter as it's delivered, edited, and updated, but you'll also get the finalized, DRM-free product to download in any format you want when it's published. As a member of Packt, you'll also be eligible for our exclusive offers, including a free course every day, and discounts on new and popular titles.