Building ETL Pipelines with Python

Create and deploy enterprise-ready ETL pipelines by employing modern methods

Product type: Paperback
Published: September 2023
Publisher: Packt
ISBN-13: 9781804615256
Length: 246 pages
Edition: 1st Edition
Authors (2): Brij Kishore Pandey, Emily Ro Schoof
Table of Contents (22)

Preface
Part 1: Introduction to ETL, Data Pipelines, and Design Principles
Chapter 1: A Primer on Python and the Development Environment
Chapter 2: Understanding the ETL Process and Data Pipelines
Chapter 3: Design Principles for Creating Scalable and Resilient Pipelines
Part 2: Designing ETL Pipelines with Python
Chapter 4: Sourcing Insightful Data and Data Extraction Strategies
Chapter 5: Data Cleansing and Transformation
Chapter 6: Loading Transformed Data
Chapter 7: Tutorial – Building an End-to-End ETL Pipeline in Python
Chapter 8: Powerful ETL Libraries and Tools in Python
Part 3: Creating ETL Pipelines in AWS
Chapter 9: A Primer on AWS Tools for ETL Processes
Chapter 10: Tutorial – Creating an ETL Pipeline in AWS
Chapter 11: Building Robust Deployment Pipelines in AWS
Part 4: Automating and Scaling ETL Pipelines
Chapter 12: Orchestration and Scaling in ETL Pipelines
Chapter 13: Testing Strategies for ETL Pipelines
Chapter 14: Best Practices for ETL Pipelines
Chapter 15: Use Cases and Further Reading
Index
Other Books You May Enjoy

What this book covers

Chapter 1, A Primer on Python and the Development Environment, introduces Python, the language at the core of this book. Prior experience with Python is assumed; rather than covering the language in detail, this chapter gives the primer needed for the rest of the book. It also illustrates how to set up a development environment with an IDE and check out the code from Git.

Chapter 2, Understanding the ETL Process and Data Pipelines, explains the ETL process and the significance of a robust ETL pipeline. It starts with an example of how and when to implement an ETL process and shows how a good pipeline can automate it. It also explains the difference between ETL and ELT.
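At its simplest, ETL is three steps chained together: extract raw records, transform them, then load the result into a target. Here is a minimal, hypothetical sketch of that shape; the function names and toy data are illustrative, not taken from the book.

```python
# Minimal ETL sketch: extract -> transform -> load.

def extract():
    # A real pipeline would read from a file, API, or database.
    return [{"name": " Alice ", "age": "30"}, {"name": "Bob", "age": "25"}]

def transform(rows):
    # Clean whitespace and cast types before loading.
    return [{"name": r["name"].strip(), "age": int(r["age"])} for r in rows]

def load(rows, target):
    # Here the "warehouse" is just a list; a real load writes to a database.
    target.extend(rows)

warehouse = []
load(transform(extract()), warehouse)
print(warehouse)
```

In ELT, the last two steps swap: raw data is loaded first and transformed inside the target system.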

Chapter 3, Design Principles for Creating Scalable and Resilient Pipelines, covers applying proven design patterns with open source Python libraries to create an enterprise-grade ETL pipeline. It shows how to install these libraries and gives a primer on the functions they provide for building robust pipelines. It also surveys the design patterns and approaches available for creating an ETL process.

Chapter 4, Sourcing Insightful Data and Data Extraction Strategies, deals with sourcing data from different source systems. First, we identify open source datasets that provide high-quality, insightful data to serve as input for ETL pipelines. Second, we discuss various strategies for ingesting the sourced data.

Chapter 5, Data Cleansing and Transformation, deals with various data transformation techniques in Python. We start with a hands-on example of data cleansing and massaging, then learn how to handle missing data. Finally, we apply various transformation techniques to shape the data into the desired format.
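As a taste of what such cleansing looks like, here is a small sketch using pandas (a common choice for this kind of work, though the exact columns and fill strategies here are assumptions for illustration): missing categories are filled with a placeholder, numeric gaps are imputed with the column mean, and casing is normalized.

```python
import pandas as pd

# Hypothetical cleansing sketch: handle missing values and normalize a column.
df = pd.DataFrame({"city": ["NYC", None, "LA"],
                   "sales": [100.0, None, 80.0]})

df["city"] = df["city"].fillna("unknown")             # fill missing categories
df["sales"] = df["sales"].fillna(df["sales"].mean())  # impute numeric gaps
df["city"] = df["city"].str.lower()                   # normalize casing

print(df)
```

Which imputation strategy is appropriate (mean, median, forward-fill, or dropping rows) depends on the data and is a recurring design decision in transformation code.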

Chapter 6, Loading Transformed Data, deals with various data loading techniques in Python. We start with a hands-on example of loading data into an RDBMS and then repeat the process for NoSQL databases. We’ll also learn about various use cases of data loading. Finally, we’ll look into some of the best practices for data loading.
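The basic load pattern into an RDBMS can be sketched with SQLite from Python's standard library as a stand-in (the chapter itself works with other engines, but the create-table, bulk-insert, commit pattern is the same):

```python
import sqlite3

# Loading sketch: bulk-insert transformed rows into a relational table.
rows = [("Alice", 30), ("Bob", 25)]

conn = sqlite3.connect(":memory:")  # in-memory DB for demonstration
conn.execute("CREATE TABLE users (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)", rows)
conn.commit()

loaded = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(loaded)  # 2
```

Using `executemany` with parameter placeholders batches the insert and avoids SQL injection, both of which matter once real data volumes are involved.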

Chapter 7, Tutorial – Building an End-to-End ETL Pipeline in Python, creates a full-fledged ETL pipeline using the tools and technologies covered so far. We source, ingest, and transform data, and finally load it into final tables, using a MySQL database for the example.

Chapter 8, Powerful ETL Libraries and Tools in Python, explores various open source tools for creating a modern data pipeline. First, we’ll explore Python libraries such as Bonobo, Odo, mETL, and Riko, weigh their pros and cons, and create an ETL pipeline with them. Finally, we’ll move on to big data and study tools such as Apache Airflow, Luigi, and pETL.

Chapter 9, A Primer on AWS Tools for ETL Processes, explains various AWS tools for creating ETL pipelines, covering strategies for selecting the best tools and design patterns. You’ll learn how to create a development environment for AWS and deploy code locally. We’ll also explore the best strategies for deployment and testing. Finally, we’ll use automation techniques to take the boring, repetitive work off our hands.

Chapter 10, Tutorial – Creating an ETL Pipeline in AWS, creates an ETL pipeline in AWS in conjunction with Python. We start by creating a mini pipeline using AWS Step Functions and AWS Lambda. Then, we go on to create a full-fledged pipeline using Bonobo, EC2, and RDS.
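A Lambda function in such a pipeline is just a Python handler that receives an event from the previous state and returns a payload for the next one. Here is a minimal, hypothetical handler of that shape; the event structure is an assumption for illustration, and no AWS account is needed to run it locally.

```python
# Hypothetical Lambda handler: receives an event (e.g. from Step Functions),
# applies a small transformation, and returns the payload for the next state.

def lambda_handler(event, context):
    records = event.get("records", [])
    cleaned = [r.strip().lower() for r in records]
    return {"records": cleaned, "count": len(cleaned)}

# Local invocation with a fake event and no context object:
result = lambda_handler({"records": [" Foo ", "BAR"]}, None)
print(result)
```

Keeping handlers this small and stateless is what lets Step Functions chain them into a larger pipeline.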

Chapter 11, Building Robust Deployment Pipelines in AWS, creates a basic CI/CD pipeline for ETL jobs. We’ll use AWS CodePipeline, CodeDeploy, and CodeCommit to create a robust CI/CD pipeline to automate the code deployment. We’ll see an example of how Git can be leveraged for a CI/CD pipeline in AWS. We’ll also get familiar with using Terraform for code deployment.

Chapter 12, Orchestration and Scaling in ETL Pipelines, covers the limitations of ETL pipelines and how to scale ETL pipelines to handle increasing demand seamlessly. It goes on to explain how to choose the best scaling strategies. It also explains how to create robust orchestration for ETL pipelines. Finally, we’ll work on a hands-on exercise to create an ETL pipeline and apply scaling and orchestration strategies.
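At its core, orchestration means running pipeline tasks in dependency order. Here is a toy sketch using the standard library's `graphlib`; real orchestrators such as Apache Airflow add scheduling, retries, and distribution on top of this idea, and the task names here are illustrative.

```python
# Toy orchestration sketch: run tasks in dependency (topological) order.
from graphlib import TopologicalSorter

ran = []
tasks = {
    "extract": lambda: ran.append("extract"),
    "transform": lambda: ran.append("transform"),
    "load": lambda: ran.append("load"),
}
# Each task maps to the set of tasks it depends on.
deps = {"transform": {"extract"}, "load": {"transform"}}

for name in TopologicalSorter(deps).static_order():
    tasks[name]()

print(ran)  # ['extract', 'transform', 'load']
```

Expressing the pipeline as a dependency graph, rather than a hard-coded sequence, is also what makes it possible to scale out: independent tasks can run in parallel.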

Chapter 13, Testing Strategies for ETL Pipelines, deals with ETL testing strategies. A pipeline may contain bugs, and it is very important to catch them before they make it to production. Unit testing with pytest covers most errors, but an external ETL testing strategy is central to creating a high-performance, resilient ETL pipeline.
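A unit test for a transformation is often the cheapest place to catch a bug. Here is a pytest-style sketch; the function under test and its expected values are hypothetical examples, not taken from the book.

```python
# pytest-style unit test sketch for a small transformation function.

def to_celsius(fahrenheit):
    return round((fahrenheit - 32) * 5 / 9, 2)

def test_to_celsius():
    assert to_celsius(32) == 0.0
    assert to_celsius(212) == 100.0
    assert to_celsius(98.6) == 37.0

# pytest discovers test_* functions automatically; calling the function
# directly also works for a quick local check.
test_to_celsius()
print("all assertions passed")
```

Unit tests like this cover logic errors; the external ETL testing strategies the chapter discusses cover what unit tests cannot, such as end-to-end data quality across systems.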

Chapter 14, Best Practices for ETL Pipelines, covers some of the industry best practices for creating ETL pipelines in production. It also identifies some of the common pitfalls that users should avoid while building ETL pipelines.

Chapter 15, Use Cases and Further Reading, covers practical exercises and mini-project outlines, along with further reading suggestions. It also walks you through a use case: creating a robust ETL pipeline for analyzing New York yellow-taxi data. Finally, we’ll obtain US construction market data through AWS Marketplace and create a production-ready, fault-tolerant, high-quality data pipeline in AWS.
