Data engineering tools

To build data pipelines, data engineers need to choose the right tools for the job. Data engineering is part of the overall big data ecosystem and has to account for the three Vs of big data:

  • Volume: The volume of data has grown substantially. Moving a thousand records from a database requires different tools and techniques than moving millions of rows or handling millions of transactions a minute.
  • Variety: Data engineers need tools that handle a variety of data formats in different locations (databases, APIs, files).
  • Velocity: The velocity of data is always increasing. Tracking the activity of millions of users on a social network or the purchases of users all over the world often requires data engineers to operate in near real time.

Programming languages

The lingua franca of data engineering is SQL. Whether you use low-code tools or a specific programming language, there is almost no way to get around knowing SQL. A strong foundation in SQL allows the data engineer to optimize queries for speed and assists in data transformations. SQL is so prevalent in data engineering that data lakes and NoSQL databases offer tools that allow the data engineer to query them in SQL.
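As a quick illustration, here is a minimal sketch of running an aggregation query from Python using the standard library's sqlite3 module; the warehouse.db file and the widget_sales table are hypothetical, and any other database driver (psycopg2, pyodbc, and so on) follows the same pattern:

    import sqlite3

    # Connect to a hypothetical SQLite database file
    conn = sqlite3.connect("warehouse.db")
    cur = conn.cursor()

    # An aggregation a data engineer might tune: sales and revenue per region
    cur.execute(
        """
        SELECT region, COUNT(*) AS sales, SUM(amount) AS revenue
        FROM widget_sales
        GROUP BY region
        ORDER BY revenue DESC
        """
    )
    for region, sales, revenue in cur.fetchall():
        print(region, sales, revenue)

    conn.close()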

A large number of open source data engineering tools use Java and Scala (Apache projects). Java is a popular, mainstream, object-oriented programming language. Though the point is debatable, Java is slowly being displaced by other languages that run on the Java Virtual Machine (JVM). Scala is one of these languages; others include Clojure and Groovy. In the next chapter, you will be introduced to Apache NiFi. NiFi allows you to develop custom processors in Java, Clojure, Groovy, and Jython. While Java is an object-oriented language, there has been a movement toward functional programming languages, of which Clojure and Scala are members.

The focus of this book is on data engineering with Python. Python is well documented, has a large user base, and is cross-platform. It has become the default language for data science and data engineering, with an extensive collection of standard and third-party libraries. The data science environment in Python is unmatched in other languages. Libraries such as pandas, Matplotlib, NumPy, SciPy, scikit-learn, TensorFlow, PyTorch, and NLTK make up an extremely powerful data engineering and data science environment.
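As a small taste of that ecosystem, the following sketch uses pandas to extract, clean, transform, and load a dataset; the file names and columns are hypothetical:

    import pandas as pd

    # Extract: read a hypothetical sales file
    df = pd.read_csv("widget_sales.csv")

    # Clean and transform: fill missing amounts, then total sales per region
    df["amount"] = df["amount"].fillna(0)
    summary = df.groupby("region")["amount"].sum()

    # Load: write the result out for the next stage of the pipeline
    summary.to_csv("sales_by_region.csv")
    print(summary.head())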

Databases

In most production systems, data will be stored in relational databases. Most proprietary solutions will use either Oracle or Microsoft SQL Server, while open source solutions tend to use MySQL or PostgreSQL. These databases store data in rows and are well suited to recording transactions. There are also relationships between tables, utilizing primary and foreign keys to join data from one table to another – thus making them relational. The following table diagram shows a simple data model and the relationships between the tables:

Figure 1.2 – Relational tables joined on Region = RegionID
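A join like the one in Figure 1.2 might look as follows in Python; the staff and regions tables, and their columns, are hypothetical stand-ins for the tables in the diagram:

    import sqlite3

    conn = sqlite3.connect("warehouse.db")

    # Join the child table to its parent on the key shown in Figure 1.2
    rows = conn.execute(
        """
        SELECT s.Name, r.RegionName
        FROM staff AS s
        JOIN regions AS r ON s.Region = r.RegionID
        """
    ).fetchall()
    conn.close()

    print(rows[:5])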

The most common databases used in data warehousing are Amazon Redshift, Google BigQuery, and Apache Cassandra, along with NoSQL databases such as Elasticsearch. Amazon Redshift, Google BigQuery, and Cassandra deviate from the traditional rows of relational databases and store data in a columnar format, as shown:

Figure 1.3 – Rows stored in a columnar format

Columnar databases are better suited to fast analytical queries, which makes them a good fit for data warehouses. All three of these columnar databases can be queried using SQL – although Cassandra uses the Cassandra Query Language (CQL), it is very similar to SQL.
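For example, querying Cassandra from Python with the open source DataStax driver looks almost identical to working with a relational database; the keyspace and table here are hypothetical, and a local node is assumed:

    from cassandra.cluster import Cluster

    cluster = Cluster(["127.0.0.1"])        # connect to a local Cassandra node
    session = cluster.connect("sales")      # hypothetical keyspace

    # CQL reads very much like SQL
    rows = session.execute("SELECT region, total FROM widget_sales LIMIT 10")
    for row in rows:
        print(row.region, row.total)

    cluster.shutdown()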

In contrast to columnar databases, there are document, or NoSQL, databases, such as Elasticsearch. Elasticsearch is actually a search engine based on Apache Lucene. It is similar to Apache Solr but is more user-friendly. Elasticsearch is open source, but it does have proprietary components – most notably, the X-Pack plugins for machine learning, graphs, security, and alerting/monitoring. Elasticsearch uses the Elastic Query DSL (Domain-Specific Language), which is not SQL but rather a JSON-based query language. Elasticsearch stores data as documents, and while it has parent-child documents, it is non-relational (like the columnar databases).
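A sketch of a Query DSL search from Python is shown below; the index name and field are hypothetical, a local node on the default port is assumed, and the call style follows the 7.x Python client:

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    # The query is JSON, not SQL: match documents whose name field contains "widget"
    query = {"query": {"match": {"name": "widget"}}}
    results = es.search(index="products", body=query)

    for hit in results["hits"]["hits"]:
        print(hit["_source"])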

Once a data engineer extracts data from a database, they will need to transform or process it. With big data, it helps to use a data processing engine.

Data processing engines

Data processing engines allow data engineers to transform data whether it is in batches or streams. These engines allow the parallel execution of transformation tasks. The most popular engine is Apache Spark. Apache Spark allows data engineers to write transformations in Python, Java, and Scala.

Apache Spark works with DataFrames in Python, making it an ideal tool for Python programmers. Spark also has Resilient Distributed Datasets (RDDs). RDDs are an immutable and distributed collection of objects. You create them mainly by loading in an external data source. RDDs allow fast and distributed processing, with the tasks in an RDD running on different nodes within the cluster. Unlike DataFrames, RDDs do not try to infer a schema for your data.
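The following PySpark sketch shows both APIs side by side; the CSV file is hypothetical and a local Spark installation is assumed:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("widgets").getOrCreate()

    # DataFrame API: Spark infers a schema from the CSV header
    df = spark.read.csv("widget_sales.csv", header=True, inferSchema=True)
    df.groupBy("widget").count().show()

    # RDD API: an immutable, distributed collection with no inferred schema
    rdd = spark.sparkContext.textFile("widget_sales.csv")
    print(rdd.map(lambda line: line.split(",")[0]).take(5))

    spark.stop()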

Other popular processing engines include Apache Storm, which utilizes spouts to read data and bolts to perform transformations; by connecting them, you build a processing pipeline. Apache Flink and Apache Samza are more modern stream and batch processing frameworks that allow you to process unbounded streams. An unbounded stream is data that comes in with no known end – a temperature sensor, for example, is an unbounded stream because it is constantly reporting temperatures. Flink and Samza are excellent choices if you are using Apache Kafka to stream data from a system. You will learn more about Apache Kafka later in this book.

Data pipelines

Combining a transactional database, a programming language, a processing engine, and a data warehouse results in a pipeline. For example, if you select all the records of widget sales from the database, run them through Spark to reduce the data to widgets and counts, then dump the result to the data warehouse, you have a pipeline. But this pipeline is not very useful if you have to execute it manually every time you want it to run. Data pipelines need a scheduler to allow them to run at specified intervals. The simplest way to accomplish this is by using crontab. Schedule a cron job for your Python file and sit back and watch it run every X number of hours.
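A crontab entry for such a job might look like the following; the schedule and script path are hypothetical (edit your crontab with crontab -e):

    # Run the hypothetical pipeline script at minute 0 of every third hour
    0 */3 * * * /usr/bin/python3 /home/user/pipelines/widget_pipeline.py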

Managing all the pipelines in crontab becomes difficult fast. How do you keep track of pipelines' successes and failures? How do you know what ran and what didn't? How do you handle backpressure – if one task runs faster than the next, how do you hold data back so it doesn't overwhelm the downstream task? As your pipelines become more advanced, you will quickly outgrow crontab and will need a better framework.

Apache Airflow

The most popular framework for building data engineering pipelines in Python is Apache Airflow. Airflow is a workflow management platform built by Airbnb. Airflow is made up of a web server, a scheduler, a metastore, a queueing system, and executors. You can run Airflow as a single instance, or you can break it up into a cluster with many executor nodes – this is most likely how you would run it in production. Airflow uses Directed Acyclic Graphs (DAGs).

A DAG is Python code that specifies your tasks and the dependencies between them. A graph is a series of nodes connected by a relationship or dependency. In Airflow, the graph is directed because the tasks flow in one direction, each running after its dependencies. Using the preceding example pipeline, the first node would execute a SQL statement grabbing all the widget sales. This node would connect downstream to another node, which would aggregate the widgets and counts. Lastly, this node would connect to the final node, which loads the data into the warehouse. The pipeline DAG would look as in the following diagram:

Figure 1.4 – A DAG showing the flow of data between nodes. The task follows the arrows (is directed) from left to right
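The DAG in Figure 1.4 might be expressed in Python roughly as follows; the task functions are hypothetical stubs and the import path assumes Airflow 2.x:

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def query_widget_sales():
        """Execute the SQL statement that grabs all the widget sales."""

    def aggregate_counts():
        """Reduce the data to widgets and counts."""

    def load_warehouse():
        """Load the aggregated result into the data warehouse."""

    with DAG(
        dag_id="widget_sales_pipeline",
        start_date=datetime(2020, 10, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="query_widget_sales",
                                 python_callable=query_widget_sales)
        transform = PythonOperator(task_id="aggregate_counts",
                                   python_callable=aggregate_counts)
        load = PythonOperator(task_id="load_warehouse",
                              python_callable=load_warehouse)

        # Directed: each task runs only after its upstream dependency
        extract >> transform >> load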

This book will cover the basics of Apache Airflow but will primarily use Apache NiFi to demonstrate the principles of data engineering. The following is a screenshot of a DAG in Airflow:

Figure 1.5 – The Airflow GUI showing the details of a DAG

The GUI is not as polished as NiFi's, which we will discuss next.

Apache NiFi

Apache NiFi is another framework for building data engineering pipelines, and it too utilizes DAGs. Apache NiFi was built by the National Security Agency and is used at several federal agencies. Apache NiFi is easier to set up and is a good choice for new data engineers. The GUI is excellent, and while you can use Jython, Clojure, Scala, or Groovy to write processors, you can accomplish a lot with a simple configuration of existing processors. The following screenshot shows the NiFi GUI and a sample DAG:

Figure 1.6 – A sample NiFi flow extracting data from a database and sending it to Elasticsearch

Apache NiFi also allows clustering and the remote execution of pipelines. It has a built-in scheduler and provides backpressure handling and monitoring of pipelines. Furthermore, Apache NiFi offers version control through the NiFi Registry and can be used to collect data on the edge using MiNiFi.

Another Python-based tool for data engineering pipelines is Luigi – developed by Spotify. Luigi also uses a graph structure and allows you to connect tasks. It has a GUI much like Airflow. Luigi will not be covered in this book but is an excellent option for Python-based data engineering.
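For comparison, a minimal Luigi task might look like this; the task, output file, and logic are hypothetical:

    import luigi

    class AggregateWidgets(luigi.Task):
        """A hypothetical task that writes widget counts to a file."""

        def output(self):
            return luigi.LocalTarget("widget_counts.csv")

        def run(self):
            with self.output().open("w") as f:
                f.write("widget,count\n")

    if __name__ == "__main__":
        # Run with the in-process scheduler rather than a central luigid
        luigi.build([AggregateWidgets()], local_scheduler=True)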
