Data Engineering with Python

Chapter 1: What is Data Engineering?

Welcome to Data Engineering with Python. While data engineering is not a new field, it seems to have stepped out from the background recently and started to take center stage. This book will introduce you to the field of data engineering. You will learn about the tools and techniques employed by data engineers and you will learn how to combine them to build data pipelines. After completing this book, you will be able to connect to multiple data sources, extract the data, transform it, and load it into new locations. You will be able to build your own data engineering infrastructure, including clustering applications to increase their capacity to process data.

In this chapter, you will learn about the roles and responsibilities of data engineers and how data engineering works to support data science. You will be introduced to the tools used by data engineers, as well as the different areas of technology that you will need to be proficient in to become a data engineer.

In this chapter, we're going to cover the following main topics:

  • What data engineers do
  • Data engineering versus data science
  • Data engineering tools

What data engineers do

Data engineering is part of the big data ecosystem and is closely linked to data science. Data engineers work in the background and do not get the same level of attention as data scientists, but they are critical to the process of data science. The roles and responsibilities of a data engineer vary depending on an organization's level of data maturity and staffing levels; however, there are some tasks, such as the extracting, loading, and transforming of data, that are foundational to the role of a data engineer.

At the lowest level, data engineering involves the movement of data from one system or format to another system or format. Using more common terms, data engineers query data from a source (extract), they perform some modifications to the data (transform), and then they put that data in a location where users can access it and know that it is production quality (load). The terms extract, transform, and load will be used a lot throughout this book and will often be abbreviated to ETL. This definition of data engineering is broad and simplistic. With the help of an example, let's dig deeper into what data engineers do.
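
To make the three steps concrete, here is a minimal sketch in Python. The file names, table, and columns are invented for the example; a real pipeline would extract from production systems rather than a local CSV and a SQLite file.

```python
import csv
import sqlite3

def extract(path):
    # Extract: read raw records from a source (here, a CSV file)
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    # Transform: clean the data (here, normalize the color field)
    for row in rows:
        row["color"] = row["color"].strip().lower()
    return rows

def load(rows, db_path):
    # Load: write the cleaned records somewhere users can query them
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS sales (color TEXT, quantity INTEGER)")
    conn.executemany("INSERT INTO sales VALUES (:color, :quantity)", rows)
    conn.commit()
    conn.close()

load(transform(extract("raw_sales.csv")), "warehouse.db")
```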

An online retailer has a website where you can purchase widgets in a variety of colors. The website is backed by a relational database. Every transaction is stored in the database. How many blue widgets did the retailer sell in the last quarter?

To answer this question, you could run a SQL query on the database. This doesn't rise to the level of needing a data engineer. But as the site grows, running queries on the production database is no longer practical. Furthermore, there may be more than one database that records transactions. There may be databases at different geographical locations – for example, the retailer's North American operations may use a different database than its operations in Asia, Africa, and Europe.
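
With Python's built-in sqlite3 module, for instance, that query might look like the following; the database file, table, and column names are hypothetical, and the date filter is only a rough stand-in for "last quarter".

```python
import sqlite3

conn = sqlite3.connect("widgets.db")
cursor = conn.execute(
    """
    SELECT SUM(quantity)
    FROM transactions
    WHERE color = 'blue'
      AND sold_at >= date('now', '-3 months')   -- rough stand-in for "last quarter"
    """
)
print(cursor.fetchone()[0])
conn.close()
```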

Now you have entered the realm of data engineering. To answer the preceding question, a data engineer would create connections to all of the transactional databases for each region, extract the data, and load it into a data warehouse. From there, you could now count the number of all the blue widgets sold.

Rather than finding the number of blue widgets sold, companies would prefer to find the answer to the following questions:

  • How do we find out which locations sell the most widgets?
  • How do we find out the peak times for selling widgets?
  • How many users put widgets in their carts and remove them later?
  • How do we find out the combinations of widgets that are sold together?

Answering these questions requires more than just extracting the data and loading it into a single system. A transformation is required between the extract and the load. There is also the difference in time zones across regions. For instance, the United States alone has four time zones. Because of this, you would need to transform time fields to a standard. You will also need a way to distinguish sales in each region. This could be accomplished by adding a location field to the data. Should this field be spatial – in coordinates or as well-known text – or will it just be text that could be transformed in a data engineering pipeline?

Here, the data engineer would need to extract the data from each database, then transform the data by adding an additional field for the location. To standardize the times across time zones, the data engineer would need to be familiar with data standards. For time, the International Organization for Standardization (ISO) has a standard – ISO 8601.

To prepare the data to answer the questions in the preceding list, the data engineer needs to do the following (a minimal sketch follows the list):

  • Extract the data from each database.
  • Add a field to tag the location of each transaction.
  • Transform the date from local time to ISO 8601.
  • Load the data into the data warehouse.
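
A minimal sketch of these steps with pandas might look like the following. The CSV file names, the sold_at column, and the source time zones are assumptions made for the example; a real pipeline would read from the regional databases rather than local files.

```python
import pandas as pd

def transform_region(df, location, source_tz):
    # Tag each transaction with its region and standardize the timestamp
    df = df.copy()
    df["location"] = location
    df["sold_at"] = (
        pd.to_datetime(df["sold_at"])
        .dt.tz_localize(source_tz)              # interpret the local times
        .dt.tz_convert("UTC")                   # convert to a common zone
        .dt.strftime("%Y-%m-%dT%H:%M:%S%z")     # format as ISO 8601
    )
    return df

na = transform_region(pd.read_csv("na_sales.csv"), "North America", "America/New_York")
eu = transform_region(pd.read_csv("eu_sales.csv"), "Europe", "Europe/Berlin")

# Load: in practice this would be written to the data warehouse
pd.concat([na, eu]).to_csv("warehouse_load.csv", index=False)
```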

The combination of extracting, loading, and transforming data is accomplished by the creation of a data pipeline. The data comes into the pipeline raw, or dirty in the sense that there may be missing data or typos in the data, which is then cleaned as it flows through the pipe. After that, it comes out the other side into a data warehouse, where it can be queried. The following diagram shows the pipeline required to accomplish the task:

Figure 1.1 – A pipeline that adds a location and modifies the date

Knowing a little more about what data engineering is, and what data engineers do, you should start to get a sense of the responsibilities and skills that data engineers need to acquire. The following section will elaborate on these skills.

Required skills and knowledge to be a data engineer

In the preceding example, it should be clear that data engineers need to be familiar with many different technologies, and we haven't even mentioned the business processes or needs.

At the start of a data pipeline, data engineers need to know how to extract data from files in different formats or different types of databases. This means data engineers need to know several languages used to perform many different tasks, such as SQL and Python.
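
As a small illustration, extracting records from two common file formats takes only the standard library; the file names here are hypothetical.

```python
import csv
import json

# Two different formats, one goal: plain Python dictionaries for the pipeline
with open("sales.csv", newline="") as f:
    csv_records = list(csv.DictReader(f))

with open("sales.json") as f:
    json_records = json.load(f)

print(len(csv_records), len(json_records))
```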

During the transformation phase of the data pipeline, data engineers need to be familiar with data modeling and structures. They will also need to understand the business and what knowledge and insight they are hoping to extract from the data because this will impact the design of the data models.

The loading of data into the data warehouse means there needs to be a data warehouse with a schema to hold the data. This is also usually the responsibility of the data engineer. Data engineers will need to know the basics of data warehouse design, as well as the types of databases used in their construction.

Lastly, the entire infrastructure that the data pipeline runs on could be the responsibility of the data engineer. They need to know how to manage Linux servers, as well as how to install and configure software such as Apache Airflow or NiFi. As organizations move to the cloud, the data engineer now needs to be familiar with spinning up the infrastructure on the cloud platform used by the organization – Amazon, Google Cloud Platform, or Azure.

Having walked through an example of what data engineers do, we can now develop a broader definition of data engineering.

Information

Data engineering is the development, operation, and maintenance of data infrastructure, either on-premises or in the cloud (or hybrid or multi-cloud), comprising databases and pipelines to extract, transform, and load data.

Data engineering versus data science

Data engineering is what makes data science possible. Again, depending on the maturity of an organization, data scientists may be expected to clean and move the data required for analysis. This is not the best use of a data scientist's time. Data scientists and data engineers use similar tools (Python, for instance), but they specialize in different areas. Data engineers need to understand data formats, models, and structures to efficiently transport data, whereas data scientists utilize them for building statistical models and mathematical computation.

Data scientists will connect to the data warehouses built by data engineers. From there, they can extract the data required for machine learning models and analysis. Data scientists may have their models incorporated into a data engineering pipeline. A close relationship should exist between data engineers and data scientists. Understanding what data scientists need in the data will only serve to help the data engineers deliver a better product.

In the next section, you will learn more about the most common tools used by data engineers.

Data engineering tools

To build data pipelines, data engineers need to choose the right tools for the job. Data engineering is part of the overall big data ecosystem and has to account for the three Vs of big data:

  • Volume: The volume of data has grown substantially. Moving a thousand records from a database requires different tools and techniques than moving millions of rows or handling millions of transactions a minute.
  • Variety: Data engineers need tools that handle a variety of data formats in different locations (databases, APIs, files).
  • Velocity: The velocity of data is always increasing. Tracking the activity of millions of users on a social network or the purchases of users all over the world requires data engineers to operate often in near real time.

Programming languages

The lingua franca of data engineering is SQL. Whether you use low-code tools or a specific programming language, there is almost no way to get around knowing SQL. A strong foundation in SQL allows the data engineer to optimize queries for speed and assists with data transformations. SQL is so prevalent in data engineering that data lakes and NoSQL databases have tools that allow the data engineer to query them in SQL.

A large number of open source data engineering tools use Java and Scala (Apache projects). Java is a popular, mainstream, object-oriented programming language. While debatable, Java is slowly being replaced by other languages that run on the Java Virtual Machine (JVM). Scala is one of these languages. Other languages that run on the JVM include Clojure and Groovy. In the next chapter, you will be introduced to Apache NiFi. NiFi allows you to develop custom processors in Java, Clojure, Groovy, and Jython. While Java is an object-oriented language, there has been a movement toward functional programming languages, of which Clojure and Scala are members.

The focus of this book is on data engineering with Python. Python is well documented, has a large user base, and is cross-platform. It has become the default language for data science and data engineering, with an extensive collection of standard and third-party libraries. The data science environment in Python is unmatched in other languages. Libraries such as pandas, matplotlib, numpy, scipy, scikit-learn, tensorflow, pytorch, and NLTK make up an extremely powerful data engineering and data science environment.

Databases

In most production systems, data will be stored in relational databases. Most proprietary solutions will use either Oracle or Microsoft SQL Server, while open source solutions tend to use MySQL or PostgreSQL. These databases store data in rows and are well-suited to recording transactions. There are also relationships between tables, utilizing primary keys to join data from one table to another – thus making them relational. The following table diagram shows a simple data model and the relationships between the tables:

Figure 1.2 – Relational tables joined on Region = RegionID
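
As a rough illustration of such a join, the following hypothetical query pulls the region names into the sales records; the table and column names are invented to mirror the Region = RegionID relationship in the figure.

```python
import sqlite3

conn = sqlite3.connect("widgets.db")
query = """
    SELECT r.Name AS region_name, SUM(s.Quantity) AS total_sold
    FROM Sales AS s
    JOIN Regions AS r ON s.Region = r.RegionID   -- primary key relationship
    GROUP BY r.Name
"""
for region_name, total_sold in conn.execute(query):
    print(region_name, total_sold)
conn.close()
```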

The databases most commonly used in data warehousing are Amazon Redshift, Google BigQuery, and Apache Cassandra, along with NoSQL databases such as Elasticsearch. Amazon Redshift, Google BigQuery, and Cassandra deviate from the traditional rows of relational databases and store data in a columnar format, as shown:

Figure 1.3 – Rows stored in a columnar format

Columnar databases are better suited for fast queries – therefore making them well-suited for data warehouses. All three of these columnar databases can be queried using SQL – although Cassandra uses the Cassandra Query Language (CQL), it is similar.

In contrast to columnar databases, there are document, or NoSQL, databases, such as Elasticsearch. Elasticsearch is actually a search engine based on Apache Lucene. It is similar to Apache Solr but is more user-friendly. Elasticsearch is open source, but it does have proprietary components – most notably, the X-Pack plugins for machine learning, graphs, security, and alerting/monitoring. Elasticsearch uses the Elastic Query DSL (Domain-Specific Language); it is not SQL, but rather a JSON-based query language. Elasticsearch stores data as documents, and while it has parent-child documents, it is non-relational (like the columnar databases).
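
For example, a Query DSL request is just a JSON document posted to an index's _search endpoint. The sketch below uses the requests library; the index name, field names, and the local URL are assumptions made for the example.

```python
import requests

query = {
    "query": {
        "bool": {
            "must": [
                {"match": {"color": "blue"}},
                {"range": {"sold_at": {"gte": "2020-01-01"}}},
            ]
        }
    }
}

# POST the JSON query to the hypothetical "sales" index on a local node
response = requests.post("http://localhost:9200/sales/_search", json=query)
for hit in response.json()["hits"]["hits"]:
    print(hit["_source"])
```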

Once a data engineer extracts data from a database, they will need to transform or process it. With big data, it helps to use a data processing engine.

Data processing engines

Data processing engines allow data engineers to transform data whether it is in batches or streams. These engines allow the parallel execution of transformation tasks. The most popular engine is Apache Spark. Apache Spark allows data engineers to write transformations in Python, Java, and Scala.

Apache Spark provides a DataFrame API in Python (PySpark), making it an ideal tool for Python programmers. Spark also has Resilient Distributed Datasets (RDDs). An RDD is an immutable, distributed collection of objects, created mainly by loading in an external data source. RDDs allow fast, distributed processing because the tasks on an RDD run on different nodes within the cluster. Unlike DataFrames, RDDs do not try to guess the schema in your data.
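
As a rough sketch of the two APIs (not an example from the book), the following PySpark snippet reads a hypothetical CSV of widget sales with the DataFrame API and then loads the same file as an RDD.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("widget-counts").getOrCreate()

# DataFrame API: Spark infers a schema from the CSV header and values
df = spark.read.csv("widget_sales.csv", header=True, inferSchema=True)
df.groupBy("color").count().show()

# RDD API: an immutable, distributed collection with no schema inference
rdd = spark.sparkContext.textFile("widget_sales.csv")
print(rdd.count())

spark.stop()
```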

Other popular processing engines include Apache Storm, which utilizes spouts to read data and bolts to perform transformations. By connecting them, you build a processing pipeline. Apache Flink and Samza are more modern stream and batch processing frameworks that allow you to process unbounded streams. An unbounded stream is data that comes in with no known end – a temperature sensor, for example, is an unbounded stream because it is constantly reporting temperatures. Flink and Samza are excellent choices if you are using Apache Kafka to stream data from a system. You will learn more about Apache Kafka later in this book.

Data pipelines

Combining a transactional database, a programming language, a processing engine, and a data warehouse results in a pipeline. For example, if you select all the records of widget sales from the database, run them through Spark to reduce the data to widgets and their counts, then dump the result into the data warehouse, you have a pipeline. But this pipeline is not very useful if you have to execute it manually every time you want it to run. Data pipelines need a scheduler to allow them to run at specified intervals. The simplest way to accomplish this is by using crontab. Schedule a cron job for your Python file and sit back and watch it run every X number of hours.
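
For example, a self-contained pipeline script such as the hypothetical one below could be scheduled with a single crontab entry (shown in the comment at the top) to run every hour; the database files and table names are invented for the sketch.

```python
#!/usr/bin/env python3
# pipeline.py
# Example crontab entry to run this script at the top of every hour:
#   0 * * * * /usr/bin/python3 /home/etl/pipeline.py

import sqlite3

def run_pipeline():
    # Extract widget sales, reduce them to counts per color, and load the result
    source = sqlite3.connect("widgets.db")
    warehouse = sqlite3.connect("warehouse.db")
    rows = source.execute(
        "SELECT color, COUNT(*) FROM transactions GROUP BY color"
    ).fetchall()
    warehouse.execute(
        "CREATE TABLE IF NOT EXISTS widget_counts (color TEXT, total INTEGER)"
    )
    warehouse.executemany("INSERT INTO widget_counts VALUES (?, ?)", rows)
    warehouse.commit()
    source.close()
    warehouse.close()

if __name__ == "__main__":
    run_pipeline()
```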

Managing all the pipelines in crontab becomes difficult fast. How do you keep track of a pipeline's successes and failures? How do you know what ran and what didn't? How do you handle backpressure – if one task runs faster than the next, how do you hold data back so that it doesn't overwhelm the downstream task? As your pipelines become more advanced, you will quickly outgrow crontab and will need a better framework.

Apache Airflow

The most popular framework for building data engineering pipelines in Python is Apache Airflow. Airflow is a workflow management platform built by Airbnb. Airflow is made up of a web server, a scheduler, a metastore, a queueing system, and executors. You can run Airflow as a single instance, or you can break it up into a cluster with many executor nodes – this is most likely how you would run it in production. Airflow uses Directed Acyclic Graphs (DAGs).

A DAG is Python code that specifies your tasks. A graph is a series of nodes connected by a relationship or dependency; in Airflow, the graph is directed because it flows in one direction, with each task coming after its dependencies. Using the preceding example pipeline, the first node would execute a SQL statement grabbing all the widget sales. This node would connect downstream to another node, which would aggregate the widgets and counts. Lastly, this node would connect to the final node, which loads the data into the warehouse. The pipeline DAG would look as in the following diagram:

Figure 1.4 – A DAG showing the flow of data between nodes. The task follows the arrows (is directed) from left to right
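
In code, such a DAG might be sketched as follows. This is a minimal illustration rather than the book's example: the import path matches Airflow 2.x (older 1.10 releases import PythonOperator from airflow.operators.python_operator), and the task bodies are left as stubs because the structure is the point.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_sales():
    ...  # run the SQL statement that grabs all the widget sales

def aggregate_sales():
    ...  # reduce the rows to widgets and counts

def load_warehouse():
    ...  # write the aggregated counts to the data warehouse

with DAG(
    dag_id="widget_counts",
    start_date=datetime(2020, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_sales", python_callable=extract_sales)
    aggregate = PythonOperator(task_id="aggregate_sales", python_callable=aggregate_sales)
    load = PythonOperator(task_id="load_warehouse", python_callable=load_warehouse)

    # The >> operator declares dependencies, which is what makes the graph directed
    extract >> aggregate >> load
```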

This book will cover the basics of Apache Airflow but will primarily use Apache NiFi to demonstrate the principles of data engineering. The following is a screenshot of a DAG in Airflow:

Figure 1.5 – The Airflow GUI showing the details of a DAG

The Airflow GUI is not as polished as that of NiFi, which we will discuss next.

Apache NiFi

Apache NiFi is another framework for building data engineering pipelines, and it too utilizes DAGs. Apache NiFi was built by the National Security Agency and is used at several federal agencies. It is easier to set up and is useful for new data engineers. The GUI is excellent, and while you can use Jython, Clojure, Scala, or Groovy to write processors, you can accomplish a lot simply by configuring the existing processors. The following screenshot shows the NiFi GUI and a sample DAG:

Figure 1.6 – A sample NiFi flow extracting data from a database and sending it to Elasticsearch

Apache NiFi also allows clustering and the remote execution of pipelines. It has a built-in scheduler and provides backpressure handling and monitoring of pipelines. Furthermore, Apache NiFi offers version control through the NiFi Registry and can be used to collect data on the edge using MiNiFi.

Another Python-based tool for data engineering pipelines is Luigi – developed by Spotify. Luigi also uses a graph structure and allows you to connect tasks. It has a GUI much like Airflow. Luigi will not be covered in this book but is an excellent option for Python-based data engineering.

Summary

In this chapter, you learned what data engineering is. Data engineering roles and responsibilities vary depending on the maturity of an organization's data infrastructure. But data engineering, at its simplest, is the creation of pipelines to move data from one source or format to another. This may or may not involve data transformations, processing engines, and the maintenance of infrastructure.

Data engineers use a variety of programming languages, but most commonly Python, Java, or Scala, as well as proprietary and open source transactional databases and data warehouses, both on-premises and in the cloud, or a mixture. Data engineers need to be knowledgeable in many areas – programming, operations, data modeling, databases, and operating systems. The breadth of the field is part of what makes it fun, exciting, and challenging. To those willing to accept the challenge, data engineering is a rewarding career.

In the next chapter, we will begin by setting up an environment to start building data pipelines.

Key benefits

  • Become well-versed in data architectures, data preparation, and data optimization skills with the help of practical examples
  • Design data models and learn how to extract, transform, and load (ETL) data using Python
  • Schedule, automate, and monitor complex data pipelines in production

Description

Data engineering provides the foundation for data science and analytics, and forms an important part of all businesses. This book will help you to explore various tools and methods that are used for understanding the data engineering process using Python. The book will show you how to tackle challenges commonly faced in different aspects of data engineering. You’ll start with an introduction to the basics of data engineering, along with the technologies and frameworks required to build data pipelines to work with large datasets. You’ll learn how to transform and clean data and perform analytics to get the most out of your data. As you advance, you'll discover how to work with big data of varying complexity and production databases, and build data pipelines. Using real-world examples, you’ll build architectures on which you’ll learn how to deploy data pipelines. By the end of this Python book, you’ll have gained a clear understanding of data modeling techniques, and will be able to confidently build data engineering pipelines for tracking data, running quality checks, and making necessary changes in production.

Who is this book for?

This book is for data analysts, ETL developers, and anyone looking to get started with or transition to the field of data engineering or refresh their knowledge of data engineering using Python. This book will also be useful for students planning to build a career in data engineering or IT professionals preparing for a transition. No previous knowledge of data engineering is required.

What you will learn

  • Understand how data engineering supports data science workflows
  • Discover how to extract data from files and databases and then clean, transform, and enrich it
  • Configure processors for handling different file formats as well as both relational and NoSQL databases
  • Find out how to implement a data pipeline and dashboard to visualize results
  • Use staging and validation to check data before landing in the warehouse
  • Build real-time pipelines with staging areas that perform validation and handle failures
  • Get to grips with deploying pipelines in the production environment

Product Details

Publication date : Oct 23, 2020
Length: 356 pages
Edition : 1st
Language : English
ISBN-13 : 9781839214189

Table of Contents

Section 1: Building Data Pipelines – Extract, Transform, and Load
Chapter 1: What is Data Engineering?
Chapter 2: Building Our Data Engineering Infrastructure
Chapter 3: Reading and Writing Files
Chapter 4: Working with Databases
Chapter 5: Cleaning, Transforming, and Enriching Data
Chapter 6: Building a 311 Data Pipeline
Section 2: Deploying Data Pipelines in Production
Chapter 7: Features of a Production Pipeline
Chapter 8: Version Control with the NiFi Registry
Chapter 9: Monitoring Data Pipelines
Chapter 10: Deploying Data Pipelines
Chapter 11: Building a Production Data Pipeline
Section 3: Beyond Batch – Building Real-Time Data Pipelines
Chapter 12: Building a Kafka Cluster
Chapter 13: Streaming Data with Apache Kafka
Chapter 14: Data Processing with Apache Spark
Chapter 15: Real-Time Edge Data with MiNiFi, Kafka, and Spark
Other Books You May Enjoy

Customer reviews

Rating distribution: 2.6 out of 5 (24 ratings)
5 star: 12.5%
4 star: 12.5%
3 star: 25%
2 star: 25%
1 star: 25%

Christoph – Oct 03, 2021 – 5 stars

This book does a good job introducing the use of NiFi and Airflow as data orchestrators for data pipelines, as well as briefly mentioning Kafka and Spark.

NiFi: NiFi is explained in detail in this book. It covers subjects that go beyond the basics and are not easily googled. Since NiFi tutorials are rather rare compared to many other technologies, this is a valuable resource for learning it. It covers topics such as deploying to production, versioning, and monitoring. Online you can find some false information that versioning and automated deployment aren't possible with NiFi, but this book shows how it's done!

Airflow: Airflow is also introduced, and you're shown how to set up DAGs with it. If you're mainly interested in Airflow, there are better books for you, e.g. "Data Pipelines with Apache Airflow". Nevertheless, this book will teach you how to get started with Airflow as well and how to set up simple project-sized data pipelines from extraction to visualization.

Real-time processing: At the end of the book, Kafka and Spark are introduced as well. The content on these is not comprehensive. You will definitely need another resource to learn more about those in case that's what you're interested in.

What I didn't like: This book shows you how to install the tools mentioned by hand. I would have much preferred it if a Docker setup was used. Others pointed out that they had problems installing tools and copying examples 1:1. I didn't have such problems since I installed everything using Docker, and since Docker often is a relevant skill in data engineering, it would not have been out of place to use it here as well. The title can be criticized too since it's broader than the content. Something like "Data Pipelines with NiFi and Airflow" might have been more fitting.

Summary: I'm giving this book 5 stars since it's hard to find a good resource on NiFi and this book is it. After you read it, you will see that both Airflow and NiFi are good orchestration tools for very similar use cases.

Amazon Verified review

Amir Esmaeili – Nov 15, 2023 – 5 stars

Simply great!

Amazon Verified review

jml – Dec 20, 2020 – 5 stars

Data Engineering with Python provides a solid overview of pipelining and database connections for those tasked with processing both batch and stream data flows. Not only for the data miners, this book will be useful as well in a CI/CD environment using Kafka and Spark. It's very readable and contains lots of practical, illustrative examples.

Hits: solid explanations and demonstrations of Pandas, Zookeeper, Kafka, and Spark. Also introduces Great Expectations, NiFi, Airflow, and Faker, all of which are tied together in a usable demonstration environment. Pipeline implementation is thoroughly covered as well.

Misses: the book could use a little fine-tuning as to Python 3; some of the instructions are rather downrev, and concepts like 311 / SeeClickFix are sort of dropped in without a lot of explanation. Also, there's a heavy focus on SQL and almost no coverage of NoSQL databases.

Overall, a good addition to the bookshelf if you're using any of these Python packages. Readable and useful for anyone supporting data analysis.

Amazon Verified review

Michael Porter – Dec 17, 2020 – 4 stars

This is a very hands-on guide to building data pipelines with Python and a number of other tools that would be very useful for anyone looking to handle large datasets, clean data, or enhance data. Paul does a great job of breaking down the difference between a Data Scientist and a Data Engineer while also covering areas of overlap.

His breakdown of the tools that are available and a number of quick-start style tutorials would be useful to anyone who's experienced enough to understand them. This book is clearly not for beginners but is clear and concise enough to ensure that it's not only for experts either.

My only real complaint is that the book skips over a whole category of NoSQL databases, namely graph databases, i.e. Neo4j, Neptune, etc., which are a fast-growing part of data science as a whole.

All in all, a very thorough introduction to data engineering that focuses mostly on SQL and would help any experienced Python developer get their toes wet in the field.

Amazon Verified review

Habib Hezarehee – Nov 30, 2020 – 4 stars

The book arrived a little bit damaged (not too noticeably), but since I love the content, I'll stay away from returning it :p

Amazon Verified review

FAQs

What is the delivery time and cost of the print book?

Shipping Details

USA:

Economy: Delivery to most addresses in the US within 10-15 business days

Premium: Trackable Delivery to most addresses in the US within 3-8 business days

UK:

Economy: Delivery to most addresses in the U.K. within 7-9 business days.
Shipments are not trackable

Premium: Trackable delivery to most addresses in the U.K. within 3-4 business days!
Add one extra business day for deliveries to Northern Ireland and Scottish Highlands and islands

EU:

Premium: Trackable delivery to most EU destinations within 4-9 business days.

Australia:

Economy: Can deliver to P. O. Boxes and private residences.
Trackable service with delivery to addresses in Australia only.
Delivery time ranges from 7-9 business days for VIC and 8-10 business days for Interstate metro
Delivery time is up to 15 business days for remote areas of WA, NT & QLD.

Premium: Delivery to addresses in Australia only
Trackable delivery to most P. O. Boxes and private residences in Australia within 4-5 days based on the distance to a destination following dispatch.

India:

Premium: Delivery to most Indian addresses within 5-6 business days

Rest of the World:

Premium: Countries in the American continent: Trackable delivery to most countries within 4-7 business days

Asia:

Premium: Delivery to most Asian addresses within 5-9 business days

Disclaimer:
All orders received before 5 PM U.K time would start printing from the next business day. So the estimated delivery times start from the next day as well. Orders received after 5 PM U.K time (in our internal systems) on a business day or anytime on the weekend will begin printing the second to next business day. For example, an order placed at 11 AM today will begin printing tomorrow, whereas an order placed at 9 PM tonight will begin printing the day after tomorrow.


Unfortunately, due to several restrictions, we are unable to ship to the following countries:

  1. Afghanistan
  2. American Samoa
  3. Belarus
  4. Brunei Darussalam
  5. Central African Republic
  6. The Democratic Republic of Congo
  7. Eritrea
  8. Guinea-Bissau
  9. Iran
  10. Lebanon
  11. Libyan Arab Jamahiriya
  12. Somalia
  13. Sudan
  14. Russian Federation
  15. Syrian Arab Republic
  16. Ukraine
  17. Venezuela
What is custom duty/charge?

Customs duties are charges levied on goods when they cross international borders. They are a tax imposed on imported goods. These duties are charged by special authorities and bodies created by local governments and are meant to protect local industries, economies, and businesses.

Do I have to pay customs charges for the print book order?

The orders shipped to the countries that are listed under EU27 will not bear customs charges; these are paid by Packt as part of the order.

List of EU27 countries: www.gov.uk/eu-eea:

For shipments to countries outside of the EU27, a customs duty or localized taxes may be applied by the recipient country. These charges should be paid by the customer and are not included in the shipping charges on the order.

How do I know my custom duty charges?

The amount of duty payable varies greatly depending on the imported goods, the country of origin and several other factors like the total invoice amount or dimensions like weight, and other such criteria applicable in your country.

For example:

  • If you live in Mexico and the declared value of your ordered items is over $50, you will have to pay an additional import tax of 19%, which will be $9.50, to the courier service in order to receive the package.
  • Whereas if you live in Turkey and the declared value of your ordered items is over €22, you will have to pay an additional import tax of 18%, which will be €3.96, to the courier service in order to receive the package.
How can I cancel my order?

Cancellation Policy for Published Printed Books:

You can cancel any order within 1 hour of placing the order. Simply contact customercare@packt.com with your order details or payment transaction id. If your order has already started the shipment process, we will do our best to stop it. However, if it is already on the way to you then when you receive it, you can contact us at customercare@packt.com using the returns and refund process.

Please understand that Packt Publishing cannot provide refunds or cancel any order except for the cases described in our Return Policy (i.e., Packt Publishing agrees to replace your printed book if it arrives damaged or with a material defect); outside of those cases, Packt Publishing will not accept returns.

What is your returns and refunds policy?

Return Policy:

We want you to be happy with your purchase from Packtpub.com. We will not hassle you with returning print books to us. If the print book you receive from us is incorrect, damaged, doesn't work or is unacceptably late, please contact Customer Relations Team on customercare@packt.com with the order number and issue details as explained below:

  1. If you ordered (eBook, Video or Print Book) incorrectly or accidentally, please contact Customer Relations Team on customercare@packt.com within one hour of placing the order and we will replace/refund you the item cost.
  2. Sadly, if your eBook or Video file is faulty or a fault occurs during the eBook or Video being made available to you, i.e. during download then you should contact Customer Relations Team within 14 days of purchase on customercare@packt.com who will be able to resolve this issue for you.
  3. You will have a choice of replacement or refund of the problem items (damaged, defective, or incorrect).
  4. Once Customer Care Team confirms that you will be refunded, you should receive the refund within 10 to 12 working days.
  5. If you are only requesting a refund of one book from a multiple order, then we will refund you the appropriate single item.
  6. Where the items were shipped under a free shipping offer, there will be no shipping costs to refund.

On the off chance your printed book arrives damaged or with a material defect, contact our Customer Relations Team on customercare@packt.com within 14 days of receipt of the book with appropriate evidence of damage, and we will work with you to secure a replacement copy, if necessary. Please note that each printed book you order from us is individually made by Packt's professional book-printing partner on a print-on-demand basis.

What tax is charged?

Currently, no tax is charged on the purchase of any print book (subject to change based on the laws and regulations). A localized VAT fee is charged only to our European and UK customers on eBooks, Video and subscriptions that they buy. GST is charged to Indian customers for eBooks and video purchases.

What payment methods can I use?

You can pay with the following card types:

  1. Visa Debit
  2. Visa Credit
  3. MasterCard
  4. PayPal