Data Engineering with AWS

The challenges of ever-growing datasets

Organizations have many assets, such as physical assets, intellectual property, the knowledge of their employees, and trade secrets. But for too long, organizations failed to fully recognize, and to maximize the use of, another extremely valuable asset: the vast quantities of data they had gathered over time.

That is not to say that organizations ignored these data assets; rather, due to the expense and complexity of storing and managing the data, they tended to keep only a subset of it.

Initially, data may have been stored in a single database, but as organizations and their data requirements grew, the number of databases increased exponentially. Today, with the modern microservices approach to application development, companies commonly have hundreds, or even thousands, of databases. Faced with so many data silos, organizations invested in data warehousing systems that let them ingest data from multiple siloed databases into a central location for analytics. But these systems were expensive, which limited how much data could be stored: some datasets were excluded outright, while for others only aggregated data was loaded into the warehouse. For the same reason, data was kept for only a limited period, since it was not economical to retain historical data long term. There was also a lack of widely available tools and compute power for analyzing extremely large, comprehensive datasets.

As organizations continued to grow, multiple data warehouses and data marts were implemented for different business units or groups, yet there was still no centralized, single-source-of-truth repository for an organization's data. Organizations were also faced with new types of data, such as semi-structured and even unstructured data, and analyzing these datasets with traditional tooling was a challenge.

As a result, new technologies were invented that could work better with very large datasets and different data types. Hadoop was created in the early 2000s at Yahoo as part of a search engine project that aimed to index one billion web pages. Over the next few years, Hadoop, and the underlying MapReduce technology, became a popular way for all types of companies to store and process much larger datasets. However, running a Hadoop cluster was a complex and expensive operation requiring specialized skills.

The next evolution in big data processing was the development of Spark (later taken on as an Apache project and now known as Apache Spark), a new framework for processing big data. Spark delivered significant performance gains on large datasets because it performed most processing in memory, greatly reducing the amount of reading from and writing to disk. Today, Apache Spark is often regarded as the gold standard for processing large datasets and is used by a wide array of companies, although many Hadoop MapReduce clusters remain in production.
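To make the in-memory model concrete, here is a minimal PySpark sketch (not from the book; the file path and column names are hypothetical). Caching a dataset keeps it in executor memory after the first action, so repeated aggregations avoid re-reading it from disk:

    from pyspark.sql import SparkSession

    # Start a Spark session (locally here; on AWS this would typically run on EMR).
    spark = SparkSession.builder.appName("in-memory-example").getOrCreate()

    # Hypothetical input: a large CSV file of sales records.
    sales = spark.read.csv("sales.csv", header=True, inferSchema=True)

    # cache() keeps the DataFrame in memory after the first action, so only
    # the first aggregation below scans the file; the second reuses the cache.
    sales.cache()

    sales.groupBy("region").count().show()
    sales.groupBy("product").sum("amount").show()

Without the cache() call, each action would trigger a fresh read of the source data; with it, only the first does, which is the essence of Spark's in-memory performance advantage.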

In parallel with the rise of Apache Spark as a popular big data processing tool came the concept of data lakes: an approach that uses low-cost object storage as the physical storage layer for a variety of data types, provides a central catalog of all the datasets, and makes that data available for processing with a wide variety of tools, including Apache Spark. AWS uses the following definition when talking about data lakes:

A data lake is a centralized repository that allows you to store all your structured and unstructured data at any scale. You can store your data as-is, without having to first structure the data, and run different types of analytics—from dashboards and visualizations to big data processing, real-time analytics, and machine learning to guide better decisions.

You can find this definition here: https://aws.amazon.com/big-data/datalakes-and-analytics/what-is-a-data-lake/.
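To illustrate the data lake pattern in practice, here is a hedged sketch (the bucket name, prefix, and schema are hypothetical) that uses Apache Spark to query raw JSON stored as-is in Amazon S3, without first loading it into a database or warehouse:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("data-lake-example").getOrCreate()

    # Hypothetical data lake location: raw clickstream events landed in S3 as-is.
    # Reading s3:// paths requires an environment with the S3 connector
    # configured (on Amazon EMR this works out of the box).
    events = spark.read.json("s3://example-data-lake/raw/clickstream/")

    # Expose the dataset to SQL, querying it directly on object storage.
    events.createOrReplaceTempView("clickstream")

    spark.sql("""
        SELECT page, COUNT(*) AS views
        FROM clickstream
        GROUP BY page
        ORDER BY views DESC
    """).show()

In a fuller data lake, the schema and location would be registered in a central catalog (such as the AWS Glue Data Catalog) rather than inferred ad hoc, so that any tool, not just Spark, can discover and query the same datasets.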

Having looked at how data analytics became an essential tool in organizations, let's now look at the roles that help a modern organization maximize the value of its data.
