Simplify Big Data Analytics with Amazon EMR
A beginner's guide to learning and implementing Amazon EMR for building data analytics solutions

Product type: Paperback
Published: March 2022
Publisher: Packt
ISBN-13: 9781801071079
Length: 430 pages
Edition: 1st
Author: Sakti Mishra
Table of Contents (19 chapters)

Preface
Section 1: Overview, Architecture, Big Data Applications, and Common Use Cases of Amazon EMR
  Chapter 1: An Overview of Amazon EMR
  Chapter 2: Exploring the Architecture and Deployment Options
  Chapter 3: Common Use Cases and Architecture Patterns
  Chapter 4: Big Data Applications and Notebooks Available in Amazon EMR
Section 2: Configuration, Scaling, Data Security, and Governance
  Chapter 5: Setting Up and Configuring EMR Clusters
  Chapter 6: Monitoring, Scaling, and High Availability
  Chapter 7: Understanding Security in Amazon EMR
  Chapter 8: Understanding Data Governance in Amazon EMR
Section 3: Implementing Common Use Cases and Best Practices
  Chapter 9: Implementing Batch ETL Pipeline with Amazon EMR and Apache Spark
  Chapter 10: Implementing Real-Time Streaming with Amazon EMR and Spark Streaming
  Chapter 11: Implementing UPSERT on S3 Data Lake with Apache Spark and Apache Hudi
  Chapter 12: Orchestrating Amazon EMR Jobs with AWS Step Functions and Apache Airflow/MWAA
  Chapter 13: Migrating On-Premises Hadoop Workloads to Amazon EMR
  Chapter 14: Best Practices and Cost-Optimization Techniques
Other Books You May Enjoy

Validating output using Amazon Athena

The Parquet-format data is already available in Amazon S3 with partition columns, but to make it more consumable for data analysts and data scientists, it would be great if we could enable querying the data through SQL by making it available as a database table.

To set up that integration, we will follow a two-step approach:

  1. First, we will run Glue Crawler to create a Glue Catalog table on top of the S3 data.
  2. Then, we will run a query in Athena to validate the output.

Let's see how you can integrate that.
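
As a quick preview of step 2, once the crawler has created the table, the validation query can be run from code as well as from the Athena console. The following is a minimal sketch using boto3's Athena client; the database name (emr_output_db), table name (orders_parquet), and query result location are hypothetical placeholders, not values from this chapter:

    import time

    import boto3

    athena = boto3.client("athena", region_name="us-east-1")

    # Hypothetical names -- replace with the database/table created by the
    # crawler and an S3 location you own for Athena query results.
    DATABASE = "emr_output_db"
    TABLE = "orders_parquet"
    RESULT_LOCATION = "s3://my-athena-results-bucket/queries/"

    # Preview a few rows to confirm the table is queryable through SQL.
    response = athena.start_query_execution(
        QueryString=f"SELECT * FROM {TABLE} LIMIT 10",
        QueryExecutionContext={"Database": DATABASE},
        ResultConfiguration={"OutputLocation": RESULT_LOCATION},
    )
    query_id = response["QueryExecutionId"]

    # Poll until the query reaches a terminal state.
    while True:
        state = athena.get_query_execution(QueryExecutionId=query_id)[
            "QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)

    print("Query state:", state)
    if state == "SUCCEEDED":
        results = athena.get_query_results(QueryExecutionId=query_id, MaxResults=11)
        # The first row returned by Athena is the column header row.
        print("Rows returned:", len(results["ResultSet"]["Rows"]) - 1)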

Defining a virtual Glue Catalog table on top of Amazon S3 data

You can follow these steps to create and run a Glue crawler, which will create a Glue Data Catalog table:

  1. Navigate to AWS Glue Crawler at https://console.aws.amazon.com/glue/home?region=us-east-1#catalog:tab=crawlers.
  2. Then, click Add crawler, which will open a form to configure the crawler.
  3. Configure the crawler, where the data source should...
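
If you prefer to script the crawler instead of configuring it in the console, the same result can be achieved with boto3. This is a minimal sketch under assumed names: the crawler name, IAM role ARN, catalog database, and S3 path are hypothetical placeholders that you would replace with your own values.

    import time

    import boto3

    glue = boto3.client("glue", region_name="us-east-1")

    # Hypothetical values -- replace with your own role, database, and S3 output path.
    CRAWLER_NAME = "emr-output-parquet-crawler"
    GLUE_ROLE = "arn:aws:iam::123456789012:role/GlueCrawlerRole"
    DATABASE = "emr_output_db"
    S3_PATH = "s3://my-emr-output-bucket/parquet/"

    # Make sure the target catalog database exists (ignore the error if it already does).
    try:
        glue.create_database(DatabaseInput={"Name": DATABASE})
    except glue.exceptions.AlreadyExistsException:
        pass

    # Create a crawler that points at the Parquet output in S3.
    glue.create_crawler(
        Name=CRAWLER_NAME,
        Role=GLUE_ROLE,
        DatabaseName=DATABASE,
        Targets={"S3Targets": [{"Path": S3_PATH}]},
    )

    # Run the crawler and wait for it to return to the READY state.
    glue.start_crawler(Name=CRAWLER_NAME)
    while glue.get_crawler(Name=CRAWLER_NAME)["Crawler"]["State"] != "READY":
        time.sleep(15)

    # List the tables the crawler created in the Glue Data Catalog.
    tables = glue.get_tables(DatabaseName=DATABASE)
    print([t["Name"] for t in tables["TableList"]])

Once the crawler finishes, the table it creates should appear in the Glue Data Catalog and can be queried from Athena as shown earlier.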