Serverless ETL and Analytics with AWS Glue

You're reading from Serverless ETL and Analytics with AWS Glue

Product type: Book
Published: Aug 2022
Publisher: Packt
ISBN-13: 9781800564985
Pages: 434
Edition: 1st
Authors (6): Vishal Pathak, Subramanya Vajiraya, Noritaka Sekiyama, Tomohiro Tanaka, Albert Quiroga, Ishan Gaur
Table of Contents

Preface
Section 1 – Introduction, Concepts, and the Basics of AWS Glue
Chapter 1: Data Management – Introduction and Concepts
Chapter 2: Introduction to Important AWS Glue Features
Chapter 3: Data Ingestion
Section 2 – Data Preparation, Management, and Security
Chapter 4: Data Preparation
Chapter 5: Data Layouts
Chapter 6: Data Management
Chapter 7: Metadata Management
Chapter 8: Data Security
Chapter 9: Data Sharing
Chapter 10: Data Pipeline Management
Section 3 – Tuning, Monitoring, Data Lake Common Scenarios, and Interesting Edge Cases
Chapter 11: Monitoring
Chapter 12: Tuning, Debugging, and Troubleshooting
Chapter 13: Data Analysis
Chapter 14: Machine Learning Integration
Chapter 15: Architecting Data Lakes for Real-World Scenarios and Edge Cases
Other Books You May Enjoy

Orchestrating your pipelines with workflow tools

After selecting the data processing services for your data, you must build data processing pipelines with those services. For example, you can build a pipeline similar to the one shown in the following diagram. In this pipeline, four Glue Spark jobs extract data from four databases, and each job writes its output to S3. Once the data is stored in S3, a final Glue Spark job processes the four tables' data and generates an analytic report:

Figure 10.4 – A pipeline that extracts data from four databases, stores it in S3, and generates an analytic report with an aggregation job
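As a rough illustration of one of the extraction jobs, here is a minimal Glue Spark (PySpark) sketch that reads a table registered in the Data Catalog for one of the source databases and writes it to S3 as Parquet. The catalog database, table, and S3 path used here ("sales_db", "orders", "s3://example-data-lake/raw/orders/") are hypothetical placeholders, not names from this book:

    import sys
    from awsglue.utils import getResolvedOptions
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read one source table that a crawler registered in the Data Catalog
    # (the database and table names are hypothetical)
    dyf = glue_context.create_dynamic_frame.from_catalog(
        database="sales_db",
        table_name="orders",
    )

    # Write the extracted data to S3 as Parquet (hypothetical bucket and prefix)
    glue_context.write_dynamic_frame.from_options(
        frame=dyf,
        connection_type="s3",
        connection_options={"path": "s3://example-data-lake/raw/orders/"},
        format="parquet",
    )

    job.commit()

Each of the four extraction jobs would follow this shape, differing only in the source table it reads and the S3 path it writes to.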

So, after building a pipeline, how do you run each job? You can manually run the multiple jobs that extract data from the databases and, once they have finished, run the job that generates the report. However, this approach can cause problems. One such problem is not getting a result if you run the report-generating job before all of the extraction jobs...
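One way to remove this manual coordination is to group the jobs in a Glue workflow and chain them with triggers so that the report job only starts after every extraction job has succeeded. The following boto3 sketch assumes the five jobs already exist; the job and workflow names ("extract_db1" through "extract_db4", "generate_report", "daily_report_workflow") are hypothetical:

    import boto3

    glue = boto3.client("glue")

    # Hypothetical names for jobs that are assumed to already exist
    extract_jobs = ["extract_db1", "extract_db2", "extract_db3", "extract_db4"]
    report_job = "generate_report"

    glue.create_workflow(Name="daily_report_workflow")

    # Start trigger: kicks off all four extraction jobs when the workflow runs
    glue.create_trigger(
        Name="start_extracts",
        WorkflowName="daily_report_workflow",
        Type="ON_DEMAND",
        Actions=[{"JobName": j} for j in extract_jobs],
    )

    # Conditional trigger: starts the report job only after every
    # extraction job has reached the SUCCEEDED state
    glue.create_trigger(
        Name="run_report_after_extracts",
        WorkflowName="daily_report_workflow",
        Type="CONDITIONAL",
        StartOnCreation=True,
        Actions=[{"JobName": report_job}],
        Predicate={
            "Logical": "AND",
            "Conditions": [
                {"LogicalOperator": "EQUALS", "JobName": j, "State": "SUCCEEDED"}
                for j in extract_jobs
            ],
        },
    )

    # Start a workflow run on demand
    glue.start_workflow_run(Name="daily_report_workflow")

The AND predicate on the CONDITIONAL trigger is what holds the report job back until all four extraction jobs have succeeded, which avoids the problem described above.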
