Essential PySpark for Scalable Data Analytics
A beginner's guide to harnessing the power and ease of PySpark 3

Product type: Paperback
Published: Oct 2021
Publisher: Packt
ISBN-13: 9781800568877
Length: 322 pages
Edition: 1st Edition
Author: Sreeram Nudurupati
Table of Contents

Preface
Section 1: Data Engineering
Chapter 1: Distributed Computing Primer
Chapter 2: Data Ingestion
Chapter 3: Data Cleansing and Integration
Chapter 4: Real-Time Data Analytics
Section 2: Data Science
Chapter 5: Scalable Machine Learning with PySpark
Chapter 6: Feature Engineering – Extraction, Transformation, and Selection
Chapter 7: Supervised Machine Learning
Chapter 8: Unsupervised Machine Learning
Chapter 9: Machine Learning Life Cycle Management
Chapter 10: Scaling Out Single-Node Machine Learning Using PySpark
Section 3: Data Analysis
Chapter 11: Data Visualization with PySpark
Chapter 12: Spark SQL Primer
Chapter 13: Integrating External Tools with Spark SQL
Chapter 14: The Data Lakehouse
Other Books You May Enjoy

Summary

In this chapter, you learned about two prominent data processing methodologies, ETL and ELT, and saw how ELT unlocks more analytics use cases than is possible with ETL. Along the way, you understood the scalable storage and compute requirements of ELT and how modern cloud technologies help enable this way of data processing. You then learned about the shortcomings of cloud-based data lakes as analytics data stores, such as their lack of atomicity and durability guarantees, and were introduced to Delta Lake as a modern data storage layer designed to overcome them. You also learned about data integration and data cleansing techniques, which consolidate raw transactional data from disparate sources into clean, pristine data that is ready to be presented to end users to generate meaningful insights, and how to implement each of the techniques used...
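The following sketch is not taken from the chapter itself; it is a minimal PySpark illustration of the ELT-style flow the summary describes: raw data is loaded into the lake first, then cleansed and integrated, and finally persisted to Delta Lake so downstream analytics get transactional guarantees. The file paths, column names, and SparkSession configuration are illustrative assumptions, and it presumes the delta-spark package is available in your environment.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Assumption: the delta-spark package is installed; these configs enable Delta Lake.
spark = (
    SparkSession.builder
    .appName("elt-cleansing-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Extract + Load: raw transactional data lands in the data lake as-is.
raw_orders = spark.read.json("/data/raw/orders/")            # hypothetical path
raw_customers = spark.read.csv("/data/raw/customers/",       # hypothetical path
                               header=True, inferSchema=True)

# Transform (after loading, in ELT fashion): cleanse the raw records.
clean_orders = (
    raw_orders
    .dropDuplicates(["order_id"])                  # remove duplicate events
    .na.drop(subset=["order_id", "customer_id"])   # discard incomplete records
    .withColumn("order_ts", F.to_timestamp("order_ts"))
)

# Integrate: consolidate data from disparate sources into one view.
integrated = clean_orders.join(raw_customers, on="customer_id", how="left")

# Persist to Delta Lake, which layers ACID transactions on top of the data lake.
(integrated.write
    .format("delta")
    .mode("overwrite")
    .save("/data/silver/orders_integrated"))       # hypothetical Delta table path
```

The same pattern extends naturally to incremental loads, where a Delta MERGE would replace the overwrite shown here.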
