What this book covers
Chapter 1, Scala Essentials for Data Engineers, introduces Scala for data engineering and explains why it matters: its type safety, its adoption by major companies such as Netflix and Airbnb, its native integration with Spark, the software engineering mindset it fosters, and its support for both object-oriented and functional programming. The chapter covers functional programming, objects, classes, higher-order functions, polymorphism, variance, option types, collections, pattern matching, and implicits.
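As a small taste of what's ahead, here is a minimal sketch combining a few of these features; the Reading case class and its fields are hypothetical names used only for illustration:

```scala
// Hypothetical domain type for illustration
case class Reading(sensor: String, value: Option[Double])

val readings = List(Reading("a", Some(1.5)), Reading("b", None))

// map is a higher-order function; pattern matching unpacks the Option type
val described: List[String] = readings.map { r =>
  r.value match {
    case Some(v) => s"${r.sensor}: $v"
    case None    => s"${r.sensor}: missing"
  }
}
```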
Chapter 2, Environment Setup, presents two development environments for data engineering pipelines. The first, a cloud-based setup, offers portability and easy access but incurs ongoing usage costs. The second runs on your local machine, requiring an initial setup but avoiding cloud expenses.
Chapter 3, An Introduction to Apache Spark and Its APIs – DataFrame, Dataset, and Spark SQL, focuses on Apache Spark as a leading distributed data processing framework, emphasizing its ability to handle large data volumes across clusters of machines. Topics include working with Spark, building Spark applications with Scala, and understanding Spark’s Dataset and DataFrame APIs for effective data processing.
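For a flavor of the DataFrame API, here is a minimal, self-contained sketch; the application name, data, and column names are illustrative:

```scala
import org.apache.spark.sql.SparkSession

// Build a local session and run a simple DataFrame query
val spark = SparkSession.builder()
  .appName("dataframe-example")
  .master("local[*]")
  .getOrCreate()

import spark.implicits._

val people = Seq(("alice", 34), ("bob", 29)).toDF("name", "age")
people.filter($"age" > 30).show()
```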
Chapter 4, Working with Databases, dives into the use of relational databases within data pipelines, emphasizing efficient reading from and writing to databases. It covers Spark’s JDBC API and walks through building a straightforward database library: loading configurations, creating an interface, and executing multiple database operations.
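As an illustration of the kind of JDBC read the chapter builds on, here is a hedged sketch reusing the spark session from the previous example; the URL, table name, and credentials are placeholders:

```scala
// Spark's built-in JDBC data source; connection details are made up
val orders = spark.read
  .format("jdbc")
  .option("url", "jdbc:postgresql://localhost:5432/shop")
  .option("dbtable", "public.orders")
  .option("user", "etl_user")
  .option("password", sys.env.getOrElse("DB_PASSWORD", ""))
  .load()
```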
Chapter 5, Object Stores and Data Lakes, discusses the evolution from traditional databases to data lakes and lakehouses, driven by surging data volumes. The focus is on object stores, which are fundamental to both data lakes and lakehouses.
Chapter 6, Understanding Data Transformation, goes deeper into essential Spark skills for data engineers aiming to transform data for downstream use cases. It covers advanced Spark topics such as the distinction between transformations and actions, aggregation, grouping, joining data, window functions, and complex dataset types.
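The following sketch hints at two of these techniques, a grouped aggregation and a window function, over the hypothetical orders DataFrame from earlier; the column names are made up:

```scala
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

// Total spend per customer (grouped aggregation)
val totals = orders
  .groupBy(col("customer_id"))
  .agg(sum(col("amount")).as("total"))

// Largest order per customer (window function)
val byCustomer = Window.partitionBy(col("customer_id")).orderBy(col("amount").desc)
val largestPerCustomer = orders
  .withColumn("rank", row_number().over(byCustomer))
  .filter(col("rank") === 1)
```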
Chapter 7, Data Profiling and Data Quality, stresses the importance of data quality checks in preventing issues downstream. It introduces the Deequ library, an open source tool by Amazon, for defining checks, performing analysis, suggesting constraints, and storing metrics.
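To give a sense of Deequ’s declarative style, here is a minimal sketch of a verification run; the dataset and column names are assumed for illustration:

```scala
import com.amazon.deequ.VerificationSuite
import com.amazon.deequ.checks.{Check, CheckLevel}

// Declarative quality checks on a DataFrame
val result = VerificationSuite()
  .onData(orders)
  .addCheck(
    Check(CheckLevel.Error, "basic integrity")
      .isComplete("order_id")   // no nulls
      .isUnique("order_id")     // no duplicates
      .isNonNegative("amount")) // no negative values
  .run()
```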
Chapter 8, Test-Driven Development, Code Health, and Maintainability, discusses software development best practices as applied to data engineering: defect identification, code consistency, and security. It introduces Test-Driven Development (TDD), unit tests, integration tests, code coverage checks, static code analysis, and the importance of linting and code style.
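As a small taste of the unit testing covered there, here is a ScalaTest sketch; Amounts.parseAmount is a hypothetical function invented for this example:

```scala
import org.scalatest.funsuite.AnyFunSuite

// Hypothetical function under test
object Amounts {
  def parseAmount(s: String): Option[Double] =
    scala.util.Try(s.toDouble).toOption
}

class ParseAmountSpec extends AnyFunSuite {
  test("parses a valid amount") {
    assert(Amounts.parseAmount("12.50").contains(12.5))
  }
  test("returns None for malformed input") {
    assert(Amounts.parseAmount("n/a").isEmpty)
  }
}
```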
Chapter 9, CI/CD with GitHub, introduces Continuous Integration/Continuous Delivery (CI/CD) for Scala data engineering projects using GitHub. It explains how automating testing and deployment enables rapid iteration, reduces errors, and keeps quality consistent.
Chapter 10, Data Pipeline Orchestration, focuses on orchestrating data pipelines, emphasizing the need for seamless task coordination and failure notification. It introduces tools such as Apache Airflow, Argo, Databricks Workflows, and Azure Data Factory.
Chapter 11, Performance Tuning, emphasizes the critical role of the Spark UI in optimizing performance. It covers Spark UI basics, performance tuning techniques, computing resource optimization, data skew, indexing, and partitioning.
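By way of illustration, here are a few Spark settings and operations often examined during tuning; the configuration values and column name are illustrative, not recommendations:

```scala
import org.apache.spark.sql.functions.col

// Settings commonly inspected alongside the Spark UI
spark.conf.set("spark.sql.adaptive.enabled", "true") // adaptive query execution
spark.conf.set("spark.sql.shuffle.partitions", "200") // shuffle parallelism

// Repartitioning by a well-distributed column can mitigate skew
val balanced = orders.repartition(64, col("customer_id"))
```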
Chapter 12, Building Batch Pipelines Using Spark and Scala, combines all of the skills you have learned so far to construct a batch pipeline. It stresses the significance of batch processing, leveraging Apache Spark’s distributed processing and Scala’s versatility. Topics include a typical business use case, the medallion architecture, batch data ingestion, transformation, quality checks, loading into a serving layer, and pipeline orchestration.
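The following compressed sketch suggests the shape of such a medallion-style flow; the paths, columns, and logic are placeholders, not the book’s actual pipeline:

```scala
import org.apache.spark.sql.functions.col

// raw (bronze) -> cleaned (silver) -> aggregated (gold)
val bronze = spark.read.json("/lake/bronze/events")

val silver = bronze
  .filter(col("event_id").isNotNull)
  .dropDuplicates("event_id")

val gold = silver.groupBy(col("event_type")).count()

gold.write.mode("overwrite").parquet("/lake/gold/event_counts")
```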
Chapter 13, Building Streaming Pipelines Using Spark and Scala, focuses on constructing a streaming pipeline, emphasizing real-time data ingestion from Azure Event Hubs through its Apache Kafka-compatible endpoint for Spark integration. It employs Spark’s Structured Streaming and Scala for efficient data handling. Topics include understanding the use case, streaming data ingestion, transformation, loading into a serving layer, and orchestration, equipping you to develop and implement similar pipelines in your own organization.
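To preview the mechanics, here is a hedged sketch of a Structured Streaming read from an Event Hubs Kafka-compatible endpoint; the namespace, topic, and connection string are placeholders:

```scala
// Read from Event Hubs via Spark's Kafka source
val stream = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "my-namespace.servicebus.windows.net:9093")
  .option("subscribe", "events")
  .option("kafka.security.protocol", "SASL_SSL")
  .option("kafka.sasl.mechanism", "PLAIN")
  .option("kafka.sasl.jaas.config",
    """org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="<connection-string>";""")
  .load()

// Echo the decoded message payloads to the console
stream.selectExpr("CAST(value AS STRING)")
  .writeStream
  .format("console")
  .outputMode("append")
  .start()
```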