What this book covers
Chapter 1, Introduction to Data Processing with Apache Beam, provides a description of batch and streaming processing semantics and key insights on how to unify them.
Chapter 2, Implementing, Testing, and Deploying Basic Pipelines, provides an example-driven approach to understanding how to implement and verify some of the most common data processing pipelines.
Chapter 3, Implementing Pipelines Using Stateful Processing, explains how to implement more sophisticated data processing that requires user-defined state.
Chapter 4, Structuring Code for Reusability, details best practices for structuring code so that it can be reused in multiple data processing pipelines and even for building Domain-Specific Languages (DSLs).
Chapter 5, Using SQL for Pipeline Implementation, covers how to make life even easier with a well-known data query language – Structured Query Language (SQL).
Chapter 6, Using Your Preferred Language with Portability, explains how Apache Beam's portability layer lets runners execute pipelines written in different languages and how to use other SDKs, such as the Apache Beam Python SDK.
Chapter 7, Extending Apache Beam's I/O Connectors, provides a detailed description of how Apache Beam I/O connectors are written using splittable DoFns and how these can be used for non-I/O applications.
Chapter 8, Understanding How Runners Execute Pipelines, performs a deep dive into the anatomy of an Apache Beam runner.