Writing your first pipeline
Let's jump right into writing our first pipeline. The first part of this book will focus on Beam's Java SDK. We assume that you are familiar with programming in Java and with building a project using Apache Maven (or a similar tool). The following code can be found in the `com.packtpub.beam.chapter1.FirstPipeline` class in the `chapter1` module in the GitHub repository. We would like you to go through all of the code, but we will highlight the most important parts here:
- We need some (demo) input for our pipeline. We will read this input from the resource called `lorem.txt`. The code is standard Java, as follows:

  ```java
  ClassLoader loader = FirstPipeline.class.getClassLoader();
  String file = loader.getResource("lorem.txt").getFile();
  List<String> lines = Files.readAllLines(
      Paths.get(file), StandardCharsets.UTF_8);
  ```
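Note that `getResource(...).getFile()` only works while the resource is a plain file on disk; once the application is packaged into a JAR, the returned path can no longer be read by `Files.readAllLines`. A more robust sketch (our own helper, not part of the book's code) reads the resource as a stream:

```java
import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class ResourceLines {

  // Reads all lines of a classpath resource; unlike getResource().getFile(),
  // this also works when the resource is packaged inside a JAR.
  static List<String> readResourceLines(String name) throws IOException {
    InputStream in = ResourceLines.class.getClassLoader().getResourceAsStream(name);
    if (in == null) {
      throw new FileNotFoundException("Resource not found: " + name);
    }
    List<String> lines = new ArrayList<>();
    try (BufferedReader reader =
        new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8))) {
      String line;
      while ((line = reader.readLine()) != null) {
        lines.add(line);
      }
    }
    return lines;
  }
}
```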
- Next, we need to create a `Pipeline` object, which is a container for a Directed Acyclic Graph (DAG) that represents the data transformations needed to produce output from input data:

  ```java
  Pipeline pipeline = Pipeline.create();
  ```
Important note
There are multiple ways to create a pipeline, and this is the simplest. We will see different approaches to pipelines in Chapter 2, Implementing, Testing, and Deploying Basic Pipelines.
- After we create a pipeline, we can start filling it with data. In Beam, data is represented by a `PCollection` object. Each `PCollection` object (that is, parallel collection) can be imagined as a line (an edge) connecting two vertices (`PTransform`s, or parallel transforms) in the pipeline's DAG.
- Therefore, the following code creates the first node in the pipeline. The node is a transform that takes raw input from the list and creates a new `PCollection`:

  ```java
  PCollection<String> input = pipeline.apply(Create.of(lines));
  ```
Our DAG will then look like the following diagram:
- Each `PTransform` can have one main output and possibly multiple side output `PCollection`s. Each `PCollection` has to be consumed by another `PTransform`, or it might be excluded from the execution. As we can see, the main output `PCollection` of our `PTransform` (called `Create`) is not presently consumed by any `PTransform`. We connect a `PTransform` to a `PCollection` by applying the `PTransform` to the `PCollection`. We do that by using the following code:

  ```java
  PCollection<String> words = input.apply(Tokenize.of());
  ```
  This creates a new `PTransform` (`Tokenize`) and connects it to our input `PCollection`, as shown in the following figure:

  We'll skip the details of how the `Tokenize` `PTransform` is implemented for now (we will return to that in Chapter 5, Using SQL for Pipeline Implementation, which describes how to structure code in general). Currently, all we have to remember is that the `Tokenize` `PTransform` takes input lines of text and splits each line into words, which produces a new `PCollection` that contains all of the words from all the lines of the input `PCollection`.
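Outside of Beam, the effect of `Tokenize` can be sketched in plain Java (the exact word-splitting rules of the book's `Tokenize` are an assumption here):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class TokenizeSketch {

  // Splits each input line into words and flattens the result,
  // mirroring what the Tokenize transform does to a PCollection of lines.
  static List<String> tokenize(List<String> lines) {
    return lines.stream()
        .flatMap(line -> Arrays.stream(line.split("\\W+")))
        .filter(word -> !word.isEmpty())
        .collect(Collectors.toList());
  }
}
```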
- We finish the pipeline by adding two more `PTransform`s. One will produce the well-known word count example, so popular in every big data textbook. And the last one will simply print the output `PCollection` to standard output:

  ```java
  PCollection<KV<String, Long>> result = words.apply(Count.perElement());
  result.apply(PrintElements.of());
  ```
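Conceptually, `Count.perElement()` produces one `(element, count)` pair per distinct element. In plain Java (a conceptual analogy, not the book's code), the same result could be computed as:

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class CountSketch {

  // One (word, occurrences) pair per distinct word, analogous to
  // Count.perElement() turning a PCollection<String> into a
  // PCollection<KV<String, Long>>.
  static Map<String, Long> countPerElement(List<String> words) {
    return words.stream()
        .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
  }
}
```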
  Details of both the `Count` `PTransform` (which is Beam's built-in `PTransform`) and `PrintElements` (which is a user-defined `PTransform`) will be discussed later. For now, if we focus on the pipeline construction process, we can see that our pipeline looks as follows:
- After we define this pipeline, we should run it. This is done with the following line:

  ```java
  pipeline.run().waitUntilFinish();
  ```
  This causes the pipeline to be passed to a runner (configured in the pipeline; if omitted, it defaults to a runner available on the classpath). The standard default runner is the `DirectRunner`, which executes the pipeline in the local Java Virtual Machine (JVM) only. This runner is mostly suitable only for testing, as we will see in the next chapter.
- We can run this pipeline by executing the following command in the `chapter1` module of the code examples, which will yield the expected output on standard output:

  ```shell
  chapter1$ ../mvnw exec:java \
      -Dexec.mainClass=com.packtpub.beam.chapter1.FirstPipeline
  ```
Important note
The ordering of output is not defined and is likely to vary over multiple runs. This is expected: the pipeline underneath is executed in multiple threads.
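A practical consequence is that any check of such output (for example, in a test) should compare it order-insensitively. One simple sketch (our own helper, assuming string elements):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class OrderInsensitive {

  // Since the pipeline's output order is undefined, compare results
  // as sorted copies rather than as ordered lists.
  static boolean sameElements(List<String> a, List<String> b) {
    List<String> x = new ArrayList<>(a);
    List<String> y = new ArrayList<>(b);
    Collections.sort(x);
    Collections.sort(y);
    return x.equals(y);
  }
}
```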
- A very useful feature is that the application of a `PTransform` to a `PCollection` can be chained, so the preceding code can be simplified to the following:

  ```java
  ClassLoader loader = FirstPipeline.class.getClassLoader();
  String file = loader.getResource("lorem.txt").getFile();
  List<String> lines = Files.readAllLines(
      Paths.get(file), StandardCharsets.UTF_8);
  Pipeline pipeline = Pipeline.create();
  pipeline.apply(Create.of(lines))
      .apply(Tokenize.of())
      .apply(Count.perElement())
      .apply(PrintElements.of());
  pipeline.run().waitUntilFinish();
  ```
When used with care, this style greatly improves the readability of the code.
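As a rough analogy, the whole chained pipeline computes the same result as the following plain Java Streams code (in-memory only, with none of Beam's parallel execution; the sample lines stand in for the contents of `lorem.txt`):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class ChainedWordCount {

  // Plain-Java equivalent of the chained Beam pipeline:
  // read -> Tokenize -> Count.perElement -> print.
  static Map<String, Long> wordCount(List<String> lines) {
    return lines.stream()
        .flatMap(line -> Arrays.stream(line.split("\\W+")))   // Tokenize
        .filter(word -> !word.isEmpty())
        .collect(Collectors.groupingBy(                        // Count.perElement
            Function.identity(), Collectors.counting()));
  }

  public static void main(String[] args) {
    List<String> lines = List.of("lorem ipsum dolor", "ipsum lorem");
    // PrintElements analogy: write each (word, count) pair to standard output.
    wordCount(lines).forEach((word, count) -> System.out.println(word + ": " + count));
  }
}
```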
Now that we have written our first pipeline, let's see how to port it from a bounded data source to a streaming source!