Running our pipeline against streaming data
Let's discuss how we can change this code to enable it to run against a streaming data source. First, we have to define what we mean by a data stream: a continuous flow of data without any prior information about the cardinality of the dataset. The dataset can be either finite or infinite, but we do not know which in advance. Because of this property, streaming data is often called unbounded data: unlike with bounded data, no prior bound can be placed on the cardinality of the dataset.
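The practical consequence of unboundedness can be sketched in plain Java (this is illustrative only, not Beam code): a bounded source knows its cardinality up front, while a consumer of an unbounded source can never know whether more elements will arrive, so it must decide *when* to emit a (possibly partial) result:

```java
import java.util.Iterator;
import java.util.List;

public class BoundedVsUnbounded {

  // A bounded source: the cardinality is known in advance.
  static int countBounded(List<String> data) {
    return data.size();
  }

  // An unbounded source only tells us whether an element is available *now*;
  // in a real stream, hasNext() might block forever. Any aggregation over it
  // therefore needs a condition that triggers emitting a partial result.
  static int countUntil(Iterator<String> stream, int emitAfter) {
    int count = 0;
    while (stream.hasNext() && count < emitAfter) {
      stream.next();
      count++;
    }
    return count; // a partial result, emitted after 'emitAfter' elements
  }

  public static void main(String[] args) {
    List<String> lines = List.of("a", "b", "c", "d");
    System.out.println(countBounded(lines));             // 4
    System.out.println(countUntil(lines.iterator(), 3)); // 3 -- partial
  }
}
```

The class and method names here are hypothetical, chosen only for this illustration.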
The absence of bounds is one property that makes the processing of data streams trickier (the other is that bounded data sets can be viewed as static, while unbounded data is, by definition, changing over time). We'll investigate these properties later in this chapter, and we'll see how we can leverage them to define a Beam unified model for data processing.
For now, let's imagine our pipeline is given a source, which gives one line of text at a time but does not give any signal of how many more elements there are going to be. How do we need to change our data processing logic to extract information from such a source?
We'll update our pipeline to use a streaming source. To do this, we need to change the way we created our input `PCollection` of lines, from a `List` via the `Create` PTransform, to a streaming input. Beam has a utility for this called `TestStream`, which works as follows:

- Create a `TestStream` (a utility that emulates an unbounded data source). The `TestStream` needs a `Coder` (the details of which will be skipped for now and discussed in Chapter 2, Implementing, Testing, and Deploying Basic Pipelines):

```java
TestStream.Builder<String> streamBuilder =
    TestStream.create(StringUtf8Coder.of());
```
- Next, we fill the `TestStream` with data. Note that we need a timestamp for each record so that the `TestStream` can emulate a real stream, which should have timestamps assigned for every input element:

```java
Instant now = Instant.now();

// add all lines with timestamps to the TestStream
List<TimestampedValue<String>> timestamped =
    IntStream.range(0, lines.size())
        .mapToObj(i -> TimestampedValue.of(lines.get(i), now.plus(i)))
        .collect(Collectors.toList());
for (TimestampedValue<String> value : timestamped) {
  streamBuilder = streamBuilder.addElements(value);
}
```
- Then, we apply this to the pipeline:

```java
// create the unbounded PCollection from the TestStream
PCollection<String> input =
    pipeline.apply(streamBuilder.advanceWatermarkToInfinity());
```
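The call to `advanceWatermarkToInfinity` tells the `TestStream` that, once all elements have been emitted, no further data will ever arrive. Watermarks are covered in detail later in this book; as a rough plain-Java illustration (not Beam's implementation), a watermark is a promise that no element with a timestamp earlier than the watermark will arrive anymore, which lets a computation treat windows ending at or before it as complete:

```java
import java.time.Instant;

public class WatermarkSketch {

  // A watermark promises that no element with a timestamp before it
  // will arrive anymore, so a window whose end does not exceed the
  // watermark can be treated as complete and its result emitted.
  static boolean isWindowComplete(Instant windowEnd, Instant watermark) {
    return !watermark.isBefore(windowEnd);
  }

  public static void main(String[] args) {
    Instant watermark = Instant.parse("2021-01-01T14:00:00Z");
    // the window ending at 14:00 is complete; the one ending at 15:00 is not
    System.out.println(isWindowComplete(
        Instant.parse("2021-01-01T14:00:00Z"), watermark)); // true
    System.out.println(isWindowComplete(
        Instant.parse("2021-01-01T15:00:00Z"), watermark)); // false
  }
}
```

The class and method names are hypothetical; advancing the watermark to infinity simply makes every window complete by this criterion.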
We encourage you to investigate the complete source code of the `com.packtpub.beam.chapter1.MissingWindowPipeline` class to make sure everything in the preceding example is properly understood.

- Next, we run the class with the following command:

```
chapter1$ ../mvnw exec:java \
    -Dexec.mainClass=\
    com.packtpub.beam.chapter1.MissingWindowPipeline
```
This will result in the following exception:

```
java.lang.IllegalStateException: GroupByKey cannot be applied to
non-bounded PCollection in the GlobalWindow without a trigger.
Use a Window.into or Window.triggering transform prior to GroupByKey.
```
This is because we need a way to identify the (at least partial) completeness of the data. That is to say, the data needs (explicit or implicit) markers that define a condition that, when met, triggers the completion of a computation and outputs data from a `PTransform`. The computation can then continue from the values already computed or be reset to its initial state.

There are multiple ways to define such a condition. One of them is to define time-constrained intervals called windows. A time-constrained window might be defined as data arriving within a specific time interval – for example, between 1 P.M. and 2 P.M.
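To make the idea of a time-constrained window concrete, here is a plain-Java sketch (illustrative only, not Beam's implementation) that buckets timestamped events into fixed one-hour windows and counts the elements in each:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class FixedWindowSketch {

  // Assign a timestamp to its window by rounding down to the window start.
  static Instant windowStart(Instant ts, Duration size) {
    long millis = ts.toEpochMilli();
    long sizeMillis = size.toMillis();
    return Instant.ofEpochMilli(millis - (millis % sizeMillis));
  }

  public static void main(String[] args) {
    Duration oneHour = Duration.ofHours(1);
    List<Instant> events = List.of(
        Instant.parse("2021-01-01T13:10:00Z"),
        Instant.parse("2021-01-01T13:45:00Z"),
        Instant.parse("2021-01-01T14:05:00Z"));

    // Count elements per window; once a window's interval has passed,
    // its count can be emitted as a complete result.
    Map<Instant, Integer> counts = new TreeMap<>();
    for (Instant e : events) {
      counts.merge(windowStart(e, oneHour), 1, Integer::sum);
    }
    System.out.println(counts);
    // {2021-01-01T13:00:00Z=2, 2021-01-01T14:00:00Z=1}
  }
}
```

The class and method names are hypothetical; the point is only that a window turns "all the data" into "the data between two instants", which is a condition that can actually be met.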
- As the exception suggests, we need to define a window to be applied to the input data stream in order to complete the definition of the pipeline. The definition of a `Window` is somewhat complex, and we will dive into all its parameters later in this book. For now, we'll define the following `Window`:

```java
PCollection<String> windowed = words.apply(
    Window.<String>into(new GlobalWindows())
        .discardingFiredPanes()
        .triggering(AfterWatermark.pastEndOfWindow()));
```
This code applies the `Window.into` PTransform using `GlobalWindows`, which is a specific `Window` that contains the whole dataset (meaning it can be viewed as a `Window` containing the whole history and future of the universe). The complete code can be viewed in the `com.packtpub.beam.chapter1.FirstStreamingPipeline` class.

- As usual, we can run this code using the following command:
```
chapter1$ ../mvnw exec:java \
    -Dexec.mainClass=\
    com.packtpub.beam.chapter1.FirstStreamingPipeline
```
This results in the same outcome as in the first example, and with the same caveat: the order of the output is not defined and will vary over multiple runs of the same code against the same data. The values themselves, though, are fully deterministic.
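Why the values are deterministic even though the output order is not can be illustrated with a plain-Java sketch (not Beam code): a per-key count produces the same results regardless of the order in which the elements arrive, because counting is insensitive to ordering:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class DeterministicValues {

  // Count occurrences per word; the result depends only on *which*
  // elements arrived, not on the order they arrived in.
  static Map<String, Long> wordCount(List<String> words) {
    Map<String, Long> counts = new TreeMap<>();
    for (String w : words) {
      counts.merge(w, 1L, Long::sum);
    }
    return counts;
  }

  public static void main(String[] args) {
    List<String> words = new ArrayList<>(List.of("a", "b", "a", "c", "b", "a"));
    Map<String, Long> first = wordCount(words);
    Collections.shuffle(words); // a different arrival order...
    System.out.println(first.equals(wordCount(words))); // ...same values: true
  }
}
```

The class name is hypothetical; only counting-like aggregations are order-insensitive in general, which is exactly the property the word-count pipeline relies on.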
Once we have successfully run our first streaming pipeline, let's dive into what exactly this streaming data is, and what to expect when we try to process it!