Applying window aggregations to streaming data with Apache Spark Structured Streaming
In this recipe, we will learn how to configure window aggregations on streaming queries in Apache Spark. Window aggregations are a common operation in stream processing, where we want to compute some aggregate function (such as a count, sum, or average) over a sliding window of time or rows. For example, we might want to know the number of orders per minute, the average revenue per hour, or the top 10 products per day.
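To make the idea concrete before we set anything up, here is a minimal sketch of a windowed count in PySpark. It uses the built-in rate source (which emits one row per second with a timestamp column) rather than Kafka, and the window and watermark durations are illustrative choices, not values from this recipe:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import window, col

spark = SparkSession.builder.appName("window-agg-sketch").getOrCreate()

# Rate source: emits one row per second with `timestamp` and `value` columns
events = (spark.readStream
          .format("rate")
          .option("rowsPerSecond", 1)
          .load())

# Count rows per 1-minute tumbling window of event time;
# the 2-minute watermark bounds how late data may arrive
counts = (events
          .withWatermark("timestamp", "2 minutes")
          .groupBy(window(col("timestamp"), "1 minute"))
          .count())

# Print each updated window count to the console
query = (counts.writeStream
         .outputMode("update")
         .format("console")
         .option("truncate", "false")
         .start())
```

We will build the same pattern on top of a Kafka source in the steps that follow.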
Getting ready
Before we start, we need to make sure that we have a Kafka cluster running and a topic that produces some streaming data. For simplicity, we will use a single-node Kafka cluster and a topic named events. Open the 4.0 events-gen-kafka.ipynb notebook and execute the cell. This notebook produces an event record every second and puts it on a Kafka topic called events.
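If you prefer to see what such a generator looks like, the following is a rough sketch approximating what the notebook does, not its actual code. It assumes a broker on localhost:9092 and the kafka-python package; the JSON field names are illustrative:

```python
import json
import time
import uuid
from datetime import datetime, timezone

from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

while True:
    # Hypothetical event payload; the real notebook's schema may differ
    event = {
        "event_id": str(uuid.uuid4()),
        "event_time": datetime.now(timezone.utc).isoformat(),
    }
    producer.send("events", value=event)  # topic name from this recipe
    time.sleep(1)  # one record per second, as the notebook does
```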
Make sure you have run this notebook and that it is producing records as shown here: