Building Streaming Pipelines Using Spark and Scala
The final chapter of this book brings together everything we’ve learned so far, this time to build a streaming pipeline. You can think of streaming as the continuous, or “real-time,” ingestion of data into your analytics system. There are many ways to accomplish this, but it usually involves an event bus or message-queuing system. As a data engineer, you need to understand how to move data efficiently and reliably in real time. Once again we’ll leverage Spark, this time through its Structured Streaming capabilities, with Scala as our versatile and expressive programming language. To provide the event bus for our pipeline, we’ll bring Apache Kafka into the picture: we’ll use Azure Event Hubs as our streaming ingestion source, because it can expose a Kafka-compatible endpoint that Spark can consume through its open source Kafka connectors.
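To make the idea concrete, here is a minimal sketch (not the book’s code) of what reading from an Event Hubs Kafka-compatible endpoint with Spark Structured Streaming looks like. The namespace, topic, connection string, and checkpoint path are all placeholders you would substitute with your own values.

```scala
// Sketch: consuming an Azure Event Hubs Kafka endpoint with Structured Streaming.
// Requires the spark-sql-kafka connector on the classpath; all angle-bracketed
// values are placeholders for your own resources.
import org.apache.spark.sql.SparkSession

object StreamingSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("event-hubs-kafka-stream")
      .getOrCreate()

    // Event Hubs' Kafka endpoint listens on port 9093 over SASL_SSL;
    // the literal username "$ConnectionString" plus the Event Hubs
    // connection string act as the SASL PLAIN credentials.
    val df = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "<namespace>.servicebus.windows.net:9093")
      .option("kafka.security.protocol", "SASL_SSL")
      .option("kafka.sasl.mechanism", "PLAIN")
      .option("kafka.sasl.jaas.config",
        """org.apache.kafka.common.security.plain.PlainLoginModule required
          |username="$ConnectionString" password="<event-hubs-connection-string>";""".stripMargin)
      .option("subscribe", "<topic>")
      .load()

    // Kafka records arrive as binary key/value columns; cast them to strings.
    val messages = df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

    messages.writeStream
      .format("console")
      .option("checkpointLocation", "/tmp/checkpoints/demo")
      .start()
      .awaitTermination()
  }
}
```

Because Event Hubs speaks the Kafka protocol here, the pipeline could later be pointed at a real Kafka cluster by changing only the connection options.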
In this chapter, we’...