Introduction to Dataflow
Dataflow is a data processing engine that can handle both batch and streaming data pipelines. If we want to compare it with technologies that we have already learned about in this book, Dataflow is comparable to Spark in terms of positioning: both technologies can process big data, both process data in parallel, and both can handle almost any kind of data or file.
But the underlying technologies are different. From the user's perspective, the main difference is the serverless nature of Dataflow: we don't need to set up any cluster. We just submit a job to Dataflow, and the data pipeline runs automatically on the cloud. We write the data pipeline itself using Apache Beam.
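To make this concrete, here is a minimal word-count pipeline sketched with the Apache Beam Python SDK. This is an illustrative sketch, not an example from this book; the project ID, region, bucket, and file paths are placeholder values you would replace with your own:

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Pipeline options for running on Dataflow. All values below are
# placeholders, not real resources.
options = PipelineOptions(
    runner="DataflowRunner",             # the Dataflow service runs the job
    project="my-gcp-project",            # placeholder project ID
    region="us-central1",
    temp_location="gs://my-bucket/tmp",  # placeholder staging bucket
)

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "Read lines" >> beam.io.ReadFromText("gs://my-bucket/input/*.txt")
        | "Split words" >> beam.FlatMap(lambda line: line.split())
        | "Pair with 1" >> beam.Map(lambda word: (word, 1))
        | "Sum per word" >> beam.CombinePerKey(sum)
        | "Format" >> beam.Map(lambda kv: f"{kv[0]}: {kv[1]}")
        | "Write" >> beam.io.WriteToText("gs://my-bucket/output/counts")
    )

Notice that nothing in the code provisions machines; submitting the job is all that is required, and Dataflow allocates the workers. Changing the runner to DirectRunner lets you test the same pipeline locally before submitting it to Dataflow.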
If you have finished reading Chapter 5, Building a Data Lake Using Dataproc, you will know that Dataproc also offers serverless Spark. At the time of writing, that feature is relatively new compared to Dataflow. There are still...