Summary
In this chapter, we learned about streaming data and how to handle incoming data as soon as it is created. Data is ingested using the Pub/Sub publisher client. In practice, you can apply this approach by asking the application developers to send messages to Pub/Sub as the data source; a second option is to use a change data capture (CDC) tool. On GCP, you can use Datastream, the Google-provided CDC service. CDC tools attach to a backend database such as Cloud SQL and publish data changes such as insert, update, and delete operations.
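As a reminder of the first approach, here is a minimal sketch of publishing a message with the Pub/Sub publisher client. The project ID, topic name, and message payload are illustrative assumptions, not values from the chapter.

```python
# Minimal Pub/Sub publish sketch; "my-project" and "my-topic" are hypothetical names.
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "my-topic")

# Pub/Sub messages are sent as bytes; here we publish a small JSON payload.
data = b'{"event": "order_created", "order_id": 123}'
future = publisher.publish(topic_path, data)

# result() blocks until the message is published and returns the message ID.
print(f"Published message ID: {future.result()}")
```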
The second part of streaming is processing the data. In this chapter, we learned how to use Dataflow to handle continuously incoming data from Pub/Sub, aggregate it on the fly, and store the results in BigQuery tables. Keep in mind that you can also process data from Pub/Sub with Dataflow in batch mode.
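To recap the shape of such a pipeline, the following is a minimal Apache Beam sketch that reads from a Pub/Sub subscription, counts events per fixed window, and writes the counts to BigQuery. The subscription path, table name, schema, and parsing logic are assumptions for illustration, not the chapter's exact pipeline.

```python
# Streaming Beam pipeline sketch: Pub/Sub -> windowed count -> BigQuery.
# All resource names ("my-project", "my-subscription", "my_dataset.event_counts")
# are hypothetical placeholders.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms.window import FixedWindows

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
            subscription="projects/my-project/subscriptions/my-subscription")
        # Pub/Sub delivers bytes; decode and parse each message as JSON.
        | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
        | "KeyByEvent" >> beam.Map(lambda row: (row["event"], 1))
        # Aggregate on the fly in 60-second fixed windows.
        | "FixedWindow" >> beam.WindowInto(FixedWindows(60))
        | "CountPerEvent" >> beam.CombinePerKey(sum)
        | "ToTableRow" >> beam.Map(
            lambda kv: {"event": kv[0], "event_count": kv[1]})
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "my-project:my_dataset.event_counts",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER)
    )
```

Running the same pipeline with a bounded source (or with `streaming=False` against exported files) is how Dataflow handles the batch case mentioned above.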
With experience creating streaming data pipelines on GCP, you will realize how easy it is to start building one from an infrastructure...