Understanding offsets and checkpoints
In this recipe, you will learn how a Spark Streaming query recovers from a failure or an unexpected server crash using checkpointing, whereby Spark stores the progress and state of the query as it executes. Among the information checkpointing stores is the range of offsets processed in each trigger (you can refer to the Understanding trigger options recipe to learn more about triggers).
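To make this concrete, here is a minimal sketch (not the recipe's code) of where a query's checkpoint location is set on a streaming write; streamingDF is assumed to be an already-defined streaming DataFrame, and the mount paths are illustrative:

# A minimal sketch: streamingDF is assumed to be an existing streaming
# DataFrame; the checkpoint and output paths are illustrative.
query = (streamingDF.writeStream
    .format("delta")
    .outputMode("append")
    .option("checkpointLocation", "/mnt/checkpoints/offsets-demo")  # offset ranges and query state are persisted here
    .start("/mnt/delta/offsets-demo"))

If the query is stopped or the cluster crashes, restarting it with the same checkpointLocation makes Spark resume from the last committed offset range rather than reprocessing the stream from the beginning.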
Getting ready
We will be using Event Hubs for Kafka as the source for streaming data.
You can use the Python script available at https://github.com/PacktPublishing/Azure-Databricks-Cookbook/blob/main/Chapter04/PythonCode/KafkaEventHub_Windows.py, which pushes data to Event Hubs for Kafka as the streaming data producer. Change the topic name in the Python script to kafkaenabledhub2.
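If you want to see what the relevant part of such a producer looks like, the following is a hypothetical sketch only; the variable names, the kafka-python library, and the placeholder values are assumptions, not the actual contents of KafkaEventHub_Windows.py:

import json
import time
from kafka import KafkaProducer  # kafka-python; the actual script's library may differ

topic = "kafkaenabledhub2"  # the topic name this recipe asks you to set

# Event Hubs for Kafka authenticates over SASL_SSL with the literal username
# "$ConnectionString" and the namespace connection string as the password.
producer = KafkaProducer(
    bootstrap_servers="<your-namespace>.servicebus.windows.net:9093",
    security_protocol="SASL_SSL",
    sasl_mechanism="PLAIN",
    sasl_plain_username="$ConnectionString",
    sasl_plain_password="<your-event-hubs-connection-string>",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

producer.send(topic, {"id": 1, "eventTime": time.time()})
producer.flush()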
You can refer to the Reading data from Kafka-enabled Event Hubs recipe to learn how to get the bootstrap server details and the rest of the Kafka configuration.
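For orientation, a minimal read sketch follows; the namespace and connection-string placeholders are assumptions, and the kafkashaded prefix in the JAAS config applies when running on Databricks, which ships a shaded Kafka client:

# A minimal sketch of reading from a Kafka-enabled Event Hub; replace the
# placeholder values with your own namespace and connection string.
EH_NAMESPACE = "<your-namespace>"
EH_CONN_STR = "<your-event-hubs-connection-string>"

df = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", f"{EH_NAMESPACE}.servicebus.windows.net:9093")
    .option("kafka.security.protocol", "SASL_SSL")
    .option("kafka.sasl.mechanism", "PLAIN")
    .option("kafka.sasl.jaas.config",
            'kafkashaded.org.apache.kafka.common.security.plain.PlainLoginModule '
            f'required username="$ConnectionString" password="{EH_CONN_STR}";')
    .option("subscribe", "kafkaenabledhub2")  # the topic set in the producer script
    .option("startingOffsets", "earliest")
    .load())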