Summary
In this chapter, we learned about batch and stream processing. We started with the differences between the two processing paradigms and then progressed to mounting Azure storage on Databricks. This was followed by a deep dive into batch processing and Spark transformations. We also looked at a real-world example of a batch ETL process, where we read data in Parquet format, transformed it, and wrote it back to Delta Lake.
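As a quick refresher, the following is a minimal PySpark sketch of that batch ETL pattern. The paths and column names (input_path, delta_path, status, amount) are illustrative assumptions, not values from the chapter, and the spark session is the one Databricks notebooks provide by default.

```python
from pyspark.sql import functions as F

input_path = "/mnt/raw/sales"            # hypothetical mounted storage path
delta_path = "/mnt/curated/sales_delta"  # hypothetical Delta Lake destination

# Read the raw data in Parquet format
df = spark.read.parquet(input_path)

# Apply a simple transformation: filter rows and derive a column
transformed = (
    df.filter(F.col("status") == "COMPLETED")
      .withColumn("amount_rounded", F.round(F.col("amount"), 2))
)

# Write the result back as a Delta Lake table
transformed.write.format("delta").mode("overwrite").save(delta_path)
```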
Last but not least, we also learned about Spark Structured Streaming, with an example. Spark Structured Streaming is ideal for reading and writing data in real time, which many downstream applications, such as real-time dashboards, depend on.
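For reference, here is a minimal Structured Streaming sketch, assuming a Delta table as the streaming source and the console sink for inspection; the source and checkpoint paths are hypothetical.

```python
# Read the Delta table as a stream; new data appended to the table
# arrives as micro-batches
stream_df = (
    spark.readStream
         .format("delta")
         .load("/mnt/curated/sales_delta")  # hypothetical streaming source
)

# Write each micro-batch to the console; the checkpoint location lets the
# stream recover its progress after a restart
query = (
    stream_df.writeStream
             .format("console")
             .option("checkpointLocation", "/mnt/checkpoints/sales")  # hypothetical
             .outputMode("append")
             .start()
)
# query.awaitTermination()  # block until the stream is stopped
```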
In the next chapter, we will learn about machine learning and graph processing in Databricks. We will also go through plenty of examples to aid the learning process.