Summary
As our Confluent Kafka chapter draws to a close, let's reflect on what we have covered. We reviewed the fundamentals of Kafka's architecture and how to set up Confluent Kafka. We then looked at writing producers and consumers, and at working with Schema Registry and Kafka Connect. Finally, we explored integrating Kafka with Spark and Delta Lake. Kafka is an essential component of any streaming data platform, and working with streaming data has become both an in-demand skill and an important technique. As we advance, we will delve deeper into machine learning operations (MLOps) and several other AI technologies.