If you already use Spark for batch processing, Spark Structured Streaming is worth trying, because its API closely mirrors the batch DataFrame API.
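To make that similarity concrete, here is a minimal sketch in Scala. The input path, schema, and aggregation are invented for illustration; the point is that the only real difference between the two pipelines is `read` versus `readStream` at the source and `write` versus `writeStream` at the sink.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._

val spark = SparkSession.builder().appName("batch-vs-stream").getOrCreate()

// Hypothetical schema and input directory, purely for illustration.
val schema = new StructType()
  .add("user", StringType)
  .add("amount", DoubleType)

// Batch: read a static directory of JSON files.
val batchDf = spark.read.schema(schema).json("/data/orders")

// Streaming: same source format, same schema, only readStream differs.
val streamDf = spark.readStream.schema(schema).json("/data/orders")

// The transformation logic is identical in both cases.
val batchAgg  = batchDf.groupBy("user").sum("amount")
val streamAgg = streamDf.groupBy("user").sum("amount")

// Only the sink call changes for the streaming pipeline.
streamAgg.writeStream
  .outputMode("complete")
  .format("console")
  .start()
  .awaitTermination()
```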
However, if we compare Spark with Kafka Streams for stream processing, we must keep in mind that Spark Structured Streaming is designed for throughput, not latency: it executes work in micro-batches, so building genuinely low-latency streams becomes very difficult.
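As a rough illustration of that micro-batch model, the sketch below uses Spark's built-in rate source as a stand-in for a real stream; the rates and trigger interval are arbitrary. Records are buffered for the whole trigger interval and then processed as one small batch job, so per-record latency can never drop below the batch cycle.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.Trigger

val spark = SparkSession.builder().appName("micro-batch-demo").getOrCreate()

// The rate source simply generates rows; it stands in for a real stream here.
val events = spark.readStream
  .format("rate")
  .option("rowsPerSecond", 100)
  .load()

// Default execution is micro-batching: incoming records accumulate for the
// trigger interval and are then processed together as a small batch.
events.writeStream
  .format("console")
  .trigger(Trigger.ProcessingTime("1 second"))
  .start()
  .awaitTermination()
```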
Spark's Kafka connector has also been a constant source of friction. For example, we often end up pinning older versions of both, because each new release introduces breaking changes on one side or the other.
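A typical symptom shows up in the build file itself. In this sketch of an sbt configuration (the version number is only an example), the connector artifact has to be pinned to exactly the Spark version running on the cluster, and the Scala binary version has to line up as well:

```scala
// build.sbt: the Kafka connector is versioned in lock-step with Spark itself,
// so upgrading either Spark or the connector forces upgrading the other.
val sparkVersion = "3.5.1" // assumed; must match the Spark version on the cluster

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-sql"            % sparkVersion % Provided,
  "org.apache.spark" %% "spark-sql-kafka-0-10" % sparkVersion
)
```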
Spark's deployment model is also far more complicated than that of Kafka Streams: a Spark job needs a cluster manager and executors, while a Kafka Streams application is just a library embedded in a regular JVM process. Although Spark, Flink, and Beam can handle much more complex workloads than Kafka Streams, Kafka Streams has always been the easiest to learn and put into production, as the sketch below suggests.
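Here is a minimal Kafka Streams sketch in Scala using the Java DSL; the topic names, application id, and broker address are assumptions. The point is the deployment model: this is an ordinary JVM application that you package and run like any other process, with no cluster manager involved.

```scala
import java.util.Properties
import org.apache.kafka.common.serialization.Serdes
import org.apache.kafka.streams.kstream.{Consumed, Produced, ValueMapper}
import org.apache.kafka.streams.{KafkaStreams, StreamsBuilder, StreamsConfig}

object OrderLengthApp extends App {
  // A Kafka Streams application is a plain JVM process: no driver/executor
  // topology, just this main class talking to the Kafka brokers.
  val props = new Properties()
  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "order-length-app")  // hypothetical app id
  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") // assumed broker address

  val builder = new StreamsBuilder()
  builder
    .stream[String, String]("orders", Consumed.`with`(Serdes.String(), Serdes.String())) // hypothetical topic
    .mapValues(new ValueMapper[String, String] {
      override def apply(value: String): String = value.length.toString
    })
    .to("order-lengths", Produced.`with`(Serdes.String(), Serdes.String()))              // hypothetical topic

  val streams = new KafkaStreams(builder.build(), props)
  streams.start()
  sys.addShutdownHook(streams.close())
}
```

Scaling out is just a matter of starting more instances of the same process with the same application id; Kafka rebalances the topic partitions across them.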