A huge number of data processing tools are available in the market. Most of them are open source, and a few are commercial. The question is: how many processing tools or engines do we really need? Can't we have a single processing framework that fulfills the requirements of every use case, each with its own processing pattern? Apache Spark was built to solve this problem: it offers a unified system architecture in which use cases ranging from batch and near-real-time processing to machine learning can all be addressed through the rich Spark API.
However, Apache Spark was not well suited to real-time use cases that require true event-by-event processing. Apache Flink introduced several new design models that address the same problems Spark was trying to solve, while also providing this real-time, event-at-a-time processing capability. ...