Apache Spark is a general-purpose cluster computing system that is well suited to large-scale data processing. According to the project's own benchmarks, Spark can run workloads up to 100 times faster than Hadoop MapReduce when the data fits entirely in memory, and up to 10 times faster when running entirely from disk. At its core is a directed acyclic graph (DAG) execution engine, which schedules each job as an acyclic flow of data through a graph of transformation stages.
Apache Spark has first-class support for writing programs in Java, Scala, Python, and R, which lets it cater to a wide audience. It offers more than 80 high-level operators for building parallel applications without worrying about the underlying infrastructure.
Apache Spark also ships with higher-level libraries. Spark SQL provides support for Structured Query Language, letting programs embed ANSI SQL queries directly. Spark likewise supports computing over streaming data, which is very much needed in today's real-time data processing.