Apache Spark is a fast, in-memory data processing engine with elegant and expressive development APIs that let data workers efficiently execute streaming, machine learning, or SQL workloads requiring fast, interactive access to datasets. Apache Spark consists of Spark Core and a set of libraries. The core is the distributed execution engine, and the Java, Scala, and Python APIs offer a platform for distributed application development.
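To make the core API concrete, here is a minimal word-count sketch in PySpark; the input path data/input.txt is a hypothetical placeholder, and a local Spark installation is assumed.

```python
# A minimal sketch of a distributed computation on Spark Core (PySpark assumed).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("WordCount").getOrCreate()
sc = spark.sparkContext

# Read the file as an RDD of lines, split into words, and count each word.
counts = (
    sc.textFile("data/input.txt")        # hypothetical input path
      .flatMap(lambda line: line.split())
      .map(lambda word: (word, 1))
      .reduceByKey(lambda a, b: a + b)
)

# collect() is an action: it triggers distributed execution and brings
# the results back to the driver.
for word, n in counts.collect():
    print(word, n)

spark.stop()
```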
Additional libraries built on top of the core support workloads for streaming, SQL, graph processing, and machine learning. SparkML, for instance, is designed for data science, and its abstractions make data science workflows easier.
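As one illustration of that abstraction, the sketch below chains feature extraction and a classifier into a single spark.ml Pipeline; the tiny in-memory dataset is purely illustrative.

```python
# A minimal sketch of the spark.ml Pipeline abstraction (PySpark assumed).
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, HashingTF
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("PipelineSketch").getOrCreate()

# Illustrative toy training data: (text, label) pairs.
train = spark.createDataFrame(
    [("spark is fast", 1.0), ("slow batch job", 0.0)],
    ["text", "label"],
)

# Each stage transforms the DataFrame; the Pipeline chains them so the
# whole workflow fits and runs as a single estimator.
tokenizer = Tokenizer(inputCol="text", outputCol="words")
tf = HashingTF(inputCol="words", outputCol="features")
lr = LogisticRegression(maxIter=10)

model = Pipeline(stages=[tokenizer, tf, lr]).fit(train)
model.transform(train).select("text", "prediction").show()

spark.stop()
```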
To plan and carry out distributed computations, Spark uses the concept of a job, which is executed across the worker nodes in stages and tasks. Spark consists of a driver, which orchestrates the execution, and a set of executors that run the tasks on the worker nodes.
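The sketch below shows, under the assumption of a local PySpark session, how these pieces line up: transformations are lazy, an action submits a job from the driver, and the shuffle required by reduceByKey splits the job into two stages, each running as one task per partition.

```python
# A sketch of how Spark breaks work into jobs, stages, and tasks.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("JobStagesTasks").getOrCreate()
sc = spark.sparkContext

rdd = sc.parallelize(range(100), numSlices=4)  # 4 partitions -> 4 tasks per stage

# map/filter are narrow transformations: they stay within one stage.
pairs = rdd.map(lambda x: (x % 10, x)).filter(lambda kv: kv[1] > 5)

# reduceByKey requires a shuffle, so the scheduler places a stage
# boundary before it.
sums = pairs.reduceByKey(lambda a, b: a + b)

# count() is the action: the driver submits one job, the scheduler splits
# it into two stages at the shuffle, and executors run the tasks.
print(sums.count())

spark.stop()
```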