An introduction to Apache Spark
Apache Spark is an open source cluster computing system with implicit data parallelism and fault tolerance. Spark was originally created at the AMPLab at UC Berkeley; its main goals are fast execution and in-memory processing. Spark lets you manipulate distributed datasets much as you would local collections. In this section, we will present the basic operations of the Spark programming model and its ecosystem.
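To make the "local collections" analogy concrete, here is a minimal sketch (the object name and sample word list are illustrative, not from the original text) that distributes a small Scala collection and transforms it with the same filter and map operations you would use locally:

```scala
import org.apache.spark.sql.SparkSession

object WordLengths {
  def main(args: Array[String]): Unit = {
    // Start a SparkSession running locally; "local[*]" uses all available cores.
    val spark = SparkSession.builder()
      .appName("intro-example")
      .master("local[*]")
      .getOrCreate()

    // Distribute a local Scala collection as an RDD.
    val words = spark.sparkContext
      .parallelize(Seq("spark", "is", "fast", "and", "fault", "tolerant"))

    // Transform the distributed dataset with familiar collection-style operations.
    val longWords = words.filter(_.length > 3).map(_.toUpperCase)
    println(longWords.collect().mkString(", "))

    spark.stop()
  }
}
```

The operations on the RDD look just like operations on a local `Seq`, but Spark executes them in parallel across the cluster and keeps intermediate results in memory.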
The Spark ecosystem
Spark comes with a set of high-level libraries for SQL querying, machine learning, graph processing, and streaming data. Together, these libraries provide an all-inclusive, ready-to-use environment. The following figure shows the complete Spark ecosystem:
Take a look at the following:
- Spark Core API:
The characteristics of the Spark Core API are as follows:
- It is the execution engine on top of which all the other functionality is built
- It provides...