Introducing RDDs
The RDD (Resilient Distributed Dataset) is the core data structure in Apache Spark. It is a distributed collection of objects that is split into partitions, so that each partition can be processed and computed on a different node of the cluster. This partitioning is what makes the RDD a core element of distributed data processing. Moreover, an RDD is resilient in the sense that it is fault-tolerant: Spark records the lineage of transformations used to build each RDD, so the framework can rebuild lost data in the case of a failure. When we run a computation, the master node distributes the RDD's partitions as tasks to executor processes on the worker nodes. If an executor process or worker node fails, the master node detects the failure and assigns the affected tasks to an executor on another node. Because the lineage is known, the new executor can recompute the lost partitions from their source data and continue the execution. Any data processed by the original executor before it failed is lost and is computed again by the new executor.
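To make this concrete, the following is a minimal sketch of creating and transforming an RDD, assuming a PySpark environment with a local Spark context (the application name "rdd-intro" and the partition count are illustrative choices, not part of the original text):

from pyspark import SparkContext

sc = SparkContext("local[*]", "rdd-intro")

# Create an RDD from a local collection; Spark splits it into partitions
# that can be processed in parallel on different executors.
numbers = sc.parallelize(range(10), numSlices=4)

# Each transformation is recorded in the RDD's lineage, so lost
# partitions can be recomputed if an executor fails.
squares = numbers.map(lambda x: x * x)

print(squares.getNumPartitions())  # 4
print(squares.collect())           # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]

sc.stop()

Here, parallelize builds the distributed collection and map defines a transformation on each element; nothing is computed until an action such as collect is called, at which point the tasks are scheduled across the executors.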
In the next subsections...