We have seen how Apache Spark handles distributed computing much better than Hadoop. We also looked at its inner workings, chiefly the fundamental data structure known as the Resilient Distributed Dataset (RDD). RDDs are immutable collections representing datasets, with built-in reliability and failure recovery. RDDs do not operate on data as a single blob; rather, they manage and operate on data in partitions spread across the cluster. Hence, the concept of data partitioning is critical to the proper functioning of Apache Spark jobs and can have a significant effect on performance as well as on how cluster resources are utilized.
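As a quick illustration, the following is a minimal sketch (assuming a spark-shell session, where sc is the provided SparkContext) of creating an RDD with an explicit number of partitions and inspecting how the data is split; the dataset and the partition count of 4 are arbitrary examples, not a recommendation:

    // Request 4 partitions when creating the RDD (illustrative values only)
    val numbers = sc.parallelize(1 to 100, 4)

    // How many partitions does the RDD actually have?
    println(numbers.getNumPartitions)            // 4

    // glom() gathers each partition into an array so we can see the split
    numbers.glom().collect().foreach(part => println(part.length))

Each partition holds a slice of the dataset, and the partitions are distributed across the executors in the cluster.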
An RDD consists of partitions of data, and all operations are performed on those partitions. Several operations, such as transformations, are functions executed by an executor on the specific partition of data being processed.
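Continuing the sketch above (reusing the hypothetical numbers RDD), one way to see this per-partition execution is mapPartitions, which hands each task an entire partition as an iterator, so the supplied function runs once per partition on whichever executor processes it. The per-partition sum is purely illustrative:

    // The function below is invoked once for each partition;
    // iter contains only the elements of that partition
    val perPartitionSums = numbers.mapPartitions { iter =>
      Iterator(iter.sum)
    }

    perPartitionSums.collect().foreach(println)  // one sum per partition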