Handling persistence in Spark
In this section, we will discuss how persistence and caching are handled in Spark. We will cover the various persistence and caching mechanisms Spark provides, along with their significance.
Persistence/caching is one of the important features of Spark. Earlier, we discussed that computations/transformations in Spark are lazy: the actual computation does not take place until an action is invoked on the RDD. While this default behavior provides fault tolerance, it can also hurt the overall performance of a job, because every action recomputes its lineage from scratch. The cost is especially noticeable when a common dataset is reused across multiple computations.
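To make the lazy-evaluation point concrete, here is a minimal Scala sketch (the application name, file path, and variable names are illustrative, not from the original text):

```scala
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setAppName("LazyEval").setMaster("local[*]")
val sc = new SparkContext(conf)

val lines = sc.textFile("data/input.txt")      // transformation: nothing is read yet
val errors = lines.filter(_.contains("ERROR")) // transformation: still lazy

// Only this action triggers the actual computation; without caching,
// every subsequent action would re-read and re-filter the file.
println(errors.count())
```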
Persistence/caching helps us solve this problem through the persist() or cache() operations exposed on the RDD. When persist() or cache() is invoked, each node stores the partitions of that RDD that it computes in memory and reuses them in other actions on that dataset (or datasets derived from it). This enables subsequent actions to run much faster, since the cached partitions do not have to be recomputed.