Caching and persistence
Caching and persistence are two key techniques developers can use to improve the performance of Spark applications. We have already looked at caching with RDDs; DStreams also provide a persist() method, which persists every RDD contained in the DStream in memory. This is useful whenever the same DStream is computed over multiple times, which is exactly what happens in window-based operations. For this reason, developers do not need to call persist() explicitly on window-based operations; their results are persisted automatically. The persistence mechanism also depends on the source of the data: for data arriving from network sources such as sockets or Kafka, the received data is replicated across a minimum of two nodes by default.
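As a minimal sketch of both cases, the following Scala snippet persists a DStream explicitly because it feeds two separate computations, while the window-based operation relies on automatic persistence. The application name, local master, batch interval, and the localhost:9999 socket source are illustrative assumptions, not taken from the text.

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object DStreamPersistSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[2]").setAppName("DStreamPersistSketch")
    val ssc = new StreamingContext(conf, Seconds(10))

    // Data received over the network (socket, Kafka, and so on) is
    // replicated to two nodes by default before it is processed.
    val lines = ssc.socketTextStream("localhost", 9999)
    val words = lines.flatMap(_.split(" "))

    // 'words' feeds two separate computations, so persisting it avoids
    // recomputing the flatMap for each of them in every batch.
    words.persist()

    words.map(word => (word, 1)).reduceByKey(_ + _).print()
    words.count().print()

    // Window-based operations persist their RDDs automatically;
    // no explicit persist() call is needed for the windowed stream.
    val windowedWords = words.window(Seconds(60), Seconds(20))
    windowedWords.count().print()

    ssc.start()
    ssc.awaitTermination()
  }
}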
The difference between cache() and persist() is as follows:
cache(): Persists the RDDs of the DStream with the default storage level (MEMORY_ONLY_SER). Under the hood, cache() simply calls the persist() method with the default storage level.
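The relationship can be seen in a short sketch: the snippet below calls cache() on one derived DStream and passes an explicit storage level to persist() on another. The socket source and the choice of MEMORY_AND_DISK_SER are illustrative assumptions, not taken from the text.

import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}

object CacheVsPersistSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[2]").setAppName("CacheVsPersistSketch")
    val ssc = new StreamingContext(conf, Seconds(5))

    val events = ssc.socketTextStream("localhost", 9999)

    // cache(): takes no arguments and uses the DStream default level,
    // which is MEMORY_ONLY_SER.
    val upper = events.map(_.toUpperCase)
    upper.cache()

    // persist(level): lets you choose a non-default storage level explicitly.
    val nonEmpty = events.filter(_.nonEmpty)
    nonEmpty.persist(StorageLevel.MEMORY_AND_DISK_SER)

    upper.print()
    nonEmpty.print()

    ssc.start()
    ssc.awaitTermination()
  }
}

Choosing MEMORY_AND_DISK_SER here lets partitions that do not fit in memory spill to disk instead of being recomputed, which can be a reasonable trade-off for expensive streaming transformations.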