Shared variables
Spark breaks a job down into tasks, the smallest units of execution, which run on different nodes. When Spark ships a function (a closure) to the executors, each task receives its own copy of every variable referenced inside that closure. Changes made to these copies are never propagated back to the driver program. To overcome this limitation, Spark provides two types of shared variables: broadcast variables and accumulators. A usage sketch follows below.
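
The following is a minimal Scala sketch of both kinds of shared variables; the application name, the countryCodes lookup map, and the unknownCodes counter are illustrative, not part of any particular Spark application. The broadcast variable distributes a read-only lookup table to the executors once, while the accumulator collects updates from the executors back to the driver.

    import org.apache.spark.sql.SparkSession

    object SharedVariablesExample {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("SharedVariablesExample")
          .master("local[*]")
          .getOrCreate()
        val sc = spark.sparkContext

        // Broadcast variable: a read-only value cached on each executor
        // instead of being copied into every task's closure.
        val countryCodes = sc.broadcast(Map("IN" -> "India", "US" -> "United States"))

        // Accumulator: executors only add to it; the aggregated total
        // becomes visible to the driver after an action runs.
        val unknownCodes = sc.longAccumulator("unknownCodes")

        val codes = sc.parallelize(Seq("IN", "US", "FR", "IN"))
        val countries = codes.map { code =>
          countryCodes.value.getOrElse(code, {
            unknownCodes.add(1) // update flows back to the driver
            "Unknown"
          })
        }

        countries.collect().foreach(println)
        println(s"Codes missing from the lookup table: ${unknownCodes.value}")

        spark.stop()
      }
    }

Note that unknownCodes.value is read only on the driver, and only after the collect() action has executed; reading an accumulator inside a task, or relying on updates made within transformations that may be re-executed, can give misleading counts.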