Tuning shuffle partitions
Spark uses a technique called shuffle to move data between its executors or nodes while performing operations such as join, union, groupBy, and reduceByKey. The shuffle operation is expensive because it involves moving data across nodes, so it is usually preferable to reduce the amount of shuffling a Spark query performs. The number of partitions Spark creates when it shuffles data is determined by the following configuration:
spark.conf.set("spark.sql.shuffle.partitions", 200)
200 is the default value, and you can tune it to the number that best suits your query. If you have a lot of data but too few partitions, each task processes more data and runs longer. Conversely, if you have little data spread across too many shuffle partitions, the overhead of scheduling and launching many small shuffle tasks degrades performance. You will therefore have to run your query several times with different shuffle partition counts to arrive at an optimal number.
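For example, the following sketch (assuming a PySpark session and two hypothetical input tables, orders and customers, joined on a customer_id column) times the same shuffle-heavy query under several shuffle partition settings so the results can be compared:

import time
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("shuffle-partition-tuning").getOrCreate()

# Hypothetical input tables; substitute your own shuffle-heavy query.
orders = spark.read.parquet("/data/orders")
customers = spark.read.parquet("/data/customers")

for num_partitions in [50, 200, 400, 800]:
    spark.conf.set("spark.sql.shuffle.partitions", num_partitions)
    start = time.time()
    # The join followed by groupBy forces a shuffle; writing the result triggers execution.
    (orders.join(customers, "customer_id")
        .groupBy("country")
        .count()
        .write.mode("overwrite")
        .parquet(f"/tmp/shuffle_test_{num_partitions}"))
    print(f"{num_partitions} shuffle partitions: {time.time() - start:.1f}s")

Running each setting more than once helps average out caching and cluster warm-up effects before you settle on a value.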
You can...