Reduce and group-by transformations with paired key-value RDDs
In this recipe, we explore reducing and grouping by key. The reduceByKey() and groupByKey() operations are much more efficient than, and preferred to, reduce() and groupBy() in most cases. These functions provide convenient facilities to aggregate values and combine them by key with less shuffling, which matters because shuffling is expensive on large data sets.
How to do it...
- Start a new project in IntelliJ or in an IDE of your choice. Make sure the necessary JAR files are included.
- Set up the package location where the program will reside:
package spark.ml.cookbook.chapter3
- Import the necessary packages:
import org.apache.spark.sql.SparkSession
- Import the packages for setting up the logging level for log4j. This step is optional, but we highly recommend it (change the level appropriately as you move through the development cycle):
import org.apache.log4j.Logger
import org.apache.log4j.Level
- Set the logging level to warning and error to cut down on output, as in the sketch below. See the previous...
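One common pattern for this step is shown below; the "org" and "akka" logger names are the conventional ones silenced in Spark applications, an assumption on our part rather than something stated in this recipe:

// Suppress chatty framework logging; keep only warnings and errors.
// The logger names "org" and "akka" are conventional choices (assumption).
Logger.getLogger("org").setLevel(Level.WARN)
Logger.getLogger("akka").setLevel(Level.ERROR)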