Getting access to SparkContext vis-a-vis SparkSession object in Spark 2.0
In this recipe, we demonstrate how to obtain the SparkContext from a SparkSession object in Spark 2.0. The recipe also demonstrates creating and using RDDs and converting them to Datasets and back. This matters because, even though Datasets are the preferred abstraction going forward, we must still be able to use and extend legacy (pre-Spark 2.0) code that relies mostly on RDDs.
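To preview the pattern before the step-by-step walkthrough, here is a minimal sketch, assuming Spark 2.x running locally (the application name MyRDDToDataset and the local[*] master are illustrative, not taken from the recipe's code):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder
  .master("local[*]")               // illustrative: run locally with all available cores
  .appName("MyRDDToDataset")        // illustrative application name
  .getOrCreate()
import spark.implicits._            // brings in the encoders needed for toDS()

val sc = spark.sparkContext         // the legacy (pre-2.0) entry point, obtained from the SparkSession
val rdd = sc.parallelize(1 to 10)   // an RDD built with the legacy API
val ds = rdd.toDS()                 // RDD -> Dataset
val rddAgain = ds.rdd               // Dataset -> RDD, back to the legacy world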
How to do it...
- Start a new project in IntelliJ or an IDE of your choice. Make sure the necessary JAR files are included (one way to do this with sbt is sketched below).
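If you manage dependencies with sbt, the required JARs can be pulled in with a dependency block like the following sketch; the version number is an assumption and should match your Spark installation:

// build.sbt (sketch): Spark core and SQL are the modules this recipe relies on
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.0.0",   // assumed Spark 2.x version
  "org.apache.spark" %% "spark-sql"  % "2.0.0"
)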
- Set up the package location where the program will reside:
package spark.ml.cookbook.chapter4
- Import the necessary packages for the Spark session to gain access to the cluster, and log4j.Logger to reduce the amount of output produced by Spark:
import org.apache.log4j.{Level, Logger}
import org.apache.spark.sql.SparkSession
import scala.util.Random
- Set the output level to ERROR to reduce Spark's logging output:
Logger.getLogger("org").setLevel(Level.ERROR)   // suppress Spark's internal (org.apache.*) log messages below ERROR