Variable identification
In this recipe, we will see how to identify the predictor (input) and target (output) variables for data at scale in Spark. Once they are identified, the next step is to determine the category of each variable.
Getting ready
To step through this recipe, you will need Ubuntu 14.04 (or another Linux flavor) installed on your machine, along with Apache Hadoop 2.6 and Apache Spark 1.6.0.
How to do it…
Let's take an example of student data that we can use to predict whether a student will play cricket or not. Here is what the sample data looks like:
The preceding data resides in HDFS; load it into Spark as follows:
import org.apache.spark._
import org.apache.spark.sql._

object tricky_Stats {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setMaster("spark://master:7077")
      .setAppName("Variable_Identification")
    val sc = new SparkContext(conf)
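    // What follows is a minimal sketch, not the book's exact listing. It
    // assumes a hypothetical HDFS path, a comma-separated file with a header
    // row, and the target variable (whether the student plays cricket) in
    // the last column.
    val raw = sc.textFile("hdfs://namenode:9000/stats_data/students.csv")
    val headerLine = raw.first()
    val header = headerLine.split(",")

    // The last column is the target (output) variable; every column before
    // it is a predictor (input) variable.
    val targetVariable = header.last
    val predictorVariables = header.dropRight(1)
    println(s"Target variable: $targetVariable")
    println(s"Predictor variables: ${predictorVariables.mkString(", ")}")

    // A rough heuristic for the variable category: a column whose value in
    // the first data row parses as a number is treated as continuous,
    // otherwise as categorical. A production check would scan more rows.
    val firstDataRow = raw.filter(_ != headerLine).map(_.split(",")).first()
    header.zip(firstDataRow).foreach { case (name, value) =>
      val category =
        if (scala.util.Try(value.toDouble).isSuccess) "continuous"
        else "categorical"
      println(s"$name -> $category")
    }

    sc.stop()
  }
}
Under the assumptions noted in the comments, running this program with spark-submit should print the predictor and target variable names, followed by a first guess at each variable's category.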