Data exploration
In this recipe, we'll see how to explore data.
Getting ready
To step through this recipe, you will need a running Spark cluster in any one of the modes, that is, local, standalone, YARN, or Mesos. For installing Spark on a standalone cluster, please refer to http://spark.apache.org/docs/latest/spark-standalone.html. Also, include the Spark MLlib package in the build.sbt file so that the related libraries are downloaded and the API can be used. Install Scala and Java, and optionally Hadoop.
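A minimal build.sbt sketch for pulling in the MLlib dependency might look as follows; the project name and version numbers are placeholders, so adjust them to match your Spark and Scala versions:

```scala
name := "DataExploration"

scalaVersion := "2.11.12"

// spark-mllib transitively brings in spark-core and spark-sql
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"  % "2.2.0",
  "org.apache.spark" %% "spark-mllib" % "2.2.0"
)
```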
How to do it…
After variable identification, let's do some data exploration and draw inferences about the data. Here is the code that performs the exploration:
/* Summary statistics */
val summary = selected_Data.describe()
println("Summary Statistics")
summary.show()

/* Unique values for each field */
val columnNames = selected_Data.columns
val uniqueValues_PerField = columnNames...
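The truncated line above maps over the column names to count distinct values per field; in Spark this could plausibly continue as `columnNames.map(field => selected_Data.select(field).distinct().count())`. The following self-contained Scala sketch illustrates the same per-field unique-value counting on plain collections, without a Spark cluster; the sample `rows` data and names are hypothetical:

```scala
// Hypothetical column names and rows standing in for a DataFrame
val columnNames = Seq("age", "gender", "city")
val rows = Seq(
  Seq("25", "M", "Pune"),
  Seq("32", "F", "Delhi"),
  Seq("25", "M", "Delhi")
)

// For each column, count the distinct values appearing in that column
val uniqueValuesPerField = columnNames.zipWithIndex.map {
  case (name, i) => name -> rows.map(_(i)).distinct.size
}.toMap

println(uniqueValuesPerField) // Map(age -> 2, gender -> 2, city -> 2)
```

With the real DataFrame, `describe()` additionally reports count, mean, stddev, min, and max for the numeric columns, which together with the distinct counts gives a quick profile of each field.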