Basic statistics
Let's read the car mileage data and compute some basic statistics. In Spark 2.0.0, the DataFrameReader can read CSV files directly into Datasets, and a Dataset provides the describe() function, which calculates the count, mean, standard deviation, min, and max values. For correlation and covariance, we use the stat.corr() and stat.cov() methods. Spark 2.0.0 Datasets have made our statistics work a lot easier.
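As a quick sketch of how these calls fit together, the following standalone Scala program reads a CSV file and computes the statistics described above. The file path and the column names (mpg, hp) are illustrative assumptions, not the book's actual dataset.

```scala
import org.apache.spark.sql.SparkSession

object BasicStatsSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("BasicStats")
      .master("local[*]")
      .getOrCreate()

    // Read a CSV file into a DataFrame, inferring column types.
    // The path and columns below are hypothetical.
    val cars = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("fdps-v3/data/car-mileage.csv")

    // count, mean, stddev, min, and max for the selected columns
    cars.describe("mpg", "hp").show()

    // Pearson correlation and covariance between two numeric columns
    println(cars.stat.corr("hp", "mpg"))
    println(cars.stat.cov("hp", "mpg"))

    spark.stop()
  }
}
```

Note that describe() returns a new DataFrame of summary rows, so you can filter or join it like any other Dataset, while stat.corr() and stat.cov() each return a single Double.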
Now let's run the program, walk through the code, and compare the results.
The code files are in fdps-v3/code and the data files are in fdps-v3/data. You can run the code either from a Scala IDE or directly from the Spark shell.
Start the Spark shell from the bin directory where you have installed Spark:
/Volumes/sdxc-01/spark-2.0.0/bin/spark-shell
Inside the shell, run the following command:
:load /Users/ksankar/fdps-v3/code/ML01v2.scala
This command loads and compiles the source file, creating the ML01v2 object. To run the object, use the following command:
ML01v2.main...