Variable identification
In this recipe, we'll see how to identify the variables required for analysis and understand their descriptions.
Getting ready
To step through this recipe, you will need a running Spark cluster in any one of the modes, that is, local, standalone, YARN, or Mesos. To install Spark on a standalone cluster, please refer to http://spark.apache.org/docs/latest/spark-standalone.html. Also, include the Spark MLlib package in the build.sbt
file so that the related libraries are downloaded and the API can be used. Install Scala and Java, and optionally Hadoop.
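A build.sbt along the following lines pulls in the required Spark libraries; this is only a sketch, and the project name, Scala version, and Spark version shown here are placeholders that should be matched to your own installation:

// build.sbt -- example dependencies for Spark core, SQL, and MLlib
// (the version numbers are illustrative; align them with your cluster)
name := "variable-identification"

version := "1.0"

scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"  % "2.0.0",
  "org.apache.spark" %% "spark-sql"   % "2.0.0",
  "org.apache.spark" %% "spark-mllib" % "2.0.0"
)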
How to do it…
Let's take the Bank Marketing data, which contains information related to a direct marketing campaign run by a Portuguese banking institution in its attempts to get clients to subscribe to a term deposit. The data originally contains 41,188 rows and 21 columns. For our analysis, we'll use 10 variables. Let's look at the properties of these variables.
Tip
Please download the dataset from the following location: https://github...
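A convenient way to identify the variables is to load the data into a DataFrame and inspect its schema. The following is a minimal sketch assuming Spark 2.x and that the downloaded file has been saved locally as bank-additional-full.csv with a semicolon delimiter; the file name, delimiter, and local[*] master are assumptions, so adjust them to your copy of the data and your cluster:

import org.apache.spark.sql.SparkSession

object VariableIdentification {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("BankMarketingVariableIdentification")
      .master("local[*]")            // or point to your standalone/YARN/Mesos master
      .getOrCreate()

    // Assumed local file name; change the path to wherever you saved the data
    val bankDF = spark.read
      .option("header", "true")      // first row contains the column names
      .option("inferSchema", "true") // let Spark infer the column types
      .option("delimiter", ";")      // assumed separator for this dataset
      .csv("bank-additional-full.csv")

    // List the variables (columns) with their inferred types
    bankDF.printSchema()
    println(s"Rows: ${bankDF.count()}, Columns: ${bankDF.columns.length}")

    spark.stop()
  }
}

Running this prints each column name with its inferred type, which makes it easy to pick out the 10 variables we will carry forward into the analysis.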