Building a KMeans clustering system in Spark 2.0
In this recipe, we will load a set of features (for example, x, y, z coordinates) from a LIBSVM file, instantiate a KMeans() object, and set the number of desired clusters to three. We will then call kmeans.fit() to run the algorithm. Finally, we will print the centers of the three clusters that were found.
It is important to note that, contrary to popular literature, Spark does not implement KMeans++; instead, it implements KMeans|| (pronounced KMeans Parallel). See the following recipe and the sections following the code for a complete explanation of the algorithm as it is implemented in Spark.
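The flow described above can be sketched end to end as follows. This is a minimal illustration, not the recipe's exact listing: the data file path, app name, and seed value are assumptions, and the program is run in local mode for experimentation.

```scala
package spark.ml.cookbook.chapter8

import org.apache.spark.ml.clustering.KMeans
import org.apache.spark.sql.SparkSession

object MyKMeansCluster {
  def main(args: Array[String]): Unit = {
    // Create a Spark session (local mode is an assumption for this sketch).
    val spark = SparkSession.builder
      .master("local[*]")
      .appName("myKMeansCluster")
      .getOrCreate()

    // Load the features (for example, x, y, z coordinates) from a LIBSVM file.
    // The path below is hypothetical.
    val trainingData = spark.read.format("libsvm")
      .load("my_kmeans_data.txt")

    // Instantiate KMeans and set the desired number of clusters to three.
    // setSeed() makes the KMeans|| initialization reproducible (illustrative).
    val kmeans = new KMeans().setK(3).setSeed(1L)

    // fit() runs the algorithm against the training data.
    val model = kmeans.fit(trainingData)

    // Print the centers of the three clusters that were found.
    println("Cluster Centers:")
    model.clusterCenters.foreach(println)

    spark.stop()
  }
}
```

Note that fit() returns a KMeansModel, whose clusterCenters member holds one vector per cluster center.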
How to do it...
- Start a new project in IntelliJ or in an IDE of your choice. Make sure the necessary JAR files are included.
- Set up the package location where the program will reside:
package spark.ml.cookbook.chapter8
- Import the necessary packages for the Spark context to get access to the cluster, and Log4j.Logger...
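Assuming the recipe uses Spark's SparkSession and the ML KMeans estimator as described above, the imports for this step might look like the following. The exact list is an assumption based on the recipe description, not the book's original listing:

```scala
// Spark session entry point and the KMeans estimator from the ML package.
import org.apache.spark.ml.clustering.KMeans
import org.apache.spark.sql.SparkSession
// Log4j classes, typically used to reduce Spark's console logging noise,
// for example: Logger.getLogger("org").setLevel(Level.ERROR)
import org.apache.log4j.{Level, Logger}
```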