Naïve Bayes is one of the most widely used classification algorithms, and it can be trained and optimized quite efficiently. Spark's machine learning library, MLlib, focuses on simplifying machine learning and has good support for both multinomial and Bernoulli naïve Bayes.
Here we use the famous Iris dataset and Spark's NaiveBayes() API to classify (predict) which of the three classes of flower a given set of observations belongs to. This is an example of a multi-class classifier, so it requires multi-class metrics to measure the fit. Let's have a look at the steps to achieve this:
The data is in the file iris.data, which can be obtained from UCI. The dataset was originally introduced in the 1930s by R. A. Fisher. It is a multivariate dataset with flower attribute measurements classified into three groups. In short, by measuring four columns, we attempt to classify a species into one of the three classes of Iris flower (that is, Iris Setosa, Iris Versicolour, or Iris Virginica). We can download the data from here: https://archive.ics.uci.edu/ml/datasets/Iris/ The column definition is as follows:
1. Sepal length in cm
2. Sepal width in cm
3. Petal length in cm
4. Petal width in cm
5. Class: Iris Setosa, Iris Versicolour, or Iris Virginica
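The code in this recipe reads a pre-prepared file, iris.data.prepared.txt, in which the class names in the fifth column have already been replaced by the numeric labels 0, 1, and 2. That preparation step is not shown in this excerpt; a minimal sketch of one way to produce such a file from the raw UCI iris.data (the class-to-number mapping here is an assumption, chosen to match the sample output shown later) could look like this:
import scala.io.Source
import java.io.PrintWriter

// Assumed mapping of the UCI class names to numeric labels 0, 1, 2.
val labelMap = Map(
  "Iris-setosa" -> "0",
  "Iris-versicolor" -> "1",
  "Iris-virginica" -> "2")

val out = new PrintWriter("iris.data.prepared.txt")
Source.fromFile("iris.data").getLines()
  .filter(_.trim.nonEmpty) // the raw UCI file ends with blank lines
  .foreach { line =>
    val cols = line.split(',')
    // Keep the four measurements and swap the class name for its numeric label.
    out.println((cols.take(4) :+ labelMap(cols(4))).mkString(","))
  }
out.close()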
Note that the raw file contains a few duplicate rows, which we will remove later with a distinct() call.
Set up the package location where the program will reside:
package spark.ml.cookbook.chapter6
Import the necessary packages for SparkSession to gain access to the cluster, and Log4j.Logger to reduce the amount of output produced by Spark:
import org.apache.spark.mllib.linalg.{Vector, Vectors}
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.classification.{NaiveBayes, NaiveBayesModel}
import org.apache.spark.mllib.evaluation.{BinaryClassificationMetrics, MulticlassMetrics, MultilabelMetrics}
import org.apache.spark.sql.{SQLContext, SparkSession}
import org.apache.log4j.{Level, Logger}
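The imports bring in Logger and Level, but the call that actually lowers the log level does not appear in this excerpt; presumably it sits right after the imports, along these lines:
// Reduce Spark's console output to errors only (assumed placement):
Logger.getLogger("org").setLevel(Level.ERROR)
Next, we create a SparkSession: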
val spark = SparkSession
.builder
.master("local[4]")
.appName("myNaiveBayes08")
.config("spark.sql.warehouse.dir", ".")
.getOrCreate()
We read in the pre-prepared dataset:
val data = spark.sparkContext.textFile("../data/sparkml2/chapter6/iris.data.prepared.txt")
We parse the dataset with map() and build a LabeledPoint data structure. The last column is the label and the first four columns are the features; in the prepared file, the text in the last column (that is, the class of Iris) has already been replaced with the numeric values 0, 1, and 2 accordingly:
val NaiveBayesDataSet = data.map { line =>
  val columns = line.split(',')
  LabeledPoint(columns(4).toDouble,
    Vectors.dense(columns(0).toDouble, columns(1).toDouble,
      columns(2).toDouble, columns(3).toDouble))
}
println(" Total number of data vectors =",
NaiveBayesDataSet.count())
val distinctNaiveBayesData = NaiveBayesDataSet.distinct()
println("Distinct number of data vectors = ",
distinctNaiveBayesData.count())
Output:
(Total number of data vectors =,150)
(Distinct number of data vectors = ,147)
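The count drops from 150 to 147 because the raw Iris file contains three duplicated rows. To see which vectors occur more than once, a quick check (an illustrative addition, not part of the original recipe) is:
// Count occurrences of each vector and keep those that appear more than once.
NaiveBayesDataSet.map(lp => (lp.toString, 1))
  .reduceByKey(_ + _)
  .filter(_._2 > 1)
  .collect()
  .foreach(println)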
We examine the data by printing the first 10 rows:
distinctNaiveBayesData.collect().take(10).foreach(println(_))
Output:
(2.0,[6.3,2.9,5.6,1.8])
(2.0,[7.6,3.0,6.6,2.1])
(1.0,[4.9,2.4,3.3,1.0])
(0.0,[5.1,3.7,1.5,0.4])
(0.0,[5.5,3.5,1.3,0.2])
(0.0,[4.8,3.1,1.6,0.2])
(0.0,[5.0,3.6,1.4,0.2])
(2.0,[7.2,3.6,6.1,2.5])
..............
We split the dataset into a 30 percent training set and a 70 percent testing set using the randomSplit() method. Note that randomSplit() only approximates these proportions, and the 13L seed makes the split reproducible:
val allDistinctData = distinctNaiveBayesData.randomSplit(Array(.30, .70), 13L)
val trainingDataSet = allDistinctData(0)
val testingDataSet = allDistinctData(1)
println("number of training data =",trainingDataSet.count())
println("number of test data =",testingDataSet.count())
Output:
(number of training data =,44)
(number of test data =,103)
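As a quick check, 44 + 103 = 147, so every distinct vector lands in exactly one of the two sets, and 44/147 ≈ 0.30 matches the requested split reasonably well.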
We build the model with NaiveBayes.train() and the training dataset:
val myNaiveBayesModel = NaiveBayes.train(trainingDataSet)
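NaiveBayes.train() also has an overload that exposes the additive (Laplace) smoothing parameter and the model type; the one-argument call above is equivalent to lambda = 1.0 with a multinomial model. To set these explicitly, a sketch would be:
// Explicit hyperparameters: Laplace smoothing lambda and the model type
// ("multinomial" or "bernoulli"); these values match the defaults used above.
val tunedModel = NaiveBayes.train(trainingDataSet, lambda = 1.0, modelType = "multinomial")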
We use the map() and predict() methods to classify the flowers based on their features:
val predictedClassification = testingDataSet.map(x =>
  (myNaiveBayesModel.predict(x.features), x.label))
predictedClassification.collect().foreach(println(_))
Output:
(2.0,2.0)
(1.0,1.0)
(0.0,0.0)
(0.0,0.0)
(0.0,0.0)
(2.0,2.0)
.......
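Before building the full multi-class metrics, we can sanity-check the model with a raw accuracy figure computed straight from the (prediction, label) pairs; this is an illustrative addition, not part of the original recipe:
// Fraction of test vectors whose predicted class matches the true label.
val correct = predictedClassification.filter { case (pred, label) => pred == label }.count()
println("Raw accuracy = " + correct.toDouble / testingDataSet.count())
This figure should agree with the 0.864 value reported below.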
We use MulticlassMetrics() to create metrics for the multi-class classifier. As a reminder, this differs from the previous recipe, in which we used BinaryClassificationMetrics():
val metrics = new MulticlassMetrics(predictedClassification)
val confusionMatrix = metrics.confusionMatrix
println("Confusion Matrix=\n", confusionMatrix)
Output:
(Confusion Matrix=
,35.0 0.0 0.0
0.0 34.0 0.0
0.0 14.0 20.0 )
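Each row of the matrix corresponds to an actual class and each column to a predicted class, so the diagonal holds the correct predictions: 35 + 34 + 20 = 89 of the 103 test vectors were classified correctly, while the 14 in the third row are class-2 samples misclassified as class 1.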
Finally, we print the overall precision, F-measure, and recall:
val myModelStat = Seq(metrics.precision, metrics.fMeasure, metrics.recall)
myModelStat.foreach(println(_))
Output:
0.8640776699029126
0.8640776699029126
0.8640776699029126
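All three values are identical because the no-argument precision, recall, and fMeasure in MulticlassMetrics are micro-averaged over every prediction, which reduces to the overall accuracy (89/103 ≈ 0.864). The per-label variants expose where the errors actually are:
// Per-class breakdown; metrics.labels holds the distinct label values (0.0, 1.0, 2.0).
metrics.labels.foreach { l =>
  println(s"Class $l: precision=${metrics.precision(l)}, recall=${metrics.recall(l)}, F1=${metrics.fMeasure(l)}")
}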
We used the Iris dataset for this recipe, but we prepared the data ahead of time and then selected a distinct set of rows using the NaiveBayesDataSet.distinct() API. We then trained the model using the NaiveBayes.train() API. In the last step, we predicted with .predict() and evaluated the model's performance via MulticlassMetrics() by outputting the confusion matrix, precision, and F-measure.
The idea here was to classify the observations, based on a selected feature set (that is, feature engineering), into classes that correspond to the label. The difference from earlier recipes is that we applied conditional probability to the classification. This rests on Bayes' theorem, originally proposed by Thomas Bayes in the 18th century. A strong assumption of independence among the underlying features must hold true for the Bayes classifier to work properly.
At a high level, the way we achieved this method of classification was simply to apply Bayes' rule to our dataset. As a refresher from basic statistics, Bayes' rule can be written as follows:
P(A|B) = P(B|A) × P(A) / P(B)
The formula states that the probability of A given that B is true equals the probability of B given that A is true, times the probability of A, divided by the probability of B. It is a dense sentence, but if we step back and think about it, it makes sense.
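As a quick illustration with made-up numbers: suppose 1 percent of flowers belong to species A (P(A) = 0.01), 90 percent of species-A flowers have long petals (P(B|A) = 0.9), and 10 percent of all flowers have long petals (P(B) = 0.1). Then:
P(A|B) = (0.9 × 0.01) / 0.1 = 0.09
so observing a long petal raises the probability that the flower is species A from 1 percent to 9 percent.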
The Bayes classifier is simple yet powerful, allowing the user to take the entire probability feature space into consideration. To appreciate its simplicity, remember that probability and frequency are two sides of the same coin. The Bayes classifier is an incremental learner: it can update itself upon encountering each new sample, so the model can be updated on the fly as new observations arrive rather than only operating in batch mode.
We evaluated the model with several metrics. Since this is a multi-class classifier, we had to use MulticlassMetrics() to examine model accuracy.
For more information on MulticlassMetrics, please see the following link:
http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.mllib.evaluation.MulticlassMetrics
Documentation for the NaiveBayes constructor can be found here:
http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.ml.classification.NaiveBayes
If you enjoyed this article, you should have a look at Apache Spark 2.0 Machine Learning Cookbook which contains this excerpt.