Model evaluation
In the last section, we completed our model estimation. Now it is time to evaluate the estimated models against our client's criteria, so that we can either move on to explaining the results or return to earlier stages to refine our predictive models.
To perform our model evaluation, in this section we will focus on using the confusion matrix and false positive numbers to assess the goodness of fit of our models. To calculate them, we must use our test data rather than the training data.
A quick evaluation
As discussed before, both MLlib and R provide functions that return a confusion matrix and even false positive numbers. MLlib offers confusionMatrix and numFalseNegatives() for this purpose.
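Before turning to the MLlib calls themselves, it helps to see what these numbers mean. The following is a minimal sketch in plain Scala (no Spark required) that builds a binary confusion matrix from hypothetical (prediction, label) pairs and derives the false positive and false negative counts from it; the data values here are made up for illustration only.

```scala
// Sketch: build a 2x2 confusion matrix from (prediction, label) pairs
// and derive false positive / false negative counts and the test error.
// The pairs below are invented example data, not output from a real model.
object ConfusionMatrixSketch {
  def main(args: Array[String]): Unit = {
    val predictionAndLabels = Seq(
      (1.0, 1.0), (0.0, 1.0), (1.0, 0.0), (0.0, 0.0),
      (1.0, 1.0), (0.0, 0.0), (1.0, 1.0), (0.0, 1.0)
    ) // each pair is (prediction, label)

    // Count each cell of the confusion matrix
    val tp = predictionAndLabels.count { case (p, l) => p == 1.0 && l == 1.0 }
    val fp = predictionAndLabels.count { case (p, l) => p == 1.0 && l == 0.0 }
    val fn = predictionAndLabels.count { case (p, l) => p == 0.0 && l == 1.0 }
    val tn = predictionAndLabels.count { case (p, l) => p == 0.0 && l == 0.0 }

    println(s"Confusion matrix:\n$tn  $fp\n$fn  $tp")
    println(s"False positives = $fp, false negatives = $fn")

    // Test error = misclassified instances / all instances
    val testErr = (fp + fn).toDouble / predictionAndLabels.size
    println(s"Test Error = $testErr")
  }
}
```

The test error computed at the end is simply the off-diagonal mass of the confusion matrix divided by the total number of test instances, which is exactly what the MLlib code below computes in distributed form.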
The following code computes the test error, that is, the fraction of test instances the model misclassifies (note the comparison must be != to obtain an error rate; with == it would compute accuracy instead):

// Evaluate the model on test instances and compute the test error
val testErr = testData.map { point =>
  val prediction = model.predict(point.features)
  if (point.label != prediction) 1.0 else 0.0
}.mean()
println("Test Error = " + testErr)