The most important differences between this implementation and multiple linear regression are the following:
- Normalization is required only for the feature matrix x, not for the target vector y, because the output already lies in the range (0, 1)
- The hypothesis is different, as shown in the sketch after this list
- The cost function looks different, but the cost gradient remains the same
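To make the second and third points concrete, here is a minimal sketch (not the class implementation that follows) of the logistic hypothesis and the cross-entropy cost for a single sample. The names sigmoid, hypothesis, cost, weights, x, and y are illustrative assumptions, not identifiers from the class below:

import Foundation

// Sigmoid squashes any real number into the range (0, 1),
// which is why the target vector y needs no normalization.
func sigmoid(_ z: Double) -> Double {
    return 1.0 / (1.0 + exp(-z))
}

// Logistic hypothesis for one sample: the same linear combination wᵀx
// as in linear regression, passed through the sigmoid.
// Assumes weights and x already include the bias term.
func hypothesis(weights: [Double], x: [Double]) -> Double {
    var z = 0.0
    for i in 0..<weights.count {
        z += weights[i] * x[i]
    }
    return sigmoid(z)
}

// Cross-entropy cost for one sample with label y ∈ {0, 1}.
// The cost differs from the squared error of linear regression,
// but its gradient has the same form: (h(x) - y) * x.
func cost(weights: [Double], x: [Double], y: Double) -> Double {
    let h = hypothesis(weights: weights, x: x)
    return -(y * log(h) + (1.0 - y) * log(1.0 - h))
}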
Again, we'll need some Accelerate functions:
import Accelerate
The logistic regression class definition looks similar to the one for multiple linear regression:
public class LogisticRegression {
    public var weights: [Double]!

    public init(normalization: Bool) {
        self.normalization = normalization
    }

    private(set) var normalization: Bool
    private(set) var xMeanVec = [Double]()
    private(set) var xStdVec = [Double]()
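As a hypothetical illustration of how the stored xMeanVec and xStdVec might be applied when normalization is enabled (the helper name and its free-function form are assumptions, not part of the class above):

// Hypothetical helper, not part of the class above: z-score normalization of
// one feature row using the per-feature means and standard deviations that
// the class keeps in xMeanVec and xStdVec.
func normalizedRow(_ row: [Double], meanVec: [Double], stdVec: [Double]) -> [Double] {
    var result = [Double](repeating: 0.0, count: row.count)
    for i in 0..<row.count {
        // Subtract the feature mean and divide by the feature standard deviation.
        result[i] = (row[i] - meanVec[i]) / stdVec[i]
    }
    return result
}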