Probability
We have encountered probability in several guises so far in this book: as p-values, confidence intervals, and most recently as the output of logistic regression where the result can be considered as the probability of the output class being positive. The probabilities we calculated for the kappa statistic were the result of adding up counts and dividing by totals. The probability of agreement, for example, was calculated as the number of times the model and the data agreed divided by the number of samples. This way of calculating probabilities is referred to as frequentist, because it is concerned with the rates at which things happen.
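The frequentist calculation is simple enough to show directly. The following is a minimal sketch of the agreement probability described above; the label arrays are made-up examples, not data from the book.

```python
import numpy as np

# Hypothetical model predictions and ground-truth labels.
model_labels = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
true_labels  = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 1])

# Frequentist probability of agreement: count the agreements
# and divide by the total number of samples.
p_agree = np.mean(model_labels == true_labels)
print(p_agree)   # 0.8 -- the model and the data agree 80% of the time
```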
An output of 1.0 from logistic regression (pre-rounding) corresponds to the certainty that the input is in the positive class; an output of 0.0 corresponds to the certainty that the input isn't in the positive class. An output of 0.5 corresponds to complete uncertainty about the output class. For example, if ŷ = 0.7, the probability that y = 1 is 70%.
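A minimal sketch of this interpretation, assuming the standard logistic (sigmoid) output used by logistic regression: the raw score z is squashed into [0, 1] and read directly as P(y = 1). The score values below are illustrative only.

```python
import numpy as np

def sigmoid(z):
    # Map a raw score to a value in (0, 1), read as P(y = 1).
    return 1.0 / (1.0 + np.exp(-z))

for z in (-4.0, 0.0, 0.847):
    y_hat = sigmoid(z)
    print(f"z = {z:+.3f}  ->  P(y=1) = {y_hat:.2f},  P(y=0) = {1.0 - y_hat:.2f}")

# z = -4.000 gives about 0.02 (near certainty the class is 0),
# z =  0.000 gives 0.50 (complete uncertainty), and
# z = +0.847 gives about 0.70, i.e. a 70% probability that y = 1.
```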