Handling bias and variance in data
We encounter several types of errors in insight generation when using an analytic function. They typically fall into three main categories: bias, variance, and irreducible errors:
- Bias is the difference between the model's average prediction and the true value it is trying to predict. A high-bias ML algorithm makes overly simplistic assumptions and cannot capture the true relationship between the features and the target. An example of this is model underfitting.
- Variance measures how much the model's predictions change when it is trained on different samples of the data. A high-variance model is too sensitive to the training set and ends up fitting its noise. An example of this is model overfitting, where the model does not generalize to unseen data, often because training should have been stopped earlier. Both underfitting and overfitting are illustrated in the sketch after this list.
- Irreducible errors stem from randomness or noise in the data itself and cannot be reduced by any choice of model.
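The following sketch makes the two failure modes concrete. It is illustrative only: the synthetic sine-curve data, the noise level, and the choice of polynomial degrees are assumptions made for this example, not part of any particular analytic function discussed here. A degree-1 fit underfits (high bias), while a very high-degree fit chases the noise (high variance):

```python
# Illustrative sketch of underfitting (high bias) versus overfitting (high
# variance), assuming a synthetic sine-curve dataset and NumPy polynomial fits.
import numpy as np

rng = np.random.default_rng(42)

# Noisy samples from a sine curve: the true relationship plus irreducible noise.
x = np.sort(rng.uniform(0, 2 * np.pi, 30))
y = np.sin(x) + rng.normal(scale=0.3, size=x.size)

x_test = np.linspace(0, 2 * np.pi, 200)
y_true = np.sin(x_test)

for degree in (1, 4, 12):
    # Fit a polynomial of the given degree to the noisy training data.
    coeffs = np.polyfit(x, y, degree)
    y_pred = np.polyval(coeffs, x_test)

    # Error against the noise-free target: degree 1 is too rigid (bias),
    # degree 12 follows the noise (variance), degree 4 sits in between.
    mse = np.mean((y_pred - y_true) ** 2)
    print(f"degree {degree:2d}: test MSE vs true curve = {mse:.3f}")
```

Varying the degree and the noise level is a quick way to see how the balance between the two error sources shifts.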
Reducing bias tends to increase variance and vice versa. In other words, the two are inversely related; this is known as the bias-variance trade-off. The total prediction error is the sum of the squared bias, the variance, and the irreducible error. This can be depicted as follows:
Prediction error = Bias² + Variance + Irreducible error
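To see the decomposition in action, the short simulation below repeatedly trains the same deliberately simple model on freshly drawn noisy datasets and measures its squared bias and variance at a single test point. The data generator, the degree-1 model, and the test point are all assumptions made for illustration:

```python
# A minimal sketch of the decomposition above: estimate squared bias, variance,
# and irreducible error for a simple model by repeated simulation. The data
# generator and the degree-1 model are assumptions chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)
noise_sigma = 0.3                      # irreducible error: variance = sigma^2
x_train = np.linspace(0, 2 * np.pi, 30)
x0 = np.pi / 4                         # a fixed test point
f_x0 = np.sin(x0)                      # true value at the test point

# Train the same model on many independently drawn noisy datasets and record
# its prediction at x0 each time.
preds = []
for _ in range(2000):
    y_train = np.sin(x_train) + rng.normal(scale=noise_sigma, size=x_train.size)
    coeffs = np.polyfit(x_train, y_train, 1)   # deliberately simple (high-bias) model
    preds.append(np.polyval(coeffs, x0))
preds = np.asarray(preds)

bias_sq = (preds.mean() - f_x0) ** 2           # (average prediction - true value)^2
variance = preds.var()                         # spread of predictions across datasets
irreducible = noise_sigma ** 2

# Expected squared prediction error at x0, reconstructed from its parts.
print(f"bias^2      = {bias_sq:.4f}")
print(f"variance    = {variance:.4f}")
print(f"irreducible = {irreducible:.4f}")
print(f"total       = {bias_sq + variance + irreducible:.4f}")
```

Swapping the degree-1 model for a much more flexible one shifts error out of the bias term and into the variance term, which is the trade-off described above.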