In a linear model, correlation between features increases the variance of the associated parameter estimates (the parameters for those correlated variables): the stronger the correlation, the larger the variance. The situation becomes critical when a subset of variables is perfectly (or almost perfectly) correlated: in that case, the standard algorithm for fitting linear models fails entirely. The intuition is the following: suppose we want to model the impact of a discount (yes/no) and the weather (rain/no rain) on a restaurant's ice cream sales, and promotions happen to run on exactly the rainy days. We would then have the following design matrix (where Promotion = 1 means yes and Weather = 1 means rain):
| Promotion | Weather |
|-----------|---------|
| 1         | 1       |
| 1         | 1       |
| 0         | 0       |
| 0         | 0       |
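We can verify this failure numerically. Below is a minimal sketch (assuming NumPy is available) that builds the design matrix above, shows that its two columns are linearly dependent, and that the normal-equations matrix X'X is therefore singular and cannot be inverted:

```python
import numpy as np

# Design matrix from the table: the Promotion and Weather columns are identical
X = np.array([
    [1, 1],
    [1, 1],
    [0, 0],
    [0, 0],
])

# Two columns, but only one independent direction: rank is 1, not 2
print(np.linalg.matrix_rank(X))  # 1

# Ordinary least squares solves (X'X) beta = X'y, which requires X'X
# to be invertible; with perfectly correlated columns it is singular.
try:
    np.linalg.inv(X.T @ X)
except np.linalg.LinAlgError as err:
    print("X'X is singular:", err)
```

In practice, library routines may still return *an* answer via a pseudo-inverse (e.g. `np.linalg.lstsq`), but the individual coefficients are not uniquely determined.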
This is problematic: whenever one of the variables is 1, the other is 1 as well (and likewise for 0). The model cannot identify...