Multicollinearity: Problem, Detection and Solution
What is Multicollinearity?
One of the key assumptions of a regression-based model is that the independent/explanatory variables should not be correlated amongst themselves. Let’s try to understand why this assumption is made in the first place. The key purpose of a regression equation is to tell us the individual impact of each explanatory variable on the dependent/target variable, and that impact is captured by the regression coefficients.
So, a regression coefficient captures the average change in the dependent variable for a 1-unit change in the explanatory variable, keeping all the other explanatory variables constant. Hence, if the explanatory variables are correlated, it is not possible to disentangle their individual effects on the target variable. This problem is known as multicollinearity.
Why is Multicollinearity a problem?
Multicollinearity causes two primary issues:
1. Multicollinearity inflates the variance of the estimated coefficients, so the coefficient estimates corresponding to the interrelated explanatory variables no longer give us an accurate picture of their individual effects. They can become very sensitive to small changes in the model or the data (see the short simulation sketch after this list).
2. Consequently, the t-ratios of the individual slopes may shrink, leaving coefficients that appear statistically insignificant. It is also possible that the adjusted R-squared of a model is quite good, and the overall F-test is significant, while some of the individual coefficients are statistically insignificant. This scenario can be an indication of multicollinearity, because multicollinearity affects the coefficients and their corresponding p-values but does not affect the goodness-of-fit statistics or the overall model significance.
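To see the first issue concretely, here is a minimal simulation sketch on entirely hypothetical data (the coefficient values, sample size, and correlation levels are my own illustrative choices): the same true coefficients are estimated once with nearly independent predictors and once with highly correlated predictors, and the standard errors of the slopes are compared.

```python
# A minimal simulation sketch (hypothetical data): the same true slopes (2 and 3)
# are estimated with nearly independent vs. highly correlated predictors, and the
# standard errors of the slope estimates are compared.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200

def slope_std_errors(corr):
    x1 = rng.normal(size=n)
    # Build x2 with (approximately) the requested correlation to x1
    x2 = corr * x1 + np.sqrt(1 - corr ** 2) * rng.normal(size=n)
    y = 2 * x1 + 3 * x2 + rng.normal(size=n)
    X = sm.add_constant(np.column_stack([x1, x2]))
    return sm.OLS(y, X).fit().bse[1:]  # standard errors of the two slopes

print("corr ~ 0.10:", slope_std_errors(0.10).round(3))
print("corr ~ 0.99:", slope_std_errors(0.99).round(3))
```

Under the highly correlated design, the slope standard errors should come out many times larger, which is exactly the inflation that the next section quantifies with the VIF.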
How do we measure Multicollinearity?
A very simple test known as the VIF test is used to assess multicollinearity in our regression model. The variance inflation factor (VIF) measures how strongly each predictor is related to the remaining predictors.
We may wonder why we need VIFs and why we cannot simply use pairwise correlations.
Since multicollinearity is correlation amongst the explanatory variables, it seems quite logical to use the pairwise correlations between all predictors in the model to assess its degree. However, pairwise correlations can miss it: with five predictors, for example, none of the pairwise correlations may be exceptionally high, and yet three of the predictors together could explain a very high proportion of the variance in a fourth.
This sounds like a multiple regression model in itself, and that is exactly what VIFs rely on. Of course, the original model has a dependent variable (Y), but we do not need it while measuring multicollinearity. The formula for VIF is
VIFj = 1 / (1 − Rj²)
Here Rj² is the R-squared of the auxiliary model that regresses one individual predictor on all the other predictors. The subscript j indexes the predictors, and each predictor has its own VIF. So, more precisely, VIFs use multiple regression models among the predictors to calculate the degree of multicollinearity. Suppose we have four predictors: X1, X2, X3, and X4. To calculate the VIFs, each independent variable becomes the dependent variable one by one, and each such model produces an R-squared value indicating the proportion of the variance in that predictor that the set of other predictors explains (a small sketch of this idea is shown below).
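As a quick illustration of the auxiliary-regression idea, here is a minimal sketch that computes each VIF directly from the formula above using scikit-learn. The function name vif_from_auxiliary_regressions and the assumption that X is a pandas DataFrame holding only the predictors are my own illustrative choices.

```python
# A minimal sketch of the auxiliary-regression idea behind VIF.
# Assumes a pandas DataFrame X containing only the predictors (at least two columns).
import pandas as pd
from sklearn.linear_model import LinearRegression

def vif_from_auxiliary_regressions(X: pd.DataFrame) -> pd.Series:
    vifs = {}
    for col in X.columns:
        others = X.drop(columns=col)
        # R-squared of regressing predictor j on all remaining predictors
        r_sq = LinearRegression().fit(others, X[col]).score(others, X[col])
        vifs[col] = 1.0 / (1.0 - r_sq)  # VIF_j = 1 / (1 - R_j^2)
    return pd.Series(vifs, name="VIF")
```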
The name “variance inflation factor” was coined because the VIF tells us the factor by which the correlations amongst the predictors inflate the variance of a coefficient estimate. For example, a VIF of 10 indicates that the existing multicollinearity is inflating the variance of that coefficient 10 times compared to a model with no multicollinearity, which means its standard error is inflated by a factor of √10 ≈ 3.2. The variance we are talking about here is the square of the standard error of the coefficient estimate, and the standard error indicates the precision of the estimate; it is used to calculate the confidence interval of the coefficient.
Larger standard errors produce wider confidence intervals and hence less precise coefficient estimates. Additionally, very wide confidence intervals can even lead to estimated coefficients with the wrong sign.
VIFs start at 1 and have no upper limit; the lower the value, the better. VIFs between 1 and 5 suggest that the correlation is not severe enough to warrant corrective measures. VIFs greater than 5 represent critical levels of multicollinearity, where the coefficient estimates may not be trusted and their statistical significance is questionable. That said, the need to reduce multicollinearity depends on its severity.
A general industry rule of thumb is to keep VIF < 5. However, many econometrics textbooks consider multicollinearity to be severe only when VIF > 10. It is a somewhat subjective call that depends on the specific case and the researcher’s judgment.
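If we do decide to act on a threshold such as VIF < 5, one simple and commonly used approach is to drop the predictor with the highest VIF and recompute until everything falls below the cut-off. Here is a minimal sketch of that idea, reusing the hypothetical vif_from_auxiliary_regressions helper sketched earlier; the function name and default threshold are illustrative.

```python
# A minimal sketch of threshold-based pruning, reusing the hypothetical
# vif_from_auxiliary_regressions helper from the earlier sketch.
def drop_high_vif(X, threshold=5.0):
    X = X.copy()
    while X.shape[1] > 1:
        vifs = vif_from_auxiliary_regressions(X)
        if vifs.max() <= threshold:
            break
        # Drop the predictor with the largest VIF and recompute on the rest
        X = X.drop(columns=[vifs.idxmax()])
    return X
```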
How can we fix Multicollinearity in our model?
The potential solutions include the following:
1. Simply drop some of the correlated predictors. From a practical point of view, there is little value in keeping two very similar predictors in the model. Hence, VIF is also widely used as a variable selection criterion when we have a lot of predictors to choose from.
2. We can mean-center the predictors by subtracting each predictor’s mean from its observations and use these centered variables directly in our model. Centering helps in particular when the multicollinearity is structural, i.e., created by interaction or polynomial terms. Because centering leaves the scale of the variables unchanged, the coefficients continue to represent the average change in the dependent variable for a 1-unit change in the predictor.
3. Apply a linear transformation, e.g., add or subtract two predictors to create a single new, bespoke predictor.
4. As an extension of the previous two points, another very popular technique is Principal Component Analysis (PCA). PCA is used when we want to reduce the number of variables in our data but are not sure which variable to drop. It is a transformation that combines the existing predictors in a way that keeps only the most informative part.
It creates new variables, known as principal components, that are uncorrelated with one another. So, if we have 10-dimensional data, a PCA transformation will give us 10 principal components, squeezing the maximum possible information into the first component, the maximum of the remaining information into the second, and so on (see the sketch after this list). The primary limitation of this method is interpretability: the original predictors lose their identity, and there is a chance of information loss. At the end of the day, it is a trade-off between accuracy and interpretability.
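To make point 4 concrete, here is a minimal sketch of principal components regression with scikit-learn. The DataFrame X of predictors, the target y, the function name pcr_fit, and the choice of three components are all illustrative assumptions rather than anything prescribed above.

```python
# A minimal sketch of principal components regression: scale the predictors,
# rotate them into uncorrelated components, keep the first few, and regress
# the target on those components.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def pcr_fit(X: pd.DataFrame, y: pd.Series, n_components: int = 3):
    model = make_pipeline(StandardScaler(),
                          PCA(n_components=n_components),
                          LinearRegression())
    return model.fit(X, y)
```

Regressing on a handful of uncorrelated components removes the multicollinearity by construction, at the cost of the interpretability mentioned above.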
How to calculate VIF (R and Python Code):
I am using a subset of the house price data from Kaggle. The dependent/target variable in this dataset is “SalePrice”. There are around 80 predictors (both quantitative and qualitative) in the full dataset. For simplicity, I have selected 10 predictors that I feel, based on intuition, will be suitable predictors of the sale price of the houses. Please note that I did not apply any treatment, e.g., creating dummies for the qualitative variables; this example is for illustration purposes only.
The following table lists the predictors I chose along with their descriptions.

The code below shows how to calculate VIF in R. For this, we need to install the ‘car’ package; there are other R packages available as well.
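Here is a minimal sketch of the R approach, assuming the Kaggle train.csv file is in the working directory. The predictor column names used below are my own illustrative choices from the Kaggle house price data, not necessarily the ten predictors described above.

```r
# A minimal sketch: fit a regression on a hypothetical subset of predictors
# from the Kaggle House Prices train.csv and compute VIFs with the car package.
# install.packages("car")  # if not already installed
library(car)

house <- read.csv("train.csv")

model <- lm(SalePrice ~ OverallQual + GrLivArea + GarageCars + GarageArea +
              TotalBsmtSF + FullBath + YearBuilt + YearRemodAdd +
              LotArea + OverallCond,
            data = house)

# VIF for each predictor in the fitted model
vif(model)
```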
In the resulting output, most of the predictors have VIF <= 5.
Now, if we want to do the same thing in Python, we can use the code below.
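Here is a minimal sketch of the Python approach with statsmodels, again assuming the Kaggle train.csv file and the same hypothetical subset of predictor columns as in the R snippet.

```python
# A minimal sketch: compute VIFs with statsmodels on a hypothetical subset of
# predictors from the Kaggle House Prices train.csv.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

house = pd.read_csv("train.csv")

# Illustrative predictor subset -- replace with your own choices
predictors = ["OverallQual", "GrLivArea", "GarageCars", "GarageArea",
              "TotalBsmtSF", "FullBath", "YearBuilt", "YearRemodAdd",
              "LotArea", "OverallCond"]

# Add an intercept/constant column explicitly before computing the VIFs
X = sm.add_constant(house[predictors])

vif = pd.DataFrame({
    "variable": X.columns,
    "VIF": [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
})
print(vif)
```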
Please note that in the Python code I have added an intercept/constant column to the data set before calculating the VIFs. This is because the variance_inflation_factor function in Python does not include an intercept by default while calculating the VIFs, whereas R’s vif() works on a fitted model that already contains one. Hence, we may come across very different results in the R and Python outputs if the intercept is not handled consistently.