How does multicollinearity affect neural networks?


Multicollinearity is a problem for linear regression because the estimated coefficients become unstable and depend too heavily on individual observations (source).

(Also, under perfect multicollinearity the inverse of $X^TX$ doesn't exist, so the standard OLS estimator is undefined ... I have no idea how, but sklearn deals with it just fine.)
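For what it's worth, here is a quick sketch of that sklearn behaviour. As far as I can tell, `LinearRegression` delegates to an SVD-based least-squares solver that returns the minimum-norm solution, so a perfectly collinear design matrix does not crash the fit:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 1))
X = np.hstack([x, 2 * x])                     # second column is exactly twice the first
y = 3 * x[:, 0] + rng.normal(scale=0.1, size=100)

# X^T X is singular here, but the SVD-based solver still returns a
# (minimum-norm) least-squares solution, so the fit succeeds.
model = LinearRegression().fit(X, y)
print(model.coef_)                            # individual weights are arbitrary...
print(model.score(X, y))                      # ...but the fit itself is fine
```

The individual coefficients are not identifiable (any pair with `coef_[0] + 2 * coef_[1] ≈ 3` predicts equally well); only that combination is pinned down by the data.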

Is (perfect) multicollinearity also a problem for neural networks?

Martin Thoma

Posted 2018-02-26T19:20:41.930

Reputation: 15 590

I asked a similar question; let me know if it helps: https://datascience.stackexchange.com/questions/85130/repeated-features-in-neural-networks-with-tabular-data

– Carlos Mougan – 2021-01-10T08:54:23.607

Answers


Multicollinearity affects the learning of an artificial neural network. Since a collinear variable carries very little information beyond what the other variables already provide, the neural network will take more time to converge.

In packages like sklearn, such redundant variables are identified and omitted from the calculation. I have used the lm function in R, and it marks the coefficient of the redundant variable as NA; one can remove that variable from the calculation and the remaining coefficients stay the same. In these cases, the rank of the X matrix will be less than its number of columns.
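The rank deficiency mentioned above is easy to check numerically; a minimal numpy sketch with one duplicated column:

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(size=(50, 1))
b = rng.normal(size=(50, 1))
X = np.hstack([a, a, b])               # first two columns are identical

# Rank is 2, one less than the 3 columns, exposing the redundancy.
print(np.linalg.matrix_rank(X))        # 2
```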

Even though no inverse of $X^TX$ exists, most packages will not calculate the inverse directly; they calculate the pseudo-inverse instead.
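As an illustration of the pseudo-inverse route (a toy sketch, not how any particular package is implemented internally): with two identical columns, the pseudo-inverse picks the minimum-norm solution, splitting the true weight evenly between them.

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.normal(size=(30, 1))
X = np.hstack([a, a])                  # X^T X has rank 1, hence is singular
y = 4 * a[:, 0]

# np.linalg.inv(X.T @ X) would fail, but the pseudo-inverse yields the
# minimum-norm solution: the true weight of 4 is shared across both columns.
beta = np.linalg.pinv(X.T @ X) @ (X.T @ y)
print(beta)                            # ≈ [2., 2.]
```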

narasimman

Posted 2018-02-26T19:20:41.930

Reputation: 131

"Multi colinearity affects the learning of Artificial Neural network. Since the information in the dependent variable is very less compared to the other variables, the neural network will take more time to converge."

Do you have a source for that? – Martin Thoma – 2018-02-26T20:42:35.853


I just came across a research paper that answers this question. In case it helps anyone in the future: the paper Multicollinearity: A tale of two nonparametric regressions notes that neural networks generally do not suffer from multicollinearity because they tend to be overparameterized. The extra learned weights create redundancies that make anything affecting only a small subset of features (such as multicollinearity) unimportant.

Due to its overparameterization, the coefficients or weights of a neural network are inherently difficult to interpret. However, it is this very redundancy that makes the individual weights unimportant. That is, at each level of the network, the inputs are linear combinations of the outputs of the previous level. The final output is a function of very many combinations of sigmoidal functions involving high-order interactions of the original predictors. Thus neural networks guard against the problems of multicollinearity at the expense of interpretability.
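A toy illustration of why gradient-based training (the optimization used for neural networks) tolerates this (my own sketch, not from the paper): gradient descent never inverts $X^TX$, so a singular design matrix is harmless; the duplicated inputs simply share the weight between them.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=(200, 1))
X = np.hstack([x, x])                  # perfectly collinear inputs
y = 3 * x[:, 0]

# Plain gradient descent on the squared error: no matrix inversion anywhere,
# so the singular X^T X never needs to be inverted.
w = rng.normal(size=2)
for _ in range(500):
    grad = X.T @ (X @ w - y) / len(y)
    w -= 0.05 * grad

print(w, w.sum())                      # individual weights are arbitrary, but w[0] + w[1] ≈ 3
```

The split between `w[0]` and `w[1]` depends on the random initialization (the direction `w[0] - w[1]` lies in the null space of $X^TX$ and is never updated), but the predictions converge all the same.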

user3667125

Posted 2018-02-26T19:20:41.930

Reputation: 101