Negative correlations are just as valid and useful as positive correlations.

In your example, a correlation of 0.2 and a correlation of -0.2 have equal value to your model. A negative correlation simply means that as one value goes up, the other goes down.

Also, the closer the correlation coefficient is to 1 (for a positive correlation) or to -1 (for a negative correlation), the more useful that feature will be to a modelling algorithm.
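To illustrate the point about sign versus magnitude, here is a minimal sketch (using NumPy on synthetic data, not anything from the question) showing that a positively and a negatively related variable can be equally informative; only the sign of the coefficient flips:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)

# Two targets with the same strength of relationship to x, opposite directions.
y_pos = x + rng.normal(scale=2.0, size=1000)
y_neg = -x + rng.normal(scale=2.0, size=1000)

r_pos = np.corrcoef(x, y_pos)[0, 1]
r_neg = np.corrcoef(x, y_neg)[0, 1]

# r_pos is positive, r_neg is negative, but their magnitudes are comparable,
# so both variables are about equally useful to a model.
```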

For most algorithms, the independent variables do not have to be uncorrelated to be useful in a model. Most models will handle the cross-correlation between features, and in some cases dropping one of them could actually be detrimental, losing information that would have been useful to the model.
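As a concrete sketch of why dropping a correlated feature can hurt (synthetic data and a plain least-squares fit, chosen for illustration): two strongly correlated features can each carry a piece of the signal, so the model with both fits much better than the model with either one alone.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x1 = rng.normal(size=n)
x2 = x1 + 0.5 * rng.normal(size=n)          # x2 is strongly correlated with x1
y = 2.0 * x1 - 1.0 * x2 + 0.1 * rng.normal(size=n)

def r2(X, y):
    """Ordinary least squares fit, then the coefficient of determination."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

corr_x1_x2 = np.corrcoef(x1, x2)[0, 1]      # high cross-correlation
r2_both = r2(np.column_stack([x1, x2]), y)  # both features kept
r2_x1_only = r2(x1[:, None], y)             # x2 dropped

# Despite the high correlation between x1 and x2, the two-feature model
# explains noticeably more variance than the model that drops x2.
```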

Usually we drop features when we have too many of them, when they are too sparse, or when our feature-to-row ratio is too high.

Both of these facts apply equally to classification and regression.

So the closer the correlation coefficient between an independent variable and the dependent variable is to 1 or -1, the better, right? But we want lower correlation between the independent variables themselves. – user100552 – 2020-07-14T14:05:02.993

Yes, your first statement is correct. On your second statement: as explained in the answer, the independent variables do not have to be uncorrelated to be useful, most models handle cross-correlation between features, and dropping a correlated feature can lose information the model needs. That holds for classification and regression alike. – Donald S – 2020-07-14T14:21:15.313