89

57

Can someone practically explain the rationale behind Gini impurity vs Information gain (based on Entropy)?

Which metric is better to use in different scenarios while using decision trees?

64

Gini impurity and entropy (as used for information gain) are pretty much the same, and people do use the values interchangeably. Below are the formulae of both:

- $\textit{Gini}: \mathit{Gini}(E) = 1 - \sum_{j=1}^{c}p_j^2$
- $\textit{Entropy}: H(E) = -\sum_{j=1}^{c}p_j\log p_j$

Given a choice, I would use the Gini impurity, as it doesn't require me to compute logarithmic functions, which are computationally intensive. A closed-form solution for it can also be found.
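The two formulae above translate directly into code. A minimal sketch in Python (the function names are my own):

```python
import math

def gini(probs):
    """Gini impurity: 1 - sum of squared class probabilities."""
    return 1.0 - sum(p * p for p in probs)

def entropy(probs):
    """Shannon entropy in bits: -sum(p * log2(p)), treating 0*log(0) as 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A maximally mixed binary node scores 0.5 under Gini and 1.0 under entropy;
# a pure node scores 0 under both.
print(gini([0.5, 0.5]), entropy([0.5, 0.5]))  # 0.5 1.0
```

Both reach their minimum of 0 on a pure node and their maximum on a uniform distribution, which is why they behave so similarly as splitting criteria.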

Which metric is better to use in different scenarios while using decision trees?

The Gini impurity, for reasons stated above.

So, **they are pretty much same when it comes to CART analytics.**

Helpful reference for computational comparison of the two methods

1

It is so common to see the formula for entropy, while what is really used in the decision tree looks like conditional entropy. I think it is an important distinction, or am I missing something? – user1700890 – 2017-08-12T13:01:18.253

@user1700890 The ID3 algorithm uses Info. gain entropy. I need to read up on conditional entropy. Probably an improvement over ID3 :) – Dawny33 – 2017-08-12T13:55:34.813

2

I think your definition of the Gini impurity might be wrong: https://en.wikipedia.org/wiki/Decision_tree_learning#Gini_impurity

– Martin Thoma – 2017-10-19T11:30:58.387

@Dawny33 what is so computationally intensive about computing logarithms? It's just a button on the calculator, done. And entropy does have a closed-form solution for the Gaussian distribution and others. – develarist – 2020-10-21T00:03:11.950

31

Generally, your performance will not change whether you use Gini impurity or Entropy.

Laura Elena Raileanu and Kilian Stoffel compared both in "Theoretical comparison between the gini index and information gain criteria". The most important remarks were:

- It only matters in 2% of the cases whether you use gini impurity or entropy.
- Entropy might be a little slower to compute (because it makes use of the logarithm).

I was once told that both metrics exist because they emerged in different disciplines of science.

19

Gini is intended for continuous attributes, and entropy is for attributes that occur in classes.

**Gini** is to minimize misclassification.

**Entropy** is for exploratory analysis.

Entropy is a little slower to compute.

9

To add to the fact that they are more or less the same, consider also that:
$$ \begin{split} \forall \; 0 < u < 1,\; \log (1-u) &= -u - u^2/2 - u^3/3 \, - \, \cdots\\ \forall \; 0 < p < 1,\; \log (p) &= p-1 - (1-p)^2/2 - (1-p)^3/3 \, - \, \cdots\\ \end{split} $$
so that:
$$ \forall \; 0 < p < 1,\; -p \log (p) = p(1-p) + p(1-p)^2/2 + p(1-p)^3/3 \, + \, \cdots $$
See the following plot of the two functions, normalised so that their maximum value is 1: the red curve is Gini, the black one is entropy.

In the end, as explained by @NIMISHAN, Gini is more suitable to minimise misclassification, as it is symmetric about 0.5, while entropy penalises small probabilities more.
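The series expansion above (which uses the natural logarithm) can be checked numerically; note that its first term, $p(1-p)$, is exactly the Gini contribution of a class with probability $p$. A quick sketch (helper name is my own):

```python
import math

def entropy_term_series(p, n_terms=200):
    """Partial sum of p(1-p) + p(1-p)^2/2 + p(1-p)^3/3 + ...
    which should converge to -p*ln(p) for 0 < p < 1."""
    return sum(p * (1 - p) ** k / k for k in range(1, n_terms + 1))

p = 0.3
exact = -p * math.log(p)          # the entropy term -p*ln(p)
approx = entropy_term_series(p)   # the series, truncated at 200 terms
print(exact, approx)              # the two agree to many decimal places
```

This makes concrete why the two impurity measures track each other so closely: the Gini term is just the leading term of the entropy term's expansion.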

7

Entropy takes slightly more computation time than the Gini index because of the log calculation; maybe that's why the Gini index has become the default option for many ML algorithms. But, from Tan et al.'s book *Introduction to Data Mining*:

"Impurity measures are quite consistent with each other... Indeed, the strategy used to prune the tree has a greater impact on the final tree than the choice of impurity measure."

So, it looks like the selection of impurity measure has little effect on the performance of single decision tree algorithms.

Also, "Gini method works only when the target variable is a binary variable." – *Learning Predictive Analytics with Python*.

5

I've been doing optimizations on binary classification for the past week+, and in every case, entropy significantly outperforms gini. This may be data set specific, but it would seem like trying both while tuning hyperparameters is a rational choice, rather than making assumptions about the model ahead of time.

You never know how data will react until you've run the statistics.
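Treating the criterion itself as a hyperparameter is easy to sketch: score the same candidate split under both measures and keep whichever works better in validation. A minimal illustration in plain Python (the helper names and the toy class counts are my own):

```python
import math

def gini(counts):
    """Gini impurity from raw class counts."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def entropy(counts):
    """Entropy in bits from raw class counts."""
    n = sum(counts)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

def split_score(left_counts, right_counts, criterion):
    """Weighted child impurity for a candidate binary split (lower is better)."""
    n_l, n_r = sum(left_counts), sum(right_counts)
    n = n_l + n_r
    return (n_l / n) * criterion(left_counts) + (n_r / n) * criterion(right_counts)

# Evaluate one candidate split (class counts in left/right children)
# under both criteria, as a tuning loop would:
candidate = ([8, 2], [1, 9])
for name, crit in [("gini", gini), ("entropy", entropy)]:
    print(name, round(split_score(*candidate, crit), 4))
```

In scikit-learn this corresponds to sweeping the `criterion` parameter of `DecisionTreeClassifier` over `"gini"` and `"entropy"` in a grid search, rather than fixing it in advance.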

3

As per the parsimony principle, Gini outperforms entropy in terms of computational ease (a log obviously involves more computation than plain multiplication at the processor/machine level).

But entropy definitely has an edge in some data cases involving high imbalance.

Since entropy uses the log of the probabilities, multiplied by the probability of the event, what is happening in the background is that the values of the lower probabilities are getting scaled up.

If your data's probability distribution is exponential or Laplace (as in deep learning cases where we need the probability distribution at a sharp point), entropy outperforms Gini.

To give an example: suppose you have $2$ events, one with $.01$ probability and the other with $.99$ probability.

Under Gini, the squared probabilities are $.01^2 + .99^2 = .0001 + .9801$, which means the lower probability does not play any role, as everything is governed by the majority probability.

Now in the case of entropy (using base-10 logs), $.01\log(.01) + .99\log(.99) = .01 \times (-2) + .99 \times (-.00436) = -.02 - .00432$; in this case it is clearly seen that the lower probabilities are given better weight-age.
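The imbalance argument above can be reproduced directly, comparing how much of the total each class's term contributes under the two measures (base-10 logs, as in the example):

```python
import math

p_minor, p_major = 0.01, 0.99

# Gini terms are squared probabilities: the minority class barely registers.
gini_terms = (p_minor ** 2, p_major ** 2)            # (0.0001, 0.9801)

# Entropy terms -p*log10(p) scale the minority class's contribution up.
ent_terms = (-p_minor * math.log10(p_minor),          # 0.02
             -p_major * math.log10(p_major))          # ~0.00436

# Minority-to-majority contribution ratio under each measure:
print(gini_terms[0] / gini_terms[1])   # ~1e-4: minority is negligible
print(ent_terms[0] / ent_terms[1])     # > 1: minority actually dominates
```

The choice of log base only rescales entropy by a constant, so the same contrast holds with natural or base-2 logs.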

I have proposed "Information gain" instead of "Entropy", since it is quite closer (IMHO), as marked in the related links. Then, the question was asked in a different form in When to use Gini impurity and when to use information gain?

– Laurent Duval – 2016-02-16T07:23:36.827

1

I have posted here a simple interpretation of the Gini impurity that may be helpful.

– Picaud Vincent – 2017-11-06T11:33:25.850