I use this kind of rule for `class_weight`:

```
import math

import numpy as np

# labels_dict : {ind_label: count_label}
# mu : parameter to tune
def create_class_weight(labels_dict, mu=0.15):
    total = np.sum(list(labels_dict.values()))
    class_weight = dict()

    for key in labels_dict.keys():
        score = math.log(mu * total / float(labels_dict[key]))
        class_weight[key] = score if score > 1.0 else 1.0

    return class_weight

# random labels_dict
labels_dict = {0: 2813, 1: 78, 2: 2814, 3: 78, 4: 7914, 5: 248, 6: 7914, 7: 248}

create_class_weight(labels_dict)
```

`math.log` smooths the weights for very imbalanced classes. This returns:

```
{0: 1.0,
1: 3.749820767859636,
2: 1.0,
3: 3.749820767859636,
4: 1.0,
5: 2.5931008483842453,
6: 1.0,
7: 2.5931008483842453}
```
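As a quick sanity check (not part of the original answer), the weight for a rare class can be reproduced by hand from the formula above:

```python
import math

# Counts taken from the labels_dict above: class 1 has 78 samples
# out of 22107 total, and mu = 0.15.
total = 2813 + 78 + 2814 + 78 + 7914 + 248 + 7914 + 248  # 22107
weight_class_1 = math.log(0.15 * total / float(78))
print(weight_class_1)  # ~3.7498, matching the output above
```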


Also have a look at https://github.com/fchollet/keras/issues/3653 if you're working with 3D data.
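If you end up needing per-sample weights instead (as in the 3D case discussed in that issue), a class-weight dict like the one above can be expanded into a `sample_weight` array. A sketch with made-up labels, not code from the thread:

```python
import numpy as np

# Hypothetical integer labels and a class-weight dict of the kind
# create_class_weight() returns.
y = np.array([0, 1, 1, 4, 5])
class_weight = {0: 1.0, 1: 3.75, 4: 1.0, 5: 2.59}

# One weight per training sample, usable as model.fit(..., sample_weight=...)
sample_weight = np.array([class_weight[c] for c in y])
```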

– herve – 2017-04-26T09:12:48.837

For me it gives an error: dict doesn't have a `shape` attribute. – Flávio Filho – 2017-05-23T00:11:08.297

I believe Keras may have changed the way this works; this was written for the August 2016 version. I will verify for you in a week. – layser – 2017-05-25T14:12:47.580

Does this work for one-hot-encoded labels? – megashigger – 2018-01-08T19:49:49.183

@layser Does this work only for 'categorical_crossentropy' loss? How do you give class_weight to Keras for 'sigmoid' and 'binary_crossentropy' loss? – Naman – 2018-04-15T19:26:01.297
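For what it's worth, the mechanism is the same for binary cross-entropy: each sample's loss term is scaled by the weight of that sample's true class. A pure-Python sketch of that behaviour (an illustration, not the actual Keras implementation):

```python
import math

def weighted_bce(y_true, y_pred, class_weight):
    """Mean binary cross-entropy with each term scaled by the
    weight of the sample's true class (illustrative only)."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        w = class_weight[t]
        total += -w * (t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)

# Doubling the weight of class 1 doubles a class-1 sample's loss.
loss_w1 = weighted_bce([1], [0.9], {1: 1.0})
loss_w2 = weighted_bce([1], [0.9], {1: 2.0})
```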

@layser Can you explain `to treat every instance of class 1 as 50 instances of class 0`? Is it that, in the training set, the row corresponding to class 1 is duplicated 50 times in order to make it balanced, or does some other process follow? – Divyanshu Shekhar – 2018-06-12T05:12:22.277

How can we know which class is class 0? Same for class 1. – Philippe Remy – 2020-11-10T08:26:57.957