Unable to understand the usage of the labels argument in sklearn.metrics.f1_score

2

I am trying to model a dataset with a RandomForestClassifier. My dataset has 3 classes, viz. A, B, and C. 'A' is the negative class and 'B' and 'C' are the positive classes.

In GridSearch I wanted to optimize for F1-score, since the samples are not evenly distributed across the classes and class 'A' has the highest number of samples.

That is where I wanted to understand the usage of the labels argument. The docs say:

labels : list, optional The set of labels to include when average != 'binary', and their order if average is None. Labels present in the data can be excluded, for example to calculate a multiclass average ignoring a majority negative class, while labels not present in the data will result in 0 components in a macro average.

I could not understand it properly. Does it mean that, in my scenario, I should set labels = ['B', 'C'], i.e. just the positive classes?
Kindly Help

from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, make_scorer
from sklearn.model_selection import GridSearchCV

custom_scoring = make_scorer(f1_score, labels=[???], average='weighted')
clf = RandomForestClassifier(class_weight='balanced', random_state=args.random_state)
grid_search = GridSearchCV(clf, param_grid=param_grid, n_jobs=20, scoring=custom_scoring)
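
For concreteness, here is a minimal sketch (with made-up toy labels, not my real data) of what the scorer would compute with and without restricting labels:

from sklearn.metrics import f1_score

# Made-up predictions for a 3-class problem; 'A' is the majority/negative class
y_true = ['A', 'A', 'A', 'A', 'B', 'B', 'C', 'C']
y_pred = ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C']

# Weighted F1 over all three classes (what you get if labels is omitted)
print(f1_score(y_true, y_pred, average='weighted'))

# Weighted F1 over the positive classes only, ignoring 'A'
print(f1_score(y_true, y_pred, labels=['B', 'C'], average='weighted'))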

Sourabh

Posted 2019-08-01T04:34:08.690

Reputation: 121

Answers

0

The F1 measure is a type of class-balanced accuracy measure - when there are only two classes, it's very straightforward, as there's only one possible way to compute it. With 3 classes, however, you could compute the F1 measure for classes A and B, or B and C, or C and A, or between all three of A, B and C.

It seems that the "labels" parameter tells the method which classes to compute your measure over. Since F1 already accounts for class balance, you probably want to include all three labels for your measure.

This parameter matters more for imbalance-insensitive measures like raw accuracy, because it lets you compute the score on a subset of the classes - in the documentation example, it is used to exclude a majority class so that you can evaluate performance on the minority classes only. If you have a huge imbalance, say 99% of your data is of one class, your accuracy measure will be completely dominated by accuracy within that class - for this reason, it can be more informative to see how well the classifier does in the remaining 1% only.
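
To make the 99% example concrete, here is a minimal sketch (toy arrays, not real data) comparing plain accuracy with an F1 restricted to the minority label via labels:

import numpy as np
from sklearn.metrics import accuracy_score, f1_score

# 99 samples of the majority class 'A', 1 sample of the minority class 'B';
# a lazy classifier that always predicts 'A'
y_true = np.array(['A'] * 99 + ['B'])
y_pred = np.array(['A'] * 100)

print(accuracy_score(y_true, y_pred))                           # 0.99 - looks great
print(f1_score(y_true, y_pred, labels=['B'], average='macro'))  # 0.0 - the minority class is never found
# (sklearn may warn that precision is ill-defined here, since 'B' is never predicted)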

Nuclear Hoagie

Posted 2019-08-01T04:34:08.690

Reputation: 916

-1

With an imbalanced dataset, the accuracy score can be as high as 99%, which seems impressive, but the minority class may be ignored almost entirely.

If the dataset is imbalanced, pre-process it with a sampling algorithm (e.g. SMOTE) to re-sample it. SMOTE creates synthetic examples for the minority classes based on their nearest neighbours, so that all classes end up with an equal number of samples.

https://stackoverflow.com/questions/57205718/how-can-we-be-sure-of-the-efficiency-of-a-neural-network/57211888#57211888
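
A minimal sketch of that idea, assuming the imbalanced-learn package (imblearn) is installed and using synthetic data in place of the real dataset:

from collections import Counter
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

# Synthetic stand-in for an imbalanced 3-class dataset (~90% / 5% / 5%)
X, y = make_classification(n_samples=1000, n_classes=3, n_informative=4,
                           weights=[0.9, 0.05, 0.05], random_state=0)
print(Counter(y))  # original, imbalanced class counts

# SMOTE synthesises new minority-class samples from their nearest neighbours
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print(Counter(y_res))  # every class now has as many samples as the majority class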

SUN

Posted 2019-08-01T04:34:08.690

Reputation: 21

I understand that. My question is: what should I put in the labels argument of the f1_score function? I have edited the question with the code snippet. – Sourabh – 2019-08-01T08:07:08.877

As per the documentation, "For multilabel targets, labels are column indices. By default, all labels in y_true and y_pred are used in sorted order". So I believe you can ignore this parameter, as it is optional. – SUN – 2019-08-01T11:29:32.877
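
For reference, a tiny sketch (toy arrays) of that default behaviour - omitting labels is the same as listing every label present, in sorted order:

from sklearn.metrics import f1_score

y_true = ['C', 'A', 'B', 'A', 'C']
y_pred = ['C', 'A', 'B', 'A', 'B']

# Per-class scores come back in sorted label order: A, B, C
print(f1_score(y_true, y_pred, average=None))

# Equivalent to spelling the default out explicitly
print(f1_score(y_true, y_pred, labels=['A', 'B', 'C'], average=None))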