Validation score (f1) remains the same when swapping labels


I have an imbalanced dataset (there are roughly 10x as many True labels as False labels) and therefore use the F-beta score as a metric for model performance, as such:

from keras import backend as K


def fbeta(beta):
    # The inner function is named "f1" so the metric is logged as "f1" / "val_f1".
    def f1(y_true, y_pred):
        def recall(y_true, y_pred):
            true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
            possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
            recall = true_positives / (possible_positives + K.epsilon())
            return recall

        def precision(y_true, y_pred):
            true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
            predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
            precision = true_positives / (predicted_positives + K.epsilon())
            return precision

        beta_squared = beta ** 2
        precision = precision(y_true, y_pred)
        recall = recall(y_true, y_pred)
        fbeta_score = (beta_squared + 1) * (precision * recall) / (beta_squared * precision + recall + K.epsilon())
        return fbeta_score
    return f1
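
For reference, the metric can be sanity-checked outside of training by evaluating it on small constant tensors. This is only a sketch: the toy arrays below are made up for illustration, and the import path assumes the standalone keras package with a TensorFlow backend (tensorflow.keras would work analogously):

import numpy as np
from keras import backend as K

# Hypothetical one-hot labels and predictions for four samples (illustrative values only).
y_true = K.constant(np.array([[1., 0.], [0., 1.], [0., 1.], [0., 1.]]))
y_pred = K.constant(np.array([[1., 0.], [0., 1.], [1., 0.], [0., 1.]]))

metric = fbeta(beta=1)
print(K.eval(metric(y_true, y_pred)))  # evaluates the symbolic result to a single scalar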

Running my model, I get the following output:

Epoch 00001: val_f1 improved from -inf to 0.90196, saving model to output\LSTM\weights\floods_weights.h5
Epoch 2/3
 - 0s - loss: 0.4114 - f1: 0.8522 - val_loss: 0.4193 - val_f1: 0.9020
Epoch 00002: val_f1 did not improve from 0.90196
Epoch 3/3
 - 0s - loss: 0.3386 - f1: 0.8867 - val_loss: 0.3589 - val_f1: 0.9020
Epoch 00003: val_f1 did not improve from 0.90196
Evaluating:
51/51 [==============================] - 0s 372us/step
Final Accuracy : 0.9019607305526733

However, when I run the model again with the y-labels swapped (True->False, False->True), I get exactly the same values. I do not understand why, since the F-measure should be highly unlikely to produce identical results after the labels are swapped.

When assessing the final model using the following code for precision and recall, I do get different results (0.6 vs. 0.95788):

import numpy as np

prediction = model.predict(data['X_test'], batch_size=64)

# Count true positives, false positives and false negatives by hand.
tp = 0
fp = 0
fn = 0
for y, p in zip(data['y_test'], np.argmax(prediction, axis=1)):
    y = y[0]
    if y == 1 and p == 1:
        tp += 1
    elif p == 1 and y == 0:
        fp += 1
    elif p == 0 and y == 1:
        fn += 1

precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(2 * (precision * recall) / (precision + recall))  # F1 score
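
As a cross-check of the hand-counted values above, the same quantities can be computed with scikit-learn. This is only a sketch and assumes data['y_test'] holds a single 0/1 label per sample, as the y[0] indexing above suggests:

import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = np.asarray(data['y_test']).ravel()  # flatten (n_samples, 1) labels to a 1-D array
y_hat = np.argmax(prediction, axis=1)        # predicted class indices

print(precision_score(y_true, y_hat))
print(recall_score(y_true, y_hat))
print(f1_score(y_true, y_hat))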

What's happening?

Edit:

I am compiling the model as follows:

from keras.optimizers import Adam

beta = 1
model.compile(
    optimizer=Adam(lr=1e-3),
    loss='categorical_crossentropy',
    metrics=[fbeta(beta)]
)

Jens de Bruijn

Posted 2018-05-25T14:18:01.100


This may be a less relevant point, but would you mind showing your model creation code section (at least model.compile(metrics=[YOUR-METRIC], ...))? I wonder if the way you pass in your fbeta custom function might make a difference. – David C. – 2018-05-29T15:05:06.580

Certainly! I edited the question to include the compilation of the model. – Jens de Bruijn – 2018-05-30T07:34:53.130

Can you add the code where you swap the labels? – kbrose – 2018-05-30T14:02:53.297

No answers