How to get accuracy, F1, precision and recall, for a keras model?

I want to compute the precision, recall and F1-score for my binary KerasClassifier model, but I can't find a solution.

Here's my actual code:

# Split dataset in train and test data 
X_train, X_test, Y_train, Y_test = train_test_split(normalized_X, Y, test_size=0.3, random_state=seed)

# Build the model
model = Sequential()
model.add(Dense(23, input_dim=45, kernel_initializer='normal', activation='relu'))
model.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))

# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])


tensorboard = TensorBoard(log_dir="logs/{}".format(time.time()))

time_callback = TimeHistory()

# Fit the model
history = model.fit(X_train, Y_train, validation_split=0.3, epochs=200, batch_size=5, verbose=1, callbacks=[tensorboard, time_callback]) 

And then I am predicting on new test data, and getting the confusion matrix like this:

y_pred = model.predict(X_test)
y_pred = (y_pred > 0.5)  # threshold the probabilities at 0.5

cm = confusion_matrix(Y_test, y_pred)
print(cm)

But is there any solution to get the accuracy-score, the F1-score, the precision, and the recall? (If not complicated, also the cross-validation-score, but not necessary for this answer)

Thank you for any help!

ZelelB

Posted 2019-02-06T13:29:24.533

Reputation: 717

How did you normalize X btw? – jtlz2 – 2020-05-07T17:17:31.670

Answers

Precision, recall and F1 were removed from Keras core in version 2.0, so you need to calculate them manually. They were removed because they are global metrics, while Keras computes metrics batch by batch; averaging batch-level values of a global metric can be more misleading than helpful.
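To see why batch-wise values can mislead, here is a small self-contained sketch (plain NumPy, nothing from Keras): averaging per-batch precision does not give the same number as precision computed over all samples at once.

```python
import numpy as np

def precision(y_true, y_pred):
    # global precision: true positives / predicted positives
    tp = np.sum((y_true == 1) & (y_pred == 1))
    pp = np.sum(y_pred == 1)
    return tp / pp if pp else 0.0

# Two batches with different sizes and class balance
batches_true = [np.array([1, 0]), np.array([1, 1, 0, 0])]
batches_pred = [np.array([1, 0]), np.array([1, 1, 1, 1])]

# What a per-batch metric reports: the mean of the batch values
batch_avg = np.mean([precision(t, p)
                     for t, p in zip(batches_true, batches_pred)])

# The true global value, computed over all samples at once
global_p = precision(np.concatenate(batches_true),
                     np.concatenate(batches_pred))

print(batch_avg)  # 0.75
print(global_p)   # 0.6
```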

However, if you really need them, you can compute them like this:

from keras import backend as K

def recall_m(y_true, y_pred):
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
    recall = true_positives / (possible_positives + K.epsilon())
    return recall

def precision_m(y_true, y_pred):
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    precision = true_positives / (predicted_positives + K.epsilon())
    return precision

def f1_m(y_true, y_pred):
    precision = precision_m(y_true, y_pred)
    recall = recall_m(y_true, y_pred)
    return 2*((precision*recall)/(precision+recall+K.epsilon()))

# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc',f1_m,precision_m, recall_m])

# fit the model
history = model.fit(Xtrain, ytrain, validation_split=0.3, epochs=10, verbose=0)

# evaluate the model
loss, accuracy, f1_score, precision, recall = model.evaluate(Xtest, ytest, verbose=0)

Tasos

Posted 2019-02-06T13:29:24.533

Reputation: 3 340

If they can be misleading, how do you evaluate a Keras model then? – ZelelB – 2019-02-06T13:52:11.773

Since Keras calculates those metrics at the end of each batch, they can differ from the "real" global metrics. An alternative is to split your dataset into training and test parts, predict on the test part, and, since you know the true labels, calculate precision and recall manually. – Tasos – 2019-02-06T14:03:20.210
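A minimal sketch of that manual approach, with made-up labels and probabilities standing in for the output of model.predict on a held-out test set:

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score

# Made-up true labels and predicted probabilities for the held-out test set
y_test = np.array([0, 1, 1, 0, 1, 0])
y_prob = np.array([0.2, 0.8, 0.4, 0.1, 0.9, 0.7])

y_pred = (y_prob > 0.5).astype(int)  # threshold at 0.5, as in the question

print(precision_score(y_test, y_pred))  # 2 of the 3 predicted positives are correct
print(recall_score(y_test, y_pred))     # 2 of the 3 true positives are found
print(f1_score(y_test, y_pred))
```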

Any idea why this is not working on validation for me? works fine for training. – Rodrigo Ruiz – 2020-01-12T07:07:36.400

Is there a reason why I get recall values higher than 1? – Panathinaikos – 2020-03-29T10:02:12.310

recall and precision going higher than 1 for categorical classification – rsd96 – 2020-05-08T12:25:09.197

@Panathinaikos these functions only work correctly for binary classification. – Zeeshan Ali – 2020-08-27T11:40:27.140

You could use the scikit-learn classification report. To convert your labels into a numerical or binary format take a look at the scikit-learn label encoder.

from sklearn.metrics import classification_report

y_pred = model.predict(x_test, batch_size=64, verbose=1)
y_pred_bool = np.argmax(y_pred, axis=1)

print(classification_report(y_test, y_pred_bool))

which gives you (output copied from the scikit-learn example):

              precision    recall  f1-score   support

     class 0       0.50      1.00      0.67         1
     class 1       0.00      0.00      0.00         1
     class 2       1.00      0.67      0.80         3
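If your labels are strings, the label encoder mentioned above can be used like this (a minimal sketch with made-up labels):

```python
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
labels = ['cat', 'dog', 'cat', 'bird']

# Classes are sorted alphabetically: bird -> 0, cat -> 1, dog -> 2
y = le.fit_transform(labels)
print(list(y))                        # [1, 2, 1, 0]
print(list(le.inverse_transform(y)))  # back to the original strings
```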

matze

Posted 2019-02-06T13:29:24.533

Reputation: 171

This is what I use, simple and effective. – Matthew – 2019-02-06T16:30:26.003

You can also try the following:

from sklearn.metrics import f1_score, precision_score, recall_score, confusion_matrix
y_pred1 = model.predict(X_test)
y_pred = np.argmax(y_pred1, axis=1)

# Print f1, precision, and recall scores
print(precision_score(y_test, y_pred, average="macro"))
print(recall_score(y_test, y_pred, average="macro"))
print(f1_score(y_test, y_pred, average="macro"))
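Note that average="macro" is only one choice; a quick sketch on a small made-up multiclass example shows how the averaging options differ:

```python
import numpy as np
from sklearn.metrics import f1_score

y_true = np.array([0, 0, 0, 1, 2])
y_pred = np.array([0, 0, 1, 1, 2])

print(f1_score(y_true, y_pred, average="macro"))     # unweighted mean of per-class F1
print(f1_score(y_true, y_pred, average="micro"))     # from global TP/FP/FN counts
print(f1_score(y_true, y_pred, average="weighted"))  # per-class F1 weighted by support
```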

Ashok Kumar Jayaraman

Posted 2019-02-06T13:29:24.533

Reputation: 133

Try precision_recall_fscore_support from sklearn.metrics, with Y_test, y_pred as parameters.

Viacheslav Komisarenko

Posted 2019-02-06T13:29:24.533

Reputation: 342

I tried this: model.recision_recall_fscore_support(Y_test, y_pred, average='micro') and get this error on execution: AttributeError: 'Sequential' object has no attribute 'recision_recall_fscore_support' – ZelelB – 2019-02-06T13:51:12.763

You don't need model.precision_recall_fscore_support(); just call precision_recall_fscore_support(Y_test, y_pred, average='micro') (without "model.", and make sure you have the correct import: from sklearn.metrics import precision_recall_fscore_support) – Viacheslav Komisarenko – 2019-02-06T13:59:53.110
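A self-contained example of that call, with made-up labels standing in for Y_test and the thresholded predictions:

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

# Made-up labels standing in for Y_test and the thresholded y_pred
Y_test = np.array([0, 1, 1, 0, 1])
y_pred = np.array([0, 1, 0, 0, 1])

precision, recall, fscore, _ = precision_recall_fscore_support(
    Y_test, y_pred, average='micro')

# With micro averaging, precision, recall and F1 all equal the accuracy
print(precision, recall, fscore)  # 0.8 0.8 0.8
```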

See the Keras docs: newer versions ship precision and recall as built-in metric classes.

import tensorflow as tf

model.compile(..., metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
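A self-contained sketch of that approach on a tiny synthetic dataset (the layer sizes, epochs and data here are arbitrary, just to show the wiring): model.evaluate then returns the loss followed by the metrics in the order passed to compile.

```python
import numpy as np
import tensorflow as tf

# Tiny synthetic binary problem, purely to demonstrate the metric wiring
rng = np.random.default_rng(0)
X = rng.random((64, 4)).astype("float32")
y = (X[:, 0] > 0.5).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
model.fit(X, y, epochs=2, verbose=0)

# Order matches compile(): loss first, then the metrics
loss, precision, recall = model.evaluate(X, y, verbose=0)
print(precision, recall)
```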

Justin Lange

Posted 2019-02-06T13:29:24.533

Reputation: 101