## How to find classification accuracy in autoencoders?

How can we find the accuracy of an autoencoder used for image classification? The network outputs a reconstruction of the image, so when we plug in the test data it will spit out an image — but then how would we be able to calculate the accuracy? Or should we take the difference between the prediction and the input image?

Do you know its cost function? This is also a question of mine :) – Media – 2018-01-26T15:55:02.287

The loss function is mean squared error @Media. I think we have to calculate the reconstruction error? What do you say? – Boris – 2018-01-26T16:41:20.400

Actually I had the same idea, but I thought it was wrong. Unfortunately, I haven't yet seen any exact cost function in autoencoder-related papers. – Media – 2018-01-26T16:53:27.627

Are you using the autoencoder for classification or reconstruction? If you are pre-training the autoencoder for classification, then you use the usual log loss to determine the accuracy of your classifier. If, on the other hand, you are using the autoencoder for reconstruction, then there is no classification, and you can use something like KL divergence (described in detail here) to measure and track your reconstruction and compression performance.
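For the reconstruction case, the error can be tracked directly as the mean squared difference between input and output. A minimal sketch, where `x_test` and `x_reconstructed` are stand-ins for your real test images and the autoencoder's outputs (here faked with noise purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for real data: x_test would be your (flattened) test images,
# x_reconstructed would be something like autoencoder.predict(x_test).
x_test = rng.random((100, 28 * 28))
x_reconstructed = x_test + 0.01 * rng.standard_normal((100, 28 * 28))

# Per-image mean squared reconstruction error
mse_per_image = np.mean((x_test - x_reconstructed) ** 2, axis=1)

# A single scalar you can track across epochs
mean_mse = float(mse_per_image.mean())
print(f"mean reconstruction MSE: {mean_mse:.6f}")
```

A lower mean MSE on held-out data indicates better reconstruction, which plays the role that accuracy plays for a classifier.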

1). Using a confusion matrix is a good way to interpret and present the accuracy of an autoencoder-based classifier.

y_pred = Model.predict(X_test) will give the predicted classes for the test vectors.

2). The code below creates the confusion matrix:

# Making the confusion matrix
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y_test, y_pred)
cm


Say we have 5 classes; then the below could be an illustrative confusion matrix.

array([[ 109,    0,    4,    1,   87],
       [   0,  282,   13,    0,  140],
       [   0,    6,  474,    6,  757],
       [   3,    4,   50,  174,  358],
       [   1,    0,   29,    2, 2500]], dtype=int64)


Here the elements on the diagonal are correctly predicted (true positives, TP). The off-diagonal elements along a column are false positives (FP), and the off-diagonal elements along a row are false negatives (FN).

The precision metric, TP/(TP+FP), can be used to assess the accuracy of the model. Its conjugate metric, recall, TP/(TP+FN), is often desired as well. Depending on the nature of the problem, either or both of these metrics should be presented to articulate the model's performance. An ROC curve, which plots the true positive rate against the false positive rate, provides further insight: the farther the curve from the line y=x, the better the operating accuracy (i.e., more correct classifications and fewer wrong ones).
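These quantities can be read straight off the confusion matrix. A sketch using the illustrative 5-class matrix above (NumPy only; the numbers are the example's, not real results):

```python
import numpy as np

# The illustrative 5-class confusion matrix from above
cm = np.array([[ 109,    0,    4,    1,   87],
               [   0,  282,   13,    0,  140],
               [   0,    6,  474,    6,  757],
               [   3,    4,   50,  174,  358],
               [   1,    0,   29,    2, 2500]])

tp = np.diag(cm)                # diagonal: correctly predicted, per class
fp = cm.sum(axis=0) - tp        # column sums minus diagonal
fn = cm.sum(axis=1) - tp        # row sums minus diagonal

precision = tp / (tp + fp)      # TP / (TP + FP), per class
recall = tp / (tp + fn)         # TP / (TP + FN), per class
accuracy = tp.sum() / cm.sum()  # overall fraction of correct predictions

print("precision:", np.round(precision, 3))
print("recall:   ", np.round(recall, 3))
print("accuracy: ", round(float(accuracy), 3))
```

The same per-class numbers can also be obtained from `sklearn.metrics.precision_score` and `recall_score` with `average=None`, given the raw labels instead of the matrix.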

See an illustration in this paper for binary classification: https://www.datascience.com/blog/fraud-detection-with-tensorflow

The packaged accuracy metric provided by Keras is based on the Euclidean distance between the predicted and actual class per record per epoch. It lacks the ease of presentation of the confusion matrix described above. See: