1). Using a confusion matrix is the best way to interpret and present the accuracy of the autoencoder.
y_pred = Model.predict(X_test) gives the model's output for each test vector (for a Keras classifier this is typically a vector of class probabilities, so take the argmax to get the predicted class label).
2). The code below creates the confusion matrix:
# Making the Confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
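A minimal end-to-end sketch of the step above, using made-up probability outputs in place of Model.predict(X_test) (the 3-class data here is purely illustrative):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical ground truth for 4 test records over 3 classes.
y_test = np.array([0, 1, 2, 2])

# Stand-in for Model.predict(X_test): one probability row per record.
probs = np.array([[0.8, 0.1, 0.1],
                  [0.2, 0.7, 0.1],
                  [0.1, 0.2, 0.7],
                  [0.6, 0.3, 0.1]])

# Convert probabilities to hard class labels before building the matrix.
y_pred = probs.argmax(axis=1)

# Rows are actual classes, columns are predicted classes.
cm = confusion_matrix(y_test, y_pred)
print(cm)
```

Note that confusion_matrix expects class labels, not probabilities; passing raw Keras predict() output directly will raise an error for multiclass problems.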
Say we have 5 classes; then the following could be an illustrative confusion matrix:
array([[ 109,    0,    4,    1,   87],
       [   0,  282,   13,    0,  140],
       [   0,    6,  474,    6,  757],
       [   3,    4,   50,  174,  358],
       [   1,    0,   29,    2, 2500]], dtype=int64)
Here the elements on the diagonal are the correctly predicted samples (true positives, TP).
For a given class, the off-diagonal elements along its column are false positives (FP).
The off-diagonal elements along its row are false negatives (FN).
The precision metric, TP/(TP+FP), measures how reliable the model's predictions for a class are; its counterpart recall, TP/(TP+FN), measures how many of the actual members of that class the model finds. Depending on the nature of the problem, one or both of these metrics should be reported to articulate model performance. A ROC curve, which plots the true positive rate against the false positive rate, provides further insight: the farther the curve lies from the line y = x, the better the operating accuracy (i.e. more correct classifications and fewer wrong ones).
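Per-class precision and recall fall straight out of the confusion matrix: the diagonal holds the TP counts, column sums give TP+FP, and row sums give TP+FN. A quick numeric check using the illustrative 5x5 matrix above:

```python
import numpy as np

# The illustrative 5-class confusion matrix from above.
cm = np.array([[ 109,    0,    4,    1,   87],
               [   0,  282,   13,    0,  140],
               [   0,    6,  474,    6,  757],
               [   3,    4,   50,  174,  358],
               [   1,    0,   29,    2, 2500]])

tp = np.diag(cm)                  # correct predictions per class
precision = tp / cm.sum(axis=0)   # TP / (TP + FP): divide by column sums
recall = tp / cm.sum(axis=1)      # TP / (TP + FN): divide by row sums

print(np.round(precision, 3))
print(np.round(recall, 3))
```

Note how the last class has low precision but high recall: the model over-predicts it (many of the large off-diagonal counts sit in its column), which is exactly the kind of imbalance a single accuracy number hides.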
See an illustration for the binary classification case in this post: https://www.datascience.com/blog/fraud-detection-with-tensorflow
The packaged accuracy metric provided by Keras reports only the fraction of records whose predicted class matches the actual class, aggregated per epoch. It lacks the ease of presentation of the confusion matrix described above, which breaks the errors down per class.
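That single overall accuracy number is recoverable from the confusion matrix as well: it is just the trace (total correct predictions) divided by the total count. Using the illustrative matrix above:

```python
import numpy as np

# The illustrative 5-class confusion matrix from above.
cm = np.array([[ 109,    0,    4,    1,   87],
               [   0,  282,   13,    0,  140],
               [   0,    6,  474,    6,  757],
               [   3,    4,   50,  174,  358],
               [   1,    0,   29,    2, 2500]])

# Overall accuracy: correctly classified records / all records.
accuracy = np.trace(cm) / cm.sum()
print(accuracy)
```

So the confusion matrix strictly subsumes the scalar accuracy metric, while also exposing which classes the errors concentrate in.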