There is no strict relationship between these two metrics.

Loss can be seen as a **distance** between the true values of the problem and the values predicted by the model. The greater the loss, the larger the errors you made on the data.

Accuracy can be seen as a measure of the **number** of errors you made on the data (the fewer the errors, the higher the accuracy).

That means:

- a low accuracy and a huge loss means you made huge errors on a lot of the data

- a low accuracy but a low loss means you made small errors on a lot of the data

- a high accuracy with a low loss means you made small errors on few of the data (best case)

- your situation: a high accuracy but a huge loss means you made huge errors on few of the data

In your case, the third model can correctly predict more examples, but on those where it was wrong, it made larger errors (the distance between the true values and the predicted values is greater).
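A minimal sketch of how this can happen, with made-up labels and predicted probabilities: two binary classifiers get the same accuracy, but one is confidently wrong on its mistake, so its cross-entropy loss is much larger.

```python
import numpy as np

# Illustrative data: 5 binary labels and two sets of predicted probabilities.
y_true = np.array([1, 1, 1, 1, 0])

# Model A: correct on 4/5 examples; its one mistake is mild (p = 0.4 for a true 1).
p_a = np.array([0.9, 0.8, 0.9, 0.40, 0.2])
# Model B: also correct on 4/5 examples; its one mistake is confident (p = 0.01).
p_b = np.array([0.9, 0.8, 0.9, 0.01, 0.2])

def log_loss(y, p):
    # Average binary cross-entropy: lower is better.
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def accuracy(y, p):
    # Threshold probabilities at 0.5 to get hard predictions.
    return np.mean((p >= 0.5) == y)

print(accuracy(y_true, p_a), log_loss(y_true, p_a))  # 0.8, ~0.31
print(accuracy(y_true, p_b), log_loss(y_true, p_b))  # 0.8, ~1.05
```

Both models misclassify the same single example, so accuracy cannot tell them apart, while the loss heavily penalizes model B for being confidently wrong.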

**NOTE:**

Don't forget that "low" or "huge" loss is a subjective assessment, which **depends** on the problem and the data. It's a distance between the true value and the prediction made by the model. It also depends on which loss function you use.

Think:

- If your data are between 0 and 1, a loss of 0.5 is huge, but if your data are between 0 and 255, an error of 0.5 is low.

- Or think of cancer detection and the probability of detecting a cancer. An error of 0.1 may be huge for this problem, whereas an error of 0.1 for image classification is fine.
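The first bullet above is just arithmetic: the same absolute error looks very different relative to the scale of the data. A quick illustration (the ranges are the examples from the text):

```python
# The same absolute error of 0.5 relative to two different data scales.
error = 0.5
range_small = 1.0    # data normalized to [0, 1]
range_large = 255.0  # raw 8-bit values in [0, 255]

print(error / range_small)  # 0.5     -> 50% of the data range: huge
print(error / range_large)  # ~0.00196 -> under 0.2% of the range: low
```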

Please familiarize yourself with proper scoring rules. Kolassa's answer and the links within it give a good rabbit hole into which you can dive: https://stats.stackexchange.com/questions/464636/proper-scoring-rule-when-there-is-a-decision-to-make-e-g-spam-vs-ham-email. Briefly, accuracy is a highly problematic way of evaluating a classifier, counterintuitive as that seems.

– Dave – 2020-09-05T18:50:25.677