The purpose of this algorithm appears to be reducing cost, so a metric that includes the financial risk associated with defaults should be preferred over one that simply counts defaults.
A `false_positives * loss_on_default` metric would seem ideal, where you set the probability cutoff for the positive class (auto-approve) so that 80% of loans are approved, regardless of the absolute probabilities of the predictions. Conceptually this is similar to picking a point on the ROC curve and assessing the algorithm at that point.
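A minimal sketch of that metric, assuming a per-loan `loss_on_default` array is available; the function name and the exact quantile-based thresholding rule are my own choices for illustration:

```python
import numpy as np

def default_cost_at_approval_rate(y_true, y_prob, loss_on_default, approval_rate=0.80):
    """Total financial loss from approved loans that defaulted, with the
    cutoff chosen so that `approval_rate` of all loans are auto-approved.

    y_true          -- 1 if the loan defaulted, 0 if it was repaid
    y_prob          -- model's predicted probability of default
    loss_on_default -- financial loss on each loan if it defaults
    """
    # Approve the approval_rate fraction of loans the model scores safest,
    # regardless of the absolute probability values
    cutoff = np.quantile(y_prob, approval_rate)
    approved = y_prob <= cutoff
    # The costly mistakes are approved loans that then defaulted
    return float(np.sum(loss_on_default[approved & (y_true == 1)]))

y_true = np.array([0, 0, 1, 0, 1])
y_prob = np.array([0.1, 0.2, 0.9, 0.3, 0.4])
losses = np.array([1000.0, 500.0, 2000.0, 800.0, 1500.0])
cost = default_cost_at_approval_rate(y_true, y_prob, losses)
```

Here four of the five loans clear the cutoff, and only the approved loan that defaulted (loss 1500) contributes to the cost.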
This metric could not be used directly as an objective function; to train a model you would probably want something like log loss weighted per sample by `loss_on_default`.
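For instance, scikit-learn estimators and metrics generally accept a `sample_weight` argument, so you can pass the financial losses straight through at both fit and evaluation time (the synthetic data here is just a placeholder):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

# Synthetic stand-in data: 200 loans, 3 features, per-loan loss amounts
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)
loss_on_default = rng.uniform(500, 5000, size=200)

# Weight each sample by its financial loss, so high-loss loans dominate
# both the fitted model and the evaluation metric
clf = LogisticRegression().fit(X, y, sample_weight=loss_on_default)
p = clf.predict_proba(X)[:, 1]
weighted_ll = log_loss(y, p, sample_weight=loss_on_default)
```

This optimises a loss-weighted surrogate during training, which you would then sanity-check against the threshold-based cost metric on a held-out set.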
I did a quick search for similar metrics and could not find anything (it is also not something I have done before, so please test the idea carefully; it is mostly conjecture on my part). There are plenty of metrics in e.g. scikit-learn, however, that take a `sample_weight` array and do something similar. scikit-learn's `zero_one_loss` is pretty close, except it will count false negatives as well.
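To make that caveat concrete, here is `zero_one_loss` with `sample_weight` on a tiny made-up example; the numbers are arbitrary:

```python
import numpy as np
from sklearn.metrics import zero_one_loss

y_true = np.array([1, 0, 1, 0])   # 1 = positive class (auto-approve)
y_pred = np.array([0, 1, 1, 0])   # one false negative, one false positive
losses = np.array([100.0, 400.0, 50.0, 10.0])

# Each misclassification counts in proportion to its weight. Note that the
# false negative (weight 100) contributes alongside the false positive
# (weight 400), unlike a pure false-positive cost.
weighted = zero_one_loss(y_true, y_pred, sample_weight=losses)
```

With the default `normalize=True` this returns the misclassified weight as a fraction of the total weight, i.e. (100 + 400) / 560 here.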