In my opinion, this is subjective and problem-specific. You should use whatever factor is most important in your mind as the driving metric, as this can keep your decisions about how to alter the model better focused.
Most metrics one can compute will be correlated/similar in many ways: e.g. if you use MSE as your loss, then also recording MAPE (mean absolute percentage error) or the simple $L_1$ loss will give you comparable loss curves.
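To make that concrete, here is a minimal sketch (with made-up numbers, no real model) showing that MSE, $L_1$ loss (MAE) and MAPE all rank the same two sets of predictions the same way, which is why their curves typically track each other during training:

```python
# Illustrative only: tiny hand-made targets and predictions,
# not output from any actual training run.

def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):  # the L1 loss
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mape(y_true, y_pred):  # mean absolute percentage error (targets must be non-zero)
    return 100 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [10.0, 20.0, 30.0]
good   = [10.5, 19.5, 30.5]   # small errors
bad    = [13.0, 17.0, 33.0]   # larger errors

# All three metrics agree that `good` beats `bad`.
for metric in (mse, mae, mape):
    print(metric.__name__, metric(y_true, good), metric(y_true, bad))
```

The ranking agrees here, but the metrics are not interchangeable everywhere: MAPE, for instance, weights errors on small targets more heavily and is undefined when a target is zero.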
For example, if you will report an F1-score in your report/to your boss etc. (and assuming that is what they really care about), then using that metric could make the most sense. The F1-score takes precision and recall into account, i.e. it summarises the relationship between two finer-grained metrics.
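As a quick sketch of how F1 combines the two (using small hand-made label lists, purely illustrative), the F1-score is the harmonic mean $F_1 = 2PR/(P+R)$ of precision $P$ and recall $R$:

```python
def precision_recall_f1(y_true, y_pred):
    """Precision, recall and F1 for binary labels (1 = positive class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0]  # one false negative, one false positive
p, r, f1 = precision_recall_f1(y_true, y_pred)
print(p, r, f1)  # 2/3 for all three in this symmetric case
```

In practice you would use a library implementation (e.g. `sklearn.metrics.f1_score`), but the manual version makes the precision/recall trade-off explicit.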
Bringing those things together: computing scores other than the raw loss can be useful for an overview, and for seeing how your final metric is optimised over the course of the training iterations. That relationship could give you a deeper insight into the problem.
It is usually best to try several options, however, as optimising for the validation loss may allow training to run for longer, which may eventually also produce a superior F1-score. Precision and recall might oscillate around some local minimum, producing an almost static F1-score, so you would stop training. Had you been monitoring the raw loss instead, you might have recorded enough fluctuation in the loss to justify training for longer.
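This last point can be sketched with a simple patience-based early-stopping rule on synthetic curves (made-up numbers, no real model): when the rule watches a plateauing F1 it halts early, while the same rule watching a still-decreasing validation loss lets training continue.

```python
def epochs_until_stop(history, patience=3, min_delta=1e-3, mode="max"):
    """Return the 1-based epoch at which patience-based early stopping
    fires, or len(history) if it never does."""
    best = history[0]
    waited = 0
    for epoch, value in enumerate(history[1:], start=2):
        improved = (value > best + min_delta) if mode == "max" else (value < best - min_delta)
        if improved:
            best = value
            waited = 0
        else:
            waited += 1
            if waited >= patience:
                return epoch
    return len(history)

# Synthetic curves: F1 flattens mid-training before improving again,
# while the validation loss keeps creeping down the whole time.
f1_curve   = [0.60, 0.70, 0.74, 0.745, 0.745, 0.746, 0.744, 0.780, 0.800]
loss_curve = [0.90, 0.70, 0.60, 0.55, 0.52, 0.49, 0.47, 0.40, 0.35]

print(epochs_until_stop(f1_curve, mode="max"))    # stops during the plateau
print(epochs_until_stop(loss_curve, mode="min"))  # runs to the final epoch
```

With these particular numbers, stopping on F1 abandons training during the plateau and misses the later improvement, whereas stopping on the loss keeps training alive through it, which is exactly the failure mode described above.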