Its flaws are well-known and serious. To recall, an inference from A to B is valid iff all interpretations of the "non-logical constants" that make A true also make B true. What are interpretations, a.k.a. models or possible worlds? They are metaphysically loaded (a nominalist would reject their use) and inherently vague: the leading theories, like Kripke's and Lewis's, disagree on basics of how they function. It is hard to agree on the truth of A and B if we do not agree whether "water" refers to anything in a given interpretation. This is of course related to having to understand the "meanings" of sentences to ascertain their truth, and it brings up a bag of problems with Carnap's analytic/synthetic distinction, Quine's criticisms of meaning and synonymy, etc.
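In the propositional case, at least, the Tarskian definition can be made mechanical. A minimal sketch (my own illustration, not part of the original question; the formula encoding is made up for the example): validity is checked by surveying every interpretation, i.e. every truth assignment to the atoms.

```python
# Sketch of Tarskian (model-theoretic) validity for propositional logic.
# Formulas: an atom is a string; compound formulas are tuples like
# ("not", f), ("and", f, g), ("->", f, g).
from itertools import product

def evaluate(formula, world):
    """Truth value of a formula in one interpretation (a dict atom -> bool)."""
    if isinstance(formula, str):
        return world[formula]
    op = formula[0]
    if op == "not":
        return not evaluate(formula[1], world)
    if op == "and":
        return evaluate(formula[1], world) and evaluate(formula[2], world)
    if op == "->":
        return (not evaluate(formula[1], world)) or evaluate(formula[2], world)
    raise ValueError(f"unknown connective: {op}")

def tarski_valid(premise, conclusion, atoms):
    """Valid iff every interpretation making the premise true makes the conclusion true."""
    for values in product([True, False], repeat=len(atoms)):
        world = dict(zip(atoms, values))
        if evaluate(premise, world) and not evaluate(conclusion, world):
            return False  # a counter-model: premise true, conclusion false
    return True

# P and (P -> Q) entails Q; P alone does not entail Q
print(tarski_valid(("and", "P", ("->", "P", "Q")), "Q", ["P", "Q"]))  # prints True
print(tarski_valid("P", "Q", ["P", "Q"]))                             # prints False
```

Note that even here the check proceeds by brute-force survey of interpretations, which is exactly the procedure the question argues nobody actually uses when verifying an inference.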

Moreover, when we want to verify the validity of an inference, we do not turn on our Platonic mindsight and survey possible worlds for truth values; instead, we try to find an argument that deduces B from A. So Tarski's notion is not only metaphysically loaded but pointlessly so. And pedagogically speaking, it has no adequate counterpart for "argument" as intuitively understood, which leads some to identify "argument" with inference, and to puzzled questions like "why can't we have invalid arguments with tautological conclusions?"

It would be one thing if we were stuck with nothing better and had to make do. But there is a deductive notion of validity that has none of these problems. A deduction is valid if each step is obtained from previous ones by the usual logical rules (modus ponens, etc.). An inference is valid if there is a valid deduction with the same premises and conclusion. A deduction is the formal counterpart to an intuitive argument; its validity is determined not solely by premises and conclusion but by all of the steps. And the deductive notion of validity tracks how we actually verify logical validity, unlike Tarskian inference. The need to deal with "meanings" is much reduced, and the metaphysical load is accordingly lighter. See McKeon's IEP article.
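The deductive notion can be sketched just as mechanically, and without surveying interpretations at all. A minimal sketch (my own illustration, restricted to modus ponens as the only rule; the formula encoding is made up for the example): a deduction is checked step by step, each step either a premise or obtained from earlier steps.

```python
# Sketch of deductive validity: a deduction is a list of steps, each of which
# is a premise or follows from earlier steps by modus ponens.
# Atoms are strings; an implication A -> B is the tuple ("->", A, B).

def follows_by_mp(step, earlier):
    """Step B follows by modus ponens if some earlier A and earlier A -> B exist."""
    return any(("->", a, step) in earlier for a in earlier)

def valid_deduction(premises, steps):
    """Check each step against the premises and the steps before it."""
    seen = []
    for s in steps:
        if s in premises or follows_by_mp(s, seen):
            seen.append(s)
        else:
            return False  # an unjustified step invalidates the deduction
    return True

# From P and P -> Q, deduce Q; but Q out of nowhere is not justified
print(valid_deduction(["P", ("->", "P", "Q")], ["P", ("->", "P", "Q"), "Q"]))  # prints True
print(valid_deduction(["P"], ["Q"]))                                           # prints False
```

The point of the contrast: validity here is a purely syntactic property of the list of steps ("the laws of their combination"), with no appeal to what the symbols mean in any interpretation.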

In his 1847 Mathematical Analysis of Logic, Boole brandished as a key advantage of his logical calculus that "the validity of the processes of analysis does not depend upon the interpretation of the symbols which are employed, but solely upon the laws of their combination". Now that we have a much more advanced calculus, why is Tarski's definition so prevalent in textbooks and online sources, to the point where its non-exclusiveness and baggage are not even mentioned? Are there benefits that outweigh the costs, or is it just the inertia of tradition?

One of the reasons is that for higher-order logic the two notions do not coincide: "by a result of Gödel, HOL with standard semantics does not admit an effective, sound, and complete proof calculus."

– Mauro ALLEGRANZA – 2015-07-09T07:00:01.850

@Mauro ALLEGRANZA Gödel's result only shows that one should not restrict deductive arguments to a single universal deductive system, as in logicism, whether first- or higher-order; one has to work with a meta-language. But in the meta-language, results about HOL, including Gödel's, are still established by deductive arguments and not by Tarskian inference. – Conifold – 2015-07-10T21:29:27.120

It occurred to me that vagueness may actually count as a benefit: it is much easier to manipulate what counts as a valid argument by manipulating what counts as an interpretation, and making definitions in terms of them, than by manipulating deductive rules. Plantinga uses possible worlds to resurrect the ontological argument and prop up the free will defense of God's benevolence, Kripke uses them to argue for mind-body dualism, etc. – Conifold – 2015-07-13T19:09:25.500