Why is Tarski's notion of logical validity preferred to the deductive one?


Its flaws are well-known and serious. To recall: an inference from A to B is valid iff every interpretation of the "non-logical constants" that makes A true also makes B true. What are interpretations, a.k.a. models or possible worlds? They are metaphysically loaded (a nominalist would reject their use altogether) and inherently vague: the leading theories, like Kripke's or Lewis's, disagree on the basics of how they function. It is hard to agree on the truth of A and B if we do not agree on whether "water" refers to anything in a given interpretation. This is of course related to having to understand the "meanings" of sentences to ascertain their truth, and it opens a whole bag of problems: Carnap's analytic/synthetic distinction, Quine's criticisms of meaning and synonymy, and so on.
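To make the definition concrete: restricted to propositional logic, where "interpretations" collapse into mere truth assignments, the Tarskian definition can be sketched as a brute-force survey (the encoding below is my own illustration, not anything in Tarski):

```python
from itertools import product

def semantically_valid(premise, conclusion, atoms):
    """Tarskian validity, restricted to propositional logic: the inference
    is valid iff every truth assignment (the only 'interpretations' left
    at this level) that makes the premise true makes the conclusion true."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if premise(v) and not conclusion(v):
            return False  # found a counter-interpretation
    return True

# p and (p -> q), therefore q: semantically valid.
print(semantically_valid(lambda v: v["p"] and (not v["p"] or v["q"]),
                         lambda v: v["q"], ["p", "q"]))  # True
# q, therefore p: invalid.
print(semantically_valid(lambda v: v["q"],
                         lambda v: v["p"], ["p", "q"]))  # False
```

Even in this trivial setting the check surveys all interpretations rather than producing an argument; once quantifiers enter, the survey becomes infinite and the philosophical problems above begin.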

Moreover, when we want to verify the validity of an inference we do not switch on our Platonic mindsight and survey possible worlds for truth values; instead, we try to find an argument that deduces B from A. So Tarski's notion is not only metaphysically loaded but pointlessly so. Pedagogically speaking, it also has no adequate counterpart to "argument" as intuitively understood, which leads some to identify "argument" with inference, and to puzzled questions like "why can't we have invalid arguments with tautological conclusions?"

It would be one thing if we were stuck with nothing better and had to make do. But there is a deductive notion of validity that has none of these problems. A deduction is valid if each step follows from previous ones by the usual logical rules (modus ponens, etc.). An inference is valid if there is a valid deduction with the same premises and conclusion. A deduction is a formal counterpart to an intuitive argument: its validity is determined not solely by premises and conclusion but by all the steps. And the deductive notion of validity tracks how we actually verify logical validity, unlike Tarskian inference. The need to deal with "meanings" is much reduced, and the metaphysical load is accordingly lighter. See McKeon's IEP article.
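The difference in how validity is verified can be shown in miniature (a toy propositional system with modus ponens as the only rule; the encoding is my own illustration): a deduction is checked step by step, by the form of the sentences alone, with no interpretations in sight.

```python
def valid_deduction(premises, steps):
    """Purely structural check: every step must be a premise, an earlier
    step, or follow from earlier lines by modus ponens. An implication
    A -> B is encoded as the tuple ('->', A, B)."""
    derived = list(premises)
    for step in steps:
        ok = step in derived
        if not ok:
            for s in derived:
                # modus ponens: from A and ('->', A, B), conclude B
                if ("->", s, step) in derived:
                    ok = True
                    break
        if not ok:
            return False
        derived.append(step)
    return True

# Premises p, p -> q, q -> r; deduce q, then r.
print(valid_deduction(["p", ("->", "p", "q"), ("->", "q", "r")], ["q", "r"]))  # True
print(valid_deduction(["p"], ["q"]))  # False: q does not follow
```

Validity here is a property of the whole sequence of steps, checked by "the laws of their combination" alone.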

In his 1847 Mathematical Analysis of Logic, Boole advertised as a key advantage of his logical calculus that "the validity of the processes of analysis does not depend upon the interpretation of the symbols which are employed, but solely upon the laws of their combination". Now that we have a much more advanced calculus, why is Tarski's definition so prevalent in textbooks and online sources, to the point where its non-exclusiveness and its baggage are not even mentioned? Are there benefits that outweigh the costs, or is it just the inertia of tradition?

One of the reasons is that for higher-order logic the two notions do not coincide: "by a result of Gödel, HOL with standard semantics does not admit an effective, sound, and complete proof calculus."

– Mauro ALLEGRANZA – 2015-07-09T07:00:01.850

@Mauro ALLEGRANZA Gödel's result only shows that one should not restrict deductive arguments to a single universal deductive system, as logicism did; whether first- or higher-order, one has to work with a meta-language. But in the meta-language, results about HOL, including Gödel's, are still established by deductive arguments and not by Tarskian inference. – Conifold – 2015-07-10T21:29:27.120

It occurred to me that vagueness may actually count as a benefit: it is much easier to manipulate what counts as a valid argument by manipulating what counts as an interpretation, and making definitions in terms of them, than by manipulating deductive rules. Plantinga uses possible worlds to resurrect the ontological argument and to prop up the free-will defense of God's benevolence, Kripke uses them to argue for mind-body dualism, etc. – Conifold – 2015-07-13T19:09:25.500


Regarding Tarski's original motivation, we can look at the new English translation of Tarski's 1936 paper:

Even relatively recently it seemed to many logicians that they had managed, with the help of a relatively simple conceptual apparatus, to capture almost precisely the everyday content of the concept of following, or rather to define a new concept which with respect to its denotation would coincide with the everyday concept.

Thanks to the development of mathematical logic, we have learned during recent decades to present mathematical sciences in the form of formalized deductive theories. In these theories, as is well known, the proof of each theorem reduces to single or multiple application of a few simple rules of inference - such as the rule of substitution or detachment - rules which instruct us to which operations of a purely structural character (i.e. operations involving exclusively the external structure of the sentences) one has to subject axioms of the theory or previously proven theorems in order that the sentences obtained as a result of those operations may also be acknowledged as proven. Logicians began to suppose that those few rules of inference completely exhaust the content of the concept of following: whenever a sentence follows from others, it can be obtained from them - by a more or less complicated route - with the help of the operations specified in these rules.

Nevertheless, today we are already aware that the scepticism was here not at all out of place and that the position sketched above cannot be maintained. Already a few years ago, I gave an example - by the way a quite elementary one - of a deductive theory which exhibits the following peculiarity: [an example follows of an ω-incomplete theory].

[...]

The supposition suggests itself that on the route sketched above - supplementing the rules of inference used in the construction of deductive theories with further rules of a structural character - we would succeed finally in capturing the 'essential' content of the concept of following, which has by no means been exhausted by the rules used until now. Relying on the investigations of K. Gödel, one can demonstrate that this supposition is mistaken: if we abstract from certain theories with a very elementary structure, then always - no matter how we enrich the stock of rules of inference - we shall be able to construct sentences which follow in the everyday sense from the theorems of the deductive theory under consideration, but which cannot be proven in this theory on the basis of the accepted rules. In order to obtain the proper concept of following, essentially close to the everyday concept, one must resort in its definition to other methods altogether and use a quite distinct conceptual apparatus.

I mostly had in mind uses of logic outside of mathematics, like the analysis of natural-language arguments, where ω-completeness concerns are remote but the vagueness of interpretations is very consequential. Tarski took his 1936 motivations from Carnap, before Quine's criticism of meaning and analyticity made most of them untenable ("analytic" is Tarski's "inferred from the empty premise"). And in mathematics specifically there is no issue: students are first taught deductive reasoning, then formal systems, and only then model-theoretic concepts. – Conifold – 2015-07-10T21:14:01.090

But even models in mathematical logic do not escape vagueness beyond arithmetic: there is no standard model of set theory, and no way to single out any one model of set theory. Arithmetic is special because there is a "nice" second-order theory for it. It seems to me that model theory succeeds because, despite the usual phrasing, a "model" is just another deductive system where validity is established by producing arguments, not through Tarskian inference. – Conifold – 2015-07-10T21:15:43.690

So Gödelian arguments only work against a universal deductive system, not against the deductive notion of inference, which captures ω-completeness just fine when language and meta-language are used in combination. The Tarskian notion, on the other hand, is intuitively misleading, as it predicts definite truth values for things like CH, which few now believe are forthcoming. http://math.stackexchange.com/questions/1345122/why-do-we-know-that-g%C3%B6del-sentences-are-true-in-the-standard-model-of-set-theory

– Conifold – 2015-07-10T21:16:08.153

@Conifold - but the notion of valid argument is as old as logic itself; see Aristotle's Logic: "All Aristotle's logic revolves around one notion: the deduction (sullogismos). What, then, is a deduction? Aristotle says: 'A deduction is speech (logos) in which, certain things having been supposed, something different from those supposed results of necessity because of their being so.' (Prior Analytics I.2, 24b18–20)" 1/2

– Mauro ALLEGRANZA – 2015-07-12T17:03:53.577

The core of this definition is the notion of "resulting of necessity". This corresponds to a modern notion of logical consequence: X results of necessity from Y and Z if it would be impossible for X to be false when Y and Z are true. We could therefore take this to be a general definition of "valid argument". 2/2 – Mauro ALLEGRANZA – 2015-07-12T17:04:26.160

Aristotle's syllogisms are derived according to formal rules ("figures"), and he explicitly claims that any valid argument is reducible to a chain of syllogisms. So his syllogisms derive their "necessity" from their form, not from external interpretations. The same goes for the Stoics; Leibniz's actual infinity of possible worlds would have been anathema to both. I think the Tarskian notion is something of a hazy approximation of intuition "from above", whereas the deductive notion is its approximation "from below", but a closer and cleaner one. – Conifold – 2015-07-13T19:00:43.970


This is an interesting and important question, and it merits a long answer. I shall be as concise as I can consistent with being helpful. The question asks whether we should understand validity in terms of proof, which is a syntactic concept, or in terms of models, which is a semantic concept. Proofs are powerful and able to solve complex problems by reducing them to the application of a few rules. Models are potentially complex and nebulous and involve coming to terms with truth and meaning. So, why not stick with proofs?

1. First off, we might ask: why does the difference matter? If we are using first-order classical predicate logic, for example, the two notions agree. FOPL is sound (syntactic validity implies semantic validity) and complete (semantic validity implies syntactic validity). The question only becomes important if we are contemplating a non-classical logic, an extension of the domain (e.g. arithmetic), or an epistemological question about what justifies our logic.

2. Proof theory reduces inference to deductive rules, and these are supposed to be intuitively obvious. But if this is so, which logic should we use? The intuitionist objects to the law of excluded middle (LEM). The paraconsistent logician objects to the law of non-contradiction (LNC). The relevance logician objects to disjunction introduction. Why do they object? For semantic reasons: the rules lead to patterns of inference that they regard as objectionable when interpreted. It doesn't matter whether you disagree with their arguments and consider classical logic to be correct; the mere fact that one can coherently argue about which logic is correct by appeal to its interpretation shows that it is the interpretation that matters. Constructing proofs by manipulating rules is just playing with symbols until you interpret them. The interpretation is where the rubber hits the road: logic must cohere with the empirical project of allowing us to make sense of the world around us, or else it is useless.

3. To reinforce this point, ask yourself this question: would you prefer to use a logic that is sound but incomplete, or one that is complete but unsound? I would take the former every time. If my logic is unable to prove something that I consider true, that is unfortunate but I'll live with it. But if my logic is proving things that are false, what use is it? It is the semantics that is in the driving seat and the proof system had better agree with it, or the proof system will need fixing.

4. Another consideration is that our understanding of logic is growing and it is doing so for semantic and empirical reasons. Classical logic does not cope well with vagueness or uncertainty, so we can extend it. Maybe classical logic does not correctly describe the logic of quantum theory, in which case we might have empirical grounds for changing the rules. Again, it is the semantics that is fundamental. Or with modal logics: they need different rules because modal contexts are referentially opaque and do not obey the normal quantification rules. How do we know? Because semantics.

5. You say that appealing to interpretations has no value in identifying validity, but this is at best only half true. It is perfectly apt for identifying invalidity. Suppose someone asks you whether the following argument is valid: "all chimps are warm-blooded; all apes are warm-blooded; therefore all chimps are apes". I contend that rather than producing a proof that this is invalid, it is far easier to observe that substituting "dolphin" for "chimp" yields an argument with true premises and a false conclusion.
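The same point can be made mechanical. In a toy extensional encoding of my own (each predicate interpreted as a subset of a two-element domain), a brute-force search finds a counter-interpretation immediately, which is exactly what the "dolphin" substitution does informally:

```python
from itertools import product

# Each predicate (C = chimp, A = ape, W = warm-blooded) is interpreted
# as a subset of a tiny two-element domain {0, 1}.
subsets = [frozenset(s) for s in [(), (0,), (1,), (0, 1)]]

def counter_interpretation():
    """Search for an interpretation making the premises of
    'all C are W; all A are W; therefore all C are A' true
    and its conclusion false."""
    for C, A, W in product(subsets, repeat=3):
        if C <= W and A <= W and not (C <= A):
            return C, A, W  # premises true, conclusion false
    return None

print(counter_interpretation() is not None)  # True: the form is invalid
```

One counter-interpretation (e.g. C = {0}, A = {}, W = {0}) settles the matter, with no proof of non-derivability required.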

You will have noticed, I'm sure, that I'm defending a broadly empiricist account of logic, in the spirit of Quine, Tarski, Putnam, Kripke and Lewis. Logic must answer the pragmatic requirement of expressing, organizing and systematizing our knowledge. Rationalists will no doubt shake their heads and want to claim that logic is concerned with a priori laws of thought. The history of science has not been kind to claims of a priori knowledge. It still has its defenders though, which include important logicians such as Jean-Yves Girard.

I share the empiricist view of logic, but in mathematics at least it is not semantic conclusions that are objected to, nor are rules judged on the basis of interpretations. The use of "interpretations" is a game of pretense, a roundabout way of talking about axioms and deductions that obscures its own nature. To me, Gödel's "true but unprovable" is an example of such confusion, and the whole notion of soundness is an extension of it. Is ZFC sound? How do we reach the Platonic realm and find out what is true about sets? On Quine's view, interpretations are fictions of our own making, not grounds for choosing rules. – Conifold – 2017-02-07T00:57:21.303

Even in mathematics, e.g. set theory, use of interpretations and models is common. As to not using semantics to judge rules, I suspect this is just because there is seldom a need for it. But suppose someone proved that the rule of transfinite induction implies that PA has no models - would one not be inclined to dispense with the rule? I know some who consider the fact that ZFC proves Banach-Tarski as a reason not to accept AC as an axiom. Some also prefer to dispense with most or all of set theory in favour of plural quantification because of the metaphysical baggage that set theory brings... – Bumble – 2017-02-07T15:12:42.150

As the saying goes, many mathematicians are platonists on weekdays and only formalists at the weekend. Come to that, what justifies the rules that proof theorists work by? Are they not just as platonic? Ultimately we are thrown back on our faculty of reason whichever way we jump. – Bumble – 2017-02-07T15:13:15.210

Sure, models are a useful language, but they just rephrase what ZFC's rules and axioms derive, and we have no way of knowing things about them other than by deriving them. I am not saying that semantics is somehow a bad way of establishing validity; it is simply non-existent as such, a handy but confused code for something else. Transfinite induction implies that PA has too many models, but we keep it, along with AC, not because it is "sound" but because it is handy. Like all rules, it is "justified" by successful practice down here, which is the check on reason, not by imaginary semantics up there. – Conifold – 2017-02-07T21:33:20.273


This may be bleed-in from the rest of mathematics.

Modern mathematics consists almost entirely of the study of Categories of objects that are interpretations of axioms. Even in Analysis, formalizations like Banach spaces abstract the domain into one with variable models, instead of focussing upon the single model of the Complex or Real numbers. And Topology pressures Geometry to follow suit by reducing a Geometry to the extension of a Topology by a measure.

Establishing this model-theoretic interpretation of formal logic just brings it into line with Abstract Algebra, Topology and Computation Theory as the study of axiomatic systems through their models, and joins in the ongoing 'Categorification' of Analysis and Geometry.