Understanding Hintikka's scandal of deduction (as depicted by D'Agostino)


I am having trouble understanding Hintikka's Scandal of Deduction as depicted in D'Agostino's article. On this account, the problem stems from the fact that, while first-order logic is undecidable, the completeness theorem yields a semi-decision procedure. The problem is that even if a conclusion really is a logical consequence of a given set of premises, there is no guarantee that we can obtain a proof within bounded resources (I take this to mean a practical resource limitation). And if the conclusion is not a logical consequence, there is no such procedure at all.
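The asymmetry between the two cases can be sketched in a few lines of Python. This is a toy illustration, not a real prover: the "proof stream" is just an enumerable sequence standing in for a Gödel-numbered enumeration of all finite proofs, which is what the completeness theorem guarantees exists.

```python
from itertools import count

def proof_stream():
    """Toy stand-in for an enumeration of all finite proofs: pretend the
    k-th proof establishes the 'sentence' 3*k. The only point is that
    proofs of first-order theorems can be mechanically enumerated."""
    for k in count():
        yield 3 * k

def semi_decide(sentence):
    """Halts with True iff `sentence` is a 'theorem'; if it is not, this
    loops forever -- a semi-decision procedure, not a decision procedure."""
    for theorem in proof_stream():
        if theorem == sentence:
            return True

def bounded_decide(sentence, budget):
    """A resource-bounded reasoner gives up after checking `budget` proofs
    and answers None ('don't know') -- the practical predicament."""
    for theorem, _ in zip(proof_stream(), range(budget)):
        if theorem == sentence:
            return True
    return None
```

Here `bounded_decide(9, 100)` returns `True`, while `bounded_decide(10, 100)` returns `None`: within its budget, the bounded reasoner cannot distinguish "not a theorem" from "proof not yet found".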

Is there a deduction analog to the problem of induction?

What is the difference between depth and surface information?

Is the problem of logical omniscience intractable?

Q1: My first confusion concerns depth information: according to D'Agostino it is defined to be equivalent to semantic information (i.e., in terms of possible worlds excluded), whereas the linked answer defines it as '...an ideal limit, all that we can, in principle, obtain from the armchair, without making any new observations.' It also seems to distinguish depth information from surface information by a depth measured in counts of quantifiers. I don't see how the latter definition is in any way related to being equivalent to semantic information.

Regarding the "ideal limit" bit, here is what I think it is saying, though I am not at all sure: it concerns the cognitive limitation of humans in seeing logical consequences of a certain degree of complexity. That is, on this formulation the depth information of a sentence comprises ALL logical consequences that are theoretically derivable from it, regardless of the limitations of human cognitive power.

For example, Fermat's Last Theorem is part of the depth information derivable from the sentences of number theory, but because of its complexity it is beyond human cognitive power to deduce it immediately just by looking at those sentences.

Q2: The quantifiers are the most confusing part: I don't understand the relevance of quantifiers in this context at all. Is this supposed to be an objective measure of how complicated the information a sentence conveys is? E.g., is ∃x∈N∀y∈N(x≤y) more complicated than ∀n∈N(n=n) because the former has two quantifiers? And is surface information whatever has no quantifiers (since 'surface information is depth 0')?

I don't really see why quantifiers are involved; to me the two sentences above, while having different counts of quantifiers, are pretty much equal in being trivially true. Also, if surface information is supposed to increase after we come to know a new theorem through deduction, I don't see how quantifiers come into it - I just cannot make the connection.

Re: Q1, "possible worlds" in the context of first-order logic are exactly structures (or structures + variable assignments if you're looking at arbitrary formulas and not just sentences). A logical truth holds in all structures, hence excludes no worlds. The propositional analogue of a structure is just a truth assignment to the propositional atoms; this is a much less rich object - and I believe this is going to be part of what makes Hintikka's notion not play well at the propositional level - but it serves the same purpose (namely it determines the truth or falsity of each given sentence). – Noah Schweber – 2019-05-06T22:28:32.823

Possible duplicate of What is the difference between depth and surface information? – Conifold – 2019-05-07T17:54:19.417

@Conifold The OP states at the end that they read that post but didn't understand it. To the OP, I think it would help avoid this question being closed as a duplicate if you could give a particular point in one of the linked posts that you don't understand, or a particular issue they don't seem to address. – Noah Schweber – 2019-05-07T18:14:19.473

@NoahSchweber I read it, but "still I am not understanding it" is not enough for a new question. – Conifold – 2019-05-07T18:16:51.907

@Conifold The answer you gave was a bit too technical for me to understand, especially the depth bit. I actually found D'Agostino's article on your suggestion in another comment related to Scandal of Deduction and agree with you, as you rightly suggested back then, that it is a more accessible account of this issue. – Daniel Mak – 2019-05-07T19:43:00.677

But what specifically is it that you do not understand? What can we say beyond repeating what is already written there? That is unclear. Your phrasing is too generic, and there are too many questions for one post. Maybe it will help to focus it on just one issue. – Conifold – 2019-05-07T19:51:31.430

I removed the generic parts, and only left the more specific text you added to reduce the post to a reasonable size (it is still too long, though). You can roll back the edit. – Conifold – 2019-05-08T21:06:35.350


1) Depth information

D'Agostino's Philosophy of Mathematical Information explicitly disclaims defining information. He takes "the operational view that, whatever its nature may be, information manifests itself in an agent’s disposition to answer questions". This can be quantified (following Carnap and Bar-Hillel, and more generally Hintikka) by introducing possible worlds and thinking of them as different conglomerates of answers to all meaningful questions. Then the depth information in a sentence can be identified with the region of the possible worlds' space ruled out by it for an ideal reasoner, one without any computational limitations. The bigger this region, the more questions the ideal reasoner can answer definitively (the rest can have different answers in different remaining worlds). The "ideal limit, all that we can, in principle, obtain from the armchair, without making any new observations" is just a colorful way of describing the capabilities of such an ideal reasoner.
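In the propositional case (following Noah Schweber's comment above, where a world is just a truth assignment to the atoms) this can be made concrete. A minimal Python sketch, with made-up atom names, measuring information as the set of worlds a sentence excludes:

```python
from itertools import product

def worlds(atoms):
    # A "possible world" is one complete truth assignment to the atoms,
    # i.e. one complete set of answers to all atomic questions.
    return [dict(zip(atoms, vals))
            for vals in product([True, False], repeat=len(atoms))]

def excluded_worlds(sentence, atoms):
    # The semantic (depth) information of `sentence`: the region of the
    # space of possible worlds that the sentence rules out.
    return [w for w in worlds(atoms) if not sentence(w)]

atoms = ["p", "q"]
conj = lambda w: w["p"] and w["q"]      # "p and q"
taut = lambda w: w["p"] or not w["p"]   # a tautology

len(excluded_worlds(conj, atoms))   # 3 of 4 worlds excluded: informative
len(excluded_worlds(taut, atoms))   # 0 worlds excluded: no information
```

A logical truth excludes no worlds, which is why, on this measure, deduction of tautological consequences seems to add no information at all - the scandal.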

2) Quantifier depth

The quantifier depth (in a normalized form and under a deduction system of a special type) was indeed taken by Hintikka to represent the complexity of a sentence. His reason for taking quantifier depth as a measure of complexity relates to how reasoning is modeled in a certain type of deduction system, the natural deduction systems. There, to reason about quantified formulas one "instantiates" the quantified variables by introducing individuals they stand for, and then reasons about those. So ∀n∈N(n=n) only requires reasoning about one such individual (a natural number), whereas ∃x∈N∀y∈N(x≤y) involves a relation between two different ones. The more individuals (and relations) are involved, the more complex the reasoning becomes, according to Hintikka. A non-ideal reasoner with bounded resources can only handle so much complexity.
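Quantifier depth itself is easy to compute once formulas are given as syntax trees. A small sketch (the tuple-based AST is an assumption for illustration, not any standard representation):

```python
def quantifier_depth(f):
    # Formulas are nested tuples: ("forall", var, body),
    # ("exists", var, body), ("and", l, r), ("not", g),
    # or a bare atom given as a string.
    if isinstance(f, str):
        return 0
    op = f[0]
    if op in ("forall", "exists"):
        return 1 + quantifier_depth(f[2])
    return max(quantifier_depth(g) for g in f[1:])

f1 = ("forall", "n", "n = n")                    # ∀n (n = n)
f2 = ("exists", "x", ("forall", "y", "x <= y"))  # ∃x ∀y (x ≤ y)

quantifier_depth(f1), quantifier_depth(f2)   # -> (1, 2)
```

On Hintikka's measure, f2 is more complex than f1: evaluating it requires keeping track of two instantiated individuals and a relation between them, not just one.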

This is made more vivid by the classical example of demonstrations in Euclidian geometry. There, the reasoning can be traced on a diagram with different geometric items, and the more individuals are introduced (by auxiliary constructions, which Hintikka assimilates to instantiations of bound variables) the more complex the diagram, and the tracing of it, becomes.

One can see already from this sketch that quantifier depth is a rather crude measure of complexity. Intuitively, not just the number of items, but also the complexity of the relationships between them should play a role (a diagram with n isolated dots is not that complex). And the number of quantifier alternations also seems relevant, in addition to the overall quantity of quantifiers. For example, ∀y∈N∃x∈N(x≤y) seems more complex than ∃x∈N∀y∈N(x≤y) because it introduces a functional dependence of x on y, whereas in the latter x is the same for all y. Indeed, this intuition is reflected in the level of a formula in the so-called arithmetical hierarchy. As many authors have pointed out, there is an even more basic problem: quantifier depth cannot account at all for the kind of complexity embodied in complicated Boolean tautologies, for example. To account for that, D'Agostino and Floridi introduced a second dimension of complexity of arguments (also specific to the modes of reasoning in natural deduction systems), namely the depth of nested conditional arguments that introduce and discharge assumptions. Whether combining their measure with Hintikka's can account for all other types of complexity in reasoning is an interesting research question.
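The alternation count mentioned above can be sketched in the same style (same toy AST as quantifier depth; counting quantifier flips along each path is a crude proxy for a prenex formula's level in the arithmetical hierarchy):

```python
def alternations(f, last=None):
    # Count quantifier alternations (a switch from forall to exists or
    # back) along the deepest quantifier path of the formula.
    if isinstance(f, str):
        return 0
    op = f[0]
    if op in ("forall", "exists"):
        flip = 1 if (last is not None and op != last) else 0
        return flip + alternations(f[2], op)
    return max(alternations(g, last) for g in f[1:])

g1 = ("exists", "x", ("forall", "y", "x <= y"))                # ∃∀
g2 = ("forall", "y", ("exists", "x", "x <= y"))                # ∀∃
g3 = ("forall", "x", ("exists", "y", ("forall", "z", "phi")))  # ∀∃∀

alternations(g1), alternations(g2), alternations(g3)   # -> (1, 1, 2)
```

Note that the alternation count alone does not distinguish the ∃∀ sentence from the ∀∃ one; the asymmetry pointed out above (which quantifier depends on which) is what the Σ/Π distinction within each level of the hierarchy records.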

Thank you, that answered a lot of my questions. So depth information is the total information contained. Surface information is sort of like a proper subset of depth information, the crucial difference being that surface information is effectively computable (or retrievable given the computational limit), and it increases whenever a new logical consequence is deduced, while depth information never changes. Does that sound reasonable? – Daniel Mak – 2019-05-09T19:53:38.423

@DanielMak The idea is right, but "effectively computable" has a technical meaning that is too strong; even depth information may well be effectively computable in principle. It is in Euclidean geometry, because Tarski showed that the system is decidable, but some theorems may still be computationally intractable for bounded reasoners. Surface information (Hintikka's) is not a unified thing; it is stratified by numbers: there is depth 1, depth 2, etc., surface information. – Conifold – 2019-05-09T20:21:17.490

Each one increases when a consequence of the relevant depth is derived, and the sum total of the surface information at all depths is the depth information. As a result, depth information cannot change from deriving consequences, only when new observations are made that can answer previously unresolved questions. – Conifold – 2019-05-09T20:26:07.093