**1) Semantic Information**

Let us start with what information is. Suppose we have a set of sentences we know to be true; this allows us to answer (some) questions about the world. As we learn more true sentences, the number of questions we can answer grows. In epistemic logic this is measured by defining an epistemic space, consisting of all possible worlds, and taking the region of it where at least one of the known sentences is false. This falsified region represents *semantic information*: it grows as we learn more, and so does the number of questions on which the remaining worlds agree. Those questions are settled. Semantic information is the totality of the ruled-out worlds, and the region of all worlds but one, the actual world, represents complete information. This picture goes back to Carnap and Bar-Hillel (they also had a numerical measure of information based on a probability measure on the epistemic space).
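As a toy illustration (my own sketch, not Carnap and Bar-Hillel's formalism): take the epistemic space to be all truth assignments to a few propositional atoms. Learning a sentence rules out the worlds that falsify it, and the semantic information gained is exactly that falsified region.

```python
from itertools import product

# Toy epistemic space: all 8 truth assignments to three atoms p, q, r.
atoms = ("p", "q", "r")
worlds = [dict(zip(atoms, vals)) for vals in product([True, False], repeat=3)]

def ruled_out(known_sentences, worlds):
    """Worlds falsifying at least one known sentence: the semantic information."""
    return [w for w in worlds if not all(s(w) for s in known_sentences)]

# Learn two sentences: "p or q" and "p implies r".
known = [lambda w: w["p"] or w["q"],
         lambda w: (not w["p"]) or w["r"]]

info = ruled_out(known, worlds)
remaining = [w for w in worlds if w not in info]
print(len(worlds), len(info), len(remaining))  # prints "8 4 4"
```

Learning a further sentence can only enlarge `info` and shrink `remaining`; complete information would leave a single world standing.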

How can we get new information? We can observe something to be true empirically, but we can also deduce new true sentences by using logic. Alas, here comes the pain. Classical possible worlds are logically maximal: if some sentences are true in them, then so are all their logical consequences. Hintikka called this "logical omniscience". It means that *we do not gain any new information by deriving consequences*; worlds where they are false are already ruled out by the original sentences. Think about it: on this conception, when Wiles proved Fermat's Last Theorem we learned nothing new! This Hintikka called the "scandal of deduction". Aside from the references in the comments, a classical source is Hintikka's book Logic, Language-games and Information; see also the freely available commentary by Sagüillo.

**2) Depth and surface information**

Hintikka's solution was to qualify what was described above as *depth* information. It is an ideal limit: all that we can, in principle, obtain from the armchair, without making any new observations. But some of it is buried deep, and as we are non-ideal agents our ability to deduce is limited. Sequoiah-Grayson gives technical details of measuring the depth in his critique. One has to fix a particular formal system for deriving consequences and a particular manner of deriving them (this is needed to make the depth uniquely defined), represent the derivation formulas in prenex normal form (with all quantifiers moved to the front), and count how many new quantifiers are added in the course of a derivation.

Long story short, a consequence is said to be of depth k if deriving it requires adding exactly k quantifiers. This is a measure of the consequence's non-triviality. In qualitative terms, the distinction between depth and surface consequences was anticipated by Peirce's distinction between corollarial and theorematic proofs, which in turn generalized the distinction between "logical" (syllogistic) and "geometric" (diagrammatic) inferences in Euclidean demonstrations, noted already by Aristotle. For depth-k semantic information we take only those worlds where our base sentences, and their consequences *up to this depth*, are falsified. Surface information is depth 0: only trivial consequences are taken into account. Aristotle's syllogisms produce only such consequences, and one can understand why Kant thought that logic does not suffice for mathematics. This also means that some of our "possible" worlds are, in fact, incoherent. Before 1995 mathematicians could believe the axioms of set theory *and* disbelieve Fermat's Last Theorem; they were incoherent without being irrational.
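A schematic example may help (my own illustration; Hintikka's official definition runs through prenex and distributive normal forms and is more delicate than this sketch):

```latex
% P1: everyone has an ancestor (two nested quantifiers)
\forall x\,\exists y\;A(y,x)
% P2: ancestry is transitive (three nested quantifiers)
\forall x\,\forall y\,\forall z\;\bigl(A(y,x)\wedge A(z,y)\to A(z,x)\bigr)
% C: everyone has an ancestor who in turn has an ancestor,
%    and the latter is an ancestor of the original individual:
\forall x\,\exists y\,\exists z\;\bigl(A(y,x)\wedge A(z,y)\wedge A(z,x)\bigr)
```

Deriving C forces the intermediate reasoning to juggle three individuals at once (x, its ancestor y, and y's ancestor z), one more than P1 by itself mentions; counting such added quantifier layers is the intuition behind the depth measure. A conclusion obtainable by merely instantiating and recombining a premise, with no new individuals, would be a surface (depth-0) consequence.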

**3) Depth in Euclidean demonstrations**

I explained in another post, Is the problem of logical omniscience intractable?, why Hintikka's solution to the scandal of deduction does not quite work and how it was fixed.
Here let me explain what the depth means informally, in particular in geometry. Think about natural deduction arguments, where variables can be instantiated (i.e. generic objects picked for them) and quantifiers removed. The more quantifiers are added, the more new objects, not present in the premises or the conclusion, feature in the intermediate reasoning. This has a counterpart in Euclidean demonstrations: surface information can be read off directly from the diagram depicting the premises, while depth information requires producing auxiliary lines/circles; the more of them, the deeper. Here is Hintikka himself in C. S. Peirce's "First Real Discovery" and Its Contemporary Relevance (1980):

"*What makes a deduction theorematic according to Peirce is that in it we must envisage other individuals than those needed to instantiate the premise of the argument. The new individuals do not have to be visualized, as the geometrical objects introduced by an Euclidean construction are. They have to be mentioned and considered in the argument, however.*

*How are such new individuals introduced? An example is obtained by converting the arguments used in elementary geometry into arguments using modern symbolic logic, especially quantification theory. Then each new layer of quantifiers adds a new individual (geometrical object) to the configurations of individuals we are considering. After all, each quantifier invites us to consider one individual, however indefinite. (The existential quantifier "(∃x)" can be read "there is at least one individual, call it x, such that"; and correspondingly for the universal quantifier.)*

[...] *Peirce's crucial insight was what happens when a traditional semi-formal geometrical argument which employs figures is converted into an explicit logical argument. Figures actually displayed of course become redundant, but the letters (or letter combinations) referring to them will become free variables (or other free singular terms, such as dummy names, depending on how the underlying logic is set up and what terminology is used in it in connection with instantiations), used in the formal argument. (Cf. the quotation above from Collected Papers 4.616, where Peirce speaks of Euclid's use of Greek letters as proper names for geometrical objects.) Each time a new geometrical object was introduced into the old semi-formal argument, a new free singular term is introduced in the formal argument, typically through a step of instantiation. Hence the complexity of the configurations of individuals considered in the semi-formal and the formal argument is the same.*
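To make this concrete, here is a schematic rendering of Euclid I.32 (the formalization is my own hedged sketch, not Euclid's or Hintikka's), whose demonstration produces BC to a point D and draws CE parallel to AB:

```latex
% Euclid I.32: the interior angles of a triangle sum to two right angles.
% The theorem itself quantifies only over the three vertices in the diagram:
\forall A\,\forall B\,\forall C\;\bigl(\mathrm{Tri}(A,B,C)\to
  \angle A+\angle B+\angle C = 2R\bigr)
% The demonstration, formalized, invokes two further points:
\forall A\,\forall B\,\forall C\,\exists D\,\exists E\;\bigl(
  \mathrm{Betw}(B,C,D)\wedge \mathrm{Par}(CE,AB)\wedge\dots\bigr)
% Two auxiliary objects correspond to two extra quantifier layers:
% on this picture, I.32 lies roughly two layers deeper than what the
% bare diagram of the premises displays.
```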

**4) Euclid's diagrammatic method**

Since Euclid did not possess multi-place predicates, quantifiers, instantiation rules, or any other logical machinery beyond the syllogistic, he had to read off non-trivial conclusions directly from the diagrams rather than infer them logically from the axioms. His demonstrations are not chains of inferences; they are merely accompanied by (surface) inferences. Reading off is not as easy as it sounds: even after the auxiliary constructions are carried out, one needs generality controls to make sure that accidental features of the diagrams are not taken at face value. A detailed study of Euclid's diagrammatic method, a modern classic, is Manders's Euclidean Diagram (published in the volume edited by Mancosu, freely available).

We also know now that Hintikka's "conversion" into natural deduction does not quite work either. Despite some broad structural similarities, including the depth parallelism, the "configurative logic" of diagrams is incongruent with natural deduction; see Greek Geometrical Analysis by Behboud. Figures do not become redundant, and letters cannot be straightforwardly identified with instantiated variables (because logical individuals cannot intersect, unlike the auxiliary lines, and there is no analog of the construction postulates).

As a result, Euclid's demonstrations cannot be "translated" into formal derivations by "filling the gaps"; they have to be reworked, e.g. à la Hilbert. Euclid's own approach is closer to the semantic method of modern informal proofs than to the (formal) axiomatic method; see Rodin's Doing and Showing (freely available) and Rav's Axiomatic Method in Theory and in Practice. A more faithful modern reconstruction, which preserves diagrams as essential components of demonstrations, was developed by Mumma; see the papers on his homepage.

The idea seems to be that depth information is axiomatic and surface information is derived from axioms or from other information. It may be similar to Kant's a priori/analytic distinction. – None – 2019-01-07T14:20:32.993

See J. Hintikka, Surface Information and Depth Information. – Mauro ALLEGRANZA – 2019-01-07T14:26:01.593

Maybe useful: SEP's entry on Logic and Information. – Mauro ALLEGRANZA – 2019-01-07T14:29:05.837

It is peculiar to Hintikka's theory, and a way to learn more about it is to read the harsh attack published by S. Sequoiah-Grayson, The Scandal of Deduction, Journal of Philosophical Logic 37 (2008), pp. 67–94 (pdf at psu.edu). – sand1 – 2019-01-07T16:47:24.843

Sequoiah-Grayson is overly technical; for a more accessible account see The Philosophy of Mathematical Information by D'Agostino, pp. 13–17. Roughly, surface information is accessible via "trivial" reasoning, without introducing new objects to derive consequences from the premises (in formal terms, without introducing new quantifiers into the derivation formulas). Depth information stays the same throughout derivations, while surface information grows. – Conifold – 2019-01-07T21:55:02.387

"A quantitative attempt at specifying the information yield of deductions was undertaken by Jaakko Hintikka with his theory of surface information and depth information (Hintikka 1970, 1973). The theory of surface and depth information extends Bar-Hillel and Carnap's theory of semantic information from the monadic predicate calculus all the way up to the full polyadic predicate calculus. This itself is a considerable achievement, but although technically astounding, a serious restriction of this approach is that it is only a..." https://plato.stanford.edu/entries/logic-information/ – Bread – 2019-01-07T22:20:15.943