Demonstrating that rigour is important



Any pure mathematician will from time to time discuss, or think about, the question of why we care about proofs, or to put the question in a more precise form, why we seem to be so much happier with statements that have proofs than we are with statements that lack proofs but for which the evidence is so overwhelming that it is not reasonable to doubt them.

That is not the question I am asking here, though it is definitely relevant. What I am looking for is good examples where the difference between being pretty well certain that a result is true and actually having a proof turned out to be very important, and why. I am looking for reasons that go beyond replacing 99% certainty with 100% certainty. The reason I'm asking the question is that it occurred to me that I don't have a good stock of examples myself.

The best outcome I can think of for this question, though whether it will actually happen is another matter, is that in a few months' time if somebody suggests that proofs aren't all that important one can refer them to this page for lots of convincing examples that show that they are.

Added after 13 answers: Interestingly, the focus so far has been almost entirely on the "You can't be sure if you don't have a proof" justification of proofs. But what if a physicist were to say, "OK I can't be 100% sure, and, yes, we sometimes get it wrong. But by and large our arguments get the right answer and that's good enough for me." To counter that, we would want to use one of the other reasons, such as the "Having a proof gives more insight into the problem" justification. It would be great to see some good examples of that. (There are one or two below, but it would be good to see more.)

Further addition: It occurs to me that my question as phrased is open to misinterpretation, so I would like to have another go at asking it. I think almost all people here would agree that proofs are important: they provide a level of certainty that we value, they often (but not always) tell us not just that a theorem is true but why it is true, they often lead us towards generalizations and related results that we would not have otherwise discovered, and so on and so forth. Now imagine a situation in which somebody says, "I can't understand why you pure mathematicians are so hung up on rigour. Surely if a statement is obviously true, that's good enough." One way of countering such an argument would be to give justifications such as the ones that I've just briefly sketched. But those are a bit abstract and will not be convincing if you can't back them up with some examples. So I'm looking for some good examples.

What I hadn't spotted was that an example of a statement that was widely believed to be true but turned out to be false is, indirectly, an example of the importance of proof, and so a legitimate answer to the question as I phrased it. But I was, and am, more interested in good examples of cases where a proof of a statement that was widely believed to be true and was true gave us much more than just a certificate of truth. There are a few below. The more the merrier.


Posted 2010-09-03T13:08:22.820

Reputation: 18 667

"Sufficient unto the day is the rigor of the argument." - old Historian of Mathematics saying, authorship unknown – The Mathemagician – 2010-12-06T20:47:36.210


I hope I'm not flogging a dead horse, but there is a great discussion on cstheory that was spawned by this thread and fits with @gowers' 'further addition' section. In particular, it is a list of cstheory results where the rigorous demonstration of an 'obviously true' statement resulted in interesting insights.

– Artem Kaznatcheev – 2014-01-27T06:54:46.877

@KConrad: I have been told the story of a mathematician who, while giving a talk and being asked about a proof of some results he had claimed, exclaimed: “Why do you need a proof? It's a theorem!” – ACL – 2015-10-24T11:21:25.270

+1 I'm very glad someone asked this. As someone who works closely with physicists and engineers, I struggle to justify my desire for precision and proof sometimes. – icurays1 – 2017-12-13T19:02:03.877

The tag "gn.general" is a mistake, coming from the addition of a space in "general topology." Right now, yours is the only post using it. Perhaps retag? – JBL – 2010-09-03T13:18:36.670

There's a clear advantage to knowing a 'good' proof of a statement (or even better, several good proofs), as it is an intuitively comprehensible explanation of why the statement is true, and the resulting insight probably improves our hunches about related problems (or even about which problems are closely related, even if they appear superficially unrelated). But if we are handed an 'ugly' proof whose validity we can verify (with the aid of a computer, say), but where we can't discern any overall strategy, what do we gain? – Colin Reid – 2010-09-03T13:53:39.983

I couldn't find an appropriate tag but will definitely retag if anyone has a good suggestion. (Or am happy to see it retagged if anyone with power to do so wants to do so.) – gowers – 2010-09-03T14:43:29.653

<< statements that lack proofs but for which the evidence is so overwhelming that it is not reasonable to doubt them.>> Examples? – Sergei Tropanets – 2010-09-03T15:26:20.913


How about the Mertens conjecture? It was verified in a very large number of cases before it was disproved.

– Skip – 2010-09-03T15:27:10.607
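To flesh out why the Mertens conjecture looked so safe before Odlyzko and te Riele disproved it in 1985, here is a small numerical sketch (my own illustration; the cutoff of 100,000 is an arbitrary choice): the Mertens function $M(n)$, the running sum of the Möbius function, stays comfortably inside the conjectured bound $|M(n)| < \sqrt{n}$ at every point one can easily check.

```python
# Sketch (cutoff N = 100_000 is my choice): the Mertens function
# M(n) = mu(1) + ... + mu(n) stays well inside the conjectured bound
# |M(n)| < sqrt(n) for every n in easy computational reach, which is
# why the conjecture looked so safe; it nevertheless fails for some
# astronomically large n.
import math

def mobius_sieve(limit):
    """Return a list mu with mu[k] = Mobius function of k, via a linear sieve."""
    mu = [1] * (limit + 1)
    mu[0] = 0
    is_comp = [False] * (limit + 1)
    primes = []
    for i in range(2, limit + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > limit:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0        # p^2 divides i*p, so mu vanishes
                break
            mu[i * p] = -mu[i]       # one more distinct prime factor
    return mu

N = 100_000
mu = mobius_sieve(N)
mertens = [0] * (N + 1)
for n in range(1, N + 1):
    mertens[n] = mertens[n - 1] + mu[n]

# Largest ratio |M(n)| / sqrt(n) seen; the bound is never violated here.
worst = max(abs(mertens[n]) / math.sqrt(n) for n in range(2, N + 1))
print(mertens[10], round(worst, 3))  # M(10) = -1
```

The disproof was indirect: no explicit counterexample $n$ is known, which is exactly why numerical evidence alone was so misleading.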

I think a fairly good example is that the end-to-end distance of a typical self-avoiding walk of length n is n^{3/4}. Another is Goldbach's conjecture. (In both cases, there are powerful heuristic justifications that are backed up by computational evidence.) – gowers – 2010-09-03T15:30:12.853

Sorry -- I should make clear that that n^{3/4} was an approximation. – gowers – 2010-09-03T15:31:00.137

What kind of person do you have in mind who would suggest proofs are not important? I can't imagine it would be a mathematician, so exactly what kind of mathematical background do you want these replies to assume? – KConrad – 2010-09-03T15:33:20.647

Proofs (hopefully) provide understanding. Nobody doubts RH is true but we don't know why. – Felipe Voloch – 2010-09-03T16:07:30.923

Colin Reid- I think one can differentiate between a person understanding and a technique understanding. The latter applies even if we cannot understand the proof. We know that the tools themselves "see enough" and "understand enough", and that in itself is a significant advance in our understanding. But we still want a "better proof", because a hard proof makes us feel that our techniques aren't really getting to the heart of the problem- we want techniques which understand the problem more clearly. – Daniel Moskovich – 2010-09-03T16:26:59.557


Here is some interesting and provocative reading with some opposition to rigor, by a mathematician:

– Jonas Meyer – 2010-09-03T20:54:12.060

@Voloch, there are still people who doubt the proof of the Poincare conjecture. So it seems doubtful that it could be true that "nobody doubts RH". Without even a proposed proof, why shouldn't people doubt it? – Ryan Budney – 2010-09-03T22:47:43.493

This is a bit off topic, but related to gowers' last sentence. I don't really like the idea of having to justify that proof is important. I think asking the question 'Why?' is natural and important in its own right. Curiosity appears to be built in to us. 'Why?' drives a significant portion of the sciences and humanities and everyday life. It would be hard to deny that our active desire to satisfy this inbuilt curiosity is at least partially responsible for human advance (whatever that is). For me, trying to find specific examples where having a proof 'saves lives' cheapens the whole process. – Robby McKilliam – 2010-09-04T00:26:19.217

Still, I did like Daniel's plane story :) – Robby McKilliam – 2010-09-04T00:29:55.077

I just want to make it clear that I do NOT dispute the importance of proofs, or intend this question to be a discussion of that. Rather, I want to take it for granted that proofs are important, but have a good supply of examples that ILLUSTRATE their importance. Also, I'm much more interested in examples that show that finding proofs yielded far greater insights into the theorems than in the question of whether we can ever be fully certain of a statement that is not yet proved (not that I deny the importance of the latter). – gowers – 2010-09-04T14:43:26.423

@KConrad -- Zeilberger was one example, mentioned by Jonas Meyer above. I can imagine that there might be physicists who would not understand why we are so interested in making their work rigorous, but I can't give actual examples there. – gowers – 2010-09-04T14:45:21.363


Gyorgy Sereny raised the point that for mathematics (or at least large parts of mathematics) we cannot even separate the "theorems" and the "proofs" and, in fact, it is the proofs that give the theorems their importance.

– Gil Kalai – 2010-09-04T21:40:44.753

In any case, while the role of rigor is an interesting question, as is what substitute we have for rigorous proofs, I think that it is usually much easier to explain to an outsider the importance of rigorous proofs than to explain to an outsider the importance of the statements we are proving. – Gil Kalai – 2010-09-04T21:48:18.827

Concerning the Zeilberger link that Jonas posted, sorry but I think that essay is absurd. If Z. thinks that the fact that only a small number of mathematicians can understand something makes it uninteresting then he should reflect on the fact that most of the planet won't understand a lot of Z's own work since most people don't remember any math beyond high school. Therefore is Z's work dull and pointless? He has written other essays that take extreme viewpoints (like R should be replaced with Z/p for some unknown large prime p). – KConrad – 2010-09-05T01:39:40.283

Every proof has its own "believability index". A number of years ago I was giving a lecture about a certain algorithm related to Galois Theory. I mentioned that there were two proofs that the algorithm was polynomial time. The first depended on the classification of finite simple groups, and the second on the Riemann Hypothesis for a certain class of L-functions. Peter Sarnak remarked that he'd rather believe the second. – Victor Miller – 2010-09-06T15:56:15.913

This question is an excellent example of how MO can be effective and useful for a question which, while having strong academic merit, also has a strong discussion flavor, and leads to a discussion that can be rather subjective and even argumentative. – Gil Kalai – 2010-09-07T08:01:27.203

Tim: I changed essentially just one letter of the tag to be "gm.general-mathematics", to bring it in line with the arXiv classification scheme. – Willie Wong – 2010-09-08T15:18:31.657

Willie: thanks for that -- it looks like a much more appropriate tag. – gowers – 2010-09-08T16:16:19.060



I once got a letter from someone who had overwhelming numerical evidence that the sum of the reciprocals of primes is slightly bigger than 3 (he may have conjectured the limit was π). The sum is in fact infinite, but diverges so slowly (like log log n) that one gets no hint of this by computation.
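As a numerical sketch of just how misleading computation is here (my own illustration, not part of the original answer): by Mertens' second theorem the sum of $1/p$ over primes $p \le x$ grows like $\log \log x + M$ with $M \approx 0.2615$, so even using every prime below one million the sum has not yet reached 3.

```python
# Sketch (the cutoff of one million is my choice): by Mertens' second
# theorem, sum_{p <= x} 1/p ~ log(log(x)) + M with M ~ 0.2615, so the
# sum creeps upward absurdly slowly -- with all primes below 10^6 it
# is still short of 3, exactly the kind of data that fooled the
# letter-writer.
import math

def primes_below(n):
    """Sieve of Eratosthenes: all primes p < n."""
    sieve = bytearray([1]) * n
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(sieve[i * i::i]))
    return [i for i in range(n) if sieve[i]]

N = 1_000_000
s = sum(1.0 / p for p in primes_below(N))
print(round(s, 4))                                  # still below 3
print(round(math.log(math.log(N)) + 0.26149, 4))    # Mertens' prediction
```

The asymptotic formula also explains the comment below that the sum over the first five million primes does finally pass $\pi$: $\log \log x$ gets there eventually, just at a glacial pace.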

Richard Borcherds


If one wants to carry this to the extreme, any divergent series with the property that the n-th term goes to zero will converge on a calculator, as the terms will eventually fall below the underflow value for the calculator, and hence be considered to be zero. – Chris Leary – 2012-01-01T23:54:44.433

@KevinLin To be fair, your TI-83 calculator uses floating-point numbers, which cannot carry out that computation for very long. At some point, the number you're adding to the sum will be so small that (after floating-point rounding) the sum will literally remain unchanged. – BlueRaja – 2013-08-12T18:46:37.083

@BlueRaja: yes, but the calculator will be dead long before that point happens. – Zsbán Ambrus – 2015-06-24T11:01:55.653

I guess this was in the 20th century. These days you can check quickly with a computer that the sum of the reciprocals of the first five million primes is already over pi. – Zsbán Ambrus – 2015-06-24T11:04:17.337

To avoid floating point problems, you begin with the smallest number ... – J. Fabian Meier – 2015-08-24T12:32:25.510

@ChrisLeary Of course this should happen before the sum overflows. – Fan Zheng – 2015-10-19T02:08:38.403

@Fan Zheng Absolutely. I'm going to see if I can find an example where overflow could be an issue. I guess the terms would need to go to zero very slowly. – Chris Leary – 2015-10-19T16:03:43.810

I heard Matijasevic say (words to the effect of) the sum of the reciprocals of the known primes is less than 5 – and always will be. – Gerry Myerson – 2017-12-13T20:43:27.393

This reminds me of a classical mathematical folklore "get rich" scheme (or scam). The ad in a newspaper says: send in 10 dollars and get back an unlimited amount of money in monthly installments. The dupe follows the instructions and receives 1 dollar the first month, 1/2 dollar the second month, 1/3 dollar the third month, ... – Victor Protsak – 2010-09-06T02:37:00.993

I remember the first time I learned that the harmonic series is divergent. I was in high school, in my first calculus class; it was in 2001. I was really surprised and couldn't really believe that it could be divergent, so I programmed my TI-83 to compute partial sums of the harmonic series. I let it run for the entire day, checking in on the progress periodically. If I recall correctly, by the end of the day the partial sum remained only in the 20s. Needless to say, I was not convinced of the divergence of the series that day. – Kevin H. Lin – 2010-09-06T07:09:13.260
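A back-of-envelope check of why the TI-83 experiment was doomed (my own sketch): since the partial sums satisfy $H_n \approx \ln n + \gamma$, pushing the sum past a target value $S$ takes roughly $e^{S-\gamma}$ terms, so reaching even 20 needs on the order of $2.7 \times 10^8$ terms.

```python
# Sketch: the partial sums H_n of the harmonic series satisfy
# H_n ~ ln(n) + gamma, with gamma ~ 0.5772 the Euler-Mascheroni
# constant, so the number of terms needed to push the sum past a
# value S is about e^(S - gamma). Stalling in the twenties after a
# day of TI-83 time is exactly what this predicts.
import math

GAMMA = 0.5772156649

def harmonic(n):
    """Partial sum 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

print(harmonic(100_000), math.log(100_000) + GAMMA)  # agree to ~5e-6

# Approximate number of terms needed to reach S = 10, 20, 30:
for S in (10, 20, 30):
    print(S, f"{math.exp(S - GAMMA):.3g}")
```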


I would like to preface this long answer by a few philosophical remarks. As noted in the original posting, proofs play multiple roles in mathematics: for example, they assure that certain results are correct and give insight into the problem.

A related aspect is that in the course of proving an intuitively obvious statement, it is often necessary to create a theoretical framework, i.e. definitions that formalize the situation and new tools that address the question; this may lead to vast generalizations, either in the course of the proof itself or in the subsequent development of the subject. Often it is the proof, not the statement itself, that generalizes, so it becomes valuable to know multiple proofs of the same theorem based on different ideas. The greatest insight is gained from proofs that subtly modify an original statement that turned out to be wrong or incomplete. Sometimes a whole subject may spring from the proof of a key result; this is especially true of proofs of impossibility statements.

Most examples below, chosen among different fields and featuring general interest results, illustrate this thesis.

  1. Differential geometry

    a. It had been known since ancient times that it was impossible to create a perfect (i.e. undistorted) map of the Earth. The first proof was given by Gauss and relies on the notion of intrinsic curvature, introduced by Gauss expressly for this purpose. Although Gauss's proof of the Theorema Egregium was complicated, the tools he used became standard in the differential geometry of surfaces.

    b. The isoperimetric property of the circle has been known in some form for over two millennia. Part of the motivation for Euler's and Lagrange's work on variational calculus came from the isoperimetric problem. Jakob Steiner devised several different synthetic proofs that contributed technical tools (Steiner symmetrization, the role of convexity), even though they didn't settle the question, because they relied on the existence of an absolutely minimizing shape. Steiner's assumption led Weierstrass to consider the general question of existence of solutions to variational problems (later taken up by Hilbert, as mentioned below) and to give the first rigorous proof. Further proofs gained new insight into the isoperimetric problem and its generalizations: for example, Hurwitz's two proofs using Fourier series exploited abelian symmetries of closed curves; the proof by Santaló using integral geometry established the more general Bonnesen inequality; E. Schmidt's 1939 proof works in $n$ dimensions. The full solution of related lattice packing problems led to such important techniques as Dirichlet domains, Voronoi cells, and the geometry of numbers.

  2. Algebra

    a. For more than two and a half centuries after Cardano's Ars Magna, no one was able to devise a formula expressing the roots of a general quintic equation in radicals. The Abel–Ruffini theorem and Galois theory not only proved the impossibility of such a formula and provided an explanation for the success and failure of earlier methods (cf. Lagrange resolvents and the casus irreducibilis), but, more significantly, put the notion of a group on the mathematical map.

    b. Systems of linear equations were considered already by Leibniz. Cramer's rule gave the formula for a solution in the $n\times n$ case, and Gauss developed a method for obtaining the solutions, which yields the least-squares solution in the overdetermined case. But none of this work yielded a criterion for the existence of a solution. Euler, Laplace, Cauchy, and Jacobi all considered the problem of diagonalization of quadratic forms (the principal axis theorem). However, the work prior to 1850 was incomplete because it required genericity assumptions (in particular, the arguments of Jacobi et al. didn't handle singular matrices or forms). Proofs that encompass all linear systems, matrices, and bilinear/quadratic forms were devised by Sylvester, Kronecker, Frobenius, Weierstrass, Jordan, and Capelli as part of the program of classifying matrices and bilinear forms up to equivalence. Thus we got the notion of the rank of a matrix, the minimal polynomial, the Jordan normal form, and the theory of elementary divisors, all of which became cornerstones of linear algebra.

  3. Topology

    a. Attempts to rigorously prove the Euler formula $V-E+F=2$ led to the discovery of non-orientable surfaces by Möbius and Listing.

    b. Brouwer's proof of the Jordan curve theorem and of its generalization to higher dimensions was a major development in algebraic topology. Although the theorem is intuitively obvious, it is also very delicate, because various plausible sounding related statements are actually wrong, as demonstrated by the Lakes of Wada and the Alexander horned sphere.

  4. Analysis

    The work on existence, uniqueness, and stability of solutions of ordinary differential equations and on well-posedness of initial and boundary value problems for partial differential equations gave rise to tremendous insights into theoretical, numerical, and applied aspects. Instead of imagining a single transition from 99% ("obvious") to 100% ("rigorous") confidence, it is more helpful to think of a series of progressive sharpenings of statements that become natural or plausible after the last round of work.

    a. Picard's proof of the existence and uniqueness theorem for a first order ODE with Lipschitz right hand side, Peano's proof of the existence for continuous right hand side (uniqueness may fail), and Lyapunov's proof of stability introduced key methods and technical assumptions (contractible mapping principle, compactness in function spaces, Lipschitz condition, Lyapunov functions and characteristic exponents).

    b. Hilbert's proof of the Dirichlet principle for elliptic boundary value problems and his work on the eigenvalue problems and integral equations form the foundation for linear functional analysis.

    c. The Cauchy problem for hyperbolic linear partial differential equations was investigated by a whole constellation of mathematicians, including Cauchy, Kowalevski, Hadamard, Petrovsky, L.Schwartz, Leray, Malgrange, Sobolev, Hörmander. The "easy" case of analytic coefficients is addressed by the Cauchy–Kowalevski theorem. The concepts and methods developed in the course of the proof in more general cases, such as the characteristic variety, well-posed problem, weak solution, Petrovsky lacuna, Sobolev space, hypoelliptic operator, pseudodifferential operator, span a large part of the theory of partial differential equations.
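Picard's contraction-mapping argument from item (a) can be made concrete with a toy computation (my own sketch; the ODE $y' = y$, $y(0) = 1$ is a textbook choice, not one from the answer): each Picard iterate $y_{k+1}(t) = 1 + \int_0^t y_k(s)\,ds$ is a polynomial, and for this equation the iterates are exactly the Taylor partial sums of $e^t$.

```python
# Sketch (the ODE y' = y, y(0) = 1 is my toy choice): Picard iteration
# y_{k+1}(t) = 1 + integral_0^t y_k(s) ds is the contraction whose
# fixed point is the solution. Representing each iterate as a list of
# polynomial coefficients, one step integrates term by term, and the
# iterates are exactly the Taylor partial sums of e^t.
import math

def picard_step(coeffs):
    """One Picard step: y(t) = sum c_i t^i  ->  1 + integral_0^t y(s) ds."""
    return [1.0] + [c / (i + 1) for i, c in enumerate(coeffs)]

def eval_poly(coeffs, t):
    return sum(c * t ** i for i, c in enumerate(coeffs))

y = [1.0]            # y_0(t) = 1, the constant initial guess
for _ in range(12):  # twelve Picard iterations
    y = picard_step(y)

# y is now the degree-12 Taylor polynomial of e^t, so y(1) ~ e:
print(eval_poly(y, 1.0), math.e)
```

The geometric convergence visible here is the contraction-mapping principle at work, which is exactly the method Picard's proof contributed.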

  5. Dynamical systems

    Universality for one-parameter families of unimodal continuous self-maps of an interval was experimentally discovered by Feigenbaum and, independently, by Coullet and Tresser in the late 1970s. It states that the ratio between the lengths of intervals in the parameter space between successive period-doubling bifurcations tends to a limiting value $\delta \approx 4.669201$ that is independent of the family. This could be explained by the existence of a nonlinear renormalization operator $\mathcal{R}$ in the space of all maps with a unique fixed point $g$ and the property that all but one of the eigenvalues of its linearization at $g$ belong to the open unit disk, the exceptional eigenvalue being $\delta$ and corresponding to the period-doubling transformation. Later, computer-assisted proofs of this assertion were given, so while Feigenbaum universality had initially appeared mysterious, by the late 1980s it had moved into the "99% true" category.

    The full proof of universality for quadratic-like maps by Lyubich (MR) followed this strategy, but it also required very elaborate ideas and techniques from complex dynamics due to a number of people (Douady–Hubbard, Sullivan, McMullen) and yielded hitherto unknown information about the combinatorics of non-chaotic quadratic maps of the interval and the local structure of the Mandelbrot set.
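The period-doubling cascade itself is easy to observe numerically. Here is a sketch of my own (the parameter values are chosen inside the known stability windows of the logistic family $f_r(x) = rx(1-x)$, the standard example): below $r = 3$ a fixed point attracts, at $r = 3.2$ a 2-cycle does, and at $r = 3.5$ a 4-cycle; it is the shrinking of the parameter gaps between such doublings by the factor $\delta$ that Feigenbaum observed.

```python
# Sketch (parameter values are mine, chosen inside known stability
# windows of the logistic family f_r(x) = r*x*(1-x)): iterate past the
# transient and verify the period of the attracting cycle at three
# parameter values, exhibiting two period doublings.

def f(r, x, k=1):
    """Apply the logistic map k times at parameter r."""
    for _ in range(k):
        x = r * x * (1.0 - x)
    return x

def orbit_point(r, n_transient=10_000):
    """A point (numerically) on the attracting cycle at parameter r."""
    return f(r, 0.5, n_transient)

for r, period in ((2.8, 1), (3.2, 2), (3.5, 4)):
    x = orbit_point(r)
    assert abs(f(r, x, period) - x) < 1e-8       # returns after `period` steps
    if period > 1:
        # ...but not after half as many steps: the period really doubled
        assert abs(f(r, x, period // 2) - x) > 1e-3
    print(r, period)
```

Locating the bifurcation parameters precisely and extracting $\delta$ from their ratios is a harder numerical task, and turning such computations into a proof is exactly what the computer-assisted arguments mentioned above accomplished.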

  6. Number theory

    Agrawal, Kayal, and Saxena proved that PRIMES is in P, i.e. primality testing can be done deterministically in polynomial time. While the result had been widely expected, their work was striking in at least two respects: it used very elementary tools, such as variations of Fermat's little theorem, and it was carried out by a computer science professor and two undergraduate students. The sociological effect of the proof may have been even greater than its numerous consequences for computational number theory.

Victor Protsak



As far as Euler's formula goes, the book Proofs and Refutations by Imre Lakatos shows how many interesting questions and new considerations can be derived from a seemingly obvious formula.

– Thierry Zell – 2011-03-19T05:30:31.433

Victor, What was the sociological effect of the proof that primality is in P? – Gil Kalai – 2010-09-07T08:00:33.010

I meant the inspirational effect due to (a) the elementary tools used; and (b) the youth of 2/3 of the authors. – Victor Protsak – 2010-09-07T08:09:47.547

It is indeed great and inspiring to see very young people cracking famous problems. Recently, I find it no less inspiring to see old people cracking famous problems. – Gil Kalai – 2010-09-07T08:57:49.930


When I teach our "Introduction to Mathematical Reasoning" course for undergraduates, I start out by describing a collection of mathematical "facts" that everybody "knew" to be true, but which, with increasing standards of rigor, were eventually proved false. Here they are:

  1. Non-Euclidean geometry: The geometry described by Euclid is the only possible "true" geometry of the real world.
  2. Zeno's paradox: It is impossible to add together infinitely many positive numbers and get a finite answer.
  3. Cardinality vs. dimension: There are more points in the unit square than there are in the unit interval.
  4. Space-filling curves: A continuous parametrized curve in the unit square must miss "most" points.
  5. Nowhere-differentiable functions: A continuous real-valued function on the unit interval must be differentiable at "most" points.
  6. The Cantor Function: A function that is continuous and satisfies $f'(x)=0$ almost everywhere must be constant.
  7. The Banach-Tarski paradox: If a bounded solid in R^3 is decomposed into finitely many disjoint pieces, and those pieces are rearranged by rigid motions to form a new solid, then the new solid will have the same volume as the original one.

Jack Lee


It seems to me that items 3-7 are regarded by most people as "monsters" and as such not really worthy of serious consideration. As for items 1 and 2, I think that not only have most people not heard of them, when they do hear of them, they regard them either as jokes or don't really get the point at all. So it doesn't seem to me that these are convincing arguments for most people. (They are, to be sure, convincing arguments for me.) To some extent, I'm sure this is something that can only be appreciated by some experience. I think for instance that the Pythagorean theorem is [out of space – Carl Offner – 2011-04-28T02:58:14.340


Regarding 5: cf. the comment here: it gives a strictly increasing function whose derivative is zero almost everywhere. Intuitively such a thing shouldn't exist, but applying the definitions rigorously shows that it does.

– David Roberts – 2010-09-04T04:08:05.130

Historical examples tend to retroactively attribute stupid errors that were not the original, and still subtle, issue. In 3 and 7 the equivalences are not geometric (see Feynman's deconstruction of Banach-Tarski as "So-and-So's Theorem of Immeasurable Measure"). For #1, Riemannian geometry doesn't address historical/conceptual issue of non-Euclidean geometry, which was about logical status of the Parallel Axiom, categoricity of the axioms, and lack of 20th-century mathematical logic framework. Zeno's contention that motion is mysterious remains true today, despite theory of infinite sums. – T.. – 2010-09-04T08:15:50.740



The trefoil knot is knotted.

Blue Trefoil Knot


One could scarcely find a reasonable person who would doubt the veracity of the above claim. None of the 19th century knot tabulators (Tait, Little, and Kirkman) could rigorously prove it, nor could anybody before them. It's not clear that anyone was bothered by this.

Yet mathematics requires proof, and proof was to come. In 1908 Tietze proved the beknottedness of the trefoil using Poincaré's new concept of a fundamental group. Generators and relations for fundamental groups of knot complements could be found using a procedure of Wirtinger, and the fundamental group of the trefoil complement could be shown to be non-commutative by representing it in $SL_2(\mathbb{Z})$, while the fundamental group of the unknot complement is $\mathbb{Z}$. In general, to distinguish even fairly simple knots, whose difference was blatantly obvious to everybody, it was necessary to distinguish non-abelian fundamental groups given in terms of Wirtinger presentations, via generators and relations. This is difficult, and the Reidemeister-Schreier method was developed to tackle this difficulty. Out of these investigations grew combinatorial group theory, not to mention classical knot theory.
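The $SL_2(\mathbb{Z})$ argument can be checked in a few lines (the particular pair of matrices below is a standard choice supplied by me, not taken from the answer): the trefoil group has Wirtinger presentation $\langle a, b \mid aba = bab \rangle$; the braid relation holds for the matrices, so the representation is well defined, and the matrices do not commute, so the trefoil group is non-abelian, unlike the unknot group $\mathbb{Z}$.

```python
# Sketch: a representation of the trefoil group <a, b | aba = bab>
# into SL_2(Z). The specific matrices are a standard choice of mine;
# what matters is that the braid relation holds for the images (so the
# map is well defined) while the images do not commute (so the image,
# hence the trefoil group, is non-abelian).

def matmul(X, Y):
    """Product of two 2x2 integer matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 1], [0, 1]]    # image of the Wirtinger generator a
B = [[1, 0], [-1, 1]]   # image of the Wirtinger generator b

ABA = matmul(matmul(A, B), A)
BAB = matmul(matmul(B, A), B)

assert ABA == BAB                      # braid relation: representation exists
assert matmul(A, B) != matmul(B, A)    # the image is non-abelian
print(ABA)  # [[0, 1], [-1, 0]]
```

Of course, what is easy here is verifying the two matrix identities; Tietze's real achievement was the framework (fundamental groups, Wirtinger presentations) that makes such a verification meaningful.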

All because beknottedness of a trefoil requires proof.


Kishino's virtual knot is knotted.

Kishino knot


We are now in the 21st century, and virtual knot theory is all the rage. One could scarcely find a reasonable person who would argue that Kishino's knot is trivial. But the trefoil's lesson has been learnt well, and it was clear to everyone right away that proving the beknottedness of Kishino's knot would be a most fruitful endeavour. Indeed, that is how things have turned out, and proofs that Kishino's knot is knotted have led to substantial progress in understanding quandles and generalized bracket polynomials.


Above we have claims which were obvious to everybody, and were indeed correct, but whose proofs directly led to greater understanding and to palpable mathematical progress.

Daniel Moskovich


=HOMFLYPOLYNOMIAL(A3:B5) – Mariano Suárez-Álvarez – 2012-01-02T03:54:30.863

I suppose I was able to add images corresponding more or less to the original ones, based on the versions archived here and here. I think that the best way to avoid link rot is to use the built-in editor to create an imgur link. (Now that MO is part of the SE network.) See here.

– Martin Sleziak – 2015-05-28T12:35:20.223

I very much like your answer although I'd put what I consider to be a different spin on it. To me the key point of interest in showing the trefoil is non-trivial is that it shows that one can talk in a quantitative, analytical way about a concept that at first glance seems to have nothing to do with standard conceptions of what mathematics is about. A trefoil has no obvious quantitative, numerical thing associated to it. In contrast, the statement $\pi > 3$ is very much steeped in traditional mathematical language so it's rather unsurprising that mathematics can say things about it. – Ryan Budney – 2010-09-06T20:18:13.153

The knot tabulators were consciously discussing mathematics, however. They just had no tools to handle it. The ideas they were surely thinking of were analytic, related to Gauss's linking integrals and the representation of spatial curves by equations. The quantitative numerical things they were looking at were crossing number, alternation, chirality... things they had no idea how to calculate. – Daniel Moskovich – 2010-09-06T20:58:45.150

Starting with the definition of a virtual knot in terms of generalized Reidemeister moves given in the Wikipedia article, I personally would not be surprised even if all v knots turned out to be trivial! Presumably, reasonable people who can see that Kishino's v knot is nontrivial have another, more intuitive understanding of virtual knots. – Victor Protsak – 2010-09-06T21:21:51.630


Victor- you can understand a virtual knot as a knot in a thickened higher genus surface, modulo (de)stabilization. Consider Kishino's knot in a genus 2 surface. It's self-evident that there's nothing you can do to simplify this knot, let alone undo it, because there is no way to "unlink the virtual crossings from the handles"- it's "stuck". (De)stabilizing the surface is "obviously useless". A rigorous proof is a different matter.

– Daniel Moskovich – 2010-09-06T22:20:44.220

Let me make my point more hyperbolically. That one can rigorously show that a trefoil can't be untangled is one of the most effective mechanisms one can use to communicate to people that modern mathematics deals with sophisticated and substantial ideas. Mathematics as a subject wasn't solved with the development of the Excel spreadsheet. :) – Ryan Budney – 2010-09-07T00:28:54.417


Surely calculus is the ultimate treasure trove for such examples. In antiquity, the Egyptians, Greeks, Indians, Chinese, and many others could calculate integrals with a pretty good degree of certainty via the method of exhaustion and its variants. But it is not without reason that Newton and Leibniz are credited with the invention of calculus. Because once you had a formalism, a proof, of the product rule, the chain rule, Taylor expansions, the calculation of an integral (in fact, once you had the formalism in hand to make such a proof possible) then with that came an understanding, and from that sprang the most powerful analytic machine known to man, that is calculus. Without a formalism, Zeno's paradox was just that: a paradox. With the concept of limits and of epsilon-delta proofs, it becomes a triviality.
Thus, in my opinion, proof is important in that it leads to mathematics. Mathematics is important in that it leads to understanding patterns, and patterns govern all of science and the universe. If you can prove something, you understand it, or at least "your concepts understand it". If you can't prove it, you're nothing more than a goat, knowing the sun will rise in the morning from experience or from experiment, but having not the slightest inkling of why.
The specific example, then, is "calculating integrals" and "solving differential equations".

With the reader's indulgence, an example of a mathematical proof saving lives. My friend's mum is an aeronautical engineer at a place which designs fighter jets. There was some wing design whose wind resistance satisfied some PDE. They simulated it numerically by computer, and everything looked perfect. My friend's mum, who had studied PDEs seriously at university and thought this one could be solved rigorously, set about finding an exact solution, and lo and behold, there was an unexpected singularity: if wind were to blow at some speed from some direction, the wing would shear off. She pointed this out, was awarded a medal, and the wing design was changed. Lives saved by a proof. I'm sure there are a thousand examples like that.

Daniel Moskovich

Posted 2010-09-03T13:08:22.820

Reputation: 12 674

By Calculus you apparently mean Analysis, because Calculus at the time of Newton and Leibniz was not rigorous. – Sergei Akbarov – 2012-05-04T19:45:26.030

Do you have a citation for the wing story? Otherwise if I repeat it the story becomes "I read about this guy on the internet who had a friend whose mother...". – Dan Piponi – 2010-09-03T17:12:59.487

Zeno's paradoxes are a bit richer than what you make them out to be. Notions of convergent series and epsilon-deltas deal with real numbers; the paradox is about reality, so that the real paradox becomes: "how is it possible that reality is so well captured by calculus?" (or isn't). – Thierry Zell – 2010-09-03T18:07:52.240

Not to politicise this, but is it clear which of (1) having a properly working fighter jet or (2) the opposite saves more lives in the end? – José Figueroa-O'Farrill – 2010-09-03T18:31:53.810

Unfortunately not... it will have to stay purely anecdotal. – Daniel Moskovich – 2010-09-03T18:38:28.677

José, I believe it would be a rare country that wouldn't re-stock its supply of fighter jets, so I believe the risk trade-off is pretty clear: either way you have jets, they just fail more or less often. – Ryan Budney – 2010-09-03T18:42:51.530

@sigfpe, I can corroborate that Daniel Moskovich exists and I've never caught him fabricating the truth. – Ryan Budney – 2010-09-03T18:44:02.513


There is the Tacoma Narrows Bridge collapse, which can be used as a verifiable substitute for the fighter jet story in this comment. Whether a bridge with such a design could collapse or not can be viewed (after some modeling) as a mathematical statement about whether a certain sort of trajectory is possible for a system of differential equations:

– alex – 2010-09-03T19:03:36.830

There is no lack of examples where failing to detect resonance (or flutter) led to disaster... what was nice here was the happy ending. Happy endings are harder to document, because the disaster never happened, and nobody wants anyone else to know about potentially averted disasters with their products. – Daniel Moskovich – 2010-09-03T19:41:53.757

All stories about wings falling off airplanes due to design errors are jokes or urban legends. In practice wings are not only tested extensively, but are built with huge error tolerances. And I doubt that anyone could find an exact solution of a realistic differential equation modeling 3-dimensional air flow over an airplane wing. – Richard Borcherds – 2010-09-03T22:03:09.267

Way to ruin the story Richard! – Robby McKilliam – 2010-09-03T22:33:20.843

Tacoma Narrows was designed with tolerances too. It's just that the tolerances didn't anticipate the resonant frequency of the structure. – Ryan Budney – 2010-09-03T22:49:26.507

Putting the Greeks into the prehistory of rigor is a somewhat unfair historical perspective. The idea of rigor, of an axiomatic system, and of deductive proof is a product of Greek thought; and without Greek science, Newton and Leibniz might not have been possible. – Pietro Majer – 2010-09-04T10:27:58.770

Russo argues in The Forgotten Revolution that the Greeks had a level of rigorous scientific theory that was only recovered in the West in the 19th century. Most concepts in Newton and Leibniz seem not well founded without a theory of real numbers, limits, etc. that came much later. So maybe Leibniz and Newton (and Euler!) are an argument that it is sometimes important for the development of mathematics to be not fully rigorous and to dream about what may be true. – Maarten Bergvelt – 2010-09-05T15:04:11.977

Newton was painfully aware that his calculus lacked rigour, which is why his proofs in Principia Mathematica don't use it. – TonyK – 2010-09-05T23:52:06.490


This is not quite the same story as the one Daniel cited, but it's similar:

– Timothy Chow – 2010-09-06T22:36:26.160


If it's not the design that's wrong, it's the pilot, who can put unexpected stresses on the plane's frame with high-acceleration maneuvers which can shear off a wing. It usually happens on test flights during design evaluation, so it's usually not publicly known. Recent example in public literature:

– sleepless in beantown – 2010-09-08T10:33:01.960

I don't know details of which equation it was, or why exactly the wing was going to shear off, or whether the flaw would have been detected anyway in wind-tunnel testing or in test flights. A key point, I think, was that a trusted computer programme numerically simulated solutions to the PDE and found no issue, while a human (my friend's mother) who happened "by chance" to have the energy and knowledge to solve it discovered that there was in fact a huge issue. Mathematics, or proof, saves the day! – Daniel Moskovich – 2010-09-08T11:55:54.540


Mumford in Rational equivalence of 0-cycles on surfaces gave an example where an intuitive result of Severi, who claimed the space of rational equivalence classes was finite dimensional, was just completely wrong: it is infinite dimensional for most surfaces. This is a typical example of why the informal non-rigorous style of algebraic geometry was abandoned: too many of the "obvious" but unproved results turned out to be incorrect.

Richard Borcherds

Posted 2010-09-03T13:08:22.820

Reputation: 16 053


Maybe some of the answers to this question about "eventual counterexamples" - i.e., statements which could plausibly be true for all $n$ but which fail for some large number - are relevant?

Some highlights from that question:

  • $\gcd(n^5−5,(n+1)^5−5)=1$ is true for $n=1,2,\dots,1435389$ but not for $n=1435390$; and many similar examples
  • factors of $x^n−1$ over the rationals have no coefficient exceeding 1 in absolute value - until $n=105$
  • The Strong Law of Small Numbers, a fun paper by Guy
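Both of the first two items are cheap to check by machine. Here is a quick stdlib-only Python sketch (the polynomial code is an illustration written for this list, not anything from the original sources; the offending coefficient of the 105th cyclotomic polynomial is the well-known $-2$):

```python
from functools import lru_cache
from math import gcd

# 1. gcd(n^5 - 5, (n+1)^5 - 5) = 1 ... right up until n = 1435390.
assert gcd(1435389**5 - 5, 1435390**5 - 5) == 1
assert gcd(1435390**5 - 5, 1435391**5 - 5) > 1

# 2. Coefficients of the irreducible factors of x^n - 1 (the cyclotomic
#    polynomials), via exact division: Phi_n = (x^n - 1) / prod_{d|n, d<n} Phi_d.
def exact_div(num, den):
    """Divide integer polynomials (coefficient lists, lowest degree first);
    assumes den is monic and the division is exact."""
    num = list(num)
    quot = [0] * (len(num) - len(den) + 1)
    for i in reversed(range(len(quot))):
        quot[i] = num[i + len(den) - 1]
        for j, c in enumerate(den):
            num[i + j] -= quot[i] * c
    return quot

@lru_cache(maxsize=None)
def cyclotomic(n):
    poly = [-1] + [0] * (n - 1) + [1]          # x^n - 1
    for d in range(1, n):
        if n % d == 0:
            poly = exact_div(poly, cyclotomic(d))
    return tuple(poly)

# Every coefficient has absolute value <= 1 ... until n = 105.
assert all(abs(c) <= 1 for n in range(1, 105) for c in cyclotomic(n))
assert min(cyclotomic(105)) == -2
```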

Tom Smith

Posted 2010-09-03T13:08:22.820

Reputation: 825

That Strong Law of Large Numbers paper is fantastic! – Robby McKilliam – 2010-09-03T13:44:14.647

Nitpick: The title of Guy's paper is "The Strong Law of Small Numbers". – Ravi Boppana – 2010-09-03T14:01:50.263

Oh yes... muscle memory must have taken over as I typed. Thanks! – Tom Smith – 2010-09-03T14:04:02.840


The evidence for both quantum mechanics and for general relativity is overwhelming. However, one can prove that without serious modifications, these two theories are incompatible. Hence the (still incomplete) quest for quantum gravity.

Peter Shor

Posted 2010-09-03T13:08:22.820

Reputation: 5 078

Devil's advocate: Couldn't one argue that this demonstrates the opposite? Our theories are mathematically incompatible, yet they can compute the outcome of any fundamental physical experiment to a half dozen digits. Clearly, this shows that mathematical consistency is overrated! – David E Speyer – 2010-09-06T12:06:53.893

@David: hush hush! – Mariano Suárez-Álvarez – 2010-09-06T13:27:15.880

@David, you may be right: quantum mechanics and general relativity are incompatible, and the first time that they come into mathematical conflict the universe will end. This should be around the time when the first black hole evaporates, around $10^{66}$ years from now. – Peter Shor – 2010-09-06T14:01:02.757

Indeed, non-rigorous mathematical computations and heuristic arguments in physics are spectacularly successful for good experimental predictions, and even for mathematics. Yet the accuracy David talked about is only in small fragments of standard physics. Of course, just as we can ask what the purpose of rigor is, we can also ask what the purpose of gaining the 7th accurate digit in experimental predictions is. The answer, that better predictions, like rigorous proofs, enlighten us, is a good partial answer. – Gil Kalai – 2010-09-07T05:17:53.507


There were intensive discussions regarding the role of rigor on physics weblogs. Here is an example from Distler's blog "Musings":

– Gil Kalai – 2010-09-07T08:07:13.933


I think the recent work on compressed sensing is a good example.

As I understand from listening to a talk by Emmanuel Candes - please correct me if I get anything wrong - the recent advances in compressed sensing began with an empirical observation that a certain image reconstruction algorithm seemed to perfectly reconstruct some classes of corrupted images. Candes, Romberg, and Tao collaborated to prove this as a mathematical theorem. Their proof captured the basic insight that explained the good performance of the algorithm: $l_1$ minimization finds a sparse solution to a system of equations for many classes of matrices. It was then realized that this insight was portable to other problems and that analogous tools could work in many other settings where sparsity is an issue (e.g., computational genetics).

If Candes, Romberg, and Tao had not published their proof, and if only the empirical observation that a certain image reconstruction works well was published, it is possible (likely?) that this insight would never have penetrated outside the image processing community.


Posted 2010-09-03T13:08:22.820

Reputation: 593


Here's an example:
In the Mathscinet review of "Y-systems and generalized associahedra", by Sergey Fomin and Andrei Zelevinsky, you find:

Let $I$ be an $n$-element set and $A=(a_{ij})$, $i,j\in I$, an indecomposable Cartan matrix of finite type. Let $\Phi$ be the corresponding root system (of rank $n$), and $h$ the Coxeter number. Consider a family $(Y_i(t))_{i\in I,\,t\in\Bbb{Z}}$ of commuting variables satisfying the recurrence relations $$Y_i(t+1)Y_i(t-1)=\prod_{j\ne i}(Y_j(t)+1)^{-a_{ij}}.$$ Zamolodchikov's conjecture states that the family is periodic with period $2(h+2)$, i.e., $Y_i(t+2(h+2))=Y_i(t)$ for all $i$ and $t$.

That conjecture claims that an explicitly described algebraic map is periodic. The conjecture can be checked numerically by plugging in real numbers with 30 digits and iterating the map the appropriate number of times. If you see, time after time, that the numbers you get back agree with the initial values to 29-digit accuracy, then you start to be pretty confident that the conjecture is true.
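In fact, in small cases the check can even be done in exact arithmetic. For example (an illustrative computation added here, not from the original answer), for $A_2$ the Cartan matrix has $a_{12}=a_{21}=-1$ and $h=3$, so the recurrence reads $Y_i(t+1)Y_i(t-1)=Y_j(t)+1$, and iterating it over the rationals exhibits the period $2(h+2)=10$:

```python
from fractions import Fraction

# Y-system for A_2: Y_i(t+1) Y_i(t-1) = Y_j(t) + 1 for {i, j} = {1, 2}.
def step(prev, cur):
    """One step; prev = (Y1(t-1), Y2(t-1)), cur = (Y1(t), Y2(t))."""
    return (cur[1] + 1) / prev[0], (cur[0] + 1) / prev[1]

# Arbitrary positive initial data, in exact rational arithmetic.
prev = (Fraction(1), Fraction(3))   # (Y1(0), Y2(0))
cur = (Fraction(2), Fraction(5))    # (Y1(1), Y2(1))
orbit = [prev, cur]
for _ in range(2 * (3 + 2)):        # iterate 2(h+2) = 10 steps
    prev, cur = cur, step(prev, cur)
    orbit.append(cur)

# Zamolodchikov periodicity: Y_i(t + 10) = Y_i(t), exactly.
assert orbit[10] == orbit[0] and orbit[11] == orbit[1]
```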

For the $E_8$ case, the proof presented in the above paper involves a massive amount of symbolic computations done by computer. Is it really much better than the numerical evidence?

Conclusion: I think that we only like proofs when we learn something from them. It's not the property of "being a proof" that is attractive to mathematicians.

André Henriques

Posted 2010-09-03T13:08:22.820

Reputation: 25 849

Gian-Carlo Rota would agree, for he said (in "The Phenomenology of Mathematical Beauty") that we most value a proof that enlightens us. – Joseph O'Rourke – 2010-09-05T03:00:05.797

That's a very interesting example, even if it is an example of the opposite of what I asked. My instinct is to think it's good that there's a proof, but I'm not sure how to justify that. And obviously I'd prefer a conceptual argument. – gowers – 2010-09-05T15:26:45.980

And now, of course, we finally understand exactly what Gian-Carlo meant: A proof enlightens us if: 1) it is the first proof, 2) it is accepted, and 3) it has at least 10 upvotes! – Gil Kalai – 2010-09-06T21:05:36.167


I don't think that proofs are about replacing 99% certainty with 99.99% (or 100%, if the proof is simple enough). In one of the problems he studied early on, Fermat stated that it was important to find out whether a prime divides only numbers of the form $a^n-1$, or also numbers of the form $a^n+1$. For $a = 2$ and $a = 3$ he saw that the answer seemed to depend on the residue class of $p$ modulo $4a$. He did not really come back to investigate this problem more closely; Euler did, but couldn't find the proof. Gauss's proofs did not just remove the remaining 1% of uncertainty; they brought in structure and made it possible to ask the next question. Just looking at patterns of prime divisors of $a^n \pm 1$ wouldn't have led to Artin reciprocity.
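To see the kind of pattern Fermat was looking at, here is a small stdlib-only experiment for $a=2$ (my illustration, not part of the original answer): an odd prime $p$ divides some $2^n+1$ exactly when $-1$ occurs among the powers of $2$ modulo $p$, and sorting primes by their residue mod $8$ makes part of the pattern, and its limits, jump out:

```python
def divides_some_2n_plus_1(p):
    """True iff the odd prime p divides 2^n + 1 for some n, i.e. iff -1
    occurs among the powers of 2 modulo p."""
    x = 2 % p
    while x != 1:
        if x == p - 1:
            return True
        x = 2 * x % p
    return False

# Odd primes below 1000, by trial division.
primes = [p for p in range(3, 1000, 2) if all(p % q for q in range(3, p, 2))]

# The pattern Fermat observed (and Gauss explained via quadratic residues):
# for p = 3, 5 (mod 8) the answer is always yes; for p = 7 (mod 8) always no...
assert all(divides_some_2n_plus_1(p) for p in primes if p % 8 in (3, 5))
assert not any(divides_some_2n_plus_1(p) for p in primes if p % 8 == 7)
# ...but for p = 1 (mod 8) the residue class alone does not decide:
assert divides_some_2n_plus_1(17) and not divides_some_2n_plus_1(73)
```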

Franz Lemmermeyer

Posted 2010-09-03T13:08:22.820

Reputation: 22 070


[Edited to correct the Galileo story] An old example of a plausible result that was overthrown by rigor is the 17th-century example of the hanging chain. Galileo once said (though he later said otherwise), and Girard claimed to have proved, that the shape was a parabola. But this was disproved by Huygens (then aged 17) by a more rigorous analysis. Some decades later, the exact equation for the catenary was found by the Bernoullis, Leibniz, and Huygens.

In the 20th century, some people thought it plausible that the shape of the cable of a suspension bridge is also a catenary. Indeed, I once saw this claim in a very popular engineering mathematics text. But a rigorous argument shows (with the sensible simplifying assumption that the weight of the cable is negligible compared with the weight of the road) that the shape is in fact a parabola.
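The contrast between the two rigorous arguments is short enough to state here (a standard sketch, with $H$ the horizontal tension and $w$ the load per unit length). A load distributed uniformly per unit of horizontal distance, as for a bridge deck, gives

$$Hy''=w\quad\Longrightarrow\quad y=\frac{w}{2H}x^2+c_1x+c_2,$$

a parabola; a load distributed uniformly per unit of arc length, as for a freely hanging chain, gives

$$Hy''=w\sqrt{1+(y')^2}\quad\Longrightarrow\quad y=\frac{H}{w}\cosh\frac{w(x-c_1)}{H}+c_2,$$

a catenary.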

John Stillwell

Posted 2010-09-03T13:08:22.820

Reputation: 9 944

The story about Galileo and the hanging chain is a myth: he was well aware that it is approximately but not exactly a parabola, and even commented that the approximation gets better for shorter chains. If you take a very long chain with the ends close together, which Galileo was perfectly capable of doing, it is obviously not a parabola. – Richard Borcherds – 2010-09-03T21:43:57.433


Richard, thanks for the correction. One of Galileo's statements seems to call the catenary a parabola, but perhaps he was making only a rough comparison. At any rate, it was a common belief, and Huygens disproved it in a letter to Mersenne in 1646, which is described in

– John Stillwell – 2010-09-03T22:19:31.553

It may be a mistranslation: my guess is he meant that a catenary RESEMBLES a parabola. Later on in the same book he makes it clear that he knows they are different. – Richard Borcherds – 2010-09-03T22:44:33.337

I deleted my previous comment as incomprehensible.

This story has a bit of a continuation. It is possible to prove using variational calculus that a uniformly loaded cable without self-weight takes the shape of a parabola. As a result, it was commonly believed that the optimal shape to carry a uniform loading between two level points was the lightest parabola. We proved very recently that, actually, the optimal structure in this case is a lot more sophisticated and features a parabolic rib connected to the supports by fan-shaped networks of mutually orthogonal bars in compression and tension. – Aleksey Pichugin – 2010-09-03T22:46:46.507

Years ago, I saw in "Scientific American" a 2-page ad for some calculator. One of the two pages was a photograph of a suspension bridge. Across the top was the equation of a catenary. – Andreas Blass – 2010-09-04T01:27:58.097

I was very fascinated by Richard Borcherds's comments and looked at two different translations of Galileo's book (I also found a quote from the original text, but my Italian is not good enough). The hanging line is definitely described as taking the shape of a parabola, but this statement is given in a section describing quick ways of sketching parabolae. Indeed, later in the book, Galileo talks about the shape of a hanging chain being parabolic only approximately: "the Fitting shall be so much more exact, by how much the describ'd Parabola is less curv'd, i.e. more distended". – Aleksey Pichugin – 2010-09-04T11:15:25.737

The story I got from Lockwood's "A Book of Curves" was that it was Jungius who first disproved that a heavy rope would take a parabolic shape when hung freely, but stopped short of determining the actual equation for the catenary. – J. M. is not a mathematician – 2010-09-08T06:35:44.493

@J.M.: Jungius may have been the first to publish a proof that the catenary is not a parabola (1669), but the proof of Huygens in his letter to Mersenne was earlier (1646). – John Stillwell – 2010-09-08T15:10:05.980

More than two decades... good to know, thanks John! – J. M. is not a mathematician – 2010-09-08T20:45:10.623


  1. Nonexistence theorems cannot be demonstrated with numerical evidence. For example, the impossibility of classical geometric construction problems (trisecting the angle, doubling the cube) could only be shown with a proof that the efforts in the positive direction were futile. Or consider the equation $x^n + y^n = z^n$ with $n > 2$. [EDIT: Strictly speaking my first sentence is not true. For example, the primality of a number is a kind of nonexistence theorem -- this number has no nontrivial factorization -- and one could prove the primality of a specific number by just trying out all the finitely many numerical possibilities, whether by naive trial division or a more efficient rigorous primality test. Probabilistic primality tests, such as the Solovay--Strassen or Miller--Rabin tests, allow one to present a short amount of compelling numerical evidence, without a proof, that a number is quite likely to be prime. What I should have written is that nonexistence theorems are usually not (or at least some of them are not) demonstrable by numerical evidence, and the geometric impossibility theorems which I mentioned illustrate that. I don't see how one can give real evidence for those theorems short of a proof. Lack of success in making the constructions is not convincing: the Greeks couldn't construct a regular 17-gon by their rules, but Gauss showed much later that it can be done.]

  2. You can't apply a theorem to all commutative rings unless you have a proof of the result which works that broadly. Otherwise math just becomes conjectures upon conjectures, or you have awkward hypotheses: "For a ring whose nonzero quotients all have maximal ideals, etc." Emmy Noether revolutionized abstract algebra by replacing her predecessors' tedious computational arguments in polynomial rings with short conceptual proofs valid in any Noetherian ring, which not only gave a better understanding of what was done before but revealed a much broader terrain where earlier work could be used. Or consider the true scope of harmonic analysis: it can be carried out not just in Euclidean space or Lie groups, but in any locally compact group. Why? Because, to get things started, Weil's proof of the existence of Haar measure works that broadly. How are you going to collect 99% numerical evidence that all locally compact groups have a Haar measure? (In number theory and representation theory one integrates over the adeles, which are in no sense like Lie groups, so the "topological group" concept, rather than just "Lie group", is really crucial.)

  3. Proofs tell you why something works, and knowing that explanatory mechanism can give you the tools to generalize the result to new settings. For example, consider the classification of finitely generated torsion-free abelian groups, finitely generated torsion-free modules over any PID, and finitely generated torsion-free modules over a Dedekind domain. The last classification is very useful, but I think its statement is too involved to believe it is valid as generally as it is without having a proof.

  4. Proofs can show in advance how certain unsolved problems are related to each other. For instance, there are tons of known consequences of the generalized Riemann hypothesis because the proofs show how GRH leads to those other results. (Along the same lines, Ribet showed how modularity of elliptic curves would imply FLT, which at the time were both open questions, and that work inspired Wiles.)
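As a footnote to point 1: the Solovay--Strassen test mentioned there fits in a dozen lines of Python (a sketch added for illustration; each agreeing round halves the chance of error at most, so 30 agreeing rounds leave at most a $2^{-30}$ chance of a false "probably prime"):

```python
import random

def jacobi(a, n):
    """Jacobi symbol (a | n) for odd n > 0, via quadratic reciprocity."""
    a %= n
    result = 1
    while a:
        while a % 2 == 0:           # pull out factors of 2
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                 # reciprocity step
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def probably_prime(n, rounds=30):
    """Solovay-Strassen: check a^((n-1)/2) = (a | n) mod n for random a."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        j = jacobi(a, n)
        if j == 0 or pow(a, (n - 1) // 2, n) != j % n:
            return False            # a is a witness: n is certainly composite
    return True                     # numerical evidence, not a proof!

assert probably_prime(2**31 - 1)    # a Mersenne prime
assert not probably_prime(561)      # a Carmichael number, still caught
```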


Posted 2010-09-03T13:08:22.820

Reputation: 27 105

Re 1: Not all evidence is numerical! It had been known for a long time that $x^n+y^n=z^n$ doesn't admit a solution in non-constant polynomials in 1 variable for $n\geq 3,$ Kummer proved FLT for regular prime exponents, etc. I wouldn't try to assign a numerical "confidence rating" to these developments prior to Wiles and Taylor-Wiles. – Victor Protsak – 2010-09-03T23:32:53.583

Victor: I didn't intend to suggest the only kind of mathematical evidence is of the numerical kind. When writing (1) I imagined that someone who would have the attitude that proof is not important would think that the only way to collect evidence for a result is by working out some concrete cases, not by using proofs. In fact what you wrote supports what I wrote: numerical evidence isn't any good for FLT, say, so you do need to seek other kinds of evidence and in fact those other kinds are proofs (of special cases or analogues). – KConrad – 2010-09-04T14:43:01.753

The kind of thing I had in mind was the following argument for Goldbach's conjecture. The mere fact that it is true up to some large $n$ is not that convincing, but the fact that if you assume certain randomness heuristics for the primes you can predict how many solutions there ought to be to $p_1+p_2=2n$, and that more refined prediction is true up to a large $n$, is, to my mind, very convincing indeed. – gowers – 2010-09-04T17:21:03.657


Since you bring up prime heuristics on the side of numerical evidence in lieu of a proof, I will point out one problem with them. Cramer's "additive" probabilistic model for the primes, which suggested results that seemed consistent with numerical data, does predict relations that are known to be wrong. See Granville's paper, especially starting on page 5.

– KConrad – 2010-09-05T01:49:18.830

I think what we learn from that example is that those kinds of heuristics have to be treated with care when we start looking at very delicate phenomena such as gaps between primes. But the randomness properties you'd need for Goldbach to be true (in its more precise form where you approximate the number of solutions) are much more robust, so far more of a miracle would be needed for the predictions they make to be false. – gowers – 2010-09-05T19:18:30.890


Oh, I'm not disputing the value of the Hardy--Littlewood type prime distribution heuristics. At the same time, I should point out that if you apply their ideas on prime values of polynomials (so not Goldbach, but still good w.r.t. numerical data) to the distribution of irreducible values of polynomials with coefficients in F_q[u], there are provable counterexamples and this has an explanation: the Mobius function on F_q[u] has predictable behavior along some irreducible polynomial progressions. For a survey on this, see

– KConrad – 2010-09-06T01:20:08.503

Some non-existence results can be proven by numerical evidence -- for example, infeasibility of a linear program can be demonstrated by a solution to a dual program derived from it (a separating hyperplane). Or non-primality of a positive integer can often be demonstrated by a witness of the form $a^{p-1} \ne 1 \bmod p$. – Victor Miller – 2010-09-06T15:48:20.880

Victor: I wouldn't consider primality to be an existence result (and thus non-primality to be a non-existence result) in the usual sense, but rather the other way around: primality is more like a non-existence result (no nontrivial factorization). And I just remembered that one can collect numerical evidence for primality: run one of the probabilistic primality tests a lot. If we applied, say, the Solovay-Strassen test to n thirty times and for every such a we get a^((n-1)/2) = (a|n) mod n, then that is pretty compelling numerical evidence that n is prime. I am editing my answer accordingly. – KConrad – 2010-09-06T16:00:03.877

To be fair, I doubt that the ancient Greeks were interested in constructing the regular 17-gon; at least, I never saw any evidence that they considered this question. Let us not underestimate the ingenuity of our scientific forefathers: for example, the description of various methods of duplication of the cube occupies pp. 246-270 in the 1st vol. of Heath, A History of Greek Mathematics, and relates the work of Archytas, Eudoxus, Menaechmus, Plato (attributed), Eratosthenes, Nicomedes, Apollonius, Heron, Philon of Byzantium, Diocles, Sporus and Pappus, some of whom gave multiple solutions. – Victor Protsak – 2010-09-07T06:53:22.890

There is a philosophical difference between what one can prove and what one would feel safe to bet on. For example, consider Heilbronn and Linfoot's paper (in 1934) about quadratic imaginary fields of class number 1 (Gauss's problem). In it they proved a rather stupendous lower bound for the putative 10th discriminant of class number 1 (which was proven not to exist by Kurt Heegner about 15 years later). If I were betting at the time I would have felt confident to bet on its non-existence. – Victor Miller – 2010-09-07T13:44:06.700


One can rigorously prove that pyramid schemes cannot run forever, and that no betting system with finite monetary reserves can guarantee a profit from a martingale or submartingale.
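The betting claim can even be verified exactly in a few lines. For the classic "double after every loss" system on a fair coin, a bankroll of $2^k-1$ survives at most $k$ straight losses, and the expected profit is identically zero (a small illustrative computation, not from the original answer):

```python
from fractions import Fraction

def doubling_strategy_expectation(k):
    """Expected profit of the double-after-loss system on a fair coin,
    with a bankroll of 2^k - 1 allowing at most k consecutive losses."""
    p_ruin = Fraction(1, 2) ** k        # probability of k losses in a row
    gain = Fraction(1)                  # the first win nets exactly +1
    ruin = Fraction(2**k - 1)           # otherwise the whole bankroll is lost
    return (1 - p_ruin) * gain - p_ruin * ruin

# No bankroll, however large, produces a positive expectation.
assert all(doubling_strategy_expectation(k) == 0 for k in range(1, 50))
```

The near-certain small win is exactly cancelled by the rare total ruin, which is the martingale theorem in miniature.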

But there are countless examples of people who have suffered monetary loss due to their lack of awareness of the rigorous nature of these non-existence proofs. Here is a case in which having a non-rigorous 99% plausibility argument is not enough, because one can always rationalise that "this time is different", or that one has some special secret strategy that nobody else thought of before.

In a similar spirit: a rigorous demonstration of a barrier (e.g. one of the three known barriers to proving P != NP) can prevent a lot of time being wasted on pursuing a fruitless approach to a difficult problem. (In contrast, a non-rigorous plausibility argument that an approach is "probably futile" is significantly less effective at preventing an intrepid mathematician or amateur from trying his or her luck, especially if they have excessive confidence in their own abilities.)

[Admittedly, P!=NP is not a great example to use here as motivation, because this is itself a problem whose goal is to obtain a rigorous proof...]

Terry Tao

Posted 2010-09-03T13:08:22.820

Reputation: 53 985

@Gil: this is where a proof can give more than what you set out to prove. For the pyramid scheme, not only can we prove it cannot run forever, but we can also extract quantitative evidence to show that the odds of your getting in early enough are close to zero. Of course, this will not convince the numerically illiterate, but I'm convinced there is a non-negligible portion of the population that you could reach in this way. – Thierry Zell – 2011-03-19T05:20:30.167

I doubt if the main problem is that people are not aware of the rigorous nature of non-existence proofs. First, for most people the meaning of a rigorous mathematical proof makes no sense and has no importance. The empirical fact that pyramid schemes have always ended in a collapse in the past should be more convincing. But even people who realize that the pyramid scheme cannot run forever (from past experience or from mathematics) may still think that they can make money by entering early enough. (The concept "run forever" is an abstraction whose relevance should also be explained.) – Gil Kalai – 2010-09-08T07:01:42.703

"[P!=NP] is itself a problem whose goal is to obtain a rigorous proof..." ... that it is impossible to find a polynomial algorithm that solves NP-complete problems. A proof of P!=NP would be another impossibility proof, so that would be an example of a rigorous proof that would prevent people from wasting a lot of time. – Sune Jakobsen – 2010-09-08T07:34:30.170


Here is an example: 19th-century geometers extended Euler's formula V-E+F=2 to higher dimensions: the alternating sum of the numbers of $i$-faces of a $d$-dimensional polytope is 2 in odd dimensions and 0 in even dimensions. The 19th-century proofs were incomplete, and the first rigorous proof came with Poincaré and used homology groups. Here, what enabled a rigorous proof was arguably even more important than the theorem itself.
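As a sanity check on the statement (an illustration added here, worked out for the cube): the $d$-cube has $\binom{d}{i}2^{d-i}$ faces of dimension $i$, and the alternating sum behaves exactly as claimed:

```python
from math import comb

def cube_face_numbers(d):
    """f_i = number of i-dimensional faces of the d-dimensional cube,
    for i = 0, ..., d-1 (vertices, edges, ..., facets)."""
    return [comb(d, i) * 2 ** (d - i) for i in range(d)]

def alternating_sum(f):
    return sum((-1) ** i * f_i for i, f_i in enumerate(f))

assert cube_face_numbers(3) == [8, 12, 6]      # V, E, F of the ordinary cube
assert alternating_sum([8, 12, 6]) == 2        # Euler's V - E + F = 2
# Generalized Euler relation: 2 in odd dimensions, 0 in even dimensions.
assert all(alternating_sum(cube_face_numbers(d)) == (2 if d % 2 else 0)
           for d in range(1, 13))
```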

Gil Kalai

Posted 2010-09-03T13:08:22.820

Reputation: 12 545

This is not so much an example of increased care in an argument as of the development of critical technology needed to prove a result. The need for such technology is not always clear to mathematicians when they begin to formulate such arguments. The important question here is whether or not the general form of Euler's formula could have been proven WITHOUT it. In dimensions less than or equal to 3, there are many alternate proofs using purely combinatorial arguments. I'm not sure it can be proven without homology in higher-dimensional spaces. – The Mathemagician – 2010-09-04T23:39:45.713


Andrew: Yes, it can be proven without homology for higher dimensional polytopes: see

– David Eppstein – 2010-09-05T05:02:39.077

It does happen that techniques developed in order to give a proof are more important than the theorem itself. In this case, as it turned out a few decades after Poincaré, the high-dimensional Euler theorem can be proved without algebraic topology, and in the 70s even the specific gaps in the 19th-century proofs were fixed; but the new technology allows for extensions of the theorem that cannot be proved by elementary methods, and it also shed light on the original Euler theorem: that the Euler characteristic is a topological invariant. – Gil Kalai – 2010-09-05T05:10:19.613


Many examples that have been given are of statements that one could at least formulate, and conjecture, without rigorous proof. However, one of the most important benefits of rigorous proof is that it allows us to step surefootedly along long, intricate chains of reasoning into regions that we previously never suspected existed. For example, it is hard to imagine discovering the Monster group without the ability to reason rigorously.

In any other field besides mathematics, as soon as a line of abstract argument exceeds a certain (low) threshold of complexity, it becomes doubtful, and unless there is some way to corroborate the conclusion empirically, the argument lapses into controversy. If you are trying to search a very large space of possibilities, then it is indispensable to be able to close off certain avenues definitively so that the search can be focused effectively on the remaining possibilities. Only in mathematics are definitive impossibility proofs so routine that we can rely on them as a fundamental tool for discovering new phenomena.

The classification of finite simple groups is a particularly spectacular example, but I would argue that almost any unexpected mathematical object—the BBP formula for $\pi$, the Lie group $E_8$, the eversion of the sphere, etc.—is the product of a sustained search involving the systematic and rigorous elimination of dead end after dead end. Of course, once an object is discovered, you might try to argue that mathematical rigor was not really necessary and that someone could have stumbled across it with a combination of luck, persistence, and insight. However, I find such an argument disingenuous. Mathematical rigor allows us to distribute the workload across the entire community; each reasoner can contribute his or her piece without worrying that it will be torn to shreds by controversy. Searches can therefore be conducted on a massively greater scale than would be possible otherwise, and the productivity is correspondingly magnified.

Timothy Chow

Posted 2010-09-03T13:08:22.820

Reputation: 30 733


I have found that a strong indicator of research ability is a student wanting to know why something is true. There is also an interesting distinction between an explanation and a proof. (I gave up using the word "proof" for first year students of analysis, and changed it to "explanation", a word they could understand. This was after a student complained I gave too many proofs!)

A proof takes place in a certain conceptual landscape, and the clearer and better formed this is, the easier it is to be sure the proof is right rather than a complicated manipulation. So part of the work of a mathematician is to develop these landscapes: Grothendieck was a master of this!

Of course the more professional a person is in an area the nearer an explanation comes to a rigorous proof. But in fact we do not write down all the details. It is more like giving directions to the station and not listing the cracks in the pavement, though warning of dangerous holes.

The search for an explanation is also related to the search for a beautiful proof. So we should not neglect the aesthetic aspect.

Ronnie Brown

Posted 2010-09-03T13:08:22.820

Reputation: 9 589


Tim Gowers wrote:

But I was, and am, more interested in good examples of cases where a proof of a statement that was widely believed to be true and was true gave us much more than just a certificate of truth.

How about Stokes' Theorem?

The two-dimensional version involving line and surface integrals is "proved" in most physics textbooks using a neat little picture dividing up the surface into little rectangles and shrinking them to zero.

Similarly, the Divergence Theorem, relating volume and surface integrals, is demonstrated with very intuitive ideas about liquid flowing out of tiny cubes.

But to prove these rigorously requires developing the theory of differential forms, whose consequences go far beyond the original theorems.
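For what it's worth, the tiny-cubes picture is easy to corroborate numerically. Here is a minimal midpoint-rule check of the Divergence Theorem on the unit cube, for an arbitrarily chosen field $F = (x^2, y^2, z^2)$ (the field and grid size are just illustrative choices):

```python
# Check the divergence theorem on the unit cube [0,1]^3 for the field
# F = (x^2, y^2, z^2), whose divergence is 2(x + y + z).
N = 40
h = 1.0 / N
mid = [(i + 0.5) * h for i in range(N)]  # midpoint grid on [0, 1]

# Volume integral of div F (the midpoint rule is exact for a linear integrand):
volume_integral = sum(2 * (x + y + z)
                      for x in mid for y in mid for z in mid) * h**3

# Outward flux: F.n vanishes on the faces x=0, y=0, z=0 and equals 1 on
# each of the faces x=1, y=1, z=1, so the total flux is 3 * (face area) = 3.
flux = 3 * sum(1.0 for u in mid for v in mid) * h**2

print(abs(volume_integral - flux) < 1e-9)  # True
```

Of course, such a check is exactly the kind of evidence the question distinguishes from proof: it confirms one instance, while the theory of differential forms explains all of them.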


Posted 2010-09-03T13:08:22.820

Reputation: 396

2In fact, the original motivation for the Bourbaki project was the scandalous state of affairs that no rigorous proof of the Stokes theorem could be located in the (French?) literature. The rest is history... – Victor Protsak – 2010-09-08T08:44:16.977

@Victor, maybe you remember: is there a proof of the theorem in Bourbaki? – Mariano Suárez-Álvarez – 2010-09-08T09:57:52.507

Mariano, I don't know with certainty, but I doubt it: the volume on differential and analytic manifolds was only a "survey of results". There may have been more "in the fridge", and of course Dieudonne's Analysis and other textbooks that owe their existence to the Bourbaki philosophy contain proofs of the Stokes theorem. – Victor Protsak – 2010-09-08T21:18:50.520

2I don't see how this example demonstrates the importance of rigour. Most engineers or physicists (e. g. doing electrodynamics) can get by perfectly well with "tiny-cubes" proof of Stokes' theorem and nothing ever gets wrong from using this intuitive approach. – Michal Kotowski – 2010-09-08T22:35:30.163

4@MichalKotowski: that is not inconsistent with the notion that mathematics has benefited hugely from a rigorous theory of differential forms. – gowers – 2010-09-10T17:19:07.353


This question is begging for someone to state the obvious, so here goes.

Take for example the existence and uniqueness of solutions to differential equations. Without these theorems, the mathematical models used in many branches of the physical sciences are incapable of making actual predictions. If potentially the DE has no solutions, or the model provides infinitely many solutions, your model has no predictive power. So the model isn't really science.

In that regard, the point of proof in mathematics is to give the quantitative physical sciences a firm philosophical foundation.

Moreover, the proofs of existence and uniqueness shed light on the behaviour of solutions, allowing one to make precise predictions about how good various approximations are to actual solutions -- giving a sense for how computationally expensive it is to make reliable predictions.
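A standard textbook illustration of the stakes (not drawn from any particular physical model): the initial value problem $y' = \sqrt{y}$, $y(0) = 0$ violates the Lipschitz hypothesis of the Picard–Lindelöf uniqueness theorem at $y = 0$, and it really does admit more than one solution:

```python
import math

# y' = sqrt(y), y(0) = 0 fails the Lipschitz condition at y = 0 and has
# (at least) two solutions: y(t) = 0 and y(t) = t**2 / 4.  Check both:
def rhs(y):
    return math.sqrt(y)

for t in [0.5, 1.0, 2.0]:
    # trivial branch: y = 0, so y' = 0 = sqrt(0)
    assert abs(0.0 - rhs(0.0)) < 1e-12
    # nontrivial branch: y = t^2/4, so y' = t/2 = sqrt(t^2/4)
    assert abs(t / 2 - rhs(t**2 / 4)) < 1e-12

print("two distinct solutions satisfy the same initial value problem")
```

A model built on such an equation cannot predict which branch nature will follow; the uniqueness theorem tells you exactly when that danger is excluded.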

Ryan Budney

Posted 2010-09-03T13:08:22.820

Reputation: 30 895

7I regret to say that most physicists seem to neither know nor care about rigorous proofs of existence and uniqueness theorems. But they seem to have no trouble doing good physics without them. – Richard Borcherds – 2010-09-03T21:55:45.480

Perhaps I should have replaced "to exist" with "to have a solid philosophical foundation". – Ryan Budney – 2010-09-03T22:30:11.017

4In physics, having a formula or approximation scheme depending on N parameters (= dim. of phase space) shows existence and uniqueness locally, with global questions of singularities, attractors, topology etc understood by calculation and simulation. For classical ODE this is almost always enough and there are very few cases where careful analysis and error estimates overturned accepted physics ideas. There are more cases where physics heuristics drove the mathematics and some where they changed intuitions that prevailed in the math community. – T.. – 2010-09-04T07:58:53.007

4@T it sounds like you're assuming existence and uniqueness. What are you approximating? Approximations don't matter if you don't know what you're approximating. Moreover, if you're approximating one of infinitely-many solutions, this gives you no sense for how all the solutions (with certain initial conditions) behave, and limits your ability to predict anything. – Ryan Budney – 2010-09-04T18:26:38.570

1Uniqueness is overrated. Some questions an engineer may ask do not have a unique answer. You may call them ill-posed, granted, but it is them that the engineer needs answering. We can usually sharpen the questions to make answers unique, but they won't be the questions originally asked. For example, in structural optimization the Michell optimality criterion is formulated to give non-unique solutions. It turns out that all these solutions are physically equivalent. In this case, non-uniqueness helps solving problems, because it allows one to select solutions with simpler analytical structure. – Aleksey Pichugin – 2010-09-04T18:49:20.497

@Aleksey, this seems to miss my point. If the non-unique solutions are equivalent physically then you essentially have uniqueness. This is an example of having existence and uniqueness, but of a more elaborate nature. My example was meant to be simple. – Ryan Budney – 2010-09-04T19:04:31.863

1@Ryan: Well, yes and no. On the one hand, in structural optimization you can have several vector fields solving the same problem (provided the solution exists). From the point of view of someone developing a numerical solution method, this is just as good non-uniqueness as any, with associated convergence problems etc. On the other hand, if we were to define our idea of uniqueness by "how physical it is", we may fall into other fallacy, common amongst engineering types. Some of them may be happy to accept *any* answer that matches a particular set of experimental data within, say, 1%. – Aleksey Pichugin – 2010-09-04T19:12:23.580

Sorry, Ryan, it seems that I replied to your original message! I suppose, for me as an applied mathematician, being rigourous involves always having to listen to both sides of the argument and trying to reconcile the two. – Aleksey Pichugin – 2010-09-04T19:26:15.757

1@Aleksey, I feel like this is rather unrelated to the point of my post. The post was about theories with predictive abilities -- like the basic philosophical test of "what science is" where one demands falsifiability. Although solving optimization problems isn't entirely unrelated I'm not seeing where you're going with this. I'd like to suggest you start a thread on meta if you want to create a discussion. – Ryan Budney – 2010-09-04T19:31:29.820

No worries, Aleksey. – Ryan Budney – 2010-09-04T19:32:43.143

6May I point out that models which do not satisfy existence or uniqueness can be very useful. For an example, consider the Navier-Stokes equations (you get the Clay Prize for proving existence). It is quite possible that there are initial conditions where the solution does not exist. This could happen because the Navier-Stokes equations assume that the fluid is incompressible, and all real fluids are (at least to some very small degree) compressible. Even if existence were to fail to be satisfied, these equations would still have enormous predictive power, and be real science. – Peter Shor – 2010-09-05T17:04:00.553

1Yes, certainly. Undoubtedly a resolution of existence and uniqueness for Navier-Stokes would shed light on the nature of fluid flow. A (too-) many-orders-of-magnitude simpler example would be the ODE for the amount of water in a leaky bucket $y'=-\sqrt(y)$. The reverse-time non-uniqueness of solutions amounts to saying the ODE loses information, once a leaky-bucket has leaked itself empty, you can't tell how long it's been empty by the volume of water in it. :) – Ryan Budney – 2010-09-05T18:59:44.663

8From a physical point of view, uniqueness is a claim about causality. Given our belief in causality, any proposed law of nature that does not obey uniqueness may well be considered physically defective.

But perhaps more important than uniqueness is the stronger property of continuous dependence on the data. (The former asserts that the same data will lead to the same solution; the latter asserts that nearby data leads to nearby solutions.) Once one has this, one has some confidence that one's model can be numerically simulated with reasonable accuracy, and also be resistant to noise. – Terry Tao – 2010-09-08T05:46:01.850


A rich source of examples may be found in the study of finite element methods for PDEs in mixed form. Proving that a given mixed finite element method provided a stable and consistent approximation strategy was usually done 'a posteriori': one had a method in mind, and then sought to establish well-posedness of the discretization. This meant a proliferation of methods and strategies tailored for very specific examples.

In the bid for a more comprehensive treatment and unifying proofs, the finite element exterior calculus was developed and refined (e.g., the 2006 Acta Numerica paper by Arnold, Falk and Winther). The proofs revealed the importance of using discrete subspaces which form a subcomplex of the Hilbert complex, as well as bounded co-chain projections (we now call this the 'commuting diagram property'). These ideas, in turn, provided an elegant design strategy for stable finite element discretizations.

Nilima Nigam

Posted 2010-09-03T13:08:22.820

Reputation: 1 096


Take the usual definition of the Eisenstein series $$G_k (\tau) = \sum\limits_{(c,d) \in \mathbb{Z}^2 - \{(0,0)\}} (c \tau + d)^{-k}$$ for an even integer $k \geq 2$. It is easy to "prove" that $$G_k \left( \frac{a \tau + b}{c \tau + d}\right) = (c \tau + d)^k G_k (\tau)$$ by manipulating the infinite sum, proving that $G_k$ is a modular form of weight $k$.

However, the above formula is true only for $k \geq 4$ and fails for $k=2$, the reason being that the sum is not absolutely convergent for $k=2$, so the rearrangement is invalid. One can show instead that $$G_2 \left( \frac{a \tau + b}{c \tau + d}\right) = (c \tau + d)^2 \left[ G_2 (\tau) - \frac{2 \pi i c}{c \tau + d} \right]$$ The annoying additional term spoils modularity and is of tremendous importance. In particular, there are no nonzero modular forms of weight $2$ for the full modular group, which is a good example of how crucial the "rigorous" step of checking whether a sum converges absolutely can be.
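The failure is easy to witness numerically. The sketch below uses the normalized series $E_k = G_k / 2\zeta(k)$ and their standard $q$-expansions $E_2 = 1 - 24\sum \sigma_1(n)q^n$, $E_4 = 1 + 240\sum \sigma_3(n)q^n$; after this normalization the extra term above (with $c=1$, $d=0$) becomes $-6i\tau/\pi$. The sample point $\tau = 2i$ is an arbitrary choice:

```python
import cmath

def sigma(k, n):
    """Divisor power sum sigma_k(n)."""
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

def E2(tau, terms=60):
    """Weight-2 Eisenstein series via its q-expansion."""
    q = cmath.exp(2j * cmath.pi * tau)
    return 1 - 24 * sum(sigma(1, n) * q**n for n in range(1, terms))

def E4(tau, terms=60):
    """Weight-4 Eisenstein series via its q-expansion."""
    q = cmath.exp(2j * cmath.pi * tau)
    return 1 + 240 * sum(sigma(3, n) * q**n for n in range(1, terms))

tau = 2j        # a sample point in the upper half-plane
s = -1 / tau    # its image under the inversion tau -> -1/tau

print(abs(E4(s) - tau**4 * E4(tau)) < 1e-9)                           # True: honest weight-4 modularity
print(abs(E2(s) - (tau**2 * E2(tau) - 6j * tau / cmath.pi)) < 1e-9)   # True: the anomalous term is forced
```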


Posted 2010-09-03T13:08:22.820

Reputation: 225


Based on the recent update to the question, Fermat's Last Theorem seems like the top example of a proof being far more valuable than the truth of the statement. Personally, it's a rare occurrence for me to use the nonexistence of a rational point on a Fermat curve, but, for instance, it is quite common for me to use class numbers.


Posted 2010-09-03T13:08:22.820

Reputation: 3 060


Circle division by chords leads to a sequence whose first terms are 1, 2, 4, 8, 16, 31. It's simple and effective to draw the first five cases on a blackboard, count the regions, and ask the students what the next number in the sequence is.
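For completeness, the correct count for $n$ points in general position is $\binom{n}{4} + \binom{n}{2} + 1$ (an exercise in Euler's formula), and a few lines of code display how deceptive the start of the sequence is:

```python
from math import comb

def regions(n):
    # Chords among n generic points on a circle cut the disc into
    # C(n,4) + C(n,2) + 1 regions: one interior crossing per 4-subset,
    # one chord per 2-subset, then apply Euler's formula V - E + F = 2.
    return comb(n, 4) + comb(n, 2) + 1

print([regions(n) for n in range(1, 11)])
# [1, 2, 4, 8, 16, 31, 57, 99, 163, 256]
```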

Federico Poloni

Posted 2010-09-03T13:08:22.820

Reputation: 11 194

8I like that example and have used it myself. However, the conjecture that the number of regions doubles each time has nothing beyond a very small amount of numerical evidence to support it, and the best way of showing that it is false is, in my view, not to count 31 regions but to point out that the number of crossings of lines is at most n^4, from which it follows easily that the number of regions grows at most quartically. – gowers – 2010-09-05T15:23:43.207

15More mischievously, the sequence is 1, 2, 4, 8, 16, ... , 256, ... – Richard Borcherds – 2010-09-06T04:28:06.370

3That I never knew. What a great fact! – gowers – 2010-09-06T19:25:01.247


I think that the question itself is entirely misleading. It tacitly assumes that mathematics could be separated into two parts: mathematical results and their proofs. But mathematics is nothing other than the proofs of mathematical results. Mathematical statements by themselves lack any value; they are neither good nor bad. From the mathematical point of view, it is entirely immaterial whether the answer to a mathematical question like "Is there an even integer greater than two that is not the sum of two primes?" is yes or no. Mathematicians are simply not interested in the right answer. What they would like to do is to solve the problem.

That is the main difference between the natural sciences or engineering on the one hand, and mathematics on the other. A physicist would like to know the right answer to his question and is not interested in the way it is obtained. An engineer needs a tool that he can use in the course of his work; he is not interested in the way a useful device works. Mathematics is nothing other than a specific set consisting of different solutions to different problems and, of course, some unsolved problems waiting to be solved. Proofs are not merely important for mathematics; they constitute the body of knowledge we call mathematics.

Gyorgy Sereny

Posted 2010-09-03T13:08:22.820

Reputation: 159

8This is a very Bourbakist view. Much of interesting mathematics does not conform to it, because ideas and open problems are just as important in mathematics as rigorous proofs (even leaving aside the distinction between theory and proofs that is not appreciated by non-mathematicians). – Victor Protsak – 2010-09-05T04:27:49.350

4Victor, Gyorgy's point of view does not conflict with the importance of ideas and open problems. Still for a large part of mathematics proofs is the essential part. The relation between a mathematical result and its proof can often be compared to the relation between the title of a picture or a poem or a musical piece and the content. – Gil Kalai – 2010-09-05T05:20:36.743

6Gil, gowers' question addressed the distinction between an intuitive proof and a rigorous proof and my comment was written with that in mind. Using your artistic analogies, let me say that a piece of music cannot be reduced to the set of notes, nor a poem to the set of words, that comprise it (the analogy obviously breaks down for "modern art", such as atonal music and dadaist poetry). – Victor Protsak – 2010-09-05T07:28:33.073


I tend to think that mathematics or---better---the activity we mathematicians do, is not so much defined by (let me use what's probably nowadays rather old fashioned language) its material object of study (whatever it may be: it is surely very difficult to try to pin down exactly what it is that we are talking about...) but by its formal object, by the way we know what we know. And, of course, proofs are the way we know what we know.

Now: rigour is important in that it allows us to tell apart what we can prove from what we cannot, what we know in the way that we want to know it.

(By the way, I don't think that it is fair to say that, for example, the Italians were not rigorous: they were simply wrong.)

Mariano Suárez-Álvarez

Posted 2010-09-03T13:08:22.820

Reputation: 36 267

I entirely agree with your first sentence. – gowers – 2010-09-06T21:20:20.787

3I once heard a less accurate but punchier version of this in some idiosyncratic history of mathematics lectures: "in mathematics, we don't know what it is we are doing, but we know how to do it". – Yemon Choi – 2010-09-08T05:08:00.750

3Yemon, there is a famous definition of mathematics, perhaps, from "What is mathematics?" by Courant and Robbins: mathematics is what mathematicians do. – Victor Protsak – 2010-09-08T08:53:49.757


"Sufficient unto the day is the rigor thereof."-E.H.Moore.

There's a lot of discussion over not only the role of rigor in mathematics, but whether or not it is a function of time and point in history. Clearly, what was a rigorous argument to Euler would not pass muster today in a number of cases.

Passing from generalities to specific cases, I think the prototype of statements which were almost universally accepted as true without proof was the early-19th-century notion that globally continuous real-valued functions had to have at most a finite number of non-differentiable points. Intuitively, it's easy to see why, in a world ruled by handwaving and graph drawing, this would be seen as true. Which is why the counterexamples by Bolzano and Weierstrass were so conceptually devastating to this way of approaching mathematics.
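To make the counterexample tangible: take $W(x)=\sum_{n\ge0} a^n\cos(b^n\pi x)$ with, say, $a=1/2$ and $b=13$ (parameters chosen here purely for illustration; they satisfy Weierstrass's condition $ab > 1 + 3\pi/2$). The truncated sum below is of course smooth, but its difference quotients at $0$ grow roughly like $h^{\ln 2/\ln 13 - 1}$ as $h$ shrinks, which is the numerical shadow of nowhere-differentiability:

```python
import math

def W(x, a=0.5, b=13, terms=40):
    """Truncated Weierstrass function: sum of a^n * cos(b^n * pi * x)."""
    return sum(a**n * math.cos(b**n * math.pi * x) for n in range(terms))

# Difference quotients at x = 0 blow up instead of converging:
for k in range(1, 7):
    h = 10.0**-k
    print(k, (W(h) - W(0.0)) / h)
```

Each decade of shrinking $h$ multiplies the quotient's magnitude by roughly $10^{0.71} \approx 5$, instead of letting it settle toward a derivative.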

Edit: I see Jack Lee already mentioned this example "below" in his excellent list of such cases. But to be honest, I don't think his first example is really about rigor so much as a related but more profound change in our understanding of how mathematical systems are created. The main reason no one thought non-Euclidean geometries made any sense was that most scientists believed Euclidean geometry was an empirical statement about the fundamental geometry of the universe. Studies of mechanics supported this until the early 20th century; as long as one stays in the "common sense" realm of the world our five senses perceive and does not approach relativistic velocities or distances, this is more or less true. Eddington's eclipse experiments finally vindicated not only Einstein's conceptions but, indirectly, non-Euclidean geometry, which until that point was greeted with skepticism outside of pure mathematics.

The Mathemagician

Posted 2010-09-03T13:08:22.820

Reputation: 299

They knew the Bible in Moore's day: "Sufficient unto the day is the evil thereof" Matthew 6:34. This is not to suggest, of course, that rigour is evil. – Mark Bennet – 2012-01-01T21:54:14.680

1This seems more like a discussion on Jack Lee's post than an answer. – Jonas Meyer – 2010-09-04T23:27:28.250

Uh,Jonas-I didn't even see Jack Lee's post until I finished my original. – The Mathemagician – 2010-09-04T23:41:54.230

I believe you, but I think that in terms of the content, about half of which was added after you did see that post, it comes off largely as a discussion of that post. There was also an update to the question a couple of hours ago clarifying the types of examples most desired. – Jonas Meyer – 2010-09-04T23:48:41.727


I have the tendency to think that the need for absolute certainty is related to the arborescent structure of mathematics. The mathematics of today rests upon layers of more ancient theories, and after piling up 50 layers of concepts, if you are sure of each previous layer only with 99% confidence, a disaster is bound to happen, and a beautiful branch of the tree will disappear with all the mathematicians living on it. This is rather unique among the natural sciences, with the exception of extremely sophisticated computer programs; but in mathematics, you will have to fix the equivalent of the Y2K bug by yourself.

Of course, there are people who are willing to take the risk to see what they have achieved collapse in front of their eyes by working under the assumption that an unproven, but highly plausible, result is true (like Riemann hypothesis or Leopoldt conjecture). In some cases this is actually a good way to be on top of the competition (think of the work of Skinner and Urban on the main conjecture for elliptic curves which rests upon the existence of Galois representations that were not proven to exist before the completion of the proof of the Fundamental Lemma).


Posted 2010-09-03T13:08:22.820

Reputation: 1


Isn't the point that human reason is generally frail altogether, especially when drawing conclusions through long serial chains of argument? So in mathematics, where such extended arguments are routine, we want their soundness to be as close to ideal as possible. Of course, even generally accepted proofs are occasionally later seen to be lacking, but to give up proofs as the ideal would change the very nature of mathematics.

I have heard that the work of parts of the Italian school of algebraic geometry, in the late 19th and early 20th centuries, is an important example of this overextension of intuition.

Furthermore, it is only in the attempt at proof that the real nature of the reasons why a statement is true is finally exposed. So the reformulation and refoundation of algebraic geometry in the 20th century is said to have exposed revolutionary new ways of seeing mathematics in general.

Finally, it is only by proof that the limits of applicability of a theorem are really understood. This comes into play many times in physics, say when some "no-go theorem" is evaded because its assumptions are not valid in some new realm.


Posted 2010-09-03T13:08:22.820

Reputation: 175

1I'm not downvoting this, but it seems overly vague to me. I think for this issue specific examples of problems that intuitive methods get wrong are more convincing than generalities about the nature of human reasoning. The Italian school example is a good one but again needs to be made more specific. – David Eppstein – 2010-09-03T18:18:15.940


Regarding the topic of Italian algebraic geometry, this question and some of the comments and answers may be of interest: (The linked email of David Mumford, in the comment by bhwang, is particularly interesting.)

– Emerton – 2010-09-03T18:49:19.407


The way current computer algebra systems (that I know of) are designed is a compromise between ease of use and mathematical rigor. Although in practice, most of the answers given by CASes are correct, the lack of rigor is still a problem because the user cannot fully trust the results (even under the assumption that the software is bug-free). Now, it might sound like just another case of "99% certainty is enough," but in practice it means having to verify the results independently afterwards, which could be considered unnecessary extra work.

The root of the problem seems to be that a CAS manipulates expressions when it should output theorems instead. In many cases, the expressions simply don't have any rigorous interpretation. For example, variables are usually not introduced explicitly and thus not properly quantified; in the result of an indefinite integral they might even appear out of nowhere. Dealing with undefinedness is another problem.
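To caricature the point (this is a toy rule, not a quote of any actual CAS): the antiderivative rule $\int x^n\,dx = x^{n+1}/(n+1)$ is a theorem only together with the hypothesis $n \neq -1$; applied as a bare expression rewrite, nothing records that hypothesis:

```python
# A toy "CAS rule": the antiderivative of x**n is x**(n+1)/(n+1).  Stated as
# a theorem it carries the hypothesis n != -1; stated as an expression
# rewrite it does not, and nothing stops us applying it where it is false.
def naive_antiderivative(n):
    return lambda x: x**(n + 1) / (n + 1)

F = naive_antiderivative(2)
print(F(2.0) - F(1.0))        # 2.3333... (= 7/3): fine, the hypothesis held

G = naive_antiderivative(-1)  # should have been log(x); hypothesis violated
try:
    G(2.0)
except ZeroDivisionError:
    print("the unstated hypothesis n != -1 just failed")
```

A CAS that output theorems rather than expressions would have to carry the side condition along explicitly.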

All of this is inherent in the architecture of computer algebra systems, so it cannot be fixed properly without switching to a different design. The extra 1% of certainty may indeed not justify such a change. But if rigor had been emphasized more from the start, maybe we would have trustworthy CASes now.

I think this line of thought can be generalized. (As a non-mathematician) I can't help but wonder how mathematics would have progressed without the widespread introduction of rigor in the 19th century. I can't really imagine what things would be like if we still didn't have a proper definition of what a function is. So maybe rigor is indeed not strictly necessary in particular cases, but it has shaped mathematical practice in general.

Sebastian Reichelt

Posted 2010-09-03T13:08:22.820

Reputation: 218

2I tend to not use CAS packages that aren't open-source. Knowing the precise details of the implementation of the algorithm allows you to understand its limitations, when it will and will not function. There is still uncertainty in this of course -- did I really understand the algorithm? Is the computer hardware faulty? Does the compiler not compile the code correctly? And so on. Open-source also has the advantage that the algorithms are re-usable and not "canned". – Ryan Budney – 2010-09-04T20:40:06.840


The answer to another MO question What did Ramanujan get wrong? cites Bruce Berndt (Ramanujan's Notebooks, Part IV) for a discussion of some cases where Ramanujan's legendary intuition went astray and led him to incorrect beliefs about the accuracy of certain approximations and asymptotic expansions. So this is a concrete example of a generally reliable but non-rigorous approach yielding false statements sometimes.

Timothy Chow

Posted 2010-09-03T13:08:22.820

Reputation: 30 733


Richard Lipton recently blogged about this question in the context of why a potential proof of $P \neq NP$ would be important. I am probably bastardizing his words, but one of the reasons he gives is that a proof may give new insight and methods of attack for other problems. He cites Wiles' proof of Fermat's Last Theorem as an example of this phenomenon.

Tony Huynh

Posted 2010-09-03T13:08:22.820

Reputation: 18 841


This question already has an amazing number of great answers. Being a physicist with very limited knowledge of mathematics, I certainly cannot expect to contribute something of equal value; however, after reading through the answers I find a certain aspect missing, namely: what is a proof? The "rigorization" of calculus with $\epsilon$/$\delta$ proofs was mentioned as a major step of progress. However, I saw nothing questioning whether the current foundations of mathematics might themselves still be "improved", where it is of course an interesting question what that would mean. A while ago, when I was trying to find the answer to the above question, I came across the following story:

Fields medalist Vladimir Voevodsky, when working on rather scary stuff that I do not begin to understand (motivic cohomology...), came across many cases where ground-breaking published papers with proofs contained errors that would only be noticed years later. Sometimes this would render a lot of later work worthless. The errors were not noticed even though people were studying the papers in seminars. When it happened to him, it genuinely scared him and got him to start working on computer-assisted proof, as well as on an axiomatization of mathematics that goes beyond ZFC, called "Univalent Foundations". It has several conceptual advantages but apparently is not widely known (and not complete yet!). The aim is to produce a framework in which computer-verified proofs are a practical option (contrary to now, where such proofs are extremely cumbersome and impossible to use on a regular basis in publications).

Returning closer to the question at hand: A quote from his motivation (a lengthy article containing many references and statements relevant to this question) for the foundational work of his is:

A technical argument by a trusted author, which is hard to check and looks similar to arguments known to be correct, is hardly ever checked in detail.

Granted, the results I'm describing here are far removed from practical applications, such as would interest an engineer. However, it isn't entirely unreasonable that fields like theoretical physics will eventually come into contact with parts of mathematics of similar obscurity. The verdict is: even in the 21st century, having a published result with a proof is not enough to be sure it's true, not even in the most renowned journals and from the most trusted authors. To achieve that, one would need to push the limits of rigour even further, thus acknowledging its importance (where of course it is up for debate whether and how we want to do this).

By the way, Voevodsky gave a recorded talk in which he considers the question "what if the current foundations of mathematics are inconsistent?", where he tries to imagine how one would work in a framework known to be inconsistent, rather than in one where one hopes, but can never prove, that everything is fine.

Adomas Baliuka

Posted 2010-09-03T13:08:22.820

Reputation: 111

1This doesn't seem to me to be an answer to the question. The points made here might be worth raising in a new question. – Gerry Myerson – 2017-04-24T05:58:55.680

1I thought this was answering the question because it "Demonstrat[es] that rigour is important". It also points out cases where having a (correct!) proof for something turned out to be very important for salvaging some results, which lost credibility and also the interest of the mathematical community, when their fundamentals turned out to be wrong. If more people think my impression was wrong, I strongly encourage someone to delete the answer. – Adomas Baliuka – 2017-04-24T06:10:58.640


In response to the request for an example of a statement that was widely but erroneously believed to be true: does Gauss's conjecture that $\pi(n) < \operatorname{li}(n)$ for every integer $n \geq 2$, disproved by Littlewood in 1914, qualify?
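It qualifies nicely, because the inequality holds at every $n$ anyone will ever compute directly: the first crossing is known to occur below about $10^{317}$, but no explicit $n$ with $\pi(n) \geq \operatorname{li}(n)$ has ever been exhibited, so tables of values are actively misleading. A small check of the kind that fed the erroneous belief (here $\operatorname{li}$ is approximated by a midpoint rule from $2$ plus the constant $\operatorname{li}(2) \approx 1.04516378$; the step count is an ad hoc accuracy choice):

```python
import math

def prime_count(n):
    """pi(n) by a sieve of Eratosthenes."""
    is_p = [True] * (n + 1)
    is_p[0] = is_p[1] = False
    for p in range(2, int(n**0.5) + 1):
        if is_p[p]:
            for m in range(p * p, n + 1, p):
                is_p[m] = False
    return sum(is_p)

def li(x, steps=200000):
    """li(x) = li(2) + integral from 2 to x of dt/log(t), midpoint rule."""
    h = (x - 2.0) / steps
    return 1.04516378 + sum(h / math.log(2.0 + (i + 0.5) * h)
                            for i in range(steps))

for n in [10**3, 10**4, 10**5]:
    print(n, prime_count(n), round(li(n), 1), prime_count(n) < li(n))
    # the last column is True every time
```

Littlewood's theorem shows the comparison changes sign infinitely often, so no amount of such tables could ever have settled the question.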

Greg Marks

Posted 2010-09-03T13:08:22.820

Reputation: 1 058

2It's a good example of the need for rigor. But I've always been skeptical of the story that it was widely believed to be true, since any competent mathematician familiar with Riemann's 1859 explicit formula for π(n) would have realized that there are almost certainly going to be occasional exceptions. (Littlewood removed the word "almost" by a more careful analysis.) – Richard Borcherds – 2010-09-08T19:58:40.970


This is not strictly an answer to the question, since it is a hypothetical rather than an example, but perhaps relevant nonetheless. Suppose that you have some computer program, say for keeping an airliner stable under gusts of wind, and it relies on a numerical algorithm, proved to converge under reasonable assumptions about air pressure, wind velocity, and so on. A faster and more stable algorithm is developed for which no proof of convergence is known, though all the researchers in the field assure you that it always converges and that they are certain it will always converge in the plausible scenarios your code is likely to be used for. I think it is clear that you should not trust their judgment, but rather retain the old code, despite the clear desirability of having a faster numerical algorithm.

So from the perspective of the researchers, a proof might not be all that important; it may have only told them what they already knew and generated no new insights in the process. But from the perspective of the consumers of mathematics, knowledge that a proof exists may lead to incremental improvements in technology that would otherwise not happen. Of course, at this point we've come full circle and it becomes important to the researchers to supply a proof.

My second point again distinguishes between consumers of mathematics and researchers: It is sometimes much easier to become 100% certain of something than 99% certain. 99% certainty that a given statement is true requires thinking about many concrete examples and developing intuition, whereas 100% certainty requires logically assenting to the statements contained in a proof. By this standard, I would say that I am 100% certain about the bulk of my mathematical knowledge, and not 99% certain. Perhaps this is a lamentable state of affairs, but time is finite. We cannot hope to develop intuition about all the statements we wish to use while working on problems we do wish to develop intuition about. In that sense, proofs encode in a few kilobytes the vast amount of information stored as the intuitions of all the researchers working on a particular problem. Again, under this model, the purpose of proofs is for the convenience of non-researchers.


Posted 2010-09-03T13:08:22.820

Reputation: 161

I doubt that an airline would not choose greater efficiency (which usually comes with lower costs) even if there is a small percentage of risk left. – Johannes Hahn – 2011-05-04T14:25:20.213


I'm not sure it answers the question, but for me, providing (and understanding) a rigorous proof is a way to be sure you have understood the deep thing behind what may seem "obvious", for good or bad reasons. An example I like to give to non-mathematicians is Cantor's proof that the cardinality of $[0,1]$ is strictly bigger than that of $\mathbb N$.

The next metaphor is, I imagine, well known, but it serves the purpose of the question. You start by asking what "two sets have the same cardinality" means. For that, I like to tell the story of the shepherd who can count only up to, say, 100, but has 1,000 sheep. Every year the sheep go away to the pastures and come back, and the clever shepherd collects small rocks, keeping "one rock = one sheep" in his head, to see whether he has lost, or gained, sheep a year later. Once this notion of bijection is acquired, you ask about the bijection between $\mathbb N$ and $2\mathbb N$, using a piece of paper to illustrate. Finally, you argue by contradiction and prove that you cannot have "one rock = one sheep" between $[0,1]$ and $\mathbb N$. Surprisingly, even the most mathophobic listeners feel that their intuition is wrong, and they seem to enjoy admitting to themselves that something deep happened: infinity is a complicated notion, and they were able to touch it. Maybe a mathematical proof is something that helps to transform a philosophical discussion into a statement that nobody who accepts elementary axioms can argue against.
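The diagonal argument behind Cantor's proof can even be animated in a few lines of Python (an illustrative toy of my own, not part of the original answer): represent a real in $[0,1]$ by a function from decimal places to digits, and show that any proposed enumeration of such reals necessarily misses one.

```python
def diagonalize(enumeration):
    """Given a purported enumeration of reals in [0,1], where
    enumeration(n) is the digit function of the n-th real,
    return the digit function of a real the enumeration misses."""
    def new_digit(k):
        d = enumeration(k)(k)        # k-th digit of the k-th listed real
        return 5 if d != 5 else 6    # differ at place k; 5/6 avoids 0.999... ambiguity
    return new_digit

# Toy enumeration: the n-th "real" has every digit equal to n mod 10.
enum = lambda n: (lambda k, n=n: n % 10)
x = diagonalize(enum)

# x differs from the n-th listed real at decimal place n, for every n we check:
assert all(x(n) != enum(n)(n) for n in range(1000))
```

The point of the toy is only that the construction is uniform: whatever enumeration you feed in, the diagonal real escapes it, which is exactly the "no rock-to-sheep matching" of the metaphor.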

Adrien Hardy



The fundamental lemma is an example of a statement that most believed and on whose truth several results depend. According to Wikipedia, Professor Langlands has said

... it is not the fundamental lemma as such that is critical for the analytic theory of automorphic forms and for the arithmetic of Shimura varieties; it is the stabilized (or stable) trace formula, the reduction of the trace formula itself to the stable trace formula for a group and its endoscopic groups, and the stabilization of the Grothendieck–Lefschetz formula. None of these are possible without the fundamental lemma and its absence rendered progress almost impossible for more than twenty years.

and Michael Harris has also commented that it was a "bottleneck limiting progress on a host of arithmetic questions."

Anthony Pulido


Fundamental lemma is not even 50% obvious by any stretch of imagination. Like most of the Langlands program, it is not a specific result that admits a precise, uniform statement; rather, it is a guiding principle that needs to be fine-tuned in order to be compatible with other things that we, following Langlands, would like to believe in $-$ then, and only then, does it become meaningful to talk about proving it. – Victor Protsak – 2010-09-06T21:35:44.087


Thank you, Victor. I'm not proposing that the Fundamental Lemma is obvious, but it seems that it was accepted as likely to be true, because others based new work on it. PC below gives the example of Skinner and Urban, and Peter Sarnak says here that "It's as if people were working on the far side of the river waiting for someone to throw this bridge across," ... "And now all of a sudden everyone's work on the other side of the river has been proven."

– Anthony Pulido – 2010-09-06T22:45:10.163


I was under the impression that the fundamental lemma, at least, is a set of statements clear enough to be amenable to proof attempts. I think we have on page 3 here and Theorems 1 and 2 here precise statements for the various fundamental lemmas... It's possible I'm misunderstanding you.

– Anthony Pulido – 2010-09-06T22:55:15.167

The fundamental lemma had a precise formulation, due to Langlands and Shelstad, in the 1980s (following earlier special cases). It is a collection of infinitely many equations, each side involving an arbitrarily large number of terms (i.e. by choosing an appropriate case of the FL, you can make the number of terms as large as you like). It was universally believed to be true because otherwise the theory of the trace formula (some of which was proved, but some of which remained conjectural), as developed by Langlands and others, would be internally inconsistent, something which no-one ... – Emerton – 2010-09-07T04:05:14.170

... believed could be true. This is a typical phenomenon in the Langlands program, I would say: certain very general principles, which one cannot really doubt (at least at this point, when the evidence for Langlands functoriality and reciprocity seems overwhelming), upon further examination, lead to exceedingly precise technical statements which in isolation can seem very hard to understand, and for which there is no obvious underlying mechanism explaining why they are true. But one could note that class field theory (which one knows to be true!) already has this quality. – Emerton – 2010-09-07T04:13:28.047

Emerton, what about the transfer factors? Their definition is part of the statement of the fundamental lemma, but if I recall correctly, it doesn't really follow from general philosophy, and in particular, needs to be given for each type separately (also, some funny sign factors appear). Likewise, the statement of Langlands functoriality needed to be subtly modified several times (by Langlands due to L-indistinguishability, by Arthur based on the progress in our understanding of the trace formula, etc). If that's not moving the goalposts while the ball is in the air, I don't know what is. – Victor Protsak – 2010-09-07T06:23:29.517

Thank you both for your enlightening comments. I've learned a great deal and have something to think about. Emerton... Matthew? I can't be sure that you remember me. I met you in the spring of 2009, I believe, at the IAS. I accompanied you to Nassau St. afterwards, along which way we talked about various things. Thank you again for the nice time and thank you both again for your comments. I hope my answer wasn't an excessive stretch. – Anthony Pulido – 2010-09-08T05:07:28.520

The question was, which statements were widely believed to be true, whose eventual proof led to more than just knowledge of their truth. From considering both your comments, I think I gave an example of precise statements that were believed to be true because they were the results of a plausible principle, whose proof led to more than knowledge of their truth. – Anthony Pulido – 2010-09-08T05:08:19.207

It's possible Professor Gowers was asking for an example of a proof that gave new unexpected knowledge, not something that was known to be an essential ingredient for a body of existing work, unless I missed something, which is possible, and indeed likely, and the proof of the FL indeed has already led to more. – Anthony Pulido – 2010-09-08T05:15:31.460

These statements are difficult to understand in isolation, as Emerton said, and therefore not obvious, as Victor said, if I understand you correctly. According to Emerton it had a precise formulation in the 80s, while Victor claims it didn't: that it evolved in such a way that one can't say it was formulated precisely until recently. Victor, if you have the time, it might help me understand you to know how your comments relate to the short history here:

– Anthony Pulido – 2010-09-08T05:16:02.943


Mathematics wasn't that rigorous before N. Bourbaki: in the Italian school of algebraic geometry at the beginning of the twentieth century, the standard procedure was Theorem, Proof, Counterexample. And at the time of Cauchy, some theorems in analysis began with something like "If the reader doesn't choose an especially bad function, we have..."

The use of rigour in analysis, which Cauchy began, avoided this by making it possible to say exactly what a "good function" was in each case: analytic, $C^{\infty}$, admitting term-by-term differentiation of its series expansion...

Gabriel Furstenheim


-1: Bourbaki seems rather late for an estimated transition to modern standards of rigor. Would you say that Hilbert's work was lacking in rigor? – S. Carnahan – 2010-10-27T07:02:57.727

You're completely right that Hilbert's work wasn't lacking rigour, but it wasn't the standard; think of H. Poincaré. – Gabriel Furstenheim – 2010-10-27T18:59:00.030

@Gabriel : This seems like a delicate historical claim. I would actually be interested to see some sources on the evolution of standards of rigor within the mathematical community. – Andrés E. Caicedo – 2010-12-06T23:11:43.703


@Andres "Nicolas Bourbaki, a project they began in the 1930s, in which they attempted to give a unified description of mathematics. The purpose was to reverse a trend which they disliked, namely that of a lack of rigour in mathematics. The influence of Bourbaki has been great over many years but it is now less important since it has basically succeeded in its aim of promoting rigour and abstraction."

– Gabriel Furstenheim – 2010-12-07T08:26:52.763

@Gabriel : Thanks! – Andrés E. Caicedo – 2010-12-07T22:09:55.890

The unified description of mathematics was not the initial intent of the Bourbaki project. By all accounts, it was to write an up to date analysis textbook (losing a whole generation to World War 1 had left a gap). Of course, the whole thing got out of hand pretty quickly... – Thierry Zell – 2011-03-19T05:15:39.553


In my experience, the two greatest difficulties in mathematics are:

  1. The obvious is not always true.

  2. The truth is not always obvious.

Rigour is the essence of mathematics. A rigorous proof provides an explanation of why a particular mathematical statement is true, and, at the same time, takes care of all the "Yes, but what if"s.

Rigour and proof provide the guarantee of correctness and reliability.

Rigour and proof refine our mathematical insights and instincts so that the superficially "obvious" misleads us less frequently.

When I pose the problem "1, 2, 3, x. Find x.", the initial response is usually a derisive laugh of disbelief that I am serious, because "the answer is obviously 4". It is easy to demonstrate with practical examples that this statement, as it stands, is nonsense. A rigorous analysis is required.
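One concrete way to demonstrate this (my illustration, not the answerer's) is polynomial interpolation: for any value $v$ whatsoever, there is a cubic taking the values 1, 2, 3, $v$ at 1, 2, 3, 4, so the data "1, 2, 3" alone determine nothing. A short Python sketch using exact rational arithmetic:

```python
from fractions import Fraction

def lagrange(points):
    """Return the unique polynomial of degree < len(points)
    passing through the given (x, y) points, as a callable."""
    def p(x):
        total = Fraction(0)
        for i, (xi, yi) in enumerate(points):
            term = Fraction(yi)
            for j, (xj, _) in enumerate(points):
                if j != i:
                    # Lagrange basis factor: 1 at xi, 0 at xj
                    term *= Fraction(x - xj, xi - xj)
            total += term
        return total
    return p

# A cubic rule that continues 1, 2, 3 with 17 instead of the "obvious" 4:
p = lagrange([(1, 1), (2, 2), (3, 3), (4, 17)])
assert [p(n) for n in range(1, 5)] == [1, 2, 3, 17]
```

Replacing 17 by any other value yields another polynomial rule consistent with the first three terms, which is exactly why "find x" is ill-posed without further constraints.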

Imi Bokor



Michael Atiyah's discussion of proof and its role seems too relevant not to be posted here.

Taken from "Advice to a Young Mathematician" in The Princeton Companion to Mathematics. This link was provided by "mathphysicist" in an answer to another MO question: A single paper everyone should read?

Quotation from M. Atiyah:

"In fact, I believe the search for an explanation, for understanding, is what we should really be aiming for. Proof is simply part of that process, and sometimes its consequence."

"... it is a mistake to identify research in mathematics with the process of producing proofs. In fact, one could say that all the really creative aspects of mathematical research precede the proof stage. To take the metaphor of the “stage” further, you have to start with the idea, develop the plot, write the dialogue, and provide the theatrical instructions. The actual production can be viewed as the “proof”: the implementation of an idea.

In mathematics, ideas and concepts come first, then come questions and problems. At this stage the search for solutions begins, one looks for a method or strategy. Once you have convinced yourself that the problem has been well-posed, and that you have the right tools for the job, you then begin to think hard about the technicalities of the proof."

Alexander Chervov


I have described my search in 1965-74 for higher order Seifert-van Kampen theorems as: "An idea for a proof in search of a theorem." It took 9 years to find the gadgets to get a theorem. I think it is quite good to look at the work we do as a search for theorems. There is an apocryphal dedication of a PhD thesis: "I am deeply grateful to Professor X, whose wrong conjectures and fallacious proofs led me to the theorems he had overlooked." Sounds like excellent supervision!! It also leads to the question: What is a theorem? – Ronnie Brown – 2012-05-03T17:35:13.873


Allow me to quote part of the introduction of chapter 9 of Lovász: Combinatorial Problems and Exercises.

The chromatic number is the most famous graphical invariant; its fame being mainly due to the Four Color Conjecture, which asserts that all planar graphs are 4-colorable. This has been the most challenging problem of combinatorics for over a century and has contributed more to the development of the field than any other single problem. A computer-assisted proof of this conjecture was finally found by Appel and Haken in 1977. Although today chromatic number attracts attention for several other reasons too, many of which arise from applied mathematical fields such as operations research, attempts to find a simpler proof of the Four Color Theorem is still an important motivation of its investigation.

So here it's not so much the proof but the search for a proof that has given something extra over just believing the theorem. Does that still count as an answer to this question?

Zsbán Ambrus
