I do not know exactly how to characterize the class of proofs that interests me, so let me give some examples and say why I would be interested in more. Perhaps what the examples have in common is that a powerful and unexpected technique is introduced that comes to seem very natural once you are used to it.

**Example 1.** Euler's proof that there are infinitely many primes.

If you haven't seen anything like it before, the idea that you could use *analysis* to prove that there are infinitely many primes is completely unexpected. Once you've seen how it works, that's a different matter, and you are ready to contemplate trying to do all sorts of other things by developing the method.
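To fix ideas, here is the usual modern rendering of the key step (a sketch; Euler's own presentation was less rigorous):

```latex
\prod_{p \le N} \Bigl(1 - \frac{1}{p}\Bigr)^{-1}
  \;=\; \prod_{p \le N} \sum_{j \ge 0} p^{-j}
  \;\ge\; \sum_{n \le N} \frac{1}{n}
  \;\xrightarrow[N \to \infty]{}\; \infty ,
```

since, by unique factorization, every $n \le N$ has all its prime factors at most $N$. If there were only finitely many primes, the left-hand side would stay bounded as $N \to \infty$; the same computation gives the stronger fact that $\sum_p 1/p$ diverges.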

**Example 2.** The use of *complex* analysis to establish the prime number theorem.

Even when you've seen Euler's argument, it still takes a leap to look at the complex numbers. (I'm not saying it can't be made to seem natural: with the help of Fourier analysis it can. Nevertheless, it is a good example of the introduction of a whole new way of thinking about certain questions.)
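For orientation, a compressed sketch (not the full proof): the prime number theorem is equivalent to $\psi(x) := \sum_{n \le x} \Lambda(n) \sim x$, and the complex-analytic input enters through the identity

```latex
-\frac{\zeta'(s)}{\zeta(s)} \;=\; \sum_{n \ge 1} \frac{\Lambda(n)}{n^{s}}
  \qquad (\operatorname{Re} s > 1),
```

together with the nonvanishing of $\zeta$ on the line $\operatorname{Re} s = 1$, which lets one push a contour integral for $\psi(x)$ past the pole of $\zeta$ at $s = 1$, whose residue contributes the main term $x$.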

**Example 3.** Variational methods.

You can pick your favourite problem here: one good one is determining the shape of a heavy chain in equilibrium.
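As a sketch of how the variational method handles the chain (standard calculus of variations, with $\lambda$ a Lagrange multiplier for the fixed-length constraint):

```latex
\text{minimize } \int y \sqrt{1 + y'^2}\, dx
  \quad \text{subject to} \quad \int \sqrt{1 + y'^2}\, dx = L .
```

Since the integrand of $\int (y - \lambda)\sqrt{1 + y'^2}\, dx$ has no explicit $x$-dependence, the Beltrami identity gives $(y - \lambda)/\sqrt{1 + y'^2} = a$ for a constant $a$, and solving this yields the catenary $y = \lambda + a \cosh\bigl((x - x_0)/a\bigr)$.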

**Example 4.** Erdős's lower bound for Ramsey numbers.

One of the very first results in probabilistic combinatorics (Shannon's bound for the size of a separated subset of the discrete cube being another very early one).
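The argument is short enough to check numerically: colour the edges of $K_n$ red/blue uniformly at random; if the expected number of monochromatic $K_k$'s is less than 1, some colouring has none, so $R(k,k) > n$. A minimal sketch (the function names are mine, for illustration):

```python
from math import comb

def expected_mono_cliques(n: int, k: int) -> float:
    """Expected number of monochromatic K_k's in a uniformly random
    red/blue edge-colouring of K_n: C(n,k) * 2^(1 - C(k,2)), since
    each k-set is monochromatic with probability 2^(1 - C(k,2))."""
    return comb(n, k) * 2.0 ** (1 - comb(k, 2))

def ramsey_lower_bound(k: int) -> int:
    """Largest n, found by linear search, with expectation < 1,
    certifying R(k,k) > n.  This recovers roughly n ~ 2^(k/2)."""
    n = k  # at n = k the expectation is 2^(1 - C(k,2)) < 1 for k >= 2
    while expected_mono_cliques(n + 1, k) < 1:
        n += 1
    return n
```

For example, `ramsey_lower_bound(4)` returns 6, certifying $R(4,4) > 6$; and since the expectation stays below 1 for $n \le 2^{k/2}$ when $k \ge 3$, this recovers Erdős's bound $R(k,k) > 2^{k/2}$.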

**Example 5.** Roth's proof that a dense set of integers contains an arithmetic progression of length 3.

Historically this was by no means the first use of Fourier analysis in number theory. But it was the first application of Fourier analysis to number theory that I personally properly understood, and that completely changed my outlook on mathematics. So I count it as an example (because there exists a plausible fictional history of mathematics where it *was* the first use of Fourier analysis in number theory).
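The Fourier-analytic skeleton, in the now-standard cyclic-group formulation (a sketch): writing $\hat{f}(r) = \sum_{x \in \mathbb{Z}_N} f(x)\, e^{-2\pi i r x / N}$ for the indicator $f$ of $A \subseteq \mathbb{Z}_N$, the number of three-term progressions $(x, x+d, x+2d)$ in $A$ (including trivial ones) is

```latex
\sum_{x,\, d} f(x)\, f(x+d)\, f(x+2d)
  \;=\; \frac{1}{N} \sum_{r \in \mathbb{Z}_N} \hat{f}(r)^{2}\, \hat{f}(-2r).
```

The $r = 0$ term contributes $\delta^3 N^2$, where $\delta = |A|/N$; so if $A$ has no nontrivial progression, some $r \ne 0$ must have $|\hat{f}(r)|$ large, and that bias yields increased density on a subprogression, allowing the argument to iterate.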

**Example 6.** Use of homotopy/homology to prove fixed-point theorems.

Once again, if you mount a direct attack on, say, the Brouwer fixed point theorem, you probably won't invent homology or homotopy (though you might do if you then spent a long time reflecting on your proof).
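For concreteness, the standard homological route (a sketch): if $f : D^n \to D^n$ had no fixed point, the map $r$ sending $x$ to the point where the ray from $f(x)$ through $x$ meets $S^{n-1}$ would be a retraction, i.e. continuous with $r|_{S^{n-1}} = \mathrm{id}$. But then the composite

```latex
H_{n-1}(S^{n-1}) \xrightarrow{\; i_* \;} H_{n-1}(D^n)
  \xrightarrow{\; r_* \;} H_{n-1}(S^{n-1})
```

would be the identity on $H_{n-1}(S^{n-1}) \cong \mathbb{Z}$ (for $n \ge 2$) while factoring through $H_{n-1}(D^n) = 0$, a contradiction. Nothing in the statement of the theorem hints that one should build such an invariant.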

The reason these proofs interest me is that they are the kinds of arguments where it is tempting to say that human intelligence was necessary for them to have been discovered. It would probably be possible in principle, if technically difficult, to teach a computer how to apply standard techniques, the familiar argument goes, but it takes a human to *invent* those techniques in the first place.

Now I don't buy that argument. I think that it is possible in principle, though technically difficult, for a computer to come up with radically new techniques. Indeed, I think I can give reasonably good Just So Stories for some of the examples above. So I'm looking for more examples. The best examples would be ones where a technique just seems to spring from nowhere -- ones where you're tempted to say, "A computer could never have come up with *that*."

**Edit:** I agree with the first two comments below, and was slightly worried about that when I posted the question. Let me have a go at it though. The difficulty with, say, proving Fermat's last theorem was of course partly that a new insight was needed. But that was by no means the only difficulty: in that case a succession of new insights was needed, and not just that but a knowledge of all the different already existing ingredients that had to be put together.

So I suppose what I'm after is problems where essentially the *only* difficulty is the need for the clever and unexpected idea. I.e., I'm looking for problems that are very good challenge problems for working out how a computer might do mathematics. In particular, I want the main difficulty to be fundamental (coming up with a new idea) and not technical (having to know a lot, having to do difficult but not radically new calculations, etc.).

Also, it's not quite fair to say that the solution of an arbitrary hard problem fits the bill. For example, my impression (which could be wrong, but that doesn't affect the general point I'm making) is that the recent breakthrough by Nets Katz and Larry Guth, in which they solved the Erdős distinct distances problem, was a very clever realization that techniques that were already out there could be combined to solve the problem. One could imagine a computer finding the proof by being patient enough to look at lots of different combinations of techniques until it found one that worked. Now their realization itself was amazing and probably opens up new possibilities, but there is a sense in which their breakthrough was not a good example of what I am asking for.

While I'm at it, here's another attempt to make the question more precise. Many many new proofs are variants of old proofs. These variants are often hard to come by, but at least one starts out with the feeling that there is something out there that's worth searching for. So that doesn't really constitute an entirely new way of thinking. (An example close to my heart: the Polymath proof of the density Hales-Jewett theorem was a bit like that. It was a new and surprising argument, but one could see exactly how it was found since it was modelled on a proof of a related theorem. So that is a counterexample to Kevin's assertion that any solution of a hard problem fits the bill.) I am looking for proofs that seem to come out of nowhere and seem not to be modelled on anything.

**Further edit.** I'm not so keen on random massive breakthroughs. So perhaps I should narrow it down further -- to proofs that are easy to understand and remember once seen, but seemingly hard to come up with in the first place.

Perhaps you could make the requirements a bit more precise. The most obvious examples that come to mind from number theory are proofs that are ingenious but also very involved, arising from a rather elaborate tradition, like Wiles' proof of Fermat's last theorem, Faltings' proof of the Mordell conjecture, or Ngo's proof of the fundamental lemma. But somehow, I'm guessing that such complicated replies are not what you have in mind. – Minhyong Kim – 2010-12-09T15:18:28.510

@Minhyong: right! All of these proofs involved fundamental new insights, but probably the proof of an arbitrary statement that was known to be hard (in the sense that "the usual methods don't seem to work") and was then proved anyway ("because a new method was discovered") seems to fit the bill... – Kevin Buzzard – 2010-12-09T15:30:59.217

Of course, there was apparently a surprising and simple insight involved in the proof of FLT, namely Frey's idea that a solution triple would give rise to a rather exotic elliptic curve. It seems to have been this insight that brought a previously eccentric-seeming problem at least potentially within the reach of the powerful and elaborate tradition referred to. So perhaps that was a new way of thinking, at least about what ideas were involved in FLT. – roy smith – 2010-12-09T16:21:30.547

Never mind the application of Fourier analysis to number theory -- how about the invention of Fourier analysis itself, to study the heat equation! More recently, if you count the application of complex analysis to prove the prime number theorem, then you might also count the application of model theory to prove results in arithmetic geometry (e.g. Hrushovski's proof of Mordell-Lang for function fields). – D. Savitt – 2010-12-09T16:42:04.740

In response to edit: On the other hand, I think those big theorems are still reasonable instances of proofs that it is difficult to imagine a computer finding! Incidentally, regarding your example 2, it seems to me Dirichlet's theorem on primes in arithmetic progressions might be a better example in the same vein. – Minhyong Kim – 2010-12-09T17:34:23.603

I agree that they are difficult, but in a sense what I am looking for is problems that isolate as well as possible whatever it is that humans are supposedly better at than computers. Those big problems are too large and multifaceted to serve that purpose. You could say that I am looking for "first non-trivial examples" rather than just massively hard examples. – gowers – 2010-12-09T18:04:51.283

I'm still a bit confused about your motivation. Are you trying to understand why you think certain proofs would be hard for computers to generate? Or are you interested in this list for its own sake? Or something else? – Jack Lemon – 2010-12-09T18:11:06.103

What about Hilbert's approach to the "fundamental problem of invariant theory"? I.e. the one that supposedly provoked the remark "This is not mathematics, but theology". – Jon Bannon – 2010-12-09T19:02:24.403

@Luke: My motivation is that I believe that computers ought to be able to do mathematics. To explore that view, it is very helpful to look at problems of this type, since either one will be able to explain how certain ideas that seem to come from nowhere can in fact be generated in an explicable way, or one will end up with a more precise understanding of the difficulties involved. Of course, I'm hoping for the former. – gowers – 2010-12-09T19:09:24.090

While this is a common belief even among strong proponents of computerized mathematics, it is not clear whether these types of ideas/proofs would be harder for computer systems (fully automatic or interactive). For example, the "probabilistic method" had a major impact and led to surprising proofs/concepts in different areas at different times. So the idea "Use a probabilistic argument to prove the existence of the required objects" or "Add a probabilistic ingredient to this notion" could have been offered (and still can be offered) rather automatically. – Gil Kalai – 2010-12-10T12:42:40.643

I see another conceptual difficulty with the spirit behind the question: Suppose we have to compare two proofs for two a priori equally important theorems. The first proof is based on a fundamentally new way of thinking (whatever it means, but let's assume that it is meaningful). In the second proof the proof of Lemma 12.7 is based on a fundamentally new way of thinking. How do we compare these two scenarios? – Gil Kalai – 2010-12-10T15:13:06.513

My feeling is that when someone says "X is fundamentally new" (for various values of X) in reference to some mathematics, IMO this usually demands as a prerequisite that one has a pretty narrow perspective on the kinds of thinking that came beforehand in order to believe the statement. This doesn't take anything away from novel mathematics, it's just that *fundamentally new* is almost always too hyperbolic an expression for the mathematics it describes. I imagine the main reason mathematicians use such hyperbolic terminology is that hype draws people's attention, and that helps ideas propagate. – Ryan Budney – 2010-12-11T05:17:37.217

@Ryan, as I hope my remarks make clear, I completely agree. In other words, I hope that by asking for fundamental newness I have set an impossible challenge. Maybe I could refine the question further: I am looking for proofs that *appear* to be so different from what went before that they require some special and characteristically human "genius" to be discovered. – gowers – 2010-12-11T07:16:52.323

@Gil: I agree that the two scenarios exist. I'm not sure I see the need to compare them. – gowers – 2010-12-11T07:17:36.250

A comparison of the two scenarios is relevant for trying to understand what computers can do. Even if we agree that "fundamentally new (=FN)" arguments are the hardest element to automate, it still seems harder (for a computer and perhaps also for a human) to find an FN argument at an unknown place down the proof than right at the beginning. – Gil Kalai – 2010-12-11T08:04:50.463

A related MO question: http://mathoverflow.net/questions/21562/what-are-some-mathematical-concepts-that-were-pretty-much-created-from-scratch – Gil Kalai – 2010-12-13T19:16:21.560

One more (rather obvious) remark is that sometimes a "fundamentally new way of thinking" in general, and such new ways that lead to proofs, emerges gradually from a large body of work by many people. – Gil Kalai – 2010-12-13T19:24:41.483

It is surprising how (successful) fundamentally new ways of thinking are clustered. Cantor's idea is FN and yet very closely related to the ancient liar paradox; in this cluster are also Russell's proof that his set is not a set, and Gödel's theorem. Then there is the FN idea of non-constructive methods, and in particular probabilistic proofs. Homology is an FN method for classifying topological spaces and proving fixed point theorems, and then (Emerton's answer) it enters number theory through fixed point theorems; and there is also the FN mysterious method of classifying representations based on actions on homology. – Gil Kalai – 2010-12-14T11:32:58.563

It seems to me that this question has been around a long time and is unlikely to garner new answers of high quality. It also seems unlikely that most would even read new answers. Furthermore, nowadays I imagine a question like this would be closed as too broad, and if we close this then we'll discourage questions like it in the future. So I'm voting to close. – David White – 2013-10-13T18:52:12.383

Wow. So you can get 147 votes and still be closed as off-topic. Doesn't the fact that 147 math researchers liked it attest, on its own, to its relevance? ...Anyway, I'm surprised nobody mentioned Cantor's cardinality proofs. Before those, proofs by contradiction were not considered valid. – j0equ1nn – 2015-07-24T23:27:32.063