Rigorous mathematics

Lean offers rigour.

For years, people have been pointing me to Thurston’s “On proof and progress in mathematics”, which I think of as a brilliant exposition of the importance of non-formalisation in mathematics. But it was only this week that I actually read the article Thurston was responding to — Jaffe and Quinn’s “Theoretical mathematics”. This paper opened my eyes. They have a name for how I now think about mathematics! They explain that I am now doing something called rigorous mathematics. This phrase gives me the language to explain what I am trying to tell mathematicians.

What is rigorous mathematics? It means mathematics which is guaranteed to be correct. Undergraduate-level mathematics is clearly rigorous mathematics: it has been checked from the axioms by many, many people. What else? Perhaps SGA is a good example. It has been around since the 1960s and has been read by many people. The work of Grothendieck and others written up in this seminar was used in Ribet’s work on level-lowering, which was in turn used in Wiles’ proof of Fermat’s Last Theorem. But not all of our human literature is rigorous mathematics. We can basically guarantee that one of the papers published this year in one of the top mathematics journals will, within the next few years, be partially retracted, and nobody is asking the question of whose job it is then to go through all the papers which cited it and to check meticulously that none of the citations were to the parts of the paper which were wrong. I think defining rigorous mathematics in these human terms, trying to assert which parts of the literature are “rigorous”, is a hard problem. Perhaps Bourbaki is rigorous. Patrick Massot found a slip or two in it when formalising parts of Bourbaki’s Topologie Générale in Lean, but nothing remotely serious. Perhaps SGA is rigorous too — after all, hasn’t Brian Conrad read it or something? But actually Bourbaki and SGA use different foundations from each other, and neither of them uses ZFC set theory (Bourbaki rolled its own foundations and then apparently ignored them anyway and just did mathematics, and SGA needs universes in places, but, as Brian Conrad has carefully checked, not in the places used to prove Fermat’s Last Theorem).

I am not sure I know a good answer for exactly which parts of the mathematical corpus as represented by our human literature are “mathematically rigorous”. Maybe the way it works is that anything published in a journal over 20 years ago and which hasn’t been retracted yet is mathematically rigorous. But probably some recently graduated PhD students would say their work was mathematically rigorous as well. Perhaps the definition most mathematicians work with is that rigorous mathematics is “the part of published mathematics which the elders believe to be correct”.

There is another question too, of less interest to most mathematicians but of some importance when trying to figure out what we mean by rigorous mathematics, and that is the problem of saying exactly what the axioms of mathematics are. Are we allowed universes or not? Whose job is it to tell us? Scholze’s course cites, but does not use, a paper by Barwick and Haine which uses universes everywhere. De Jong puts some effort into explaining how the stacks project can be made to work in ZFC, but honestly who ever reads those bits of the stacks project? Scholze talks about infinity categories in his course and no doubt he can solve all set-theoretic issues — after all, he did take the trouble to write section 4 of etale cohomology of diamonds. Emily Riehl and Dominic Verity are writing a book on infinity categories. I bet that book will be rigorous. Wouldn’t it be interesting to check it was rigorous in Lean? I think that this is an important question. Lurie needs infinity categories and his work is having wider implications across mathematics. The Clausen and Scholze work also needs infinity categories and is claiming to prove Grothendieck duality in a completely new way. Is this work definitely rigorous? What about this work of Boxer, Calegari, Gee and Pilloni, announcing some major new results in the Langlands programme but subject to 100 or so missing pages of arguments which are not in the literature but which experts know shouldn’t present any major problems (see the warning in section 1.4 on page 13)? I have now seen a citation of that paper in another paper, with no mention of the warning. Is this what rigorous mathematics looks like? I am not suggesting that the work is incorrect or incomplete, and I am not even suggesting that the human race is incapable of making the results in this 285 page paper rigorous. But is it, as it stands, rigorous mathematics? I have my doubts. When David Hansen stands up in the London Number Theory Seminar and claims that a certain construction which takes in a representation of one group and spits out a representation of another one is canonical — and then refuses to be drawn on the definition of canonical — is this rigorous? And if it isn’t, whose job is it to make it rigorous? I am well aware that other people have different opinions on this matter.

I think that the computer proof systems — Isabelle/HOL, UniMath, Coq, Lean, Mizar, Metamath and all the others — each represent a candidate definition for “rigorous mathematics” which is far better defined than the human “definition”. Which system is best? Who knows. I think mathematicians should learn them all and make their own decisions. I think Lean has got some plus points over some of the others, but I know people who think differently.

Perhaps a more liberal definition would be that rigorous mathematics is the theorems which are proved in the union of these computer proof checkers. So for example we could say that the odd order theorem was rigorously proved from the axioms of maths, as was the four colour theorem, the prime number theorem, the Kepler conjecture and the Cap Set conjecture. These are examples of theorems which I think all mathematicians would agree were rigorously proved. But we will need a lot more machinery before we can rigorously prove Fermat’s Last Theorem in one of these computer systems. In my mind, Fermat’s Last Theorem is currently a theorem of theoretical mathematics in the sense of Jaffe and Quinn, although I have no doubt that one day it will be a theorem of rigorous mathematics as well, because we human mathematicians have written some very clear pseudocode explaining how to make it rigorous. As for the ABC conjecture, I genuinely don’t know if Mochizuki’s ideas can be made into rigorous mathematics, although it seems that in general mathematicians are not happy with the pseudocode which has been produced so far. I can see a very simple way for the Mochizuki-followers to sort this out though, and we would happily talk to them on the Lean chat.
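To make concrete what it would even mean to “rigorously prove Fermat’s Last Theorem in one of these computer systems”, here is a minimal sketch, in Lean 3 syntax, of one way the statement might be written down; the theorem name and the exact formulation are illustrative rather than taken from any library, and the `sorry` marks the enormous missing proof.

```lean
-- A hedged sketch: one possible way to *state* Fermat's Last Theorem in Lean 3.
-- Stating it is the easy part; the point is that no formal proof exists yet.
theorem fermat_last_theorem :
  ∀ (x y z n : ℕ), n > 2 → x ^ n + y ^ n = z ^ n → x * y * z = 0 :=
sorry -- "theoretical mathematics" in the Jaffe–Quinn sense: believed, not yet formally checked
```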

About xenaproject

The Xena Project aims to get mathematics undergraduates (at Imperial College and beyond) trained in the art of formalising mathematics on a computer. Why? Because I have this feeling that digitising mathematics will be really important one day.

13 Responses to Rigorous mathematics

  1. >SGA needs universes in places, but, as Brian Conrad has carefully checked, not in the places used to prove Fermat’s Last Theorem

    I had a conversation about this as well this week (with Jim Borger). I know Brian Conrad was our best hope for knowing universes weren’t used in the proof of FLT, but that is even worse than “theoretical mathematics”. It’s “this one guy claims he read all the proofs and beta-reduced them sufficiently to see the universes aren’t used”. Thankfully, and this should be much, much better known among number theorists, there is now a *theorem* that they aren’t necessary. Yes, an actual proof that the logical background needed to define what Grothendieck used universes for is a weak form of set theory, weaker even than ZFC. This is due to the category theorist and logician Colin McLarty, and has recently been published:

    >Colin McLarty, “The large structures of Grothendieck founded on finite order arithmetic”, Review of Symbolic Logic (2019) https://arxiv.org/abs/1102.1773

    He can even get Zariski cohomology of coherent sheaves on Noetherian schemes defined in the very weak foundation of second-order arithmetic (https://arxiv.org/abs/1207.0276), though this approach fails for the étale topology.


    • xenaproject says:

      Well I guess the Wiles/Taylor proof needs etale cohomology in places, although definitely at least some of the time one can work around this using Tate modules of Jacobians…


      • Sorry, that wasn’t clear. The first paper works for étale cohomology, and in fact derived functor cohomology for any abelian category (IIRC). The second paper (about second-order arithmetic) only works for Zariski cohomology. But with explicit bounds on the sizes of sites and coefficients one could of course get stronger results.


  2. DH says:

    Re. FLT and universes etc., it’s always worth rereading Brian Conrad’s rather scorching commentary here: (https://mathoverflow.net/questions/35746/inaccessible-cardinals-and-andrew-wiless-proof). The situation was never as opaque as some people seem to think.

    Anyway, glad my talk was memorable. 😉


    • xenaproject says:

      Hey David! You still owe me a definition of canonical 😉 Thanks for the Conrad link. I remember when this noise was all happening and Brian was quite annoyed by it all — somehow there was this rumour that had started, that FLT needed universes, and Brian’s point was this was born of ignorance. His counter-argument was “if you just read the complete proof you will see that there are no universes there” 😉 (although he has confessed to me that he hasn’t read Langlands-Tunnell, but I’m pretty sure they’re universe-free there!) However because this was impossible for most people to do, people had to just trust him. And they did. Contrast with Mochizuki, another brilliant mathematician, who claims that the rumours that his work is incomplete are also born of ignorance. His counter-argument is also “just read the proof”. But somehow our community doesn’t accept it this time — I guess because too many people tried and failed.

      Probably also worth mentioning is that the original Wiles/TW proof of FLT also needed Gross’ work on companion forms, which needed some unverified claims about compatibility of Hecke operators across “canonical” isomorphisms between cohomology groups. However, because the community all believed these, far less fuss was made about them. I remember raising this issue with Taylor at the time, and he said that this was one of the reasons that DDT got written. Of course since then Conrad saved the day again by supervising Bryden Cais’ thesis. Bryden could have argued in his thesis that his work completes the original Wiles/TW proof of FLT! But he didn’t.


  3. gestepia says:

    What are your thoughts on Lean offering ‘laziness’? I’ve recently seen myself use Lean in the same way as a schoolchild might type “5 x 4” on a calculator: as a crutch for low numeracy (in this case proof-writing) standards. Just applying procedures without thinking them through has become closer to my thinking style when working on some levels in the Natural Number Game, which is quite worrisome! Should we take the same stance on Lean as we do on calculators in the classroom?


    • xenaproject says:

      For the earlier levels, it is *really* clear that you can just get into a “rw add_zero, rw add_succ, rw mul_one” mentality and not even think about where you’re going; you just know that you’re going in the right direction. But this only gets you so far. I wonder whether the levels which can be solved “automatically” like this are just not very interesting. Nobody ever writes these proofs anyway — perhaps because they are so trivial. By the time you’re working with topological spaces in Lean it doesn’t feel like this at all: the ideas in the Lean proof are the same as the ideas in the maths proof. So I think it’s just somehow a quirk due to the fact that the material here is really quite low-level.
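      To illustrate the mentality I mean, here is a minimal sketch of a Multiplication World style level, written in Natural Number Game / Lean 3 style; `mynat`, `one_eq_succ_zero`, `mul_succ`, `mul_zero` and `zero_add` are the game’s own names (proved in earlier levels), so this is an illustration rather than something that compiles outside the game.

      ```lean
      -- Illustrative Natural Number Game style proof: each `rw` step is mechanical,
      -- rewriting with a lemma from an earlier level until the goal closes itself.
      lemma mul_one (m : mynat) : m * 1 = m :=
      begin
        rw one_eq_succ_zero, -- goal: m * succ 0 = m
        rw mul_succ,         -- goal: m * 0 + m = m
        rw mul_zero,         -- goal: 0 + m = m
        rw zero_add,         -- goal: m = m, closed by rw's final refl
      end
      ```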


      • gestepia says:

        Good to know! I’ll keep working with Lean and see how my experiences play out. I do think some of the later levels in Multiplication World aren’t too interesting per se, but I think they are of value as they serve as a gateway (confidence booster?) for those who find Lean difficult and have trouble getting comfortable with it (myself included).


      • xenaproject says:

        To give some context — the way the game was created was that I tried to figure out the most complicated “class” which was known to Lean and which the natural numbers were a member of (something like the class of totally ordered commutative semirings), and then I just set about proving that a user-defined natural number type was an instance of this class from scratch. This was the inspiration for many of the lemmas. Some of the later lemmas in world X are there because, even though they look pointless, one needs them in world X+3.
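        A minimal sketch of that set-up, in Lean 3 syntax: the definitions below follow the shape of the ones in the game, while the structure named in the final comment is only indicative of the kind of class meant.

        ```lean
        -- Roll-your-own natural numbers, Natural Number Game style.
        inductive mynat
        | zero : mynat
        | succ (n : mynat) : mynat

        namespace mynat

        -- Addition and multiplication, by recursion on the second argument.
        def add : mynat → mynat → mynat
        | m zero     := m
        | m (succ n) := succ (add m n)

        def mul : mynat → mynat → mynat
        | m zero     := zero
        | m (succ n) := add (mul m n) m

        -- The game then has you prove, level by level, every lemma needed to show
        -- `mynat` is an instance of a structure like an ordered commutative
        -- semiring: add_comm, mul_assoc, zero_mul, and so on.

        end mynat
        ```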


  4. Pingback: Resumen de lecturas compartidas durante diciembre de 2019 | Vestigium

  5. Timothy Chow says:

    I hope you can interest the fans of Mochizuki’s IUTT in proof assistants. For some reason that I don’t fully understand, my experience has been that many amateurs recognize the relevance of proof assistants when it comes to such controversies, but many professional mathematicians resist the idea. I commented on this topic on the Foundations of Mathematics mailing list a few years ago. https://cs.nyu.edu/pipermail/fom/2018-January/020794.html


    • xenaproject says:

      The issue with the IUTT proof seems to be that there is not enough information in the papers for anyone other than Mochizuki’s disciples to follow one of the arguments. Formalisation won’t help with this, unless the disciples themselves decide to do it, and given that they have shown no interest in clarifying the argument in traditional ways, it’s hard to imagine that they would attempt to clarify it in a non-traditional way. This is in stark contrast to Grothendieck’s disciples, who travelled the world in the 60s spreading the good news and converted many mathematics departments; nothing of the sort has happened with IUTT. None of Mochizuki’s disciples have any interest in computer formalisation systems, as far as I am aware (I’ve not seen any on the Lean chat, for example).

      One has to be careful with such arguments anyway. Let’s say that a Mochizuki disciple claims that the Wiles/Taylor proof of FLT is incorrect (a ludicrous claim, but perhaps one which, to a Mochizuki disciple, feels analogous to what the mainstream number theory community is saying to them). Say the Mochizuki disciple then challenges the number theory community to formalise FLT in order to prove that it is OK. Of course it can be done (because all of the details are in the literature), but it would take many, many thousands of person-hours. Over the last two days I spent about 6 hours formalising some definitions and a basic lemma in Scholze’s 2019-20 course on liquid mathematics; before that I spent 10 minutes verifying the details on paper. Ultimately it was only about 200 lines of code, but it took time to construct. Right now, formalising mathematics at this level takes time.


      • Timothy Chow says:

        Certainly it would have to be the disciples doing the formalization. One cannot expect to interest the current IUTT “elders” in formalization, but as I understand it, there is funding going to junior mathematicians to study IUTT. Perhaps they can be persuaded that it would increase recognition and support of their field if the proof were formalized.

        Your analogy with FLT fails because (to use the language of my FOM post) there is an asymmetry in the sizes of Community A and Community B. It is the smaller community that stands to gain from a formalization.

        I think that maybe the reason professional mathematicians seem to resist this line of reasoning is that they are convinced that IUTT is wrong, and so they perceive a request for formalization as a debating tactic, a “gotcha”: If you can’t formalize it then it must be wrong, so there! Q.E.D. But it doesn’t have to be that way. Here’s another example that is less sociologically fraught: Randall Holmes announced some years ago a proof that NF is consistent. This is a famous problem in logic that has been open since Quine introduced NF in 1937. In this case the experts think the proof is probably right, but it is so complicated that nobody besides Holmes seems to be 100% sure. Holmes has tried revising his proof several times to make it clearer, but these revisions come at a cost: People who spent a lot of time on an earlier draft will get frustrated that they have to start over. I gather that the uncertainty about its correctness has been an obstacle to the publication of the proof.

        In this case, Holmes happens to already be comfortable with proof assistants, and he appreciates the value of having a formal proof in order to persuade the larger community of the correctness of a complicated new solution to an old open problem. Furthermore, as I said, in this case the experts are favorably disposed toward the proof but fall just short of being completely convinced. The trouble is that Holmes is just one person and has many demands on his time. Perhaps if he got some assistance from some others then the hump could be surmounted. There would seem to be at least two benefits from a Xena Project point of view; first, it would be a different subfield of mathematics from most of what’s current in Lean, and so it would help bolster the claim that “all of mathematics” is being formalized, and second, it would demonstrate to the wider mathematical community that formal proofs can actually play a valuable role in breaking deadlocks of this type (i.e., an important proof that is so complicated that experts don’t want to publish it or spend time studying it because it might be wrong, is ultimately accepted because of a formalization). I suppose you could say that Flyspeck is already an example of this type, but in that case, the proof was published in the Annals before the formalization, so it wasn’t really a deadlock.

