This blog post is a report on how a bunch of mathematicians recently did something which five years ago I would have said was completely out of reach. Note that it’s my (Kevin Buzzard’s) personal take on the work; I am not speaking for all the authors or the Lean community or the mathlib community or any other organization. Note also that I was personally involved with the work.
The background
In December 2020, Fields Medallist Peter Scholze challenged the computer proof community to formally verify an important new theorem of his and Dustin Clausen's. Scholze issued the challenge here, and the challenge itself is now published in Experimental Mathematics, one of very few mathematical journals currently showing an interest in the area of formalisation. The name of the challenge was the Liquid Tensor Experiment (LTE), and yes, it was a homage to the Liquid Tension Experiment, a band which Scholze is fond of. The theorem behind the challenge is a highly technical result, and I will not go into the details; check out the links I just mentioned for more on the mathematics involved. It would not be unreasonable to describe the theorem as a complex result about complex mathematical objects.
I have a bucket list of milestones for the area of mathematical formalisation, and formalising a complex theorem about complex objects was on this list. Why? One could argue that many of the major computer formalisations (the Kepler conjecture, four colour theorem and odd order theorem) were complex theorems about “simple objects” (spheres, planar graphs and finite groups, respectively). A formalisation which should not be placed in the same category as any of these milestone results, but which could perhaps be regarded as complementary to these major achievements, is my formalisation with Commelin and Massot of the definition and very basic properties of perfectoid spaces — this amounted to very trivial theorems about complex objects. For me, the formalisation of a proof of a complex theorem about complex objects was a missing part of the puzzle.
But not any more, because Johan Commelin, assisted by Adam Topaz and a team of other mathematicians, announced earlier this month that their Lean formalisation of Scholze's challenge was complete. The final touches were put to the formalisation at ICERM in Providence RI, USA, during the Lean for the Curious Mathematician 2022 workshop in July. These workshops are for working mathematicians with little or no Lean experience who are interested in finding out what's going on and how to get involved. If this sounds like you, then the videos for the ICERM workshop are now online at the above link. If you're interested in seeing Commelin's announcement of the completion of the project, it's just over 40 minutes into my talk on the status of algebraic geometry in Lean.
More on the challenge and its resolution
The reference for the proof which we followed was Scholze's lectures on analytic geometry, the notes for a course he gave in 2019/20 in Bonn. The theorem we formalised was Theorem 9.1 of these notes (see also Remark 8.15). On p59 of the pdf there is a five-line argument reducing Theorem 9.1 to Theorem 9.4, and then the next six pages contain a proof of Theorem 9.4. I've claimed above that Theorem 9.1 is a complex theorem about complex objects; Theorem 9.4, however, is a far more down-to-earth statement, which unravels to a technically messy but essentially completely low-level claim. It took around one month to formalise the statement of Theorem 9.4, and a further five months to formalise its proof. This averages out at about one page of mathematics per month. You can see a map of the formalisation of the proof of Theorem 9.4 here. Clicking on a rectangle (definition) or an oval (theorem) will pop up a LaTeX statement of the result and will offer you a link to the corresponding Lean code. We finished the proof of 9.4 about a year ago, and to my mild surprise the media got interested; we were even featured in Nature. What probably caused the interest was one of Scholze's stated motivations for encouraging a formalisation: he was not convinced that many humans had ploughed, or ever would plough, through the proof of Theorem 9.4, and the formalisation gave him the assurance that his work was correct.
All that remained for the challenge, then, was to formalise the five lines in the paper which show that Theorem 9.4 implies Theorem 9.1. I'll reproduce them here:
"By the preceding discussion, one can compute [formula] as the derived inverse limit of [formula] over all [formula]; equivalently, all [formula]. Theorem 9.4 implies that for any [formula] the pro-system of cohomology groups [formula] is pro-zero (as [formula] is zero). Thus, the derived inverse limit vanishes, as desired."
These five lines took a further year to formalise, and that's what has just been completed. The reason it took so long was that the Lean community had to build a working theory of abelian categories, cohomology, Ext groups, derived functors and so on as prerequisites, and whilst some of these subjects had been approached before in theorem provers, we had to get everything formalised in one system at once, with everything playing well together. It is worth noting, however, that even though I'm claiming that the proof of the theorem is "complex", the paper proof is nowhere near as long as, for example, the proof of the Kepler conjecture. However I do regard the result as an indication that we're going in the right direction, pushing these systems to do things they've never done before (although of course I was involved in the formalisation, so one should bear this in mind when evaluating my claims here).
There's something called the "de Bruijn factor" of a formalisation, which is the ratio of the number of lines of computer code to the number of lines of human mathematics. One major goal in the area of computer formalisation is to "get the de Bruijn factor down to 1", that is, to make formalising mathematics in a computer no more difficult than formalising it in LaTeX. However the Commelin-Topaz-… formalisation of the Liquid Tensor Experiment seems to make a mockery of this concept, because around 30,000 lines of code went into formalising the five-page proof of Theorem 9.4, and around 60,000 lines of code went into formalising the five lines above. At the very least it indicates that humans can squeeze a huge amount of mathematics into one line of text. Had the required basics (abelian categories etc) been in Lean's mathematics library mathlib already, the formalisation would have taken far less time and far fewer lines. One thing we did benefit from was Scott Morrison's work — Scott has built a huge amount of category theory in mathlib over the previous six years, providing us with a stable foundation to build on.
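To make those numbers concrete, here is the back-of-envelope calculation behind that claim; note that the conversion of roughly 40 lines of LaTeX per printed page is my own assumption, not a figure taken from the project.

```latex
% de Bruijn factor = (lines of formal code) / (lines of informal mathematics).
% Proof of Theorem 9.4: five pages, at an assumed 40 lines of LaTeX per page:
\[ \frac{30\,000}{5 \times 40} = 150 \]
% Reduction of Theorem 9.1 to Theorem 9.4: the five quoted lines:
\[ \frac{60\,000}{5} = 12\,000 \]
```

Either way the factor is orders of magnitude away from 1, which is the point being made above.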
One question which a professional mathematician with no Lean experience could ask when trying to evaluate the work is: how can one be sure that the Lean formalisation of the challenge actually corresponds to the mathematical version of the challenge? For example, how can we tell that the symbol ℝ in the Lean code actually corresponds to the real numbers, and that there is no cheating going on in the background? Commelin and Topaz have attempted to deal with this issue by making an examples directory in the repository, where they prove basic theorems about the objects involved (for example they show that ℝ is a complete ordered field and that any complete ordered field is uniquely isomorphic to ℝ, as evidence that Lean's ℝ really does correspond to the real numbers and that there is no trickery going on). They also created evidence for Lean's Ext groups, profinite spaces, condensed abelian groups and so on; if you understand the mathematical definitions of these objects then you can see an indication that the project's definitions coincide with the usual mathematical definitions, even if you cannot directly read the Lean code.
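To give a flavour of what such a check can look like, here is a minimal sketch of my own in Lean 3 / mathlib syntax (the statements in the project's actual examples directory are more substantial than this): we ask Lean to confirm that its ℝ is a linearly ordered field in which every nonempty set that is bounded above has a least upper bound.

```lean
import data.real.basic

-- mathlib already knows that ℝ is a linearly ordered field; `apply_instance`
-- simply asks Lean to retrieve the relevant instance.
example : linear_ordered_field ℝ := by apply_instance

-- Completeness: a nonempty set of reals which is bounded above has a least
-- upper bound, namely its supremum `Sup S`.
example (S : set ℝ) (hne : S.nonempty) (hbdd : bdd_above S) :
  is_lub S (Sup S) :=
is_lub_cSup hne hbdd
```

The point of such examples is not that they are deep; it is that a mathematician who cannot read the main Lean development can still check that the object the code calls ℝ behaves the way the real numbers should.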
What did we learn?
Peter Scholze has already explained, in a second blog post, two quite interesting things that we learnt during the process. Firstly, we learnt that the stable homotopy theory used in the original proof (via the Breen-Deligne resolution) can be eliminated. The way this was discovered was quite interesting: the formalisers had been putting off formalising this part of the proof, but we were eventually forced to start formalising the statements of what we needed, and once these were clarified Commelin and Scholze realised that they could actually get away with a weaker theorem. Initially we were calling the trick the "Commelin complex", but later on we discovered that the construction (though not the theorem we needed about it) had already been given by Mac Lane many decades ago. And secondly, Scholze seemed to learn something conceptual about the proof — he described it as "what makes the proof work". You can see his thoughts on the matter at this mathoverflow question. But in some sense this is old news (the second blog post was published over a year ago). What else did we learn?
In my mind, one of the most important things we learnt is that if you get a group of mathematicians who are focussed on formalising an arbitrary piece of modern mathematics, then it can get done, however complex the theory is. In other words, we have a substantial piece of evidence showing that the software that computer scientists have developed for doing this task has now "caught up with humans". One could regard this as a triumph. One could also regard it as a pretty weak statement. The computer did not "do" the mathematics — the humans did it; first Scholze and Clausen did it, and then Scholze wrote the argument down (in pdf notes for the course in Bonn, linked to above) in a form suitable for mathematicians, and then Commelin and Massot wrote a more detailed mathematical blueprint in a "hybrid" part-Lean part-LaTeX format (you can view the blueprint here; it goes into much of the detail of the proof), and then finally a team of people transcribed what they had written from a human language into a computer language. One thing we learn is that the computer language (Lean's dependent type theory, in this case) is rich and expressive enough to be able to formalise recent and complex work of a Fields Medallist. One could ask whether it is feasible to formalise this work in a weaker logic, for example in Higher Order Logic, the logic used by Isabelle/HOL and the other HOL systems. I don't think anyone actually knows the answer to this question; I have conjectured that the answer is "no", but there are expert HOL users who seem to conjecture that the answer is "yes". My real fear is that we will never find out. Manuel Eberl is making fabulous progress formalising what I would call "standard MSc mathematics", such as the basic analytic theory of modular forms, in a HOL system, but what we are missing is a HOL evangelist from the mathematical community (a kind of HOL version of me), and I don't know how to create such a person. Indeed in some sense I am making the problem harder, because my constant evangelism for Lean to mathematicians has the consequence that research mathematicians who get interested in formalisation tend to come to Lean first, and the situation is becoming more and more like a monopoly. There are plus and minus points to this; one minus point is that nobody seems to be working on getting HOL to do things that "it wasn't designed to do" (for example a modern treatment of sheaf cohomology, etale cohomology etc via derived functors).
We also learn that formalisation can be, at times, very slow going. However one should not lose faith here. The reason it took a year to formalise five lines of mathematics was that a huge amount of theory-building had to be embarked upon beforehand. That theory only needs to be built once. Lean now has a working theory of short and long exact sequences, derived functors, abelian categories, complexes and so on; furthermore there is evidence that the formalisation of these definitions is usable in practice, because we used them. Indeed this is another thing that we are learning in the Lean community: instead of thinking "what MSc course shall we formalise next?" we are thinking "what major result shall we start working towards?", because this not only drives development but also ensures that the theories being developed are actually usable.
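As a small illustration of the sort of statement this machinery makes available, here is another sketch of my own in Lean 3 / mathlib syntax (not code taken from the LTE repository): mathlib can confirm, by instance search alone, that the categories of abelian groups and of modules over a ring are abelian categories.

```lean
import algebra.category.Group.abelian
import algebra.category.Module.abelian

open category_theory

universe u

-- The category of abelian groups (in a fixed universe) is an abelian category;
-- the instance is already in mathlib, so `apply_instance` finds it.
example : abelian AddCommGroup.{u} := by apply_instance

-- The same is true for the category of modules over any ring R.
example (R : Type u) [ring R] : abelian (Module.{u} R) := by apply_instance
```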
We learn that formalisation can be parallelised, although we already knew this long ago from the work of Gonthier et al on the odd order theorem, and the work of Hales et al on the Kepler conjecture (both papers have 10+ authors). Different people can be put to work on different areas of a formalisation project and do not even have to know about, or understand, what others are working on. The forthcoming paper on the formalisation will have a nice long list of authors, and this is something I'm very happy about; furthermore anyone interested in seeing the precise details of what everyone did can just look at the list of commits and committers on GitHub — the details are completely open to everybody. This is open source mathematics.
The future
So where are we going and what is the point? These are very reasonable questions. One possible reaction that a mathematician might have to the work is that it is pointless, because it is a computer verification of a result which was “basically known to be true anyway”. This is missing the point. Why are we teaching computers modern mathematics? Because I believe that one day future computer systems will be helping us, and whilst I cannot see into the future, consider the following question. Scholze and Clausen are arguing that condensed abelian groups are an important new definition in mathematics. How can AI developers create a chatbot which helps graduate students to learn about the category of condensed abelian groups, until professional mathematicians have taught computers what a condensed abelian group is? I know that some people in the machine learning crowd would answer things like “use language models, train on the papers about condensed abelian groups” but I am not yet convinced that this alone will turn into a useful tool. Lean (and other theorem provers) offer a framework where one can create a library of useful theorems, examples, and counterexamples. We do not even have to type in all the proofs. If “we” (the mathematical community, for example PhD students in arithmetic geometry) start translating important statements, examples, counterexamples, references and so on, and somehow combine this with language models, we will at least make some kind of interesting thing, which I imagine will get more helpful to humans over time. Some dream of a time where computers are better at proving theorems than humans. For me this is still complete science fiction; I have seen no evidence that we are anywhere near this milestone, and have no real vision as to how we are ever going to get there. But one can instead dream of a time where computers are helping humans to do research level mathematics, and work like this makes me believe that this goal is something I might see in my lifetime.
Comments
You mention needing a HOL evangelist; perhaps you could link to Larry Paulson's blog, which is half-way there: https://lawrencecpaulson.github.io/
Yes I’m well aware of Larry and his work! Very happy to advertise it here.
“How can AI developers create a chatbot which helps graduate students to learn about the category of condensed abelian groups, until professional mathematicians have taught computers what a condensed abelian group is?”
Is such a chatbot an actual part of your vision for the future or are you being sarcastic?
Oh no, I wasn't being sarcastic at all! Someone on Twitter also suggested that this was a very pessimistic way to end the article. This is an example of something which I think is actually within reach. Some think that far more will be achieved; perhaps you think far less. But something with a database of theorems and counterexamples and an accessible search interface is surely within reach.