Lean 2022 round-up

A brief survey of some of the things which happened in the Lean community in 2022.

The Liquid Tensor Experiment

In December 2020, Fields Medallist Peter Scholze challenged the formal computer proof community to verify one of the theorems he had proved with Dustin Clausen. By June 2021 a key technical lemma had been formalised by the Lean community and there was a certain amount of media attention (e.g. here and here), to the extent that many people assumed that the challenge was over. It was not. There was the small matter of developing the theory of abelian categories, derived functors, Ext groups and various other standard tools of research mathematics. This took another year. The project was finished at the Lean for the Curious Mathematician workshop at ICERM, where Johan Commelin got the formalisation of the full Clausen–Scholze result compiling. Adam Topaz was another big player here. See the Lean community blog post announcing the result for more details, and a follow-up post discussing the question of how someone who doesn't want to read much Lean code might attempt to check that the theorem has been correctly stated. There is also the blueprint, which describes the precise mathematical details of the route to the goal.
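
The headline theorem, roughly as it appeared in Scholze's original challenge (see the blueprint for the precise definitions), is the following: if 0 < p' < p ≤ 1 are real numbers, S is a profinite set, V is a p-Banach space over the reals, and M_{p'}(S) denotes the space of p'-measures on S, then

    Ext^i(M_{p'}(S), V) = 0 for all i ≥ 1,

where the Ext groups are computed in the category of condensed abelian groups.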

The Sphere Eversion Project

People who know some mathematics but have not used proof assistants might have a mental model of the following form: “discrete stuff should be easy to formalise, continuous stuff is probably harder because there are pictures involved”. Patrick Massot, Floris van Doorn and Oliver Nash ably demonstrated that continuous, picture-based mathematics — in this case differential topology — was now in scope for theorem provers, with their paper “Formalising the h-principle and sphere eversion”. The work formalises a modern proof of an instance of Gromov's h-principle, and deduces Smale's result that you can turn a sphere inside-out without creasing or snapping it, if you allow the sphere to pass through itself. The work also comes with a blueprint which explains the mathematical route the authors took. The paper was accepted for publication at CPP 2023, and there will be a talk on the project at CPP on 16th January 2023.
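
Concretely, the corollary in question is Smale's sphere eversion theorem: writing ι : S² → ℝ³ for the standard inclusion of the unit sphere, there is a family of immersions f_t : S² → ℝ³, depending smoothly on t ∈ [0,1], with f_0 = ι and f_1 = −ι. The intermediate immersions may (and must) pass through themselves, but they never have creases or pinch points.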

Unit fractions

In late 2021, Thomas Bloom announced a proof of an old conjecture of Erdős and Graham on unit fractions: any positive density subset of the positive integers contains a finite subset the sum of whose reciprocals is 1. This result still surprises me — for example, can you really find a finite set of distinct positive integers, all of which end in 7, such that the sum of their reciprocals is 1? Apparently you can. Bhavik Mehta and Bloom started work on a full formalisation of Bloom's result (including the prerequisites, such as an instance of the Hardy–Littlewood circle method). They finished in June 2022, before Bloom had received a referee's report for the paper. Here's a link to the blueprint.
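
The precise statement (modulo details, which are in the blueprint) is: if A is a set of positive integers with positive upper density, then there is a finite subset S ⊆ A with

    ∑_{n ∈ S} 1/n = 1.

The integers ending in 7 have density 1/10, so the theorem applies to them, which is where the example above comes from.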

p-adic Hodge theory

Fontaine kick-started modern p-adic Hodge theory with his observation that certain p-adic period rings played a key role. María Inés de Frutos Fernández has developed enough of the theory of local fields to define the first two rings in the theory: ℂₚ and B_{HT}, together with the Galois action. An attack on local class field theory is now viable.
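
For readers who have not met these rings before: ℂₚ is the completion of an algebraic closure of ℚₚ, equipped with its natural continuous action of the absolute Galois group, and the Hodge–Tate period ring is (roughly speaking) the graded ring

    B_{HT} = ⊕_{n ∈ ℤ} ℂₚ(n),

the direct sum of the Tate twists of ℂₚ, again with its natural Galois action.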

Kaplansky’s unit conjecture

In 2021, Gardam disproved an old conjecture of Kaplansky. In 2022 Anand Rao Tadipatri and Siddhartha Gadgil formalised Gardam’s disproof in Lean (and they’re speaking about the work in the London Learning Lean Zoom seminar on Feb 9th), but in contrast to the three projects above, they used Lean 4, meaning that they had no access to Lean 3’s mathematics library mathlib. Talking of which…
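
For context: the unit conjecture asserted that if K is a field and G is a torsion-free group, then the only units in the group ring K[G] are the trivial ones, i.e. the elements k·g with k a nonzero element of K and g an element of G. Gardam's counterexample is an explicit nontrivial unit in 𝔽₂[P], where P is the Promislow (or Hantzsche–Wendt) group, a torsion-free crystallographic group; verifying that the element really is a unit is a finite calculation, which is part of what makes the result such a good target for formalisation.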

The port has started!

The thing powering mathematics in Lean is the Lean 3 mathematics library mathlib. The problem is that we now have a viable Lean 4, so the library needs to be ported, and Lean 4 is not backwards compatible with Lean 3; it’s better. A first pass at the problem was made by computer but it seems that we need humans to finish it off. The port started in earnest in December, and right now we are 12 percent of the way through. Will it get harder or easier as we proceed? Nobody knows! That’s one of the fun things about doing research. Anyone interested in learning Lean 4 by porting some basic mathematics from Lean 3 (finite sets, subgroups, topology etc all still need to be done) is welcome to take a look at the mathlib 4 porting wiki. With the advent of cached compiled olean files the job of porting got much easier in the last week or so, as you no longer have to compile anyone else’s branch manually. I am very much hoping that the port will be complete by the summer, and the way to make this happen is to get more people on board. Please consider joining us if you want to make a contribution to open source software in mathematics!
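
To give a flavour of what porting involves, here is a toy example which I have made up (it is not actual mathlib code). A Lean 3 lemma such as

    -- Lean 3: tactic blocks use begin/end, names are in snake_case namespaces
    lemma two_add (n : ℕ) : 2 + n = n + 2 :=
    begin
      exact nat.add_comm 2 n,
    end

becomes, in Lean 4 / mathlib4 style,

    -- Lean 4: tactic blocks use `by`, namespaces like `Nat` are capitalised
    theorem two_add (n : ℕ) : 2 + n = n + 2 := by
      exact Nat.add_comm 2 n

The automated tool produces something of roughly this shape; the human work is fixing the places where names, notation or tactic behaviour changed in ways the tool cannot yet handle.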

About xenaproject

The Xena Project aims to get mathematics undergraduates (at Imperial College and beyond) trained in the art of formalising mathematics on a computer. Why? Because I have this feeling that digitising mathematics will be really important one day.

4 Responses to Lean 2022 round-up

  1. PC says:

    Observation, indeed….


  2. Steven Obua says:

    How can Lean 4 be “better” than Lean 3 if it cannot run the theories of Lean 3? What about Lean 5? Will that be backwards compatible? How about Lean 6?

    While it is often necessary in software development to break backwards compatibility to improve things, I would argue that for proof assistants such breakage must be avoided. I don’t want to formalise a theory about vector spaces, and then later on have to reformulate it, because of changes in the underlying system. Nothing has changed about vector spaces in the meantime! A formalisation is an eternal thing, and this is only possible if the software in which it lives is also eternal. Well, maybe not eternal, but at least backwards compatible for theories.


    • xenaproject says:

Python 3 was better than Python 2 but it could not run Python 2 programs. Microsoft were always adamant that Lean 3 was just a project rather than a finished product, and they made it clear to the mathlib community way back in 2019 that Lean 4 would break everything. However the mathlib people were having too much fun and wouldn't stop; this was a conscious decision. Now we have code which auto-ports Lean 3 code to Lean 4, and then we have to manually fix up the bits which didn't work; this becomes easier as the process goes on, because the auto-porting code gets better and the human understanding of what breaks gets better. I agree that if Lean 5 breaks all Lean 4 code then serious questions will need to be asked, however there is no talk of Lean 5 right now; the mathlib community pushed Lean 3 very hard and taught MS (and in particular Leo de Moura) a lot about what worked and what didn't work; they have reacted accordingly.


  3. Rafael says:

Please, integrate Lean 4 with GPT-4. If it's really hard to do, then give us an option to make donations.

