## My algebraic geometry course

So something stupid happened. I had decided “no Lean” for the algebraic geometry course I’m teaching this term, and then I went to Lean Together 2020 and ended up on the plane back trying to formalise Martin Orr’s notes in Lean and now I’m seriously thinking about going for it and seeing how much we can do. It’s the theory of varieties. At the minute we’re doing affine algebraic varieties over algebraically closed fields. I might back off later if it gets tough, but I am eyeing the Nullstellensatz. They’ve done it in Isabelle. So it could well be possible in Lean 3. Can we state the Nullstellensatz in Lean 4? I saw people using Lean 4 last week. You still have to know what you’re doing, and probably using emacs is your best bet, but it is usable.

You won’t be using Lean 4 unless you’re an expert in this stuff, but here’s a challenge for mathematicians: read this proof. It’s my Lean write-up of lecture 1. That Lean file is 184 lines long, and it proves that the union of two algebraic subsets of $k^n$ is also an algebraic subset of $k^n$. If you are a mathematician — give it a go. I bet you understand the first half of it. Then it gets technical because a proof starts, but even if you can’t understand the Lean code, I am still explaining in the comments what is going on. I mentioned in my last post how Hales and Koepke are thinking hard about making it easier for human mathematicians to use this kind of software. Until then, you need Rosetta Stones, and this Lean file is an attempt to give one. If you have Lean installed on your computer, then you can clone the repository for the course, run the code on your own computer and see Lean’s tactic state at every point. This makes it much easier to figure out what is going on. It is not always easy. However, it is sometimes easy, especially if you have played the natural number game. Look — there’s zero_mul on line 132. If you don’t know about zero_mul then you could play the natural number game, or try some maths challenges for the Lean curious.
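For readers who want the flavour without opening the repository, here is a minimal sketch of what the lecture-1 theorem says in Lean 3. To be clear: this is my paraphrase, not the course file itself — the names `𝕍`, `is_algebraic` and `union_is_algebraic` are made up for this post, and the proof (the technical 184-line part) is elided.

```lean
import data.mv_polynomial

-- Sketch only: these names are a paraphrase, not the course repository's.
variables {k : Type*} [integral_domain k] {n : ℕ}

-- 𝕍 S is the zero locus in kⁿ of a set S of polynomials in n variables.
def 𝕍 (S : set (mv_polynomial (fin n) k)) : set (fin n → k) :=
{x | ∀ f ∈ S, mv_polynomial.eval x f = 0}

-- An algebraic subset of kⁿ is the zero locus of some set of polynomials.
def is_algebraic (W : set (fin n → k)) : Prop := ∃ S, W = 𝕍 S

-- The lecture-1 theorem: a union of two algebraic subsets is algebraic.
-- (The witness is the set of products f * g — this is where k having no
-- zero divisors gets used. Proof elided; see the 184-line course file.)
theorem union_is_algebraic {V W : set (fin n → k)}
  (hV : is_algebraic V) (hW : is_algebraic W) :
  is_algebraic (V ∪ W) := sorry
```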

How come everything goes so well up until the proof? Well, notation is a nightmare. Can this be fixed in Lean 4? Why does the goal often look incomprehensible? Why do I have to write mv_polynomial (fin n) k instead of $k[X_1,X_2,\ldots,X_n]$? Why do I see fin n → k instead of $k^n$? Why am I taking intersections over all of the proofs that $s\in S$? That makes no sense to a mathematician. How can we make this better? It’s just a normal proof presented in a scary way — but I am sure that we can make it less scary with current technology. This is one of the things I hope to work on this term.
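To make the dictionary concrete, here is (roughly — exact imports and instance assumptions may vary with your mathlib version) how those two objects are spelt in Lean 3:

```lean
import data.mv_polynomial

variables (k : Type*) [comm_ring k] (n : ℕ)

-- What a mathematician writes k[X₁, …, Xₙ]: polynomials over k in
-- variables indexed by fin n, Lean's type {0, 1, …, n-1}.
#check mv_polynomial (fin n) k

-- What a mathematician writes kⁿ: Lean models affine n-space as
-- the type of functions from fin n to k.
#check (fin n → k)
```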

## Lean Together 2020

I spent last week in Pittsburgh at Lean Together 2020. I felt privileged to be in the presence of Hales, Koepke and Avigad. These people are watching where Lean is going, and my impression is that they are happy. Massot, Gouëzel and Hölzl were also there, and it was very interesting to see them talking about manifolds (Patrick Massot and Sébastien Gouëzel are research mathematicians, but Johannes Hölzl is a computer scientist who now works for Apple in Munich and is formalising crypto). If someone had told Patrick 5 years ago that he would be going to a conference where (a) he would have technical discussions about the definition of a manifold and (b) give a talk about perfectoid spaces, I am not sure he would have taken them very seriously.

There were loads of great talks, and then of course there were some talks I didn’t understand at all. One person who truly made an impression on me was Ulrik Buchholtz. He taught me a lot about homotopy type theory. What I liked about him was that he had understood that whenever anyone says the word “constructive” I instantly hit the roof and start yelling that I simply do not care and neither do any of my friends, so he barely ever mentioned the concept, and as a result I managed to concentrate on what he was trying to tell me. He has a mathematical background, and explained his thoughts about how the univalence axiom is actually a good idea for mathematics. But honestly, first let’s think of new notation for it, because every mathematician knows that equality takes values in an impredicative Prop and that’s the end of it. I was always unhappy that Milne and Grothendieck used equality to denote canonical isomorphism; perhaps we just need a new notation for this kind of equality? Update a few hours later: Actually, on re-reading this post I realise that this is a lie. I thought Milne’s idea of using the equals symbol to denote a canonical isomorphism was a brilliant idea when I was a post-doc. The reason I’ve gone off it now is that I have no idea what a canonical isomorphism is, so I can’t formalise the idea properly.

Sebastian Ullrich spoke about Lean 4! Summary: it’s going to be super-cool, notation will rock, and they even have 2 tactics! intro and assumption. I reckon you could prove $P\implies P$ with those, and who knows what else. The elephant in the room: porting mathlib to Lean 4. People were optimistic — but we don’t really know anything.
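For what it’s worth, those two tactics really do suffice for that challenge. A Lean 4 proof of $P\implies P$ looks like this (modulo any syntax changes before release):

```lean
-- Lean 4: prove P → P using only `intro` and `assumption`.
theorem imp_self (P : Prop) : P → P := by
  intro hP      -- assume a proof hP of P; the goal becomes P
  assumption    -- close the goal with the hypothesis hP
```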

Hales and Koepke spoke about natural language. They have examples of text which sounds like normal English mathematical text and can be read by their Haskell program and turned into some kind of tree data structure. The task now is to turn that data structure into valid Lean code. I guess Lean 4 will be the thing to target.

Cyril Cohen talked about how to make a big library of data structures (like the theory of groups, rings, fields, totally ordered monoids etc, all at once). He was using Coq but I am sure that we have things to learn from those guys.

Reid Barton raised what looked to me like a straightforward maths question about semialgebraic subsets of $\mathbb{R}^n$, and then noted that it was super-hard to prove in Lean. He says we need to write a new tactic to prove it.

Neil Strickland raised some very important points in his talk, regarding scalability of the mathlib project. I am pretty sure that several of them still have not been dealt with.

Manuel Eberl showed us all how much better Isabelle is than Lean at analysis, and then Fabien Immler showed us again. Is this because analysis is hard to do in dependent type theory? I don’t think so, because the Coq people are also pretty good at analysis. We are just behind.

Sylvie Boldo gave a wonderful talk about Lebesgue integration in Coq. Sylvie — it was a pleasure to meet you. Sylvie is one of the people behind the Coquelicot project. What can Lean learn from this project?

On Wednesday, we had four talks by people using SMT or SAT solvers in different ways. I was worried that I would find these talks incomprehensible because I really don’t know anything about SAT solvers, but all of them — Lisitsa, Keller, Heule and Ebner — were really enjoyable. Gabriel Ebner unveiled a hammer for Lean! I’m voting Mjölnir by the way.

Mario Carneiro talked about metamath 0. I wish I could say more about his work, but it is too foundational for me to understand it properly. All I know is that we had a proof of Dirichlet’s theorem on primes in an AP in metamath and then Mario pressed a button and we had one in Lean, and it had been written by a computer and was incomprehensible. Scary?

Markus Rabe talked about Google’s dream to build a mathematician. They want to write code that reads ArXiv and spits out some kind of tree data structure. They want to read human language. Hales and Koepke are offering an alternative — they suggest that, right now, humans will have to change (i.e. learn) if they want to communicate mathematics to computers themselves.

Finally, Imperial undergraduates Chris Hughes, Amelia Livingston and Jean Lo did me proud, with intelligent comments and questions.

To everyone I saw there, and in particular to the organisers Jeremy Avigad and Rob Lewis, and also to Mary Grace Joseph, thank you very much for an extremely enjoyable conference.


## Lean has real manifolds!

And tangent bundles too! Thank you Sébastien Gouëzel! Tangent bundles would be hard to do in simple type theory (e.g. Isabelle/HOL) because they are naturally a dependent type, and if you work around this in the way that the HOL people usually do then you seem to have to abandon typeclasses (so sheaves of rings would be hard to work with).

I think that one measure of how appropriate a computer proof system is for human researchers is whether it can state the Clay Millennium problems. If a system isn’t strong enough to even state the problems then surely it is not going to be of much interest to most “working mathematicians”. Which systems can state all of them? None of the systems can. Manifolds are an initial but important step towards the statement of the Hodge Conjecture. Of course we also have to formalise some cohomology theories. I have an MSc student, Shenyang Wu, working on group cohomology, which I always felt was the most concrete and down-to-earth of the cohomology theories which I know.

But before we get too excited, let’s still remember that Lean does not (yet) have the Cauchy Integral Formula…

## Lean offers rigour.

For years, people have been pointing me to Thurston’s “On proof and progress in mathematics”, a paper which I think of as a brilliant exposition of the importance of non-formalisation in mathematics. But it was only this week that I actually read the article Thurston was responding to — Jaffe and Quinn’s “Theoretical mathematics”. This paper opened my eyes. They have a word for how I now think about mathematics! They explain that I am now doing something called rigorous mathematics. This phrase gives me the language to explain what I am trying to tell mathematicians.

What is rigorous mathematics? It means mathematics which is guaranteed to be correct. Undergraduate-level mathematics is clearly rigorous mathematics: it has been checked from the axioms by many many people. What else? Perhaps SGA is a good example. It has been around since the 1960s and has been read by many people. The work of Grothendieck and others written up in this seminar was used in Ribet’s work on level-lowering, which was in turn used in Wiles’ proof of Fermat’s Last Theorem. But not all of our human literature is rigorous mathematics. We can basically guarantee that one of the papers published this year in one of the top mathematics journals will, within the next few years, be partially retracted; nobody is asking whose job it will then be to go through all the papers which cited it and meticulously check that none of the citations were to the parts which were wrong. I think defining rigorous mathematics in these human terms, trying to assert which parts of the literature are “rigorous”, is a hard problem. Perhaps Bourbaki is rigorous. Patrick Massot found a slip or two when formalising parts of Bourbaki’s Topologie Générale in Lean, but nothing remotely serious. Perhaps SGA is rigorous too — after all, hasn’t Brian Conrad read it or something? But actually Bourbaki and SGA use different foundations, and neither of them uses ZFC set theory (Bourbaki rolled its own foundations and then apparently ignored them anyway and just did mathematics, and SGA needs universes in places, but, as Brian Conrad has carefully checked, not in the places used to prove Fermat’s Last Theorem).

I am not sure I know a good answer for exactly which parts of the mathematical corpus as represented by our human literature are “mathematically rigorous”. Maybe the way it works is that anything published in a journal over 20 years ago and which hasn’t been retracted yet is mathematically rigorous. But probably some recently graduated PhD students would also say their work was mathematically rigorous. Perhaps the definition most mathematicians work with is that rigorous mathematics is “the part of published mathematics which the elders believe to be correct”.

There is another question too, of less interest to most mathematicians but of some importance when trying to figure out what we mean by rigorous mathematics, and that is the problem of saying exactly what the axioms of mathematics are. Are we allowed universes or not? Whose job is it to tell us? Scholze’s course cites, but does not use, a paper by Barwick and Haine which uses universes everywhere. De Jong puts some effort into explaining how the stacks project can be made to work in ZFC, but honestly who ever reads those bits of the stacks project? Scholze talks about infinity categories in his course and no doubt he can solve all set-theoretic issues — after all he did take the trouble to write section 4 of etale cohomology of diamonds. Emily Riehl and Dominic Verity are writing a book on infinity categories. I bet that book will be rigorous. Wouldn’t it be interesting to check it was rigorous in Lean? I think that this is an important question. Lurie needs infinity categories and his work is having wider implications across mathematics. The Clausen and Scholze work also needs infinity categories and is claiming to prove Grothendieck duality in a completely new way. Is this work definitely rigorous? What about this work of Boxer, Calegari, Gee and Pilloni, announcing some major new results in the Langlands programme but subject to 100 or so missing pages of arguments which are not in the literature but which experts know shouldn’t present any major problems (see the warning in section 1.4 on page 13)? I have now seen a citation of that paper in another paper, with no mention of the warning. Is this what rigorous mathematics looks like? I am not suggesting that the work is incorrect, incomplete or anything, and I am not even suggesting that the human race is incapable of making the results in this 285 page paper rigorous. But is it, as it stands, rigorous mathematics? I have my doubts.
When David Hansen stands up in the London Number Theory Seminar and claims that a certain construction which takes in a representation of one group and spits out a representation of another one is canonical — and then refuses to be drawn on the definition of canonical — is this rigorous? And if it isn’t, whose job is it to make it rigorous? I am well aware that other people have different opinions on this matter.

I think that every one of the computer proof systems — Isabelle/HOL, UniMath, Coq, Lean, Mizar, MetaMath and all the others, all represent candidate definitions for “rigorous mathematics” which are far better defined than the human “definition”. Which system is best? Who knows. I think mathematicians should learn them all and make their own decisions. I think Lean has got some plus points over some of the others, but I know people who think differently.

Perhaps a more liberal definition would be that rigorous mathematics is the theorems which are proved in the union of these computer proof checkers. So for example we could say that the odd order theorem was rigorously proved from the axioms of maths, as was the four colour theorem, the prime number theorem, the Kepler conjecture and the Cap Set conjecture. These are examples of theorems which I think all mathematicians would agree were rigorously proved. But we will need a lot more machinery before we can rigorously prove Fermat’s Last Theorem in one of these computer systems. In my mind, Fermat’s Last Theorem is currently a theorem of theoretical mathematics in the sense of Jaffe and Quinn, although I have no doubt that one day it will be a theorem of rigorous mathematics as well, because we human mathematicians have written some very clear pseudocode explaining how to make it rigorous. As for the ABC conjecture, I genuinely don’t know if Mochizuki’s ideas can be made into rigorous mathematics, although it seems that in general mathematicians are not happy with the pseudocode which has been produced so far. I can see a very simple way for the Mochizuki-followers to sort this out though, and we would happily talk to them on the Lean chat.

## My visit to Microsoft Research

I visited Microsoft Research in September and then came back to the chaos of a new academic year, a new cohort, and a new curriculum. Sorry it has taken me so long to write this post.

So at Microsoft I spent most of my time talking to Leo de Moura and Daniel Selsam. Leo is the main Lean developer, currently working very hard on Lean 4. Daniel is working on the IMO Grand Challenge. 30 years ago I think it was manifestly clear that no computer theorem prover from that time could ever get a gold medal on the IMO. Daniel is using Lean 4 and investigating how it could be integrated with AI. Daniel is adamant that he wants nothing less than a gold. My attitude was “if a computer manages to solve just one IMO problem by itself, by running the IMO tactic on a formalised version of the question, then that would already be pretty awesome. But take a look at Machine Proofs In Geometry by Chou, Gao and Zhang and then look at these talk slides — computer scientists are developing tools which solve geometry problems, and remember that IMO geometry, unlike number theory, is a decidable domain (Tarski showed that basically bashing out the coordinates always works).” Daniel grilled me about inequalities and I don’t think I did very well — I was never any good at those (or, for that matter, at the geometry questions). Fortunately for Daniel, Reid Barton visited Microsoft Research a few weeks later.

And I finally got to meet Leo properly. One interesting thing about Leo is that whilst he is giving me code which enables me to express all my mathematical ideas completely rigorously, Leo himself has no interest in learning any of the advanced mathematics which we have been doing in Lean. Leo is trying to write a programming language which runs on a certain kind of dependent type theory and which will compute quickly in a way which is provably bug-free. For example I’m sure Leo would be very interested in talking to people who wanted to use Lean to develop a bug-free operating system for a computer.

Having said that, I’m sure that my visit would have been a great opportunity to learn something about the Lean source code, to take a look at all those C++ files and try and find out what the computer actually does with its C++ objects when I’m telling it to apply induction. But I have no idea about C++ and I don’t think we talked about the code at all, even though he wrote most of it.

But Leo and I actually had loads of other things to say to each other. I gave a talk, he asked me what mathematicians needed and I told him some problems we’d had with getting our ideas into Lean. The boundary of my interest in the Lean Theorem Prover is usually the Tedious Implementation Issue. For example, there are about three ways of doing homomorphisms of rings in Lean, and they’re all what I as a mathematician would call the same. On the other hand, the computer scientists have technical conversations on the Lean chat about which implementation will suit our needs best, now they are beginning to understand them better. This is a key point. We had a ring morphism $R\to S$ and an $S$-module $M$ and we wanted to consider it as an $R$-module. They said “Oh, we didn’t know you needed that functionality, and this is unfortunate because we’ve set it up so that modules could only be modules over one ring, and changing it will result in having to create a new copy of the module, which wouldn’t be very good. Hang on, we’ll fix it”. And they did, and it took Mario Carneiro a very long time to fix it, but fix it he did. And now using modules in Lean is quite similar to using modules in mathematics, as long as you know the module library. One day we’ll have automation which means you won’t even need to know the library that well, but sledgehammers in Lean are very much a work in progress right now.
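In Lean 3 terms, the fix amounts to restriction of scalars along a ring homomorphism. The sketch below shows the idea; `module.comp_hom` is the mathlib name as I remember it, but the exact spelling and namespace have moved around between versions, so treat it as indicative:

```lean
import algebra.module.basic

-- Sketch: restriction of scalars. Given a ring hom f : R →+* S and an
-- S-module M, pull the action back to R by defining r • m := f r • m.
-- In mathlib this is (roughly) `module.comp_hom`; the exact name may
-- differ depending on your mathlib version.
variables (R S M : Type*) [comm_ring R] [comm_ring S]
variables [add_comm_group M] [module S M]

example (f : R →+* S) : module R M := module.comp_hom M f
```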

What I want to do is mathematics, without ever caring about tedious implementation issues. You could say that the Xena project is trying to figure out how much these issues can be avoided. By teaching undergraduate- and PhD-level mathematicians how to formalise their ideas in Lean we are getting better and better at telling computer scientists what modern mathematics looks like, and then people like Leo and others will be better informed when they are making big design decisions about how to write Lean 4. This is one of the reasons why what goes on at the Xena project is important.


## Prove a theorem. Write a function.

New maths PhD students Lambert A’Campo and Ashvni Narayanan have proved that a normed real vector space with compact unit ball is finite-dimensional in Lean! [update added a few days later: ROFL Sébastien Gouëzel on the Lean chat complained that they had proved the “wrong” theorem and now I see that they have observed that exactly the same proof works for vector spaces over an arbitrary complete non-discrete normed field 🙂 ]

For me this is a milestone, because Lambert and Ashvni heard about Lean through me, Ashvni has chatted to me about Lean in person, and Lambert both in person and online, but they don’t really come to the Xena project. Lambert just asked me if this theorem was provable in Lean, I said “sure”, and he’s just gone away and worked on it with Ashvni and they’ve figured out how to prove it in Lean.

Here’s a brief history of how they got there. Lambert talked to me on the Lean chat. He played the natural number game. He installed Lean, created a new Lean project on his computer called compact_unit_ball, and just got going. He pushed it onto github so Ashvni could access it too. When they got stuck, Lambert asked questions in the #new members stream in the Lean chat, and experts helped him out. We understand mathematicians in that chat, as long as they are speaking rigorously. At the end of the day, these new PhD students have written a computer program containing the definition of a function which, when run, spits out a term of a type. A mathematician would call the type “the statement of the compact unit ball theorem”, and they would call the term “the proof of the compact unit ball theorem”.
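The slogan “a proof is a function whose output is a term of a type” is easy to demonstrate on something small. In Lean 3 (reusing the library lemma nat.add_comm):

```lean
-- The statement is a type; the proof is a term of that type,
-- built here by applying the library's nat.add_comm.
theorem my_add_comm (a b : ℕ) : a + b = b + a := nat.add_comm a b

-- And it really is a function: apply it to 2 and 3 and you get a
-- term whose type is the statement for those particular numbers.
#check my_add_comm 2 3   -- : 2 + 3 = 3 + 2
```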

Lambert and Ashvni are on my CDT, the London School of Geometry and Number Theory (applications for 2020 open now and close Dec 31st 2019!) I honestly think that teaching young mathematicians how to prove theorems in Lean is every bit as important as teaching them how to write computer programs in Python. Doing maths this way is making sure that you don’t make any mistakes along the way. Lambert and Ashvni proved a theorem by writing a function and I am proud that our CDT (Centre for Doctoral Training, a maths PhD student program) will accept such work as a submission for a computer project.

Lambert and I have been talking about etale cohomology. It’s going to be a tall order. A second year undergraduate at Imperial College, Calle Sönne, is working on sites and sheaves. We have found some of the literature a bit imprecise. What is a Grothendieck topology? Look at the definition on Wikipedia! It says that a Grothendieck topology is a “collection” of distinguished sieves blah blah blah. What is a collection? Can some mathematician tell us a more precise reference which deals with these issues properly? Is it a set or a class? Or is there more than one definition here?

Lambert did come to Xena last Thursday, and Johannes Girsch was also there (he’s another PhD student on the CDT). Calle was there too. We’ve been reading about condensed mathematics. That’s Clausen and Scholze’s new theory. Scholze talks about sites and Grothendieck topologies in there, and on the first page of chapter 11 he says the word “canonical” four times. If there are any other mathematicians in London interested in trying to figure out etale cohomology or condensed mathematics within the context of the Lean theorem prover (or even not within that context), any mathematician who thinks they know a definition of canonical, or any mathematician interested in learning about how to prove a theorem by writing a function, they can always find me in the computer room in Imperial’s maths department on Thursday evenings during Imperial’s term time. If London is not convenient, I’m often available on the Lean chat.

PS There are lots of ≤ worlds in the natural number game now, but hardly any < worlds beyond the definition. If people want to help out, feel free! There’s also the fledgling real number game.

## The natural number game — an update

I just pushed an update to the natural number game. I’m rather pleased with how this has all worked out. I started off thinking it was kind of a game, but the explanations turned out so wordy and now I think it is kind of a book/game hybrid. I attempt to teach around a dozen key tactics: refl, rw, induction, intro, have, apply, exact, split, cases, left, right, exfalso. These are somehow the bare minimum that you can get away with. Many of the applications are mathematical rather than abstract logic stuff, so hopefully they are appealing to mathematicians.
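A typical level, using only tactics from that list, looks like the following. This is the standard zero_add argument in plain Lean 3, not necessarily the game’s exact level text:

```lean
-- Prove 0 + n = n using only induction, rw and refl.
-- (nat.add_succ : a + succ b = succ (a + b) is how addition unfolds.)
theorem zero_add' (n : ℕ) : 0 + n = n :=
begin
  induction n with d hd,
  { refl },                     -- base case: 0 + 0 = 0 by definition
  { rw nat.add_succ, rw hd },   -- succ case: rewrite, then use the IH
end
```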

To come: ≤ world. This will have a ton of levels, some quite tricky. Then I might do something about even/odd numbers, and possibly something about prime numbers if I have the time/energy.

And then what? I think there is potential for an integer game, and also for a basic analysis game with sups and infs and stuff, where the real numbers are presented as a black box complete archimedean field. In these games I would refer the player to the natural number game for tactic-learning and just get on with the levels. I have had a lot of positive feedback about the natural number game.

On the other hand, people have started asking questions such as “so what maths is in Lean exactly, nowadays?” and I don’t feel that there is a good answer to this on the internet at the minute — a half-baked answer I gave about 18 months ago when I was just getting on top of Lean is now woefully out of date. We even seem to have manifolds now, thanks to the heroic efforts of Sébastien Gouëzel. I think an equally important question is “what maths isn’t in Lean yet, but could be in it soon?” I will attempt to answer both of these questions in the near future. [update a few weeks later — see “what maths is in Lean” link above!]
