The natural number game — an update

I just pushed an update to the natural number game. I’m rather pleased with how this has all worked out. I started off thinking of it as a game, but the explanations turned out to be so wordy that I now think of it as a book/game hybrid. I attempt to teach a dozen key tactics: refl, rw, induction, intro, have, apply, exact, split, cases, left, right, exfalso. These are somehow the bare minimum that you can get away with. Many of the applications are mathematical rather than abstract logic stuff, so hopefully they are appealing to mathematicians.
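To give a flavour of the early levels, here is the sort of proof the game is trying to teach, written in ordinary Lean 3 syntax (the game itself runs on its own mynat type, so this is a sketch of the idea rather than an actual level):

```lean
-- Proving 0 + n = n by induction, using three of the game's tactics.
-- nat.add_succ is the core lemma  n + succ m = succ (n + m).
example (n : ℕ) : 0 + n = n :=
begin
  induction n with d hd,
  { refl },                  -- base case: 0 + 0 = 0 holds by definition
  { rw [nat.add_succ, hd] }  -- inductive step: rewrite, then use the hypothesis
end
```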

To come: ≤ world. This will have a ton of levels, some quite tricky. Then I might do something about even/odd numbers, and possibly something about prime numbers if I have the time/energy.

And then what? I think there is potential for an integer game, and also for a basic analysis game with sups and infs and stuff, where the real numbers are presented as a black box complete archimedean ordered field. In these games I would refer the player to the natural number game for tactic-learning and just get on with the levels. I have had a lot of positive feedback about the natural number game.

On the other hand, people have started asking questions such as “so what maths is in Lean exactly, nowadays?” and I don’t feel that there is a good answer to this on the internet at the minute — a half-baked answer I gave about 18 months ago when I was just getting on top of Lean is now woefully out of date. We even seem to have manifolds now, thanks to the heroic efforts of Sébastien Gouëzel. I think an equally important question is “what maths isn’t in Lean yet, but could be in it soon?” I will attempt to answer both of these questions in the near future.


Counting binary relations: an apology

OK so this is really just another equality post. And an apology to the Imperial 1st years for teaching them badly. Let’s start with two questions.

  • Q1) How many binary relations are there on a set with 2 elements?
  • Q2) Is 2 + 2 = 4 equal to Fermat’s Last Theorem?

I am voting for “16” with Q1 (a binary relation on a set A is a subset of A\times A, and a 2-element set has 2^4=16 subsets of its square) and “of course not” with Q2. But I now realise that these answers are inconsistent with each other :-/ I know this is probably logic-for-computer-scientists 101, but I never did that course so I’m learning it myself now, in an inefficient way. And in some sense the point of these posts is to try and teach these ideas to mathematicians in a way that makes sense to them. So let me assume that the answer to Q1 is 16 and the answer to Q2 is no, and deduce a contradiction.

It’s clear that for integers x,y and a positive integer n, x\equiv y mod n is a true/false statement, so for a fixed n this gives us a binary relation on the integers — indeed, an equivalence relation — and the set of equivalence classes is \mathbb{Z}/n\mathbb{Z}. Happy so far?

Now let A be a subset of the integers. This “congruence mod n” equivalence relation induces a binary relation on A as well.

OK so now fix A=\{1,7\}, and let’s define R to be the binary relation on A of being congruent mod 2, and let’s define S to be the binary relation on A of being congruent mod 3.

And now, because of Q2 above, the statement 1\equiv 7 mod 2 is not equal to the statement that 1\equiv 7 mod 3 — they’re definitely equivalent logically because they’re both true — but they have different mathematical content and so are definitely not equal. However R(1,7) is equal to 1\equiv 7 mod 2 by definition, and S(1,7) is equal to 1\equiv 7 mod 3 by definition, so we are forced to deduce that R(1,7) is not equal to S(1,7) which of course implies that R is not equal to S. But when we were answering Q1 we decided that R and S were the same binary relation (because R(x,y) and S(x,y) are both true for all x and y in A), and if you think of binary relations on A as subsets of A^2 then of course R and S are equal. So if you think about them one way, they’re equal, and if you think about them in another way, they’re not.

This phenomenon (that two logically equivalent true/false statements are equal) is called propositional extensionality. It’s an axiom in Lean; note that Lean uses type theory as its foundations. I’m not sure that it can even be expressed in set theory (everything is a set in set theory, and propositions are not sets, so I’m not 100 percent sure how to even do mathematics with them in set theory). I surely use propositional extensionality all over the place in my work; it’s related to the law of the excluded middle, which I also use all over the place. This is one of those things which, when I first read it with my “proper mathematician” hat on, I just thought “this is silly constructivist computer scientists making a fuss about nothing”. I just checked the perfectoid project and I can confirm that at at least one point in our repository Commelin, Massot and I must have assumed it, because our definition of a perfectoid space in Lean relies on this axiom.
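For readers who want to see the axiom in action, here is a minimal illustration in Lean 3 (nothing here is specific to the perfectoid project):

```lean
-- propext is the axiom: two logically equivalent propositions are equal.
example (P Q : Prop) (hPQ : P ↔ Q) : P = Q := propext hPQ

-- In particular any two *true* propositions are equal; this is exactly
-- the "2 + 2 = 4 equals Fermat's Last Theorem" phenomenon.
example (P Q : Prop) (hP : P) (hQ : Q) : P = Q :=
propext ⟨λ _, hQ, λ _, hP⟩
```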

Conversely, I explicitly told the 1st years in my lectures earlier this month that I was a bit uncomfortable with the idea that 2 + 2 = 4 equals Fermat’s Last Theorem. But both of these things are true, so by propositional extensionality, they’re going to have to be equal, and if you don’t accept this then you have to accept that there are infinitely many binary relations on a set with two elements (or even on a set with one element — for each true statement I can define R(x,x) to be equal to that statement). And what is annoying is that on the test today I asked my students how many binary relations there were on a set with two elements and several of them said that there were infinitely many, and I think that this answer, which I am going to have to mark as wrong, does have its merits. It is, unfortunately, not the answer that a mathematician would give, and I am supposed to be training these people to be mathematicians.

However I think that on this occasion I am not training them very well. I think that I should have made it crystal clear in the lectures that, in mathematics, we say that two binary relations R and S on a set X are equal if \forall x,y\in X, R(x,y)\iff S(x,y). Now looking at it this way, it would seem better to say that R and S were somehow equivalent, and hence I should have asked the first years to count the number of binary relations on a set with two elements up to equivalence. But pedagogically this seems like a real can of worms because some of them are having a hard enough time wrestling with mathematics this abstract anyway.
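Lean, for what it’s worth, reconciles the two points of view by force: propositional extensionality together with function extensionality upgrades the mathematician’s “equivalence” of binary relations to literal equality, so the answer to Q1 really is 16 there too. A sketch in Lean 3:

```lean
-- Two binary relations which are pointwise logically equivalent are equal.
example {X : Type} (R S : X → X → Prop) (h : ∀ x y, R x y ↔ S x y) :
  R = S :=
funext (λ x, funext (λ y, propext (h x y)))
```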

How mathematicians cheat.

Here’s what I think is going on. Mathematicians have an incredibly powerful and fluid notion of equality, and I believe that many of us do not ever really examine it carefully. We know exactly how it works, yet we have no clue what the axioms for it are, and when we become PhD students we might even start using the = sign to denote canonical isomorphism. We are in good company — Grothendieck does this in EGA, and Milne’s book on etale cohomology explicitly states it as a convention in the introduction; thanks to @TheHigherGeometer who dug out screenshots on Twitter when I was last going on about this stuff. Because we don’t really understand it properly, we don’t teach it properly, and then we wonder why students think that there are infinitely many binary relations on a set with two elements.

I think that what is really going on is that some students in our classes manage to just “get it”, and see what we’re doing, and then copy it, and those are the ones we think are mathematicians. And those who foolishly believe that 2 + 2 = 4 is not equal to Fermat’s Last Theorem and hence deduce that there are infinitely many binary relations on a set with two elements — these are people who “don’t get it” and we just mark their answers as incorrect. What is happening is that those people who “get it” are managing to see through the fog and are beginning to understand what we mean when we say that two things are “equal”, or “the same”, or “canonically isomorphic” or whatever way we use to describe the appropriate equivalence relation meaning “the same thing as far as a mathematician is concerned” on the objects we’re looking at.

When I found the EGA example of Grothendieck saying that two canonically isomorphic localisations were equal, even though they are literally not at all equal according to the standard definition of a localisation, I mentioned this to Johan de Jong, someone who has been teaching algebraic geometry for a very long time now. He told me that he had noticed that some students really struggled with this idea in his class — it comes up in the definition of the structure sheaf on a scheme. If R is a commutative ring and f,g\in R with, say, f^2=g^3, then the basic open sets D(f) and D(g) really are actually equal as subsets of Spec(R) (a prime ideal contains f if and only if it contains f^2=g^3, if and only if it contains g), so the rings we assign to these basic open sets when defining the sheaf on the standard basis had better also be actually equal. But when you look at what is done in most of the books, we assign R[1/f] to D(f) and R[1/g] to D(g), and these are not actually equal. Our fix as mathematicians is to simply declare that they are equal anyway. This is a fudge, but it works, because they are “sufficiently equal” (they are, after all, canonically isomorphic). Students who don’t understand this point are deemed to be not very good at algebra. But there is a genuine issue here. Do we teach it explicitly? No we don’t; we just expect them to get the hang of it. Should we teach it explicitly? Would that confuse people more? Maybe it should at least be mentioned.

In 2011 I was assigned to teach a basic group theory course at Imperial; the previous lecturer was Prof. Martin Liebeck, a teacher whom I hugely respect. The students had already seen a definition and basic examples of groups in a more general “introduction to bits of algebra” course, but it was my job to start on group theory proper and define things such as homomorphisms, kernels, the first isomorphism theorem and so on. But Prof. Liebeck’s notes (which I was following) introduced the notion of an isomorphism of groups before the notion of a homomorphism. At the time I thought this was ludicrous. In any category, an isomorphism is a morphism with a two-sided inverse, and this felt like the natural definition of isomorphism to me; why introduce it before introducing the morphisms? But when I went to the library and looked through the books that Prof. Liebeck was recommending, one of them — Fraleigh’s “Abstract algebra” (6th edition) — devoted two paragraphs of its “changes from the 5th edition” to explaining that after many years of teaching algebra Fraleigh had become convinced that doing isomorphisms, carefully and properly, before homomorphisms was pedagogically very important. Because I was young and inexperienced, I decided to believe Liebeck and Fraleigh, and continued to do the same.

What I have realised now though is that this concept — isomorphism of groups — is a great example for showing students how fluid we are when it comes to equality (“there are 2 groups of order 6”), so it’s good to teach it early. However this fluidity has already appeared in the introduction to proof course (with “equality of equivalence relations”), and perhaps this important idea of understanding when two things are “the same to a mathematician” should be introduced even earlier. For example, the fact that equivalence relations biject with partitions relies on this slightly subtle notion of equality of equivalence relations.

In summary then, at least one smart person came out of my course thinking that there are infinitely many binary relations on a set with two elements, and I now believe that this is not because they “don’t get it”; it is simply because I did not explain carefully enough what it means for two binary relations to be “the same”. My bad. If any other lecturer has seen a student make the same mistake, maybe it is because the student has not yet figured out that things can be “the same” without being equal, and we probably do not explicitly mention this subtlety in our lecture notes. In my case, this is because it has taken me 20 years of lecturing to notice that I am even doing it.


Chalkdust, and the natural number game!

Chalkdust is a great magazine for undergraduates which circulates in hard copy form around London and is also available online. It’s quirky, it’s put together by people who know what they’re talking about, and I think it does a great job of promoting mathematics to the undergraduate and school communities in London and beyond. And issue 10, out today, has an article about Lean in it 😀 [disclaimer: written by me 😉 ]

The reason this article got written was that I was at the launch party for the last issue, and got talking to Eleanor Doman, one of the people behind the magazine. She’d seen me give a school talk and asked if I had anything I wanted to contribute. Two years ago I had written a blog post, which proved quite popular with Imperial’s undergraduates, about proving basic theorems about the natural numbers just from Peano’s axioms. People told me that it was a good gateway into learning how to use Lean. I decided it was worth revisiting this idea, and wrote the article linked to above. But I also thought it was about time I made this stuff easier for people to try themselves, and so, with many many thanks to Mohammad Pedramfar, who did all the web coding, we present to you…

The Natural Number Game!

Ever wanted to prove that a\leq b and b\leq c implies a\leq c from first principles, but couldn’t be bothered to download and install a computer theorem prover? Well now you can! The natural number game is an attempt by me to teach complete beginners, who are comfortable with induction but know nothing about theorem provers, how it feels to use one. Undergraduate mathematicians — this is a chance to try Lean without having to download anything, and to get over enough of the learning curve to actually do something without getting stuck 100 times. Or maybe you’ll get stuck 100 times anyway. Find me on the Lean chat and give me feedback.
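For the curious, here is roughly what such a proof looks like. I am using a standalone definition mimicking the game’s (in the game, a ≤ b is defined to mean ∃ c, b = a + c, which is not literally Lean core’s definition), plus mathlib’s use tactic:

```lean
-- A standalone ≤ on ℕ, defined the way the natural number game defines it.
def le' (a b : ℕ) : Prop := ∃ c : ℕ, b = a + c

-- Transitivity from first principles: cases, use and rw do all the work.
example (a b c : ℕ) (hab : le' a b) (hbc : le' b c) : le' a c :=
begin
  cases hab with x hx,     -- hx : b = a + x
  cases hbc with y hy,     -- hy : c = b + y
  use x + y,               -- new goal: c = a + (x + y)
  rw [hy, hx, add_assoc],
end
```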

Currently we have around 50 levels, covering addition, multiplication and raising numbers to powers, plus a few inequality levels. Over the next few weeks I hope to add 20 more inequality levels and some stuff about even and odd numbers and primality.

If you’re not a fan of the web interface and just want to play the game offline on your own computer, you’ll need to install Lean and its maths library, but this is not too hard nowadays. The natural number game source code is available on GitHub, with installation instructions.


Does anyone “know” a proof of Fermat’s Last Theorem?

So there’s a piece in Vice which quotes me as saying that I fear “all published maths is wrong”. Of course this is nonsense: indeed, as every mathematician knows, some published maths is wrong, but the vast majority of it is fine (indeed some of it has even been checked in Lean 🙂 ). The idea of how much of the literature is wrong/incomplete/whatever came up in the context of explaining various viewpoints which Patrick Massot, Michael Harris and I had been exploring during our PhilMath Intersem last June, but for the record, “all published maths is wrong” is not something I fear, because I know it’s not true. There is another quote attributed to me: “I think there is a non-zero chance that some of our great castles are built on sand”, which I did say, and which I think is a far more accurate representation of my views (but rather less clickbaity). I guess that this viewpoint — that maybe some important stuff is not OK — is not believed by all mathematicians. Some seem to think that all the important stuff is fine; I’m less sure.

But I also spoke about Fermat’s Last Theorem in the article, and made a claim which, again, not everyone in my community seems to believe — indeed Persiflage expressed an opposing view on Twitter. The claim I made is that no human knows a complete proof of Fermat’s Last Theorem. Speaking as someone who was doing a PhD in precisely the area used by Wiles and Taylor at the time it all blew up, I am lucky enough to have an overview of the argument — and certainly Persiflage does too. But does any one human know all of the proof? Last year I asked Wiles if he knew all the details of his own proof (30 minutes in), and he did not say “yes”. Indeed he said that he would not want to sit an exam on the proof of Langlands–Tunnell.

So what does it mean to “completely understand a proof of Fermat’s Last Theorem”? Vice quotes me as saying that nobody “completely understands” it, but it also claims that I suggest “(nobody) knows whether it’s true”, which again misrepresents me a bit. There are people that I trust, who have checked the parts which I have not checked, so I think we can argue that I know (to the extent that a human can know anything) that FLT is true. But what about finding that one person who “completely understands” it? Let’s take Wiles’ interpretation — could we find someone who is guaranteed to pass an exam whose syllabus was all of the mathematics involved in the proof? I think we should make it an open book exam, so the human being examined can look at the literature — but they cannot consult other humans.

And here I’m going to stick to my guns. To completely understand a proof of FLT (let’s say, for now, the proof explained in the 1995 Darmon–Diamond–Taylor paper) you will need to be a master of the techniques used by Langlands in his proof of cyclic base change (and I know people who are), and a master of Mazur’s work on the Eisenstein ideal (and I know people who are). But what about the far less sexy technical stuff? To move from the complex analytic theory of modular forms to the algebraic theory you will need to know the delicate interplay between the analytic and algebraic theories of moduli spaces of elliptic curves (and I know people who know this — but I went through some of this stuff once and it’s far more delicate than I had imagined, and there are parts of it where the only useful reference seems to be Brian Conrad’s brain). This last example is perhaps a good example of a tedious technical issue which it’s very easy to forget about, because the results are intuitive and the proofs can be technical. There are many other subtleties which one would have to fully understand because they’re on the syllabus. Is there really one human being who would feel confident answering questions on all of this material? I am really not sure at all.

And I think the reason I am not sure is because Lean has taught me that checking the details can sometimes be hard. If someone had asked me two years ago if I knew the definition and basic properties of adic spaces, I would have said “sure — I even wrote a paper about them”. But when formalising them I realised that there were all sorts of subtleties which I had just “taken for granted” because they were tedious technical issues which “of course I could work through in theory”. Actually working through some of them, when formalising them, made me understand that some of these issues were far bigger and more complex than I had realised. In the perfectoid project there is still an issue regarding the topology on an adic space, which I had assumed would be all over within about a page or two of pdf, but whose actual proof goes on for much longer, and at one point even invokes Hochster’s thesis (which I’ve certainly not read), although Conrad assures me that it is not necessary. We’ve formalised the definition, but there are still some very basic properties of adic spaces which we have not formalised. Maths is hard.

My belief is that, over time, mathematicians begin to forget this, and they start to conflate the idea of a result being “standard” with the idea of a result that they completely understand the proof of. Our notion of proof is so different to the formal notion that we skip huge steps knowing that the techniques will work, and after a while we may forget what is involved in these techniques (if indeed we ever knew). This observation is not really of mathematical relevance — if I need the Langlands–Tunnell theorem in my work (and I did, once, in my work on Artin) then I will invoke it and not lose any sleep over it. But on the other hand, do I really completely understand the proof of that Artin work — my own theorem? I wouldn’t say I did.


IMO 2019 Q1

Someone sent me a link to this year’s IMO questions this afternoon at work, and the first one caught my eye because it looked like a relatively straightforward functional equation question: find all f : \mathbb{Z}\to\mathbb{Z} such that f(2a)+2f(b)=f(f(a+b)) for all integers a and b. I’ve seen enough of these in my time to know how to go about them, and it didn’t take me long to find a solution on paper. I then had to go to a meeting, but halfway through, when I didn’t have anything else to contribute, I idly started formalising my solution in Lean.

Now formalisation is hard, so I knew I would end up getting bogged down in details, and that things which looked straightforward on paper would end up messy. At some point someone divides by 2, which looks a bit problematic in the integers, and then there’s some integer induction — I kept waiting to get to the place where it was all going to go horribly pear-shaped.

But that place never came. Powerful tactics like convert and ring, the conv mode, and a sensible induction procedure for the integers meant that the Lean proof flowed just as the paper proof had done. Here it is! It probably took somewhere between half an hour and an hour to finish. Of course the question is not too hard — but that’s not the point.

The point is this. Before I got into the formalisation business, it was “obvious” to me that formalising questions like this should be easy. When I actually started, it became very clear to me very quickly that formalising IMO questions would be very hard, because formalising anything with content is hard. But the nice surprise is that actually, once one has a grip on formalisation, if one is working in an area which is “sufficiently close to the axioms” like elementary number theory or algebra, maybe nowadays in Lean it really is becoming as easy as I always “knew” it would be.
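As a tiny illustration of the kind of algebra step that ring dispatches in a problem like this: one family of solutions is f(x)=2x+c, and checking that it satisfies the functional equation, once f has been substituted in, is a one-liner. This is a sketch in Lean 3 with mathlib, not an excerpt from my solution file:

```lean
-- With f(x) = 2x + c, both sides of f(2a) + 2f(b) = f(f(a+b)) become
-- polynomial expressions in a, b and c, and ring closes the goal.
example (c a b : ℤ) :
  (2 * (2 * a) + c) + 2 * (2 * b + c) = 2 * (2 * (a + b) + c) + c :=
by ring
```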


Lean does not (yet) have real manifolds.

There are loads of computer theorem provers. There is Lean, there is Coq, and there are Agda, Idris, Isabelle/HOL, HOL Light, UniMath, Mizar and many others. Which one is the best one for formalising a proof of Fermat’s Last Theorem?

I’ve spent the last few months having a good look at all of these systems and I think that out of all the ones which are currently on offer to us (the mathematical community) the one which offers most hope is Lean. I checked Lean’s suitability by defining perfectoid spaces in Lean 3 with Patrick Massot and Johan Commelin, two other mathematicians — and it worked fine. We ran into a lot of problems along the way, but in essentially every situation the problem we were in was of our own making, and we were rescued by computer scientists who know nothing of perfectoid spaces but who are on the other hand very well equipped to answer extremely specific technical questions and give us extremely specialised advice about how best to turn mathematics into functional programming in practice.

But we still don’t have smooth real manifolds. I was initially perplexed by why things were going so slowly; I saw a definition from first principles as an undergraduate. But now I understand the reason, and it’s a bit bizarre. The reason is a general principle in Lean: one should set things up in as much generality as possible, for certain efficiency reasons which are of no interest to mathematicians. Working out precisely what that generality should be is exactly the kind of extremely specialised advice the computer scientists have been offering us here. This has the unfortunate consequence that you can’t just wait for an Imperial undergraduate to do it 😦 One apparently really needs to think about differentiating maps between quite general real topological vector spaces, at least some of which can be infinite-dimensional. This kind of stinks, but that’s the way it is with dependent type theory. Important: I am not saying that real manifolds can’t or won’t be done in Lean. I am just saying that we know what to do, but it is taking a long time and it is more technical than one might guess at first. We’ll get there. Thank you to (in alphabetical order) Jeremy Avigad, Sébastien Gouëzel, Patrick Massot, Zhouhang Zhou and all the others who are making steady progress. We have recently reached a milestone — we have defined the Bochner integral. I don’t know what that is, although when thinking about it in Lean I ended up so confused that I had to confess on mathoverflow that I didn’t even really understand the real numbers.
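For concreteness (this is my paraphrase of the situation, not an official design document): the notion being set up is essentially the Fréchet derivative. A function f : E \to F between normed real vector spaces is differentiable at x, with derivative the continuous linear map L, if \lim_{h\to 0}\|f(x+h)-f(x)-L(h)\|/\|h\|=0, and one has to develop calculus at this level of generality before manifolds can even be started.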


A computer-generated proof that nobody understands

Very early on when I was reading about this formal proof verification game, I learnt about the Robbins Conjecture. Let me start this post with an overview of what this conjecture is, how it was proved by a computer, and how its proof turned out to be easy for humans to understand. Then let me tell you about Bruck loops, where the situation is quite different, and finally let me tell you about fancier mathematical objects and why things aren’t so easy in that case.

Robbins algebras

There is something in maths called a Boolean algebra — it’s just a definition, like a group or a topological space. Feel free to look at the formal definition on the Wikipedia page for Boolean algebras. In the 1930s Herbert Robbins defined another mathematical structure, now called a Robbins algebra, and conjectured that every Robbins algebra was a Boolean algebra. Proving this conjecture is like solving a big logic puzzle: we have a set with some structure and a load of axioms (a Robbins algebra), and we want to prove some other statements about this structure (namely, the axioms for a Boolean algebra).

The conjecture remained open for around 60 years, possibly because it was tricky and possibly because there were not too many people working on it. Then in 1996 William McCune proved the conjecture using “automated theorem provers” called EQP and Otter. To give a rather brutal description of what an automated theorem prover is — it is something which takes as input a question such as the Robbins Conjecture, and then just throws all the axioms around generating lemma after lemma about Robbins algebras with the hope of ultimately managing to prove all the statements which define Boolean algebras. As you can imagine, this “brute force and ignorance” approach is not particularly appealing to the purist, but, let’s face it, it used to be the way that computers played chess, and anyone who has had their king and rook forked by a computer’s knight a fraction of a second after they have made a careless move knows that this approach can sometimes pay off.

Now one interesting thing about McCune’s proof was that actually, at the end of the day, it was pretty short. Within a year Bernd Dahn had got yet another system, ILF, to go through the Otter proof and simplify it (spotting common themes which were used more than once in the proof and pulling them out as useful lemmas); Dahn ultimately wrote a “normal” seven-page maths paper, published in the Journal of Algebra in 1998, containing a human-readable account of the proof. The proof had now been completely “de-computerised”.

We learn from this story the unsurprising fact that computers are good at logic puzzles.

Bruck loops

I didn’t tell you what a Robbins algebra was, but I will attempt to tell you what a Bruck loop is. If you really don’t care then just skip down to the statement of the theorem below.

A loop is “a group without the associativity axiom”. More precisely a loop is a set Q equipped with a binary operation \cdot : Q \times Q \to Q and an identity element 1\in Q satisfying 1 \cdot x = x \cdot 1 = x for all x\in Q, plus the following “inverse axiom”: for all q\in Q, left multiplication by q and right multiplication by q are both bijections from Q to Q. In particular for any q\in Q the equations qr=1 and rq=1 have unique solutions for r, and one could call these solutions the left and right inverses of q. These two inverses might not coincide though.
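Packaged up in Lean 3, the definition might look as follows. This is my own sketch, with made-up names — it is not a mathlib structure:

```lean
-- A loop: "a group without associativity". The inverse axiom says that
-- multiplication on either side by a fixed element is a bijection Q → Q.
structure loop (Q : Type) :=
(mul : Q → Q → Q)
(one : Q)
(one_mul : ∀ x, mul one x = x)
(mul_one : ∀ x, mul x one = x)
(left_mul_bij  : ∀ q, function.bijective (λ x, mul q x))
(right_mul_bij : ∀ q, function.bijective (λ x, mul x q))
```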

Groups of course give examples of loops, but there are non-associative examples too. Most definitions in group theory have some sort of analogue in loop theory, although you sometimes have to think about things the right way. For example one can consider the group of permutations Q\to Q generated by left and right multiplications by elements of Q, and the inner mapping group Inn(Q) of a loop Q is the subgroup consisting of the elements sending 1 to 1; if Q is a group then this is readily checked to be the group of inner automorphisms of Q.

Sometimes the lack of associativity makes these analogues really painful though. For example the centre of a loop Q is the subloop Z(Q) of Q consisting of the elements z\in Q such that z\cdot q=q\cdot z for all q\in Q and z\cdot(x\cdot y)=(z\cdot x)\cdot y for all x,y\in Q and two more equations like this with z in the middle and on the right. One checks that the centre of a loop is a normal subloop and one can form a quotient loop Q/Z(Q).

Just as for groups, we say a loop Q has nilpotency class at most 1 if Q=Z(Q), and that Q has nilpotency class at most 2 if Q/Z(Q) has nilpotency class at most 1.

For groups, we have Inn(G)\cong G/Z(G) and in particular if Inn(G) is abelian then G has nilpotency class at most 2. These results use associativity though, and are not true in general for loops. However they are true for Bruck Loops:

Theorem (a computer). If Q is a Bruck loop and Inn(Q) is abelian, then Q has nilpotency class at most 2.

A Bruck loop is a loop satisfying two extra axioms: firstly x\cdot (y\cdot (x\cdot z)) = (x \cdot (y \cdot x)) \cdot z (which implies that left inverses equal right inverses), and secondly that (xy)^{-1}=x^{-1}y^{-1} (another “commutativity” condition).

The humans involved in the proof were J. D. Phillips and David Stanovský, who wrote a paper published in 2012 in Communications in Algebra. The humans reduced the question to proving that a certain equation was always satisfied in Bruck loops with abelian inner mapping groups, and then they threw a bunch of automated theorem provers at it. The prover Waldmeister managed to come up with a proof. Here it is — all 30000+ lines of it. This helpful page on Stanovský’s website offers other formats of the proof, for example a 1068-page pdf file containing a complete “human-readable” proof. What human would ever read this proof though?

I had no idea that computers were doing that sort of thing nowadays. I learnt about this work from Michael Kinyon at AITP 2019.

Shimura varieties

In my mind, this work raises an interesting question. Note that the work also raises a lot of boring questions. Of course “boring” is a very subjective term. But let’s try and put this work on Bruck loops into some sort of context. I work in a mathematics department full of people thinking about mirror symmetry, perfectoid spaces, canonical rings of algebraic varieties in characteristic p, etale cohomology of Shimura varieties defined over number fields, and the Langlands philosophy, amongst other things. Nobody in my department cares about Bruck loops. People care about objects which it takes an entire course to define, not a paragraph. The questions raised by this work which people in my department might find “boring” are: here are other classes of loops; can one prove analogous theorems about these classes? Or: here is another elementary object which takes a paragraph to define but which is not taught to the undergraduates at Imperial College London; can one prove the following conjecture about these objects? The kind of mathematicians I know are, for the most part, not remotely interested in such questions. This is all to do with subtleties such as fashions in mathematics, and which areas are regarded as important (that is, worth funding).

So what are the mathematicians I know interested in? Well, let’s take the research staff in my department at Imperial College. They are working on results about objects which in some cases take hundreds of axioms to define, or are even more complicated: sometimes even the definitions of the objects we study can only be formalised once one has proved hard theorems. For example the definition of the canonical model of a Shimura variety over a number field can only be made once one has proved most of the theorems in Deligne’s paper on canonical models, which in turn rely on the theory of CM abelian varieties, which in turn relies on the theorems of global class field theory. Those are the kinds of definitions which mathematicians in my department get excited about — not Bruck loops. I once went to an entire 24-lecture course by John Coates which assumed local class field theory and deduced the theorems of global class field theory. I have read enough of the book by Shimura and Taniyama on CM abelian varieties to know what’s going on there. I have been to a study group on Deligne’s paper on canonical models. So after perhaps 100 hours of study absorbing the prerequisites, I was ready for the definition of a Shimura variety over a number field. And then there is still the small matter of the definition of etale cohomology.

So why are computers proving theorems about Bruck loops but not proving theorems about the cohomology of Shimura varieties?

Here is a conjectural answer. Bruck loops are easy, so it’s easy to prove theorems about them. Shimura varieties are hard, so it’s never going to work.

Yeah, but hang on a minute. If 20 years ago we had raised the possibility that a computer could come up with a 1000-page proof of a statement which humans had formulated, however “easy” the objects in question were, I think that it would not have been hard to find people who would have been very skeptical. Think about it. The definition of a Bruck loop in the 1000-page pdf consists of 19 axioms. Now make a naive guess about how long a blind search for a 30,000-line proof would take: with, say, one of 19 axioms to apply at each step, you are looking at something like 19^{30000} possibilities, and that number does not fit into our universe in any meaningful way. The computer scientists must have done something clever here. In particular I think that although many mathematicians would instinctively rule out the possibility of computers proving hard theorems about complicated objects because of issues of scale, this work on Bruck loops throws some sort of spanner into those works.

Here is another answer, which is correct for trivial reasons. One reason why computers are not proving theorems about the etale cohomology of Shimura varieties over number fields is that the human race have not yet taken the time to tell a proof verification system what the phrase “etale cohomology of a Shimura variety over a number field” means. Why is this? Well, I am currently finding it very hard to think of people who know the definition of the canonical model of a Shimura variety and also know how to use proof verification systems. I also think that it is much easier to train mathematicians to use proof verification systems than it is to train computer scientists to read papers by Deligne, and this is one of the reasons why the Xena project exists.

But is this the only reason? Well, unfortunately, I suspect that it is not. If you were to ask a computer scientist what they would need in order to start proving things about the cohomology of Shimura varieties defined over number fields, I suspect that they would ask for something like

  • the definitions;
  • sample proofs of hundreds (or thousands?) of theorems to train their AI;
  • some conjectures, of approximately the same level of difficulty as the theorems.

The reason they will need sample proofs, in contrast to the Bruck loop work, is that the natural framework in which to define objects like Shimura varieties is some higher-order logical structure; even for the definition of a topological space one has to quantify over subsets rather than just elements. On the other hand, after AITP 2019, Josef Urban showed me some work where the E theorem prover had managed to prove basic theorems about topological spaces given enough “training data” theorems.

If instead of Shimura varieties we think about getting computers to prove theorems about perfectoid spaces, then Commelin, Massot and I have achieved the first bullet point above. As you can see, we still have a vast amount of work to do before we can catch up with Peter Scholze. Another milestone in this game would be to prove the tilting correspondence, because proving that would involve proving hundreds of other theorems along the way (which could also be used to train). However before that milestone, we still have the milestone of stating the tilting correspondence. Even stating it will involve having to prove some theorems. There is still a long way to go.
