Thoughts on the Pythagorean theorem

I’m sure I’m saying nothing new here. I’m just explaining another example of how thinking about how to formalise things has taught me stuff about what mathematics is.

What is the Pythagorean theorem?

The Pythagorean theorem, a.k.a. Pythagoras’ theorem, comes in two parts. Firstly there is the theorem statement, which says that in a right-angled triangle (like the dark blue one below), the square of the hypotenuse equals the sum of the squares of the other two sides. And then there is the proof, which originally is either due to Pythagoras or not, depending on who you believe. Let’s start with the statement.

What does the statement actually mean?

Let’s take a look at the picture of the squares on the hypotenuse and the other two sides.

Some squares

The dark blue triangle is right-angled. The claim is that the square C is equal to the sum of the squares A and B. On the face of it, this is nonsense. If you take squares A and B, you have a picture containing two squares; but square C is just one square. How can one square equal two squares? But of course the claim is not that the pictures are equal, the claim is that the areas are equal.

But what is area? To find out, let’s go back to Euclid.

Euclid’s take on the theorem

Euclid’s Elements contains a proof of the Pythagorean theorem, right at the end of book 1. The proof involves drawing some triangles and arguing that various things are “equal”. This approach is valid because Euclid has explicitly stated as his Common Notion 1 that equality, whatever it is, is transitive.

One can chase this concept of equality back to Proposition 35, which claims that two parallelograms with the same base and the same height are “equal”. In fact this seems to be the first time that the word “equal” is used to mean “having equal area” in the Elements. Halving the parallelograms we deduce the more familiar Proposition 37, that two triangles with the same base and the same height are also “equal”. So what goes into the proof of Proposition 35, that two parallelograms with the same base and height are “equal” in the sense of having equal area?

The key ideas in the proof are Euclid’s second and third common notions: that “equals added to equals are equal”, and “equals subtracted from equals are equal”. In high-level terms, these common notions imply that equality is not just an equivalence relation, but a congruence relation. But let’s see how Euclid uses these notions in his proofs.

Equals added to equals are equal.

The two orange regions have equal areas, because they are both “equals added to equals”: the small triangles and the big triangles are both congruent.

Equals subtracted from equals are equal

Here, the two larger triangles are congruent, so the two orange areas are equal, because they are equals (the dark blue triangle) subtracted from equals (the larger triangles). For Euclid, the equality of the areas of the two orange regions in these examples is axiomatic. Take a look at the proof of Proposition 35 to see how these facts are used to prove that two parallelograms with the same base and height are “equal”.

Area in Euclid book 1

So, what does Euclid mean by the “area” of a shape? Well, this is the funny thing — he never says, throughout book 1! He only says what it means for two shapes to have “equal area”!

This is exactly what an equivalence relation is. An equivalence relation on a type is a concept of equality on terms of that type. It can be thought of as focussing on a particular attribute of the terms you are considering (for example the area of a shape, or the value of an integer modulo 10) and saying that two terms are equivalent if the values of those attributes are equal. Euclid is putting an equivalence relation on shapes. His definition of the relation involves cutting and pasting in geometry, and at the end of the day the proof of the Pythagorean theorem in Euclid is essentially a jigsaw puzzle. Here is an even simpler jigsaw puzzle proof:

A proof of the Pythagorean theorem
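The “value of an integer modulo 10” example of an equivalence relation mentioned above can be written down concretely in Lean 3. This is just a sketch: `same_mod_ten` is a name invented here, not a mathlib definition.

```lean
import data.int.basic

-- Two integers are related iff they agree modulo 10.
def same_mod_ten (a b : ℤ) : Prop := a % 10 = b % 10

-- Just like "has the same area as", this is an equivalence relation:
example : equivalence same_mod_ten :=
⟨λ a, rfl,                        -- reflexive
 λ a b h, h.symm,                 -- symmetric
 λ a b c hab hbc, hab.trans hbc⟩  -- transitive
```

The relation picks out one attribute of an integer (its residue mod 10) and declares two integers equivalent when that attribute agrees, exactly as Euclid’s “equal” picks out the area of a shape.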

Euclid and type theory

When Euclid did mathematics, he was doing type theory. For Euclid, points were terms of the Point type, lines were terms of the Line type, and planes were terms of the Plane type. Euclid wrote down the axioms his types satisfied (for example, there was a unique line between any two distinct points) and proceeded to work from there. He had a definition of a 2-dimensional shape, and, assuming that a plane exists, his shapes exist too. He defined an equivalence relation on 2D shapes, and proved that the 2D shape corresponding to the square on the hypotenuse was related to the 2D shape corresponding to the union of the squares on the other two sides, using properties of this relation which he had earlier axiomatised.

The proof of Pythagoras’ theorem in Euclid is what is known as a synthetic proof. We assume that a Euclidean Plane exists and satisfies a list of axioms, which Euclid attempted to write down and which most of us never even contemplate. We then formulate the theorem, and prove it using the axioms.

Numbers from geometry?

Note that Euclid is in some kind of a position to define real numbers at this point, or at least the positive real numbers. For example, Euclid knows what it means for two line segments to have equal length — it means that you can translate and rotate one segment until it coincides with the other. He could hence define the positive reals to be equivalence classes of line segments, under the equivalence relation of being the same length. However one runs into problems when it comes to completeness, something Euclid’s axioms were not really designed for.

Geometry from numbers: Enter Descartes.

Descartes suggested doing things the other way around, using numbers to do geometry rather than using geometry to define numbers. Descartes observed that one could label a point in the plane with an x and y coordinate. This changed everything. All of a sudden “the plane” (a term whose existence is never talked about in Euclid) becomes modelled by \mathbb{R}^2. Euclid’s definitions, common notions, and axioms now need to be revisited. We need to check that this more refined model satisfies the rules of Euclid’s game (a bit like checking that Einstein’s theory turns into Newton’s in the limit).

We model a point as an ordered pair of real numbers, and we can define lines as solutions to linear equations, because the reals are a field and so we have that language available. We can prove the parallel postulate, no problem. The theory of integration gives us a way to measure lines (length), angles (measure), curves (length) and 2-dimensional shapes (area), using the natural (Euclidean) Riemannian metric on the plane. We can now completely rephrase Pythagoras’ theorem: it is now an equality of numbers. We can re-interpret the “jigsaw puzzle” proof in Euclid as a consequence of finite additivity of Lebesgue measure on the plane. We can also give a completely new proof, using the theorem that the distance from (a,b) to (c,d) is \sqrt{(a-c)^2+(b-d)^2}, as one can check using a line integral (modulo the theorem that the shortest distance between two points is a straight line, which needs proving in this context).
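Concretely, placing the right angle at the origin with the legs along the coordinate axes, the theorem becomes the following equality of real numbers (a sketch of the distance-formula argument just described):

```latex
% Vertices of the right-angled triangle: O=(0,0), P=(a,0), Q=(0,b).
% The legs have lengths a and b; the distance formula gives the
% length of the hypotenuse:
\[
  \operatorname{dist}(P,Q) = \sqrt{(a-0)^2 + (0-b)^2} = \sqrt{a^2 + b^2},
\]
% so the area of the square on the hypotenuse is
\[
  \operatorname{dist}(P,Q)^2 = a^2 + b^2,
\]
% which is the sum of the areas of the squares on the other two sides.
```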

I saw measure theory developed as an undergraduate, and probably a few years ago I would have argued that this is now the “proper” proof — but now I realise that this proof still has some synthetic element to it: namely, the real numbers. We have a non-synthetic plane, but it is made from synthetic numbers.

What are the real numbers?

I was told as an undergraduate that it was an axiom that the reals existed, and that they were a complete ordered field. All of the analysis I learnt as an undergraduate was built upon this assumption, or rather, this structure (addition, multiplication, inequality) and these axioms (associativity and commutativity of addition and multiplication etc) on this type (the reals). In some sense it is no different to Euclid, who also had types (e.g. points), structures (e.g. triangles) and axioms (e.g. the common notions, or the parallel postulate), but who was modelling the Pythagorean theorem in a different, arguably more primitive, way.

Enter the analysts, bearing sequences

Descartes solved the problem of how to represent points in a plane with real numbers, but for Descartes, the reals were a type. Many years later, Cauchy and Dedekind gave two ways to represent the reals using simpler objects. Indeed, Cauchy sequences and Dedekind cuts are (different) ways of building the reals from the rationals. Similarly, the rationals can be built from the integers, the integers from the naturals, and the naturals are…well, they’re just some synthetic thing satisfying Peano’s axioms, right? At this point one could argue that Pythagoras’ theorem has become a statement about sequences of pairs of natural numbers (or however else one is modelling positive rationals), and the natural numbers have no definition — they are synthetic. But we can go further.

Enter the logicians, bearing sets.

One thing that ZFC set theory (the usual set-theoretic foundations of 20th century mathematics) has going for it, is that it gives a very unambiguous answer to the question: “Why are the natural numbers a set?”. The answer is “It’s one of the axioms”. One thing against it is that in ZFC set theory, everything is a set, even stuff which you don’t want to be a set. For example, real numbers (like the area of the square on the hypotenuse) are now sets, and the Pythagorean Theorem is now a theorem about the equality of two sets, although we don’t know exactly what the sets are, because we can never be sure whether the real numbers in the Platonic universe (or whichever universe we’re operating in) use Cauchy sequences or Dedekind cuts. [Pro tip: if your universe offers good quotienting facilities, use Cauchy sequences: they’re cleaner.]

The fact that we don’t know whether the reals being used in Pythagoras’ theorem are Cauchy sequences or Dedekind cuts is an indication that we have unfolded things too far, as far as Pythagoras’ theorem goes. Most mathematicians regard the real numbers as a type. A real number is not a set — Gauss or Riemann could certainly have informed you of that.

It is interesting that we can keep unfolding this way — but we can never get to the bottom of things. We can’t define everything, we always have to start somewhere — a synthetic beginning. Euclid started with points, lines and planes. Descartes started with the reals. Cauchy and Dedekind observed that you could start with the naturals. Set theorists start with sets. There is no definition of a set in ZFC set theory — a set is just a term in a model of ZFC set theory. The model can be thought of as the type, and its elements (or whatever you want to call them — they are not elements of a set in the internal logic of the model) as the terms. The behaviour of sets is developed using the axioms.

Pythagoras’ theorem: refinements of equality

So what is Pythagoras’ theorem? Is it that two shapes are “equal”, two numbers are equal, or two sets are equal? In some sense, it’s all of these things. In some sense this story reminds me of chemistry at school. The joke in the chemistry class was that they were always telling you lies — models of atoms which were very naive, and then more careful models which were more accurate, culminating (for me) with an undergraduate quantum mechanics course which told me that an electron was actually some kind of a complex-valued function. It feels very similar here. Euclid had a perfectly adequate notion of geometry, he axiomatised what he needed and argued from there. Later on we found a way to model the plane from more fundamental objects such as the real numbers. After that we found a way of modelling the real numbers using the natural numbers, and in some sense this is where we stop; using either type theory or set theory, the natural numbers have no definition — their existence is asserted, and we use them via their API (addition, multiplication, the principle of induction). Maybe someone will come along in future with a theory of mathematical quarks, which can be used together with the quark axioms to build the natural numbers in a natural way, and then our understanding of what Pythagoras’ theorem is might change again.


Two types of universe for two types of mathematician

Thank you Johan for pointing out to me that the mathlib stats page had got really good! But one thing that made me laugh is that on their stats for commits, I see I have done just enough to get a mention! I was quite surprised. Let’s look at those mathlib committers, most of them far more prolific than I am. Who are they? One surprising thing about them: although we have a common goal, of formalising mathematics in Lean, we are a really mixed bag of people, some of whom you might expect to have no research interests in common with me at all.

Yury Kudryashov is a post-doc in the maths department at Toronto. I’ve never met him, because of visa issues. The dark blue line shows his meteoric rise.

Scott Morrison does TQFTs at ANU and was one of the founders of MathOverflow.

Mario Carneiro is a PhD student of Jeremy Avigad, in the philosophy department at CMU. He taught me Lean.

Johannes Hoelzl is a computer scientist who works for Apple on formally verifying their products. He wrote the filter library for mathlib, which was essential for Patrick Massot’s work on topology, in turn crucial for the construction of perfectoid spaces in Lean.

Chris Hughes is an undergraduate at Imperial College London. He’s going to do an MSc project with me next year, on a one-relator group tactic. Chris has taught me so much about what mathematics really is. Chris learnt Lean when he was a 1st year undergraduate and began to interpret all of his lectures from a type theoretic point of view, which in turn has led to what I think is an extraordinary understanding of “what is actually going on” in a pure mathematics undergraduate degree (in particular exactly how informal some of it is).

Rob Lewis has a PhD in logic and is now a post-doc in the computer science department at VU Amsterdam. He wrote linarith (“Loves the jobs you hate”). I ran into serious technical problems with tactics malfunctioning when making the natural number game, and Rob wrote the code which enabled me to hack tactic mode and fix stuff, saving the project. He set up the mathlib documentation project. This is the community’s effort to explain Lean’s interface to mathematicians. Before his academic career, he was a teacher.

Sébastien Gouëzel is a professor of mathematics in Nantes, who won the Brin Prize in 2019. He was the driving force behind manifolds in Lean. We now have C^n and C^\infty manifolds, over general complete fields such as the complexes, reals or p-adics, and in the real case you can also have corners. It was harder than you might initially think.

Johan Commelin is a post-doc in arithmetic geometry in Freiburg, working with Annette Huber. He has a wife and three small kids, and last week uploaded not one but two papers to arXiv about o-minimality. He developed the API for a new Lean type called a “group with zero”, for example a field or a division ring but forget about the addition. Talking to him about stuff like this made me understand that the definition of a UFD never uses addition, and is hence a special case of a more general definition. History sometimes needs rewriting, but it’s OK — we hide the details from you. A UFD is a monoid with zero, by the way (such that the underlying monoid is a product of a group and a free abelian monoid — that’s the full definition in fact, although there are two ways of interpreting “is”).

Simon Hudon is a computer scientist and an excellent functional programmer. His definition of a monad is different to mine, and he knows what all the weird >-> symbols mean. He wrote a bunch of stuff in core Lean and was around since the start. Mostly meta.

Patrick Massot has translated most of Bourbaki’s Topologie générale into Lean, and can now confirm that they were (almost always) right. He’s a topologist in Orsay (except now we say Saclay) and teaches his undergraduates using Lean. You should try his introductory analysis problem sheets and other stuff in the Lean Tutorials repo. If you have Lean installed the modern way, you can just type leanproject get tutorials and then open the project in VS Code. Patrick also wrote the leanproject tool, and it has solved the constant problems beginners on Windows machines were having when trying to get a fully compiled mathlib running locally without waiting an hour or more for it to compile.

Reid Barton won four gold medals at the IMO.

Kenny Lau is an undergraduate at Imperial College, whose first year project on formalising the statements of local class field theory for abelian algebraic groups in Lean unsurprisingly won the “best pure maths project of the year” prize. I needed localisation of rings when doing schemes and he bashed out the entire theory like it was easy.

Gabriel Ebner is our bald-headed fixer. If the mathlib people can’t get something to work and they blame Lean, he sometimes has words with it directly. He has driven Lean from 3.4.2 to 3.17.1 in the space of, what — six months? It’s better and it’s faster. To anyone still on Lean 3.4.2 — switch to the Lean Prover community version of Lean. It’s so much easier now. Anyone searching for Lean — beware of Microsoft’s old pages. Search for Leanprover community or mathlib.

And then me at the bottom. A professor of maths with a general malaise about the state of number theory, who three years ago this week tried Lean for the first time and got hooked.

We’re a really motley crew, talking to each other about different ways of thinking about common areas of interest. There are many other people who have committed to mathlib too — e.g. people I’ve met on Discord whose names I don’t even know, but who got interested in seeing if they could formalise a random thing in Lean, and it has turned out that the answer is “yes, and in doing so you can make mathlib better”. People like Amelia Livingston, another Imperial undergrad, who felt that Kenny Lau’s theory of localisation of rings should be generalised to localisation of monoids and rewrote the whole lot. She was right. People like Shing Tak Lam, who was practising for his STEP exams (hard UK school maths exams for 18 year olds) by formalising the questions, and whose project to formalise a STEP question about polynomials over the reals ended up with him developing the entire mathlib theory of partial derivatives for multivariable polynomial rings. At some point he asked what a ring R was; I said “just pretend it says the real numbers \mathbb{R}”.

The two cultures of mathematics

Together we are investigating the boundary between the specification of mathematics and the implementation of mathematics, for both definitions and proofs. I learnt from Sébastien at LFTCM2020 that we can now prove that a smooth morphism of smooth manifolds induces a morphism on tangent spaces in a manner functorial in the manifold, but it took a bunch of people to turn the theorem statement from maths into code, and then a bunch of people to translate the proof from maths into code (and everyone was standing on the shoulders of giants, in some sense).

There are two universes involved in what we are making. There’s the part that’s in Type (where the creative ideas such as perfectoid spaces and the Cauchy reals are digitised), and the part that’s in Prop (within the begin/end blocks, where the computer games are). In Tim Gowers’ essay on the two cultures of mathematics, he talks about the concepts of theory-building and problem-solving. I have always believed that there was something very true at the core of this, but now I am beginning to understand it much better. Type is where the theory-building is going on, and Prop is where the problem-solving is occurring.

But these two parts of mathematics are inextricably linked. Patrick, Johan and I formalised the definition of a perfectoid space, but on the way there we had to prove a whole load of theorems — even before we could write down the definition. There are two topologies on an affinoid perfectoid space: one generated by basic open subsets, and one generated by rational open subsets. In the future, when defining sheaves on a perfectoid space, sometimes we will use one basis, and sometimes the other. The proof that they are the same is 20 pages of very tricky algebra involving valuations on topological rings, and we have not done it yet — we just picked one of the definitions when defining a perfectoid space and moved on. At some point in the development we will need to use the other definition, and then we’ll have to prove the theorem. But in fact this sort of thing has already happened to us many, many times before, and then we did have to prove the theorems. Sometimes things get tough. Mathematicians are so good at instantly switching between the various “obviously equivalent” ways of looking at a complicated algebraic object (“It’s an equivalence relation! Now it’s a partition! Now it’s an equivalence relation again! Let your mental model jump freely to the point of view which makes what I’m saying in this particular paragraph obvious!”, or “Matrices are obviously associative under multiplication because functions are associative under composition” — the dutiful student realises later that this proof assumes the action of matrices on \mathbb{R}^n is faithful, and did you see that dependent type there?). Some of the proofs we’re writing are simply proofs that humans are behaving correctly when using mathematical objects.

But writing some proofs in mathlib is just fun. In fact there are now a growing number of proofs in Lean written by mathematicians who are coming to mathlib and finding that the interface to the thing they wanted (e.g. a topological space, or an equivalence relation, or a ring) is there and usable. It is hence possible for them to state and prove (or reprove, if mathlib did it already) the results they are interested in, using Lean’s tactic framework. We don’t know how far this can go, and we don’t know whether type-theoretic issues will cause problems further down the line (e.g. with étale cohomology), but at this rate, it looks like we’re going to find out.

Currently on display at Royal Academy of Art (proud Dad :-))


Lean for the Curious Mathematician 2020

I have just spent an exhilarating week in the company of a whole bunch of mathematicians, a fair few of whom are serious professors from prestigious universities, all of us learning how to use Lean to do mathematics ranging from basic logic to category theory and differential geometry.

Why? Because I’ve been at Lean for the Curious Mathematician 2020, an online meeting organised by Johan Commelin and Patrick Massot. It was like no other conference I’ve ever been to. On a typical day there were three “sessions”. A session was typically two hours long, and consisted of a mathematician who knows how to use Lean giving a 15 to 30 minute Zoom talk introducing a part of Lean’s maths library mathlib, and then the audience splitting up into breakout rooms of between 5 and 10 people, with one expert per room, and working on the exercises which the speaker had prepared. People could ask questions to each other or to the experts whenever they were stuck. A few years ago I would never have guessed that in 2020 I would have a Cambridge professor of number theory asking me for help in proving that the standard basis of \mathbb{R}^n was a basis — but when you have just started learning how to do mathematics in a new way, these are natural questions to ask. The answer is not hard — but you have to learn how to do it.

What is even better is that everything was recorded, so if you missed LftCM 2020, you can still join in. All the talks are up on a LftCM 2020 playlist on the Leanprover community YouTube site, and all of the exercises are available at the LftCM 2020 GitHub repository. If you have installed leanproject by following the instructions on the Leanprover Community website then all you have to do is to type

leanproject get lftcm2020

and then, using VS Code, open the lftcm2020 directory which just appeared on your computer. You’ll see all the exercises, and all the solutions. Furthermore, you have easy access to the very same experts who wrote the exercises, because they all hang out on the Lean Zulip chat, and the #new members stream is dedicated to beginner questions. I will be live streaming my way through some of the exercises over the next few weeks on the Xena Project Discord server; I monitor the chat on most days, but aim to spend every Tuesday and Thursday on the server throughout July and August. If you’re an undergraduate mathematician, even if you’re a complete Lean beginner, you’re very welcome to join us. A good time to join us is Thursday evening UK time; there are often a whole bunch of us there then.

What did we learn?

One thing we learnt was that technology like Zoom works very well for a workshop of this nature. For a beginner, Lean’s error messages are very intimidating. But if a beginner shares their screen on Zoom so that an expert can read the error, then the expert often immediately knows what is wrong.

Something else we learnt was that a very good way to teach mathematicians how to use Lean is to give them a whole ton of exercises involving doing mathematics which they already understand conceptually, and asking them to attempt them in Lean. In some sense a large source of exercises corresponding to interesting mathematics is something which, before now, had been lacking in the Lean ecosystem. There is the very wonderful Theorem Proving In Lean, the book I read when learning Lean, but it does not talk about Lean’s mathematics library at all and focuses more on things like Lean’s underlying type system. I have always thought of it as more of a book for computer scientists. Jeremy Avigad, one of the authors of Theorem Proving In Lean, is currently working with Patrick Massot, Rob Lewis and myself on a new book, Mathematics In Lean (a book which can be read and run entirely within VS Code; read the instructions in chapter 1 on how to interact with it in this way). But this book is not yet finished. The book does not come with videos — Jeremy is writing words to explain how things work. The LftCM GitHub repository goes right to the point: every file, after a brief introduction, goes straight onto exercises, the vast majority of which can be solved in tactic mode, so it is a very natural continuation for anyone who has played the natural number game. If you don’t understand something in the file, you can watch the corresponding video and see if this helps. I will be very interested in seeing whether mathematicians find the repository a useful asset and I am almost certain that they will.

We learnt that Scott Morrison’s tireless efforts to teach both Lean and the Leanprover community category theory have now gone as far as giving us a whole host of accessible exercises, together with hints!

We learnt that smart people can pick up Lean very quickly. It was very interesting watching Sophie Morel learning to use the system, and seeing her assimilate more and more concepts as the week went on; I was especially interested in watching her learn a new trick to prove a theorem and then, instead of immediately moving on, play around with her proof to understand the trick better. Although it sounds a bit ridiculous to compare a mathematician of her calibre to Kenny Lau (an Imperial undergraduate), watching her learning Lean this week reminded me very much of watching Kenny learning Lean back in 2017, when he was just a first year UG and within a few weeks of starting using Lean was formalising the theory of localisation of commutative rings. Kenny has gone on to contribute just under 20,000 lines of code to mathlib, including a lot of MSc level commutative ring theory. Sophie, many thanks for coming, and I hope you like what you saw. And Kenny, and Chris and Amelia, it was great to chat to you all this week and I am very much looking forward to working with all of you this forthcoming academic year on your Lean MSci projects.

One important thing, for me at least, that came out of the week, was that we got to see exactly the kind of problems which beginners run into. Lean is very much under development. New versions of the community version of Lean come out every few weeks, and the maths library is updated several times a day. There are many open issues on the Lean and mathlib GitHub repositories, which are opened, dealt with, and closed. But as an experienced user it was interesting to see which ones were tripping people up. We know library_search sometimes fails because of the apply bug, and we know how to work around this. But this would completely throw beginners off. We know that nat.pow is not definitionally equal to monoid.pow and we know how to work around this. I would say that it is regarded as a low priority issue. But this week we saw people being totally confused by it. It is a reminder that we can still make Lean 3 a better system. I am not convinced by the “let’s just wait until Lean 4” argument. Johannes Hoelzl, a very experienced formaliser, told me that in his opinion porting mathlib from Lean 3 to Lean 4 whilst simultaneously refactoring it would be a bad idea. Despite the fact that I want to set up a theory of Picard groups, Ext and Tor, I should remember that the basics are still not 100% right and I can play my part in trying to make them better (beginning with that big subgroup refactoring).

I personally learnt that mathematicians do like Lean games. A couple of people asked me if it would be possible to make a filter game! I think this idea has definite potential! Until then, there’s always the max minigame, a game you can play in your browser like the natural number game and something which will eventually become part of the real number game.

I have learnt a lot this week. Watching other people using Lean is something I find very instructive, both as an educator and as a Lean user. Thank you to all who attended, and to those who led sessions and wrote example sheets, and I very much hope to see many of you again at LftCM 2021.

Sketchy Dude

Division by zero in type theory: a FAQ

Hey! I heard that Lean thinks 1/0 = 0. Is that true?

Yes. So do Coq and Agda and many other theorem provers.

Doesn’t that lead to contradictions?

No. It just means that Lean’s / symbol doesn’t mean mathematical division. Let \mathbb{R} denote the real numbers. Let’s define a function f:\mathbb{R}^2\to\mathbb{R} by f(x,y)=x/y if y\not=0 and f(x,0)=0. Does making that definition give us a contradiction in mathematics? No, of course not! It’s just a definition. Lean uses the symbol / to mean f, as do Coq, Agda, etc. Lean calls it real.div by the way, not f.
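One can check this in Lean 3 with mathlib; `div_zero` is the library lemma packaging the second clause of the definition of f. A sketch, assuming a current mathlib:

```lean
import data.real.basic

-- Lean's `/` on ℝ is the total function `real.div` described above:
-- it agrees with mathematical division away from zero, and returns 0
-- when the denominator is 0.
example : (1 : ℝ) / 0 = 0 := div_zero 1
example (x : ℝ) : x / 0 = 0 := div_zero x
```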

But doesn’t that lead to confusion?

It certainly seems to lead to confusion on Twitter. But it doesn’t lead to confusion when doing mathematics in a theorem prover. Mathematicians don’t divide by 0 and hence in practice they never notice the difference between real.div and mathematical division (for which 1/0 is undefined). Indeed, if a mathematician is asking what Lean thinks 1/0 is, one might ask the mathematician why they are even asking, because as we all know, dividing by 0 is not allowed in mathematics, and hence this cannot be relevant to their work. In fact knowing real.div is the same as knowing mathematical division; any theorem about one translates into a theorem about the other, so having real.div is equivalent to having mathematical division.
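For example, the usual facts about division survive unchanged; the hypothesis that the denominator is nonzero simply moves into the statement of each theorem. A sketch in Lean 3, assuming mathlib’s `div_mul_cancel` has this signature:

```lean
import data.real.basic

-- With the side condition b ≠ 0 carried as a hypothesis,
-- `real.div` behaves exactly like mathematical division.
example (a b : ℝ) (hb : b ≠ 0) : a / b * b = a :=
div_mul_cancel a hb
```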

This convention is stupid though!

It gets worse. There’s a subtraction nat.sub defined on the natural numbers \{0,1,2,\ldots\}, with notation x - y, and it eats two natural numbers and spits out another natural number. If x and y are terms of type ℕ and x < y, then x - y will be 0. There’s a function called real.sqrt which takes as input a real number, and outputs a real number. If you give it 2, it outputs \sqrt{2}. I don’t know what happens if you give it the input -1, beyond the fact that it is guaranteed to output a real number. Maybe it’s 0. Maybe it’s 1. Maybe it’s 37. I don’t care. I am a mathematician, and if I want to take the square root of a negative real number, I won’t use real.sqrt because I don’t want an answer in the reals, and the type of real.sqrt is ℝ → ℝ.
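The truncated behaviour of nat.sub is easy to see directly (Lean 3 syntax):

```lean
-- subtraction on ℕ is truncated: answers that would be negative become 0
#eval 5 - 7   -- outputs 0
#eval 7 - 5   -- outputs 2

example : 5 - 7 = 0 := rfl  -- and this is true by definition
```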

Why can’t you just do it the sensible way like mathematicians do?

Great question! I tried this in 2017! Turns out it’s really inconvenient in a theorem prover!

Here’s how I learnt Lean. I came at it as a “normal mathematician”, who was thinking about integrating Lean into their undergraduate introduction to proof course. I had no prior experience with theorem provers, and no formal background in programming. As a feasibility study, I tried to use Lean to do all the problem sheets which I was planning on giving the undergraduates. This was back in 2017 when Lean’s maths library was much smaller, and real.sqrt did not yet exist. However the basic theory of sups and infs had been formalised, so I defined real.sqrt x, for x non-negative, to be Sup\{y\in\mathbb{R}\,|\,y^2\leq x\}, and proved the basic theorems that one would want in an interface for a square root function, such as \sqrt{ab}=\sqrt{a}\sqrt{b} and \sqrt{a^2}=a and \sqrt{a^2b}=a\sqrt{b} and so on (here a,b are non-negative reals, the only reals which my function would accept).

I then set out to prove \sqrt{2}+\sqrt{3}<\sqrt{10}, a question on a problem sheet from my course. The students are told not to use a calculator, and asked to find a proof which only uses algebraic manipulations, i.e. the interface for real.sqrt. Of course, the way I had set things up, every time I used the \sqrt{\phantom{2}} symbol I had to supply a proof that what I was taking the square root of was non-negative. Every time the symbol occurred in my proof. Even if I had proved 2 > 0 on the previous line, I had to prove it again on this line, because this line also had a \sqrt{2} in. Of course the proof is just by norm_num, but that was 10 characters which I soon got sick of typing.

I then moaned about this on the Lean chat, was mocked for my silly mathematician conventions, and shown the idiomatic Lean way to do it. The idiomatic way to do it is to allow garbage inputs like negative numbers into your square root function, and return garbage outputs. It is in the theorems where one puts the non-negativity hypotheses. For example, the statement of the theorem that \sqrt{ab}=\sqrt{a}\sqrt{b} has the hypotheses that a,b\geq 0. Note that it does not also have the hypothesis that ab\geq0, as one can deduce this within the proof and not bother the user with it. This is in contrast to the mathematicians’ approach, where the proof that ab\geq0 would also need to be supplied because it is in some sense part of the \sqrt{\phantom{2}} notation.
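In Lean 3 mathlib the lemma looks something like the sketch below (the lemma name `real.sqrt_mul` is mathlib's; the import path may vary between mathlib versions). Amusingly, mathlib asks for even less than described above: only the first factor needs to be non-negative, since both sides are 0 otherwise.

```lean
-- sketch against Lean 3 mathlib's API for `real.sqrt`
import data.real.sqrt

example (a b : ℝ) (ha : 0 ≤ a) (hb : 0 ≤ b) :
  real.sqrt (a * b) = real.sqrt a * real.sqrt b :=
real.sqrt_mul ha b  -- note: mathlib only uses `ha` here
```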

So you’re saying this crazy way is actually better?

No, not really. I’m saying that it is (a) mathematically equivalent to what we mathematicians currently do and (b) simply more convenient when formalising mathematics in dependent type theory.

What actually is a field anyway? For a mathematician, a field is a set F equipped with 0,1,a+b,-a,a\times b,a^{-1} where the inversion function a^{-1} is only defined for non-zero a. The non-zero elements of a field form a group, so we have axioms such as x\times x^{-1}=1 for x\not=0 (and this doesn’t even make sense for x=0). Let’s say we encountered an alien species, who had also discovered fields, but their set-up involved a function \iota :F\to F instead of our x^{-1}. Their \iota was defined, using our notation, by \iota(x)=x^{-1} for x\not=0, and \iota(0)=0. Their axioms are of course just the same as ours, for example they have x\times \iota(x)=1 for x\not=0. They have an extra axiom \iota(0)=0, but this is no big deal. It’s swings and roundabouts — they define a/b:=a\times\iota(b) and their theorem (a+b)/c=a/c+b/c doesn’t require c\not=0, whereas ours does. They are simply using slightly different notation to express the same idea. Their \iota is discontinuous. Ours is not defined everywhere. But there is a canonical isomorphism of categories between our category of fields and theirs. There is no difference mathematically between the two set-ups.

Lean uses the alien species convention. The aliens’ \iota is Lean’s field.inv, and Lean’s field.div x y is defined to be field.mul x (field.inv y).
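Both halves of the convention are visible as lemmas in mathlib; a quick sketch (lemma names `inv_zero` and `div_eq_mul_inv` are mathlib's):

```lean
import data.real.basic

-- the aliens' extra axiom ι(0) = 0
example : (0 : ℝ)⁻¹ = 0 := inv_zero

-- division is multiplication by ι
example (a b : ℝ) : a / b = a * b⁻¹ := by rw div_eq_mul_inv
```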

OK so I can see that it can be made to work. Why do I still feel a bit uncomfortable about all this?

It’s probably for the following reason. You are imagining that a computer proof checker will be checking your work, and in particular checking to see if you ever divided by zero, and if you did then you expect it to throw an error saying that your proof is invalid. What you need to internalise is that Lean is just using that function f above, defined by f(x,y)=x/y for y\not=0 and f(x,0)=0. In particular you cannot prove false things by applying f to an input of the form (x,0), because the way to get a contradiction by dividing by zero and then continuing would involve invoking theorems which are true for mathematical division but which are not true for f. For example, a mathematician might say a/a=1 is true for all a, with the implicit assumption that a\not=0, this being inferred from the notation. Lean’s theorem that real.div a a = 1 is only proved under the assumption that a\not=0, so the theorem cannot be invoked if a=0. In other words, the problem simply shows up at a different point in the argument. Lean won’t accept your proof of 1=2 which sneakily divides by 0 on line 8; the failure will still be the assertion that you have a denominator which you have not proved is nonzero, but it will occur not at the point where you do the division, but at the point where you invoke the theorem which is not true for real.div.
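For example, the nonzero hypothesis in mathlib's `div_self` is exactly where the check happens:

```lean
import data.real.basic

-- you can only invoke `a / a = 1` after proving the denominator is nonzero
example (a : ℝ) (h : a ≠ 0) : a / a = 1 := div_self h
```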

Thanks to Jim Propp and Alex Kontorovich for bringing this up on Twitter. I hope that this clarifies things.

Posted in Learning Lean, M1F, M40001, Type theory, undergrad maths

Equality, specifications, and implementations

Equality is such a dull topic of conversation to mathematicians. Equality is completely intrinsic to mathematics, it behaves very well, and if you asked a mathematician to prove that equality of real numbers is an equivalence relation then they would probably struggle to say anything of content; it’s just obviously true. Euclid included reflexivity and transitivity of equality as two of his “common notions”, and symmetry was equally clear from his language — he talks about two things being “equal to each other” rather than distinguishing between a = b and b = a.

One thing that has helped me start to understand why computer scientists make a fuss about equality is the observation that if you follow Peano’s development of the natural numbers (as I do in the natural number game) then you come to the following realisation: if you define addition by recursion in the second variable (i.e. a + 0 := a and a + succ(n) := succ(a + n)) then life temporarily becomes asymmetric. The fact that a + 0 = a is an axiom. However the fact that 0 + a = a needs to be proved by induction. Now induction is also an axiom, so a mathematician would just say that despite the fact that the proofs have different lengths, 0 + a = a and a + 0 = a are both theorems, so the fact that digging down to the foundations shows that the proofs are of a different nature is really of no relevance.

To a computer scientist however, there is a difference in these proofs. This difference seems to be of no relevance to mathematics. But the difference is that, if you set the natural numbers up in this way, then a + 0 = a is true by definition, and 0 + a = a is not. Indeed, in Lean’s type theory (and probably in many others) there are three types of equality that you have to be aware of:

  1. Propositional equality;
  2. Definitional equality;
  3. Syntactic equality.

Let’s start by getting one thing straight: to a mathematician, all of these things are just called equality. In fact, even more is true: definitional equality is not a mathematical concept. All of these kinds of equalities are easy to explain, so let me go through them now.

Propositional equality: a = b is a propositional equality if you can prove it.

Definitional equality: a = b is a definitional equality if it’s true by definition.

Syntactic equality: a = b is a syntactic equality if a and b are literally the same string of characters.

For example, let’s go back to Peano’s definition of the natural numbers and the conventions for addition above. Then a + 0 = a is a definitional equality but not a syntactic one, 0 + a = a is a propositional equality but not a definitional one, and a = a is a syntactic equality. To show that 0 + a = a is not definitionally true, you have to ask yourself what the definition of 0 + a is; and because we don’t know whether a = 0 or not, 0 + a cannot be definitionally simplified any further (the definition of x + y depends on whether y = 0 or not).
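Concretely, in Lean (where nat.add recurses on the second variable, just as above):

```lean
example (a : ℕ) : a + 0 = a := rfl            -- definitional: `rfl` closes it
example (a : ℕ) : 0 + a = a := nat.zero_add a -- propositional: needs the library's
                                              -- induction proof; `rfl` would fail
```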

[Technical note: syntactic equality does allow for renaming of bound variables, so \{a \in \mathbb{R}\, |\, a^2 = 2\} is syntactically equal to \{b \in \mathbb{R}\, |\, b^2=2\}. If you understand the idea that notation is syntactic sugar then you’ll probably know that syntactic equality can see through notation too, which means that add(a,0) = a + 0 is also a syntactic equality. But that’s it.]

Of course 2 + 2 = 4 is not a syntactic equality; removing notation and writing S for “successor”, and working under the assumption that 2 is syntax sugar for S(S(0)) and 4 for S(S(S(S(0)))), we see that the left hand side is syntactically add(S(S(0)),S(S(0))) and the right hand side is S(S(S(S(0)))). However it is a definitional equality! add(x,S(y))=S(add(x,y)) is true by definition, as is add(x,0)=x, and it’s fun to check that applying these rules a few times will reduce 2 + 2 to 4.

The reason that it’s important to understand the differences if you are writing Lean code, is that different tactics work with different kinds of equality. Lean’s refl tactic attempts to close the goal by showing that one side of an equality is definitionally equal to the other side. If your goal is X then change Y will work if and only if Y is definitionally equal to X. On the other hand Lean’s rw tactic works at the level of syntactic equality: if h : A = B then rw h will change everything syntactically equal to A in the goal, into a B. If h : a + 0 = b and your goal is a = b then rw h will fail because the equality a + 0 = a is not syntactic. However exact h will work fine, because exact works up to definitional equality.
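A sketch of the rw/exact distinction in the last two sentences:

```lean
example (a b : ℕ) (h : a + 0 = b) : a = b :=
begin
  -- `rw h` would fail here: nothing in the goal is *syntactically* `a + 0`
  exact h  -- works: `a + 0 = b` and `a = b` are definitionally equal
end
```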

Specification v implementation

The fact that a + 0 = a is a definitional equality in the natural number game, but 0 + a = a isn’t, is as far as I am concerned a proof that definitional equality is not a mathematical concept. Indeed one can clearly just define addition on the natural numbers by recursion on the first variable instead of the second, and then 0 + a = a would be definitional and a + 0 = a would not be. What is going on here was made clear to me after a conversation I had with Steve Vickers after a seminar I gave to the University of Birmingham CS theory group. Mathematicians have a specification of the natural numbers. We know what we want: we want it to satisfy induction and recursion, we want it to be a totally ordered commutative semiring (i.e. all the ring axioms other than those involving minus) and we can take it from there thank you very much. If you present me with an object which satisfies these theorems I can use it to prove quadratic reciprocity. I don’t care what the actual definition of addition is, indeed I know several definitions of addition and I can prove they’re all equivalent.

If you’re going to do mathematics in a theorem prover, you have to make one definition. Mathematicians know that all the definitions of the natural numbers are the same. If you want to set up mathematics in set theory for example, then it doesn’t matter whether you decide to let 3 = \{2\} or 3 = \{0,1,2\}: any system which ensures that 3 isn’t any of the sets you’ve already made is clearly going to work. But in a computer theorem prover you need to make choices — you need to make implementations of 3 and of add — and the moment that choice is made you now have a dichotomy: some stuff is true by definition, and some stuff needs an argument like induction and is not true by definition.

The first time I really noticed the specification / implementation difference was when it dawned on me that Lean’s maths library had to choose a definition of the reals, and it went with the Cauchy sequence definition: a real number in Lean is an equivalence class of Cauchy sequences. An alternative approach would have been to define it as Dedekind cuts. As mathematicians, we don’t care which one is used, because we are well brought up and we promise to only ever access the real numbers via its interface, or its API to borrow a computing term. The interface is the specification. We mathematicians have a list of properties which we want the real numbers to satisfy: we want it to be a complete archimedean ordered field. Furthermore we have proved a theorem that says that any two complete archimedean ordered fields are uniquely isomorphic, and this is why we do not care one jot about whether we are using Cauchy sequences or Dedekind cuts. Lean gives me access to an interface for the real numbers which knows these things, and it’s all I need to build a theory of Banach spaces. As mathematicians we somehow know this fact implicitly. If I am trying to prove a theorem about Banach spaces, and I have a real number \lambda, I never say “Ok now for the next part it’s really important that \lambda is under the hood defined to be a Dedekind cut”. If I want the Dedekind cut associated to \lambda, I can just make it. I don’t care whether it equals \lambda by definition or not, because definitional equality is not a mathematical concept. All I care about is access to the interface — I’m proving a theorem about Banach spaces here, and I just need to have access to stuff which is true. The job of Lean’s file data.real.basic is to give me access to that interface, and I can build mathematics from there.

Computer scientists on the other hand — they have to care about definitional equality, because it’s often their job to make the interface! If two things are definitionally equal then the proof they’re equal is refl, which is pretty handy. Different definitions — different implementations of the same specification — might give you a very different experience when you are making an interface for the specification. If you really have too much time on your hands this lockdown, why not go and try proving that addition on the real numbers is associative, using both the Dedekind cuts definition and the Cauchy sequences definition? For Cauchy sequences it just boils down to the fact that addition is associative on the rationals. But you’ll find that it’s a real bore with Dedekind cuts, because Dedekind cuts have this annoying property that you need a convention for the cuts corresponding to rational numbers: whether to put the rational number itself into the lower or upper cut. Neither convention gives a nice definition of addition. You can’t just add the lower cuts and the upper cuts, because the sum of two irrationals can be a rational. Multiplication is even worse, because multiplication by a negative number switches the lower and upper cut, so you have to move a boundary rational between cuts. You can see why Lean went for the Cauchy sequences definition.

I ran into this “which implementation to use for my specification” issue myself recently. I notice that I have been encouraging students at Imperial to formalise courses which I have been lecturing, which recently have been algebra courses such as Galois theory. I am by training an algebraic number theorist, and really by now I should have turned my attention to the arithmetic of number fields and their integers: as far as I can see, finiteness of the class group of a number field has been formalised in no theorem provers at all, so it is probably low-hanging fruit for anyone interested in a publication, and we surely have all the required prerequisites in Lean now. I thought that I would try and get this area moving by formalising the definitions of a Dedekind domain and a discrete valuation ring (DVR). I looked up the definition of discrete valuation ring on Wikipedia and discovered to my amusement that there are (at the time of writing) ten definitions 🙂

Now here I am trying to be a theory builder: I want to make a basic API for DVRs so that students can use them to prove results about local and global fields. So now I have to decide upon a definition, and then prove that it is equivalent to some of the other definitions — I need to make enough interface to make it easy for someone else to take over. As far as I could see though, what the actual definition of a DVR is, is of no real importance, because it doesn’t change the contents of the theorems, it only changes the way you state them. So I just chose a random one 😛 and it’s going fine!

Equality of terms, equality of types

When talking about propositional and definitional equality above, my main examples were equality between natural numbers: 0 + a = a and what have you. Set-theoretically, we can happily think about 0 + a = a as equality between two elements of a set — the set of natural numbers. In type theory we are talking about equality between two terms of type T, where T : Type. But one can take this a level up. Say A : Type and B : Type (for example say A is the Cauchy reals, and B is the Dedekind reals). What does A = B mean? These are now not elements of a set, but objects of a category. Certainly the Cauchy reals and the Dedekind reals are not going to be definitionally equal. Can we prove they are propositionally equal though? No — of course not! Because they are not actually equal! They are, however, canonically isomorphic. A fourth type of equality!

All this equality navel-gazing is important to understand if you are talking about equality of terms. This, we have nailed. For mathematicians there is one kind of equality, namely “it is a theorem that a = b“. For computer scientists there are three kinds, and understanding the distinction might be important for interface extraction.

But for equality of types, something funny is going on. Which is the most accurate? (A \otimes B) \otimes C \cong A \otimes (B \otimes C) or (A \otimes B) \otimes C = A \otimes (B \otimes C)? This is just more notational navel-gazing, who cares whether these things are isomorphic or equal – they are manifestly the same, and we can always replace one by the other in any reasonable situation because they both satisfy the same universal property. However, I am realising that as a Lean user I need to say very carefully what I mean by “a reasonable situation”, and actually the safest way to do that is to not prove any theorems at all about (A \otimes B) \otimes C other than the fact that it satisfies its universal property, and then instead prove theorems about all objects which satisfy that universal property. Mathematicians do not use this technique. They write their papers as if they are proving theorems about the concrete object (A \otimes B) \otimes C, but their proofs are sensible and hence apply to any object satisfying its universal property, thus can be translated without too much problem, once one has extracted enough API from the universal property. There is an art to making this painless. I learnt from Strickland the three key facts that one should prove about a localisation R \to R[1/S] of rings: the kernel is the elements annihilated by an element of S, the image of every element of S is invertible, and the map from R\times S to the target sending (r,s) to r/s is surjective. These three facts together are equivalent to the universal property of R[1/S] and now you can throw it away and build the rest of your localisation API on top of it. Indeed, when my former MSc student Ramon Fernández Mir formalised schemes in Lean, he needed this result from the stacks project to prove that affine schemes were schemes, but more generally for the case of rings only satisfying the same universal properties as the rings in the lemma.
At the time he needed it (about 18 months ago) the proof only used the three facts above isolated by Strickland, and so was easy to prove in the generality he required.
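Strickland's three facts can be packaged as a single predicate on a ring map. The following is a hedged sketch: the structure name and field names are hypothetical, modelled loosely on mathlib's `is_localization`, and the import path may differ between mathlib versions.

```lean
import ring_theory.localization

-- hypothetical packaging of the three facts characterising R → R[1/S]
structure localises {R A : Type*} [comm_ring R] [comm_ring A]
  (S : submonoid R) (f : R →+* A) : Prop :=
(map_units : ∀ s : S, is_unit (f ↑s))                 -- images of S are invertible
(surj : ∀ a : A, ∃ (r : R) (s : S), a * f ↑s = f r)   -- everything is some r/s
(ker : ∀ r : R, f r = 0 ↔ ∃ s : S, (↑s : R) * r = 0)  -- kernel = S-torsion
```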

However, in Milne’s book on etale cohomology, it is decreed in the Notation section that = means “canonical isomorphism”, and so there will be a lot more of this nonsense which we will have to understand properly if we want to formalise some serious arithmetic geometry in Lean.

Posted in Algebraic Geometry, General, Type theory

Teaching dependent type theory to 4 year olds via mathematics

We had a family Zoom chat today and I ended up talking to my relative Casper, who is 4 and likes maths. He asked me to give him some maths questions. I thought 5+5 was a good place to start, and we could go on from there depending on responses. He confidently informed me that 5+5 was 10, and that 5-5 was zero. For 5*5 he asked me “is that five fives?” and I said yes, and he asked me “Is that 5 lots of 5?” and I said yes, and then he began to count. “1,2,3,4,5. 6,7,8,9,10. 11,12,13,14,15. 16,17,18,19,20. 21,22,23,24,25. It’s 25!” he said. I guess he’s proved this by refl. Every natural number is succ (succ (succ (...(succ zero)...))) so to evaluate one of them, you just break it down into this canonical normal form. He did not know what division was, so I thought it was time to move on from 5.

I figured that if he didn’t know his 5 times table we needed to be careful with multiplication, in case of overflow errors, so I decided I would go lower. I next asked him 0+0, 0-0 and 0*0 (“so that’s zero lots of zero?” Yes. “So that’s zero zeros?” Yes). He got them all right, and similarly for 1+1, 1-1 and 1*1. I then thought we could try 0 and 1 together, so I asked him 0+1 to which he confidently replied 1, and 0-1, to which he equally confidently replied 0.

This answer really made me sit up. Because Lean thinks it’s zero too.

#eval 0 - 1  -- outputs 0

I am pretty sure that a couple of years ago I would have told him that he was wrong, possibly with a glint in my eye, and then tried to interest him in integers. Now I am no longer sure that he is wrong. I am more convinced that what has happened here is that Casper has internalised some form of Peano’s axioms for arithmetic. He knows \mathbb{N}. In some sense this is what I have been testing. He knows that every natural is succ succ … succ 0, and then for some reason he has to learn a song that goes with it, with a rigid rhythmical beat and written in 10/4 time: “One, two, three four, five, six, seven, eight, nine, ten. Eleven, twelve,…” and so on ad infinitum, some kind of system which is important for computation but is currently not of interest to me. Also, Casper knows that there’s something called subtraction, and when given junk input such as 0-1 he has attempted to make some sense of it as best he could, just like the functional programmers who like all their functions to be total do.

We then went onto something else. I asked him what 29+58 was and he essentially said that it was too big to calculate. I asked him if he thought that whatever it was, it was the same as 58+29, and he confidently said it was, even though he did not have a clue what the answer was. I asked him how he was so sure but I did not get a coherent answer.

I asked him what three twos were, and he counted “1, 2, 3, 4, 5, 6“. Three lots of two is 6. I then asked him what two threes were and he immediately replied 6 and I asked him if he’d done the calculation and he said no, two threes were the same as three twos. I asked him why but all he could say was that it was obvious. It’s simp. I asked him if he was thinking about a rectangle and he said no. So he knows simp and refl.

I then did quite a mean thing to him: I overflowed his multiplication buffer. I asked him what ten 3’s were. We spent ages getting there, and he needed help, because he hasn’t learnt how to sing the song in 3/4 time yet. But eventually refl proved that 10*3=30. I then asked him if he thought 3*10=10*3 and he was confident that it was. I then asked him what 3 tens were and whilst he didn’t immediately do this, he knew a song (in 4/4) which started at “one ten is 10 (rest)” and finished at “ten tens-are-a hun dred.”. Using the song, we figured out pretty quickly what three tens were, and deduced that ten 3’s were 30 again. I then asked him what ten sevens were, and we talked about how difficult it would be to sing the song in 7/4 and how long the song would be, and then we talked together through the argument that it was also “obviously” seven tens, so by the song it was seventy; apparently the fact that seventy sounds similar to 7 (even though thirty doesn’t sound that much like 3) is evidence that the patterns in the counting song can be helpful to humans who want to calculate.

I then remembered the junk subtraction answer, so I thought I’d try him on another function which returns junk in Lean, namely pred, the “number before” function on the naturals. I asked him what the number before 3 was, and he said 2. He was also confident that the number before 2 was 1, and the number before 1 was 0. I then asked him what the number before 0 was and he thought a bit and then said….”1?”.

pred is defined by recursion in Lean. We have pred (succ a) = a, and pred 0 is defined to be 0, because if you follow Lean’s “make all functions total, it’s easier for programmers” convention then it has to be something. But the choice of the actual value of pred 0 is probably not seriously used in Lean, and they could have defined pred 0 to be 1 or 37 and probably not much would break, and what did break would probably be easily fixed. Because of a whining linter I recently needed to supply a junk value to Lean (it wanted me to prove that the quotient of a ring by an ideal was not the empty set) and I supplied 37 as the witness. pred is a fundamental function defined by recursion, as far as computer scientists are concerned. So why is it not even mentioned in the natural number game? Because you can’t really define functions by induction, you define them by recursion, and I don’t want to start getting involved in new recursive definitions, because this sort of thing cannot go within a begin end block. I could have decided to explain pred but I would have had to address the issue of pred 0 not being an error, and I realised that actually there was always a “mathematical workaround”. By this I mean the following. The natural proof of succ_inj (the assertion that the successor function is injective) uses pred, and that proof is easy. But I tell people in the natural number game that succ_inj is an axiom, as is zero_ne_succ, and now we mathematicians seem to be able to develop a huge amount of theory without ever even defining subtraction. We don’t really want to use Lean’s natural number subtraction: 0 - 1 is a junk value, equal to 0 by definition, so unfortunately 0 - 0 = 0 - 1 in Lean, which is not right.
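Lean 3's pred really is defined by that recursion, and the junk value at 0 computes:

```lean
-- `pred (succ a) = a` holds by definition; `pred 0` is defined to be 0
example (a : ℕ) : nat.pred (nat.succ a) = a := rfl
example : nat.pred 0 = 0 := rfl
#eval nat.pred 0   -- outputs 0
```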

We then talked about the number line a bit, and I then told him about a level in the 2d Mario franchise where the exit is way off to the right but right at the beginning you can completely counterintuitively go off to the left and pick up some bonus stuff. I then told him that there were some other kinds of numbers which he hadn’t learnt about yet, and I said they were called the integers (note: not “the negative numbers”). I said that the integers included all the numbers he knew already, and some new ones. I told him that in the integers, the number before 0 was -1. I then asked him what he thought the number before -1 would be and he said “-2?” And I told him he was right and asked him what the number before -2 was and he confidently said -3, and we did this for a bit and chatted about negative numbers in general, and then I asked him what the number before -9 was and he said -10. I then asked him what the number before -99 was and he said -100 and then I asked him what the number before -999 was and he said he didn’t know that one and did I want to play Monopoly. I took this as a sign that it was time to stop, and we agreed to play a game of monopoly via this online video chat, and then it turned out that he didn’t know any of the rules of monopoly (he’d just found the box on the floor) and he couldn’t read either, so he just put the figures on random places on the board and we talked about the figures, and what we could see on the board, and what was in the monopoly box, and the maths lesson was over.

I was pretty appalled when I saw Lean’s definition of int, a.k.a. \mathbb{Z}. It is so ugly. There are two constructors, one of_nat n which takes a natural number n and returns the integer n (an invisible function, set up to be a coercion), and one with the really weird name neg_succ_of_nat n which takes in a natural n and returns the integer -1-n, complete with hideous notation. This definition somehow arrogantly assumes some special symmetry of the integers about the point -0.5. The mathematician’s definition is that it’s a quotient of \mathbb{N}^2 by the equivalence relation (a,b)\sim(c,d)\iff a+d=b+c, a.k.a. the localisation of the additive monoid of naturals at itself (i.e. adding additive inverses to every element). This definition is a thing of beauty. The symmetry is never broken. It is the canonical definition. But even though this definition as a quotient is the most beautiful definition, classifying as it does the integers up to unique isomorphism because of the universal property of localisation (thanks so much Amelia Livingston), implementation decisions are system specific and int in Lean is the concrete inductive type with two constructors and there’s nothing we can do about this. So mathematically I am kind of embarrassed to say that today I basically taught Lean’s definition of int to a kid, just as an experiment.
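For reference, here is Lean 3 core's two-constructor definition, shown as a display copy with comments added and the notation omitted:

```lean
-- Lean 3 core's definition of the integers
inductive int : Type
| of_nat : ℕ → int          -- `of_nat n` is the integer n (set up as a coercion)
| neg_succ_of_nat : ℕ → int -- `neg_succ_of_nat n` is the integer -1 - n
```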

Another weird thing is that Lean has a really ugly proof of a+b=b+a in int but for some reason this is going to be “obvious” when negative numbers are taught to him in school, and will not need a proof. I guess ac_refl will do it.

When Casper had to go, I told him before he left to ask his parents what the number before -9 was. I think it’s a pretty tricky question if you’re caught off-guard. I told him that negative numbers were like mirror world. Is there educational research about how children model numbers? What different ideas do they have before they even start to learn their multiplication tables? Should we teach them induction now, before they are crushed by having to learn so many long and boring songs telling them things like six sevens are 42, because of the current belief in the education system that memorising this is somehow important in the year 2020, when most teenage kids have a calculator app in their pocket?

Casper loves my daughter Kezia’s art. He says “everything is alive except the rainbow”.

Everything is alive, except the rainbow.
Posted in computability, Learning Lean, number theory, Type theory

Mathematics in type theory.

What is maths? I think it can basically be classified into four types of thing. There are definitions, true/false statements, proofs, and ideas.

Definitions (for example the real numbers, or pi) and true/false statements (for example the statement of Fermat’s Last Theorem or the statement of the Riemann Hypothesis) are part of the science of mathematics: these are black and white things which have a completely rigorous meaning within some foundational system.

Proofs are in some sense the currency of mathematics: proofs win prizes. Constructing them is an art, checking them is a science. This explains, very simply, why computer proof verification systems such as Lean, Coq, Isabelle/HOL, Agda… are much better at checking proofs than constructing them.

And ideas are the purely artistic part of mathematics. That “lightbulb” moment, the insight which enables you to solve a problem — this is the elusive mathematical idea.

Ideas are the part of mathematics that I understand the least, in a formal sense. Here are two questions:

  • What is a group?
  • How do you think about groups?

The first one is a precise “scientific” question. A group is a set equipped with some extra structure which satisfies some axioms. The formal answer is on Wikipedia’s page on groups. A group is a definition. But the second question is a different kind of question. Different people think about groups in different ways. Say G is a group generated by an element x satisfying x^5=x^8=1. What can you say about G? If you are a mathematics undergraduate who has just seen the formal definition of a group, you can probably say nothing. If you have a more mature understanding of group theory, you instantly know that this group is trivial, because you have a far more sophisticated model of what is going on. Ideas are complicated, and human-dependent. A computer’s idea of what a group is, is literally a copy of the definition in Wikipedia, and this is one of the reasons that computers are currently bad at proving new theorems by themselves. You can develop a computer’s intuition by teaching it theorems about groups, or teaching it examples of groups, or trying to write AIs which figure out group theory theorems or examples of groups automatically. But intuition is a very subtle thing, and I do not understand it at all well, so I will say no more about these ideas here. I think that the concept of a map being “canonical” is an idea rather than a definition — I think different mathematicians have different ways of thinking about this weasel word. In this post I’m going to talk about how the three other concepts are implemented in type theory, in the Lean theorem prover.

Definitions, true/false statements, and proofs

In contrast to ideas, the other parts of mathematics (the definitions, theorems/conjectures, and proofs) can be formalised in a foundational system, and hence can be created and stored on a computer in a precise way. By this, I don’t mean a pdf file! Pdf files are exactly what I want to move away from! I mean that people have designed computer programming languages which understand one of the various foundations of mathematics (set theory, type theory, category theory) and then mathematicians can write code in this language which represents the definition, true/false statement or proof in question.

I am certainly not qualified to explain how all this works in category theory. In set theory, let me just make one observation. A definition in set theory, for example the definition of the real numbers, or \pi, is a set. And a proof is a sequence of steps in logic. A definition and a proof seem to me to be two completely different things in set theory. A group is a mixture of these things — a group is an ordered quadruple (G,m,i,e) satisfying some axioms, so it’s a set with some logic attached.

In type theory however, things are surprisingly different. All three things — definitions, true/false statements, and proofs — are all the same kind of thing! They are all terms. A group, a proof, the real numbers — they are all terms. This unification of definitions and proofs — of sets and logic — is what seems to make type theory a practical foundational system for teaching all undergraduate level mathematics to computers.

Universes, types, and terms.

In type theory, everything is a term. But some terms are types. Not every term is a type, but every term has a type. A colon is used to express the type of a term in Lean — the notation x : T means that x is a term of type T. For example, the real number π (pi) is a term in Lean, the real numbers ℝ are a type, and we have π : ℝ, that is, π is a term of type ℝ. In set theory one writes π ∈ ℝ; in type theory we write π : ℝ. They both express the same mathematical concept — “π is a real number”.

Now π is a term but it’s not a type. In Lean, x : π makes no sense. In set theory, x ∈ π does happen to make sense, but this is a weird coincidence because everything is a set. Furthermore, the actual elements of π will depend on how the real numbers are implemented (as Dedekind cuts or Cauchy sequences, for example), and hence in set theory x ∈ π has no mathematical meaning; it happens to make sense, but this is a quirk of the system.

I claimed above that every term has a type. So what is the type of ℝ? It turns out that ℝ : Type. The real numbers are a term of a “universe” type called Type — the type theory analogue of the class of all sets.

Many of the mathematical objects which mathematicians think of as definitions either have type Type, or have type T where T : Type. As a vague rule of thumb, things we write using capital letters (a group, a ring,…) or fancy letters (the reals, the rationals) have type Type, and things we write using small letters (an element g of a group, a real number r or an integer n) have got type T where T is what we think of as the set which contains these elements. For example 2 : ℕ and ℕ : Type, or if g is an element of the group G then in Lean we have g : G and G : Type. You can see that there is a three-layer hierarchy here — terms at the bottom, types above them, and the universe at the top.

  • Universe : Type
  • Examples of types: ℕ, ℝ, G (a group), R (a ring), X (something a set theorist would call a set), a Banach space, etc. Formally, we say ℝ : Type.
  • Examples of terms: π (a term of type ℝ), g (an element of the group G, so a term of type G), x (an element of X, so a term of type X). Formally, we say g : G.

This hierarchy is more expressive than the hierarchy in set theory, where there are only two levels: classes (e.g. the class of all sets), and sets.
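You can ask Lean 3 for these typing judgements directly; with mathlib’s reals imported, the #check outputs are as in the comments:

```lean
import data.real.basic

#check (2 : ℕ)   -- 2 : ℕ
#check ℕ         -- ℕ : Type
#check (37 : ℝ)  -- 37 : ℝ
#check ℝ         -- ℝ : Type
```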

There is a standard use of the colon in mathematics — it’s in the notation for functions. If X and Y are sets (if you’re doing set theory) or types (if you’re doing type theory), then the notation for a function from X to Y is f : X → Y. This is actually consistent with Lean’s usage of the colon; Lean’s notation for the collection \mathrm{Hom}(X,Y) of functions from X to Y is X → Y , which is a type (i.e. X → Y : Type, corresponding to the fact that set theorists think of \mathrm{Hom}(X,Y) as a set), and f : X → Y means that f is a term of type X → Y, the type-theoretic version of f \in \mathrm{Hom}(X,Y), and the way to say that f is a function from X to Y in type theory.
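As a small illustration (f here is just a made-up function):

```lean
-- f is a term; its type ℕ → ℕ is itself a type (a term of type Type)
def f : ℕ → ℕ := λ n, n + 1

#check f        -- f : ℕ → ℕ
#check ℕ → ℕ    -- ℕ → ℕ : Type
```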

(Not for exam) Strictly speaking, universes are types, and types are terms, but this is a linguistic issue: often when people speak of types, they mean types which are not universes, and when people speak of terms they mean terms which are not types. But not always. This confused me when I was a beginner.

Theorems and proofs

This is where the fun starts. So far, it just looks like a type is what a type theorist calls a set, and a term is what they call an element. But let’s now look at another universe in Lean’s type theory, the universe Prop of true/false statements, where our traditional mental model of what’s going on is quite different. We will see how theorems and proofs can be modelled in the same way as types and terms.

So, how does this all work? As well as the universe Type, there is a second universe in Lean’s type theory called Prop. The terms of type Prop are true/false statements. There is an unfortunate notation clash here. In mathematics, the word proposition is often used to mean a baby theorem, and theorems are true (or else they would be conjectures or counterexamples or something). Here we are using the word Proposition in the same way as the logicians do — a Proposition is a generic true/false statement, whose truth value is of no relevance.

This will all be clearer with examples. 2 + 2 = 4 is a Proposition, so we can write 2 + 2 = 4 : Prop. But 2 + 2 = 5 is also a Proposition, so 2 + 2 = 5 : Prop as well. I’ll say it again — Propositions do not have to be true! Propositions are true/false statements. Let’s see some more complex examples.
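Both claims can be checked in Lean 3:

```lean
#check (2 + 2 = 4)  -- 2 + 2 = 4 : Prop
#check (2 + 2 = 5)  -- 2 + 2 = 5 : Prop  (also a Proposition; Lean
                    -- doesn't care that it happens to be false)
```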

The true/false statement that x+0=x for all natural numbers x is a Proposition: in Lean this can be expressed as (∀ x : ℕ, x + 0 = x) : Prop . A Proposition is a term of type Prop (just like the types we saw earlier were terms of type Type). Let RH denote the statement of the Riemann Hypothesis. Then RH : Prop. We don’t care if it’s true, false, independent of the axioms of mathematics, undecidable, whatever. A Proposition is a true/false statement. Let’s look at the part of Lean’s type theory hierarchy which lives in the Prop universe.

  • Universe: Prop
  • Examples of types : 2 + 2 = 4, 2 + 2 = 5, the statement of Fermat’s Last Theorem, the statement of the Riemann Hypothesis.
  • Examples of terms: ??

So what are the terms in this three-layer Prop hierarchy? They are the proofs!

Propositions are types, proofs are terms.

This is where the world of type theory seriously diverges from the way things are set up in set theory, and also the way things were set up in my brain up until three years ago. In trying to understand what was going on here, I even realised that mathematicians take some liberties with their language here. Before we start, consider this. The Bolzano-Weierstrass theorem is some statement in analysis about a bounded sequence having a convergent subsequence. I want to talk a little bit about how mathematicians use the phrase “Bolzano-Weierstrass theorem” in practice. A mathematician would say that the Bolzano-Weierstrass theorem is this statement about sequences having convergent subsequences. But if they are in the middle of a proof and need to apply it in order to continue with their proof, they say “by the Bolzano-Weierstrass theorem we deduce that there’s a convergent subsequence”. Nothing seems at all funny about any of this. But what I want to point out is that mathematicians are using the phrase “the Bolzano-Weierstrass theorem” in two different ways. When they say what it is, they are referring to the statement of the theorem. But when they say they’re using the Bolzano Weierstrass theorem, what they are actually using is its proof. The Birch and Swinnerton-Dyer conjecture is a perfectly well-formed true/false statement, you can certainly say what it is. But you can’t use the Birch and Swinnerton-Dyer conjecture in the middle of a proof of something else if you want your proof to be complete, because at the time of writing the conjecture is an unsolved problem. Making a clear distinction between the statement of a theorem, and the proof of a theorem, is important here. A mathematician might use the phrase “the Bolzano-Weierstrass theorem” to mean either concept. 
This informal abuse of notation can confuse beginners, because in what follows it’s really important to be able to distinguish between a theorem statement and a theorem proof; they are two very different things.

In the natural number game, I use this abuse of notation because I am trying to communicate to mathematicians. The statement ∀ x : ℕ, x + 0 = x is a true statement, and I say things like “this is called add_zero in Lean”. In the natural number game I write statements such as add_zero : ∀ x : ℕ, x + 0 = x. But what this means is that the term called add_zero in Lean is a proof of ∀ x : ℕ, x + 0 = x! The colon is being used in the type theory way. I am intentionally vague about this concept in the natural number game. I let mathematicians believe that add_zero is somehow equal to the “idea” that x+0=x for all x. But what is going on under the hood is that ∀ x : ℕ, x + 0 = x is a Proposition, which is a type, and add_zero is its proof, which is a term. Making a clear distinction between the statement of a theorem, and its proof, is important here. The statements are the types, the proofs are the terms.
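Here is the distinction spelt out in Lean 3 (my_add_zero is a hypothetical home-made version of mathlib’s add_zero, stated for ℕ only):

```lean
-- the statement ∀ x : ℕ, x + 0 = x is a type (living in Prop);
-- the proof λ x, rfl is a term of that type (x + 0 = x is true by definition)
theorem my_add_zero : ∀ x : ℕ, x + 0 = x := λ x, rfl

#check my_add_zero  -- my_add_zero : ∀ (x : ℕ), x + 0 = x
```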

  • Universe: Prop
  • Examples of types: 2 + 2 = 4, 2 + 2 = 37, the statement of Fermat’s Last Theorem — ∀ x y z n : ℕ, n > 2 ∧ x^n + y^n = z^n → x*y = 0.
  • Examples of terms: the proof that 2 + 2 = 4 (a term of type 2 + 2 = 4), the proof of Fermat’s Last Theorem (a term of type ∀ x y z n : ℕ, n > 2 ∧ x^n + y^n = z^n → x*y = 0)

Elements of a theorem

So our mental model of the claim π : ℝ is that ℝ, the type, is “a collection of stuff”, and π, the term, is a member of that collection. If we continue with this analogy, it says that the statement 2 + 2 = 4 is some kind of collection, and a proof of 2 + 2 = 4 is a member of that collection. In other words, Lean is suggesting that we model the true/false statement 2 + 2 = 4 as being some sort of a set, and a proof of 2 + 2 = 4 is an element of that set. Now in Lean, it is an inbuilt axiom that all proofs of a proposition are equal. So if a : 2 + 2 = 4 and b : 2 + 2 = 4 then a = b. This is because we’re working in the Prop universe — this is how Propositions behave in Lean. In the Type universe the analogue is not remotely true — we have π : ℝ and 37 : ℝ and certainly π ≠ 37. This special quirk of the Prop universe is called “proof irrelevance”. Formally we could say that if P : Prop, if a : P and if b : P then a = b. Of course if a Proposition is false, then it has no proofs at all! It’s like the empty set. So Lean’s model of Propositions is that the true ones are like sets with 1 element, and the false ones are like sets with 0 elements.
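In Lean 3, proof irrelevance is even definitional, so rfl proves it:

```lean
-- any two proofs a and b of the same Proposition P are equal
example (P : Prop) (a b : P) : a = b := rfl
```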

Recall that if f : X → Y then this means that f is a function from X to Y. Now say P and Q are Propositions, and let’s say that we know P\implies Q. What does this mean? It means that P implies Q. It means that if P is true, then Q is true. It means that if we have a proof of P, we can make a proof of Q. It is a function from the proofs of P to the proofs of Q. It is a function sending an element of P to an element of Q. It is a term of type P → Q. Again: a proof h of P\implies Q is a term h : P → Q. This is why in the natural number game we use the symbol → to denote implication.

Let false denote a generic false statement (thought of as a set with 0 elements), and let true denote a generic true statement (thought of as a set with 1 element). Can we construct a term of type false → false or a term of type true → true? Sure — just use the identity function. In fact, in both cases there is a unique function — the hom sets have size 1. Can we construct a term of type false → true? Sure, there is a function from the set with 0 elements to a set with 1 element, and again this function is unique. But can we construct a term of type true → false? No we can’t, because where do we send a proof of true? There are no proofs of false to send it to. So true → false is a set of size 0. This corresponds to the standard truth table for →, where the first three statements we analysed were true and the last was false.
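All four cases can be tried in Lean 3; the first three terms exist, and the fourth is commented out because there is no such term:

```lean
example : false → false := λ h, h        -- the identity function
example : true → true := λ h, h          -- the identity function
example : false → true := λ h, trivial   -- nothing to do: there are no inputs
-- example : true → false := ...         -- impossible: the proof trivial of
--                                          true would have to be sent to a
--                                          proof of false, and there are none
```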

The proof of Fermat’s Last Theorem is a function

So what does a proof of ∀ x y z n : ℕ, n > 2 ∧ x^n + y^n = z^n → x*y = 0 look like? Well, there is an arrow involved in that Proposition, so the statement of Fermat’s Last Theorem is some kind of set of the form \mathrm{Hom}(A,B), which means that in Lean, a proof of Fermat’s Last Theorem is actually a function! And here is what that function does. It has five inputs. The first four inputs are natural numbers x, y, z and n. The fifth input is a proof: it is a proof of the Proposition n > 2 ∧ x^n + y^n = z^n. And the output of this function is a proof of the Proposition x*y = 0. This is quite an unconventional way to think about what the proof of Fermat’s Last Theorem is, and let me stress that it does not help at all with actually trying to understand the proof — but it is a completely consistent mental model for how mathematics works. Unifying the concept of a number and a proof — thinking of them both as terms — enables you to think of proofs as functions. Lean is a functional programming language, and in particular it is designed with functions at its heart. This, I believe, is why theorem provers such as Lean, Coq and Isabelle/HOL, which use type theory, are now moving ahead of provers such as Metamath and Mizar, which use set theory.
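In Lean 3 this looks something like the following sketch (FLT is a hypothetical name; this is only the statement, not the proof):

```lean
-- the statement of Fermat's Last Theorem, a term of type Prop
def FLT : Prop := ∀ x y z n : ℕ, n > 2 ∧ x ^ n + y ^ n = z ^ n → x * y = 0

#check FLT  -- FLT : Prop

-- a proof of FLT would be a term of type FLT: a function eating four
-- naturals x y z n and a proof of n > 2 ∧ x ^ n + y ^ n = z ^ n,
-- and spitting out a proof of x * y = 0
```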

Prove a theorem! Write a function!

  Universe        Type     Prop
  Type examples   ℕ, ℝ     2 + 2 = 5, (∀ a : ℕ, a + 0 = a)
  Term examples   37, π    add_zero, rfl

Cheat sheet
Posted in Learning Lean, Type theory, undergrad maths | Tagged , , , | 22 Comments

The Sphere Eversion Project

The perfectoid project.

Johan Commelin and I worked with Patrick Massot on the Lean perfectoid space project, but Patrick is not an algebraic number theorist, and he found our literature hard to read in places. He was constantly picking up on what I would regard as “typos” or “obvious slips” in Wedhorn’s unpublished (and at that time unavailable) notes — stuff which was throwing him off. Wedhorn’s notes contain an explicit warning on the front that they are not finished and you read them at your own risk, but Johan and I were between us expert enough to be able to explain precisely what was going on whenever Patrick asked about operator precedence or the meaning of some undefined \cdot. Torsten Wedhorn — thank you for your blueprint, now finally available on the arXiv. We learnt a lot from it — both about the definition of an adic space, and about how to write blueprints.

The point Patrick wanted me to understand was this: If I as a mathematician thought I really knew the definition of a perfectoid space, why couldn’t I come up with a precise and error-free and furthermore self-contained mathematical document that he, as a non-expert but as a professional mathematician, would be able to read without difficulty? Where was my plan for formalising perfectoid spaces? The answer to that was that the plan was in my head. Along the way, Patrick in particular learnt that there are problems with this approach.

The cap set problem.

In stark contrast to the perfectoid shambles was the work of Sander R. Dahmen, Johannes Hölzl and Robert Y. Lewis on the cap set problem — see their arXiv paper. Their main result is a complete computer formalisation of the main theorem in the 2017 Annals of Mathematics paper by Ellenberg and Gijswijt on the cap set problem. Sander, Johannes and Rob organised their workflow very differently. They started with a detailed pdf document written by Dahmen (a mathematician). This old-fashioned proof was self-contained, just did normal maths in a normal way, and didn’t mention Lean once. It is a document which will convince any proper mathematician who cares to read it carefully of the correctness of the proof. Dahmen’s work is a completely rigorous old-fashioned proof of the theorem. There is no doubting its correctness. And because it is complete, correct, and self-contained modulo undergraduate level mathematics, people who are not necessarily specialists in mathematics can now begin to work together, using a computer system which knows a lot of undergraduate mathematics, and they can turn Dahmen’s blueprint into a proof of Ellenberg-Gijswijt stored as a virtual mathematical object in the memory of a computer. Such an object is far more amenable to AI analysis than the corresponding pdf file, because computers have a hard time reading natural language, even if written by mathematicians. Apparently we don’t always explain things correctly.

My point here is: Dahmen’s blueprint was an important part of the process.

The sphere eversion project.

But back to Patrick. He is conjecturing that Lean 3 is sufficiently powerful to formalise a complete proof of sphere eversion, the Proposition that you can turn a sphere inside-out without creasing it, as long as it is made out of material which is allowed to pass through itself, like light but bendier. There’s a video. 3D geometry is involved.

The amazing news is that Patrick has now written a blueprint for the proof. This is a normal mathematical document containing a proof of sphere eversion which anyone who knows enough maths to know what a manifold is, could check and see was a completely rigorous and self-contained proof. It is written by an expert in the area, and typeset in a clever way so it displays beautifully in a browser. It is an interactive blueprint. It is the beginning. It is half of the Rosetta Stone that Patrick is creating, which will explain one story in two languages — a human language, and a computer language.

Patrick has also started a Lean sphere eversion project on GitHub . This is a currently mostly empty Lean repository, which will ultimately contain a formal proof of sphere eversion if the project is successful.

Those two parts together form Patrick’s Sphere Eversion Project.

At some point in the near future, Patrick is going to need mathematicians to formalise parts of the proof. The people he’s looking for are perhaps people with degrees in mathematics who are interested in trying something new. Patrick and I, together with Rob Lewis and Jeremy Avigad, are still working hard on our forthcoming book Mathematics in Lean, an introduction to the skills that a mathematician will need in order to participate. Once you have learned how to write Lean, Patrick has some computer game puzzles which you might be interested in playing.

If anyone has questions, they could ask in the sphere eversion topic in #maths in the Zulip chat (login required, real names preferred, be nice), a focussed and on-topic Lean chat where many experts hang out. People who are looking for a less formal setting are welcome to join the Xena Discord server (meetings Thursday evening at 5ish).

The schemes project

What is so amazing about projects like these is that they are teaching us how to use dependent type theory as a foundation for all of pure mathematics. We have had occasional problems along the way. Every division ring is a ring and hence a monoid and thus a semigroup. Invisible functions piled up in type class inference. The perfectoid project helped guide us to the realisation that type class inference was becoming problematic at scales which mathematicians needed it to work, and the Lean developers have responded to this issue by completely redesigning the system in Lean 4.

But let me finish with my Lean schemes project — my first serious Lean project, written with Chris Hughes and Kenny Lau, both at the time first year undergraduates. The project is completely incompatible with modern Lean and mathlib, but if you compile it then you get a sorry-free proof that an affine scheme is a scheme. During our proof we ran into a huge problem because our blueprint, the Stacks Project, assumed that R[1/f][1/g]=R[1/fg], and this turns out to be unprovable for Lean’s version of equality: this equality is in fact one of Milne’s “canonical” isomorphisms. The Lean community wrestled with this idea, and has ultimately come up with a beefed-up notion of equality for which the identity is now true. Amelia Livingston is just putting the finishing touches on its implementation in Lean — many thanks indeed Amelia. It has been an extraordinary journey, and one which has taught me a great deal about localisation and how to think about it. I had never realised before that the rationals might not be equal to the field of fractions of the integers — whether those two fields are actually equal is an implementation issue of no relevance to mathematicians. What we need to know is merely that they are isomorphic and that there is a preferred isomorphism in each direction. This has design consequences which we are only just beginning to understand properly.
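For what it’s worth, here is a sketch of the standard universal-property argument for why R[1/f][1/g] and R[1/fg] are canonically isomorphic (though not equal): a ring map out of a localisation R[1/s] is the same thing as a ring map out of R sending s to a unit, and a map sends fg to a unit if and only if it sends both f and g to units. So for every ring S,

\mathrm{Hom}(R[1/f][1/g], S) \cong \{\varphi : R \to S \mid \varphi(f), \varphi(g) \in S^\times\} = \{\varphi : R \to S \mid \varphi(fg) \in S^\times\} \cong \mathrm{Hom}(R[1/fg], S),

so the two rings represent the same functor and are hence uniquely isomorphic as R-algebras, without ever being literally equal as types.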

I wish the sphere eversion project every success. I am confident that it will teach us more about how to formalise mathematics in dependent type theory.

Posted in Imperial, Learning Lean, Type theory | Tagged , , , , , , , | 2 Comments

The complex number game

Mohammad and I spruced up the natural number game over the last few weeks. It now saves your progress, which is great (thanks Mohammad). Also coming along nicely is the real number game, currently written mostly by Dan Stanescu, but Gavin Thomson has just joined the party and I promise that I will be contributing in June when marking is over. Anyone who wants to get sneak previews of it should hang around on the Xena Discord server, which is where our Thursday evening Xena Project meetings are taking place this term; I’ll probably release a secret alpha this coming Thursday. Last Thursday on the Discord we had people any% speedrunning and racing the Lean tutorial project. This fits very well into my general worldview: I think that doing mathematics in Lean is like solving levels in a computer puzzle game, the exciting thing being that mathematics is so rich that there are many many kinds of puzzles which you can solve. Level creators are conjecture makers. If you are into puzzle games and know a little Lean then you might also want to check out the Codewars website, where support for Lean just moved from beta to official; right now there are over 50 approved Lean Kata, ranging from easy facts about odd and even numbers to some very tricky Diophantine equations.

Another project I’ve been involved in is the forthcoming book Mathematics in Lean. Again I am guilty of doing very little so far. What I’ve been trying to do is to formalise “random” basic undergraduate mathematics in Lean as best I can, using as much automation as I can, in order to see whether it is possible to write basic tactic proofs in “the way a mathematician would write them”, but none of this stuff has made it into the book yet. It is miserable when you are trying to prove a theorem in Lean and you’ve reduced it to a statement which is completely mathematically obvious (e.g. the goal is to prove that two things are “the same”), but you can’t persuade Lean to believe this. The art is to set things up in such a way that this issue doesn’t occur, and one way of doing this is by seeking advice from people who understand type theory better than I do.

So I’ve been doing several experiments, hoping that some of them will turn from ideas into chapters of this book, and one which I think went quite well was a construction of the basic properties of the complex numbers, which I liked so much that I’ve put into its own repo for now: The Complex Number Game.

This repo came about with me looking at the official mathlib file for the complex numbers, and “remixing” it, re-ordering it so that it was a bit more coherent, removing some of the more obscure computer-sciency lemmas, and heading straight for the proof that the complexes are a ring. My rewrite of the mathlib proof is heavily documented and the proofs of all the lemmas along the way are in tactic mode; in my view this makes things more accessible to mathematicians. The file data.complex.basic in Lean’s library has been examined and revisited so many times since 2017 that it is now in very good shape — in particular lemmas are correctly tagged with simp and this makes simp a very powerful tool.

If you want to play the complex number game right now, you have to install Lean first. I would love to share this game on CoCalc but right now I can’t get imports to work. I will come back to this in June.

If installing Lean really is not something you want to do, then you can play a crappier preliminary version using the Lean Web Editor (this is before I reorganised the material to make it more mathematician-friendly and turned more proofs from term mode to tactic mode). Or you can just watch me play it! I live streamed on Twitch a walkthrough of the tutorial level (the complexes are a commutative ring) last Thursday, and here is the video. This Thursday (28th May 2020) at 5pm UK time (1600 UTC) I’ll be on Twitch again, live streaming a walkthrough of the first two levels: sqrt(-1), and complex conjugation. I’ll be on Twitch for around an hour, and then we will retire to Discord for a Q&A and more any% racing. Mathematicians who are Lean beginners are welcome!

Someone else doing some remixing is my daughter Kezia Xena Buzzard (after whom the project is named). She made my new profile pic from Chris Hughes’ proof of quadratic reciprocity 🙂

Posted in General, Learning Lean, undergrad maths | Tagged , | 4 Comments

Xena Project Thursday meetings start with talk tomorrow 21st May

Usually, the Xena project meets every Thursday evening (5pm to 10pm UK time) during Imperial College’s term time. Because students have been doing exams and the campus has been closed, there have not been any meetings this term so far, but now exams are winding down I am going to try and start things up again on Thursday evenings, from now until the end of June (i.e., the end of term).

We’re launching at 5pm UK time (=1600 UTC) on Thurs 21st May 2020 (check out the Xena project Google calendar) with a live demo by me of the new complex number game (which will be live on GitHub by tomorrow, assuming I’ve finished writing it). I’ll walk through the tutorial world, which proves that the complex numbers are a commutative ring. I’ll do it live on Twitch. Afterwards we can all retire to the Xena project discord server and plan world domination.

Posted in Learning Lean, undergrad maths | Tagged , | Leave a comment