## Proofs by induction

Term is over and I have spent the entire day thinking about induction. I was mainly concerned about the fact that induction somehow seemed hard for mathematicians to do in Lean. Even proving that the sum of the first `n` odd positive integers is `n^2` seemed to be complicated. But I now know several ways of doing it, and I want to see if I can guide you to coming up with one yourself. Note that I think these Lean exercises are hard for mathematicians, because unfortunately mathematicians (correctly in my opinion) would regard the statement that the sum of the first $n$ odd positive integers is $n^2$ as “trivial by induction”. Let me try and show you several ways of doing induction in Lean, not because you need to know several ways of doing it, but because I fear that in practice people really might occasionally find themselves in a situation where they need a new kind of proof by induction, and if they read this then maybe afterwards they’ll understand the exact statements they need to prove to get induction working.

This entire exercise is available for the Lean Web Editor by the way. It’s also available in the Xena project (in the blog directory) if you want to download it to run in a local copy of Lean (which is highly recommended).

OK so let’s start with the two functions in our induction proof:

``````
def odd : ℕ → ℕ := λ i, 2 * i + 1
def square : ℕ → ℕ := λ i, i ^ 2
``````

You can play around with them, check $odd(2)=5$ by typing `#reduce odd 2`
and you can check that $1000^2=1000000$ by typing `#eval square 1000` (for extra credit: what happens if you type `#reduce square 1000`? Why?). It’s well-known that the sum of the first $n$ positive odd integers is $n^2$ and it’s well-known that one can prove it by induction, with the key inductive step being that $d^2 + (2d+1)=(d+1)^2$. We can formalize this neatly in Lean:

``````
theorem odd_square_inductive_step (d : ℕ) : square d + odd d = square (succ d)
``````

and one can prove it relatively easily, using the `ring` tactic, as one can see in the downloadable file.
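
Such a proof might look like this (a sketch, assuming Lean 3 with mathlib available for the `ring` tactic; it may differ from the downloadable file):

```lean
import tactic.ring
open nat

def odd : ℕ → ℕ := λ i, 2 * i + 1
def square : ℕ → ℕ := λ i, i ^ 2

theorem odd_square_inductive_step (d : ℕ) : square d + odd d = square (succ d) :=
begin
  -- `succ d` and `d + 1` are definitionally equal, so `show` can
  -- change the goal into an equation between polynomials in `d`,
  -- which `ring` then closes.
  show d ^ 2 + (2 * d + 1) = (d + 1) ^ 2,
  ring
end
```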

What is left is the question of how we are going to write the function which takes $n$ as input and returns the sum $odd(0)+odd(1)+\cdots+odd(n-1)$. Then everything should be straightforward. But there are several ways to define this function!

Exercise 1.

Define a Lean function which takes as input a function `f : ℕ → ℕ` and returns the function which sends a natural number n to $f(0)+f(1)+f(2)+...+f(n-1)$.

``````
definition my_sum_to_n (summand : ℕ → ℕ) : ℕ → ℕ :=
sorry -- delete this and then fill in your definition here.
``````

Exercise 2.

The function you defined comes with a law of induction, and you now need to prove the zero and successor results for your function.

``````
theorem my_zero_theorem (summand : ℕ → ℕ) : my_sum_to_n summand 0 = 0 := sorry

theorem my_successor_theorem (summand : ℕ → ℕ) (n : ℕ) :
my_sum_to_n summand (nat.succ n) = my_sum_to_n summand n + summand n := sorry
``````

Exercise 3

Now can you prove `my_odd_square_theorem`, the fact that the sum of the first $n$ odd positive integers is $n^2$?

``````
theorem my_odd_square_theorem (n : ℕ) : my_sum_to_n odd n = square n := sorry
``````

Exercise 4

Finally, find someone who has given a completely different definition of `my_sum_to_n` (try looking in the comments below, for example) and
add their definition to your session (to your “context”):

``````
definition other_sum_to_n (summand : ℕ → ℕ) : ℕ → ℕ := sorry
``````

and now see if you can prove that their definition is the same as yours.

``````
theorem consistency : my_sum_to_n = other_sum_to_n := sorry
``````

[You might want to start
`begin apply funext,intro f,apply funext,intro n,`…]

I will give some possible definitions for `my_sum_to_n` in the comments, if nobody else gets there first.
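
To give the flavour, here is one possible definition (a sketch, not necessarily the same as any given in the comments), using Lean 3's equation compiler; with this definition the zero and successor theorems should hold by `rfl` (and if not, `simp [my_sum_to_n]` is worth a try):

```lean
definition my_sum_to_n (summand : ℕ → ℕ) : ℕ → ℕ
| 0 := 0                                     -- empty sum is zero
| (n + 1) := my_sum_to_n n + summand n       -- sum to n+1 is sum to n, plus f(n)

theorem my_zero_theorem (summand : ℕ → ℕ) : my_sum_to_n summand 0 = 0 := rfl

theorem my_successor_theorem (summand : ℕ → ℕ) (n : ℕ) :
  my_sum_to_n summand (nat.succ n) = my_sum_to_n summand n + summand n := rfl
```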

Posted in General | 3 Comments

## No confusion over no_confusion

I’ve always been a bit confused about `no_confusion`. Functions with names like this are occasionally applied in Theorem Proving In Lean but with little explanation (for example in section 10.1 of TPIL they use it twice to prove `2 ≠ 1`). Of course you can always try and find out your own explanations, but in the past I’ve struggled trying to get to the bottom of `no_confusion_type` with no docs and no hints as to what its purpose is. But Chris Hughes and I sat down last night and tried to figure out what was going on, and I think we cracked it. So here is a hopefully unconfused explanation of at least some of the aspects of `no_confusion`. Before I start on it though, here are some facts about inductive types.

## Free gifts with every inductive type.

I found the explanation of inductive types in TPIL very helpful. But let me flag a few things which didn’t sink in the first time.

Let’s create a really simple inductive type — an enumerated type with two constructors.

``````
inductive mytype
| AA : mytype
| ZZ : mytype
``````

[try it here — lean web editor]

What actually happens when this code is processed? Lean adds a new constant `mytype` to the environment, which is a fancy way of saying that it now makes sense to talk about things of type `mytype`. It also adds some other stuff. It adds the constructors `mytype.AA` and `mytype.ZZ` — these are the two ways of creating new things of type `mytype`. And it adds the eliminator `mytype.rec`. Let’s take a closer look at this eliminator. The output of `#check @mytype.rec` is

``````
mytype.rec : Π {C : mytype → Sort u_1}, C mytype.AA → C mytype.ZZ → Π (n : mytype), C n
``````

which reads as follows: "Say we have a function `C` from `mytype` to somewhere. If we know `C AA` and we know `C ZZ`, then for all `n` of type `mytype`, we know `C n`."

The point that I want to emphasize here is that these things -- the new type, its constructors and eliminator -- are added into the environment as axioms, or constants or whatever you want to call them. You can't "look at the proof" of `mytype.rec` in Lean -- it just is. If you try `#print mytype.rec` you're just informed that `mytype.rec` is an eliminator. Take a look at the output, as well as the output of `#print mytype` and `#print mytype.AA` to see more examples of new things which have just appeared in your Lean environment.
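
To see the eliminator in action, here is a sketch of defining a function out of `mytype` by supplying the two required values (the function name `is_AA` is mine); note that the motive `C` is given explicitly as the constant function to `bool`:

```lean
-- a function mytype → bool defined via the eliminator:
-- we give its value on AA (namely tt) and on ZZ (namely ff).
def is_AA : mytype → bool :=
λ m, @mytype.rec (λ _, bool) tt ff m

#reduce is_AA mytype.AA -- tt
#reduce is_AA mytype.ZZ -- ff
```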

However there is a whole bunch of other stuff which has also just appeared in the environment. If you type `#print prefix mytype` you will see, as well as the constructors and the eliminator, about 14 other new functions which have magically appeared, including the dreaded (or formerly dreaded, in my case) `mytype.no_confusion`, which appears to be a function whose output is of type `mytype.no_confusion_type`. One thing which perhaps needs to be stressed here though, is that these 14 other functions are not just magic new axioms like the constructors and eliminator -- they have actually been proved or constructed using our new axioms. You can take a look at the proof of `mytype.no_confusion` with
`#print mytype.no_confusion`. Doing this might well leave you confused, and it's the purpose of this post to try and unconfuse you.

## What does `no_confusion` do?

The idea behind inductive types is that two instances of an inductive type should be equal if and only if they were ultimately constructed in the same way. What does this actually mean? The constructors and eliminator do not seem to say this explicitly, so what is going on? Certainly (at least in constructive logic) every expression whose type is `mytype` will, at the end of the day, either equal `AA` or `ZZ`. But can we prove that `AA ≠ ZZ`? This sounds like the sort of thing which should somehow be an axiom, or part of the system in some way. But we've just seen how the system has changed when we created our new type -- we have `mytype` and `AA` and `ZZ` and the eliminator, and nothing else. If we want to prove `AA ≠ ZZ` then we are going to have to do it from these axioms -- we are going to have to prove it by induction somehow.

Here is quite a crazy proof that `AA ≠ ZZ`. I do know a simpler one, but the simpler proof doesn't generalise, and this slightly crazy proof I present here will turn out to be an instructive one (or, at least, I found it instructive).

First let's define a function which takes as input two expressions `s` and `t` of type `mytype`, and outputs a Proposition. The Proposition will be `true` if `s` and `t` are equal, and `false` otherwise. We can easily do this, using the `cases` tactic, which is in this setting just the tactic version of `mytype.rec`. Here's the function.

``````
def mytype_equal : mytype → mytype → Prop :=
begin
intros s t,
cases s,
{ -- s = AA
cases t,
{ -- s = AA, t = AA
exact true },
{ -- s = AA, t = ZZ
exact false }
},
{ -- s = ZZ
cases t,
{ -- s = ZZ, t = AA
exact false },
{ -- s = ZZ, t = ZZ
exact true }
}
end
``````

[ One could even write the same code as a function, without using tactic mode, and check that it equals
the tactic version:

``````
definition mytype_equal' : mytype → mytype → Prop :=
λ s t, mytype.rec (mytype.rec true false t) (mytype.rec false true t) s

example : mytype_equal = mytype_equal' := rfl
``````

]

Now here's the trick! Let's write a function which takes two terms `s` and `t` of type `mytype`, plus a proof that `s = t`, and returns a proof of `mytype_equal s t`. This sounds feasible -- after all, if `s = t` then we can just rewrite `mytype_equal s t` into `mytype_equal t t` which will always evaluate to `true`, and recall that `trivial` is a proof of `true`. So the code looks something like this:

``````
definition mytype_no_confusion (s : mytype) (t : mytype) : s = t → mytype_equal s t :=
begin
intro H, -- H : s = t
rw H, -- goal now mytype_equal t t
cases t, -- goal 1 now mytype_equal AA AA
{ trivial },
-- goal 2 now mytype_equal ZZ ZZ
{ trivial }
end
``````

If we run this code with `s` and `t` different elements of `mytype`, then we have proved `s = t → false`, which is exactly `s ≠ t`!

``````
open mytype -- so we can write AA instead of mytype.AA

theorem unconfused : AA ≠ ZZ := mytype_no_confusion AA ZZ
``````

That's how `no_confusion` works in this simplest case.

## The other thing `no_confusion` does

This is not quite the end of the story, however. For enumerated types this is all we need, but what about types like the natural numbers? Let's roll our own and see what else we need to do to give some rigour to the idea that "two natural numbers are the same if and only if they are constructed in the same way".

``````
inductive xnat
| zero : xnat
| succ : xnat → xnat
``````

The reason we think of these as the natural numbers is that it's not hard to convince oneself that the only ways to build instances of this type are `zero`, `succ zero`, `succ (succ zero)`, `succ (succ (succ zero))` and so on. Again, for these to be a good model of the natural numbers it's crucial that all of these things are distinct.

Exercise. Mimic the `mytype` proof and write code for `xnat_equal`, `xnat_no_confusion` and then prove `∀ n : xnat, zero ≠ succ n`.

But there's something else. We also need to prove ` ∀ m n : xnat, succ m = succ n → m = n` and our current set-up doesn't let us do this. The problem is that `xnat_equal (succ m) (succ n)` is `true` and we've lost any information about `m` and `n` by this point. You may have noticed when writing `xnat_equal` that we threw away `s` and `t` when defining `xnat_equal (succ s) (succ t)`. This was our error -- we need instead to return a Prop which depends on `s` and `t`. So how about this?

``````

def xnat_no_confusion_prop (s t : xnat) : Prop :=
begin
cases s with m,
cases t with n,
exact true, -- s = t = zero
exact false, -- s = zero, t = succ n
cases t with n,
exact false, -- s = succ m, t = zero
exact (m = n), -- new idea when s = succ m, t = succ n!
end
``````

Now we can still prove

``````
def xnat_no_confusion' (s t : xnat) (H : s = t) : xnat_no_confusion_prop s t :=
begin
rw H,
cases t with n,
exact trivial, -- t = 0
show (n=n), -- t = succ n
refl,
end
``````

but this time we have enough left in the tank to get the injectivity of `succ`:

``````
example (m n : xnat) : succ m = succ n → m = n := λ H, xnat_no_confusion' _ _ H
``````

[this is just term mode for `intro H, exact xnat_no_confusion' _ _ H`.]

## The whole truth

We used Props like `m = n` to pull off this trick. As you can imagine, for more complicated situations we might need some more general types. In fact the actual `no_confusion_type` and `no_confusion` functions built by Lean when the type is initialised look like the ones below. They take as input an arbitrary sort `P` and return types instead of the Props I used above:

``````
variables m n : xnat
variable P : Sort*

#reduce xnat.no_confusion_type P zero zero -- P → P
#reduce xnat.no_confusion_type P (succ m) zero -- P
#reduce xnat.no_confusion_type P zero (succ n) -- P
#reduce xnat.no_confusion_type P (succ m) (succ n) -- ((m = n) → P) → P
``````

I would imagine that this gives more flexibility in some cases; at this point I do not have a clear idea of why this greater flexibility is needed.
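
For what it's worth, here is a sketch of how the built-in `xnat.no_confusion` can be used directly, by choosing `P` cleverly in the reductions above (depending on elaboration you may need to supply `P` explicitly):

```lean
open xnat

-- no_confusion_type P zero (succ n) is just P, so taking P := false
-- turns a proof of zero = succ n straight into a proof of false.
example (n : xnat) : zero ≠ succ n :=
λ H, xnat.no_confusion H

-- no_confusion_type P (succ m) (succ n) is ((m = n) → P) → P,
-- so taking P := (m = n) and feeding in the identity function
-- extracts injectivity of succ.
example (m n : xnat) (H : succ m = succ n) : m = n :=
xnat.no_confusion H (λ h, h)
```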

[Finally let me comment that an easier proof of injectivity of `nat.succ` is just to write a one-sided inverse `pred`. This was the idea behind the simpler proof I was alluding to earlier. But this idea doesn't work in general, and it's an interesting exercise to figure out why!]

Posted in Uncategorized | Leave a comment

## Mini-tutorial : Currying.

A couple of mathematics undergraduates have asked me about this recently. Currying is something which is important in computer science, but is based on a fact which is trivial mathematically.

As it’s not hard to see, `→` is not associative. In other words, if `P`, `Q` and `R` are propositions, then the truth values of `P → (Q → R)` and `(P → Q) → R` might not be the same (for example, if all of `P`, `Q` and `R` were false). But computer scientists like writing `P → Q → R` because it looks terrifically clever and exciting, so we have to decide which one it means.

Convention: `P → Q → R` means `P → (Q → R)`.

If we know `P → (Q → R)`, how might we prove `R`? Well we could first prove `P`, which enables us to deduce `Q → R`, and then we could prove `Q` as well, which lets us deduce `R`. So in fact maybe `P → (Q → R)` is the same as `(P ∧ Q) → R`, which we can write as `P ∧ Q → R` because of some version of BIDMAS on steroids which computer scientists invented.

So let’s test our hypothesis in Lean.

``````
example (P Q R : Prop) : (P ∧ Q → R) ↔ (P → (Q → R)) :=
begin
cases classical.em P with hP hP;
cases classical.em Q with hQ hQ;
cases classical.em R with hR hR;
simp [hP,hQ,hR]
end
``````

This rather cheeky piece of code says “split into two cases P true and P false, then (because of the semicolon) split each of those two cases into two cases Q true and Q false, then split again on R, and then just simplify everything in each case”. There is also a constructive way to prove it — why not try? You might need to know the tactics `split,intros,cases,exact,assumption`. Here is a direct link to the lean web editor where you can try.

If you do this exercise, you might begin to understand why someone working in a functional language would choose to write `P → Q → R` rather than `P ∧ Q → R` — it saves you from having to do cases on the `and` — you can just use `intro`.
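
(For reference, here is a sketch of one constructive term-mode proof; look away if you want to find it yourself:)

```lean
example (P Q R : Prop) : (P ∧ Q → R) ↔ (P → (Q → R)) :=
⟨λ H hP hQ, H ⟨hP, hQ⟩,   -- curry: package the two hypotheses into an `and`
 λ H hPQ, H hPQ.1 hPQ.2⟩  -- uncurry: take the `and` apart again
```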

Posted in Learning Lean | Leave a comment

## Schemes!

Kenny Lau and I just defined a scheme in Lean!

The reason I am excited about this is that although most computer proof verification systems have plenty of definitions in, e.g. most of them seem to have stuff like groups and rings etc, the definition of a scheme is actually really complicated. Unlike the definition of a group, we had to actually prove theorems before we could formulate the definition of a scheme. Why is this? Not only does the definition rely on all sorts of auxiliary structures like a sheaf of rings on a topological space (which relies on definitions of presheaves of rings and of sets), the main issue is that a scheme must “locally look like the spectrum of a ring”, and to formulate this rigorously one has to define localizations of rings at multiplicative sets and prove a whole bunch of things about these localizations (almost all of this part of the project was done by Kenny).

My next goal is perfectoid spaces.

Posted in General | Leave a comment

## The integers.

Lean has integers inbuilt as part of the core library. You can call them `int` if you want to be old skool, but if you type `\Z` (and then hit the space bar) in VS Code or emacs or the lean web editor you should get a nice looking `ℤ`.

A word of warning before we start — if you just type some number like 3, then Lean will by default assume that it is a natural number, and the natural number 3 is a different 3 to the integer 3 — that’s how type theory works. You can demand the integer 3 by writing `(3:ℤ)` (the brackets are essential).

Many basic operations on the integers are defined by recursion (which just means induction for definitions — we prove by induction and define by recursion). This means that secretly there is a computer program in the background which is running the show, complete with proofs of its beliefs. One consequence of this is that one can often check explicit calculations using the magic words `dec_trivial` (which I think means “this is trivial, by decidability of equality on the integers”). For example

``````
theorem easy : (4:ℤ) - (5:ℤ) = (-1:ℤ) := dec_trivial
``````

[Just a brief reminder of how these things are thought about in type theory — the idea is that the statement `4-5=-1` is a type (it also has a type, which is `Prop`), “easy” is a new Lean term whose type is `4-5=-1`, and `dec_trivial` is a computer program which has produced this term. If this still makes your head hurt, forget it.]

In short, “explicit” calculations with integers can
be done using `dec_trivial`. But what about simple lemmas? How do we prove that addition is commutative on the integers? Well, if you worked through the explicit proof that addition on the natural numbers was commutative, then you could imagine that it’s going to be some sort of delicate proof by induction. But fortunately this has all been done for you by the lean developers, and we can just invoke the theorem in the core lean library to check addition is commutative:

``````
example : ∀ a b : ℤ, a + b = b + a := add_comm
``````

Indeed there are lots of basic facts about the integers which are inbuilt into Lean — you can see a few more here or (same words, different presentation) here. I wrote that document. Let me know which version you prefer and I will write further such documents using the style you like best. Learning to navigate your way around the library is a very important skill, and one which I say a little more about below when I leave you to work on this stuff on your own.
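
As a small sample of what is already there (a sketch; these lemma names should exist in core Lean or mathlib, and `#check` will confirm):

```lean
-- multiplication on ℤ is commutative
example : ∀ a b : ℤ, a * b = b * a := mul_comm
-- zero is a right identity for addition
example (a : ℤ) : a + 0 = a := add_zero a
-- multiplication distributes over addition
example (a b c : ℤ) : a * (b + c) = a * b + a * c := mul_add a b c
```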

## Congruences.

Let’s now define what it means for two integers `a` and `b` to be congruent mod `m`. We know exactly what it means though — it just means that `b-a` is a multiple of `m`. And what does it mean for `b-a` to be a multiple of `m`? It means that there’s some integer `n` such that `b-a=m*n`. We know what subtraction and multiplication mean — they are inbuilt into Lean. So we have our definition. Let’s also add some notation for it. You can get the standard `≡` symbol by typing `\==` in Lean.

``````
namespace xena

definition cong (a b m : ℤ) : Prop := ∃ n : ℤ, b - a = m * n

notation a ` ≡ ` b ` mod ` m  := cong a b m

#check (7 ≡ 22 mod 5)

end xena
``````

What does that `#check` mean? Unfortunately it doesn’t mean “check this is true” — it means “check this makes sense, and if it does, return its type”. If you click on the `#check` (and if you have the messages buffer open — click one of those symbols that looks like some white rectangular blocks in a pile on top of each other if you don’t) you will see `7 ≡ 22 mod 5 : Prop`. So Lean is telling us that `7 ≡ 22 mod 5` is a `Prop`, in other words a true-false statement. It is not telling us whether it’s true or false though! Indeed, if you change the numbers to `#check (7 ≡ 8 mod 5)` or anything else, Lean will still happily report that it is a true-false statement.

## Proving that congruences are true.

So how do we prove `7 ≡ 22 mod 5`? Well it looks like some concrete statement involving only known integers, so maybe we can use `dec_trivial` to prove it.

`example : 7 ≡ 22 mod 5 := dec_trivial -- fails!`

Why does this fail when things like subtraction of concrete integers succeed? It’s because Lean is trying to check that there exists an integer `n` such that `5 * n` is `15`, and this is beyond `dec_trivial`’s pay grade. The proof that `7 ≡ 22 mod 5` needs to read like this: “`n=3` works, and the proof of this is trivial”. So how do we feed this information into Lean? Well, it turns out we do it like this:

`example : 7 ≡ 22 mod 5 := ⟨3,dec_trivial⟩ -- works!`

The pointy brackets on the right can be typeset with `\<` and `\>`. To prove something of the form `∃ n, H` (“there exists n such that H is true”) we use `⟨n0,proof⟩`, where `n0` is our choice for `n` and `proof` is our proof that it works. In the case above, `H` is `22-7=5*3` and this is exactly the sort of thing that `dec_trivial` can produce a proof of.
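
For instance, with the same definitions (the numbers here are my own made-up example):

```lean
-- n = 3 works because 13 - 1 = 4 * 3
example : 1 ≡ 13 mod 4 := ⟨3, dec_trivial⟩
```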

## Proving facts about congruences.

[NB we are going to be writing simple proofs, here, and I would recommend that you have taken a look at at least some of this older post about building the natural numbers before you go much further into this post here.]

We can only use `dec_trivial` when all the numbers are known. If we have variables then we need to use theorems. Let’s see this in action. Let’s try and prove that for a fixed m, congruence mod m is an equivalence relation. Let’s start with reflexivity. Even here we will find we need a theorem.

## Reflexivity.

Let’s state the theorem.

``````
namespace xena

definition cong (a b m : ℤ) : Prop := ∃ n : ℤ, b - a = m * n

notation a ` ≡ ` b ` mod ` m  := cong a b m

theorem cong_refl (m : ℤ) : ∀ a : ℤ, a ≡ a mod m :=
begin

end

end xena
``````

[or click here to try in the Lean Web Editor]. The begin/end block put us into “tactic mode”, which is by far my favourite lean mode. Our job is to fill in the blank between the begin and the end. Our code currently has an error, saying that we wrote “end” but we hadn’t finished yet (indeed we haven’t started).

We are trying to prove something of the form `for all a, ...` so we can start with `intro a` (don’t forget the comma afterwards) and now our goal becomes `a ≡ a mod m`. Now this is _actually_ notation, and whilst it is what Lean prints, our goal is actually the meaning of the notation, which is `cong a a m`. We can’t unfold notation, but we can
unfold that cong. We don’t have to (indeed when we’re finished try deleting that unfold and notice that the proof still works fine), but let’s do it anyway:

``````
theorem cong_refl (m : ℤ) : ∀ a : ℤ, a ≡ a mod m :=
begin
intro a,
unfold cong,

end
``````

and if we put the cursor in that gap, we see that our goal is now
`∃ (n : ℤ), a - a = m * n`, and clearly our next step is going to be to prove this by suggesting that the reason it is true is because `n=0` works. We saw in “term mode” (i.e. outside a begin/end block) that we could use some pointy brackets, and we could do something fancy-schmancy like `refine ⟨0,_⟩` in tactic mode, but there’s an explicit tactic which will also work for us — the `existsi` tactic, which we can use to suggest the value to use in an exists statement. So let’s try

`existsi (0:ℤ),`

for the next line, and our goal now becomes

`a - a = m * 0`

That definitely looks true! But we can’t prove it with `dec_trivial` because there are variables in it. We need theorems, or automation (i.e. tactics which do heavy lifting). Here’s how to do it using theorems. First we observe that there’s a theorem called `mul_zero` which says that `x*0=0` for any `x`. We can rewrite our goal using that theorem:

`rw [mul_zero]`

[NB what do you think the theorem that says that for all `x`, `0*x=0` is called? Can you see why this is a different theorem?]

Now we have to show that any integer, subtract itself, is zero. This theorem is called `sub_self`, and rewriting again is going to finish the job:

`rw [sub_self]`

If you followed along carefully and remembered all the commas, you now have

``````
theorem cong_refl (m : ℤ) : ∀ a : ℤ, a ≡ a mod m :=
begin
intro a,
unfold cong,
existsi (0:ℤ),
rw [mul_zero],
rw [sub_self],
end
``````

and no red lines any more, so we’re done.

There is however a nicer way to finish after our brilliant idea to set `n=0`, namely letting automation do the rest. For example

``````
theorem cong_refl (m : ℤ) : ∀ a : ℤ, a ≡ a mod m :=
begin
intro a,
unfold cong,
existsi (0:ℤ),
simp
end
``````

The `simp` tactic does all the easy calculations for us. It does use `mul_zero` to simplify the right hand side, but it doesn’t use `sub_self` for the left — those who are interested in seeing the lemmas it does use can type `set_option trace.simplify true` just before the line stating the theorem, then click on the `simp` line (which should now be underlined in green, as it is producing output in the messages buffer) and take a look at what it actually does.

Here’s everything in the Lean Web Editor.

## Symmetry.

Here’s what we’re after. I would add this after reflexivity if I were you (but before `end xena`).

``````
theorem cong_symm (m : ℤ) : ∀ a b : ℤ, (a ≡ b mod m) → (b ≡ a mod m) :=
begin

end
``````

Notation is fickle; I had hoped we could get away without the brackets but we don’t seem to be able to. I guess they do clarify what’s going on.

After `intro`ing `a` and `b`, we can also `intro` our hypothesis `H : a ≡ b mod m`. Our hypotheses and goal should now look like this:

``````
m a b : ℤ,
H : a ≡ b mod m
⊢ b ≡ a mod m
``````

We now have a new problem with `∃` — instead of wanting to get rid of it in a goal, we want to use the fact that we have one in a hypothesis: we now have an _assumption_ that something exists (remember that `H` really says `∃ (n : ℤ), b - a = m * n` as you will be able to see if you type

`unfold cong at H,`

). So how can we use this? Well we need to use a new command — `cases`.

`cases H with t Ht,`

destroys `H` and replaces it with an integer `t` (the integer that exists) and a new hypothesis `Ht`, the proof that `t` does what it is supposed to do. At this point our hypotheses and goal looks like this:

``````
m a b t : ℤ,
Ht : b - a = m * t
⊢ b ≡ a mod m
``````

In particular we now have four integers available to us, namely `m, a, b, t`, plus the hypothesis that `b-a` is `m*t`. Our goal is notation and, unfolding it in our heads (or even using `unfold cong`), we see that it is `∃ (n : ℤ), a - b = m * n`.

I think I might leave the rest to you.

Q1 Now comes the maths! We need to write `existsi [something]`. And once we’ve done that, we need to invoke some theorems. Here’s a useful one — `mul_neg_eq_neg_mul_symm`. Use `#check mul_neg_eq_neg_mul_symm` to find out what it does. You are going to be using rewrites and simps. Did you know that you can write `rw ←Ht`? If you didn’t know that, you might want to do the exercise about the natural numbers before attempting this one.

### Transitivity.

Q2 Formulate and prove transitivity. Did you ever get stuck? Please let me know where. If you need to find lemmas in the lean libraries I would recommend typing `#check` and then some useful-looking keywords into Lean, and then hitting ctrl-space (and then escape and then ctrl-space again — I don’t know why but it works better the second time). This link might help you with keywords — make sure you know the difference between `sub` (a binary operator) and `neg` (a unary one). Remember also that `A_of_B` means “B implies A”.

### More.

Q3 Can you now prove

`theorem cong_is_equiv_reln (m : ℤ) : equivalence (λ a b, cong a b m) := sorry` ?

Can you prove that if `a ≡ b mod m` and `c ≡ d mod m` then `a + c ≡ b + d mod m`? And what about `a * c ≡ b * d mod m`? Can you prove `a ^ n ≡ b ^ n mod m`? You might have trouble getting `^` to work — that is because core lean has been designed with an operator overloading issue which has not been fixed yet and might not be for quite some time. Work your way around it with `local infix ` ^ ` := monoid.pow` at the top of your code [edit: curses! You need to have mathlib for this to work, and you need to get `import algebra.group_power` to work as well :-/]. If you can do all of these things, well done!

Posted in Learning Lean | Leave a comment

## Recent student successes

Here’s what some of Imperial’s undergraduates have been doing.

## Blue-eyed Islanders.

Chris Hughes typed up a solution to the blue eyed islanders problem! The question is here (M1F sheet 5) (question 6) (and also on Terry Tao’s blog here), and Chris’ solution is here.

I was told last year by an undergraduate that the official pdf solution I circulated to the students in 2016 was incomplete and the undergraduate did half-convince me of this. However I know for sure that Chris’ proof is complete, although philosophically there is still the issue of whether his formalisation of the problem is correct. This is an issue of interpretation rather than mathematics.

One of my first year tutees, Jin Su (Cris), told me the following great question: let’s say I tell two of my tutees secret non-negative integers, and then I write two distinct non-negative integers on my whiteboard; one is the sum of the two secret integers, and the other one is not. I then repeatedly ask each of my tutees in turn whether they know the other tutee’s number. Prove that eventually one of them will (truthfully) say “yes”.

I would imagine that this question can be dealt with in a similar manner. Mario Carneiro on the Lean gitter chat said that “usually knowledge is represented by a predicate of possible worlds rather than ‘things I know’”. Can anyone (a) work out what he means, (b) reprove the Islanders problem using this approach, or (c) solve Cris’ problem using any method at all?

## Matrices.

Ellen Arlt proved that if `R` is a ring, then the set of n by n matrices over `R` is also a ring. Equipping a type with a complex structure (such as a ring structure) is harder than you might imagine; there were Lean issues we had to solve, and mathematical ones too: we had to learn how to manipulate finite sums (e.g. change the order of summation), so we ended up learning a lot of generally useful skills. Ellen — please PR your work and I’ll link to it! A preliminary version is here but she finished the job yesterday. One thing that surprised me about the work was that she did not assume that the ring `R` was commutative! For sure it would be difficult to define the determinant of a matrix if the entries were in a non-commutative ring (even the case of a diagonal matrix is an issue). But the basic ring structure does not need commutativity of `R`, so I learnt something myself here.

## Equivalence relations and congruences.

Julian Wykowski formalised several of the results which I proved in M1F about congruence of positive integers. In particular he proved that being congruent mod m was an equivalence relation, and various other important facts about congruences (such as how they behave well with respect to addition and multiplication). Julian’s work is here . It’s very clearly written; why not goof around with it online by following this link ?

## The Exponential Function

Chris Hughes is formalising the exponential function and its basic properties, such as $e^{x+y}=e^xe^y$. This is very interesting to me because of all the theorems I proved in M1F, the de Moivre stuff is definitely the stuff that lies “furthest away from the axioms” and hence needs the most work.

## Localisation of a ring.

Last but most definitely not least — Kenny Lau has formalised a sufficiently large amount of Atiyah–Macdonald’s book on commutative algebra to enable us to localise a commutative ring at a multiplicative set. We are now well on the way to defining affine schemes in Lean and once these are defined, defining a scheme in full generality should be not too much of a problem. Kenny’s work can be found here and in particular his localisation work is here although I believe it has just been accepted into mathlib! Well done Kenny!

Thank you to all students who have contributed recently.

Posted in Uncategorized | Leave a comment

## What maths is already in Lean?

I sit around chatting to Imperial students on Thursday nights, and some of them will be writing Lean code. I encourage various people to work on various projects but in general I am quite happy to let people work on whatever they like. In the recent past I have encouraged people to prove that matrix multiplication is associative and distributive over matrix addition, that every prime which is 1 mod 4 is the sum of two squares, or that the spectrum of a commutative ring is a topological space. But in the past I have also encouraged people, foolishly, to do stuff which is already in core Lean or mathlib, and this was mainly because I had no good understanding of exactly what was there.

I decided to change this recently, by reading through all of core Lean and mathlib, and making notes on what was there. I now have a far clearer idea myself about what is there, and occasionally find myself telling students that something they are working on has already been done (of course this is no reason to stop working on it, but one might argue that there is still plenty of low-hanging fruit to work on).

So I decided to try and get some of my notes down on this blog, so that students can look for themselves. The current state of what I have managed to write is here, and I hope to update this page over the next few weeks until it contains a reasonable summary of most of the mathematics which Lean already knows. Once I have finished, it might be interesting to then continue by writing a list of all the mathematics which I think Lean could and should know, and which is accessible to undergraduates…

Posted in Uncategorized | 1 Comment