I can’t make Thursday 16th or Thursday 23rd November, so the Imperial sessions for those weeks will take place on Monday 13th and Monday 20th November.

Sorry for the inconvenience.

Kevin

Posted in Imperial

It’s not actually that hard to build the non-negative integers from scratch in Lean. Let’s call them `xnat`, because Lean already has an inbuilt `nat`. The code just looks like this:

```
inductive xnat
| zero : xnat
| succ : xnat → xnat
```

and that’s it — now `xnat` is a representation of the non-negative integers (computer scientists call them the “natural numbers”). Let me explain a bit about how this works.

This `inductive` command creates a new type (think of it as a new set) called `xnat`. Lean already has a copy of the non-negative integers, called `nat`, so we’ve called the new ones `xnat` to avoid confusion.

The `inductive` command above also tells you that there are exactly two ways that you can construct an element of type `xnat`. The two ways correspond to the two lines of code under the `inductive` command, which are definitions of the `zero` object and the `succ` function.

Firstly, you can just type `zero`, and you’ll get an `xnat`.

And secondly, if you have an `xnat` called `a` already, you can apply the function `succ` to it, and get a new `xnat` called, unsurprisingly, `succ a`. As you can see, the `succ` function above looks like a function from `xnat` to `xnat`, and that’s exactly what it is.

Now let’s imagine we want to construct some things of type `xnat`. At the beginning we don’t have anything at all of type `xnat`, so the `succ` function is useless to us. We have to use the `zero` construction, giving us a natural number which we could call zero. Here’s the full code (together with some other stuff which you don’t need to worry about).

```
namespace xena
inductive xnat
| zero : xnat
| succ : xnat → xnat
open xnat
#check zero -- it's an xnat
end xena
```

You can run this code online in your browser by clicking here. Wait a few seconds for everything to process, and then click on the `#check` line and see that `zero` is an `xnat`.

Now we have one `xnat`, we can build more using the `succ` function. Add the lines below to your Lean session (before the `end xena` line):

```
#check succ zero
#check succ (succ zero)
```

Because `zero` and `succ` are the *only* ways to make elements of `xnat`, it’s pretty clear that the only elements of `xnat` are either zero or `succ (succ (... (succ zero) ...))`. If we think of `succ` as meaning “successor”, or “add one”, then we can see that the only things of type `xnat` are zero, one, two, three… . So indeed `xnat` is a good model for the non-negative integers.

What next? We could define one and two:

```
definition one := succ zero
definition two := succ one
```

Now could we prove that one add one equals two? Well, no, we can’t yet, because we haven’t defined add yet! Let’s add a definition of add. We’ll define it by induction! Well, more precisely, by recursion. The idea is that `a+0` will just be `a`, and `a+succ(b)` will just be `succ(a+b)`.

```
definition add : xnat → xnat → xnat
| n zero := n
| n (succ p) := succ (add n p)
```
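To watch the recursion compute, you can ask Lean to evaluate an addition of concrete terms (a quick sketch; `#reduce` evaluates a term by unfolding definitions):

```
-- add (succ (succ zero)) (succ zero), i.e. 2 + 1:
--   = succ (add (succ (succ zero)) zero)   -- second equation of add
--   = succ (succ (succ zero))              -- first equation of add
#reduce add (succ (succ zero)) (succ zero)
```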

It would be nice to use the usual notation `+` for add:

```
notation a + b := add a b
```

Now let’s check that `one + one = two`. This should just be a case of unravelling the definitions of two, of one, and of add.

```
theorem one_add_one_equals_two : one + one = two :=
begin
unfold two,
unfold one,
unfold add,
end
```

It worked! In fact there is an easier way — the tactic `refl` just unravels everything and then checks that you’re left with something of the form `x=x`:

```
example : one + one = two :=
begin
refl
end
```

[NB an example is just a theorem without a name — if you have two theorems with the same name then Lean complains].

There’s also a harder way. Instead of using `unfold`, you can use `dunfold`, which does nothing more than the unfolding, so you really have to push it across the line at the end:

```
theorem one_add_one_equals_two_again : one + one = two :=
begin
dunfold two,
dunfold one,
dunfold add,
refl,
end
```

Now what I’d really like to do is to prove that `a+b=b+a`, because that’s a pretty important property of addition. To do that, we will only need to know a few Lean commands. This is why these calculations are not too hard to learn and to do, in stark contrast to M1F example sheets, which turn out to be insanely hard (indeed I am still only on sheet 2 and I am the lecturer).

So what exactly will we need to learn here? We will need to know about `refl` and `unfold`, which we’ve just seen. We’ll need to know how to prove things by induction, because that’s how we will prove `a+b=b+a`. And we’ll need to know about the `rewrite` command, which substitutes one equation or theorem into another. *And that’s it*. However the proof will involve several intermediate lemmas, so it does need some thought. I guide you through the proof in the exercises below (note: exercises are currently being written).
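As a tiny preview of `rewrite` in action, here is a made-up example (not one of the exercises):

```
-- rewrite [H] replaces the left-hand side of H by its right-hand side
-- in the goal; if the result has the form x = x, it's closed automatically.
example (a b : xnat) (H : a = b) : succ a = succ b :=
begin
rewrite [H],
end
```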

Our definition of addition is quite delicate. We can see that `n+zero=n` by *definition of add*. Let’s prove this in Lean.

```
theorem add_zero (n : xnat) : n + zero = n :=
begin
unfold add
end
```

On the other hand, it is less obvious that `zero+n=n`. We need to prove this by induction. Let’s do it!

```
theorem zero_add (n : xnat) : zero + n = n :=
begin
induction n with t Ht,
refl,
unfold add,
rewrite [Ht],
end
```

I think it’s very hard to follow what is going on in the above code, unless you run the code in Lean and then put your cursor just before the `t` in `theorem` and move it up and down using the arrow keys. I’ll talk you through it — but here’s a link to everything we’ve done so far so you can follow along (remember you have to wait a few seconds to let Lean process the input before it all gets going — wait until it stops saying “running” at the top):

Online version of everything so far.

Put your cursor at the beginning of the line near the bottom that says `theorem zero_add...` and then move down using the arrow keys to see the proof in action. Here’s what’s happening.

Induction turns one goal into two — a base case and an inductive step (complete with inductive hypothesis, with variable `t` and hypothesis `Ht`). The base case we can just prove by definition (you could replace `refl` with `unfold add`, or even `dunfold add` followed by `refl`, but don’t forget the commas). The inductive step can be proved in two lines. First we unfold the definition of addition. Then we notice that substituting in the inductive hypothesis `Ht` will turn the goal into something of the form `x=x`, which Lean proves automatically. That substitution can be done with the `rewrite` command.
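To tie the walkthrough to the code, here is the same proof again with the goal at each step recorded as a comment (the goals are what I would expect Lean to display; I’ve renamed it `zero_add'` so it doesn’t clash with the version above):

```
theorem zero_add' (n : xnat) : zero + n = n :=
begin
induction n with t Ht,
-- base case: ⊢ zero + zero = zero, true by definition of add
refl,
-- inductive step: Ht : zero + t = t, ⊢ zero + succ t = succ t
unfold add,
-- goal is now ⊢ succ (zero + t) = succ t
rewrite [Ht],
-- rewriting with Ht leaves succ t = succ t, which is closed automatically
end
```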

So there are your four Lean commands: `induction`, `unfold`, `refl`, `rewrite`. We are going to be able to get a long way with these commands.

What next? We’re not quite ready for `a+b=b+a` yet, but it’s not far off. Why not try these exercises? They will get you there, and beyond (when I’ve written them).

**Exercise 1**

Can we prove “associativity of addition”? More precisely:

```
theorem add_assoc (a b c : xnat) : (a + b) + c = a + (b + c) :=
begin
admit
end
```

The `admit` there means “admit defeat”, i.e. give up and have Lean accept the goal without a proof. Can you delete that `admit` and replace it with an actual proof, by induction on `c`?

**Exercise 2**

This exercise shows you that the `rewrite` command can be used to apply previous theorems, as well as current hypotheses. It assumes that you already have proofs of `zero_add` and `add_zero` in your Lean session. Here is a link to an online session with everything set up just right.

Let’s prove that `zero + n = n + zero` for all `n` using `zero_add` and `add_zero`.

```
theorem zero_add_eq_add_zero (n : xnat) : zero + n = n + zero :=
begin
rewrite [zero_add],
admit
end
```

Replace `admit` with one line of code which finishes the job.

Bonus question: you can rewrite more than one theorem or hypothesis in one line, using e.g. `rewrite [A,B,C]`, or even `rw [A,B,C]` if you get tired of writing `rewrite`. Can you prove `zero_add_eq_add_zero` in one line?

**Exercise 3**

It’s nice to know that `succ n` is `one+n` and also `n+one`. I’ll prove the easy one:

```
theorem add_one_eq_succ (n : xnat) : n + one = succ n :=
begin
unfold one,
unfold add,
end
```

Now you prove the harder one.

```
theorem one_add_eq_succ (n : xnat) : one + n = succ n :=
begin
admit
end
```

**Exercise 4**

Can you prove

```
theorem add_comm (a b : xnat) : a + b = b + a :=
```

? If you try, you might run into the following issue. Recall that we have already proved the case `a=0`:

```
theorem zero_add_eq_add_zero (n : xnat) : zero + n = n + zero
```

However if you try proving `add_comm` by induction on `b`, the base case is the other way around: `a+zero=zero+a`. You might hence find that

```
rw [←zero_add_eq_add_zero],
```

is useful. To get that left-arrow use `\l` (then a space) in VS Code. Another way of doing it: `zero_add_eq_add_zero` is actually a function, which sends an xnat `n` to a proof of the fact that `zero+n=n+zero`. So if you do induction on `a` then one way of doing the base case is

```
exact zero_add_eq_add_zero b,
```

[`exact` means that you’re giving an exact proof] and if you’re doing induction on `b` then you need to switch around the sides of the equality, which you can do like this:

```
exact eq.symm (zero_add_eq_add_zero a),
```

[the `eq.symm` function changes a proof of `X=Y` to a proof of `Y=X`]. You don’t need to know about these things, I’m just explaining other approaches rather than using the `rewrite` command.

**Exercise 5**

Can you prove cancellation? I’ll start you off with a proof which uses the `split` command to break an `iff` into two implications.

```
theorem eq_iff_succ_eq_succ (a b : xnat) : succ a = succ b ↔ a = b :=
begin
split,
exact succ.inj,
assume H : a = b,
rw [H]
end
```

Now try cancellation:

```
theorem add_cancel_right (a b t : xnat) : a = b ↔ a+t = b+t :=
```

**Exercise 6**

Take a look at the definition of addition. Do you think you can define multiplication in a similar way?

```
definition mul : xnat → xnat → xnat
| n zero := ???
| n (succ p) := ???
```

Note: to get the right arrow try `\r` (then a space). Note that because addition is defined, we can use it to help us define multiplication. Can you also define notation `*` for multiplication? Ask for help if you can’t do this, because if you can’t do it then you won’t get any further. Look at how I did all this with addition. Check you got it right, either painstakingly with something like

```
example : one * one = one :=
begin
dunfold one,
dunfold mul,
dunfold add,
refl
end
```

or painlessly with

```
example : one * one = one :=
begin
refl
end
```

**Exercise 7**

How many of the following can you prove?

```
theorem mul_zero (a : xnat) : a * zero = zero := sorry
theorem zero_mul (a : xnat) : zero * a = zero := sorry
theorem mul_one (a : xnat) : a * one = a := sorry
theorem one_mul (a : xnat) : one * a = a := sorry
theorem right_distrib (a b c : xnat) : a * (b + c) = a * b + a * c := sorry
```

**Exercise 8**

If you’ve done all the ones above, here’s `left_distrib` as a gift:

```
theorem left_distrib (a b c : xnat) : (a + b) * c = a * c + b * c :=
begin
induction c with n Hn,
unfold mul,
refl,
rw [←add_one_eq_succ,right_distrib,Hn,right_distrib,right_distrib],
rw [mul_one,mul_one,mul_one],
rw [add_assoc,←add_assoc (b*n),add_comm (b*n),←add_assoc,←add_assoc,←add_assoc],
end
```

Check you understand how it works. If you do then you should be able to manage

```
theorem mul_assoc (a b c : xnat) : (a * b) * c = a * (b * c) := sorry
theorem mul_comm (a b : xnat) : a * b = b * a := sorry
```

**Exercise 9**

Let’s do inequalities!

```
definition lt : xnat → xnat → Prop
| zero zero := false
| (succ m) zero := false
| zero (succ p) := true
| (succ m) (succ p) := lt m p
notation a < b := lt a b
```
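To see the recursion in action: the last equation peels one `succ` off each side until an argument reaches `zero`. For instance (a quick sketch, assuming `one` and `two` are defined as before):

```
-- one < two unfolds to zero < one, which unfolds to true
example : one < two :=
begin
trivial -- trivial proves goals that unfold to true
end
```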

Can you see how that works? Can you prove inequality A1 that we assumed in lectures?

```
theorem inequality_A1 (a b t : xnat) : a < b → a + t < b + t := sorry
```

**Exercise 10**

For the ultimate challenge (boss level), how about

```
theorem inequality_A2 (a b c : xnat) : a < b → b < c → a < c := sorry
theorem inequality_A3 (a b : xnat) : (a < b ∨ a = b ∨ b < a)
∧ (a < b → ¬ (a = b))
∧ (a < b → ¬ (b < a))
∧ (a = b → ¬ (b < a)) := sorry
theorem inequality_A4 (a b : xnat) : zero < a → zero < b → zero < a * b := sorry
```

These are the four standard inequalities that we assumed about the real numbers in M1F. I found some of these quite hard.

**Further exercises**

Can you define even / odd? Can you prove Lemma 1.1 of M1F? (Recall: this said that `n^2` was even iff `n` was even. Maybe just use `n*n` for `n^2` for now.) Can you define a prime number? Can you define “a divides b”? Can you prove that every positive integer is uniquely the product of primes? Can you define powers? Can you prove basic things about powers like `pow x n * pow x m = pow x (n+m)` or `x*x=pow x 2`? Can you state Fermat’s Last Theorem in Lean? Nobody has ever proved Fermat’s Last Theorem in any computer proof verification system, and this would be a very interesting goal for computer proof verification systems in general.

Posted in Learning Lean

Lean’s core library has no support for rational or real numbers 😦

They are however in the maths library — or we can just make some fake real numbers and use them instead.

```
-- let's define the real numbers to be a number system which satisfies
-- the basic properties of the real numbers which we will need.
noncomputable theory

constant real : Type

@[instance] constant real_field : linear_ordered_field real

-- This piece of magic means that "real" now behaves a lot like
-- the real numbers. In particular we now have a bunch
-- of theorems:

example : ∀ x y : real, x * y = y * x := mul_comm

#check mul_assoc
```

At Imperial I’ve just been talking to the 1st years about taking a rational power of a positive real number. I guess we could set all this up in Lean.

```
open nat

definition natural_power : real → nat → real
| x 0 := 1
| x (n+1) := (natural_power x n) * x

theorem power_comm : ∀ x:real, ∀ m n:nat, natural_power x (m+n) = natural_power x m * natural_power x n :=
begin
admit
end
```

That’s a definition (by recursion) of `natural_power x n`, i.e. `x^n`, for `x` real and `n` a non-negative integer. It’s also a theorem which needs proving. We can prove that theorem by induction, I suspect. I’ll try it later on.
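For what it’s worth, here is how I would expect the induction on `n` to go, written out as comments only (a mathematical sketch, not checked in Lean):

```
-- Base case (n = 0):
--   natural_power x (m + 0) = natural_power x m                -- m + 0 = m
--                           = natural_power x m * 1            -- mul_one
--                           = natural_power x m * natural_power x 0
-- Inductive step (n = d + 1), with inductive hypothesis
--   Hd : natural_power x (m + d) = natural_power x m * natural_power x d :
--   natural_power x (m + (d + 1))
--     = natural_power x ((m + d) + 1)
--     = natural_power x (m + d) * x                            -- definition
--     = (natural_power x m * natural_power x d) * x            -- Hd
--     = natural_power x m * (natural_power x d * x)            -- mul_assoc
--     = natural_power x m * natural_power x (d + 1)            -- definition
```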

But what I really want is a rational power. We’re going to need existence of positive nth roots.

```
constant nth_root (x : real) (n : nat) : (x>0) → (n>0) → real

axiom is_nth_root (x : real) (n : nat) (Hx : x>0) (Hn : n>0) : natural_power (nth_root x n Hx Hn) n = x
```

Now we can consider trying to define rational powers!

```
definition rational_power_v0 (x : real) (n : nat) (d : nat) (Hx : x > 0) (Hd : d > 0) : real :=
natural_power (nth_root x d Hx Hd) n
```

This code is now up on Xena’s github. Let’s see what we can do with it!

Posted in Learning Lean, M1F

What have we managed to tell Xena so far?

So far I managed to explain to Xena (i.e. type up in Lean) solutions to all the questions on the first M1F example sheet, and some of the questions on the second sheet. I skipped some of sheet 1 Q5 and all of sheet 1 Q6 because Lean’s type theory as it stood couldn’t make sense of these questions. Kenny Lau implemented enough classical set theory in Lean to do the parts I’d skipped. So sheet 1 really is done. You can see the current state of the write-up here.

As for the second M1F example sheet, to do question 1 I needed the floor function and this was not implemented in Lean, but I now know what I’m doing (to a certain extent) so I could implement this function myself. I’ve solved sheet 2 questions 1 and 2, and also 3(a). To solve 3(b) I need to know that the square root of 3 exists, and unfortunately Lean does not have a square root function, so I am having to implement this myself. This is quite interesting stuff — I will prove later on in M1F that square roots exist using sups and infs and completeness of the reals, and this explicit proof translates rather neatly into Lean. So hopefully soon I will be able to convince Lean that the square root of r exists for r any positive real number. Once I’ve done this I think the rest of sheet 2 should be relatively straightforward. The current state of the solutions is here.

Kenny Lau has implemented enough of the theory of sigma fields in Lean to prove some of the basic statements that Prof McCoy has covered in M1S! His Lean write-up is here.

Looking ahead, the definition and basic properties of the complex numbers shouldn’t be too much trouble, although the stuff about cos(theta)+i.sin(theta) might be. The example sheet questions seem to rely on knowing stuff like what the sin of 45 degrees is, which will be tricky as these things will need to be evaluated algebraically rather than geometrically. Then after that we get a little more abstract and actually things will get a lot easier.

I was hoping I’d be able to do the example sheets as quickly as I was releasing them to the students, but I’m getting behind! Need to work harder…

Posted in Uncategorized

Lean uses something called type theory. This can be a little bit counterintuitive to mathematicians. For example there are lots of different kinds of 7. The core Lean library provides us with the natural number `7:nat` and the integer `7:int`. If you have access to the maths library then (after e.g. `import topology.real`, or wherever they have put the reals nowadays, I heard it’s moving to `analysis.real`) you have access to at least two further kinds of 7, namely `7:rat` and `7:real`, the rational number 7 and the real number 7. Mathematicians implicitly regard all of these 7’s as the same. Lean is a bit more wary about this kind of thing, although sometimes it will secretly change the type of something from nat to int to rational if it thinks that this is the only way to make things make any sense.

But I’m not here to talk about that, I’m here to talk about types. You can see the type of something with the `#check` command. Let’s take a look at some types. Note that anything after a `--` sign is a comment.

```
#check 4 -- nat
#check nat -- Type
#check (7:int) -- int
#check int -- Type
#check Type -- kind of interesting. But not relevant to us.
```

If you’re using VS Code and you can find a little icon in the top right hand corner of one window that looks like four thin straight lines and says “Info View : Display Messages (Ctrl+Shift+Alt+Enter)” when you hover over it, then either by clicking on it or by typing Ctrl+Shift+Alt+Enter you can open a new little window where you can see the answers to all these checks. Alternatively you can just hover your cursor over the `#check` to see the answer.

I think Type is a kind-of weird name for an *example* of a type, because there are types like `nat` which are still types but which are not equal to Type. I guess the point is that Type is quite a general kind of type. If something has type Type, I like to think of it as a set. For example 4 is a number, not a set, so its type is `nat` and not Type; but the set `nat` of all natural numbers is a set, and indeed its type is Type.

Here are some more interesting types.

```
variable f : nat -> nat -- a function from the naturals to the naturals
#check f -- some sort of function type.
#check f 5 -- it's got type nat.
```

Note that we haven’t even defined the function f! So we wouldn’t actually be able to evaluate it, but even though it’s not been defined we can still figure out where it’s going from and to. This is called “type checking” I think. It’s some sort of basic sanity test which you can do in quite complicated situations to check that everything has the type it’s supposed to have.

Now here’s one of the most interesting types that there is.

```
#check (2+2=4) -- Prop
#check (forall x y : nat, x = y) -- Prop
#check (forall x y : nat, x = x) -- Prop
#check forall x y z n : nat, not ((x>=1) ∧ (y>=1) ∧ (z>=1) ∧ (n>=3) ∧ (x^n+y^n=z^n)) -- Prop
```

A “Prop” is a very common type, and it’s the type that true/false statements have.

It’s very easy to construct things of type `nat`. For example any non-negative integer that you can think of has type `nat`.

We just saw above that it’s also quite easy to construct things of type `Prop`. For example `2+2=4` has type `Prop`, as does the formulation of Fermat’s Last Theorem above.

But it turns out that `(2+2=4)` is *also* a type!

So can we build something of type `(2+2=4)`?

Sure we can:

```
theorem t : (2+2=4) := by trivial
#check t
```

There’s a proof of the trivial theorem that 2+2=4. What this means in Lean is that `t` is actually a thing of type `2+2=4`, and it was constructed using `by trivial`. You could use `begin/end` instead:

```
theorem t2 : 2+2=4 :=
begin
trivial
end

#check t2
```

That’s a bit weird. Actually it’s not weird at all, it sort of makes things symmetric. For example `4` has type `nat` and `nat` has type `Type`, which is some sort of generic type that begins with a capital letter. Similarly a proof of 2+2=4 has type `2+2=4`, which has type `Prop`, which is some sort of generic type that begins with a capital letter.

Because `(forall x y : nat, x = y ) `

is simply not true, we can’t construct a proof of it. So there’s an example of a type for which we can’t construct any elements of that type! In fact there’s an even easier type with this property:

```
#check false
```

The type `false` is a `Prop` which has been specifically designed so that it is impossible to construct anything of that type. Very counterintuitive if you’re coming to this from a world like C++ or something, where types are actually designed so that you can build useful things of those types.
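One consequence: although you can never build a term of type `false`, if you hypothetically had one then you could prove anything from it (a sketch; `false.elim` turns a proof of `false` into a proof of any proposition at all):

```
example (h : false) : 2+2=5 := false.elim h
```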

The difference between a theorem and an example in Lean is that theorems name the objects they’re constructing (proofs of propositions) and examples don’t. So we can prove `2+2=4` a third time like this:

```
example : 2+2=4 := by trivial
```

and here what we did was that we constructed an element of type `2+2=4`, didn’t give it a name, and then threw it away.

When you are in the middle of a big proof, you have a bunch of hypotheses with names, and a goal. For example here’s a very elaborate proof of some easy fact.

```
example : forall x : nat, x <= x+x :=
begin
intro x,
have H : 0<=x,
apply nat.zero_le,
-- at this point one could say "H is the hypothesis that 0 <= x"
-- but one could also say that H is an object of type 0<=x
-- or in other words a proof that 0<=x.
have H2 : x+0<=x+x,
apply nat.add_le_add_left,
assumption,
have H3 : x+0 = x,
trivial,
rewrite H3 at H2,
assumption
end
```

If you step through the proof, you can see that extra hypotheses slowly appear — these are objects representing proofs of mathematical true/false statements.

Those of you who are paying attention might notice that Lean doesn’t distinguish between a mathematician’s “function arrow” → and their “implies sign” ⟹. This is because, to a computer scientist, they’re the same thing! For example the squaring function on the natural numbers is `f : nat -> nat`, and the input is a natural number (i.e. something of type nat) and the output is a natural number (i.e. something else of type nat). But if `P` and `Q` are Propositions, then a function `f : P -> Q` takes something of type `P` and returns something of type `Q`. Now something of type `P` is a proof of `P`, and something of type `Q` is a proof of `Q`, and this means that a function `P->Q` is something which takes a proof of `P` and returns a proof of `Q` — so in particular it means that if `P` is true (a notion which I am going to conflate with “provable”) then `Q` must also be true! In particular we can really think of this function as a proof that P ⟹ Q.

This is called the Curry–Howard correspondence apparently.
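Here is the correspondence in a couple of lines of Lean (a sketch; the lambda is just an ordinary function definition):

```
-- a proof of P → Q is literally a function from proofs of P to proofs of Q,
-- so applying it to a proof of P is modus ponens:
example (P Q : Prop) (f : P → Q) (hP : P) : Q := f hP

-- and composing two such functions chains two implications:
example (P Q R : Prop) (f : P → Q) (g : Q → R) : P → R := λ hP, g (f hP)
```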

Posted in Learning Lean

Jeremy Avigad runs a Logic and Proof course at Carnegie Mellon, which uses Lean. He teaches Lean in a methodical way which might be perfect for some people. The link I posted is still a work in progress — I believe his course is running right now and some things aren’t quite finished, or need certain libraries which may or may not be installed by default — but the course certainly contains some very illustrative examples of how to use Lean to do some basic mathematics.

Note that this is a course running in a computer science department, so there is more emphasis on things like the lambda calculus than one would normally see in a mathematics course. On the other hand I am well aware that some readers of this blog are on a joint mathematics and computer science degree…

Posted in Learning Lean

I’ve solved Q2 and Q3 of M1F Sheet 1 and pushed the solutions to github. Q4 I somehow want to persuade Xena to do herself, but I don’t quite know the best way to do this. These questions were much easier than Q1.

It’s really hard learning this new language, but honestly I cannot stress enough that things which seemed so complicated to me two months ago now look much easier.

Thanks everyone who came this evening.

Posted in M1F