I am interested in teaching and outreach, and often get asked to give talks to schoolkids. I gave one today, to a room full of year 10 kids (aged 14-15), and I showed them Lean. This was the youngest group of students I’ve ever attempted to show Lean to. I decided to show them a proof that 2 + 2 = 4, starting from definitions of 2 and +. However by year 10 the students have not seen induction yet, so I started by showing them parts of a YouTube video with stuff in a line falling over (e.g. dominoes, humans). I then defined the naturals (I called them “numbers”) as an inductive type in Lean, showed how to define recursively the function which adds two to a number, and finally we proved 2 + 2 = 4 by unwrapping the definitions of 2 and + and so on.

I really didn’t know how it was going to go. But, to my relief, when we were doing the big calculation, I asked the audience to fill in the next line for all seven lines, and these kids managed to do it! I was very pleased.

Here are the pdf slides. In the actual talk the text appeared slowly (there were beamer pauses) and there were times when I spent a minute or two talking to the kids about what was going on.

On page 12 of the pdf I say “Let’s go and make some numbers using Lean” and this is what I coded up live. I finish by trying to formalise the statement that 2+2=4 but there’s a problem because although I’ve defined two and four I’ve not yet defined addition. We then go back to the pdf, and on page 15 I return to Lean and define addition like this. Then comes the moment of truth: the calculation on line 16 (which appears line by line in the actual presentation) and which I got them to do. Having made it through, I then showed how Lean could prove it with rfl, and claimed that this was because even though we’d only just taught it numbers it was already an expert in numbers. I then asked the audience what other kinds of maths they thought it was possible to teach Lean, and immediately one kid said “all of it”. I agreed. I then made some provocative statements about why people like my daughter should not be forced to learn about cosines, and then asked if there were any questions. The first one was “How do I learn more about this stuff?”. But I did feel that I’d lost plenty of people in the room.
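The code itself is behind the links above, but the mathematical content of the talk can be sketched in a few lines of Lean 3 (the names here are mine, not necessarily the ones used in the talk):

```lean
-- the "numbers" of the talk: an inductive type with a zero and a successor
inductive number
| zero : number
| succ (n : number) : number

open number

def two  : number := succ (succ zero)
def four : number := succ (succ (succ (succ zero)))

-- addition, defined by recursion on the second argument
def add : number → number → number
| m zero     := m
| m (succ n) := succ (add m n)

-- the moment of truth: both sides unwrap to the same term, so rfl works
example : add two two = four := rfl
```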

I think that age 14-15 is a little too young for this talk. These kids had self-selected — they were at Imperial for an open day (during a UK school holiday) so were probably interested in science. But I wonder if the material presented here would be a little more suited to kids a little older. I’m giving a couple more school talks in the future, so maybe I’ll find out soon enough.

Posted in Uncategorized

Lean in LaTeX

Patrick Massot wrote a formatter, which can turn Lean code into beautiful LaTeXxy html. The undergraduates and I have been formalising Imperial’s first real analysis course M1P1 in Lean. Put them together and you get a 21st century proof of the sandwich theorem.

That last link is a link to a proof of the theorem that says that if three real-valued sequences (a_n), (b_n), (c_n) with a_n\leq b_n\leq c_n satisfy a_n\to\ell and c_n\to\ell, then b_n\to\ell. If you click on a line in the proof then it opens up the Lean code (which is probably incomprehensible if you don’t know any Lean) but the beauty is that if you then click on one of the grey rectangles either side of that line of Lean code then you get to see the tactic state, which a mathematician might well find easy to understand because it is written in standard mathematical language. This way a mathematician who does not know any Lean can still step through the formal proof, and can figure out how to get from each line to the next themselves, because each individual step is typically very short.

Ultimately this means that we can write formally-verified course notes for the Imperial analysis course which the first year undergraduates are currently attending, and the notes could be of use even to students who do not know the technicalities of how to use Lean and do not have Lean installed on the computers they have access to.

Posted in Imperial, undergrad maths

A word on BIDMAS

Maybe it’s not called that in other countries, but in the UK BIDMAS (which I swear was “BODMAS” when I was at school, or maybe I just went to a weird school) was the rule that was used to figure out how to work out what things like 1+2*3-4/(5+6-7*8-9) meant. The BIDMAS rule was that B for Brackets came first, and then there’s this I which might stand for “indices” i.e. ^, and then division, multiplication, addition and subtraction. There seemed to be some disagreement amongst my own children (all of whom have learnt this at school over the last few years) as to whether the “DM” in BIDMAS meant that division had to be done before multiplication, or whether division and multiplication had equal “strength”. I ultimately realised that I was not entirely sure I understood this BIDMAS thing myself! I am pretty sure that BIDMAS doesn’t mean that addition must be done before subtraction, otherwise a-b+c would come out to be a-(b+c), whereas I am almost certain that it is supposed to mean (a-b)+c. [Interestingly though, it seemed to me that doing all divisions before all multiplications, from left to right, seems to give the same answer as treating division and multiplication with the same weight.]

We see from this discussion that these pieces of notation like + and *, and of course other symbols too, had better also have some sort of BIDMAS rule. Furthermore, we had also better know whether we evaluate them from left to right or from right to left, because sometimes this matters and sometimes it depends on the operator! For example 10-5-4 is (10-5)-4=1 and not 10-(5-4)=9, however if you type 2^1^2 into your favourite computer algebra package (or 2**1**2 if you’re using python) then the chances are you’re going to get 2^(1^2)=2^1=2 rather than (2^1)^2=2^2=4. Indeed, you can check that Lean thinks it’s 2:

#eval 2 ^ 1 ^ 2 -- Lean says 2
#reduce 2 ^ 1 ^ 2 -- still 2

How does this actually work then?

For any piece of notation like -, we need to know two things: First, its power (in the sense that BIDMAS says that * has more power than -; the power of an operator is a natural number) and secondly we need to know its associativity (we need to know whether a-b-c means (a-b)-c or a-(b-c)). It turns out that this is all you need to know. Lean’s parser is a so-called Pratt parser but all you really need to know is how Lean figures out the power (technically called the “left binding power”) and associativity of all the operators you run into. These operators include functions like nat.add, and notation like + (note in particular that even though nat.add 2 3 means the same as 2 + 3, the parser treats them differently). Note also that even though (a+b)+c=a+(b+c), Lean still has to know exactly which one is meant by a+b+c; the fact that they are equal is a theorem rather than being true by definition.
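Both points can be checked directly: the first example below is true by rfl precisely because a+b+c is parsed as (a+b)+c, while the second needs the theorem add_assoc.

```lean
example (a b c : ℕ) : a + b + c = (a + b) + c := rfl             -- same parse, so definitional
example (a b c : ℕ) : a + b + c = a + (b + c) := add_assoc a b c -- a theorem, not a definition
```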

The first rule you need to know is that functions have very high power — they eat the next thing they see and that’s it.

#eval nat.add 3 nat.add 4 5 -- error

Functions are greedy. They eat the next thing presented to them and only afterwards check that it has the right flavour. The first nat.add eats 3, and then it eats the second nat.add and chokes because nat.add isn’t a natural number. You have to fix it with brackets, which have the highest power of all:

#eval nat.add 3 (nat.add 4 5) -- evaluates to 12

So that’s brackets and functions. Now what about notation? When notation is defined, its power is also defined. You can see the power of some notation by looking at the numbers printed out with Lean’s #print notation command. For example

#print notation +
-- _ `+`:65 _:65 := has_add.add #1 #0

It’s the 65’s we need to understand here. The first number 65 tells you that `+` has power 65. The second number 65 tells you that + is left associative, that is, a+b+c=(a+b)+c by definition. Why? Well, in practice that second number is either equal to the first one (meaning the notation is left associative) or one less (meaning right associative). For example

#print notation ^
-- _ `^`:80 _:79 := has_pow.pow #1 #0

The power notation has a higher power than addition, as expected, but 79 is one less than 80, meaning that it’s right associative, as we saw above. Functions and brackets have power over 1000, by the way.
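In code, both of the following are proofs by rfl, because in each case the bracketed side is literally how Lean parses the unbracketed side:

```lean
example : 2 ^ 1 ^ 2 = 2 ^ (1 ^ 2) := rfl             -- ^ is right associative
example (a b c : ℕ) : a * b * c = (a * b) * c := rfl -- * is left associative
```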

To learn more about left binding power and how Pratt parsers really work, e.g. to learn what the second number actually means and why it’s either equal to or one less than the first number, try the links on the Wikipedia website. But for Lean users who just want to figure out how things are being parsed, this trick should pretty much always tell you what you need to know. It’s worth just typing

#print notation

and looking at all the output — you’ll learn that ^ has power 80, * and / have power 70, and + and - have power 65, which tells you that Lean knows BIDMAS (and that addition and subtraction do indeed have the same power). Interestingly, we also learn that = has power 50. Had it ever occurred to you that = also needed a binding power? Lean also needs to know what P = Q = R means! It turns out that = is left associative, so 1 = 2 = (1 = 2) is true, by reflexivity.
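That last claim can be checked in Lean: because = is left associative, the statement below parses as (1 = 2) = (1 = 2), and rfl closes it.

```lean
example : 1 = 2 = (1 = 2) := rfl
```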

If anyone has run into Lean’s (or Haskell’s) $ as an alternative for brackets, the way it works is that f $ x is just defined to be f x, but $ has power 1, which is so low that everything else gets evaluated first. For example

#eval nat.add 3 $ nat.add 4 5

works! Lean evaluates the $ at the very end, after it’s figured out that nat.add 4 5 is 9, so this time the addition succeeds.

The one gotcha I have run into is that the comma in ∀ (x : X), P x also has a super-low power (it has power 0 in fact), meaning that the innocent-looking

example (P Q : ℕ → Prop) :
∀ x, P x ∧ Q x → ∀ x, Q x ∧ P x := sorry

doesn’t mean what I would instinctively think it meant: Lean brackets it as ∀ x, (P x ∧ Q x → ∀ x, Q x ∧ P x), because → has power 25, bigger than the power of the comma. To get this:

example (P Q : ℕ → Prop) :
(∀ x, P x ∧ Q x) → ∀ x, Q x ∧ P x := sorry

you have to add those brackets in explicitly.

Posted in Learning Lean, Uncategorized

Column addition

Lean has two copies of basic number types such as the naturals or integers. The reason is that Lean users might want to do two completely different things with numbers such as these. Some people might want to prove theorems about numbers, such as associativity of addition, or Fermat’s Last Theorem. Other people might want to do actual computations with numbers, for example proving that 617 is prime or figuring out 10000 * 10000. One thing I learnt from Seul Baek at Lean Together 2019 is that these two concepts fight against each other! Computer scientists will understand this well, but as a mathematician I am only just coming to terms with it.

As a mathematician I really like Lean’s definition of the natural numbers:

inductive nat
| zero : nat
| succ (n : nat) : nat

[and I should say in passing that as someone who has now looked at a lot of web pages containing Lean code, I wish these code blocks looked better in wordpress].

The advantage of this definition is that if you are trying to prove a theorem about natural numbers, then applying the built-in induction tactic to the variable in question reduces the question into exactly the problems you expect — the base case and the inductive step. Actually the real advantage is that you can use the equation compiler to make definitions by recursion, and it just works.

But apparently computer scientists are not particularly fond of this data type:

#reduce (10000 : ℕ) * 10000 -- deep recursion error

Apparently, making a term that looks like succ (succ (succ (... succ zero)))))...) with a hundred million succs can take up a lot of space.

Computer scientists hence prefer storing naturals in binary format. This code is essentially from mathlib’s data/num/basic.lean:

inductive pos_num : Type
| one  : pos_num
| bit0 : pos_num → pos_num
| bit1 : pos_num → pos_num

inductive num : Type
| zero  : num
| pos   : pos_num → num

A pos_num is a finite string of binary digits starting with a 1 (e.g. 6 is 110 in binary and bit0 (bit1 one) in Lean), and a num is either a pos_num or zero.
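To spell out that example: the outermost constructor records the last binary digit, so 6 = 110 in binary is built from the inside out (the name six below is mine):

```lean
-- 6 is even, and 6 / 2 = 3 = bit1 one, so 6 = bit0 (bit1 one)
def six : pos_num := pos_num.bit0 (pos_num.bit1 pos_num.one)
```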

Computer scientists can now implement computationally efficient versions of addition and multiplication on num, for example after defining succ on pos_num in the natural recursive way, they can make definitions such as

  def add : pos_num → pos_num → pos_num
  | 1        b        := succ b
  | a        1        := succ a
  | (bit0 a) (bit0 b) := bit0 (add a b)
  | (bit1 a) (bit1 b) := bit0 (succ (add a b))
  | (bit0 a) (bit1 b) := bit1 (add a b)
  | (bit1 a) (bit0 b) := bit1 (add a b)

No surprises here — addition is defined by recursion on length of binary expansion, and in the obvious way. Multiplication is also defined, basically as long multiplication:

  protected def mul (a : pos_num) : pos_num → pos_num
  | 1        := a
  | (bit0 b) := bit0 (mul b)
  | (bit1 b) := bit0 (mul b) + a
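As a sanity check of the addition above (assuming these definitions are in scope, with the literal 1 denoting one and succ defined as described), 2 + 2 = 4 becomes 10 + 10 = 100 in binary, and falls straight out of the bit0/bit0 case:

```lean
-- add (bit0 1) (bit0 1) = bit0 (add 1 1) = bit0 (succ 1) = bit0 (bit0 1), i.e. 4
example : add (bit0 1) (bit0 1) = bit0 (bit0 1) := rfl
```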

OK so great, now we can do ten thousand squared:

import data.num.basic

#reduce (10000 : pos_num) * 10000 -- ugly output corresponding to 10^8 in binary

But actually we have to be careful here — what if I have made a typo in my definition of add on pos_num? I had better check that my definitions of add on nat and pos_num coincide! To put it another way — we need to prove that column addition is actually a valid algorithm for adding two numbers! Without this, Lean will not be able to prove anything about nats using the computationally efficient num.

This boils down to constructing maps between nat and num and verifying that these maps commute with addition.

To define a map from nat to num one has to “put a nat structure on num”, i.e. define zero and succ for num. This is not hard. To define the map in the other direction, one has to define bit0 and bit1 on nat, and this isn’t hard either.
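A minimal sketch of the first map (the name to_num is mine, and I am assuming succ has already been defined on num):

```lean
-- "put a nat structure on num": send 0 to zero and n+1 to succ of the image of n
def to_num : ℕ → num
| 0       := num.zero
| (n + 1) := num.succ (to_num n)
```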

But to prove that these maps are bijections is already some work. It suffices to prove that the composition in both ways is the identity. I tried this myself in Lean and the harder way was to show that if you start with a num then go to nat and back, you end up where you started. The problem with nums is that the version of induction you have is induction on length of binary representation, which is not so well-behaved; you first have to prove lemmas about the maps from num to nat and nat to num commuting with succ, bit0 and bit1.

After this, one wants to prove that the computer science definition of + is the same as the mathematics one. I jumped straight in, and soon I ran into a goal of the form (a+1)+1=a+(1+1) on num, so I figured I’d prove associativity for pos_num in general (after all, I’d already proved it for nat):

theorem challenge (a b c : pos_num) : (a + b) + c = a + (b + c) := sorry

But this turns out to be quite subtle! I would be interested in hearing from anyone who manages to do this directly. The inductive hypotheses you generate from induction on pos_num initially seem to be useless. Here is a link to the Lean web editor where you can try it yourself (or just cut and paste everything into VS Code if you have Lean installed locally).

I talked to computery-minded people at Lean Together and they were unanimous: don’t prove this directly! Prove associativity of nat, prove that the addition on nat and num agree, and deduce associativity on num from associativity on nat. I had actually rolled my own nat for this exercise, and all of a sudden I had to prove things like (a+b+1)+(a+b+1)=(a+a+1)+(b+b+1) on my nat with no high-powered tactics like ring available to me 🙂 Lucky I had a journey from Amsterdam back to London to do all this stuff on, and I already knew how to prove commutativity and associativity of addition on nat. Even more luckily, associativity of addition on num was not needed to show that addition on nat and num coincided. Whilst I do not really understand the details, I seemed to need associativity for num when proving that addition commuted with the map from nat to num, but when I gave up on this approach and instead tried to prove that addition commuted with the map from num to nat everything came together. I had already proven that these maps were bijections, so I could deduce that addition commuted with the map from nat to num from previous results.

The moral

When your schoolteacher taught you column addition back when you were a kid, you were wise not to ask them why column addition was associative, because the direct proof seems tricky!

PS it’s not impossible though 🙂

Posted in Learning Lean, number theory

Learning Lean by example

I am interested in Lean for lots of reasons, and one of these reasons is that I am convinced that somehow it can be used to teach university mathematics students about proof in a new way. However I have run into two problems in practice.

First, Lean has a steep learning curve, at least if you go about learning it the way I did, which was by trying to do undergraduate maths example sheets in Lean; the main problem is that the questions are in the wrong order, and one is constantly running into things which are easy in maths but hard in Lean (usually because the infrastructure is not yet there).

The second problem is that the resources are not yet there; the existing Lean literature is very good, but doesn’t ever actually get on to doing mathematics; it spends most of its time talking about types and the equation compiler and so on, rather than actually working with fields like the real numbers — perhaps this is unsurprising, because the teaching literature tends to stick to core Lean whereas the real numbers are defined in the maths library.

I am going to change this by writing teaching material for mathematicians, but this will take time; I plan on working hard on it this term. I started by going through the material which I have generated whilst learning Lean myself, and some of this I now realise is quite good for beginners. I am just going to post some of this stuff on this blog and tag it “Learning Lean” to see if it’s of any help to anyone.

Here is some code which defines two inductive propositions on natural numbers. The first one is “I am even”. It’s defined like this — 0 is even, and if `n` is even then `n+2` is even; and that’s it. One can really work with this definition; I give some examples in the code below.

The second inductive proposition is “I satisfy the Collatz conjecture” (otherwise known as the 3n+1 problem). Recall that this conjecture says that if you start with any positive integer, and then halve it if it’s even, or multiply it by 3 and add 1 if it’s odd, then eventually you’ll get to 1. It’s cute, but not hard to see, that one can define this as an inductive proposition. I goof around with it a little in the below. I have documented the code; one can just cut and paste it into any Lean session and one doesn’t even need the maths library.

Click here to run the below code with the Lean Web editor (NB I have problems with firefox, but it works in Chrome for me). Or just cut and paste it into a Lean session.

-- Inductive propositions.

-- Example 1: even numbers.

-- inductive definition of "is_even" predicate on the natural numbers
inductive is_even : ℕ → Prop 
| even0 : is_even 0 
| evenSS : ∀ n : ℕ, is_even n → is_even (n + 2)

open is_even  

theorem four_is_even : is_even 4 :=
begin
  -- goal now `is_even 4`
  apply evenSS,
  -- goal now `is_even 2`
  apply evenSS,
  -- goal now `is_even 0`
  exact even0
end

-- can do it with functions too
theorem six_is_even : is_even 6 := evenSS 4 $ evenSS 2 $ evenSS 0 even0

theorem one_is_not_even : ¬ (is_even 1) :=
begin
  intro H,
  -- H : is_even 1
  -- clearly H can not have been proved using either even0 or evenSS
  -- so when we do this
  cases H,
  -- there are no cases left, so the proof is over!
end

-- cases can also be used to pull down higher odd numbers
theorem five_is_not_even : ¬ (is_even 5) :=
begin
  intro H5,
  cases H5 with _ H3,
  cases H3 with _ H1,
  cases H1,
end

-- example of usage
lemma even_iff_SS_even (n : ℕ) : is_even n ↔ is_even (n + 2) :=
begin
  split,
  { -- easy way
    exact evenSS n },
  { -- other way a bit trickier
    intro HSSn,
    cases HSSn with _ Hn,
    exact Hn }
end

-- Example 2: the Collatz conjecture.

/-- collatz n means the collatz conjecture is true for n -/
inductive collatz : ℕ → Prop
| coll0 : collatz 0
| coll1 : collatz 1
| coll_even : ∀ n : ℕ, collatz n → collatz (2 * n)
| coll_odd : ∀ n : ℕ , collatz (6 * n + 4) → collatz (2 * n + 1)

open collatz

theorem coll10 : collatz 10 :=
begin
  apply coll_even 5,
  apply coll_odd 2,
  apply coll_even 8,
  apply coll_even 4,
  apply coll_even 2,
  apply coll_even 1,
  exact coll1
end

theorem coll10' : collatz 10 :=
  coll_even 5 $ coll_odd 2 $ coll_even 8 $ coll_even 4 $ coll_even 2 $ coll_even 1 coll1

-- if m is odd then 3m+1 is always even, so we can go from m=2n+1 to 6n+4 and then to 3n+2 in one step.
lemma coll_odd' (n : ℕ) : collatz (3 * n + 2) → collatz (2 * n + 1) :=
begin
  intro H3n2,
  apply coll_odd,
  -- unfortunately applying coll_even will tell us about collatz (2 * (3 * n + 2))
  -- rather than collatz (6 * n + 4), so let's fix this
  rw (show 6 * n + 4 = 2 * (3 * n + 2), by rw [mul_add,←mul_assoc];refl),
  -- with tactic.ring imported from mathlib this line could just be
  --  rw (show 6 * n + 4 = 2 * (3 * n + 2), by ring),
  apply coll_even,
  exact H3n2
end

-- now we can go a bit quicker
theorem coll10'' : collatz 10 :=
begin
  apply coll_even 5,
  apply coll_odd' 2,
  apply coll_even 4,
  apply coll_even 2,
  apply coll_even 1,
  exact coll1
end

-- Of course one should really be using a home-grown tactic to figure out which
-- of coll_even, coll_odd and coll1 to apply next.

/- this is the Collatz conjecture -/
theorem collatz_conjecture : ∀ n : ℕ, collatz n := sorry
-- Nobody knows how to prove this.

Posted in Learning Lean

Blue-eyed islanders (guest post)

Thanks very much indeed to Chris Hughes, who has written up some thoughts on the blue-eyed islanders problem. Before we start, here’s the history. I (KMB) heard about this problem a few years ago, and thought it was cute, but then forgot about it. Richard Thomas then ran into it in an old post on Terry Tao’s blog and suggested I put it onto an M1F example sheet. I did this, and knocked up a solution. Rohan Mitta then suggested that my solution was incomplete — my inductive argument did not prove that the blue-eyed islanders left on day 100, but only that they left on day at most 100. I agreed with him, and then realised that actually I found this sort of question hard to reason about (because I knew nothing about modal logic, it turned out). Chris then independently invented enough modal logic himself to put the question into Lean and gave what I now regard as a definitive solution to the problem.

Chris writes about a bunch of Lean code in his post; the code for his first proof is here and the code for the decidability proof (shorter code, takes longer to run though) is here. For maximising the chances of understanding, it’s probably best to look at this code in Lean whilst reading the post.

Here is Chris’ post.


A few people have asked me about how my Lean proof of the blue eyed islanders question on Kevin Buzzard’s M1F problem sheet works
[Question 6, on http://wwwf.imperial.ac.uk/~buzzard/maths/teaching/18Aut/M1F/exsht05.pdf]. I didn’t have a readable proof of this until now – I first wrote the proof when I was much less proficient at Lean, so in this blog post I present a more readable version of the proof. This is actually a proof of a slightly simplified version of the problem, since the brown eyed islanders do not have to leave the island in this version. I also strengthen the proof by proving that the blue-eyed islanders do not leave until day 100. This version still demonstrates the most interesting parts of the question.


The hardest part of this proof to understand is the statement of the problem. I defined the function island_rules to represent what the blue eyed islanders know. Given a day number d, the actual number of blue eyed islanders b, and a hypothesized number of blue eyed islanders B, it returns whether or not B is a possible number of blue eyed islanders from the blue eyed islanders’ perspective. If the only consistent value for B is b then the blue eyed islanders have deduced the number of blue eyed islanders, and thus worked out their own eye colour, and must leave the island. Given that the islanders are perfect logicians, they know exactly whether this function returns true or false for any given values (there is a philosophical question here about how perfect the logicians are, and whether they can solve undecidable propositions; for the purposes of this proof they can, although this problem is constructively true).

def island_rules : ℕ → ℕ → ℕ → Prop
| 0     b B := (B = b ∨ B = b - 1) ∧ 0 < B
| (d+1) b B := island_rules d b B ∧
((∀ C, island_rules d b C → C = b) ↔ ∀ C, island_rules d B C → C = B)

On day 0 there are two possibilities for the number of blue eyed islanders, b and b - 1, unless b = 1, in which case there is only one possibility, since it is also known that the actual number is greater than zero.

On subsequent days, a hypothesized value `B` is possible if and only if it was possible the previous day and it is consistent with whether or not islanders left the previous day. The statement ((∀ C, island_rules d b C → C = b) ↔ ∀ C, island_rules d B C → C = B) is the hardest part to understand. The left hand side is true if and only if the blue eyed islanders deduced their own eye colour the previous day. The right hand side is true if and only if the islanders would have left the previous day if the actual number of blue eyed islanders was B. Therefore the entire iff statement is true if and only if the actual events the previous day are consistent with what would have happened the previous day if B was the actual number of blue eyed islanders.

To illustrate how this works try substituting b = 2 into this function.

On day zero it returns island_rules 0 2 B := (B = 2 ∨ B = 1) ∧ 0 < B, so 1 and 2 are the only possibilities, and since there are two possibilities the islanders do not leave.

On day one it returns island_rules 0 2 B ∧ ((∀ C, island_rules 0 2 C → C = 2) ↔ ∀ C, island_rules 0 B C → C = B). island_rules 0 2 B := (B = 2 ∨ B = 1) ∧ 0 < B which equals B = 2 ∨ B = 1, so we know that the only possibilities are 1 and 2 for now. Now we can check which of these values is consistent with the second part of the proposition. Obviously B = 2 is consistent since substituting B = 2 leaves both sides of the iff statement the same.

Let’s try B = 1. Then the statement becomes (∀ C, island_rules 0 2 C → C = 2) ↔ ∀ C, island_rules 0 1 C → C = 1. We know island_rules 0 2 C is equivalent to the statement C = 2 ∨ C = 1, so the left hand side becomes ∀ C, C = 2 ∨ C = 1 → C = 2. This is clearly false; C = 1 is a counterexample.

Now look at the right hand side. By definition island_rules 0 1 C is equal to (C = 1 ∨ C = 0) ∧ 0 < C. Clearly this is equivalent to C = 1. So the right hand side becomes ∀ C, C = 1 → C = 1, which is true. This is because if the actual number of blue eyed islanders was 1, they would have left the previous day. So for B = 1 the iff statement is equivalent to false ↔ true, so B = 1 is not a possibility. Therefore the only possibility on day 1 (note the out-by-one error due to starting at day 0) is 2, so the blue eyed islanders will leave.

Now that we have defined the “rules”, the statement of the theorem proved is this. The left hand side indicates whether or not the blue eyed islanders leave.

lemma blue_eyed_islander {d b} (hb0 : 0 < b) : (∀ B, island_rules d b B → B = b) ↔ b ≤ d + 1


First we prove a trivial lemma, saying that the conditions on day zero hold on any other day:

lemma island_rules_zero_of_island_rules : ∀ {d b B : ℕ}, island_rules d b B → (B = b ∨ B = b - 1) ∧ 0 < B
| 0     b B h := h
| (d+1) b B h := island_rules_zero_of_island_rules h.left

Next we prove the first direction of the iff, that they leave by day b - 1. This proof is easier to read in a Lean editor.

theorem blue_eyed_islander_mp : ∀ {d b}, 0 < b → b ≤ d + 1 → ∀ B, island_rules d b B → B = b
| 0 b hb0 hdb B hir := begin
    have : b = 1, from le_antisymm hdb (succ_le_of_lt hb0),
    unfold island_rules at hir,
    cases hir.left with h h,
    { exact h },
    { simp [this, *, lt_irrefl] at * }
  end
| (d+1) b hb hdbB B hir := begin
    have : (B = b ∨ B = b - 1) ∧ 0 < B, from island_rules_zero_of_island_rules hir,
    unfold island_rules at hir,
    cases this.left with h h,
    { exact h },
    { /- the case B = b - 1 -/
      have hb0 : 0 < b - 1, from h ▸ this.right,
      have hb1 : 1 ≤ b, from le_trans (succ_le_of_lt hb0) (nat.sub_le_self _ _),
      have hdbB' : b - 1 ≤ d + 1, from nat.sub_le_right_iff_le_add.2 hdbB,
      /- by our induction hypothesis, they would have left on day d if the actual number was b - 1 -/
      have ih : ∀ C : ℕ, island_rules d B C → C = B, from h.symm ▸ blue_eyed_islander_mp hb0 hdbB',
      /- From the definition of island_rules, this means they left yesterday -/
      rw ← hir.right at ih,
      /- Slightly strange proof: even though we know B = b - 1 and we're trying to prove
        B = b, we don't find a contradiction, we just prove B = b directly -/
      exact ih B hir.left }
  end

Next we prove a trivial lemma about naturals that kept coming up:

theorem nat.sub_one_ne_self_of_pos {a : ℕ} (ha : 0 < a) : a - 1 ≠ a :=
ne_of_lt (nat.sub_lt_self ha dec_trivial)

Next we prove the other direction of the if and only if. This is the harder direction, and it has been stated in a slightly different, but equivalent way to make it easier to prove.

lemma blue_eyed_islander_mpr : ∀ {d b}, 0 < b → d + 1 < b → ∀ B, island_rules d b B ↔ (B = b ∨ B = b - 1)
| 0     b hb0 hdb B := begin
  unfold island_rules,
  split,
  { exact λ h, h.left },
  { assume hbB,
    split,
    { exact hbB },
    { cases hbB with hbB hbB,
      { exact hbB.symm ▸ hb0 },
      { exact hbB.symm ▸ nat.sub_pos_of_lt hdb } } }
end
| (succ d) b hb0 hdb B := begin
  split,
  { exact λ h, (island_rules_zero_of_island_rules h).left },
  { assume hbB,
    have hb10 : 0 < b - 1, from nat.sub_pos_of_lt (lt_trans dec_trivial hdb),
    have hdb1 : d + 1 < b - 1, from nat.lt_sub_right_of_add_lt hdb,
    /- Use our induction hypothesis twice. For both possible values of B, the islanders
      did not leave the previous day. This means we have no new information -/
    have ihb : ∀ B : ℕ, island_rules d b B ↔ B = b ∨ B = b - 1,
      from blue_eyed_islander_mpr hb0 (lt_trans (lt_succ_self _) hdb),
    have ihb1 : ∀ B : ℕ, island_rules d (b - 1) B ↔ B = b - 1 ∨ B = b - 1 - 1,
      from blue_eyed_islander_mpr hb10 hdb1,
    unfold island_rules,
    split,
    { rw ihb,
      exact hbB },
    /- the interesting part of the proof starts here, we have to prove that either value of B is
      possible -/
    { cases hbB with hbB hbB,
      { /- case B = b is easy -/
        rw hbB },
      { /- case B = b - 1 is harder -/
        /- By our induction hypotheses it was impossible to deduce the value of `b` yesterday in both
          the real world, and for our hypothesized value of `B`, which is `b - 1`. This means both sides
          of the iff statement are false -/
        simp only [ihb, ihb1, hbB],
        /- After applying the induction hypothesis, it is now obvious that both sides are false,
          and the proof is easy from now on -/
        apply iff_of_false,
        { assume h,
          have : b - 1 = b, from h (b - 1) (or.inr rfl),
          exact nat.sub_one_ne_self_of_pos hb0 this },
        { assume h,
          have : b - 1 - 1 = b - 1, from h (b - 1 - 1) (or.inr rfl),
          exact nat.sub_one_ne_self_of_pos hb10 this } } } }
end

Proving the final lemma from the iff requires a small amount of work.

lemma blue_eyed_islander {d b} (hb0 : 0 < b) : (∀ B, island_rules d b B → B = b) ↔ b ≤ d + 1 :=
begin
  split,
  { assume h,
    refine le_of_not_gt (λ hbd : d + 1 < b, _),
    have := blue_eyed_islander_mpr hb0 hbd,
    have : b - 1 = b, from h (b - 1) ((this (b - 1)).mpr (or.inr rfl)),
    exact nat.sub_one_ne_self_of_pos hb0 this },
  { exact blue_eyed_islander_mp hb0 }
end

Alternative proof

In the proof above we proved it for any number of blue eyed islanders. However the problem sheet only asked us to prove it in the case where there were 100 blue eyed islanders, and there is a simple way to prove it in this case.

If we look at our function island_rules, it has type ℕ → ℕ → ℕ → Prop, which is actually the same as ℕ → ℕ → set ℕ, since set ℕ = (ℕ → Prop) by definition. The set it returns is in fact always a finite set, since it is always a subset of {b, b - 1}. There is a type in Lean called finset, for finite sets, which is not ℕ → Prop, but actually stores a list of all the elements, meaning you can check whether a number is in the finset or not, and even print a list of all the elements. We can rewrite our function island_rules to return a finset ℕ instead of a set ℕ.
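In fact the claim that set ℕ is ℕ → Prop is a definitional equality, so (a quick sanity check, not from the original post) Lean will accept it by rfl:

```lean
-- `set α` is defined to be `α → Prop`, so this is true by definition
example : set ℕ = (ℕ → Prop) := rfl
```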

def island_rules2 : ℕ → ℕ → finset ℕ
| 0     b := ({b - 1, b} : finset ℕ).filter (> 0)
| (d+1) b := (island_rules2 d b).filter
  (λ B, (∀ C ∈ island_rules2 d b, C = b) ↔ (∀ C ∈ island_rules2 d B, C = B))

filter is just a function that filters a finset according to a decidable predicate. Decidable means that, given a natural number, there is a Lean program for determining whether the predicate holds for that natural number or not. The predicate (> 0) is obviously decidable.
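Here is a toy illustration (my own example, not from the post) of filter on a concrete finset; because the predicate is decidable, #eval can actually compute the answer:

```lean
import data.finset

-- keep only the elements greater than 2; prints the sub-finset {3, 4, 5}
#eval ({1, 2, 3, 4, 5} : finset ℕ).filter (> 2)
```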

The second predicate (λ B, (∀ C ∈ island_rules2 d b, C = b) ↔ (∀ C ∈ island_rules2 d B, C = B)) is also decidable. An iff statement is decidable if both sides of the iff are decidable, and (∀ C ∈ island_rules2 d b, C = b) is decidable since island_rules2 d b is a finite set: Lean can determine this by checking it for all the elements of this finite set.
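As a small illustration (again my own toy example), a bounded ∀ over a finset is decidable, so to_bool can evaluate it to a boolean:

```lean
import data.finset

-- Lean decides a bounded ∀ over a finset by checking each element in turn
#eval to_bool (∀ C ∈ ({1, 2, 3} : finset ℕ), C < 5)  -- tt
```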

This means the function is computable, and we can evaluate the set of possibilities for any day and any number of blue eyed islanders.

#eval island_rules2 5 7 -- {6, 7}
#eval island_rules2 6 7 -- {7}

To prove the lemma we simply have to evaluate this function on every day less than 100, with one hundred islanders. After providing a few more decidable instances, we can prove the lemma with dec_trivial. This proof takes a long time for Lean to check.

lemma island_rules_iff (d : ℕ) : ∀ b B, island_rules d b B ↔ B ∈ island_rules2 d b :=
by induction d; simp [island_rules, island_rules2, *]; finish

instance (d b B) : decidable (island_rules d b B) :=
decidable_of_iff _ (island_rules_iff _ _ _).symm

lemma island_rules_iff' (d : ℕ) : ∀ b, (∀ B, island_rules d b B → B = b) ↔ (∀ B ∈ island_rules2 d b, B = b) :=
by simp [island_rules_iff]

instance dec2 : decidable_rel (λ d b, ∀ B, island_rules d b B → B = b) :=
λ d b, decidable_of_iff _ (island_rules_iff' d b).symm

lemma blue_eyed_islander : ∀ d < 100, (∀ B, island_rules d 100 B → B = 100) ↔ d = 99 :=
dec_trivial
Posted in Learning Lean, M1F, undergrad maths

Maths undergraduate example sheets

I’m teaching a course called M1F this term — the basic “introduction to proof” course which all first year maths (and joint maths/computing) students have to take at Imperial College London. The course is supposed to be managed via our VLE, but in practice this means that no other staff members can see what’s going on, so I also maintain a web page, which is here. The problem sheets will slowly appear over the course of the term.

One thing which is kind of obvious if you know about this sort of thing, but somehow I still find striking, is how incredibly varied in difficulty the problems are if you try to solve them in Lean. There seems to be barely any correlation between how difficult the questions are for a human and how difficult they are in Lean. Because of the way type theory works, questions involving integers, rationals and reals can get quite messy in Lean. Trivial observations such as “this integer is greater than 2 and less than 4, so it’s 3” can be hard to prove in Lean, especially if you don’t know how to begin.
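For example, here is a sketch (my own, and assuming mathlib’s omega tactic for linear arithmetic over ℤ) of what that “trivial observation” looks like as a Lean goal; without some such tactic it takes genuine work:

```lean
import tactic.omega

-- obvious to a human: an integer strictly between 2 and 4 must be 3
example (n : ℤ) (h1 : 2 < n) (h2 : n < 4) : n = 3 := by omega
```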

On the other hand, one thing I’ve noticed this year as I work through the questions in Lean for the second time in my life, is how much easier they are than the first time I tried them (this time last year). This is for two reasons. Firstly it’s because, obviously, I have a lot more Lean experience this time around. But secondly it’s really noticeable how many more tools there are — some are there because Mario Carneiro and others reacted to my frustration as a mathematician trying to use Lean and wrote specific tools in order to make my life easier, and some are there because members of the community are just writing tools anyway. In particular Rob Lewis wrote a tactic called linarith which tries to solve basic linear inequalities and I just noticed yesterday how much easier this can make my life. Here’s an example (it’s sheet 2 question 2).

A “largest element” of a set S\subseteq\mathbb{R} is an element s\in S which is greater than or equal to every element of S. One of the questions I ask the students is to prove that the open interval (0,1) has no largest element (exercise for students: why isn’t 0.999999\dots the largest element?).

The mathematical idea behind the solution is this: if m\in(0,1) is an element, then it’s not the largest element because the average \frac{1+m}{2} of m and 1 is larger. Looking back at the proof I wrote last year, it was 25 lines long and used exotic inbuilt theorems such as div_lt_of_mul_lt_of_pos, the statement that c > 0 and b < ac implies b / c < a (this is not surprising — one has to prove \frac{1+m}{2}<1 somehow…).
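Spelling out the two rearrangements one needs (both reduce to the hypothesis m < 1):

```latex
m < \tfrac{1+m}{2} \iff 2m < 1+m \iff m < 1,
\qquad
\tfrac{1+m}{2} < 1 \iff 1+m < 2 \iff m < 1.
```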

This year I'm putting up the Lean solutions in a github repository, and this time the proof looks like this:

import data.real.basic
import tactic.linarith

def open_zero_one := {x : ℝ | 0 < x ∧ x < 1}

-- open_zero_one has no maximal element
theorem Q2 : ¬ (∃ m : ℝ, m ∈ open_zero_one ∧ 
                  ∀ x : ℝ, x ∈ open_zero_one → x ≤ m) :=
begin
  rintro ⟨m,⟨H1,H2⟩,Hx⟩,
  -- now apply max hypothesis to (m + 1) / 2.
  have H := Hx ((m + 1) / 2),
  -- Can only use the hypothesis if we know (m + 1) / 2 is in the interval.
  have Hav : (m + 1) / 2 ∈ open_zero_one,
    split,
    { -- now 0 < m is a hypothesis and 0 < (m + 1) / 2 is the goal
      linarith }, -- and this tactic does it
    { -- similarly hypothesis m < 1 implies goal (m + 1) / 2 < 1
      linarith },
  -- now use that m is supposed to be the max.
  have Hwrong := H Hav,
  -- hypothesis Hwrong (m + 1) / 2 <= m contradicts hypothesis m < 1.
  -- Contradictory hypos means we can even prove a false goal.
  linarith
end

The proof is eight lines long (not counting the comment lines). Mario’s rintro tactic does all the introductory stuff and Rob’s linarith does all the trivial inequality argument; pretty much the only thing we have to do “manually” is have the idea that \frac{m+1}{2} is relevant.

Posted in Imperial, Learning Lean, M1F, tactics, undergrad maths