In equality part 1 I talked about the difference between equality-because-of-a-theorem and equality-by-definition, with the uncomfortable example of `n + 0 = n` and `0 + n = n`; when you look at the definition of addition, you find that the former is true by definition and the latter needs a proof (by induction on `n`).
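For concreteness, here is a sketch of that induction in Lean (the names are my own, and note that the base case is `refl` precisely because `0 + 0 = 0` is true by definition):

```
theorem zero_add' (n : ℕ) : 0 + n = n :=
begin
  induction n with d hd,
  { refl },  -- `0 + 0 = 0` is true by definition
  { -- `0 + nat.succ d` unfolds to `nat.succ (0 + d)` by definition,
    -- so we may change the goal to this definitionally equal form:
    show nat.succ (0 + d) = nat.succ d,
    rw hd }
end
```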

But there is an even more picky version of equality, which is *syntactic equality*. Two terms are syntactically equal if, basically, you have to press the same buttons on your keyboard in the same order to make them. As a non-example, `n + 0` and `n`, even though they’re equal by definition, are not syntactically equal. In fact a great example of a term syntactically equal to `n + 0` is `n + 0`, and there aren’t too many more examples.

Actually, syntactic equality doesn’t have to be literally the same keypresses. There are two things you can get away with. The first is notation, because notation is just dealt with by Lean’s parser and prettyprinter — it’s some extra layer between Lean and humans, made to make our lives a bit easier. So if `n : ℕ` then the easy-on-the-eye `n + 0` will be syntactically equivalent to `n + has_zero.zero ℕ`, which is what the notation means. And also, Lean is not too fussed about the names of bound variables — for example `∃ n : ℕ, n < 2` is syntactically equivalent to `∃ w : ℕ, w < 2`. But that’s it. Thinking about equality in fact made me appreciate the difference between notation and definitions — for example the zero for the natural numbers, `0`, is notation for `has_zero.zero ℕ`, which is defined to be `nat.zero`, but `has_zero.zero ℕ` and `nat.zero` are not syntactically equal, just definitionally equal.
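Both facts can be checked in Lean; a quick sketch (in Lean 3 syntax, where `has_zero.zero` takes the type explicitly):

```
-- `0` is notation for `has_zero.zero ℕ`, so this is essentially
-- syntactic reflexivity:
example : (0 : ℕ) = has_zero.zero ℕ := rfl
-- `has_zero.zero ℕ` and `nat.zero` are only definitionally equal,
-- but that is still enough for `rfl` to close the goal:
example : has_zero.zero ℕ = nat.zero := rfl
```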

Once I’d started getting the hang of definitional equality, and realised that in many cases Lean simply didn’t care if you changed a term to a definitionally equal term, I started to get the impression that definitional equality was “all that mattered”. If you were trying to prove an equality, you applied theorems until the two sides were definitionally equal, and then proved the theorem by `rfl`. But in practice things were not so easy. The first thing I learnt was that `rw` will not work with terms that are merely definitionally equal. If you’re in tactic mode with a hypothesis `h : X = Y` and your goal is `X' = Z`, where `X'` is definitionally equal to `X`, then `rw h` will fail. You first have to use `show X = Z` to change the goal into a form in which `X` appears syntactically, and then the rewrite will succeed.
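Here is a minimal illustration, reusing the `n + 0 = n` example from before (the names are just for the sketch):

```
example (X Y : ℕ) (h : X = Y) : X + 0 = Y :=
begin
  -- `rw h` would work here, because `X` literally appears in the goal.
  -- But turn it around: suppose the hypothesis mentions `X + 0` instead.
  sorry
end

example (X Y : ℕ) (h : X + 0 = Y) : X = Y :=
begin
  -- `rw h` fails here: `rw` looks for the syntactic term `X + 0`
  -- in the goal, and the goal only contains `X`.
  show X + 0 = Y,  -- change the goal to a definitionally equal form
  rw h             -- now the rewrite succeeds and closes the goal
end
```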

That last para I’ve known for a while, but it was only when reading through `data/complex/basic.lean` in Lean’s maths library this week that it dawned on me that the same is true for `simp`. I have never really used `simp` much in my Lean code because I always felt that I hadn’t really got the hang of it. What made the penny drop was that I was developing the theory of complex numbers myself, given the real numbers, and proving that the complex numbers were a ring. How might this code look? Well first you import the reals, and then define the complexes as being a pair of reals.

```
-- import the reals...
import data.real.basic
-- ...and now we can build the complex numbers as an ordered pair of reals.
structure complex : Type :=
(re : ℝ) (im : ℝ)
notation `ℂ` := complex
```

Now let’s think about what needs to be done. To prove the ring axioms for the complexes, e.g. `a * (b + c) = a * b + a * c`, we need to prove that two complex numbers are equal. We can prove this by showing that their real parts and imaginary parts are equal. Checking that the real parts are equal just involves checking that some real number made from sums and products of the real and imaginary parts of `a`, `b` and `c` equals some other such number, and the `ring` tactic does this. So first we need to define things like addition, multiplication, negation, zero and one, something like this:

`definition add (z w : ℂ) : ℂ := ⟨z.re + w.re, z.im + w.im⟩`

and so on, and then to check stuff like `a * (b + c) = a * b + a * c` we will want Lean to prove `(a * (b + c)).re = (a * b + a * c).re`. Now the left hand side is *definitionally equal* to `a.re * (b.re + c.re) + -(a.im * (b.im + c.im))` and the right hand side is definitionally equal to `a.re * b.re + (a.re * c.re + (-(a.im * b.im) + -(a.im * c.im)))`, and if we can get the left and right hand sides into this expanded form then the `ring` tactic will immediately solve our goal.
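Assuming the operations have been defined componentwise as above and hooked up to the `has_add`/`has_mul`/`has_neg` notation, Lean will accept that definitional equality directly; something like this should be closed by `rfl`:

```
-- the real part of `a * (b + c)`, fully unfolded: true by definition
example (a b c : ℂ) :
  (a * (b + c)).re = a.re * (b.re + c.re) + -(a.im * (b.im + c.im)) := rfl
```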

But `ring` by itself will not prove `(a * (b + c)).re = (a * b + a * c).re`. Definitional equality is not enough. It is not `ring`’s job to get our terms into exactly the right state! We need to use the fact that `(z * w).re = z.re * w.re - z.im * w.im` and so on; each of these lemmas is proved by `rfl` because the two sides are definitionally equal, but the lemmas need to be explicitly applied, and all in the right order, to turn our terms into the form that `ring` needs.

Now we could apply all the lemmas by hand with a big string of rewrites, but it would be easier to use `simp` here. The `simp` tactic is very good at figuring out what we want to do to our terms here; it will apply all the rewrites in a sane order. So we need to add results like this:

```
@[simp] lemma mul_re (a b : ℂ) :
(a * b).re = a.re * b.re - a.im * b.im := rfl
```

tagged with the `[simp]` attribute, proved with `rfl`, and telling Lean that every time `simp` sees the real part of a product, we want it to expand it out.
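In practice you end up wanting a small family of such lemmas, one for each operation and each projection. The exact list below is a sketch, though mathlib’s `data/complex/basic.lean` has lemmas of this shape:

```
@[simp] lemma add_re (a b : ℂ) : (a + b).re = a.re + b.re := rfl
@[simp] lemma add_im (a b : ℂ) : (a + b).im = a.im + b.im := rfl
@[simp] lemma neg_re (a : ℂ) : (-a).re = -a.re := rfl
@[simp] lemma neg_im (a : ℂ) : (-a).im = -a.im := rfl
@[simp] lemma mul_im (a b : ℂ) : (a * b).im = a.re * b.im + a.im * b.re := rfl
@[simp] lemma zero_re : (0 : ℂ).re = 0 := rfl
@[simp] lemma zero_im : (0 : ℂ).im = 0 := rfl
@[simp] lemma one_re : (1 : ℂ).re = 1 := rfl
@[simp] lemma one_im : (1 : ℂ).im = 0 := rfl
```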

The first time I tried to get all this to work, I had some trouble, because when defining the ring structure on the complexes, I wrote `neg := complex.neg`. This innocuous line of code caused me all sorts of problems, and it was only after I cheated and looked at mathlib’s `complex.lean` that I realised that I should have written `neg := has_neg.neg`. This was a bit subtle for me, so I’ll finish this post by trying to explain why this was an issue.

The point I’m trying to make is that sometimes syntactic equality really matters. `simp` and `rw` want to match terms using syntactic equality, or perhaps they will allow some slight leeway involving unfolding reducible definitions, but syntactic equality is the thing to focus on. This means that we need to think about a “canonical form” for everything we want to say. Should we write `re a` or `a.re`? Well, it doesn’t matter, I guess, but what does matter is that we are *consistent*. We need to make a choice and then stick to it. I was randomly mixing `0` and `complex.zero` and then finding that I needed two simp lemmas `0.re = 0` and `zero.re = 0`. We could use `complex.add` or `has_add.add` but really we should stick to exactly one (and `has_add.add` is definitely the best one!). I was using `has_add.add` but `complex.neg`, because as far as I could see there was *no difference at all* between the two choices. Sure, they’re definitionally equal, but that doesn’t mean they’re the same.
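For instance, because I was mixing `0` and `complex.zero`, I effectively needed two copies of the same simp lemma (the names here are just illustrative):

```
-- two lemmas saying the "same" thing about two spellings of zero:
@[simp] lemma zero_re  : (0 : ℂ).re = 0 := rfl
@[simp] lemma zero_re' : (complex.zero).re = 0 := rfl
-- sticking to one canonical spelling (`0`, i.e. `has_zero.zero ℂ`)
-- makes the second lemma redundant.
```

`simp` matches each of these syntactically, so neither one can stand in for the other, even though the two statements are definitionally identical.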

I got it all working really nicely in the end. My code for proving that the complexes are a ring, which is basically literally a subset of mathlib’s `data/complex/basic.lean` but with some added comments, is here in the xena project on github. The proof ultimately is very short:

```
-- and now let's prove all of the axioms of a commutative ring using
-- the same technique.
instance : comm_ring ℂ :=
by refine { zero := 0, add := (+), neg := has_neg.neg, one := 1, mul := (*), ..};
{ intros, apply ext; simp; ring }
```

`ext` is the assertion that two complex numbers are equal if their real and imaginary parts are equal; `simp` puts the real and imaginary parts into the canonical form we need, and `ring` then proves the resulting equality.