The word “filter” means different things to different people; just to clarify, this week we’ll be learning about the mathematical notion of a filter on a set. I’ve written about these filters before, but since then I’ve managed to pick up a much better understanding of how to think about filters, and I hope this shows here. When I wrote that post in 2018 I knew that filters were “something to do with limits”, but now I realise that this is wrong. They are *used* to talk about limits, but what a filter itself is, is simply a generalisation of a subset of a set.

## What is a filter?

Let `X` be a type, i.e. what most mathematicians call a set. Then `X` has subsets, and the collection of all subsets of `X` has some really nice properties — you can take arbitrary unions and intersections, for example, and if you order subsets of `X` by inclusion then these constructions can be thought of as sups and infs and satisfy a bunch of axioms which one might expect sups and infs to satisfy, for example if `S i ⊆ T` for all `i` in an index set then `(⋃ i, S i) ⊆ T`. In short, the subsets of a set form what is known in order theory as a *complete lattice*.
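This sup property is a one-liner in Lean’s maths library; here is a sketch (Lean 3 / mathlib, with the lemma name `set.Union_subset` as I remember it):

```
import data.set.lattice

-- If every set `S i` in a family is contained in `T`,
-- then so is the union of the family: unions are sups.
example {X ι : Type*} (S : ι → set X) (T : set X)
  (h : ∀ i, S i ⊆ T) : (⋃ i, S i) ⊆ T :=
set.Union_subset h
```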

A filter can be thought of as a kind of generalised subset of `X`. Every subset `S` of `X` gives rise to a filter on `X`, called the principal filter `𝓟 S` associated to `S`, and we have `𝓟 S = 𝓟 T` if and only if `S = T`. However if `X` is infinite then there are other, nonprincipal, filters `F` on `X`, which are slightly vaguer objects. However, filters still have an ordering on them, written `F ≤ G`, and it is true that `S ⊆ T ↔ 𝓟 S ≤ 𝓟 T`

(indeed we’ll be proving this today). To give an example of a filter which is not principal, let’s let `X` be the real numbers. Then for a real number `x` there is a filter `𝓝 x`, called the neighbourhood filter of `x`, with the property that if `U` is any open subset of `ℝ` containing `x` then `𝓟 {x} < 𝓝 x < 𝓟 U`. In other words, `𝓝 x` is some kind of “infinitesimal neighbourhood of `x`”, strictly bigger than `{x}` but strictly smaller than every open neighbourhood of `x`. This is a concept which cannot be formalised using sets alone, but can be formalised using filters.
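Both containments can be checked in Lean. A sketch (Lean 3 / mathlib; the lemma names `principal_singleton`, `pure_le_nhds`, `le_principal_iff` and `is_open.mem_nhds` are as I remember them) of the non-strict versions:

```
import topology.instances.real

open filter
open_locale topological_space filter

-- the generalised set 𝓝 x contains the point x …
example (x : ℝ) : 𝓟 ({x} : set ℝ) ≤ 𝓝 x :=
by { rw principal_singleton, exact pure_le_nhds x }

-- … and is contained in every open set containing x.
example (x : ℝ) {U : set ℝ} (hU : is_open U) (hx : x ∈ U) :
  𝓝 x ≤ 𝓟 U :=
le_principal_iff.2 (hU.mem_nhds hx)
```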

## The formal definition of a filter.

Let me motivate the definition before I give it. Say `F` is a filter. Let’s define `F.sets` to be the subsets of `X` which contain `F`, i.e., the `S` such that `F ≤ 𝓟 S`. Here is a property of filters which I have not yet mentioned: if two filters `F` and `G` satisfy `F.sets = G.sets`, then `F = G`; in other words, a filter is determined by the principal filters which contain it. This motivates the following definition: why not define a filter `F` to *be* the set of subsets of `X` which contain it? We will need some axioms — what are reasonable axioms? We don’t want a filter to be bigger than `X` itself, and we want to make sure that if `S` contains `F` then `T` contains `F` for any `T ⊇ S`; finally, if both `S` and `T` contain `F` then we want `S ∩ T` to contain `F`. That’s the definition of a filter!

```
structure filter (α : Type*) :=
(sets : set (set α))
(univ_sets : set.univ ∈ sets)
(sets_of_superset {x y} : x ∈ sets → x ⊆ y → y ∈ sets)
(inter_sets {x y} : x ∈ sets → y ∈ sets → x ∩ y ∈ sets)
```

A filter on `X`, or, as Lean would like to call it, a term `F : filter X` of type `filter X`, is a collection `F.sets` of subsets of `X` satisfying the three axioms mentioned above. That’s it. Unravelling the definitions, we see that a sensible definition of `F ≤ G` is that `G.sets ⊆ F.sets`, because, thinking of the filters `F` and `G` as generalised subsets, for a set `S` we want `G ⊆ S` to imply `F ⊆ S` (or, more precisely, we want `G ≤ 𝓟 S` to imply `F ≤ 𝓟 S`).
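In mathlib this unravelling is recorded as a lemma; a sketch (Lean 3, with the lemma names `le_def` and `principal_mono` as I recall them):

```
import order.filter.basic

open filter
open_locale filter

-- F ≤ G iff every set belonging to G also belongs to F …
example {X : Type*} (F G : filter X) : F ≤ G ↔ ∀ S ∈ G, S ∈ F :=
le_def

-- … and on principal filters this ordering agrees with ⊆.
example {X : Type*} (S T : set X) : 𝓟 S ≤ 𝓟 T ↔ S ⊆ T :=
principal_mono
```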

It’s probably finally worth mentioning that in Bourbaki, where this concept was first introduced, there is an extra axiom on filters: they do not allow `𝓟 ∅` to be a filter, i.e. the empty set is not a generalised set. From the point of view of filters as generalised subsets this looks like a very strange decision, and this extra axiom was dropped in Lean. Indeed, we bless `𝓟 ∅` with a special name — it is `⊥`, the unique smallest filter under our `≤` ordering. The (small) advantage of the Bourbaki convention is that an ultrafilter can be defined to be literally a minimal element in the type of all filters, rather than a minimal element in the type of all filters other than `⊥`. This would be analogous to not allowing a ring `R` to be an ideal of itself, so that one can define the maximal ideals of a ring to be the maximal elements in the set of all its ideals. However this convention for ideals would hugely break the functoriality of ideals: for example the image of an ideal along a ring homomorphism might not be an ideal any more, the sum of two ideals might not be an ideal, and so on. Similarly, we allow `⊥` to be a filter in Lean, because it enables us to take the intersection of filters, pull filters back and so on — it gives a far more functorial definition.
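In Lean these facts about `⊥` look like this (a sketch; `principal_empty` is the mathlib name as I remember it, and `bot_le` is the generic order lemma):

```
import order.filter.basic

open filter
open_locale filter

-- 𝓟 ∅ really is the bottom element …
example {X : Type*} : 𝓟 (∅ : set X) = (⊥ : filter X) :=
principal_empty

-- … which sits below every filter in the ≤ ordering.
example {X : Type*} (F : filter X) : ⊥ ≤ F :=
bot_le
```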

## What’s in today’s workshop?

The material this week is in week_5 of the formalising-mathematics GitHub repo, which you can download locally if you have `leanproject` installed or, if you have the patience of a saint and don’t mind missing some of the bells and whistles, you can try online (Part A and Part B). NB all this infrastructure didn’t just appear by magic: I wrote the code in the repo, but I had nothing to do with all the other tricks which make it easier for mathematicians to use — we have a lot to thank people like Patrick Massot and Bryan Gin-ge Chen for.

In Part A we start by defining principal filters and we make a basic API for them. I give a couple more examples of filters too, for example the cofinite filter `C` on `X`, whose sets are the subsets of `X` whose complement is finite. This filter is worth dwelling on. It corresponds to a generic “every element of `X` apart from perhaps finitely many” subset of `X`, perhaps analogous to a generic point in algebraic geometry. However, there exists no element `a` of `X` such that `𝓟 {a} ≤ C`, because `X - {a}` is a cofinite subset not containing `a`. In particular, thinking of filters as generalised subsets again, we note that whilst a generalised set is determined by the sets containing it, it is definitely not determined by the sets it contains: indeed, `C` contains no nonempty sets at all.
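In mathlib the cofinite filter is `filter.cofinite`; a sketch of its defining property (lemma name `mem_cofinite` as I remember it):

```
import order.filter.cofinite

open filter

-- A set belongs to the cofinite filter iff its complement is finite.
example {X : Type*} (S : set X) :
  S ∈ (cofinite : filter X) ↔ (Sᶜ : set X).finite :=
mem_cofinite
```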

In Part B we go on to do some topology. We define neighbourhood filters and cluster points, and then talk about a definition of compactness which doesn’t involve open sets at all, but instead involves filters. I am still trying to internalise this definition, which is the following:

```
def is_compact (S : set X) := ∀ ⦃F⦄ [ne_bot F], F ≤ 𝓟 S → ∃ a ∈ S, cluster_pt a F
```

In words, a subset `S` of a topological space is *compact* if every generalised non-empty subset `F` of `S` has closure containing a point of `S`.
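The `cluster_pt` appearing in the definition unfolds, if I have mathlib’s definition right, to the statement that `𝓝 a ⊓ F` is a nontrivial generalised set, i.e. the infinitesimal neighbourhood of `a` genuinely meets `F`:

```
import topology.basic

open filter
open_locale topological_space

-- cluster_pt a F is (definitionally, in mathlib) the assertion
-- that 𝓝 a ⊓ F is not the bottom filter.
example {X : Type*} [topological_space X] (a : X) (F : filter X) :
  cluster_pt a F ↔ ne_bot (𝓝 a ⊓ F) :=
iff.rfl
```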

Let’s think about an example here. Let’s stick to `S = X`. Say `S` is an infinite discrete topological space. Then the cofinite filter is a filter on `S` which has no cluster points at all, meaning that an infinite discrete topological space is not compact. Similarly, imagine `S` is the semi-open interval `(0,1]`. Then the filter of neighbourhoods of zero in `ℝ`, restricted to this subset (i.e. just intersect all the sets in the filter with `(0,1]`), again has no cluster points, so this space is not compact either. Finally let’s consider `ℝ` itself. Then the `at_top` filter, which we will think about in Part A, consists of all subsets of `ℝ` which contain an interval `(r, ∞)` for some real number `r`. This “neighbourhood of `+∞`” filter has no cluster points in `ℝ` (note that `+∞` would be a cluster point, but it’s not a real number). Hence `ℝ` is not compact either. We have certainly not proved here that this definition of compact is mathematically equivalent to the usual one, but it is, and if you’re interested, and you’ve learnt Lean’s language, you can just go and read the proof for yourself in Lean’s maths library.
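The `at_top` filter is `filter.at_top` in mathlib; a sketch of the kind of membership fact used above (lemma name `mem_at_top` as I remember it, stated for closed rays):

```
import order.filter.at_top_bot
import data.real.basic

open filter

-- at_top contains every closed ray {x | r ≤ x}.
example (r : ℝ) : {x : ℝ | r ≤ x} ∈ (at_top : filter ℝ) :=
mem_at_top r
```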

The boss level this week is, again, that a closed subspace of a compact space is compact. But this time we prove it with filters. As last time, we prove something slightly more general: if `X` is any topological space, and if `S` is a compact subset and `C` is a closed subset, then `S ∩ C` is compact. Here’s the proof. Say `F` is a nonempty generalised subset (i.e. a filter) contained in `S ∩ C`. By compactness of `S`, `F` has a cluster point `a` in `S`. But `F` is contained in `C`, so all cluster points of `F` are cluster points of `C`, and the cluster points of `C` are just the closure of `C`, which is `C` again. Hence `a` is the element of `S ∩ C` which we seek. No covers, no finite subcovers.
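This boss level is already in Lean’s maths library; assuming I’m remembering the name correctly, it is `is_compact.inter_right`:

```
import topology.subset_properties

-- A compact set intersected with a closed set is compact.
example {X : Type*} [topological_space X] {S C : set X}
  (hS : is_compact S) (hC : is_closed C) : is_compact (S ∩ C) :=
hS.inter_right hC
```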

What’s the benefit of using filters in Lean? Many point-set topology textbooks (or algebraic topology textbooks for that matter) do not even mention the word filter. Is it possible to do topology in Lean while completely avoiding filters? I once tried reading the API for topological spaces but got lost pretty soon because I could not translate the theorems I saw there into point-set topology theorems.


Oh sure you can do it without filters. It’s just that it seems easier to use filters. Proofs which use open sets often contain auxiliary constructions — “now let’s construct this open cover, prove it’s a cover and take a finite subcover”. With filters it is easier to argue backwards, which means that proofs can be more easily turned into functions.


I see. Very interesting, thanks!! I will try writing some topology proofs in Lean. Hopefully it’ll become apparent then.


There seems to be a typo after the Lean definition of filter; it should be “G \le S” (an ordering of two filters, where G is a finer filter in the sense that S.sets \subset G.sets), implying “F \le S”?


I guess I wanted to imply that S and T were sets, and F and G were filters. I added a clarifying comment. Thanks!
