Liquid tensor experiment

This is a guest post, written by Peter Scholze, explaining a liquid real vector space mathematical formalisation challenge. For a pdf version of the challenge, see here. For comments about formalisation, see section 6. Now over to Peter.

1. The challenge

I want to propose a challenge: Formalize the proof of the following theorem.

Theorem 1.1 (Clausen-S.) Let 0<p'<p\leq 1 be real numbers, let S be a profinite set, and let V be a p-Banach space. Let \mathcal M_{p'}(S) be the space of p'-measures on S. Then

\mathrm{Ext}^i_{\mathrm{Cond}(\mathrm{Ab})}(\mathcal M_{p'}(S),V)=0

for i\geq 1.

(This is a special case of Theorem 9.1 in www.math.uni-bonn.de/people/scholze/Analytic.pdf, and is the essence of the proof of Theorem 6.5 there.)

Below, I will explain what all the terms in the theorem are, and why I care. I apologize in advance that the background story is a little longer, but I think it’s a fascinating story and I tried to be as brief as possible.

2. Getting condensed

The first thing to explain is the category of condensed abelian groups \mathrm{Cond}(\mathrm{Ab}), in which this computation takes place. This is a variant of the category of topological abelian groups, but with much better properties. It is part of the “condensed mathematics” that Dustin Clausen and myself started to develop two years ago. I gave two courses in Bonn about it: www.math.uni-bonn.de/people/scholze/Condensed.pdf and www.math.uni-bonn.de/people/scholze/Analytic.pdf, and recently we gave a Masterclass in Copenhagen about it, https://www.math.ku.dk/english/calendar/events/condensed-mathematics/.

Condensed mathematics claims that topological spaces are the wrong definition, and that one should replace them with the slightly different notion of condensed sets. Before giving their definition, let me state a few properties:

— in practice, most topological spaces of interest are “compactly generated weak Hausdorff”. In fact, this is the class of topological spaces customarily used within algebraic topology; also, all usual topological vector spaces (e.g., all metrizable ones, or just first-countable ones) have this property. Now compactly generated weak Hausdorff spaces also embed fully faithfully into condensed sets, so in this large class of examples, the transition is completely inconsequential.

— condensed sets have very nice categorical properties, in particular all limits and colimits (like topological spaces). But they also have internal mapping objects: if X, Y are condensed sets, there is a condensed set \mathrm{Hom}(X,Y) such that for any other condensed set T, maps T\to \mathrm{Hom}(X,Y) are functorially identified with maps T \times X \to Y. In fact, the same is true relatively, for maps over some base condensed set S. In category-speak, “condensed sets are locally cartesian closed”.

— even better, up to mild set-theoretic problems, condensed sets form a topos. (More precisely, they satisfy Giraud’s axioms except for the existence of a set of generators. One can build a variant of the category of condensed sets, known as the category of pyknotic sets of Barwick–Haine, that is a topos, essentially by restricting to the subcategory generated by some set of generators.)

Let me now explain what condensed sets are. Clausen and I came to it from different trajectories (and Barwick and Haine yet from a different one); there are some precursors in the literature, notably Spanier’s quasi-topological spaces (in algebraic topology) and Waelbroeck’s compactological spaces (in functional analysis). For me, it started (unsurprisingly…) in relation to my work on perfectoid geometry, but feel free to skip this paragraph. The starting point of perfectoid geometry is the idea to study something like a p-adic annulus \{T\in \mathbb C_p, |T|=1\} by extracting all p-power roots of T, leading to an infinite tower of covers, and then pass to the limit object, which is a perfectoid space. One wants to do this as, surprisingly, perfectoid spaces are in some ways nicer than the usual annulus; in particular, there is the tilting procedure relating them to characteristic p (and again, surprisingly, characteristic p is in some ways easier than characteristic 0). Technically, one defines the pro-étale site of the annulus (or any rigid-analytic variety), and shows that locally in the pro-étale site, the space is perfectoid. But let us forget all about perfectoid geometry, and let us simply look at what the analogue of this story would be for a point: In the fibre over a point, say T=1, in each finite stage one covers it by finitely many points, and in the limit by a limit of finite sets, i.e. a profinite set. (A profinite set is by definition an object of the category Pro-(finite sets). Their category is equivalent to the category of totally disconnected compact Hausdorff spaces, and I will often tacitly identify the two concepts.)

This led Bhatt and myself to introduce the pro-étale site of a point (or a general scheme). This is the category of profinite sets S, where covers are given by finite collections of jointly surjective maps.

Definition 2.1. Condensed sets are sheaves on the pro-étale site of a point.

This makes it sound like condensed sets are a topos, but above I mentioned set-theoretic problems. Indeed, the category of profinite sets is not small, so one has to resolve this in some way. Let me gloss over this point here; it is not essential for any of the following discussion.
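
To unwind Definition 2.1: covers in the pro-étale site of a point are finite jointly surjective families, so the sheaf condition amounts to the following. A condensed set is a functor X from profinite sets (contravariantly) to sets such that X(\emptyset)=\ast, the natural map

X(S_1\sqcup S_2)\to X(S_1)\times X(S_2)

is a bijection, and for every surjection of profinite sets S'\to S the diagram

X(S)\to X(S')\rightrightarrows X(S'\times_S S')

is an equalizer.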

This definition (without the name) is already in my 2013 paper with Bhatt. But I never tried to thoroughly understand this notion. I knew that there is a functor from topological spaces to condensed sets (up to set-theoretic problems): Given a topological space T, one can send any profinite set S to the continuous maps from S to T. What I did not understand is that this functor is fully faithful on a very large class of topological spaces, and that condensed sets are actually an improvement over topological spaces. This is what Clausen quickly convinced me of when he arrived in Bonn in 2018 as a PostDoc.

Before going on, let me describe what a condensed set X “is”: For each profinite set S, it gives a set X(S), which should be thought of as the “maps from S to X”, so it is measuring how profinite sets map into X. The sheaf axiom guarantees some coherence among these values. Taking S=\ast a point, there is an “underlying set” X(\ast). Beware however that there are condensed sets X with X(\ast)=\ast a point, but with X(S) big for general S. We will see an example below. It is important to allow such examples in order to have a nice general theory. In practice, they do not produce much trouble either. It’s a free world.

Example 2.2. Let T be a compact Hausdorff space. Then a classical and somewhat weird fact is that T admits a surjection S\to T from a profinite set S. One construction is to let S be the Stone-Čech compactification of T^\delta, where T^\delta is T considered as a discrete set. This lets one recover T as the quotient of S by the equivalence relation R = S\times_T S\subset S\times S. Thus, compact Hausdorff spaces can be thought of as quotients of profinite sets by profinite equivalence relations. Of course, this seems like a horrible way to look at compact Hausdorff spaces, but this is what happens in the condensed perspective, which only records maps from profinite sets. Part of the reason for the challenge is the question: Can this possibly be a good perspective?

Example 2.3. Let T=[0,1] be the interval. In high-school, we learn to think of real numbers in terms of decimal expansions. I claim that this produces a surjection from a profinite set onto T! Why? Any point of [0,1] can be written as 0.a_1a_2a_3\ldots with coefficients a_i\in \{0,1,\ldots,9\}, however one needs to make some identifications, like 0.099999\ldots = 0.100000\ldots. Now decimal expansions per se naturally form a profinite set, namely \prod_{n\geq 1} \{0,1,\ldots,9\}. It is only when doing these extra identifications that one produces the continuum [0,1]! So secretly, in high school we learn to think of the interval as a quotient of a profinite set.
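
Explicitly, the surjection in question is

\prod_{n\geq 1}\{0,1,\ldots,9\}\to [0,1]:\quad (a_n)_{n\geq 1}\mapsto \sum_{n\geq 1}\frac{a_n}{10^n}.

It is continuous (the partial sums depend only on finitely many digits and converge uniformly) and surjective, but not injective; the failure of injectivity is exactly given by the identifications above.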

3. Condensed abelian groups

Just like topological abelian groups, one can consider condensed abelian groups, that is abelian group objects in the category of condensed sets. Equivalently, these are sheaves of abelian groups on the pro-étale site of a point. In fact, one nice feature of the condensed formalism is that for any notion of mathematical objects, one can consider their condensed version: Condensed groups, condensed rings, condensed categories, …, just by talking about sheaves of such on the pro-étale site of a point, making it possible to “put a topology on top of something” without trouble. (This is the spell of “working internally in a topos”…) However, unlike topological abelian groups, condensed abelian groups have excellent formal properties:

Proposition 3.1. The category \mathrm{Cond}(\mathrm{Ab}) of condensed abelian groups is an abelian category with all limits and colimits, and a class of compact projective generators. In particular, it satisfies Grothendieck’s axioms (AB3)–(AB6) and (AB3*)–(AB4*) (i.e., the same ones that the category of abelian groups satisfies); notably, filtered colimits and infinite products are exact.

This makes \mathrm{Cond}(\mathrm{Ab}) exceptionally well-behaved, even more well-behaved than the category of abelian sheaves on a usual topos: Usually, infinite products fail to be exact.

Example 3.2. In topological abelian groups, a standard problem is that one can have continuous maps f: M \to N of topological abelian groups that are isomorphisms of underlying abelian groups, but the map is not a homeomorphism. For example, let N=\mathbb R be the real numbers and M=\mathbb R^\delta be the real numbers equipped with the discrete topology. Then the kernel and cokernel of f are trivial, while f is not an isomorphism. This is corrected in condensed abelian groups as follows: There is an exact sequence

0\to \mathbb R^\delta \to \mathbb R \to Q \to 0

where Q is a condensed abelian group with underlying abelian group Q(\ast)=0, but for a general profinite set S, one has Q(S) = \mathrm{Cont}(S,\mathbb R)/\mathrm{LocConst}(S,\mathbb R), which is in general not zero.

Another nice feature of the condensed world is that it’s a free world. Namely, for any condensed set X, there is a free condensed abelian group \mathbb Z[X] on X (given by the sheafification of the presheaf taking S to \mathbb Z[X(S)]). If X is a compact Hausdorff space, one can describe \mathbb Z[X] explicitly: It is a union of compact Hausdorff spaces \mathbb Z[X]_{\leq n}\subset \mathbb Z[X], with underlying set \{\sum_i n_i[x_i]\mid x_i\in X,\ \sum_i |n_i|\leq n\}. This defines in fact also a (compactly generated weak Hausdorff) topological abelian group; in that context, this construction goes back to old work of Markov.
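
One way to see that each \mathbb Z[X]_{\leq n} is compact Hausdorff: it is the image of the compact Hausdorff space (X\times\{-1,0,1\})^n under the map

((x_1,\varepsilon_1),\ldots,(x_n,\varepsilon_n))\mapsto \varepsilon_1[x_1]+\cdots+\varepsilon_n[x_n],

as any formal sum \sum_i n_i[x_i] with \sum_i|n_i|\leq n can be rewritten as a sum of at most n signed Dirac deltas (padding with zero terms if necessary).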

What are compact projective generators of \mathrm{Cond}(\mathrm{Ab})? One can take \mathbb Z[S] for certain special profinite sets S, namely so-called “extremally disconnected” profinite S. This includes the Stone-Čech compactifications of discrete sets, and all other examples are retracts. In fact, extremally disconnected profinite sets are exactly the projective objects of the category of compact Hausdorff spaces, by an old theorem of Gleason. Extremally disconnected profinite sets are pretty rare: The smallest infinite one has cardinality 2^{2^{\aleph_0}}. All convergent sequences in extremally disconnected sets are eventually constant. So all your favourite examples of profinite sets (Cantor sets, \mathbb Z_p, etc.) are not extremally disconnected. In fact, the existence of extremally disconnected ones is tied extremely closely to the axiom of choice (in the form of the existence of non-principal ultrafilters): The basic examples are Stone-Čech compactifications of discrete sets, which are discrete sets together with some ghostly dust: You will never be able to name any point in the boundary!

The proposition ensures that one can do homological algebra in \mathrm{Cond}(\mathrm{Ab}), but it means that for a computation, one has to resolve by free condensed abelian groups on extremally disconnected sets. These are some “clouds of dust”, and projective resolutions in \mathrm{Cond}(\mathrm{Ab}) amount to resolving nice objects by “clouds of dust”. It is basically impossible to do this completely explicitly, but one can sometimes be reasonably explicit. Clausen had developed some techniques to carry out nontrivial computations in \mathrm{Cond}(\mathrm{Ab}). In particular:
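
Concretely, to compute Ext-groups like the ones in Theorem 1.1, one chooses a resolution

\cdots \to \bigoplus_{j} \mathbb Z[T_{1,j}]\to \bigoplus_{j} \mathbb Z[T_{0,j}]\to \mathcal M_{p'}(S)\to 0

with all T_{i,j} extremally disconnected, applies \mathrm{Hom}(-,V) and takes cohomology; note that \mathrm{Hom}_{\mathrm{Cond}(\mathrm{Ab})}(\mathbb Z[T],V)=V(T) by the free-forgetful adjunction, so in the end everything is expressed in terms of the values of V on extremally disconnected sets.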

Theorem 3.3. Let A, B be locally compact abelian groups, considered as objects in \mathrm{Cond}(\mathrm{Ab}). Then \mathrm{Ext}^i_{\mathrm{Cond}(\mathrm{Ab})}(A,B)=0 for i\geq 2, and \mathrm{Ext}^1_{\mathrm{Cond}(\mathrm{Ab})}(A,B) agrees with the Yoneda-Ext in the category of locally compact abelian groups, while \mathrm{Hom}_{\mathrm{Cond}(\mathrm{Ab})}(A,B) are the usual continuous maps.

Thus, starting from locally compact abelian groups, there are no “strange” condensed abelian groups arising as extensions (or “higher” extensions): Even if \mathrm{Cond}(\mathrm{Ab}) chops things up into clouds of dust, the only ways to build the dust back together are the ways you would have thought about. The proof of the theorem is quite nontrivial; it makes use of the Breen–Deligne resolution of abelian groups. However, given the right approach the proof is quite tidy.
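
For orientation (the precise shape will not matter here): the Breen–Deligne resolution of an abelian group A is a functorial resolution whose terms are finite direct sums of free abelian groups on powers of A, beginning

\cdots\to\bigoplus_j \mathbb Z[A^{r_{i,j}}]\to\cdots\to \mathbb Z[A^2]\to \mathbb Z[A]\to A\to 0,

where the last two maps are [a,b]\mapsto [a]+[b]-[a+b] and [a]\mapsto a; the point of the map \mathbb Z[A^2]\to\mathbb Z[A] is exactly that its cokernel recovers A, by imposing additivity of a\mapsto [a].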

4. Completeness

These results show that \mathrm{Cond}(\mathrm{Ab}) behaves quite well, and if one starts with reasonable examples, then their \mathrm{Hom} and \mathrm{Ext}-groups generally stay well-behaved. However, this is not true for tensor products. Namely, if one takes, say, the tensor product of \mathbb R and \mathbb Z_p, one gets a condensed abelian group \mathbb R\otimes \mathbb Z_p whose underlying abelian group is the usual algebraic tensor product of \mathbb R and \mathbb Z_p, which is a bit of a mess as the real and p-adic topologies are incompatible. One would like to have a notion of “complete” condensed abelian groups, and a resulting notion of completed tensor product, such that this completed tensor product of \mathbb R and \mathbb Z_p vanishes. Similarly, the completed tensor product of \mathbb Z_p with itself should just be \mathbb Z_p.

After toying around, we found a notion of completeness that works perfectly away from the real numbers.

Definition 4.1. A condensed abelian group M is solid if for any profinite set S, any map f: S\to M extends uniquely to a map \mathbb Z[S]^\blacksquare \to M, where \mathbb Z[S]^\blacksquare = \varprojlim_i \mathbb Z[S_i] when S=\varprojlim_i S_i is written as a limit of finite sets.

One can regard \mathbb Z[S]^\blacksquare also as \mathrm{Hom}(\mathrm{Cont}(S,\mathbb Z),\mathbb Z), i.e. as the \mathbb Z-valued measures on S. In this sense, this says that any map f: S\to M gives a unique map \mathbb Z[S]^\blacksquare\to M sending a measure \mu to \int f\mu. Note that there is a map S\to \mathbb Z[S]^\blacksquare given by “Dirac measures”.
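
For example, writing \mathbb Z_p=\varprojlim_n \mathbb Z/p^n\mathbb Z as a profinite set, the definition gives

\mathbb Z[\mathbb Z_p]^\blacksquare = \varprojlim_n \mathbb Z[\mathbb Z/p^n\mathbb Z],

the limit of the free abelian groups on the finite quotients; by contrast, \mathbb Z[\mathbb Z_p] itself consists only of finite formal sums of points of \mathbb Z_p.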

Theorem 4.2. The category of solid abelian groups is stable under all limits and colimits and extensions, and is an abelian subcategory of condensed abelian groups. The inclusion into condensed abelian groups admits a left adjoint M\mapsto M^\blacksquare “solidification”, which is the unique colimit-preserving extension of \mathbb Z[S]\mapsto \mathbb Z[S]^\blacksquare. There is a unique tensor product on solid abelian groups making M\mapsto M^\blacksquare symmetric monoidal, i.e. compatible with the tensor product. The category of solid abelian groups has compact projective generators, given by infinite products of copies of \mathbb Z.

Example 4.3. The discrete group \mathbb Z is solid, and then all objects built via limits and colimits (and internal Hom’s) from \mathbb Z are still solid. This includes \mathbb Z_p, \mathbb Q_p, \prod_I \mathbb Q_p, etc., in fact everything that comes up in usual algebra. The only way to leave the category is to take tensor products; these have to be resolidified. But then \mathbb Z_p \otimes^\blacksquare \mathbb Z_p = \mathbb Z_p, \mathbb Z_p \otimes^\blacksquare \mathbb Z_\ell = 0 for p\neq \ell, etc. In fact, considering solid \mathbb Q_p-vector spaces, one gets a perfect framework for doing p-adic functional analysis. The solid tensor product recovers the usual completed tensor product of Banach spaces, and even of Fréchet spaces.

5. Road to reality

Let us recall where we stand. The condensed formalism was largely motivated by questions in p-adic geometry, is based on profinite sets, and it can beautifully handle p-adic functional analysis. All of these are very much nonarchimedean. On the other hand, we would like to claim that condensed sets are better than topological spaces, even over \mathbb R. So what about the real numbers?

The bad news is that the real numbers are definitely not solid; their solidification is equal to 0. The issue is that \mathbb Z[S]^\blacksquare was the space of \mathbb Z-valued measures on S, without any archimedean bounds.

Working over \mathbb R, one is thus led to consider for profinite sets S=\varprojlim_i S_i the space \mathcal M(S) of (signed Radon) measures on S; as a condensed abelian group,

\mathcal M(S) = \mathrm{Hom}_{\mathbb R}(\mathrm{Cont}(S,\mathbb R),\mathbb R).

This is a union of compact Hausdorff spaces \mathcal M(S)_{\leq c} (equipped with the weak-\ast-topology). One can write \mathcal M(S)_{\leq c} = \varprojlim_i \mathbb R[S_i]_{\ell^1\leq c}, where \mathbb R[S_i] is the free vector space on the finite set S_i, and \ell^1\leq c refers to the part of \ell^1-norm at most c. Originally, we were hoping that the following definition was reasonable.

Definition 5.1. A condensed \mathbb R-vector space V is \mathcal M-complete if for any profinite set S, any map f: S\to V extends uniquely to a map \mathcal M(S)\to V.

Any complete locally convex vector space gives rise to an \mathcal M-complete V. Conversely, the condition of \mathcal M-completeness is closely related to local convexity. One can translate much of functional analysis to this setting. In particular, (up to also asking for quasiseparatedness), one can define a completed tensor product in this setting, and in some sense it recovers both the injective and the projective tensor product of Banach spaces. Unfortunately, it turns out that this category is not abelian, not stable under extensions, etc. The problem is the existence of the Ribe extension, that is an extension of Banach spaces that is not itself locally convex; it arises from the entropy functional. This forces us to include non-locally convex spaces in the formalism. Traditionally, this leads to the notion of p-Banach spaces for 0<p\leq 1, which includes \ell^p-spaces: a p-Banach space is a topological \mathbb R-vector space V that is complete for a p-norm ||\cdot||: V\to \mathbb R_{\geq 0}. The only axiom that is changed is the scaling axiom: ||av||= |a|^p ||v|| for a\in \mathbb R, v\in V. In particular, ||\cdot|| still satisfies the usual triangle inequality. [There is some unfortunate clash of notation here between p always denoting a prime number for me, and it being the standard letter to denote \ell^p-norms. Denoting primes by \ell doesn’t even help much!]
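
For example, with this convention \ell^p itself, i.e. the space of sequences (x_n)_n with \sum_n |x_n|^p<\infty equipped with

||(x_n)_n|| = \sum_n |x_n|^p,

is a p-Banach space: the scaling axiom holds by construction, and the triangle inequality follows from the elementary inequality (s+t)^p\leq s^p+t^p for s,t\geq 0 and 0<p\leq 1.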

There is a theorem of Kalton that any extension of p-Banach spaces is p'-Banach for all p'<p. This leads to the following definition.

Definition 5.2. For a profinite set S=\varprojlim_i S_i, let the space of p-measures be \mathcal M_p(S) = \bigcup_{c>0} \mathcal M_p(S)_{\leq c}, where

\mathcal M_p(S)_{\leq c} = \varprojlim_i \mathbb R[S_i]_{\ell^p\leq c}.
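
Here, for a transition map S_j\to S_i between the finite quotients, the induced map \mathbb R[S_j]\to \mathbb R[S_i] sums coordinates along the fibres, i.e. sends (x_s)_{s\in S_j} to the tuple with coordinates \sum_{s\mapsto t} x_s for t\in S_i; since

\Big|\sum_{s\mapsto t} x_s\Big|^p\leq \sum_{s\mapsto t}|x_s|^p

for 0<p\leq 1, this does not increase the \ell^p-norm, so it indeed maps \mathbb R[S_j]_{\ell^p\leq c} into \mathbb R[S_i]_{\ell^p\leq c} (for p=1 this is just the usual triangle inequality, implicitly used for \mathcal M(S)_{\leq c} above).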

Definition 5.3. A condensed \mathbb R-vector space V is p-liquid if for any profinite set S and any p'<p, any map f: S\to V extends uniquely to a map \mathcal M_{p'}(S)\to V.

After a lot of struggles, we believe we were able to prove the following analogue of our theorem on solid abelian groups.

Theorem 5.4. The class of p-liquid \mathbb R-vector spaces is stable under all limits and colimits and extensions (even in condensed abelian groups), and forms an abelian subcategory of condensed abelian groups. The inclusion into all condensed abelian groups admits a left adjoint “p-liquidification”, which is the unique colimit-preserving extension of \mathbb Z[S]\mapsto \mathcal M_{<p}(S) = \varinjlim_{p'<p} \mathcal M_{p'}(S). There is a unique tensor product of p-liquid \mathbb R-vector spaces making p-liquidification symmetric monoidal. The category of p-liquid \mathbb R-vector spaces has compact projective generators, given by \mathcal M_{<p}(S) for S extremally disconnected.

On nuclear spaces, the p-liquid tensor product agrees with the usual completed tensor product, for any choice of 0<p\leq 1. (Arguably, this is the best one can hope for, as for, say, Banach spaces, there are not one but several natural choices for a tensor product as defined by Grothendieck. The p-liquid one is still different, and does not usually produce a Banach space, but in the nuclear case these subtleties disappear.)

This is the theorem that I would like to see formalized. As stated in the beginning, it comes down (by a not completely obvious, but very safe reduction) to Theorem 1.1, the vanishing of \mathrm{Ext}-groups between \mathcal M_{p'}(S) and p-Banach spaces.

Remark 5.5. The class of p-liquid \mathbb R-vector spaces depends on the choice of 0<p\leq 1; for p=1, the condition is the strongest (closest to local convexity) and gets weaker as p approaches 0. For applications, usually any choice of p is fine. It is quite mysterious that there is not only one theory over \mathbb R now, but a whole family of them! In fact, it turns out that there are similar theories also over \mathbb Q_\ell. In that case, they exist for all 0<p<\infty, and in some sense the limiting theory for p=\infty is the theory of solid \mathbb Q_\ell-vector spaces. One can thus think that for increasing p, the objects get more and more “viscous” until they become solid for p=\infty. (On the other hand, the limit for p=0 is in some sense the class of all condensed \mathbb Q_\ell-vector spaces, which one might refer to as “gaseous”.) Over \mathbb R, one needs to take p\leq 1 as otherwise the transition maps in the limit \varprojlim_i \mathbb R[S_i]_{\ell^p\leq c} are not well-defined, as they can increase the \ell^p-norm.
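
To see the last point in the simplest case: for the map of finite sets \{1,2\}\to\{1\}, the transition map sends (x_1,x_2) to x_1+x_2, and the vector (1,1) of \ell^p-norm 2^{1/p} is sent to 2; for p>1 we have 2^{1/p}<2, so the \ell^p-norm has increased, while for p\leq 1 it has not.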

6. Sympathy for the devil

Why do I want a formalization?

— I want to make the strong claim that in the foundations of mathematics, one should replace topological spaces with condensed sets (except when they are meant to be topoi — topoi form a separate variant of topological spaces that is useful, and somewhat incomparable to condensed sets). This claim is only tenable if condensed sets can also serve their purpose within real functional analysis.

— with this theorem, the hope that the condensed formalism can be fruitfully applied to real functional analysis stands or falls. I think the theorem is of utmost foundational importance, so being 99.9% sure is not enough.

— if it stands, the theorem gives a powerful framework for real functional analysis, making it into an essentially algebraic theory. For example, in the Masterclass, Clausen sketched how to prove basic results on compact Riemann surfaces or general compact complex manifolds (finiteness of cohomology, Serre duality), and one can black box all the functional analysis into this theorem. Generally, whenever one is trying to mix real functional analysis with the formalism of derived categories, this would be a powerful black box. As it will be used as a black box, a mistake in this proof could remain uncaught.

— I spent much of 2019 obsessed with the proof of this theorem, almost getting crazy over it. In the end, we were able to get an argument pinned down on paper, but I think nobody else has dared to look at the details of this, and so I still have some small lingering doubts.

— as I explain below, the proof of the theorem has some very unexpected features. In particular, it is very much of arithmetic nature. It is the kind of argument that needs to be closely inspected.

— while I was very happy to see many study groups on condensed mathematics throughout the world, to my knowledge all of them have stopped short of this proof. (Yes, this proof is not much fun…)

— I have occasionally been able to be very persuasive even with wrong arguments. (Fun fact: In the selection exams for the international math olympiad, twice I got full points for a wrong solution. Later, I once had a full proof of the weight-monodromy conjecture that passed the judgment of some top mathematicians, but then it turned out to contain a fatal mistake.)

— the Lean community has already shown some interest in formalizing parts of condensed mathematics, so the theorem seems like a good goalpost.

— from what I hear, it sounds like the goal is not completely out of reach. (Besides some general topos theory and homological algebra (and, for one point, a bit of stable homotopy theory(!)), the argument mostly uses undergraduate mathematics.) If achieved, it would be a strong signal that a computer verification of current research in very abstract mathematics has become possible. I’ll certainly be excited to watch any progress.

— I think this may be my most important theorem to date. (It does not really have any applications so far, but I’m sure this will change.) Better be sure it’s correct…

7. Arithmetic reality

Finally, let me say a few words about the proof of Theorem 1.1; I tried to give a summary of the idea in Lectures 17 — 18 of the Copenhagen Masterclass; you can find them on YouTube (here and here). In brief, one is battling with two enemies:

a) The real numbers are not locally profinite. So in some philosophical sense there’s a mismatch between the objects \mathcal M_{p'}(S), V (real vector spaces) and the category \mathrm{Cond}(\mathrm{Ab}) (built from profinite sets). Our task is to show that the two can still be friends, even though \mathrm{Cond}(\mathrm{Ab}) wants to chop up all real vector spaces into locally profinite clouds of dust.

b) Putting bounds on the real numbers leads to subsets that are not stable under addition anymore. That’s hardly news to a normal human being — of course 1+1 is larger than 1 — but it is very bad news for a person that has spent all of their life in the p-adic world (where 1+1 is not larger than 1). When combining bounds with homological algebra, it means that within all our complexes we have to carefully keep track of norms. This quickly gets nasty. Did you ever try to chase norms in the snake lemma? In a spectral sequence?

Let me first say a few words about how we battled a), which I believe is a most profound point. In some sense we come back to decimal expansions of the real numbers. More precisely, let 0<r<1 and consider the ring \mathbb Z((T))_{>r} of those Laurent series \sum_{n\gg -\infty} a_n T^n with integer coefficients a_n\in \mathbb Z that converge on a complex punctured disc \{0<|T|< r'\} for some r' > r. Assuming r\geq \tfrac 1{10}, we get a surjection

\mathbb Z((T))_{>r}\to \mathbb R: \sum a_n T^n\mapsto \sum \frac{a_n}{10^n}

and in fact \mathbb R\cong \mathbb Z((T))_{>r}/(10T-1). (Needless to say, the choice of \tfrac 1{10} here is completely arbitrary; you can also use \tfrac 1{\pi}. Except that it’s less obvious that the kernel is a principal ideal. A theorem of Harbater states that \mathbb Z((T))_{>r} is a principal ideal domain!) Now \mathbb Z((T))_{>r} is naturally a countable union of profinite subsets — essentially, putting bounds on each a_n is cutting them down to finite sets. This means that problem a) disappears if we work with \mathbb Z((T))_{>r}. The story of liquid modules also works over \mathbb Z((T))_{>r}, and the whole proof happens here. The variant of Theorem 1.1 for \mathbb Z((T))_{>r} is Theorem 9.1 in www.math.uni-bonn.de/people/scholze/Analytic.pdf, and its proof is given in Lecture 9.
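
For instance, surjectivity of the displayed map can be seen exactly as in Example 2.3: given x\in\mathbb R, write x=m+0.a_1a_2a_3\ldots with m\in\mathbb Z and digits a_n\in\{0,1,\ldots,9\}; then

m+\sum_{n\geq 1} a_n T^n

has bounded coefficients, hence converges on all of \{0<|T|<1\} and so lies in \mathbb Z((T))_{>r} (as r<1), and it is sent to x. Also, 10T-1 visibly lies in the kernel, as it is sent to 10\cdot\tfrac 1{10}-1=0.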

Let me not say much about the proof in Lecture 9 — you can look for yourself — but to note that as indicated in b), it is a very delicate argument fighting with estimates against homological algebra. There is a hierarchy of implicit constants that have to be chosen, and you have to do it in exactly the right order. In the end, one formulates Theorem 9.5 which can be proved by induction; it is a statement of the form \forall \exists \forall \exists \forall \exists, and there’s no messing around with the order of the quantifiers. It may well be the most logically involved statement I have ever proved. (On the other hand, if I want to claim that the theorem on liquid vector spaces makes it possible to black box functional analysis, hard estimates have to be somewhere. Better be sure the estimates actually work…!)

Note that I’m saying that to prove a theorem about real vector spaces (in fact, to set up real functional analysis) we have to work with the arithmetic ring \mathbb Z((T))_{>r}! As a number theorist, this seems like a fair state of affairs to me, but I wonder whether I let my prejudices guide me. So here is a challenge for human mathematicians:

Question 7.1. Can one prove Theorem 1.1 directly over the reals?

Remark 7.2. In Theorem 1.1 the \mathrm{Ext}-groups are computed in \mathrm{Cond}(\mathrm{Ab}). One could also compute them in condensed \mathbb R-vector spaces. The result of Theorem 1.1 is slightly stronger (it implies the vanishing of \mathrm{Ext}-groups in condensed \mathbb R-vector spaces via a mostly formal argument), and implies that p-liquid \mathbb R-vector spaces embed fully faithfully into condensed abelian groups, i.e. the \mathbb R-linear structure is necessarily unique. One could also consider the variant of Theorem 1.1 taking \mathrm{Ext}-groups in condensed \mathbb R-vector spaces; I’d already be happy to see a direct proof of this over the reals. However, I actually believe that this is not any easier.

Finally, here is a somewhat vague question.

Question 7.3. Is the proof of Theorem 1.1 in any way related to some known arguments in functional analysis?

For example, is it related to “multi-scale” arguments? The different scales here being the powers 1,T,T^2,\ldots of the fixed element T (taken as \tfrac 1{10} above).

Peter Scholze, 5th December 2020


16 Responses to Liquid tensor experiment

  1. xenaproject says:

    This comment is from David Roberts (who is having trouble posting for some reason):

    ***

    Hi Peter,

    thanks for the very nice post. I very much applaud your humility in highlighting the importance of an independent check of your proof.

    I have only one complaint, and I may have mentioned it before. There is long-established terminology for the sort of category that condensed sets form, and indeed any category that satisfies Giraud’s axioms except for the existence of a set of generators: it is known as a pretopos. In any case, please don’t go around calling it a topos, seeing as there is already a bifurcation in terminology between algebraic geometers and category theorists (elementary vs Grothendieck). SGA4 calls a category satisfying Giraud’s axioms but not the size condition a ‘faux topos’, but this terminology has not stuck. Pretopos is much more meaningful than the pejorative ‘fake topos’.


    • (Trying again…)

      I just found that SGA 4 (in Exposé VI, Exercice 3.11) does define a pretopos; I had thought that this came a bit later. Though I suppose the faux topos has all small coproducts, whereas strictly speaking a vanilla pretopos only has finite coproducts.

      These days people talk of \kappa-ary pretoposes, which have \kappa-small coproducts, and allow \kappa to be the size of the universe to allow for all small coproducts.


  2. Pingback: Various and Sundry | Not Even Wrong

  3. A_reader says:

    1) Is this project meant to result in a joint paper by the contributors to the Lean code, whatever result they obtain (i.e. that a proof exists or that the result is incorrect)? Or even a joint paper with Peter Scholze?

    2) What time frame is expected?

    Thanks for an interesting project in any case; that’s appealing, cutting-edge stuff!

    (Side comment: there seem to be some nice musical references in several titles above…)


  4. xenaproject says:

    It doesn’t have to be Lean. I think Peter mentions Lean only once, and that was regarding condensed sets, on which there had been some independent work already — but just basic stuff, nothing like the challenge here.

    Who knows what the project will result in. You want my guess? There will be some progress from the Lean people, the proof will be reduced to perhaps the Breen-Deligne resolution, and maybe some people will write it up, and it will get sent to a computer science conference proceedings, and the computer scientists won’t really know what to make of it. Such are the joys of working in this ambiguous region between two disciplines. Note that my work with Massot and Commelin on perfectoid spaces was published in a CS conference proceedings, as was Commelin’s work with Lewis on Witt vectors. I think that the big question is not “what happens to the code”, the big question is really whether it is even feasible to make any progress on the theorem at all. That’s the test. Our perfectoid work showed that it was possible to make complex modern mathematical definitions in a theorem prover. Hales and Gonthier have already demonstrated beyond all doubt that it is possible to prove highly complex theorems about relatively low-level objects (spheres, finite groups, planar graphs) in a theorem prover. This is the next step — can we prove a complex theorem about a complex object? For me this is the interesting question. What do I care about publications? I am interested in laying the foundations for developing new tools which will help humans do mathematics and this is a test to see where we are right now. It’s a bit like the RSA challenge numbers — nobody actually cares what the factors are, people care about what can actually be done.

    What time frame is expected? It’s too early to say. I was going to put the T-adic topology on Z[[T]] and then I discovered that we didn’t have ideal-adic topologies in mathlib (we have them in the perfectoid project but they were never ported over) so now I have to spend a day or two doing those. Sometimes things are slow.

    Well spotted with the musical reference. Not really my kind of thing, but apparently Scholze’s 🙂


  5. JSE says:

    Thanks for a very informative and idea-rich post, Peter!

    Kevin, how are real numbers formalized in Lean in the first place? Via Cauchy sequences? If so, the idea of proving something about real vector spaces by exhibiting R as a quotient of a subring Z[[T]]_{>r} of Z[[T]] cut out by archimedean norm conditions does somehow seem in the spirit of things; indeed I suppose you could think of this description of R as an alternate means of formalizing the real numbers, one particularly well-suited for functional analysis!


    • xenaproject says:

      Yes, real numbers are equivalence classes of Cauchy sequences under the hood, but of course the official API is that they’re a complete ordered archimedean field and users are encouraged only to use that API when building new stuff. I tried formalising the real numbers as decimal expansions recently but proving associativity of multiplication is a nightmare! The only way I could see how to do it was to construct them another way (e.g. Cauchy sequences, where associativity is easy peasy) and then construct the bijection (by developing a theory of decimal expansions for complete ordered fields) and then proving that column multiplication on decimals agrees with Cauchy sequence multiplication and hence column multiplication is associative. This is an issue even for the naturals. You can define naturals as strings of digits and even then proving associativity of multiplication is a combinatorial nightmare which I don’t know how to do directly. But proving it agrees with Peano axioms multiplication (which is associative) is easy. Sometimes different constructions tell you new things about an object, even if the object is unique up to unique isomorphism. My MSc student Amelia Livingston just found the same issue with tensor algebras; Lean now has two explicit constructions of the adjoint M \mapsto R[M] of the forgetful functor from R-algebras to R-modules, one for which the universal property is obvious and one for which injectivity of M^{\otimes n} \to R[M] is obvious.


      • JSE says:

        So then I think this sharpens my question: I am curious whether I should see the construction of the reals Peter talks about, as a quotient of a subring of Z[[T]], as “the same” as the Cauchy sequence construction, or “different,” in the sense that your two constructions of M -> R[M] are different; provably equivalent, but each construction makes a different set of facts “formally obvious.” (Somehow I feel “the same” is the right answer in the real numbers case, but my experience is very limited so I’m curious to know how you see it!)


      • xenaproject says:

        Jordan — I now think of R[1/fg] and R[1/f][1/g] as different! They are uniquely isomorphic as R-algebras but they’re not the same underlying set/type. Any two constructions of the reals are uniquely isomorphic but no two different constructions are the same in the sense of foundations (two sets are the same iff they have the same elements etc). The way maths actually works is that we have lots of different equivalence relations on objects, such as “being literally equal”, “being uniquely isomorphic”, “being canonically isomorphic” (whatever that means), “being isomorphic”, and depending on what we’re doing we can regard any of these notions as being “the same”. We even change our use of = accordingly. In EGA 1 they use the notational convention R[1/fg]=R[1/f][1/g] even though they’re not literally equal. Sometimes isomorphic things are the same (“how many groups of order 6 are there?”) and sometimes distinguishing between different notions of sameness is the mathematics (relation between pi_1(X,a) and pi_1(X,b) ). This goes back to Euclid! The square on the hypotenuse equals the sum of the squares on the other two sides. Pythagoras’ “equality” was an equivalence relation defined axiomatically though, and his proof is indeed a proof that the bigger square is related to the smaller two via his relation.


  6. Pingback: Algebraic geometry and its plot to take over the (analytic) world. – Gabe Khan's blog

  7. Pingback: Condensed mathematics – SPP 2026

  8. Pingback: Various Math Items | Not Even Wrong

  9. Pingback: The Liquid Tensor Experiment and the Rise of the Digital Homunculus | 3 Quarks Daily

  10. Pingback: To cheer you up in difficult times 27: A major recent “Lean” proof verification | Combinatorics and more

  11. Pingback: New entry on Substack newsletter on Scholze’s liquid tensor experiment | Mathematics without Apologies, by Michael Harris

  12. Pingback: Thuses – theHigherGeometer
