
December 6, 2020

The Liquid Tensor Experiment

Posted by David Corfield

Peter Scholze has just published a challenge to the automated mathematical formalisation community in a post – Liquid tensor experiment – on Kevin Buzzard’s blog. Peter explains there the motivation for the theorem he would like to be formalised, and his reasons to want a formal computer-checked proof.

I see a couple of Café interests intertwined here: formalisation in dependent type theory, and the nature of (mathematical) space.

Regarding the former, there was an intense discussion recently on MathOverflow arising from a comment on the advantages of dependent type theory by Kevin, who for some time has been promoting the prover Lean as the best means to formalise the mathematics of typical mathematics departments.

In the comments to the main question, there is some discussion of why much more has been achieved in this regard by Lean compared with HoTT/Univalent Foundations approaches. We’ve heard plenty about this latter approach at the Café, in particular about its use to develop a kind of synthetic mathematics, due to HoTT being the internal language of $(\infty, 1)$-toposes. At the end of the references in the last link are some articles using modal HoTT, in particular relating to the cohesive modalities, as in Mike’s

Brouwer’s fixed-point theorem in real-cohesive homotopy type theory, Mathematical Structures in Computer Science 28(6) (2018), pp. 856–941 (arXiv:1509.07584).

That brings me to the second issue – the nature of (mathematical) space. My question on the comparison of condensedness/pyknoticity and cohesion generated an interesting discussion through which I think it became clear that the former is pointing in a somewhat different direction to the latter.

One of the main motivations for the ‘solid’ and ‘liquid’ mathematics in Peter’s condensed mathematics program is to allow a better approach to functional analysis. Of course, Lawvere, having defined cohesion, also looked to found functional analysis in terms of something else, namely bornology, and Peter told us there that

condensed sets are bornological sets equipped with some extra structure related to “limit points”, where limits are understood in terms of ultrafilters.

I suppose one day there might be a synthetic treatment of condensedness, but the following from Peter’s post suggests an analytic approach is in order for the challenge, where he describes the proof to be formalised as:

…a very delicate argument fighting with estimates against homological algebra. There is a hierarchy of implicit constants that have to be chosen, and you have to do it in exactly the right order. In the end, one formulates Theorem 9.5 which can be proved by induction; it is a statement of the form $\forall \exists \forall \exists \forall \exists$, and there’s no messing around with the order of the quantifiers. It may well be the most logically involved statement I have ever proved. (On the other hand, if I want to claim that the theorem on liquid vector spaces makes it possible to black box functional analysis, hard estimates have to be somewhere. Better be sure the estimates actually work…!)
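Purely as illustration, here is what that quantifier shape looks like in Lean, with a toy placeholder predicate `P` standing in for the actual inequalities of Theorem 9.5 (a sketch, not anything from the real formalisation):

```lean
-- Toy illustration of a ∀∃∀∃∀∃ statement; `P` is a hypothetical placeholder.
def quantifierShape (P : Nat → Nat → Nat → Nat → Nat → Nat → Prop) : Prop :=
  ∀ a, ∃ b, ∀ c, ∃ d, ∀ e, ∃ f, P a b c d e f
```

An induction has to thread through all six binders at once, which is part of what makes such statements delicate to formalise.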

Posted at December 6, 2020 4:36 PM UTC


Re: The Liquid Tensor Experiment

Hi. Just to give you the goss about condensed v pyknoticity – right now we are sorely tempted to just ignore all the cardinal bounds and formalise condensed sets using the Barwick/Haine approach, based on the computer science maxim “build one to throw away”. Basically we’re going to assume a couple of universes exist and see what happens, on the basis that it will simply make our lives much easier, at least when it comes to actually getting started.
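For what it’s worth, “assume a couple of universes exist” is essentially free in Lean, whose type theory has a cumulative hierarchy built in. A minimal sketch of the idea, with toy names that are not from the actual experiment’s repository:

```lean
universe u

-- `Type u` lives in `Type (u + 1)`, so "small" and "large" sets coexist
-- without extra axioms: this is the sense in which universes come for free.
-- A toy stand-in for a condensed/pyknotic set: an assignment of a set of
-- sections to each object of some abstract type of compacta (hypothetical).
structure ToyCondensed (Compacta : Type u) : Type (u + 1) where
  sections : Compacta → Type u
```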

Posted by: Kevin Buzzard on December 6, 2020 5:54 PM

Re: The Liquid Tensor Experiment

The category theorists here will probably welcome your opting for a topos.

Will you need anything homotopical? I recall Peter saying something on the possible advantages of derived categories for working ‘internally’ to make solid abelian groups into a condensed category, here:

The main reason is that the limit is not equal to the derived limit, and one should be taking the derived limit…I didn’t check this carefully, but it may be true that when passing to derived categories, things are again OK; but one will definitely lose some properties.

Posted by: David Corfield on December 6, 2020 8:00 PM

Why more in Lean?

I’ve been looking at many systems for a long time (and been involved in the development of some). My opinion as to why some systems have more math than others: it is mostly sociological.

Building up a large library of formalised mathematics requires a lot of labour. And that labour needs to be organized. To achieve that, you need some dedicated people who start the ball rolling, and then slowly switch from mostly-doers to mostly-organizers. This dynamic has been well documented and studied in the general open source community.

The difficulty is that, for math, one gets very little credit (usually none, sometimes a little, and there are a few spectacular exceptions) for doing the hard work needed to get a library together. A lot of people here should be familiar with that: the same is sort-of true for the nLab. It is now recognized as an amazing resource, but did the people who spent hundreds of hours working on it get appropriate academic credit? I don’t think so. For general open source software, a lot of the ‘doers’ end up building a portfolio, which helps them get a good (or better) job in industry. Not so for your typical mathematician. So the barrier to getting a large library is quite significant. There’s a reason the libraries of Maple and Mathematica are massively larger than those of most proof assistants! [To be fair, they are older too.]

A bad system will have a hard time getting the kind of critical mass it needs. A sporadically supported system has similar issues.

Philosophy also matters: Coq is unabashedly constructive, Lean is unabashedly classical. In Lean, even things like polynomials, which are usually presented in computation-friendly ways, are noncomputable. [They have reasons for that. I am dubious, but I also can’t deny their success.]
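A minimal Lean sketch of the classical/noncomputable point (a toy definition of my own, not mathlib’s actual polynomial API): extracting a witness from a bare existence proof goes through choice, so Lean insists on the `noncomputable` marker.

```lean
-- Choice produces a witness from a mere existence proof, but the result
-- carries no algorithm, hence the `noncomputable` marker.
noncomputable def someWitness (h : ∃ n : Nat, n > 0) : Nat :=
  Classical.choose h
```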

Lean also has another huge advantage, on top of a dedicated crew of leaders: learning. Lean is new, and its makers have learned a lot from the good and bad ideas of old systems. Even more: Lean 4 is under development. This means that self-learning is going on: the errors of Lean 1, 2, 3 can be removed. A lot of other systems have steadfastly refused to do this. Of course, there’s a big cost associated with that, and this can only be borne by a community that accepts that having a better system is worth adapting thousands of hours of work. Coq lost some of its experts when it broke compatibility too much. Lean seems like it will instead hang on to them, despite breaking things. Again, sociological, not mathematical, forces at play.

Posted by: Jacques Carette on December 6, 2020 6:11 PM

Re: Why more in Lean?

A lot of people here should be familiar with that: the same is sort-of true for the nLab. It is now recognized as an amazing resource, but did the people who spent hundreds of hours working on it get appropriate academic credit? I don’t think so.

Absolutely! That’s why I get snippy when people on social media find it amusing to mock it. Much has been written by people without a secure faculty position, some without any position.

I was wondering about the possibility of having my own small contribution to the nLab count for something in terms of ‘Impact’ for my department’s submission to the UK’s Research Excellence Framework. I listed a few endorsements here and here, but it would be very difficult to establish.

Posted by: David Corfield on December 7, 2020 7:56 AM

Re: Why more in Lean?

Absolutely! That’s why I get snippy when people on social media find it amusing to mock it. Much has been written by people without a secure faculty position, some without any position.

That’s a very interesting point. I suspect you’d agree that the institutions and culture of academia are in many respects extremely conservative: theoretically 100% open to reasoned argument, but in practice sometimes absurdly reluctant to change. Occasionally that conservatism is a good thing, but mostly not. And one of the most harmful effects is the lack of proper recognition for so many activities that are obviously positive and beneficial. That can genuinely blight people’s lives.

There’s a secondary effect of this conservatism. It’s wholly understandable that you get snippy about nLab teasing, for the reason you state. But everything can benefit from good-faith criticism (and sometimes not-so-good-faith criticism), which is much harder to hear and learn from if one’s defensive guard is up through feeling underappreciated. (I’m speaking in generalities, not about you personally.) Or to put it another way, if one is feeling secure and respected, it’s easier to be open to criticism and use it to improve. When our systems and traditions deprive people of that security and respect, they also make life harder for them in this way.

Posted by: Tom Leinster on December 7, 2020 11:46 AM

Re: Why more in Lean?

Yes, but criticism has to be grounded in an understanding of the circumstances of the production of the target. That the nLab lacks loads of more accessible expository pages is plain to everyone. But it’s no particular person’s job to write these. That a (very) few people have been kind enough to share thousands of pages of their insights for no obvious personal gain is remarkable in itself. They don’t owe anyone anything.

And anyone can edit, just please be a little sensitive to the ways of the community, and report what you’re doing at the nForum and ask advice there.

Ask not what the nLab can do for you,…

Posted by: David Corfield on December 7, 2020 1:40 PM

Re: Why more in Lean?

I have little disagreement with that. Let me emphasize, though, that I wasn’t especially talking about the nLab, but the general phenomenon of what happens when you pour time and energy into activities that aren’t appropriately rewarded by the establishment. Jacques gave one example, you gave another, and I have others in mind myself.

Posted by: Tom Leinster on December 7, 2020 4:08 PM

Re: Why more in Lean?

David wrote:

Yes, but criticism has to be grounded in an understanding of the circumstances of the production of the target.

It rarely is. Most people who read the nLab see such an awesome, intimidating wall of highly esoteric information that they can’t easily imagine that it’s made by people “like them”: individuals who are doing this as a hobby, not a well-funded organization with committees and review boards. Nobody they know says things like this:

Recall that given a small site $(\mathcal{C}, J)$ there is a geometric theory $\mathbb{F}_J$ called the theory of $J$-continuous flat functors that (albeit uneconomically) axiomatizes the theory classified by $Sh(\mathcal{C}, J)$ using the objects and morphisms in $\mathcal{C}$ for its signature. When $Sh(\mathcal{C}, J)$ is exponentiable this theory is dualizable, and, in the case where the site $(\mathcal{C}, J_c)$ is a coherent site, i.e. $\mathcal{C}$ has finite limits and $J_c$ is generated by finite covering families, the following theory $\mathbb{S}$, called the theory of sheaves on $(\mathcal{C}, J_c)$, axiomatizes the dual theory $\mathbb{F}_{J_c}^{\ast}$ (in an equally uneconomic way):

So they don’t understand that this was jotted down by someone in their spare time… and they certainly don’t feel capable of jumping in and helping out.

I don’t think there’s any way around this, short of showing pictures of the top nLab contributors on every page, and saying how busy they are with other things. Like a picture of you mowing the lawn or something.

So yes, there will be sniping on social media; you just have to put up with it. But if you go to the Category Theory Community Server you’ll see young category theorists absolutely rely on the nLab… and that’s your thanks.

How to get these people to join in building the nLab? That’s an interesting challenge. I think it would require nLabbers to frequent the Category Theory Community Server, and act sort of friendly. There’s a big generation gap.

Posted by: John Baez on December 22, 2020 5:46 PM

Re: The Liquid Tensor Experiment

I have been playing around with an internal axiomatization of condensed (or, really, pyknotic) types, but I haven’t been able to get it quite right. I’ll just drop my musings here in case they interest some more capable minds.

My hunch is that we can begin the axiomatization with what I want to call the “condensation principle”:

Every “ultrafilter” on a “compactum” is principal (generated by a unique element).

I call the unique element which generates the ultrafilter its “condensate” — that is, the inverse to the inclusion of a “compactum” into its space of “ultrafilters” is called the “condensation” function. The meaning of “ultrafilter” and “compactum” are up to interpretation. But the “compacta” (a kind of set) should be (or narrowly include) the representables in the intended model, and the notion of “ultrafilter” should degenerate with excluded middle into a standard definition.

One pass at making this notion precise is to define an “ultrafilter” to be simply a prime filter on the powerset, which is to say a Heyting morphism $(X \to \Omega) \to \Omega$. We might then phrase the axiom as saying that every prime filter on a “compactum” is completely prime (a frame morphism, not just a Heyting morphism), and then rely on the definition of “compactum” (say, a separation condition) to let us conclude that the completely prime filters on the powerset are principal.
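To make the shape concrete, here is one naive rendering of “prime filter on the powerset” in Lean, reading $\Omega$ as `Prop` (my own sketch under that reading, not a claim about the intended semantics):

```lean
-- A naive "prime filter on the powerset of X", with Ω read as Prop.
-- `mem A` means "the subset A belongs to the filter".
structure PrimeFilter (X : Type) where
  mem         : (X → Prop) → Prop
  mono        : ∀ A B, (∀ x, A x → B x) → mem A → mem B
  top_mem     : mem (fun _ => True)
  inter_mem   : ∀ A B, mem A → mem B → mem (fun x => A x ∧ B x)
  bot_not_mem : ¬ mem (fun _ => False)
  prime       : ∀ A B, mem (fun x => A x ∨ B x) → mem A ∨ mem B

-- The principal filter at a point. The "condensation principle" would ask
-- that on a "compactum" every prime filter arises this way, uniquely.
def principal {X : Type} (x : X) : PrimeFilter X where
  mem A := A x
  mono _ _ h hA := h x hA
  top_mem := trivial
  inter_mem _ _ hA hB := ⟨hA, hB⟩
  bot_not_mem h := h
  prime _ _ h := h
```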

Making this useful and precise has been difficult for me, so I’ll talk a bit about why I think this is the right direction.

In his thesis, Ingo Blechschmidt gives an axiom which describes the internal logic of the ring classifier: there is a ring $\mathbb{A}$ (to be the affine line) which is synthetically quasi-coherent in that it satisfies the following law:

For every finitely presented $\mathbb{A}$-algebra $R$, the map $R \to (\mathrm{Hom}_{\mathbb{A}}(R, \mathbb{A}) \to \mathbb{A})$ given by sending $r : R$ to evaluation at $r$ is an isomorphism.

In the algebro-geometric context, this means that $R$ is precisely the ring of functions on its synthetic spectrum, the space of “algebraic functionals” on $R$. To get the big Zariski topos, one asks further that $\mathbb{A}$ be local, and to get finer topologies one asks for further properties of $\mathbb{A}$ (which correspond to the structure implied on stalks by the topologies, e.g. strictly Henselian for étale).

I believe something similar can work in the setting of condensed sets. There are two “algebraic” contexts which correspond to compacta: compact regular frames and commutative C*-algebras. Each of these should give a version of Ingo’s axiom. These axioms would look something like:

Every* compact regular frame is the powerset of its set of points.

or

Every* commutative C* algebra is the algebra of functions on its space of characters.

The first of these isn’t right, since the powerset of a set is compact if and only if the set is Kuratowski finite. We might try and change the notion of “compact”; for example, we might instead ask that it is compact only with respect to discretely indexed covers (which is a way of saying “externally” compact — as a coherent object is). But any change of this sort messes with the equivalence between compact regular locales and commutative C* algebras.

If we just wanted to pick out the representables, we could define “compactum” to be a coherent set, by which I mean a set which is “discretely compact” in that it is compact for discretely indexed covers, and is stable in that the kernel pair of any map from a discretely compact set is discretely compact. But it isn’t clear to me how this notion of coherent — which is a straightforward internalization of the notion you can find in the Elephant — relates to any useful compact regular locales, which are the correct intuitionistic notion of compactum as a space rather than a bare set.

Some of these axioms seem related to the notions of solidity and liquidity in their functional guises. Maybe an axiom more informed by the actual practice of condensed mathematics is a better idea :)

* It is perhaps crucial to restrict which sorts of locales and C*-algebras this axiom applies to (synthetic quasi-coherence just applies to the finitely presented ones…). Certainly size issues come into play, but even more deeply, there may be some “pro-finiteness” baked into the class to which these axioms apply.

Posted by: David Jaz Myers on December 7, 2020 8:10 PM

Re: The Liquid Tensor Experiment

If a constructive version of ultrafilters would help, I see that a while ago Andrej Bauer wrote:

The proof of existence of the codensity monad really looks constructive, which is to be expected since it’s just general category theory. So this is quite interesting because it does indeed suggest a constructive alternative to ultrafilters.

The other remark I wanted to make was that the codensity monad for the inclusion of finite sets into sets reminds me very strongly of Martin Escardo’s selection monad. This is worth looking into.

Posted by: David Corfield on December 29, 2020 12:26 PM

Re: The Liquid Tensor Experiment

I thought I’d take a look at Andrej’s hint and started an nLab page on the selection monad.

I guess the connection is something like seeing what David JM above calls the “condensate” of an ultrafilter as a kind of selection function for integration against the ultrafilter.

Posted by: David Corfield on December 31, 2020 10:59 AM

Re: The Liquid Tensor Experiment

An idle question: bad quotients like the boundary $P_1(R)/PGl_2(Z)$ of the elliptic moduli space seem (like $R/Q$?) to live on the solid/liquid shoreline. Do these new ideas shed any light on such shadowy things?

Posted by: jackjohnson on December 7, 2020 10:11 PM

Re: The Liquid Tensor Experiment

You can certainly take the quotient $\mathbb{P}^1(\mathbb{R})/\mathrm{SL}_2(\mathbb{Z})$ in condensed sets, just like $\mathbb{R}/\mathbb{Q}$, and this quotient is reasonable as a condensed set. But this seems unrelated to solid or liquid modules.

Posted by: Peter Scholze on December 7, 2020 11:12 PM

Re: The Liquid Tensor Experiment

Thanks, good to know. Does this quotient have an interpretation as a moduli space (asking for a friend)?

Posted by: jackjohnson on December 8, 2020 12:23 AM

Re: The Liquid Tensor Experiment

Well, everything is a moduli space (of the functor it represents). In this case, it’s parametrizing oriented rank $2$ lattices together with a line in the associated real vector space. (Note that these notions make sense relatively over a profinite set (or general condensed set), so you really define a condensed set, not just a set, this way.)

Posted by: Peter Scholze on December 8, 2020 8:06 AM

Re: The Liquid Tensor Experiment

Very nice! Thanks again, sincerely.

Posted by: jackjohnson on December 8, 2020 1:27 PM

Re: The Liquid Tensor Experiment

This is addressing a minor point, but I’m a bit perturbed by the constant focus on the difference between pyknotic and condensed. Really, there is no difference. It’s an aesthetic choice more than anything else, and in particular I think it’s incorrect to say that condensed and pyknotic are pointing in somewhat different directions. What may be true is that Clark & Peter H had different applications in mind than me & Peter S, but that has nothing to do with which aesthetic choice the respective parties made in setting up the theory.

Posted by: Dustin Clausen on December 22, 2020 2:25 PM

Re: The Liquid Tensor Experiment

Well, they literally have different consistency strengths. I expect pyknotic stuff to be conservative over condensed stuff when dealing with only the latter objects. But then mathematicians might as well use NBG as a foundation rather than ZFC, since it’s conservative for statements about sets. In fact I like this approach, so I’m not saying we shouldn’t do this kind of thing.

But try getting Johan de Jong to put pyknotic machinery into the Stacks Project. Based on the foundational predilections he has publicly stated, I imagine he’d only be happy with condensed mathematics.

Posted by: David Roberts on December 23, 2020 2:56 AM

Re: The Liquid Tensor Experiment

That’s a fair point, David. I should have said that there’s no difference practically speaking, meaning as a “user” of the theory. But of course there is a literal difference, and if one is unwilling to assume universes then one certainly cares about this difference.

On the other hand there is the third option of just taking $\kappa$-condensed sets for a fixed uncountable strong limit cardinal $\kappa$, e.g. $\beth_{\aleph_0}$. This is what Bhatt-Scholze did in their pro-étale paper. Then you get an honest topos without assuming universes, and again for all practical purposes I’m aware of there’s no difference between working in condensed sets, pyknotic sets, or $\kappa$-condensed sets. In particular for the purposes of the results Peter mentions in his challenge there’s no difference.
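For reference (a standard definition, my gloss rather than part of the comment): a cardinal $\kappa$ is a strong limit when

$$2^{\lambda} < \kappa \quad \text{for all } \lambda < \kappa,$$

and $\beth_{\aleph_0} = \sup_n \beth_n$ is the least uncountable example.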

Posted by: Dustin Clausen on December 23, 2020 12:33 PM

Re: The Liquid Tensor Experiment

OK, cool. Incidentally I think the “pick a suitably big $\kappa$” approach is also much closer to the Stacks Project philosophy, where only small sites are considered.

Posted by: David Roberts on December 24, 2020 11:08 AM

Re: The Liquid Tensor Experiment

Dustin wrote

I think it’s incorrect to say that condensed and pyknotic are pointing in somewhat different directions.

If, as seems likely, you’re referring to my comment in the post

My question on the comparison of condensedness/pyknoticity and cohesion generated an interesting discussion through which I think it became clear that the former is pointing in a somewhat different direction to the latter,

that’s not what I meant. I was contrasting the near synonymous pair condensedness/pyknoticity with cohesion.

Posted by: David Corfield on December 23, 2020 1:16 PM

Re: The Liquid Tensor Experiment

Hi David,

Thank you! I had indeed misread what you wrote. My apologies. (I was also surprised to read something from you that I disagreed with, to be honest, so I should’ve gone back and double checked to be sure…)

Posted by: Dustin Clausen on December 23, 2020 1:28 PM

Re: The Liquid Tensor Experiment

Hi Dustin. No problem.

The contrast with cohesion at least prompted David Jaz Myers to tell us of his efforts to give a synthetic account of condensedness here.

It feels like there should be more to be said category-theoretically about the world of ultrafilters and ultracategories.

Perhaps any such insight would cast light on an old discussion we had on the relationship between category theory and model theory.

Posted by: David Corfield on December 23, 2020 2:29 PM

Re: The Liquid Tensor Experiment

Since I’ve been mentioned I’ll drop in here with a bit more on what condensed homotopy type theory might look like. Especially since the Sierpinski space might play a crucial role in the axiomatization, and this is one of those places where the condensed and pyknotic approaches differ. I should say what I call “condensed” HoTT really should be called “pyknotic” HoTT because we need the topos, but I like the simpler English word better…

Condensed HoTT would be a specialization of what Mike Shulman calls Spatial HoTT, which uses a separation of variables into “crisp” and not “crisp” (or “spatial”, for emphasis) varieties. The crisp variables can be thought of as discrete or “external” variables. Spatial HoTT adds both a comonadic (flat) and a monadic (sharp) modality corresponding to the existence of fully faithful left and right adjoints to the global sections functor. These are the (co)reflections into discrete and codiscrete spaces.
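Schematically, following the real-cohesion setup: the global sections functor $\Gamma$ has fully faithful left and right adjoints $\Delta$ (discrete) and $\nabla$ (codiscrete), and the modalities are the induced (co)monads:

$$\Delta \dashv \Gamma \dashv \nabla, \qquad \flat = \Delta \circ \Gamma \ \text{(a comonad)}, \qquad \sharp = \nabla \circ \Gamma \ \text{(a monad)}.$$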

The difference between this and Cohesive HoTT is the non-existence of a global modality left adjoint to flat which produces the homotopy type of a type. But we should have a reflection into discrete types for some types which we might call “continua”, and these should include the CW complexes on which that reflection should be given by localization at the unit interval.

Ok, on to the particular character of condensed HoTT. Here’s a slightly more down-to-earth approach than what I posted above.

The main idea is to unify, internally, the dual spatial and algebraic representables. There are two algebraic duals to the site of compact Hausdorff spaces: compact regular frames, and commutative C*-algebras. We could use either of these, and a good litmus test is that each axiom should be able to prove the other.

An internal spatial representable is given by a coherent set, by which I mean a crisp set in which every discretely indexed cover has a finite subcover (“discretely compact”) and which is stable for this property in that the kernel pair of any map from another discretely compact object is also discretely compact. This is an internal way of saying “coherent object”.

The C* version of the axiom would then be:

For any coherent set $X$, the evaluation map $X \to C^{\ast}((X \to \mathbb{C}), \mathbb{C})$ including $X$ into the space of characters of its commutative C*-algebra of functions is an isomorphism.

The $\mathbb{C}$ here is the usual (Dedekind) complex numbers, which should be the complex numbers as included from topological spaces externally.

On the other hand, it would be a shame to have to turn every topological problem into a statement about C* algebras, so we should have a more basic duality that works with the topological structure of representables. Here is where I went astray above; I thought we could do this fully internally without assuming sui generis any sets. Now, I’m not so sure.

The pivot over which the duality between compact Hausdorff spaces and compact regular frames flips is the Sierpinski space $\Sigma$. The axiom for compact regular frames would be:

Let $X$ be a coherent set. Then the evaluation map $X \to \mathrm{Frame}((X \to \Sigma), \Sigma)$ is an isomorphism. Or, in other words, every completely prime filter of “open” subsets of $X$ is principal.

This is what I would call the condensation principle — a completely prime filter of opens “condenses” to a unique point contained in it.

Of course, the Sierpinski space is only pyknotic, and not condensed. I have the feeling that this is still not really a meaningful difference.

I do not know how to prove one of these axioms from the other (but I haven’t really tried). However, it seems important to me that we be able to construct $\Sigma$ internally (so that we are assuming a proposition rather than an element of a groupoid). A naive guess might be to define $\Sigma = \mathbb{C}/\mathbb{C}^{\ast}$; I’m doubtful that this equation holds for pyknotic sets as it does for topological spaces. It amounts to the claim that for any open subset $U$ of a compact Hausdorff space $X$, there is a compact Hausdorff space $Y$ surjecting onto $X$ and a complex-valued function on $Y$ so that the elements of $Y$ taking non-zero values are precisely those sent to $U$.

Posted by: David Jaz Myers on December 27, 2020 3:25 PM

Re: The Liquid Tensor Experiment

If condensed sets are to be a replacement for topological spaces, one should presumably be able to develop some elementary general topology in this setting. For instance, one can ask the following, which might help people to gain some intuition for condensed sets.

1) Is there a nice way to describe the unit interval, or the real numbers, as a condensed set in an elementary way, without referring to the unit interval (resp. real numbers) as a topological space? I am aware that one can use decimal/binary expansions to obtain a relevant profinite set, but I’d like to see this fully worked out, i.e. an explicit description as a sheaf.

2)

a) Is there a nice way to say what a compact condensed set is? Same question for connectedness.

b) In the usual category of topological spaces, there is a pathology that compact objects in the category theoretic sense (preservation of filtered colimits) are not exactly the compact topological spaces in the usual sense. Can we use the category theoretic notion in the condensed case?

c) It would seem to me that an elementary description of the unit interval as a condensed set using decimal or binary expansions would probably lend itself well to proving compactness of it. Can we spell that proof out for whatever definition of compactness is chosen?

d) Connectedness of the real numbers is, of course, a very low-level fact, relying on completeness. Again, it seems to me that use of nested intervals in the condensed setting might lend itself well to a completeness-type argument. Can we spell this out?

3) Related to 1) and 2d). If one is to have a theory of real numbers, one has to build in Archimedean-ness somehow. One cannot obtain something from nothing, so I would imagine that 1) might be difficult unless one imposes Archimedean-ness as an axiom in one form or another. Suppose that we do this, i.e. one takes as one’s foundation of analysis the category of condensed sets plus some axiom implying Archimedean-ness for some condensed set which will play the role of the real numbers. I can imagine that the kind of theorem which Professor Scholze is asking for a formal proof of might become more enlightening/approachable in a different way in this setting. Can we formulate some nice axiom implying Archimedean-ness in this way, and explore whether we can at least formulate if not prove something like that theorem in elementary terms?

Posted by: Richard Williamson on December 23, 2020 12:44 AM

Re: The Liquid Tensor Experiment

Hi Richard,

Thanks for the questions!

1) Indeed, the unit interval is a quotient of the Cantor set $\{0,1\}^{\mathbb{N}}$ by the equivalence relation which “glues adjacent endpoints together”, or makes sure that $0.999999\ldots = 1$ and the like. This induces a description as a sheaf on extremally disconnected profinite sets $S$, since for such $S$ mapping out of $S$ commutes with quotients by compact Hausdorff equivalence relations.
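Spelled out (a standard fact; the notation is mine, not Dustin’s): the quotient map is binary expansion,

$$q \colon \{0,1\}^{\mathbb{N}} \twoheadrightarrow [0,1], \qquad (a_n)_{n \ge 1} \mapsto \sum_{n \ge 1} a_n 2^{-n},$$

and the induced relation identifies the two binary expansions $(a_1, \ldots, a_k, 0, 1, 1, 1, \ldots)$ and $(a_1, \ldots, a_k, 1, 0, 0, 0, \ldots)$ of each dyadic rational; this is a closed equivalence relation on a compact Hausdorff space.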

I think there’s also nothing in principle wrong with describing the real numbers as the sheaf which assigns to a profinite set $S$ the completion of the set of locally constant functions $S \to \mathbb{Q}$ with respect to the sup norm.

But another slightly more elaborate description of $\mathbb{R}$ has been crucial for us in the liquid story, writing $\mathbb{R}$ as a quotient of an overconvergent arithmetic Laurent series ring, as Peter describes in his challenge.

2) a,b) The category of condensed sets is compactly generated, but the compact objects themselves are not so nice. They are the quotients of compact Hausdorff spaces by equivalence relations which are generated, as equivalence relations, by some closed relation. There are many such equivalence relations which are not themselves closed, so compact condensed sets include lots of non-Hausdorff phenomena in addition to the usual compact Hausdorff spaces.

On the other hand there is another kind of “compactness” condition which comes from the world of topos theory. It’s called being “quasicompact and quasiseparated”. The quasicompact quasiseparated condensed sets are exactly the compact Hausdorff spaces.

One can also define connectedness in a topos-theoretic way. This unwinds to something very naive: a condensed set is connected if it’s nonempty and whenever you write it as a coproduct of two other condensed sets, one of them must be empty. For compactly generated weak Hausdorff spaces, which sit fully faithfully in condensed sets, it’s the same as connectedness for topological spaces.

c) Yes, you can see that every profinite set is quasicompact and quasiseparated because the site you chose is finitary, and then $[0,1]$ is the quotient of $\prod \{0,1\}$ by a closed equivalence relation, so it’s also quasicompact quasiseparated. Actually if you think about it this is basically the usual proof of compactness of $[0,1]$ by dyadic subdivision.

d) I haven’t thought about proving connectedness of the real numbers in a purely condensed way. I’d guess it will again be a version of some ordinary proof.

3) I’m not sure I follow the question here. I don’t see why you’d have to build Archimedeanness in, so to speak, or add it as an axiom, when you can just construct $\mathbb{R}$ however you like and then work with it. Actually, the third description of $\mathbb{R}$ I mentioned above is the most useful one, and it seems much more in the nature of direct construction than axiomatic specification.

Posted by: Dustin Clausen on December 23, 2020 1:09 PM

Re: The Liquid Tensor Experiment

Hi Dustin, thank you very much for the reply. This is very helpful. For now, I’ll just focus on the following; I’ll get back to my other questions/your replies to them at a later point.

Indeed, the unit interval is a quotient of the Cantor set $\{0,1\}^{\mathbb{N}}$ by the equivalence relation which “glues adjacent endpoints together”, or makes sure that $0.999999\ldots = 1$ and the like. This induces a description as a sheaf on extremally disconnected profinite sets $S$, since for such $S$ mapping out of $S$ commutes with quotients by compact Hausdorff equivalence relations.

Thank you. The following re-telling of this, which avoids referring to topological spaces at all, was something like what I was looking for: motivating the definition of condensed sets from a standard category theoretic point of view.

The profinite set $\{0,1\}^{\mathbb{N}}$ is almost the unit interval. It would be the unit interval if we could identify things like $0.1$ and $0.0111111\ldots$. Thus we’d like to take a quotient (colimit). However, the category of profinite sets is not co-complete, and there is no reason for this quotient to exist as a profinite set. The obvious thing to do then is to pass to a co-completion of profinite sets. We can pass to the free co-completion, that is, take presheaves (I will ignore size matters), but then of course we destroy existing colimits in profinite sets; the remedy then is to work with sheaves with respect to some topology. I can’t say at this point why jointly surjective covers give exactly the right topology, but they are one of the most minimal topologies one could choose so that representables are sheaves and so that one retains a reasonable amount of information, so they seem a reasonable choice to start with. Once we know that representables are sheaves, it is pretty obvious that the required colimit is a sheaf.

End of re-telling! The sort of thing I was getting at now was to see what we can do from this starting point, where we have not mentioned topological spaces yet; can we continue to proceed without mentioning them?

As I say, I will come back to some other aspects of your reply. For now, I’d like to try to formulate a statement of the liquidity theorem from this perspective. Before we can do that, though, there are some basic things to try to understand.

Firstly, we can of course construct the $p$-adic unit interval ($p$-adic integers) simply as the profinite set $\{0, \ldots, p-1\}^{\mathbb{N}}$, and thus as a representable condensed set. Noting that we can replace the use of $\{0, 1\}$ in the construction of the real numbers with $\{0, \ldots, p-1\}$ for any $p$ and tweaking the quotient in the obvious way (i.e. we obtain an isomorphic condensed set in this way), the only difference between the real numbers and the $p$-adic integers for any $p$ comes from the quotient in the definition of the real numbers.
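For comparison (standard, and my notation rather than Richard’s): in the $p$-adic case the digit map

$$\{0, \ldots, p-1\}^{\mathbb{N}} \;\xrightarrow{\ \sim\ }\; \mathbb{Z}_p, \qquad (a_n)_{n \ge 0} \mapsto \sum_{n \ge 0} a_n p^n$$

is already a homeomorphism, so no quotient is needed; the carrying in base $p$ that forces a quotient is an archimedean phenomenon particular to $\mathbb{R}$.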

One can show that both the real and $p$-adic numbers constructed in this way can be equipped with the structure of condensed abelian groups (i.e. sheaves of abelian groups), again without referring to topological spaces. There is a perfectly good tensor product of condensed abelian groups. Here comes the first important point in the ‘liquid’ story: it is claimed that the tensor product of the real numbers and $p$-adic numbers in this sense is no good.

Before we go further, could we dwell on this a moment? What properties would we like this tensor product to have? Professor Scholze’s article is a bit vague on why the naive tensor product does not have these properties; can we spell this out?

Posted by: Richard Williamson on December 27, 2020 2:22 PM

Re: The Liquid Tensor Experiment

I can’t say at this point why jointly surjective covers give exactly the right topology

I’m pretty sure it’s whatever passes for the coherent topology on the category of profinite sets. If surjective maps of profinite sets are precisely the regular epimorphisms, then this is definitely true, but I don’t know off the top of my head.

Posted by: David Roberts on December 28, 2020 3:25 AM

Re: The Liquid Tensor Experiment

Profinite sets are the same as Stone spaces. So if we have a surjection $p : Y \to X$ between Stone spaces, it suffices to show that this is a quotient map. But this follows from the fact that a continuous surjection from a compact space to a Hausdorff space is a closed map, and that a closed continuous surjection is a quotient map; see the third item here. Somewhat relatedly, compact Hausdorff spaces form a pretopos, so that epis are coequalizers of their kernel pairs.

Posted by: Todd Trimble on December 28, 2020 8:35 PM

Re: The Liquid Tensor Experiment

I haven’t been following this thread super-closely, so maybe this doesn’t affect much the overall point you (Richard) were making, but my eyes did alight on this:

However, the category of profinite sets is not co-complete

Isn’t it? I thought the category of profinite sets was equivalent to the category of Stone spaces, which is equivalent to the opposite of Boolean algebras. Since the category of Boolean algebras is complete, the category of Stone spaces ought to be cocomplete.

Possibly you mean to say that profinite sets are not closed under colimits taken in an ambient category like $Top$?

Posted by: Todd Trimble on December 29, 2020 12:08 AM

Re: The Liquid Tensor Experiment

Thank you for the clarification/correction Todd! Yes, it would have been more precise to say that colimits do not behave as we need here (e.g. the one which should give the real numbers does not ‘exist’ in the sense that if we try to take that colimit, we get completely the wrong answer).

What I had in mind was that there is no real reason for general colimits to exist in a category of pro-objects, and if they do exist, they probably do not behave in a useful way for most purposes; if one wishes to use pro-objects as ‘local objects’ which one glues, it will almost certainly be better to pass to pre-sheaves (and thus deliberately destroy the ‘wrong’ colimits!).

As you hinted, I can maybe also remark that I am keen to think of pro-finite sets, or more generally pro-sets, here simply as pro-objects in finite sets, without making use of any equivalence to a category of (some) topological spaces:-).

Posted by: Richard Williamson on December 29, 2020 12:55 PM

Re: The Liquid Tensor Experiment

PS - I hopped from talking about the unit interval to the real numbers. Once one has a unit interval, one can of course obtain the real numbers by glueing copies of the unit interval together end to end. No doubt one can proceed in other ways, e.g. in the construction of the unit interval that I outlined, one can probably replace $\{0, 1\}^{\mathbb{N}}$ by $\mathbb{Z} \times \{0, 1\}^{\mathbb{N}}$, where $\mathbb{Z}$ is regarded as a discrete profinite set.

Posted by: Richard Williamson on December 27, 2020 4:20 PM

Re: The Liquid Tensor Experiment

An update on what has been achieved by Peter Scholze.

Posted by: David Corfield on June 5, 2021 6:21 PM
