
June 10, 2021

Large Sets 2

Posted by Tom Leinster

Previously: Part 1. Next: Part 3

The world of large cardinals is inhabited by objects with names like inaccessible, ineffable and indescribable, evoking the vision of sets so large that they cannot be reached or attained by any means we have available. In this post, I’ll talk about the smallest sets that cannot be reached using the axioms of ETCS: limits.

Most of the ETCS axioms assert the existence of certain sets, either outright or constructed from other sets and functions between them. Informally, those axioms say:

  • We have sets $0$, $1$, $2$ and $\mathbb{N}$.

  • For any sets $A$ and $B$, we have their product $A \times B$.

  • For any sets $A$ and $B$, function $f: A \to B$, and $b \in B$, we have the fibre $f^{-1}(b)$.

  • For any sets $A$ and $B$, we have the set $B^A$ of functions from $A$ to $B$.

(That accounts for seven of the ten ETCS axioms, in the form I listed them last time. The other three axioms are that sets and functions form a category, a function is determined by its effect on elements, and the axiom of choice.)

We can use the axioms of ETCS to build new sets, such as $\mathbb{N}^{2^{\mathbb{N}}}$. But we can also contemplate sets that are too big to be constructed in this way — that “cannot be reached from below”.

Here “big” refers to the relation $\leq$ on sets, defined by $X \leq Y$ if and only if there exists an injection $X \to Y$. Strict inequality $X \lt Y$ means that $X \leq Y$ and $X$ is not isomorphic to $Y$. The Cantor–Bernstein theorem tells us that $X \leq Y \leq X$ if and only if $X \cong Y$, so another way to say $X \lt Y$ is “$X \leq Y$ but not $Y \leq X$”.
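
A standard example of strict inequality, worth keeping in mind for what follows: Cantor’s diagonal argument gives $A \lt 2^A$ for every set $A$. Indeed, $a \mapsto \{a\}$ defines an injection $A \to 2^A$, while no injection $2^A \to A$ can exist, since (classically) it would yield a surjection $A \to 2^A$, which the diagonal argument rules out.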

A set $X$ that “cannot be reached from below” using ETCS is one with the following properties:

  • $0 \lt X$, $1 \lt X$, $2 \lt X$ and $\mathbb{N} \lt X$.

  • For any sets $A \lt X$ and $B \lt X$, we have $A \times B \lt X$.

  • For any sets $A \lt X$ and $B \lt X$, function $f: A \to B$, and $b \in B$, we have $f^{-1}(b) \lt X$.

  • For any sets $A \lt X$ and $B \lt X$, we have $B^A \lt X$.

There’s loads of redundancy in this list — for example, $A \times B \cong \max(A, B)$ when $A$ and $B$ are infinite, so the item about products can be skipped. It’s easy to see that $X$ satisfies all these conditions if and only if it satisfies just these two (a sketch follows the list):

  • $\mathbb{N} \lt X$;

  • for any sets $A \lt X$ and $B \lt X$, we have $B^A \lt X$.
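
For completeness, here is a sketch of the nontrivial direction (my filling-in of the “easy to see”). Assume the two conditions. Then $0 \lt 1 \lt 2 \lt \mathbb{N} \lt X$ gives the first item of the longer list. Any fibre of $f: A \to B$ injects into $A$, so $f^{-1}(b) \leq A \lt X$. And for products, either $A \times B$ is finite, hence $\lt \mathbb{N} \lt X$, or $A \times B \cong \max(A, B) \lt X$ by the isomorphism quoted above. Exponentials are the second condition itself.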

A set $X$ satisfying the first condition is, of course, said to be uncountable. An infinite set $X$ satisfying the second condition is said to be a strong limit.

(I’ll come back to that word “strong” later.)

This definition of strong limit is still a bit redundant. Since $B^A \leq \max(2^A, 2^B)$ for infinite sets $A$ and $B$, it’s equivalent to define an infinite set $X$ to be a strong limit if and only if:

  • for any set $A \lt X$, we have $2^A \lt X$.

And in fact, that’s how the definition of strong limit is usually phrased.
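
For the record, the inequality just quoted comes from the chain

$$B^A \;\leq\; (2^B)^A \;\cong\; 2^{B \times A} \;\cong\; 2^{\max(A, B)} \;\cong\; \max(2^A, 2^B),$$

valid for infinite $A$ and $B$: the first step uses $B \leq 2^B$, the second is the exponential law, the third uses $B \times A \cong \max(A, B)$, and the last holds because $\max(A, B)$ is one of $A$ and $B$.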

For example, $\mathbb{N}$ is a strong limit. But no power set $2^A$ is a strong limit, since $A \lt 2^A$ and $2^A \nless 2^A$. So none of $2^{\mathbb{N}}$, $2^{2^{\mathbb{N}}}$, … is a strong limit.

Are there any strong limits apart from $\mathbb{N}$? That is, are there any uncountable strong limits?

Well, it’s almost in the definition that the existence of uncountable strong limits can’t be proved in ETCS, unless ETCS is inconsistent. Here’s why.

Take a model of ETCS, that is, a category satisfying the ETCS axioms. Let’s temporarily say that a set (in this model) is “small” if it is strictly smaller than every uncountable strong limit. Then the small sets are also a model of ETCS. For example, if $A$ and $B$ are small then for every uncountable strong limit $X$, we have $A \lt X$ and $B \lt X$, therefore $B^A \lt X$; hence $B^A$ is small. So we’ve shown:

For any model of ETCS, the sets smaller than every uncountable strong limit are also a model of ETCS.

This immediately implies a simple independence result:

It is consistent with ETCS that there are no uncountable strong limits.

For if ETCS is consistent then we can take a model of it and cut down to the model consisting of just the “small” sets. And by construction, that cut-down model of ETCS contains no uncountable strong limit.

Let me digress for a moment on the “strong” terminology. In set theory, there are two standard ways of making a set $A$ slightly bigger. One is to take its power set, $2^A$. The other is to take the smallest set strictly bigger than $A$, denoted by $A^+$ (the successor of $A$). A priori, there is no reason to expect that $A^+$ and $2^A$ should be the same, and we’ve known since Cohen that, in fact, they needn’t be. So whenever we see something interesting involving power sets, we can ask what would happen if we replaced them by successors, and vice versa.

In particular, we can make this replacement in the definition of strong limit. Thus, an infinite set $X$ is said to be a weak limit if:

  • for any set $A \lt X$, we have $A^+ \lt X$.

It should be clear that every strong limit is a weak limit. It’s also nearly immediate that an infinite set is a weak limit if and only if it is not the successor of anything. That is, every infinite set is a weak limit or a successor, but not both.

If there are any models of ETCS at all, there’s one in which the generalized continuum hypothesis holds, i.e. $A^+ = 2^A$ for all infinite sets $A$. In such a model, weak limit = strong limit, so by the result above:

It is consistent with ETCS that there are no uncountable weak limits.

That’s the end of the digression on weak limits. However, I want to point out that when I defined $A^+$, I implicitly assumed the fact that any family $(A_i)_{i \in I}$ of sets, indexed over some nonempty set $I$, has a smallest member. (For without this fact, how would we know that there’s a smallest set strictly bigger than $A$?) This can be proved using Zorn’s lemma, or by arbitrarily well-ordering each member.
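
(A minimal sketch of the well-ordering route, filling in the one-line proof just mentioned: choose a well-ordering of each $A_i$; any nonempty collection of well-ordered sets contains one of least order type, since well-ordered sets are mutually comparable; and the underlying set of that least member injects into every $A_i$, making it a smallest member of the family.)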

The fact that every nonempty family of sets has a smallest element is extremely important, and I’ll use it over and over again throughout these posts.

For example, it tells us that if (in a given model of ETCS) there are any uncountable strong limits at all, there is a smallest one. So earlier, when I wrote this —

Let’s temporarily say that a set (in this model) is “small” if it is strictly smaller than every uncountable strong limit

— it can be understood as follows. If there are no uncountable strong limits, then every set is small. If there is an uncountable strong limit, then there is a least one, $X$, and a set $A$ is small if and only if $A \lt X$.

In fact, it’s easy to describe the smallest uncountable strong limit, assuming there are any uncountable strong limits at all. We know that every uncountable strong limit is bigger than all of

$$\mathbb{N},\ 2^{\mathbb{N}},\ 2^{2^{\mathbb{N}}},\ \ldots.$$

Since this sequence of sets has an upper bound, it has a least upper bound or supremum. Call it $X$. You should be able to convince yourself that $X$ is an uncountable strong limit. So, $X$ is the smallest uncountable strong limit. The standard name for $X$ is $\beth_\omega$, and I’ll say much more about it in future posts.
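
In the notation of those future posts (standard in set theory, transposed here from cardinals to sets), this supremum sits at the top of the sequence defined by

$$\beth_0 = \mathbb{N}, \qquad \beth_{n+1} = 2^{\beth_n}, \qquad \beth_\omega = \sup_{n \in \mathbb{N}} \beth_n.$$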

One last general point. At the risk of sounding slow — a risk that any mathematical blogger constantly exposes themselves to — it took me a while to really appreciate that despite the extreme structurelessness of sets, infinite sets can have different qualities. They may just be bags of featureless dots, but even bags of featureless dots can have distinctive features.

Here I’m contrasting “quality” with “quantity”. Of course, there are basic properties of sets such as uncountability that refer to being bigger or smaller than some threshold. But my point is that there are natural, important properties of sets that appear and disappear as you climb the ladder. Being a strong limit is one such property: $\mathbb{N}$ is, $2^{\mathbb{N}}$ isn’t, $2^{2^{\mathbb{N}}}$ isn’t, etc., until you reach the supremum $\beth_\omega$, which is one again. Later posts will introduce several more such properties.

Next time

However you like to approach set theory, well-ordered sets arise inevitably, and they’re my topic for next time.

Posted at June 10, 2021 5:29 PM UTC


76 Comments & 1 Trackback

Re: Large Sets 2

How much choice do you need for the statement that every family of sets has a smallest member? How far can you get with, say, only DC?

Posted by: Anon on June 10, 2021 6:17 PM

Re: Large Sets 2

Even if you demand only that every pair of sets has a smallest member, then you are asking that any two sets be comparable in size (there is an injection from one of them to the other), and this implies that any set can be well-ordered.

To see that any $S$ can be well-ordered, take it and its Hartogs number $\aleph(S)$, which is the collection of isomorphism classes of subsets of $S$ equipped with a well-ordering. Any two well-ordered sets are comparable, and it can be shown that $\aleph(S)$ is itself well-ordered under this comparability ordering. It can also be shown, along the lines of the Burali-Forti paradox, that $\aleph(S) \leq S$ leads to a contradiction.

So if $\aleph(S), S$ are comparable, then it must be that $S \lt \aleph(S)$, giving an injection from $S$ to a well-ordered set. But then $S$ becomes well-ordered, by restricting the order on $\aleph(S)$ to $S$.

Posted by: Todd Trimble on June 10, 2021 7:03 PM

Re: Large Sets 2

You can also restrict the sets being compared and still get full AC. The example I remember is the axiom “Every graph G has a chromatic number”, meaning, we can often enough compare the different ways of coloring G to be able to establish one of them as “least”.

Posted by: Allen Knutson on June 11, 2021 6:35 PM

Re: Large Sets 2

Todd’s nice answer is related to an interesting point about successor sets $X^+$.

In the post, I said the following. We know that for every set $X$, there’s some other set strictly larger than $X$, so (using choice) there must be a smallest set $X^+$ strictly larger than $X$. This is a perfectly valid definition of $X^+$, but it leaves it somewhat nebulous. Unlike the power set, we have no explicit description of $X^+$, and even its existence depends on the axiom of choice. At least, that’s how it seems from the presentation I gave.

But happily, $X^+$ does have an explicit description, in the following sense. As Todd says, one can apply the Hartogs construction $\aleph$ to the set $X$, giving the well-ordered set $\aleph(X)$ that he describes. The underlying set of $\aleph(X)$ consists of the isomorphism classes of well-ordered sets of cardinality $\leq X$, and the important point is that it’s exactly $X^+$. Indeed, as the nLab page says, $\aleph(X)$ is the smallest well-ordered set with cardinality $X^+$.

To prove that the underlying set of $\aleph(X)$ is $X^+$ requires the axiom of choice. (We can’t possibly get away without choice: for as in Todd’s answer, if every set injects into some well-ordered set then every set can be well-ordered.) But no choice is needed to give the definition of $\aleph(X)$.
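
Putting Todd’s and Tom’s descriptions together in symbols (my paraphrase, not part of either comment): $\aleph(X) \nleq X$; every well-ordered set $W$ with $W \nleq X$ satisfies $\aleph(X) \leq W$; and, with choice, the underlying set of $\aleph(X)$ has cardinality $X^+$.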

Posted by: Tom Leinster on June 11, 2021 12:12 PM

Re: Large Sets 2

Without choice, would one then use well-preorders or well-quasiorders instead of well-orders?

Posted by: Madeleine Birchfield on June 11, 2021 3:40 PM

Re: Large Sets 2

For example, $\mathbb{N}$ is a strong limit. But no power set $2^A$ is a strong limit, since […]. So none of $\mathbb{N}$, $2^{\mathbb{N}}$, $2^{2^{\mathbb{N}}}$, … is a strong limit.

So is $\mathbb{N}$ a strong limit or not?

Posted by: Tim Hosgood on June 10, 2021 6:40 PM

Re: Large Sets 2

Ha! Well caught. Thanks; I’ll fix that.

Yes, $\mathbb{N}$ is a strong limit. But the subconscious thought behind that slip was that it’s the trivial example. I’ve been confused several times by set theory texts silently assuming that sets were uncountable. And now I’ve gone and done the same thing myself.

Posted by: Tom Leinster on June 10, 2021 6:46 PM

Re: Large Sets 2

Speaking of smallish strong limits, I think by your definition 0, 2 and $\mathbb{N}$ are strong limits, but not 1.

Posted by: John Baez on June 11, 2021 1:44 AM

Re: Large Sets 2

They would be, except that I explicitly put into my definition that strong limits must be infinite.

Somewhat relatedly, the story I told is really motivation for the concept of uncountable strong limit. An uncountable strong limit is a set that’s too big to be reached by ETCS. But I’m bowing to tradition by separating the concept of “uncountable strong limit” into two.

Posted by: Tom Leinster on June 11, 2021 2:24 AM

Re: Large Sets 2

Here “big” refers to the relation $\leq$ on sets, defined by $X \leq Y$ if and only if there exists an injection $X \to Y$.

In other words, we can consider sets XX with the following properties:

  • $0 \lt X$, $1 \lt X$, $2 \lt X$ and $\mathbb{N} \lt X$.

If your metalogic is constructive, you first have to postulate that your model of ETCS is an inequality space before you could go from the relation $\leq$ to $\lt$.

Posted by: Madeleine Birchfield on June 10, 2021 6:48 PM

Re: Large Sets 2

You have defined the size comparison $\leq$ when there is an injection, but constantly refer to $\lt$ without defining it.

Posted by: David Roberts on June 11, 2021 5:31 AM

Re: Large Sets 2

True. I thought about putting the definition in, but decided against as I thought readers would easily surmise.

Given your comment, I’ll insert the definition into the post after all. But I’m curious: did you genuinely not know what I meant by $\lt$, or was your comment intended as the correction of an oversight?

Posted by: Tom Leinster on June 11, 2021 11:20 AM

Re: Large Sets 2

I’m puzzled by this comment. Define it in any way that would be standard assuming classical logic, e.g., $x \leq y$ and $y \nleq x$. Is this not common understanding?

(Or is this another comment, but without saying so, about whether the background logic is constructive? I don’t think Tom said anything about that; my general understanding is that if nothing is said, go ahead and assume that the speaker is using classical logic throughout. If he did and I missed it, then I agree that a whole lot more care would be needed.)

Posted by: Todd Trimble on June 11, 2021 11:38 AM

Re: Large Sets 2

Yup, classical logic throughout.

Posted by: Tom Leinster on June 11, 2021 11:51 AM

Re: Large Sets 2

I was just being picky. You were super careful to define other things, even if informally. :-) Clearly a fully-fledged mathematician would be able to pick up what you mean.

Note however, that if you want to define $X \lt Y$, where $X$ and $Y$ are sets, then “$X \leq Y$ and $X \neq Y$” doesn’t work out of the box, since of course the latter condition should be $|X| \neq |Y|$.

Posted by: David Roberts on June 13, 2021 2:37 AM

Re: Large Sets 2

OK, no worries!

Posted by: Tom Leinster on June 13, 2021 11:16 AM

Re: Large Sets 2

Apologies if this is a stupid question, but what relationship (if any) is there between the notion of size of sets being used here and the notion of size of categories being used when we talk about Cat being the large category of all small categories and therefore not containing itself?

Posted by: Josh Lalonde on June 11, 2021 7:24 PM

Re: Large Sets 2

Strict categories are sets, so to say a strict category (such as Cat) contains itself runs into Burali-Forti’s paradox, and since sets are discrete categories themselves, it runs into Russell’s paradox as well, which is why we have large and small sets in the first place, and hence large and small strict categories.

On the other hand, if Cat is a weak category, then the underlying collection of objects is a weak groupoid rather than a set, so the usual Burali-Forti’s paradox and Russell’s paradox for sets wouldn’t apply here. However, there is still a version of Burali-Forti’s paradox for weak groupoids, which states that if a weak groupoid contains itself, then it is trivial, which is why we still speak of small and large weak groupoids and categories. This first appeared in type theory, where a type theoretic model of weak groupoids was constructed and where Burali-Forti’s paradox for types is called Girard’s paradox.

Posted by: Madeleine Birchfield on June 11, 2021 10:24 PM

Re: Large Sets 2

Josh, it’s not a stupid question at all.

As far as these posts are concerned, I’m not thinking about the size of categories at all. And my temporary use of the word “small” in part of this post wasn’t supposed to be related to “small category”; it was just a convenient word.

But zooming out, there are various ways of formalizing the small/large distinction for categories. One approach is to use sets vs. classes (which requires some axiomatic notion of class). Another is to say that even large categories are supposed to have a set of objects, and small categories are those whose set of objects is below some threshold. Inaccessibility often comes in here. I’ll get to inaccessibility in a few posts’ time, and though I won’t attempt to say anything about the connection with large/small categories, there are well informed people here who know all about that kind of thing.

In categorical approaches, the question of whether a set contains itself doesn’t make a whole lot of sense, because membership isn’t a relation defined on the collection of sets. (That is, given two sets $X$ and $Y$, it doesn’t make sense to ask whether $X \in Y$.) The same goes for categories containing themselves.

Posted by: Tom Leinster on June 11, 2021 11:14 PM

Re: Large Sets 2

In categorical approaches, the question of whether a set contains itself doesn’t make a whole lot of sense, because membership isn’t a relation defined on the collection of sets.

But we can make sense of roughly the same intent using families: a family $p: A \to I$ “contains itself” if there exists an element $i \in I$ such that $A_i \cong I$.

Posted by: Mike Shulman on June 12, 2021 4:10 PM

Re: Large Sets 2

But we can make sense of roughly the same intent using families: a family $p: A \rightarrow I$ “contains itself” if there exists an element $i \in I$ such that $A_i \cong I$

Isn’t that trivially true for all constant functions $p: A \rightarrow 1 \rightarrow I$?

Posted by: Madeleine Birchfield on June 12, 2021 5:15 PM

Re: Large Sets 2

Certainly not for all constant functions. For instance, if $I$ is not subterminal then a constant function $A = 1 \to I$ doesn’t satisfy it, since $A_i$ is subterminal for all $i$ but $I$ is not.

However, there are certainly trivial examples. For instance, you can make any $p: A \to I$ into a family that contains itself by adding the indexing set to get $A + (I+1) \to I + 1$. Similarly, in a non-well-founded set theory you can make any set $X$ into one that contains itself, $X' = X \cup \{X'\}$. Self-containment by itself isn’t paradoxical.
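
(To check the first example: send the $A$ summand to $I$ via $p$ and the $I + 1$ summand to the added point; the fibre over the added point is then $I + 1$, the indexing set itself.)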

Posted by: Mike Shulman on June 13, 2021 4:16 PM

Re: Large Sets 2

This may be jumping the gun a little, but I would be willing to defend the assertion that the most important aspect of the category of sets in category theory is that it is the free co-completion of a point (or allows for the construction of free co-completions more generally).

Now it is rather fundamental to the argument that strong limit sets do not exist in ETCS that the categories of models are not co-complete (in this case, that the simplest kind of infinite colimits do not exist).

This leads to a few natural questions:

1) If we do not augment ETCS in any way, can we actually do much interesting category theory in it? Does the situation change much even if we chuck in universes of some kind?

2) What if we just add to ETCS the axiom that it is co-complete? Maybe not so easy to formulate in an elementary way, but let’s ignore that. How much further can we then get? I suppose this is akin to a form of replacement, but something more naturally category-theoretic. How different is the result from ZFC?

Posted by: Richard Williamson on June 12, 2021 10:07 AM

Re: Large Sets 2

This only addresses your question 2, but Colin McLarty has a paper that I very much like, Exploring categorical structuralism. Section 8 is on replacement.

Here’s how I’d explain it. In ETCS, a family of sets indexed by a set $I$ is naturally defined as a function $p: X \to I$ into $I$. The $i$th fibre $X_i = p^{-1}(i)$ can be called the “$i$th member of the family”, and $X$ is the coproduct $\sum_i X_i$.

Another idea of what a “family” should be is that it’s a formula for producing a set from each element of $I$. Formally, then, a family is a first-order ETCS formula $R = R(i, X)$ in elements $i \in I$ and sets $X$, such that for each $i \in I$, there is a unique-up-to-isomorphism set $X$ such that $R(i, X)$ holds.

(The contrast between these two conceptions of family is similar to what I understand to have been the debate around the time that the notion of “function” was being nailed down: is a function an arbitrary collection of assigned values, or a formula for assigning those values? The former conception became the standard one, but the latter is important in logic.)

We can add to ETCS the axiom that any family in the second sense gives rise to a family in the first sense. That is, for any $R$ as above, there exist a set $X$ and a map $p: X \to I$ such that $R(i, p^{-1}(i))$ holds for every $i$.
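
Schematically, with one axiom per formula $R$ (my transcription of the statement just made, not McLarty’s exact wording):

$$\Big[\forall i \in I\ \exists X\ \big(R(i, X) \wedge \forall Y\, (R(i, Y) \Rightarrow Y \cong X)\big)\Big] \implies \Big[\exists X\ \exists p\colon X \to I\ \ \forall i \in I\ \ R(i,\, p^{-1}(i))\Big].$$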

One could call this axiom cocompleteness. (It only asserts the existence of small coproducts, but we already have coequalizers from ETCS alone, so we get all small colimits.) Or we could call it replacement, as McLarty does. He writes:

Probably few readers think this looks like the ZF axiom scheme [of replacement]. […] Our axiom is not a translation from ZF. It is a plain categorical version of Cantor’s idea.

ETCS with this replacement/cocompleteness axiom adjoined is biinterpretable with ZFC (that is, equivalent to it in the strongest sense).

Posted by: Tom Leinster on June 12, 2021 11:35 AM

Re: Large Sets 2

The contrast between these two conceptions of family is similar to what I understand to have been the debate around the time that the notion of “function” was being nailed down: is a function an arbitrary collection of assigned values, or a formula for assigning those values? The former conception became the standard one, but the latter is important in logic.

I think that’s backwards. In the world of calculus-type functions, the “formula” definition is less general than the arbitrary-collection definition. But in the context you’re talking about, the “formula” definition is more general than the other one. Roughly the reason for this is that formulas are allowed to have parameters. That is, if you have a given arbitrary collection of values $p: X \to I$, then $F(i) = X_i$ counts as a “formula”.

Posted by: Mike Shulman on June 12, 2021 3:58 PM

Re: Large Sets 2

Replacement is interesting and worth thinking about, but it’s not necessary to do basic category theory, and I don’t think it’s correct to call it “cocompleteness”. In particular, ETCS already proves that the category of sets is complete and cocomplete. The point is that in order to say something like “every family of sets has a product and coproduct” in ETCS, you have to first decide what a “family of sets” is in ETCS, and as Tom mentioned, the only sensible definition is that it’s a morphism $A \to I$ (where $I$ is the indexing set). The product and coproduct of such a family can then be defined in ETCS (in fact, the coproduct is just the set $A$).
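
Concretely (my gloss on the constructions Mike alludes to): for a family $p: A \to I$, the coproduct is the set $A$ itself, and the product is the set of sections $\{s: I \to A \mid p \circ s = \mathrm{id}_I\}$, which is carved out of $A^I$ using the ETCS axioms alone.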

It’s true that if you instead defined a “family of sets” using a logical formula then ETCS would not prove that the category of sets is cocomplete, but this is the wrong definition of “family of sets” for the purposes of category theory. In fact, with this definition of “family of sets”, ETCS cannot even formulate the statement that the category of sets is cocomplete, because it cannot quantify internally over logical formulas.

Posted by: Mike Shulman on June 12, 2021 3:55 PM

Re: Large Sets 2

Thank you to both you and Tom for your thoughts! I was just thinking quite simplistically about the following…

Take a model of ETCS, that is, a category satisfying the ETCS axioms. Let’s temporarily say that a set (in this model) is “small” if it is strictly smaller than every uncountable strong limit. Then the small sets are also a model of ETCS. For example, if $A$ and $B$ are small then for every uncountable strong limit $X$, we have $A \lt X$ and $B \lt X$, therefore $B^A \lt X$; hence $B^A$ is small. So we’ve shown:

For any model of ETCS, the sets smaller than every uncountable strong limit are also a model of ETCS.

…and observing that this argument does not work if, for example, the colimit

$$\mathbb{N} \rightarrow 2^{\mathbb{N}} \rightarrow 2^{2^{\mathbb{N}}} \rightarrow \ldots$$

is required to exist as part of the definition of ETCS in some way or other. Since it is replacement that guarantees the existence of this colimit in ZFC, co-completeness seems analogous to replacement at least to some degree.

Posted by: Richard Williamson on June 13, 2021 10:10 AM

Re: Large Sets 2

Thanks, Richard. I take your point. And your comment looks forward nicely to some later posts. In particular, we’ll look at the supremum of $\mathbb{N}, 2^{\mathbb{N}}, 2^{2^{\mathbb{N}}}, \ldots$ and more generally, the supremum of $X, 2^X, 2^{2^X}, \ldots$ for any set $X$.

Part of the point of this whole series is that while replacement is natural enough and has a lot going for it, it’s not the only way of guaranteeing that sets like this exist. For example, one can directly add the axiom “sets like this exist”! I’ll get to that in a while.

As this series goes on, the sets involved will get ever larger. At first, replacement will be enough to guarantee their existence, but later it won’t be enough.

Posted by: Tom Leinster on June 13, 2021 11:29 AM

Re: Large Sets 2

Tom, may I draw your attention to The Logic in Philosophy of Science (2019) by Hans Halvorson. Halvorson is not there focused on how to study large cardinals and so on. He is, though, using ETCS as his background mathematics to discuss interpretations, translations, etc., between theories.

In Chapter 2 (“The Category of Sets”), Halvorson presents a detailed account of ETCS. Later his approach to meta-theory is given largely in terms of ETCS. In Chapter 4 (p. 117, Definition 4.5.15), Halvorson describes a certain strong form of intertranslatability between theories as “homotopy equivalence”, which relates to a suggestion Madeleine made on the previous post (he explains the reason for that on p. 118).

I suppose this is a side-issue, not directly about the large cardinality questions your post is about. Still, it is a case of using ETCS, instead of ZFC, as the “background” mathematical theory to study inter-theory relations, and may be of interest (and I say this as someone who thinks that the “elements are morphisms $1 \to X$” view is weird!). Finally, I mention the most detailed direct connection between category theory and interpretation theory: Albert Visser’s Categories of Theories and Interpretations (2004).

Posted by: Jeffrey Ketland on June 13, 2021 12:06 PM

Re: Large Sets 2

may I draw your attention to The Logic in Philosophy of Science (2019) by Hans Halvorson.

Thanks, Jeffrey. I hadn’t come across that, and look forward to having a browse.

I say this as someone who thinks that the “elements are morphisms $1 \to X$” view is weird!

I’m convinced that it’s healthy for mathematics for there to be a wide variety of viewpoints on everything, with different people finding different things weird. And everyone has the right to find something weird!

Two things, though. First, defining elements of $X$ as functions $1 \to X$ isn’t as essential to ETCS as might be thought. That is, one can cook up a trivial variation of ETCS in which “element” is a primitive concept. One then has to add in further axioms to ensure that the interplay between functions and elements is as it should be. I had a quick go at this here. The theory (“$\in$TCS”) that I wrote down there off the top of my head might not be quite right, but some small variation of it is bound to be. It’s trivially equivalent to ETCS, but it doesn’t define an element of $X$ to literally be a function $1 \to X$. They’re just in canonical one-to-one correspondence, as everyone agrees should be the case.

Second, a question. Do you find it weird to define a sequence of elements of $X$ to be a function $\mathbb{N} \to X$?

Posted by: Tom Leinster on June 13, 2021 8:19 PM

Re: Large Sets 2

(On the question, treating a sequence as a function $s: I \to X$ seems what we intuitively thought, as a sequence is $s_1, s_2, \dots$, so there’s a functional dependence, $s(1), s(2), \dots$.)

My own thought about “elements as morphisms” is connected to “urelement comprehension”, which is absent from ZFC, of course, and absent from ETCS too. It’s a bit of a long story, but here goes and apologies for the length!

Urelement comprehension is present in Zermelo’s original Z, or (old-fashioned) type theory, or Jech’s ZFA, or Quine’s NFU, or any system that can be applied to non-sets. Suppose I am applying some combinatorics to residents and houses. I first use urelement comprehension, with respect to “$house(x)$” and “$resident(x)$”, to assert the needed sets:

$$\exists X\, \forall a\, (a \in X \leftrightarrow house(a))$$

$$\exists X\, \forall a\, (a \in X \leftrightarrow resident(a))$$

Given these, and Extensionality, I can now define $H$ to be $\{a \mid house(a)\}$ and $R$ to be $\{a \mid resident(a)\}$. I can now formulate empirical assumptions mathematically and apply mathematics to draw (empirical) conclusions. An example might be,

(1) There are fewer houses than residents

(2) Each resident lives in exactly one house.

(3) So, there’s a house where at least two residents live

The premise (1) then is formalized as “There is an injection $H \to R$ and there is no injection $R \to H$”. And the premise (2) is “$(\forall a \in R)(\exists! h \in H)\ lives(a, h)$”. And the conclusion (3) is,

$$\exists h\, (house(h) \wedge (\exists a, b)(resident(a) \wedge resident(b) \wedge a \neq b \wedge lives(a, h) \wedge lives(b, h))).$$

The sentence $((1) \wedge (2)) \to (3)$ is a theorem of set theory with atoms. That shows how we apply mathematics to solve this problem: to conclude (3) from (1) and (2). (Premise (2) defines a function $f$, and I use the Pigeonhole Principle and premise (1) to conclude that $f$ is non-injective.)

But this reasoning needs urelement comprehension to provide for $H$ and $R$. How to do, or emulate, this in ETCS is not clear. But I presume it’s because ETCS is designed to provide for an equivalent of the pure sets of ZFC. But in general, how to get sets of non-sets, by “bringing things satisfying a common condition together”, is unclear. In order to get that, it seems to me one can’t avoid a comprehension axiom scheme, like

$$\exists X\, \forall x\, (x \in X \leftrightarrow (x \in Z \wedge \phi(x, \dots))).$$

This gets us back to the “elements as morphisms” view, at the start. Comprehension (well, to be more exact, Separation) says, given some condition $\phi(x)$, I “bring together” the objects (in $Z$) satisfying $\phi(x)$ to a single mathematical object. But the objects themselves can be whatever we please! Numbers, houses, alephs, spacetime points, open sets of $\mathbb{R}^n$, …

If one finds this line of argument remotely convincing, then one solution might be to reformulate ETCS with “$x \in X$” as an additional primitive, giving a theory $\in$TCS, which is what you sketched in the comment linked above. Another might be to add, as primitive, new “urelement set constants”, say “$H$”, “$R$”, as above, whenever needed. Then a house is a morphism $1 \to H$. (!) But I would prefer to say the houses are somehow basic, and the set of houses is obtained from them by “lassoing” them together, into a single mathematical object.

Posted by: Jeffrey Ketland on June 13, 2021 10:04 PM

Re: Large Sets 2

If one finds this line of argument remotely convincing, then one solution might be to reformulate ETCS with “$x \in X$” as an additional primitive, giving a theory $\in$TCS, which is what you sketched in the comment linked above. Another might be to add, as primitive, new “urelement set constants”, say “$H$”, “$R$”, as above, whenever needed. Then a house is a morphism $1 \rightarrow H$. (!) But I would prefer to say the houses are somehow basic, and the set of houses is obtained from them by “lassoing” them together, into a single mathematical object.

The first approach fits well into dependent type theory, where in addition to having a type $Set$ of sets and, for sets $A$ and $B$, a type $A \rightarrow B$ of functions from $A$ to $B$, for all sets $A$ there is a type $El(A)$ of elements in $A$.

Posted by: Madeleine Birchfield on June 14, 2021 10:24 PM

Re: Large Sets 2

Yes, I see how this goes, but I think the question is how one “gets off the ground”, to give (the existence of) any sets to start with. Normally, for a predicate or formula $\phi(a)$, we may assert that there exists a set $X$ such that, for any $a$, $a \in X$ if and only if $\phi(a)$ (we need some restrictions, but I’ll ignore that). By Extensionality, now I have the existence of $X = \{a \mid \phi(a)\}$ and I can instantiate general set-theoretical claims on this $X$. In ordinary type theory or set theory admitting urelements (like Russell’s 1908 and Zermelo’s 1908, both the same year), or HOL (which goes back to Frege 1879), these comprehension axioms (for urelements) are either given or implicitly assumed. In such a theory, one then obtains theorems like

(a) If there are fewer houses than residents and each resident lives in exactly one house, then there’s a house with at least two residents

The reasoning would be similar if it were set theory, or HOL, or type theory. Is there some modification of ETCS, say ETCSU, which incorporates urelements and can prove application claims like (a) above? I’m puzzled about how to do this.

Posted by: Jeffrey Ketland on June 15, 2021 5:00 AM

Re: Large Sets 2

Since his Conceptual Mathematics is dripping with real world examples, and a very important motivation for his work in category theory came from physics, Lawvere is certainly interested in applied category theory.

P. 85 of the book speaks of “some set $X$ of actual people”.

I doubt very much he gives anything like an ETCSU. The picture he gives on p.84 has Thinking reflecting Reality. The Objective part of Thinking provides “as clear an image as possible of reality, as it is and moves in itself, independent of our particular thoughts.”

I’d imagine if we looked to expand on such a picture, we’d end up retracing our steps from the last time we spoke about applications, back in this thread.

Posted by: David Corfield on June 15, 2021 12:13 PM

Re: Large Sets 2

Yes, thanks, I enjoyed Lawvere/Schanuel’s Conceptual Mathematics when I read it maybe three years ago, and I know about the physics background too. Geroch published a physics textbook I used to use in 1986, all category-theoretical.

Since that thread you link to, I published this about applied mathematics. I’ll try and write a follow-up, when I have the speed-up theorem for ZA over first-order logic properly worked out.

Posted by: Jeffrey Ketland on June 15, 2021 5:44 PM

Re: Large Sets 2

Normally, for a predicate or formula $\phi(a)$, we may assert that there exists a set $X$ such that, for any $a$, $a \in X$ if and only if $\phi(a)$ (we need some restrictions, but I’ll ignore that).

You should not ignore the restrictions! In material set theory, constructing a set can be a two-step procedure:

  1. Construct a class.
  2. Prove it’s a set. In other words, that it’s “small”.

You are ignoring how step 2 goes. But in structural set theory, you just directly construct a set, which is small by construction. So given some intuitive class you have in mind, the reason why it’s small enough to be a set is what you use to construct it as a set.

In structural set theory, you don’t “lasso” some objects lying around to make a set. In constructing a structural set, you don’t worry about how to get the right members. That would make no sense because there’s no membership relation. You worry about having enough elements (cardinality), and having the structure (on the set) that you will use. The cardinality and extra structure can be based on some intuition of preexisting objects that the elements should represent, but formally there are no preexisting objects. Closest thing to objects would be the elements of sets, I suppose. (And moreover, sets themselves are technically not elements. Ever. That’s why people have been talking about families in terms of fibers.)

So starting with a class, which picks which preexisting objects are members, formally makes no sense for structural set theory. So skip step 1. In this sense, there’s only step 2. So you should not ignore it. Clearly if you do, you will be left without any basis to define a set. You won’t “get off the ground”.

In ordinary type theory or set theory admitting urelements (like Russell’s 1908 and Zermelo’s 1908, both the same year), or HOL (which goes back to Frege 1879), these comprehension axioms (for urelements) are either given or implicitly assumed.

I’m guessing that the kind of comprehension axiom for type theory that you have in mind constructs a predicate. But structural sets are more like the types of type theory than the predicates. (Of course there are many more sets in a sensible structural set theory than types in those old type systems.)

Is there some modification of ETCS, say ETCSU, which incorporates urelements and can prove application claims like (a) above? I’m puzzled about how to do this.

My understanding is that urelements are objects that are not sets. I parenthetically remarked above that in structural set theory, elements are technically never sets. So I figure all elements are urelements in ETCS, so it already has urelements.

For your reasoning about houses and residents, it seems sufficient to declare two arbitrary sets $H$ and $R$ and a “lives” relation between them. Of course that doesn’t say what houses and residents are, but:

  1. You didn’t say what they are either. You just declared (without definition) predicates “house” and “resident”.
  2. The argument is evidently independent of what any of those parameters mean.

In the internal logic of ETCS, quantifiers are always bounded by a set. So your desugaring of bounded existential quantification in (3) should not be done. In the internal logic, (3) would be just:

$$(\exists h \in H)(\exists a, b \in R)(a \neq b \wedge lives(a, h) \wedge lives(b, h))$$

I don’t think Tom specified the formal system in enough detail to tell exactly how that would be formalized. But it definitely works.

Posted by: Matt Oliveri on June 15, 2021 12:28 PM

Re: Large Sets 2

In structural set theory, you don’t “lasso” some objects lying around to make a set.

Right, and I think that is the essence of the difference between structural and material! When I think of a set, I think of some objects lying around (they could be bricks, or planets, or real numbers, etc), and then I lasso them—boing! I’ve got a set.

(I think this metaphor comes from Saul Kripke.)

Posted by: Jeffrey Ketland on June 15, 2021 4:59 PM

Re: Large Sets 2

When I think of a set, I think of some objects lying around (they could be bricks, or planets, or real numbers, etc), and then I lasso them—boing! I’ve got a set.

That is the type theoretic approach: you have a number of terms and you judge them to be of a certain type; or in the metatheory, there is a membership metaproposition that relates terms to types, and by ‘lassoing’ them together one gets a type. The only difference is that the lassoing occurs at the metatheory level rather than the object theory level.

Material set/class theories are also just simply typed theories with two types, a type of propositions and a type of sets/classes, so the same lassoing happens there, except in material set theories, it happens twice: once to lasso all the sets into the cumulative hierarchy and the propositions together, and then a second time to lasso all the internal sets into a set.

Posted by: Madeleine Birchfield on June 15, 2021 5:34 PM

Re: Large Sets 2

That’s the “extrinsic typing” approach to type theory. But in the “intrinsic typing” approach, the elements of a type don’t exist prior to the type itself to be “collected” into it. (One of these is attributed to Church and the other to Curry, but I can never remember which.) Intrinsic typing is a better match for categorical semantics.

Posted by: Mike Shulman on June 16, 2021 1:40 AM

Re: Large Sets 2

I can never remember which.

Luckily there’s a resource for such questions: intrinsic and extrinsic views of typing.

Intrinsic typing is a better match for categorical semantics.

I think this is the largest obstacle to conveying category theory/type theory to philosophers. The extrinsic view is so deeply entrenched in the analytic tradition. Perhaps a dose of Husserl’s Formal and Transcendental Logic would help, but it’s not an easy read.

Untyped first-order logic plays a large role in the analytic conception. But if FOL is a 2-theory in your sense, so a restricted form of type theory (some slides on this), why not give it an intrinsic reading?

Posted by: David Corfield on June 16, 2021 8:16 AM

Re: Large Sets 2

David,

Untyped first-order logic plays a large role in the analytic conception.

Here are some counter-examples to this claim.

Frege: Higher-order logic (invented it; as well as a version of types and lambda abstraction).

Russell: Type theory (invented it)

Carnap: Type theory.

Church: Type theory.

Kripke: Quantified Modal logic (invented it; influenced by Carnap’s theory of state descriptions though)

Kripke: Intuitionistic logic.

Parsons: Second-order arithmetic.

Kripke: 3-Valued logic.

Boolos: Second-order logic.

Boolos: Plural logic.

Shapiro: Second-order logic.

Priest: Many-valued logic (LP).

Linnebo: Plural logic.

Horsten: Partial Kleene logic.

Leitgeb: Second-order logic.

Leitgeb: Hyperintensional logic.

What do you have in mind? Quine, perhaps?

Posted by: Jeffrey Ketland on June 16, 2021 7:46 PM

Re: Large Sets 2

Of course, many other logics are worked on. This doesn’t make them counter-examples to my claim.

Yes, Quine. Take Bricker’s article, SEP: Ontological Commitment

Quine’s criterion of ontological commitment has dominated ontological discussion in analytic philosophy since the middle of the 20th century; it deserves to be called the orthodox view.

And to locate these commitments:

first, regiment the competing theories in first-order predicate logic; second, determine which of these theories is epistemically best (where what counts as “epistemically best” depends in part on pragmatic features such as simplicity and fruitfulness); third, choose the epistemically best theory. We can then say: one is ontologically committed to those entities that are needed as values of the bound variables for this chosen epistemically best theory to be true.

Posted by: David Corfield on June 17, 2021 8:22 AM

Re: Large Sets 2

Mike said:

Intrinsic typing is a better match for categorical semantics.

David said:

I think this is the largest obstacle to conveying category theory/type theory to philosophers. The extrinsic view is so deeply entrenched in the analytic tradition.

I believe you, but I think it would be good if you say what you mean in more detail. Jeffrey gave a list of philosophers using various systems, some of which are intrinsically typed. But I wouldn’t be surprised if all of those examples are still too extrinsic in style for you.

In proof assistants and programming languages, intrinsic typing is often used for technical reasons; not because of the kind of invariant-by-construction vision of categorical semantics. So I figure something like this could be going on in philosophy, too, and you should say carefully what you’d like to change. “Intrinsic” and “extrinsic” are actually pretty hand-wavy, out of context.

Posted by: Matt Oliveri on June 17, 2021 4:19 AM

Re: Large Sets 2

I see it as how much work you want Separation to do (see Jeffrey’s comment).

One extreme has us always separating out collections from the totality of whatever there is. The other extreme takes there to be many fundamentally different kinds of entity.

Try the Venus = 3 test.

But now we’re clogging up Tom’s post going over old ground.

Posted by: David Corfield on June 17, 2021 8:40 AM

Re: Large Sets 2

I was wondering if you have a precise criterion for when analytic philosophy is being too extrinsic. (I mean, even structural set theory has subsets. So it can’t be “no separation allowed”.) I don’t think that would be old ground.

It was just a suggestion, and it’s fine if you don’t like it, or don’t want to discuss it here.

Posted by: Matt Oliveri on June 18, 2021 7:16 AM

Re: Large Sets 2

I’ll put up a post soon where we can discuss these issues.

Posted by: David Corfield on June 18, 2021 9:58 AM

Re: Large Sets 2

Matt, if you’re curious, I have worked out the formalization of the “residents/houses” reasoning in Isabelle/HOL (which is a type theory, of course, too).

theory Residents
imports Main "~~/src/HOL/Library/FSet"
begin

locale residents =
  fixes house :: "'a ⇒ bool"
  fixes resident :: "'a ⇒ bool"
  fixes lives :: "'a ⇒ 'a ⇒ bool"
  fixes houseof :: "'a ⇒ 'a"
  assumes A1: "finite {x. resident x}"
  and A2: "card {x. house x} < card {x. resident x}"
  and A3: "∀x. resident x ⟶ (∃!y. (house y ∧ lives x y))"
  and A4: "houseof x = y ⟷ (resident x ∧ house y ∧ lives x y)"
begin

lemma lem1: "¬ inj_on houseof {x. resident x}"
  by (metis (mono_tags, hide_lams) A1 A2 A4 card_inj_on_le finite_subset imageE less_le_not_le mem_Collect_eq subsetI)

lemma lem2: "∃y. ∃x. ∃x'. (house y ∧ resident x ∧ resident x' ∧ x ≠ x' ∧ lives x y ∧ lives x' y)"
  by (metis A4 inj_onI lem1)

end

end

Posted by: Jeffrey Ketland on June 18, 2021 10:13 AM

Re: Large Sets 2

I wouldn’t’ve thought I was curious, but now that you’ve shown some code, I have a couple of questions.

  1. Probably not that important, but: What is “metis”? A tactic? What’s up with that name? (Yeah, this is all probably in the docs. Just curious. I know a thing or two about Isabelle, but not for practical use.)

  2. When you say fixes house :: "'a ⇒ bool", does that also fix the type 'a, or is house polymorphic, so that you could ask if house 5?

Posted by: Matt Oliveri on June 19, 2021 2:11 AM

Re: Large Sets 2

Metis is one of the proof tactics, yes. No idea where the name comes from. The types 'a, 'b, … behave like types/sorts of basic individuals, but are separate from the type of natural numbers. The declaration

fixes house :: "'a => bool"

in the locale given above says “house” will be a one-place predicate on ‘a. It doesn’t constrain ‘a further. Isabelle knows how to introduce ‘a set (i.e., “sets of ‘a things”, …), and ‘a set set, etc., as need be, using its own internal comprehension axioms. Because numerals are themselves typed, asking Isabelle to prove “house 5” produces a syntax error:

Type unification failed: Variable ‘a::type not of sort numeral

Posted by: Jeffrey Ketland on June 19, 2021 8:43 AM

Re: Large Sets 2

Type unification failed: Variable 'a::type not of sort numeral

Wait, is this because house is not polymorphic, or because we’re missing a type class instance for 'a? I guess house 5 was a bad example. What about house (λx. x), or however you write the identity function?

Posted by: Matt Oliveri on June 20, 2021 10:04 PM

Re: Large Sets 2

It’s because house is not polymorphic. Isabelle/HOL does implement polymorphism and overloading in other cases (which to be honest, I don’t understand so well, as I’ve not used them), but not in this case. As soon as house is declared to be a predicate on ‘a (i.e., its type is ‘a => bool), then any argument must have this type too in order to unify.

Posted by: Jeffrey Ketland on June 22, 2021 1:57 PM

Re: Large Sets 2

(One of these is attributed to Church and the other to Curry, but I can never remember which.)

Church-style is intrinsic; Curry-style is extrinsic.

The way I remember it is that Curry’s combinatory logic is intrinsically untyped, but popularly used as a basis for extrinsic typing. And Church’s HOL (“simple type theory”) is intrinsically typed.

Posted by: Matt Oliveri on June 16, 2021 3:42 AM

Re: Large Sets 2

Actually replacement isn’t what guarantees the existence of the colimit of the diagram $\mathbb{N} \to 2^{\mathbb{N}} \to \cdots$. Replacement is what guarantees the existence of that diagram! (And in particular, ETCS cannot prove that that diagram exists.) Once you have the diagram, ETCS can prove that it has a colimit.

A little more precisely, replacement guarantees that that diagram is small. Zermelo set theory can show that that diagram exists as a proper class; ETCS can do something similar, although “proper classes” are a bit different in ETCS-land. But of course “cocompleteness” means only that all small diagrams have a colimit.

So again, I would say that the connection between replacement and cocompleteness is an illusion. Cocompleteness says that small diagrams have a colimit; replacement is just about how many small diagrams there are.

Posted by: Mike Shulman on June 13, 2021 4:22 PM

Re: Large Sets 2

Cocompleteness says that small diagrams have a colimit; replacement is just about how many small diagrams there are.

Great, this is helpful: now I see better what you’re saying, Mike, and why you’re insisting that replacement shouldn’t be referred to as a kind of cocompleteness.

Posted by: Tom Leinster on June 13, 2021 8:03 PM

Re: Large Sets 2

Further on how Replacement shouldn’t be seen as a kind of cocompleteness, here’s something I learned from Asaf Karagila, whose views I take as reasonably representative of the current generation of set theorists. He says that in the material setting, Replacement is really a device (I hesitate to say “hack”) that ensures that ZFC can act in a structural (or isomorphism-invariant) way. This is because Replacement can be interpreted as saying that given a set and an isomorphism to a class, the class is in fact a set (the general case is reducible to this, I think).

Posted by: David Roberts on June 14, 2021 1:45 AM

Re: Large Sets 2

A little more precisely, replacement guarantees that that diagram is small. Zermelo set theory can show that that diagram exists as a proper class; ETCS can do something similar, although “proper classes” are a bit different in ETCS-land. But of course “cocompleteness” means only that all small diagrams have a colimit.

So replacement is more accurately a schema of large cardinal axioms.

Posted by: Madeleine Birchfield on June 13, 2021 5:10 PM

Re: Large Sets 2

I don’t personally agree that this splitting of hairs is significant or changes anything (I think one can quibble in many ways here about what is implicit here or there, or about what is reasonable to consider in a ‘neutral’ context), but we are only discussing the validity of an analogy, so it doesn’t matter too much! I look forward to the rest of the series!

Posted by: Richard Williamson on June 13, 2021 5:13 PM

Re: Large Sets 2

I really appreciate this discussion on the role of Replacement!

Let me ask a question stemming from my interests in condensed sets, where also strong limits are the first relevant (not so) “large” cardinals. More precisely, recall that condensed sets are sheaves on the category of profinite sets (with covers given by finite families of jointly surjective maps), and to avoid pathologies one solution is to restrict the size of the profinite sets, where it is advisable to choose a strong limit.

Namely, one wants that any profinite set $S$ is covered by a “projective profinite set” $\tilde{S}$, that is one where each further surjection $S' \to \tilde{S}$ splits. By a theorem of Gleason, these are exactly the extremally disconnected profinite sets. One such is given by the Stone–Čech compactification of $S$ considered as a discrete set; its cardinality is $2^{2^{|S|}}$. So if we ask that $|S|$ is less than $\kappa$ for some strong limit $\kappa$, then also $|\tilde{S}|$ is less than $\kappa$.

Now to me, in “naive” set theory, this should then imply that any such $S$ has a hypercover $\tilde{S}_\bullet \to S$ all of whose terms $\tilde{S}_i$ are extremally disconnected – just iteratively construct it, using such covers at each step. But you seem to tell me that in bare ETCS, this wouldn’t actually be true, as the diagram $i \mapsto \tilde{S}_i$ is not defined in ETCS? That’s strange to me, as it can even be defined by an explicit inductive definition. So far I had assumed that the diagram is there, just not the union of all $\tilde{S}_i$. I guess Replacement is really quite ingrained in what I consider “naive” set theory.

Posted by: Peter Scholze on June 14, 2021 3:59 PM

Re: Large Sets 2

I haven’t yet absorbed enough of this to have a sense of whether $i \mapsto \tilde{S}_i$ is definable in ETCS. But if I’ve understood roughly what’s going on, it could be that replacement is much stronger than necessary. Perhaps it’s enough to add to ETCS the far weaker condition that “beths exist”, which is equivalent to:

for every well-ordered set $I$, there exists a map of sets $p: X \to I$ such that $i \lt j \implies 2^{|p^{-1}(i)|} \leq |p^{-1}(j)|$ for all $i, j \in I$.

This condition is in turn equivalent to a similar but apparently stronger one, in which all the fibres $p^{-1}(i)$ are required to be bigger than some prescribed set $Y$. So then, if we take $I = \mathbb{N}$, then for any set $Y$ we’re guaranteed the existence of an $\mathbb{N}$-indexed family $X \to \mathbb{N}$ whose $0$th fibre is $\geq Y$, whose $1$st fibre is $\geq 2^Y$, whose $2$nd fibre is $\geq 2^{2^Y}$, and so on. Is that enough to construct the hypercover?

Of course, it would be nice if you could do what you need to do using ETCS alone. But even if not, replacement may be much more than necessary.

I’m aware that others who know much more about set theory than me have argued that replacement is over-emphasized and owes some of its prominence to historical accident, and that in many instances, large cardinal axioms would suffice. In this case, “beths exist” is a not-so-large cardinal axiom, to borrow your phrase :-)

Posted by: Tom Leinster on June 14, 2021 7:06 PM | Permalink | Reply to this

Re: Large Sets 2

I guess Replacement is really quite ingrained in what I consider “naive” set theory.

I’m aware that others who know much more about set theory than me have argued that replacement is over-emphasized and owes some of its prominence to historical accident

On the other hand, one could construct a constructive material set-theoretic cumulative hierarchy $(V, \in)$ that satisfies replacement from a universe $U$ in the univalent type theory used in the Homotopy Type Theory textbook by the Univalent Foundations Project, in such a way that when the axiom of choice for sets is added to the type theory, the construction of $(V, \in)$ yields a model of classical ZFC. This might mean that bare ETCS and well-pointed 1-topos theory are too weak as set theories, for lack of enough set-truncated higher inductive types, and that replacement naturally follows from the axioms of a well-pointed $(\infty,1)$-topos.
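
For readers who want the shape of that construction: the underlying inductive type of the HoTT book’s $V$ says that a set is an $A$-indexed family of sets, for some type $A$. A rough Lean rendering follows; Lean has no higher inductive types, so the book’s extensionality path constructor has to be imposed afterwards by a quotient, which is essentially what Mathlib’s `PSet`/`ZFSet` do.

```lean
-- The raw cumulative hierarchy: a set is an A-indexed family of
-- sets. The HoTT book's V adds a path constructor identifying
-- bisimilar families, plus set-truncation; in Lean one quotients.
inductive V : Type 1 where
  | mk : (A : Type) → (A → V) → V
```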

Posted by: Madeleine Birchfield on June 14, 2021 9:35 PM | Permalink | Reply to this

Re: Large Sets 2

This might mean that bare ETCS and well-pointed 1-topos theory are too weak as set theories, for lack of enough set-truncated higher inductive types, and that replacement naturally follows from the axioms of a well-pointed $(\infty,1)$-topos.

In fact, I think the existence of object classifiers in any $(\infty,1)$-topos (which don’t exist in bare ETCS) is equivalent to the reflection principle in set theory, which could replace the axiom of replacement in any axiomatic set theory.

Posted by: Madeleine Birchfield on June 14, 2021 9:50 PM | Permalink | Reply to this

Re: Large Sets 2

Whether or not the diagram $i \mapsto \tilde{S}_i$ is “defined” in ETCS depends on your definition of “diagram of sets”.

If $C$ is a small category, then one definition of a “$C$-indexed diagram of sets” consists of a $C_0$-indexed family of sets (meaning, as Tom has said, a function $A \to C_0$) together with an action $C_1 \times_{C_0} A \to A$ that is associative and unital. With this definition, ETCS cannot prove that $i \mapsto \tilde{S}_i$ is an $\mathbb{N}$-indexed diagram. In general, without replacement you cannot construct families of sets by recursion unless you can show that there exists a set giving a uniform bound for all the elements of the desired family.

Another definition of “$C$-indexed diagram of sets” involves instead a logical formula $R(c, A)$ such that for any $c \in C$ there is a set $A$, unique up to (perhaps unique) isomorphism, such that $R(c, A)$, plus functorial action. ETCS can show that $i \mapsto \tilde{S}_i$ is an $\mathbb{N}$-indexed diagram in this sense; but it cannot construct the colimit of this sort of diagram.
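
To see the contrast in miniature, here is a hedged Lean sketch (illustrative names, not Mathlib): the first notion is a single object, a total set with a projection, while the second is a rule $n \mapsto A_n$ that needs a universe `Type` to land in, which is precisely what ETCS lacks.

```lean
-- (1) An internal ℕ-indexed family: one total set with a projection
--     to ℕ; the fibres are the members of the family. This is the
--     first, ETCS-friendly notion of diagram.
structure InternalFamily where
  total : Type
  proj  : total → Nat

-- (2) A "definable" family: a rule assigning a set to each n. In
--     type theory this lands in the universe `Type`; ETCS has no
--     analogous object for it to land in.
def definableFamily : Nat → Type
  | 0     => Nat
  | n + 1 => definableFamily n → Bool  -- maps to Bool play the role of the power set
```

Without replacement, one cannot in general convert a family of kind (2) into one of kind (1); that conversion is exactly what collecting the $\tilde{S}_i$ into a single family requires.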

So your intuition is right, if by “diagram” you mean the second sort. But the point is that in ETCS, it’s the first kind of diagram that’s the “correct” one. For instance, if you want to talk about the presheaf category of $C$, then its objects must be diagrams in the first sense. And, as mentioned above, $Set$ has limits and colimits of diagrams in the first sense, but not the second. Category theory just doesn’t work normally in the absence of replacement if you try to use diagrams in the second sense, but it works perfectly well for diagrams in the first sense. The only problem is that very occasionally, as in your case, one runs across a diagram in the second sense that, without replacement, isn’t one in the first sense.

Posted by: Mike Shulman on June 15, 2021 4:29 AM | Permalink | Reply to this

Re: Large Sets 2

Perhaps it’s enough to add to ETCS the far weaker condition that “beths exist”

That looks like it might be enough. But I’m surprised to hear that that’s far weaker than replacement. Can you give a brief intuition for why? I didn’t find anything in a quick google for “beths exist”.

Posted by: Mike Shulman on June 15, 2021 4:32 AM | Permalink | Reply to this

Re: Large Sets 2

I should get to that in about part 5 or 6 (I’ve been preparing a seminar to give “in” Coimbra today, which is on another topic and so is taking my attention away from sets). But in brief, “beths exist” is strictly weaker than “beth fixed points exist”, which in turn is implied by replacement.

I guess I can’t justify the word “far” in that sentence of mine you quoted. I may have had in mind that replacement is an axiom schema but “beths exist” is a puny single axiom.

Posted by: Tom Leinster on June 15, 2021 10:26 AM | Permalink | Reply to this

Re: Large Sets 2

Come to think of it, I asked almost the same question as Mike on MathOverflow and got some helpful answers. In the terminology I’m informally using now and will explain properly in posts coming soon, the question I asked there was: how much weaker than replacement is the axiom that “alephs exist”?

Posted by: Tom Leinster on June 15, 2021 10:35 AM | Permalink | Reply to this

Re: Large Sets 2

Mike wrote:

I didn’t find anything in a quick google for “beths exist”.

I just explained this condition in Part 6.

Posted by: Tom Leinster on June 24, 2021 4:39 PM | Permalink | Reply to this

Re: Large Sets 2

Thanks for your answer!

It is exactly this sentiment – that Replacement is really by far the most powerful axiom, and has mostly been included for reasons relevant to set theory but not to “normal” mathematics – that I’m trying to understand. Certainly the situation above is one that seems completely intuitive to me, and one where I’d feel severely restrained if Replacement were removed.

(I clearly don’t need the full strength of it – it’s a whole axiom schema, after all – and your specific “beths exist” is actually enough. But if, in an alternate history, Replacement had not been part of the foundations, and somebody pointed out to me that the construction is therefore invalid, I imagine that if I tried to pin down the general principle I want to invoke, I would end up with Replacement. After all, the situation is precisely that for each $n$, I have defined some $\tilde{S}_n$, and now I want to say that this gives a family of sets $(\tilde{S}_n)_n$.)

Actually, here is a confusion I have. I think in ETCS, the natural numbers are defined as the initial set $\mathbb{N}$ equipped with an element $0$ and an endomap $S: \mathbb{N} \to \mathbb{N}$. Thus, to give a map $f: \mathbb{N} \to X$ to some other set $X$, one has to prescribe the value $f(0)$ at $0$, and for each $n \in \mathbb{N}$ one has to define $f(S(n))$ (which may depend on $f(n)$).
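
In Lean notation, the universal property being invoked is just the simple recursor; a minimal sketch, with `nnoRec` an illustrative name:

```lean
-- Simple recursion from the natural numbers object: given a point
-- x₀ : X and a step map X → X, there is a (unique) f : ℕ → X with
-- f 0 = x₀ and f (n+1) = step (f n).
def nnoRec {X : Type} (x₀ : X) (step : X → X) : Nat → X
  | 0     => x₀
  | n + 1 => step (nnoRec x₀ step n)
```

The next two paragraphs ask what happens if one tries to take $X$ to be not a set but the universe of sets itself.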

Now at first I thought this would break in ETCS, as you can define $f(0) = \mathbb{N}$ and, for each $n$, define $f(S(n))$ as the power set of $f(n)$. This ought to define a map from $\mathbb{N}$ to sets, with values $\mathbb{N}, 2^{\mathbb{N}}, 2^{2^{\mathbb{N}}}, \ldots$.

Writing this down, I realize that the issue is that the universal property of $\mathbb{N}$ here really only pertains to maps valued in $X$’s within our universe of sets, whereas we would like to apply it with $X$ being the whole ETCS universe of sets. So in ETCS, you can inductively define elements of a given set (and thus, by the existence of power sets, also subsets of a given set), but you can’t inductively define sets. Even in your “beths exist” scenario, this wouldn’t in general be guaranteed, I think?

OK, I think I have a slightly better grasp of Replacement now, but I’d argue that it’s a perfectly natural axiom that I probably have used quite often, and would not want to miss. (Surely any use I made can be weakened, but I’d argue that we want to have axioms that are as permissive and unobstructing as possible, and now I’m quite sure that removing Replacement would feel quite obstructing to me.)

Posted by: Peter Scholze on June 14, 2021 9:02 PM | Permalink | Reply to this

Re: Large Sets 2

So here’s the question: do you need the whole hypercover to make a concrete conclusion about the base space – for instance, to calculate a particular cohomology or Ext group? Suppose one took the truncated hypercover, just a few notches above the desired dimension; then the calculation would go ahead as normal, with the same result.

For what purpose do you need the whole simplicial object? I guess one might want to pass to a suitable $(\infty,1)$-category of simplicial sheaves (condensed anima or what-have-you).

Posted by: David Roberts on June 15, 2021 1:28 AM | Permalink | Reply to this

Re: Large Sets 2

I’d argue that [Replacement is] a perfectly natural axiom that I probably have used quite often, and would not want to miss. (Surely any use I made can be weakened, but I’d argue that we want to have axioms that are as permissive and unobstructing as possible, and now I’m quite sure that removing Replacement would feel quite obstructing to me.)

I mostly agree with this. Many, perhaps even most, uses of Replacement are inessential and can be avoided by taking care to define small diagrams correctly and so on, which essentially amounts to working with “indexed categories” throughout. McLarty did essentially this, in material set theory, in The large structures of Grothendieck founded on finite-order arithmetic. But as important and interesting as the ability to do this is, it rather uglifies the mathematics; no one in practice is going to start doing things this way.

However, I believe there’s a middle way. By passing into the stack semantics of a model of ETCS, we can put ourselves in a world where Replacement holds automatically. Semantically, this essentially means the uglification is happening “under the hood” and the “user” doesn’t have to concern themselves with it. The tradeoff is that we have to give up the full axiom of separation, and the axioms of excluded middle and choice when applied to “large” formulas. But most of category theory is very constructive anyhow, most of the rest of mathematics takes place with “small” formulas, and one can argue that uses of unbounded separation are rarer than uses of replacement.

Posted by: Mike Shulman on June 15, 2021 4:46 AM | Permalink | Reply to this

Re: Large Sets 2

Mike wrote:

By passing into the stack semantics of a model of ETCS, we can put ourselves in a world where Replacement holds automatically. Semantically, this essentially means the uglification is happening “under the hood” and the “user” doesn’t have to concern themselves with it. The tradeoff is that we have to give up the full axiom of separation, and the axioms of excluded middle and choice when applied to “large” formulas.

So do you think Peter’s construction would go through in this world where Replacement holds? I guess one only needs to have Stone–Čech compactifications of discrete spaces, but we’d need the usual properties to hold. Presumably one could pass to a locale if absolutely necessary, which would have better constructive properties.

Posted by: David Roberts on June 18, 2021 9:08 AM | Permalink | Reply to this

Re: Large Sets 2

Unfortunately (?), Peter’s construction is one of those cases that really needs additional strength beyond ETCS, rather than being something that can be argued away at the cost of uglification. In Zermelo set theory, it fails due to the lack of replacement. But in the stack semantics, it fails instead due to the lack of full separation. Constructing a family of sets by induction over $\mathbb{N}$ in general requires both replacement and separation: separation to form “the set of all $n$ such that $A_n$ exists”, so that you can use induction to prove that it’s all of $\mathbb{N}$, and then replacement to collect those sets $A_n$ together into a single family.

Posted by: Mike Shulman on June 18, 2021 3:58 PM | Permalink | Reply to this

Re: Large Sets 2

Now at first I thought this would break in ETCS, as you can define $f(0) = \mathbb{N}$ and, for each $n$, define $f(S(n))$ as the power set of $f(n)$. This ought to define a map from $\mathbb{N}$ to sets, with values $\mathbb{N}, 2^{\mathbb{N}}, 2^{2^{\mathbb{N}}}, \ldots$.

Writing this down, I realize that the issue is that the universal property of $\mathbb{N}$ here really only pertains to maps valued in $X$’s within our universe of sets, whereas we would like to apply it with $X$ being the whole ETCS universe of sets.

This is a nice way to put it. One way to express why I so far disagree with Mike’s claim that this is not to do with co-completeness is that a notion of size is fundamentally implicit in speaking of co-completeness at all.

For, to continue your example, suppose that we introduce a single universe into ETCS, but nothing else. We can then carry out your inductive construction in the ETCS world at large, and the colimit will exist, but it will not live in the universe. But we could impose that this colimit does live in the universe. I see no real difference between doing that and requiring instead that the original diagram can be constructed (or, better, I see the two alternatives as two sides of the same coin).

As an aside, if we did introduce a single universe in this way, what is the relation to imposing the existence of strong limit sets/cardinals? Is it equivalent?

Suppose that in ZFC we did the same kind of thing as with condensed sets, and re-defined the notion of smallness implicit in co-completeness to mean smaller than a strong limit, and did all of category theory with respect to this notion of smallness, so that free co-completion and everything else would be with respect to diagrams small in this sense. Does anything go seriously wrong at some point, or to put it better, does anything surprising happen? Or do we just have the same situation as with Grothendieck universes: no real problem, just a bit of extra book-keeping?

Posted by: Richard Williamson on June 14, 2021 11:44 PM | Permalink | Reply to this

Re: Large Sets 2

I’m happy to agree that replacement is a cocompleteness property of $Set$, in the general sense of asserting that certain colimits exist. It’s just that it’s not the property usually called “cocompleteness”, because that asserts only that small diagrams have colimits, which is provable in ETCS for $Set$. Replacement is a stronger cocompleteness property, which instead asserts that certain a priori large diagrams have colimits (or, equivalently, that those diagrams are in fact small).

Posted by: Mike Shulman on June 15, 2021 4:37 AM | Permalink | Reply to this

Re: Large Sets 2

By the way, as long as so much discussion about Replacement is happening on a post series based on neutral/structural set theory, let me point out that McLarty’s replacement axiom for ETCS is not, in my opinion, stated in the best way.

As Tom explained very nicely in Part 1, category theorists are interested in the right level of generality, and in particular in whether any given fact about sets generalizes to other categories. But unlike the other axioms of ETCS (aside from choice), McLarty’s replacement axiom does not generalize well to other categories: because it assumes only that the desired sets exist up to isomorphism, not up to unique isomorphism, it tends to fail in toposes not satisfying choice. On the other hand, simply adding uniqueness of the isomorphism doesn’t seem to give a very useful axiom, since a bare set is very rarely characterized up to absolutely unique isomorphism – this uniqueness usually only arises when characterizing a set together with additional data.

I wrote about some other ways to phrase categorical Replacement (and Collection) axioms, which are equivalent to McLarty’s over ETCS but generalize better to other categories, in section 7 of Comparing material and structural set theories. Although these are a bit more bureaucratic to state in the language of ETCS (due to its non-neutral idiosyncrasies, such as defining families in terms of functions), I think in practice they correspond more closely to how mathematicians actually use these axioms. (And also, these are the forms of these axioms that hold internally in stack semantics, as I mentioned in another comment.)

Posted by: Mike Shulman on June 15, 2021 4:59 AM | Permalink | Reply to this
Read the post Large Sets 6
Weblog: The n-Category Café
Excerpt: Introducing the beths.
Tracked: June 24, 2021 4:10 PM

Re: Large Sets 2

I do not quite understand how you can prove the existence of $X^+$ in ETCS. In ZFC you do this by taking some cardinal which is larger than $X$. That cardinal is in particular an ordinal, and the ordinals below it form a set, so you can pick the lowest one which is larger than $X$, and the result will be exactly the right cardinal number.

How does it work in ETCS, without having canonical representatives of the ordinals and cardinals which automatically assemble into sets when size restricted?

Posted by: Nico Beck on May 11, 2023 8:51 AM | Permalink | Reply to this

Re: Large Sets 2

I think the most natural way to prove the existence of $X^+$ is the one I alluded to in the post:

when I defined $A^+$, I implicitly assumed the fact that any family $(A_i)_{i \in I}$ of sets, indexed over some nonempty set $I$, has a smallest member. (For without this fact, how would we know that there’s a smallest set strictly bigger than $A$?) This can be proved using Zorn’s lemma, or by arbitrarily well-ordering each member.

Let’s take this fact for granted. Given a set $X$, we know there’s at least one set bigger than $X$, namely $2^X$. Now consider the set of subsets $Y$ of $2^X$ such that $X \lt Y \leq 2^X$. This family of sets is nonempty, since we can take $Y = 2^X$. So, by the fact above, it has a smallest member $Z$. Then $Z$ is nothing but $X^+$; that is, $X \lt Z$, and whenever $Z'$ is a set satisfying $X \lt Z'$, then $Z \leq Z'$.

(A point in the background here is that $X^+$, like everything in ETCS, is only defined up to isomorphism.)

An alternative way of coming at it would be to use the Hartogs construction, as in the comment above. In other words, we can construct $X^+$ as the set of isomorphism classes of well-ordered sets of cardinality $\leq X$.
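
As a cross-check against a formalized library: Mathlib packages exactly this fact – every cardinal has a least strict upper bound – into a `SuccOrder` instance on `Cardinal`, so $X^+$ is available as `Order.succ`. A hedged sketch, assuming those Mathlib instances:

```lean
import Mathlib.SetTheory.Cardinal.Basic

-- X⁺ as the least cardinal strictly above X; the SuccOrder instance
-- encodes "every nonempty family of cardinals has a smallest member".
example (X : Cardinal) : Cardinal := Order.succ X

-- It really is strictly above X (cardinals have no maximum, by Cantor):
example (X : Cardinal) : X < Order.succ X := Order.lt_succ X
```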

Posted by: Tom Leinster on May 12, 2023 11:04 PM | Permalink | Reply to this
