## December 8, 2009

### Syntax, Semantics, and Structuralism II

#### Posted by Mike Shulman

Recently several of us have been making a lot of noise about “structural set theory,” also known as “categorical” or “categorial” set theory. This phrase refers to a general class of theories, akin to Lawvere’s ETCS, which describe sets from a purely “structural” point of view. A brief way to say this is that we care only about the category of sets, in which isomorphic sets are indistinguishable, rather than the class of sets equipped with a global membership relation $\in$. In structural set theory, which claims to be closer to the way sets are actually used in most mathematics, it doesn’t make sense to ask whether (for instance) $3\in 17$, since $3$ and $17$ are elements of the set $N$ but not sets themselves. Rather than coming with an intricate set-membership structure, sets in structural set theory are simply the “raw material” with which we build mathematical structures such as groups, rings, spaces, manifolds—and even “set-membership structures”!

This is sort of a continuation of this post, but it should also stand mostly alone. I’ll also summarize some of the things I learned from this previous discussion.

I believe the word structural originally comes from the philosophy of mathematics called “structuralism,” according to which

mathematical theories… describe structures. … Structures consist of places that stand in structural relations to each other.

One prominent exponent of this philosophy was Paul Benacerraf, who pointed out in his paper “What numbers could not be” that set-theoretic foundations such as ZFC do not adequately describe the way we really think about mathematical objects such as numbers. In ZFC, one has to define the number $3$ as some particular set, such as the von Neumann ordinal $\{\emptyset, \{\emptyset\}, \{\emptyset,\{\emptyset\}\}\}$. But this then allows us to ask questions such as “is $17\in 3$?” (which, with this definition, is false) or “is $3\in 17$?” (which, with this definition, is true). Many mathematicians would regard these questions as meaningless.

If numbers are sets, then they must be particular sets, for each set is some particular set. But if the number 3 is really one set rather than another, it must be possible to give some cogent reason for thinking so; for the position that this is an unknowable truth is hardly tenable. (Benacerraf 1965)

Benacerraf imagined a student named Ernie who defined the natural numbers as the von Neumann ordinals $0=\emptyset$, $1=\{0\}$, $2 = \{0,1\}$, $3 = \{0,1,2\}$, and another named Johnny who defined them as $0=\emptyset$, $1=\{0\}$, $2=\{1\}$, $3 = \{2\}$. When Ernie and Johnny met, they could not agree on whether $3\in 17$ or not. Benacerraf concluded that numbers cannot be sets.
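For concreteness, both students’ codings can be run directly, modelling hereditary sets as nested `frozenset`s (a sketch; the function names `von_neumann` and `zermelo` are mine):

```python
def von_neumann(n):
    """Ernie's numerals: n = {0, 1, ..., n-1}."""
    return frozenset(von_neumann(k) for k in range(n))

def zermelo(n):
    """Johnny's numerals: 0 = {}, n+1 = {n}."""
    return frozenset() if n == 0 else frozenset({zermelo(n - 1)})

# The two codings disagree about the "junk" question "is 3 ∈ 17?":
print(von_neumann(3) in von_neumann(17))  # True for Ernie
print(zermelo(3) in zermelo(17))          # False for Johnny
```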

So what matters, really, is not any condition on the objects (that is, on the set) but rather a condition on the relation under which they form a progression…. what is important is not the individuality of each element but the structure which they jointly exhibit…. the question of whether a particular “object”—for example, $\{\{\{\emptyset\}\}\}$—would do as a replacement for the number 3 would be pointless in the extreme… the whole system performs the job or nothing does. (Benacerraf 1965)

That is,

To be the number 3 is no more and no less than to be preceded by 2, 1, and possibly 0, and to be followed by 4, 5, and so forth. (Benacerraf 1965)

Some years earlier, Bourbaki had given an important role to the notion of structure in writing their Elements of Mathematics. One can argue that when we look around us at mathematics, what we see everywhere are structures. The natural numbers are just one example. Another is the real numbers (are they “really” Cauchy sequences or Dedekind cuts?). Likewise cartesian products (is an ordered pair $(a,b)$ “really” $\{\{a\}, \{a,b\}\}$?). Everywhere what we see are structures like groups, rings, fields, topological spaces, manifolds, etc. defined by equipping one or more sets with functions and relations, and which we only care about determining up to structure-respecting isomorphism.

The study of structure can be “coded” within a set theory such as ZFC, but it remains a “coding”—as Benacerraf pointed out, there is extraneous information in any ZFC-set which must be forgotten about in order to study structure. In Bourbaki’s Theory of Sets they defined a general notion of “structure”: one or more sets together with an element of some set obtained from these by iterating products and power sets, satisfying suitable axioms. For example, a topological space would be specified by a single set $X$ and an element of $P(P(X))$ (the set of open subsets of $X$). A group could be specified by a set $G$ and an element of $G\times P(G\times G\times G)$ (the identity together with the graph of the multiplication). In their general theory, Bourbaki specifically restricted the allowable axioms for such structures to those which would be invariant under isomorphisms of the carrier sets. In other words, they specifically had to say “now we forget about all those irrelevant bits and remember only what we want.”
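A toy illustration of this Bourbaki-style coding, with power sets modelled as nested `frozenset`s (a sketch; all names are mine):

```python
# A topology on X is an element of P(P(X)): here the Sierpinski topology on {0, 1}.
X = frozenset({0, 1})
sierpinski = frozenset({frozenset(), frozenset({1}), X})

# A group on G is an element of G × P(G×G×G): the identity together with
# the graph of the multiplication. Here Z/2.
G = frozenset({0, 1})
mult_graph = frozenset((a, b, (a + b) % 2) for a in G for b in G)
z2 = (0, mult_graph)

# Bourbaki's axioms must be isomorphism-invariant: transporting the structure
# along a bijection phi yields a group with the same structural properties.
phi = {0: 'e', 1: 's'}
z2_transported = (phi[z2[0]],
                  frozenset((phi[a], phi[b], phi[c]) for (a, b, c) in z2[1]))
```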

There are even ways to make this “forgetting” precise. One way, which I believe has mostly been pursued by philosophers following Benacerraf, is to introduce “structure” as a notion living at a higher level of abstraction than sets. That is, “the natural numbers” are a “structure,” not a “set,” which is a “way of talking at once about all sets that might represent the natural numbers.” I don’t know whether this has been made into a formal mathematical theory.

Another way of doing things structurally in a ZF-like theory is to postulate a global choice operator, meaning an operator $\varepsilon$ such that whenever there exists some $x$ with $P(x)$, the term $\varepsilon x. P(x)$ is guaranteed to denote such an $x$. We can then define, for instance, “the” natural numbers to be $\varepsilon x. P(x)$ where $P(x)$ is essentially “$x$ is a natural numbers object in $Set$”. All we have to do then is show that there exists some NNO, which we can do with either of Ernie or Johnny’s constructions (or many others, of course). How does this solve the problem? Well, since nothing is assumed about $\varepsilon x. P(x)$ or its elements except that $P$ is true of it, we can’t assert any statement like $3\in 17$. However, since $\mathbb{N}$ is still some particular set, such statements still have a definite truth value (in any model); we can just never discover what it is. So this essentially amounts to resolving the dispute between Ernie and Johnny by having a teacher come over and say “One of you is right and one of you is wrong about whether $3\in 17$, but I’m not going to tell you which is which. All I’m going to tell you is that $\mathbb{N}$ is a set that satisfies a certain list of properties which both of you can prove from your definitions, so those properties are all you’re ever allowed to use.”
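A crude computational analogue of the choice operator (a sketch; `epsilon` and the stand-in property are hypothetical names of mine):

```python
def epsilon(P, candidates):
    """Return *some* witness of P. The caller is promised only that P holds
    of the result, never *which* candidate was chosen."""
    for x in candidates:
        if P(x):
            return x
    raise ValueError("no witness")

def von_neumann(n):
    return frozenset(von_neumann(k) for k in range(n))

def zermelo(n):
    return frozenset() if n == 0 else frozenset({zermelo(n - 1)})

# Pick "the" naturals from two codings that both satisfy the (stand-in) property.
N = epsilon(lambda nat: nat(0) == frozenset(), [von_neumann, zermelo])

# "3 ∈ 17" now has a definite truth value in the chosen model, but code
# respecting the epsilon-abstraction must never depend on which value it is.
mystery = N(3) in N(17)
```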

Bourbaki used such a choice operator, as does Arnold Neumaier’s proposed foundation FMathL. Personally, I find this a rather unsatisfying resolution of the issue. To me it makes much more sense to say that $3\in 17$ is a meaningless statement than that it has a definite truth value we are merely unable to discover. This is a philosophical point rather than a mathematical one, but of course the whole issue is a philosophical one, given that both types of theory are adequate as foundations. I believe Arnold has said that his aimed-at computer implementation of FMathL will recognize such statements as undecidable and warn the user about them, but it seems cleaner to me if they are just a syntax error. A final, and mathematical, point is that assuming a choice operator usually (and unsurprisingly) implies the truth of the Axiom of Choice. Even if you personally have no problem with assuming AC, I think it is unsatisfying for the resolution of a philosophical issue (how to do mathematics structurally) to depend on an extremely strong, and a priori unrelated, set-theoretic axiom.

Now, the partisans of structural or categorical set theory contend that it is equivalent in strength to ZFC-like theories (this much is provable) and just as adequate as a foundation for mathematics, but it does away with all the extra irrelevant information which always has to be forgotten about anyway. I believe the first such set theory was Lawvere’s Elementary Theory of the Category of Sets, which dates to about the same time as Benacerraf’s paper. (I should also mention Todd’s very nice exposition of ETCS.) In ETCS, instead of sets with a membership structure $\in$ as in ZFC, we work with the category of sets, imposing axioms guaranteeing that it has products, exponentials, powersets, and so on. If we then perform only categorical constructions, the results will automatically be isomorphism-invariant. For instance, the natural numbers are characterized as a natural numbers object in the category of sets. The way in which this sort of theory answers Benacerraf’s objections was put forward eloquently by Colin McLarty in “Numbers can be just what they have to”:

…the structuralist program is already fulfilled or obviated, depending on how you look at it, by categorical set theory… Sets and functions in this theory have only structural properties. (McLarty 1993)

McLarty imagines a student named Brittany who has been taught categorical set theory and is quite confused by the difficulties of Ernie and Johnny. In the following exchange, Ernie is trying to describe the philosophers’ approach in which “structure” lives at a higher level of abstraction than “sets.”

… [Brittany] asked “You mean that you define the natural numbers as a certain specific set?”

“Well no,” he answered, “The natural numbers aren’t a set, they are a structure. You see they aren’t uniquely defined.”…

She asked if it was the same for the real numbers, or the Euclidean plane, and he said it was. He said all of those are abstract structures, handy ways of talking about sets but not themselves sets and actually not objects at all.

“So the advantage of your set theory is that mathematicians never work with your sets!” she said amazed.

Brittany, of course, can work with sets as objects of the category $Set$, which can only be characterized up to isomorphism. Thus, for her, nothing can be said about the elements of some NNO other than that they support the structure of an NNO.

The term “categorical set theory” is common for theories such as ETCS, but it has the disadvantage that in logic and philosophy, “categorical” also has the completely different meaning of “uniquely determined.” This has led some people to use “categorial set theory” instead (note the missing “c”). I prefer structural set theory, which among other things stresses the point that such theories do not necessarily depend on category theory. Of course, ETCS is explicitly about the category of sets; the first axioms say that “sets and functions form a category”. But there are equivalent theories in which the notion of category is not taken for granted, and in which such facts as the associativity of function composition are proven rather than assumed as axioms.

I’ve written down one such theory myself as a proof-of-concept, for now called SEAR. It has the additional advantages of containing “elements” of sets as a primitive concept, and not requiring the development of topos theory in order to prove the separation axiom. In SEAR we are given a collection of sets, together with a collection of elements of each set, and for each pair of sets a collection of (binary) relations between those two sets. One axiom allows us to construct a relation $R$ from $A$ to $B$ by specifying precisely for which pairs of elements $a\in A$ and $b\in B$ it is supposed to hold. For example, in this way we can define the composite of two relations $R\colon A \to B$ and $S\colon B \to C$ such that $(S\circ R)(a,c)$ holds just when there exists a $b\in B$ such that $R(a,b)$ and $S(b,c)$ hold. Likewise we can define a function to be a relation $R\colon A \to B$ such that for any $a\in A$ there is a unique $b\in B$ with $R(a,b)$. And we can prove that sets and functions form a category, in fact that they form a category satisfying the axioms of ETCS. Conversely, in an ETCS-category we define an “element” of a set $A$ to be a function $1\to A$ from the terminal object, and a “relation” to be a subobject $R\hookrightarrow A\times B$, and we can prove that the basic axioms of SEAR are satisfied. (SEAR is by default stronger than ETCS, akin to the distinction between ZF and Z).
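A finite toy model of SEAR’s primitives, with relations as sets of pairs, can be sketched as follows (the function names are mine, and this is only an illustration of the axioms, not an implementation of the theory):

```python
def compose(S, R):
    """(S∘R)(a, c) holds iff there exists b with R(a, b) and S(b, c)."""
    return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}

def is_function(R, A, B):
    """R : A → B is a function iff every a in A relates to exactly one b in B."""
    return all(len({b for (x, b) in R if x == a}) == 1 for a in A) \
        and all(b in B for (_, b) in R)

A, B, C = {1, 2}, {'x', 'y'}, {'p', 'q'}
f = {(1, 'x'), (2, 'y')}          # a function A → B
g = {('x', 'p'), ('y', 'p')}      # a function B → C
gf = compose(g, f)                # composition of functions is again a function
```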

SEAR shows that the distinguishing feature of structural set theories is not that they use category theory, but that they can serve as a foundation for structural mathematics without the need to first forget irrelevant information such as whether $3\in 17$. Of course, one cannot get very far in SEAR without defining the category of sets, but one cannot nowadays expect to get very far in most areas of mathematics without defining the category or categories one is working in. I also believe that SEAR is easier for the non-toposophically-inclined to use as a foundational system than ETCS is, since it includes a “comprehension axiom” as basic, rather than having to construct such an axiom out of topos-theoretic structure as in ETCS.

By contrast to structural set theories, I like to refer to ZFC-like set theories as material set theories. The idea is that in a material set theory, the elements of a set $A$ are “material” and have an existence and identity apart from being an element of $A$. The word “material” was suggested by Steve Awodey; its earliest use that I know of is in his paper “Structure in mathematics and logic: a categorical perspective”:

The definition [of a cartesian product] provides a uniform, structural characterization of a product of two objects in terms of their relations to other objects and morphisms in a category, in contrast to ‘material’ set-theoretic definitions which depend on specific and often irrelevant features of the objects involved, introducing unwanted additional structure. Indeed it is just this material aspect of conventional set theory that gives rise to such pseudo-problems as whether the number $1$ is ‘really’ the set $\{\emptyset\}$, or whether the real numbers are ‘really’ cuts in the rationals. (Awodey 1996)

Now, the sets in a material set theory are admittedly closer to the natural-language meaning of “set”: a set of three sheep can be distinguished from a set of three chairs, and each of the sheep and chairs might also be an element of other sets. However, the claim is that the sets in a structural set theory are closer to the way sets are used in mathematics. These “structural sets” are also very similar to the types in a type theory (regarded as the object-theory, as suggested in the previous post). In fact, Toby has convinced me that it’s difficult to decide exactly where to draw the line between type theory and structural set theory, although there are differences in how the words are most commonly used. It might be better, terminologically speaking, if mathematicians had used a word such as “type” instead of “set” all along. But by now the notion that (for instance) a group is a set equipped with an identity and a multiplication is so firmly entrenched in most mathematicians’ consciousnesses that I think there’s little point trying to change it. Anyway, as I mentioned in the previous post, “set” and “type” and “class” are basically fungible words—especially when used structurally.

So what is the relationship between structural and material set theory? One direction is easy: the sets in a material set theory will always, by forgetting the superfluous data, form a structural set theory. In the other direction, one can structurally “build” a model of material set theory by constructing “hereditary membership trees” and calling them “sets”. See nlab:pure set for a summary; more detailed versions can be found in chapter VI of Sheaves in Geometry and Logic, or in chapter 9 of Johnstone’s “Topos Theory.”
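The membership-tree construction can be miniaturized: a “material set” is what you get by collapsing a well-founded pointed graph to nested `frozenset`s, and distinct trees can code the same set (a sketch; the names are mine):

```python
def collapse(children, node):
    """Read off the pure set coded by a well-founded pointed graph
    (a 'hereditary membership tree')."""
    return frozenset(collapse(children, c) for c in children(node))

# Two different graphs coding the same pure set {∅, {∅}} (the von Neumann ordinal 2):
g1 = {'r': ['a', 'b'], 'a': [], 'b': ['a']}
g2 = {'r': ['b', 'x', 'y'], 'x': [], 'y': [], 'b': ['x']}
s1 = collapse(g1.get, 'r')
s2 = collapse(g2.get, 'r')
print(s1 == s2)  # True: extensionally equal, though the trees differ
```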

Thus, the two types of theory are “equivalent” in a suitable sense, so at least in principle either one can equally well be used as a foundation. However, one should not get the impression from this that the way in which structural set theory serves as a foundation is by first using material set theory as a foundation and then interpreting this using membership trees. The situation is rather the reverse: the way material set theory serves as a foundation for mathematics is by first using structural set theory as a foundation, and then interpreting this in the category of material sets. The complexity of the construction of membership trees really makes it clear, to me at least, how much superfluous data is carried around when we use material set theory as a foundation for mathematics.

There are lots of directions to go from here, most notably how to deal with proper classes, large categories, $n$-categories and $\omega$-categories and so on. But I think I’ll call it quits for now.

Posted at December 8, 2009 10:21 PM UTC

### Re: Syntax, Semantics, and Structuralism II

Sorry to open with such a trivial comment, but I noticed your dissatisfaction with the name “SEAR” (which I guess is your own). Do you have some particular objection to it? It seems good to me.

Posted by: Tom Leinster on December 8, 2009 11:15 PM | Permalink | Reply to this

### Re: Syntax, Semantics, and Structuralism II

I guess I’m pretty satisfied with “SEAR” by now. It always takes me a while of using a new terminology until I get used to it. Especially if I invented a name myself, I always start out feeling self-conscious about it. (-:

Posted by: Mike Shulman on December 8, 2009 11:42 PM | Permalink | PGP Sig | Reply to this

### Re: Syntax, Semantics, and Structuralism II

No doubt you posted this with eyes wide open. I can just hear the cry roaring up:

Morituri, te salutamus!

Te saluto!

Posted by: Todd Trimble on December 9, 2009 12:42 AM | Permalink | Reply to this

### Re: Syntax, Semantics, and Structuralism II

Now that’s a disturbing image.

Posted by: Mike Shulman on December 9, 2009 4:01 AM | Permalink | Reply to this

### Re: Syntax, Semantics, and Structuralism II

I only meant that given the fact that foundational discussions tend to excite a lot of strong mathematical emotion and seemingly interminable discussions (which take a lot of time out of the day to answer), I admire and salute your willingness to enter the fray nonetheless. These words, famously uttered on entry to the arena, apparently evoked an image stronger than anything I intended. Hope I didn’t give offense.

Posted by: Todd Trimble on December 9, 2009 3:56 PM | Permalink | Reply to this

### Re: Syntax, Semantics, and Structuralism II

Oh no, I was just amused, and maybe a little bemused. I was wondering who the emperor was that we were all saluting. (-:

Posted by: Mike Shulman on December 9, 2009 4:05 PM | Permalink | PGP Sig | Reply to this

### Re: Syntax, Semantics, and Structuralism II

While structural set theories are better than material set theories in terms of parsimony, I will always have a place in my heart for Bourbaki’s treatment of the subject. Also, “every object is a set” is definitely aesthetically pleasing in a certain sense. I definitely agree with you that a structural set theory is better for almost everything, but surely you can admit that classical set theory is beautiful in its own special way.

Posted by: Harry Gindi on December 9, 2009 6:55 PM | Permalink | Reply to this

### Re: Syntax, Semantics, and Structuralism II

Also, “every object is a set” is definitely aesthetically pleasing in a certain sense

Not to me, although aesthetics are of course subjective. I can see that it is convenient, but to me, it is unnatural to say that a group is a set rather than that it is a set equipped with structure.

surely you can admit that classical set theory is beautiful in its own special way.

Indeed I can! I frequently overstate the case for structural set theory to make a point and to jumpstart discussion. This is especially true when discussing it as a foundation for other parts of mathematics, since I believe that for those purposes, material set theory just introduces a lot of superfluous stuff. However, material set theory, which I prefer to think of as the study of well-foundedness, indeed has its own beauty.

Additionally, in studying set theory as a subject in its own right, it’s useful to be able to pass back and forth between the two viewpoints freely. On the one hand, I believe some set-theoretic constructions are much easier to understand in the structural world. For instance, set-theoretic forcing is an unnecessarily complicated (to my mind) way of talking about classifying toposes. On the other hand, some other constructions are much clearer materially. Gödel’s constructible universe and its relatives, which play a central role in modern set theory, are a good example; I don’t know whether there is a clean way of talking about them structurally.

Posted by: Mike Shulman on December 9, 2009 8:43 PM | Permalink | PGP Sig | Reply to this

### Re: Syntax, Semantics, and Structuralism II

…although aesthetics are of course subjective.

Why “of course”? Remember von Neumann saying

As a mathematical discipline travels far from its empirical source, or still more, if it is a second or third generation only indirectly inspired by ideas coming from ‘reality’, it is beset with very grave dangers. It becomes more and more purely aestheticizing, more and more purely l’art pour l’art. This need not be bad, if the field is surrounded by correlated subjects, which still have closer empirical connections, or if the discipline is under the influence of men with an exceptionally well-developed taste. But there is a grave danger that the subject will develop along the line of least resistance, that the stream, so far from its source, will separate into a multitude of insignificant branches, and that the discipline will become a disorganized mass of details and complexities. In other words, at a great distance from its empirical source, or after much ‘abstract’ inbreeding, a mathematical subject is in danger of degeneration.

By “men with an exceptionally well-developed taste”, did he merely mean people who believed themselves to have good taste, or whom he (subjectively) believed to possess it?

Posted by: David Corfield on December 13, 2009 11:37 AM | Permalink | Reply to this

### Re: Syntax, Semantics, and Structuralism II

By “men with an exceptionally well-developed taste”, did he merely mean people who believed themselves to have good taste, or whom he (subjectively) believed to possess it?

Well, not being von Neumann, I can’t say for sure, but I would presume the latter. But as you say with your parenthetical, his judgement of “good” taste is subjective to him just as much as mine is to me.

That isn’t to say that there might not be some objective standard by which to judge “good taste,” especially in mathematics, and many of us seem to think that our personal aesthetics are “correct” or “better.” But I think all I meant to say is that in practice, we observe that different people have different aesthetics, regardless of whether one of them is right and one of them is wrong. Case in point: Harry finds “every object is a set” aesthetically pleasing, but I don’t.

Posted by: Mike Shulman on December 13, 2009 3:46 PM | Permalink | Reply to this

### Re: Syntax, Semantics, and Structuralism II

By “men with an exceptionally well-developed taste”, did he merely mean people who believed themselves to have good taste, or whom he (subjectively) believed to possess it?

I read it as not quite either, but closer to the latter. It sounds like he’s saying that some controlled pruning is necessary in the development of a field. It could be the outside influence of experimental results, or it could come from inside. Every field has a dominant cluster of “tastemakers”, and if a field is lucky its tastemakers will prune well.

The tastemakers, of course, will always believe themselves (possibly incorrectly) to have good taste, and von Neumann may or may not be able to judge good taste in a new field, but even so the pruning may be done well or badly, and that is the “objective” standard.

Posted by: John Armstrong on December 13, 2009 4:30 PM | Permalink | Reply to this

### Re: Syntax, Semantics, and Structuralism II

I didn’t offer the two interpretations as exhaustive. I should rather imagine that von Neumann considered there to be an objective notion of “exceptionally well-developed taste”. And that you needed this taste to judge whether others had it.

I have no trouble with this. I’m happy to say that Atiyah has shown himself to have had exceptionally well-developed taste. Not that I’m qualified to judge. What one might say is that any objectivity to the notion of good taste amounts to little more than its referring to good mathematics. And perhaps what it is for something to be good mathematics can be specified in other ways - that which carries on the story of mathematics in an important way. What does important mean…

Harry may find “every object is a set” aesthetically pleasing, but if that were for him the high point of what he enjoys, I would then question his mathematical taste.

Posted by: David Corfield on December 13, 2009 5:32 PM | Permalink | Reply to this

### Re: Syntax, Semantics, and Structuralism II

Also, “every object is a set” is definitely aesthetically pleasing in a certain sense.

I certainly found it pleasing when I first learnt it, although later I found it gratuitous.

Nevertheless, there is still something pleasing about our ability to model everything within the universe of pure sets. These days, I'll remark to my students (usually non-math-majors to whom I'm introducing or recalling the concept of a set in the sense of a subset of a fixed domain, which is usually a low-dimensional cartesian space) that sets are very important in more advanced mathematics, and that in fact it seems possible to (and some people like to) construct every mathematical object out of sets. That fact may not be as important as many people think, but it is still a neat fact about the concept of sets.

Posted by: Toby Bartels on December 13, 2009 6:38 PM | Permalink | Reply to this

### Re: Syntax, Semantics, and Structuralism II

I don’t remember what exactly triggered the association, but some time ago it occurred to me that material set theory is quite a lot like a naive Lisp. Yes, you can design a very simple programming language where the only data is atoms/symbols, cons pairs and functions, and you can build all data structures on those primitives. And one could even describe the construction as somewhat elegant as an exercise in extreme minimalism.

But, having since worked with typed languages with algebraic/(co)inductive data types, I find myself liking them much more. I much prefer “T is the type defined by these (co)constructors” to “T is the subset of rose trees of this certain shape, and with symbols tagging the right spots.” And definitions in material set theory (at the least) fall in the latter category. It feels ad-hoc and unprincipled to me.

It also occurred to me that I’ve seen similar developments in type theory proper. For instance, in Conor McBride’s (and others’) work on his “Observational” Type Theory, he notes that with sufficiently extensional equality (which he’s trying to provide), it suffices to have a core theory with only the primitive types 0, 1 and 2 (being finite sets of those sizes), and type formers $\Sigma$, $\Pi$ and W, because all inductive families can be encoded with those (and apparently coinductive and M-types can be, too, although I’m hazy on that). And W-types, being well-founded trees, are a lot like material sets. But, such a type theory isn’t something I’d want to work with directly, so its usefulness is probably limited to being a core theory, so one can be confident in the correctness of the implementation, while one would actually work in an overlay where inductive definitions are at the fore.

Having expressed that, I’m no longer sure if I’m arguing for or against material set theory as a foundation. :) The material approach (so to speak) of the OTT papers is useful because it makes it easy to have confidence that the implementation faithfully reflects the mathematical theory, but I’m not sure there’s an analogous concern in the realm of pure mathematics (as I wouldn’t say SEAR, for example, has more complex axioms than ZF).

Posted by: Dan Doel on December 14, 2009 1:49 AM | Permalink | Reply to this

### Re: Syntax, Semantics, and Structuralism II

That reminds me of the point that came up somewhere in the long discussions back here about “low-level” versus “high-level” foundations. Set theory, of any sort, is kind of like assembly language, or bytecode maybe, or I suppose pure Lisp if you like: simple, portable, and easy to ensure correctness, but long and tedious to actually code in, so therefore no one does. Arnold Neumaier is interested in designing a “foundation” or “formalization” which is more like a high-level language that’s closer to the way mathematicians actually write. I found the general idea quite compelling, but we disagreed about what that high-level formalization ought to look like; I (and some others) wanted it to look more “structural” and “typed,” since that’s what mathematics looks like to me, whereas his system FMathL is “material” and “untyped,” which apparently is what mathematics looks like to him.

Posted by: Mike Shulman on December 14, 2009 2:26 AM | Permalink | Reply to this

### Re: Syntax, Semantics, and Structuralism II

I suppose another example of such a construction is that you can encode inductive types in lambda calculi, and you don’t even need to abandon types to do so. For instance, the Church numerals look something like:

$0 = \lambda x \, f. \, x$, $\quad 1 = \lambda x \, f. \, f \, x$, $\quad 2 = \lambda x \, f. \, f \, (f \, x)$, $\quad\ldots$

And in System $F_2$ or better, you can give such terms a type: $\Pi R. R \to (R \to R) \to R$.
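In the untyped fragment these encodings can be run directly, for example as Python lambdas (a sketch following the $\lambda x \, f$ argument order above; `succ` and `to_int` are my names):

```python
zero = lambda x: lambda f: x                     # λx f. x
succ = lambda n: lambda x: lambda f: f(n(x)(f))  # apply f one more time

def to_int(n):
    """Interpret a Church numeral at x = 0, f = (+1)."""
    return n(0)(lambda k: k + 1)

one = succ(zero)
two = succ(one)
print(to_int(two))  # 2

# Untyped, zero has exactly the shape of the boolean λt f. t
# (select the first argument):
print(zero('first')('second'))  # first
```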

For some reason, I dislike this less than I dislike building everything with sets, although they aren’t exactly a pleasure to work with directly (and they have some fundamental flaws; for instance, I saw a paper recently showing that you can’t construct the dependently typed induction principle for the above encoding). Perhaps it’s because it’s still somewhat structural in character. Inductive types are encoded by their (weak) eliminators (or, if you look at the categorical basis, it’s making use of initiality, by encoding elements of the initial F-algebra by functions from F-algebras to the image of said element under the unique initial mapping), so types are still characterized by what you can do with the elements, rather than by simply defining Foos as some subset of all lambda terms where the elements have the right shape (especially since there’s no single type of all lambda terms to take a subset of).

This goes away a little more if you work in the untyped lambda calculus. For instance, if we have:

$\lambda x \, f. \, x$

It’s both a Church numeral and a boolean, but

$\lambda x \, f. \, f \, x$

is a numeral but not a boolean, because it has the wrong shape to be the latter. If we have types, and we’re explicit about them, then:

$\Lambda R. \, \lambda x^R \, f^{R \to R}. \, x$

is a numeral, while:

$\Lambda R. \, \lambda t^R \, f^R. \, t$

is a boolean, and the types tell us the difference.

Of course, it’s also possible that I’m just biased against theories where types don’t play a part (since I quite like types). You could argue that the above types are just ways of picking out the rightly shaped lambda terms, but it feels less messy to me.

Posted by: Dan Doel on December 14, 2009 6:49 AM | Permalink | Reply to this

### Re: Syntax, Semantics, and Structuralism II

For some reason, I dislike this less than I dislike building everything with sets, although they aren’t exactly a pleasure to work with directly (and they have some fundamental flaws; for instance, I saw a paper recently showing that you can’t construct the dependently typed induction principle for the above encoding).

You have to be careful about this, though. Whether this is possible depends on the notion of equality of terms in your type theory.

If equality of terms is given by $\beta\eta$-equality, then you can only show that the Church encoding is a weakly initial algebra: you’ve got existence but not uniqueness, so you can define functions by recursion but not prove properties by induction. However, if equality of terms is given by contextual equivalence (which in the case of System F corresponds to Reynolds parametricity), then the Church encodings are strongly initial algebras, and the induction principle is provable.
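Concretely, weak initiality is exactly what makes the recursor definable: any algebra structure on a type $r$ induces a map out of the encoding, so functions like addition come for free (a Haskell sketch; the names are mine):

```haskell
{-# LANGUAGE RankNTypes #-}

type Church = forall r. r -> (r -> r) -> r

zero :: Church
zero = \x _f -> x

suc :: Church -> Church
suc n = \x f -> f (n x f)

-- Recursion (existence of the algebra map): fold m's successors
-- on top of the result of folding n.
add :: Church -> Church -> Church
add m n = \x f -> m (n x f) f

toInt :: Church -> Int
toInt n = n 0 (+ 1)

-- What beta-eta equality alone does NOT give is the dependent
-- induction principle  ∀P. P zero -> (∀n. P n -> P (suc n)) -> ∀n. P n:
-- the quantified r above cannot depend on the numeral being eliminated.
```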

This is why people sometimes say dependent type theory doesn’t support data abstraction: without parametricity, we can’t prove, internally to the type theory, that an abstract implementation of the natural numbers by unary Peano numbers is equal to an implementation with a binary representation.

Posted by: Neel Krishnaswami on December 14, 2009 11:07 AM | Permalink | Reply to this

### Re: Syntax, Semantics, and Structuralism II

I’m interested in this more generally. Given a strong endofunctor $F: C \to C$ on a cartesian closed category, the end

$\int_{c: C} c^{c^{F c}},$

if it exists, is a weakly initial $F$-algebra. (This specializes to the result you just mentioned, where if we take $F$ to be an endofunctor of the form $F(c) = 1 + c$, the end can be rewritten as

$\int_{c: C} (c^c)^{(c^c)}$

which parametrizes dinatural transformations of the form $c^c \to c^c$.)
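The rewriting in the parenthetical uses only the standard exponential laws $c^{1+c} \cong c \times c^c$ and $c^{A \times B} \cong (c^A)^B$:

$c^{c^{F c}} = c^{c^{1+c}} \cong c^{c \times c^c} \cong (c^c)^{(c^c)}$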

Do you know of some nice general assumptions on $C$ or $F$ which would ensure that this end is strongly initial?

Posted by: Todd Trimble on December 14, 2009 2:10 PM | Permalink | Reply to this

### Re: Syntax, Semantics, and Structuralism II

Hi Todd,

Sorry for the delay in my response, but I’ve tried to reply a couple of times and each time it ballooned out of control. I think the best thing I can do is just point you to the literature.

Rosolini’s paper “Reflexive Graphs and Parametric Polymorphism” is probably the closest thing to an answer to your question that I know. The open question in his intro has been answered positively since then, but his description of parametric completions is still state-of-the-art AFAIK.

To unwind the stack on that paper, you might be helped by Ma and Reynolds’ paper “Types, Abstraction and Parametric Polymorphism, Part 2.”

This is a categorical version of Reynolds’s “Types, Abstraction and Parametric Polymorphism”, which was necessary because the original assumed the existence of set-theoretic models of parametric polymorphism, which he subsequently showed (in “Polymorphism Is Not Set-Theoretic”) did not exist.

The references to the use of the effective topos in Rosolini’s paper are explained a bit in Pitts’s paper “Polymorphism Is Set-Theoretic, Constructively”.

Posted by: Neel Krishnaswami on December 15, 2009 1:18 PM | Permalink | Reply to this

### Re: Syntax, Semantics, and Structuralism II

I expect that a lot of it is what one is used to. I guess it makes a little more sense to say that the number 3 “is” the operation of iterating something three times than it does to say that it “is” the set $\{0,1,2\}$. It’s kind of like the naive set-theoretic definition (which works in NF, I believe) that 3 “is” the set of all three-element sets. The natural numbers are so ubiquitous, with so many faces, that it’s easy to find lots of “natural-seeming” models for them. But the advantage of structural set theory is that it doesn’t force us to choose any particular definition.

I also think all the focus on natural numbers partly misses the point. What about $-2$? Or $\pi$? Or $3+4i$? From the point of view of structural set theory, these numbers all have the same ontological status. But there is no Church numeral for $-2$, no useful set of all sets with $\pi$ elements, and no von Neumann ordinal for $3+4i$. So if your definition of “natural number” is any of those things, then when you move on to more general kinds of numbers, you tend to end up introducing ad-hockery and unnecessary material data.

Posted by: Mike Shulman on December 14, 2009 6:45 PM | Permalink | Reply to this

### Re: Syntax, Semantics, and Structuralism II

Historical note: Bourbaki’s final version of “Theorie des Ensembles” was published in 1954, which I’ve verified on the internet. This was over ten years before Lawvere published his exposition of ETCS. It’s also a matter of fact that the main ideas of Book 1 were published around 1939; this was essentially the same book, except that the proofs were not filled in, as there were more pressing concerns (namely publishing the books that had important substance: algebra, topology, etc.). In view of this, Lawvere’s ETCS was preceded by Bourbaki’s set theory by nearly thirty years.

By the way, Toby said to try to get someone to make a post on the n-cafe asking about summer undergraduate research projects/opportunities involving higher category theory when I asked on the n-forum. Here’s the relevant post: Click!. I figure since I’m replying, I might as well put that out there.

Posted by: Harry Gindi on December 9, 2009 7:26 AM | Permalink | Reply to this

### Re: Syntax, Semantics, and Structuralism II

Thanks for the correction! I’ve updated the entry. I was fooled because my version of Bourbaki is dated in the 60s, but if I’d been awake I would have realized that was just the date of the English translation, not the original.

Posted by: Mike Shulman on December 9, 2009 4:12 PM | Permalink | PGP Sig | Reply to this

### Hilbert’s choice operator and existential types

It should be noted that Hilbert’s choice operator is also related to existential types in type theory. The common denominator is that both are a form of “dynamic linking” or “abstract types”.

See Abadi, Gonthier, Werner: Choice is Dynamic Linking, which also explains the subtle distinctions between existential types proper and Hilbert’s choice. Existential types have a large literature, but the seminal paper is Mitchell, Plotkin: Abstract types have existential type.
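In Haskell, the Mitchell–Plotkin view can be sketched with an existentially quantified datatype: clients see the interface operations but never the representation type (the `Counter` example and all names here are illustrative, not taken from the paper):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- An abstract counter interface, roughly  ∃X. X × (X -> X) × (X -> Int):
-- the representation type x is hidden by the existential quantifier.
data Counter = forall x. Counter x (x -> x) (x -> Int)

-- One implementation, with Int as the hidden representation.
intCounter :: Counter
intCounter = Counter 0 (+ 1) id

-- A client can only step and read; it never learns what x is.
readAfter :: Int -> Counter -> Int
readAfter k (Counter z s r) = r (iterate s z !! k)
```

A different implementation (say, counting downward with a negated read-out) would be indistinguishable to clients, which is the data-abstraction point.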

Posted by: guest on December 12, 2009 6:12 PM | Permalink | Reply to this

### Re: Hilbert’s choice operator and existential types

It should be noted that Hilbert’s choice operator is also related to existential types in type theory.

Very interesting, thanks for the link. For anyone who doesn’t want to follow it, I think the crucial passages are:

…if $A$ is a type and $X$ is a type variable [which is presumably free in $A$], then $\varepsilon X.A$ is a type $X$ for which $A$ is inhabited, when such a type exists. The type $X$ may be chosen dynamically (at run-time) among several candidates. In any case, $X$ is unique [sic.—I would say “fixed”]. For instance, $\varepsilon X.X$ is an arbitrary but fixed inhabited type.

and

…whenever we instantiate $\varepsilon X.A$ with a chosen type $B$, we also pick a value of the corresponding type $A[B/X]$. In programming terms, this value can be seen as a dynamically linked implementation of the interface $A$, with $B$ as the concrete representation type for $X$.

This choice operator in type theory reminds me a lot of the seemingly impossible functional programs, only acting at the level of types rather than values. Since the seemingly impossible “find” function on Cantor space depends on its topological compactness, do you think it makes sense to say that the choice operator is expressing some sort of “compactness” of the “space of types”?
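For reference, the “find” function in question can be written in a few lines of Haskell. This is essentially Escardó’s seemingly impossible program (lightly adapted; `Cantor` here is indexed by `Integer`), and it terminates on every total predicate because such a predicate can only inspect finitely many bits:

```haskell
-- Cantor space as infinite bit sequences, indexed by naturals.
type Cantor = Integer -> Bool

-- Prepend a bit to a sequence.
(#) :: Bool -> Cantor -> Cantor
(b # _) 0 = b
(_ # a) i = a (i - 1)

-- find p produces a sequence satisfying p whenever one exists;
-- forsome decides whether any sequence satisfies p.  The two are
-- mutually recursive, and laziness is essential.
find :: (Cantor -> Bool) -> Cantor
find p = if forsome (\a -> p (True # a))
         then True  # find (\a -> p (True  # a))
         else False # find (\a -> p (False # a))

forsome :: (Cantor -> Bool) -> Bool
forsome p = p (find p)
```

For example, `forsome (\a -> a 0 && a 5)` evaluates to `True`, while `forsome (\a -> a 1 && not (a 1))` evaluates to `False`, despite the search space being uncountable.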

Posted by: Mike Shulman on December 14, 2009 7:42 PM | Permalink | PGP Sig | Reply to this
Read the post In Praise of Dependent Types
Weblog: The n-Category Café
Excerpt: Dependent types should be everyone's friend!
Tracked: March 3, 2010 6:41 PM
