## March 23, 2010

### Stack Semantics

#### Posted by Mike Shulman

Note: This is being reposted from the temporary site, with existing comments copied over so that discussion can continue here.

Today I’m blogging from Washington D.C., at the Annual Meeting of the Association for Symbolic Logic. The ASL is mostly populated with material set theorists and classical logicians, but this year they had a special session on Categorical Logic, and another one on Logic and the Foundations of Physics (including lots of categorical quantum mechanics)—a promising sign for the recognition of category theory. I was invited to speak at the former session this afternoon, about stack semantics and 2-categorical logic.

And not-entirely-coincidentally, at long last I’ve put online a draft of my (first) paper about the stack semantics and comparing material and structural set theories. You can get it from my nlab page.

There are also slides from today’s talk and from a talk last November.

In brief, the idea of the stack semantics is to extend the internal logic of a topos to a language which can talk about unbounded quantifiers (quantifiers of the form “for all sets” rather than “for all elements of A” for some fixed set A). In this extended language, we can then state topos-theoretic axiom schemas which are as strong as the full separation and replacement axioms of ZF. (Ordinary topos theory is only equiconsistent with bounded Zermelo set theory, which is much weaker than ZF.) This generalization is extremely easy—even easier than some presentations of the ordinary internal logic—and is in fact implicit throughout topos theory, but has seemingly never been written down precisely before.
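Concretely, the extension works by adding a forcing-style clause for unbounded quantifiers. A sketch of the shape such a clause takes (the notation here is mine, following the Kripke–Joyal style; details may differ from the draft):

```latex
% Sketch: for an object U of a topos S, an unbounded universal quantifier
% is interpreted by quantifying over all objects of all slices over U:
U \Vdash (\forall X)\,\varphi(X)
  \quad\Longleftrightarrow\quad
  \text{for all } p\colon V \to U \text{ and all objects } A \text{ of } S/V,\
  V \Vdash \varphi(A) .
```

The point is that the right-hand side quantifies over objects of slice categories, which is external and always available, so no object classifier inside the topos is needed.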

If that intrigues you, then you may want to look first at the talk from November; it’s aimed at category theorists without much experience in categorical logic. Then you can go on to look at the paper itself, most of which should (I hope) also be fairly accessible. Comments are welcome!

Posted at March 23, 2010 6:51 AM UTC

TrackBack URL for this Entry:   http://golem.ph.utexas.edu/cgi-bin/MT-3.0/dxy-tb.fcgi/2196

### Re: Stack Semantics

I am intrigued by the role the self-indexing stack $X \mapsto S/X$ (on slide 30) plays.

Do you see a role for the under-self-indexing $X \mapsto X/S$? Elsewhere the claim is that a tremendous amount of interesting geometric information is encoded in the objectwise stabilized/abelianized self-under-indexing of a higher “category of spaces” $S$. This must have a logical counterpart, I suppose.

Posted by: Urs Schreiber on March 23, 2010 7:01 AM | Permalink | Reply to this

### Re: Stack Semantics

In brief, no, I don’t see any comparable role for co-slice-categories. One reason is that co-slice categories generally don’t preserve any of the “set-like” behavior which slice categories do. The slice category $S/X$ plays a crucial role in topos theory because its objects can be thought of as “$X$-indexed families of sets/objects,” and anything you can do with a single set can be done “family-wise” to a family. Put differently, objects of $S/X$ are “generalized objects of $S$ at stage $X$” as I described on one of the slides. I don’t see any such interpretation for co-slice categories; they are undoubtedly important in mathematics, and maybe they even have their own internal logic, but that logic would be much less familiar, perhaps sort of like a linear logic.

Posted by: Mike Shulman on March 23, 2010 7:25 AM | Permalink | Reply to this

### Re: Stack Semantics

I don’t see any such interpretation for co-slice categories

We can at least think of the coslice-category under $X$ as the category of all generalized elements in objects of $S$ at stage $X$.

But maybe I changed my mind since my original question. Now I am thinking that the stabilized co-slice operation is maybe more important at the level of sites.

Posted by: Urs Schreiber on March 23, 2010 9:09 AM | Permalink | Reply to this

### Re: Stack Semantics

Well, I haven’t finished writing my response to the paper as you sent it by email (but I like it, I like it!). However, since you asked specifically for terminological advice, here is mine:

• On ‘autological’: I’m wary of any terms derived from ‘separation’, since this is liable to conflict with unrelated topological terminology. The only meaning of ‘autological’ that I know of is in relation to the Grelling–Nelson paradox, where it refers to a self-describing adjective. If you’re happy with invoking that idea, then go for it.

• On ‘well-pointed’: I honestly don’t know why one wouldn’t simply use ‘well-pointed’ period. Is there any literature that studies topos theory from an externally constructive perspective that uses that term in the weaker (but classically equivalent) sense? If so, then sure, say ‘constructively well-pointed’ to clarify (and in any case, say ‘classically well-pointed’ for the weaker notion when comparing them). But if not, then use the standard term for what that term really should mean (and already does mean, even if one usually simplifies the definition classically).

Posted by: Toby Bartels on March 23, 2010 7:02 AM | Permalink | Reply to this

### Re: Stack Semantics

Dang, I can’t believe that I wrote that wrong. Of course, ‘autological’ describes a self-describing adjective.

Posted by: Toby Bartels on March 23, 2010 7:12 AM | Permalink | Reply to this

### Re: Stack Semantics

I have exactly the same feelings about terms derived from ‘separation’. The only objection to ‘autological’ that I’ve heard so far is that it’s a ‘funny-sounding word’.

On ‘well-pointed’, I agree with you. But I’ve talked to some people who feel strongly that the term ‘well-pointed’ is so strongly associated with a classical metatheory that using it to mean anything at all in a constructive one will confuse people. My inclination with this paper is to bend over backward to avoid alienating anyone, since many people have strong and differing feelings in this area. But my current secret hope (hmm, I suppose it’s not so secret any more when I post it on the blog, is it? oh well) is that ‘constructively well-pointed’ will be unambiguous to everyone to start with, and yet clumsy enough that once people get used to using the constructively reasonable notion they will naturally begin dropping the adjective.

Posted by: Mike Shulman on March 23, 2010 7:18 AM | Permalink | Reply to this

### Re: Stack Semantics

If you’ve found people who strongly object to ‘well-pointed’ in this sense, then that’s probably a good strategy. If you consistently use one adjective or another when working within a constructive metatheory, then I agree that the natural thing to do will become obvious. Certainly the classical theorem well-pointed ⇒ boolean is so well-known that it’s good to guard against it.

The danger here is that people will assume that the constructive version is weaker than the classical. That is, they’ll think that you avoid the theorem above by using a weaker hypothesis, whereas in fact you avoid it by using a weaker metatheory, even though you use a potentially stronger hypothesis than you might have. But that point will have to be clarified no matter what terminology you use.

Posted by: Toby Bartels on March 23, 2010 7:23 AM | Permalink | Reply to this

### Re: Stack Semantics

That’s a good point. I think it is true, though, that frequently the adjective “constructively” is a strengthening one.

Posted by: Mike Shulman on March 23, 2010 7:27 AM | Permalink | Reply to this

### Re: Stack Semantics

I’m very happy to see you are developing “2-logic”, and I’m sure David Corfield will be happy too.

But what exactly do you mean by this:

I prefer to say: Sets are already analogous to truth values in a 1-topos, via Curry-Howard.

It seems like something I should understand and like, but somehow I don’t get the reference to Curry-Howard.

Are you saying sets are to the topos Set as truth values are to the set {F,T}? Or what?

Posted by: John Baez on March 23, 2010 7:13 AM | Permalink | Reply to this

### Re: Stack Semantics

I wrote a bit in response to this comment, and then realised that I wasn’t quite answering the question you asked. However, I think the answer may be of interest anyway, so here it is.

Are you saying sets are to the topos Set as truth values are to the set {F,T}? Or what?

As I read the paper here, he’s saying that statements like “sets are to $\mathbf{Set}$ as truth values are to $\Omega$”, or “sets are to $\mathbf{Cat}$ as truth values are to $\mathbf{Set}$”, are somewhat true, but better seen slightly differently.

Asking “what is to $\mathbf{Cat}$ as truth values are to $\mathbf{Set}$?” is like asking “what is to the tetrahedron as edges are to the triangle?” In the latter case, the answer could be just “edges again”, but it could also be “faces”, depending on whether you’re thinking more about dimension or codimension. But thinking about the sequence of dimensions points to a better answer; what we should be saying is:

“The sequence (vertices, edges, faces, solid) is to the tetrahedron as the sequence (vertices, edges, face) is to the triangle.”

and from this we can see that edges and faces in the tetrahedron are both somewhat analogous to edges in the triangle, but neither completely so; and also that, as adjacent terms of the sequence, there’s already an important analogy between the edges and the face, just within the triangle itself.

Posted by: Peter LeFanu Lumsdaine on March 23, 2010 8:13 PM | Permalink | Reply to this

### Re: Stack Semantics

Asking “what is to Cat as truth values are to Set?” is like asking “what is to the tetrahedron as edges are to the triangle?”

That’s a very nice analogy, thanks! Assuming you don’t mind, I may appropriate it (with attribution, of course).

Posted by: Mike Shulman on March 24, 2010 12:08 AM | Permalink | Reply to this

### Re: Stack Semantics

Perhaps this analogy could be massaged to become more of an answer to John’s question. Saying that in a 2-topos, sets “play the role of” the truth values in a 1-topos, is like saying that the 2-dimensional faces of a tetrahedron “play the role” that the edges of a triangle did. Whereas what I’m saying is that a triangle already has a 2-dimensional face, in addition to its 1-dimensional edges. We don’t need a tetrahedron in order to have 2-dimensional faces that are analogous to 1-dimensional faces in some ways. The difference is rather that a tetrahedron, in addition to edges and faces, has a 3-dimensional solid face as well.

Posted by: Mike Shulman on March 24, 2010 12:14 AM | Permalink | Reply to this

### Re: Stack Semantics

Or are you saying that characteristic functions of sets are valued in truth values?

Posted by: John Baez on March 23, 2010 7:14 AM | Permalink | Reply to this

### Re: Stack Semantics

While you’re here, let’s see if we can tease out the difference between taking the truth values of categorified logic to be sets and keeping the truth values as {F, T}.

Mike mentioned and gave a link to this difference here.

I thought from the Baez-Dolan perspective, something like type theory spontaneously emerges by categorification.

So 0-logic is propositional logic: we have a collection of symbols and axioms. In a model symbols take values in truth values, satisfying the axioms. Models form a set.

1-logic is typed predicate logic: we have a collection of types, and typed relations + axioms. In a model types are assigned sets, relations are assigned subsets of the corresponding sets. Models form a category.

2-logic: we have a collection of metatypes, a collection of types which are metatyped, and a collection of typed relations + axioms. In a model, metatypes are assigned categories, types are assigned presheaves over the corresponding product of categories, and typed relations are then subsets.

Modal logic is a form of 2-logic, with a single metatype, the category of worlds. Types are presheaves over these worlds (cf. Awodey and Kishida).

But we can also have a single metatype, $C$, and then types as presheaves on $C \times C$, i.e., types dependent on pairs of objects in $C$.
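To make the 1-logic level of this hierarchy concrete, here is a minimal illustrative theory (my example, not from the comment): one type, one binary relation, one axiom.

```latex
% Theory: one type X, one relation E(x,y), one axiom (symmetry):
\forall x{:}X.\ \forall y{:}X.\ \big( E(x,y) \Rightarrow E(y,x) \big)
% A model M assigns a set M(X) and a subset M(E) \subseteq M(X) \times M(X)
% satisfying the axiom; models and their homomorphisms form a category.
```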

Posted by: David Corfield on March 23, 2010 7:17 AM | Permalink | Reply to this

### Re: Stack Semantics

That sounds like almost exactly what I mean by 2-logic. I prefer to say “1-type, 0-type, (-1)-type” instead of “metatype, type, relation” since then the pattern is clearer. To be more precise, relations are dependent (-1)-types (non-dependent ones merely being truth values), and in 2-logic as you describe it, the “types” are what I would call dependent 0-types (dependent on a 1-type).

What I was trying to get at in the slides John referred to was that n-types don’t have to depend only on (n+1)-types; they can already depend on other n-types. For (-1)-types this isn’t interesting, but for 0-types it is. My use of “analogous” was meant to refer to the table on the previous slide, which summarized a bit of the Curry-Howard correspondence between logical operations on dependent (-1)-types and set/type-theoretic operations on dependent 0-types. That is, we don’t need 2-logic in order to “treat sets as being like truth values” — the Curry-Howard analogy already does that for us. (I don’t like to call Curry-Howard an “isomorphism” since in my mind, there is a difference between propositions (i.e. (-1)-types) and types (i.e. 0-types); there’s just an analogy, plus we can use the latter to describe proofs of the former.)
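The analogy in question is the standard Curry–Howard dictionary between operations on propositions and operations on sets/types; a commonly cited fragment (this reconstruction may differ from the exact table on the slides):

```latex
\begin{array}{ll}
\text{(-1)-types (propositions)} & \text{0-types (sets)} \\
\hline
\top \ /\ \bot                   & 1 \ /\ 0 \\
\varphi \wedge \psi              & A \times B \\
\varphi \vee \psi                & A + B \\
\varphi \Rightarrow \psi         & B^{A} \\
\exists x{:}A.\ \varphi(x)       & \textstyle\sum_{x:A} B(x) \\
\forall x{:}A.\ \varphi(x)       & \textstyle\prod_{x:A} B(x)
\end{array}
```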

Posted by: Mike Shulman on March 23, 2010 7:26 AM | Permalink | Reply to this

### Re: Stack Semantics

I note you have slides for category theorists who don’t know a lot about categorial logic, and there seem to be plenty of papers/notes etc. running in this direction. However, there is precious little running in the opposite direction, i.e., material for people who are happy with logic and/or material set theory and need to acquire enough category theory to understand categorial logic.

Meanwhile I am finding your paper very interesting so far (but as yet I am only on page 4).

Posted by: Roger Witte on March 23, 2010 7:16 AM | Permalink | Reply to this

### Re: Stack Semantics

Of course there are several books on topos theory and categorical logic, but I’m guessing you would like something shorter in the nature of an overview. Have you tried Todd’s exposition of ETCS?

http://ncatlab.org/nlab/show/Trimble+on+ETCS+I

I’m sure there are other things to be mentioned too.

Posted by: Mike Shulman on March 23, 2010 7:23 AM | Permalink | Reply to this

### Re: Stack Semantics

Mike,

Yes, I read Todd’s series on ETCS (and BTW I have now read your paper on Stack Semantics and your paper on Set Theoretical foundations for Category Theory. Both were very good reading!).

However my comment wasn’t asking for information; it was more about considering how to ease a transition in Mathematics to the new perspective. Category theorists are likely to be familiar with the pervasiveness of categories in Maths, and the problems that size presents for them. However, the challenge is how to persuade set theorists and other kinds of mathematicians that the categorial perspective has something to offer them.

In fact I think that this is an instance of a more general problem:

In general, the specialists in X theory tend to prefer talking about Xs amongst themselves. By doing so they can avoid having to deal with some of the more obvious facts about Xs and get to ‘the cutting edge’ of what needs to be discovered about Xs. However, this creates cultural gaps between X theorists and Y theorists (and, sometimes, there are important relationships between Xs and Ys waiting to be discovered).

I am fairly sure there is something useful to be had from debates between category theorists and set theorists, and most category theorists seem to agree. But how do we encourage the set theorists to join in the debate?

Posted by: Roger Witte on March 23, 2010 1:40 PM | Permalink | Reply to this

### Re: Stack Semantics

However my comment wasn’t asking for information; it was more about considering how to ease a transition in Mathematics to the new perspective.

Ah, I misunderstood, sorry. I think you’ve put your finger on one of the serious challenges for structural/categorial set theory. I’m hoping to be able to help make a difference here, but part of the problem is, as you say, that I’m so steeped in categorial perspectives that it always takes an effort for me to imagine the other ways in which someone might think.

This isn’t exactly the same thing, but what you say reminds me of it: at the ASL conference, one of the speakers began by discussing the Axiom of Constructibility $V=L$ (caveat for the casual reader: this has no relation to constructive mathematics) as being a very powerful axiom which settles almost any question, but which is false and settles many of these questions in the wrong way — thus we need to look for alternate axioms to settle important questions (such as large cardinal properties). My mind was momentarily boggled that someone would use a word like “false” in reference to an axiom which is provably consistent, and which is thus satisfied in some models and not in others. I’d read that many material set theorists are “platonists” about there being a “real” universe of material sets out there, but I’d never actually heard anyone explicitly say so before.

Anyway, I’m hoping that the stack semantics will help facilitate the interaction of these areas, since my reading of the literature suggests that the boundedness of all quantifiers in the internal logic has been a major stumbling block in the past. At least, many of the arguments in print between Mac Lane, Lawvere, etc. and material set theorists have seemed to focus on the strength of the theories involved (e.g. BZ versus ZF) rather than on the real question of categorial/structural points of view versus material ones. But removing one stumbling block is certainly not the same as accomplishing the goal.

I think one promising avenue is to work on giving category-theoretic formulations of some of the things that material set theorists are most interested in nowadays, such as large cardinal axioms. For instance, I find Blass’ theorem that there exists a measurable cardinal iff there exists a nontrivial exact endofunctor of $Set$ very suggestive. As far as I know, this related question is still open, although JDH has told me that he and a friend are thinking about it.

Another part of the problem is that material set theorists are so wedded to the axiom of choice, which trivializes so many topos-theoretic notions. But I really don’t have any ideas for what to do about that.

Posted by: Mike Shulman on March 24, 2010 12:04 AM | Permalink | PGP Sig | Reply to this

### Re: Stack Semantics

Slide on page 7: do you really mean that BZ only reaches up to $\aleph_\omega$, or up to $V_{\omega+\omega}$? I suppose what I am asking is: how do you know that $\aleph_\omega$ is in $V_{\omega+\omega}$?

Posted by: Andrej Bauer on March 23, 2010 7:20 AM | Permalink | Reply to this

### Re: Stack Semantics

Of course $\aleph_\omega$ needn’t itself be in $V_{\omega+\omega}$ (which models BZ); for instance if GCH holds then it isn’t. I really meant cardinal numbers equal to or greater than $\aleph_\omega$ are unreachable, not “strictly greater than”. Of course $V_{\omega+\omega}$ is the more precise statement, with $\beth_\omega$ in between, but I chose to mention only $\aleph_\omega$ because I thought it would be more familiar to more people, and the people who are familiar with Beths and the V hierarchy should be able to figure out that that’s what I mean.
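For readers less familiar with the beth hierarchy, the standard facts behind this comparison are:

```latex
\beth_0 = \aleph_0, \qquad
\beth_{\alpha+1} = 2^{\beth_\alpha}, \qquad
\beth_\lambda = \sup_{\alpha < \lambda} \beth_\alpha \ \text{ for limit } \lambda,
% and for the cumulative hierarchy:
|V_{\omega+n}| = \beth_n, \qquad
|V_{\omega+\omega}| = \beth_\omega, \qquad
\aleph_\omega \le \beth_\omega \ \text{ (with equality under GCH)}.
```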

Posted by: Mike Shulman on March 23, 2010 7:22 AM | Permalink | Reply to this

### Re: Stack Semantics

Isn’t your counting of “$n$-types” curiously off by one, as here, on slide 21?

An ordinary type should be a 1-type, not a 0-type, no?

Generally I suppose in an $(\infty,1)$-topos we want to say that an object is an $n$-type (in the categorical semantics of the topos) precisely if it is $(n-1)$-truncated.

Is that right (up to a different opinion on where to start counting, possibly)?

Posted by: Urs Schreiber on March 24, 2010 1:29 AM | Permalink | Reply to this

### Re: Stack Semantics

Generally I suppose in an $(\infty,1)$-topos we want to say that an object is an $n$-type (in the categorical semantics of the topos) precisely if it is $(n−1)$-truncated.

Yep. And you seem there to have answered your own question about why I start the numbering at a different place: for me an object is an $n$-type precisely when it is $n$-truncated, and thus precisely when it acts like an $n$-groupoid in the internal logic. There is then also a convenient, but coincidental, dovetailing with the terminology of homotopy n-type, in which the “homotopy” is also sometimes omitted; thus a topological space is an $n$-type (in the categorical semantics of the $(\infty,1)$-topos $Top$) precisely when it is an $n$-type (in the sense of classical homotopy theory).

(Although, to be completely precise, an $n$-truncated object of an $(\infty,1)$-topos should actually be an $(n,0)$-type, since all such objects are groupoidal.)
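In tabular form, the numbering convention being defended here is (my summary of the two comments above):

```latex
\begin{array}{lll}
\text{object is an} & \text{truncation condition} & \text{internal logic reads it as} \\
\hline
(-1)\text{-type} & (-1)\text{-truncated (subterminal)} & \text{truth value / proposition} \\
0\text{-type}    & 0\text{-truncated (discrete)}       & \text{set} \\
1\text{-type}    & 1\text{-truncated}                  & \text{groupoid} \\
n\text{-type}    & n\text{-truncated}                  & n\text{-groupoid}
\end{array}
```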

I also think it’s a bit misleading to call 0-types “ordinary types.” Those are the types you get for the “ordinary” 1-categorical semantics, which corresponds to fully extensional type theory. But fully intensional type theory is also well-studied by type theorists, and it has natural semantics in $(\infty,1)$-categories, where all types are $\infty$-types. And in between there is also “$n$-extensional type theory” which has semantics in $(n,1)$-categories, although those haven’t really been studied much by type theorists.

Posted by: Mike Shulman on March 24, 2010 3:21 AM | Permalink | PGP Sig | Reply to this

### Re: Stack Semantics

for me an object is an n-type precisely when it is n-truncated, and thus precisely when it acts like an n-groupoid in the internal logic.

Not that we should spend much more time on it, but while I understand that this makes the counting easier, it has the unfortunate side effect that an ordinary type is a 0-type, where the general convention is (or seems to be or should be or would hopefully be) that an ordinary thing is a 1-thing.

I also think it’s a bit misleading to call 0-types “ordinary types.” Those are the types you get for the “ordinary” 1-categorical semantics, […]

Hm, that seems to make it non-misleading then.

Sorry, I don’t know why we always run into such meta-discussion. I’ll try to stop it. :-)

But fully intensional type theory is also well-studied by type theorists, and it has natural semantics in (∞,1)-categories, where all types are ∞-types.

Just so that I am sure I am following the language: Does saying “it has natural semantics in …” mean that you start with an abstractly defined type theory and then notice that it is the categorical semantics – in the sense of categorical semantics – of some category (or $(\infty,1)$-category or the like)?

Posted by: Urs Schreiber on March 24, 2010 2:10 PM | Permalink | Reply to this

### Re: Stack Semantics

I also think it’s a bit misleading to call 0-types “ordinary types.” Those are the types you get for the “ordinary” 1-categorical semantics, […]

Hm, that seems to make it non-misleading then.

That’s why I put “ordinary” in quotes in the second sentence. My point is that 1-categorical semantics are not the “ordinary” semantics. They are just one particular kind of semantics, which 1-category theorists care about more, but which intensional type theorists may find degenerate and uninteresting.

Does saying “it has natural semantics in …” mean that you start with an abstractly defined type theory and then notice that it is the categorical semantics – in the sense of categorical semantics – of some category (or (∞,1)-category or the like)?

Not quite, it means that any (X-)category has an internal logic that is an (X-)type theory, and conversely one can talk about models of any (X-)type theory in an (X-)category. An X-type theory may not be, literally, the categorical semantics of some particular X-category, although usually it will have a syntactic category which is an X-category “freely generated” by a model of that X-type theory.

Posted by: Mike Shulman on March 24, 2010 4:00 PM | Permalink | Reply to this

### Re: Stack Semantics

it means that any (X-)category has an internal logic that is an (X-)type theory, and conversely one can talk about models of any (X-)type theory in an (X-)category.

Maybe I still don’t know what “a type theory” is and what a model for it is.

In the entry on type theory it says just

Given a category $\mathcal{C}$, we may speak about its categorical semantics in terms of type theory.

This “speaking in terms of type theory” I now understand, to some extent at least. But what is “a type theory” as an object that I can manipulate?

Posted by: Urs Schreiber on March 24, 2010 6:10 PM | Permalink | Reply to this

### Re: Stack Semantics

First of all, you should understand that we have an adjunction between type theories and (higher) categories, not an equivalence or even an inclusion. Every type theory $T$ gives rise to a category $Con(T)$ of contexts, and every category $C$ has an internal language $Lan(C)$ which is a type theory. We have a functor $C \to Con(Lan(C))$ and an interpretation $Lan(Con(T)) \to T$, but these may not be invertible.

The reason that nobody is coming out and telling you

Definition. A type theory is …

is because, unlike the definition of category, this has never been fixed in the literature. This may be a good thing! After all, the fixed definition of category is not good enough for all purposes, which is why we need to move to higher categories. When discussing specific theorems about type theories, of course you have to define what it means to be one of the type theories that the theorem is about, but I don't know that anybody has ever claimed to have written down what a type theory is once and for all.

Still, you ought to be able to recognise a type theory when you see one. So keeping in mind that this won't cover all of the possibilities, let me say it like this: A type theory is a collection of inductive rules which specify:

• what a context is,
• what a type in any given context is,
• what a term of any given type in any given context is,
• what a proposition in any given context is,
• and whether any given proposition in any given context is true.

This is definitely a $1$-truncated notion of type theory; you can see that I have Mike's $(-1)$-types (the propositions) and his $0$-types (the types), but no more. Conversely, there are type theories in which there are no propositions as such (or rather, the types play the role of propositions as well).

(And yes, it also seems to me that these are numbered wrong. While the native types of type theory might be interpreted as $0$-groupoids or as $\infty$-groupoids, they don't usually get interpreted as $1$-groupoids, even though Mike's $1$-types are. It didn't bother me so much at first, because I thought that Mike was numbering them by codimension; the only problem is that this would lead to an explosion of negative numbers in higher theories. But numbering them this way by dimension doesn't make any of my intuitive understandings of ‘type’ match Mike's notion of $1$-type, although I admit that the connection with homotopy theory makes sense.)
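As a concrete instance of such a collection of inductive rules, here is a tiny fragment (contexts and function types only; an illustrative sketch, not a complete theory):

```latex
% Context formation: the empty context, and context extension by a type.
\frac{}{\ \diamond\ \mathsf{ctx}\ }
\qquad
\frac{\Gamma\ \mathsf{ctx} \qquad \Gamma \vdash A\ \mathsf{type}}
     {\Gamma,\, x{:}A\ \mathsf{ctx}}
% Type formation, and term formation (abstraction and application):
\qquad
\frac{\Gamma \vdash A\ \mathsf{type} \qquad \Gamma \vdash B\ \mathsf{type}}
     {\Gamma \vdash A \to B\ \mathsf{type}}
\qquad
\frac{\Gamma,\, x{:}A \vdash t : B}
     {\Gamma \vdash \lambda x.\,t : A \to B}
\qquad
\frac{\Gamma \vdash f : A \to B \qquad \Gamma \vdash a : A}
     {\Gamma \vdash f\,a : B}
```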

Posted by: Toby Bartels on March 24, 2010 10:52 PM | Permalink | Reply to this

### Re: Stack Semantics

So keeping in mind that this won’t cover all of the possibilities, let me say it like this: A type theory is a collection of inductive rules which specify:

I tried to write something similar at syntax of type theory

This is definitely a 1-truncated notion of type theory

I think just the fact that you have only types and propositions doesn’t make it 1-truncated unless all the types have equality as a proposition.

Posted by: Mike Shulman on March 25, 2010 12:13 AM | Permalink | Reply to this

### Re: Stack Semantics

The reason that nobody is coming out and telling you

Definition. A type theory is…

is because, unlike the definition of category, this has never been fixed in the literature. This may be a good thing!

This is probably a good thing overall — it’s nice to keep Type Theory as an open-ended term, like, say, “mathematical object”, since we’re at a stage where new examples are emerging that continue to break new imaginative ground.

However, it has some unfortunate knock-on effects! The biggest one that I’ve encountered is in the generality and re-usability of theorems. If Alice and Bob have proven nice theorems about the type theory I’m working in, then I’d like to be able to use those theorems! But as it stands, they’ve usually used slightly different presentations of their type theories (and my own preferred one is probably slightly different again); and there’s often not a good common bigger picture that both their theorems literally fit into.

So in practice, I’m able to benefit in my own work from ‘knowing’ that their results hold in the theory I’m considering, and why; but if I actually need to apply their theorems, I may well have to port the relevant parts of the proofs over to my system by hand.

There are some definitions out there that capture reasonably wide classes of type theories. The Generalised Algebraic Theories of Cartmell (my personal favourite) quite nicely cover dependent TT’s up to a certain point — but unfortunately they don’t cover constructors that are parametric over type (meta-)variables, e.g. constructors for product types.

Posted by: Peter LeFanu Lumsdaine on March 27, 2010 8:06 PM | Permalink | Reply to this

### Re: Stack Semantics

I don’t know if you and Mike are aware how difficult it is, for outsiders, to figure out what many parts of this kind of discussion mean. Which of them are mathematical statements, and which are more a kind of semi-formalized speaking or the like? I find it pretty confusing. (I suppose the reason for this is that type theory is a foundational language, which, just as naive set theory, I have to assume we all understand by grace? And only with it accepted and understood do we build fully formal mathematics on top of it?)

You write:

Every type theory $T$ gives rise to a category $Con(T)$ of contexts, and every category $C$ has an internal language $Lan(C)$ which is a type theory.

I would enjoy seeing the precise definition – to the extent that this exists – of $Con$ and $Lan$.

I think I understand what the categorical semantics of a category $C$ is. Is that the same as $Lan(C)$?

[…] they don’t usually get interpreted as 1-groupoids, even though Mike’s 1-types are.

Yes, it’s the same counting problem that goes back to the fact that 1-categories should have been called 2-sets. And it is of the same style as 1-stacks really being 2-sheaves, and similar. We have to live with the fact that categorical counting conventions will differ by $\pm 1$ from time to time. But as long as a thing is called a 1-thing, I will be able to deduce for myself what’s going on. When things are no longer 1-things, the terminology ceases to be a helpful guide.

Posted by: Urs Schreiber on March 24, 2010 11:24 PM | Permalink | Reply to this

### Re: Stack Semantics

I would enjoy seeing the precise definition – to the extent that this exists – of $Con$ and $Lan$.

I forgot to link to the Lab! For $Con$: category of contexts; for $Lan$: internal language.

Posted by: Toby Bartels on March 24, 2010 11:53 PM | Permalink | Reply to this

### Re: Stack Semantics

I forgot to link to the Lab! For Con: category of contexts; for Lan: internal language.

Hm. I know these pages. Am I expected to deduce from what is said on these pages the definition and details of the adjunction $(Con \dashv Lan)$ that you mentioned?

I am wondering if it would be possible to have a page on this stuff in more traditional math style.

• Definition

• Example

• Proposition

• next Definition

I am struggling with big chunks of text and never know where to get a foothold.

For instance the link to Context that you provided goes to a section that starts as follows:

Category of contexts The morphisms between contexts are substitutions, or interpretations. Such a morphism from the context $\Gamma$ to the context $\Delta$ consists of a way of fulfilling the assumptions required by $\Delta$ by appropriately interpreting those provided by $\Gamma$, generally by substituting terms available in $\Gamma$ for variables needed in $\Delta$ and proving whatever is necessary from the assumptions at hand.

I think I have read this four times now. I am not sure what to make of it!

The discussion provided after this paragraph does give me some kind of idea. But I feel I am far from knowing what the exact rules of the game are.

Posted by: Urs Schreiber on March 25, 2010 12:10 AM | Permalink | Reply to this

### Re: Stack Semantics

Urs wrote:

But I feel I am far from knowing what the exact rules of the game are.

One reason is that there are many games here, with many rules! And rather than describing any one of them, Toby is trying to describe their common features. For example, he notes that a type theory involves a collection of inductive rules which specify:

• what a context is,
• what a type in any given context is,
• what a term of any given type in any given context is,
• what a proposition in any given context is,
• and whether any given proposition in any given context is true.
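As a concrete illustration of such a list of inductive rules, here is a minimal sketch in Python of a toy simply typed lambda calculus – my own illustrative example, not any particular type theory from this discussion – with contexts, types, terms, and a typing judgment:

```python
# Types: the base type "unit" and function types ("fn", A, B).
UNIT = ("unit",)

def fn(a, b):
    return ("fn", a, b)

# A context is a list of (variable, type) assumptions.
# Terms: ("var", x), ("lam", x, A, body), ("app", f, s), ("star",).

def type_of(ctx, term):
    """The typing judgment  Gamma |- t : A, defined by induction
    on terms.  Returns the type of `term` in `ctx`, or None."""
    tag = term[0]
    if tag == "var":
        return dict(ctx).get(term[1])
    if tag == "star":
        return UNIT
    if tag == "lam":
        _, x, a, body = term
        b = type_of(ctx + [(x, a)], body)
        return fn(a, b) if b is not None else None
    if tag == "app":
        _, f, s = term
        tf, ts = type_of(ctx, f), type_of(ctx, s)
        if tf is not None and tf[0] == "fn" and tf[1] == ts:
            return tf[2]
        return None
    return None
```

For example, `type_of([], ("lam", "x", UNIT, ("var", "x")))` returns `("fn", ("unit",), ("unit",))`: the identity on the unit type is well-typed in the empty context.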

But for most mortals it is necessary to see an example of a type theory before trying to understand them in general. This is especially true because the general concept has not yet been formalized. As Toby points out:

The reason that nobody is coming out and telling you

Definition. A type theory is

is because, unlike the definition of category, this has never been fixed in the literature.

So, what’s the easiest example of a type theory to start with? And where to read about it? I’m having trouble finding a good online explanation of some very simple sort of type theory. Maybe Toby or Mike knows one? Personally I found Crole’s book Categories for Types quite approachable.

Ah, I see now that Toby has given a simple example here. Great!

But here are some hand-wavy remarks that may help beginners understand what he’s up to:

A type theory is a way of describing a kind of logic where there are various ‘types’ of things: e.g. propositions, numbers, functions from numbers to numbers, truth values, and so on. This is better than silly approaches to logic where everything is the same type of thing, e.g. where everything is a set. It should make you think of computer programming systems where data comes in various ‘types’ like INTEGER, BOOLEAN, and so on. And indeed, type theory is the way of doing logic that many computer scientists like best.

Often there’s a way to take a type theory and turn it into a category where the types are objects. There are lots of theorems about this, in various examples. So, type theory is the way of doing logic that many category theorists like best. Indeed, to some extent you can even think of it as an alternative viewpoint on category theory: category theory through logicians’ eyes, or programmers’ eyes. And unsurprisingly, it has nice $n$-categorical generalizations…

Posted by: John Baez on March 25, 2010 5:44 AM | Permalink | Reply to this

### Re: Stack Semantics

One reason is that there are many games here, with many rules!

Fine, let’s look at one particular set of rules then, that makes the adjunction that Toby described become a precise statement. Choose your favorite one. Which is it?

But for most mortals it is necessary to see an example of a type theory before trying to understand them in general.

What helped me a lot was unwinding what the categorical semantics of any given category is, the way we (Mike, Toby, me) jointly wrote it down here. This is currently what I understand precisely. My next goal is to understand fully precisely the notion of identity type. I have a pretty good rough idea. What I need is a pretty good precise idea. I know what path objects in a category of fibrant objects are, and this is a variation on that theme.

By the way, given that the “types are weak $\omega$-groupoids” statement is really essentially one about how to understand the path $\infty$-groupoid constructed in a homotopical category with a notion of path space object, I’d be surprised if we couldn’t, with well-bounded effort, produce the analogous statement “types are internal Kan complexes”. I think this is well known in terms of homotopical categories and path objects. So I would like to see precisely how type theory relates to that.

But here are some hand-wavy remarks […]

I have seen these. Now I want to understand it in detail.

Or maybe I did, now I am actually running out of time…

Posted by: Urs Schreiber on March 25, 2010 6:26 PM | Permalink | Reply to this

### types as oo-groupoids

I wrote:

By the way, given that the “types are weak ω-groupoids” statement is really essentially one about how to understand the path ∞-groupoid constructed in a homotopical category with a notion of path space object, I’d be surprised if we couldn’t, with well-bounded effort, produce the analogous statement “types are internal Kan complexes”. I think this is well known in terms of homotopical categories and path objects. So I would like to see precisely how type theory relates to that.

Maybe I should indicate in more detail what I am thinking of, in case it is not clear:

In Types are weak $\omega$-groupoids the main statement says that in a category that satisfies essentially the axioms of a category of fibrant objects, we can take the path space objects of any object and arrange them into an internal globular weak $\omega$-groupoid – the internal fundamental $\infty$-groupoid of that object – all of whose source and target maps are weak equivalences.

But this process, of arranging all the path objects of a given object in a homotopical category into an internal $\infty$-groupoid all of whose source and target maps are weak equivalences, is well known if instead of globular models we use simplicial models for $\infty$-groupoids.

I have just added a summary of some standard results along these lines to nLab: simplicial model category: simplicial resolutions. While I’d have to think about how to pass from one model to the other, it is clear that the meaning of these two constructions is the same: the classical literature on simplicial model categories constructs the internal fundamental $\infty$-groupoids in their simplicial incarnation instead of their globular one.

The other main statement in Types are weak $\omega$-groupoids is that the context categories of certain type theories with identity types naturally carry the required homotopical structure, with the identity types providing the path objects. And the fibrations are something like projections out of dependent sum types.

I find it noteworthy that in the article the authors give two versions of their main proof: one non-rigorous, in type-theoretic language; the other rigorous, in the language of homotopical category theory. (!)

It seems to me, from the point of view that this article suggests, that if I take “type theory with identity types” to mean precisely “categorical semantics of $\infty$-toposes” – a definition that I completely understand and can write down precisely on half a page – I am pretty much on the safe side.

Posted by: Urs Schreiber on March 25, 2010 8:58 PM | Permalink | Reply to this

### Re: types as oo-groupoids

In Types are weak ω-groupoids the main statement says that in a category that satisfies essentially the axioms of a category of fibrant objects,

I don’t think that’s right. What they have is much closer to a weak factorization system than to a category of fibrant objects. A category of fibrant objects doesn’t have any analogue of their “I-maps,” i.e. the acyclic cofibrations, which satisfy the left lifting property relative to the fibrations.

Any weak factorization system for which all objects are “fibrant” should satisfy their axioms except for the “Frobenius” condition. I think this condition is a really subtle one that people ignore too much in talking about this work: it says that the pullback of an acyclic cofibration along a fibration is again an acyclic cofibration. Right properness for a model category, or fibrancy of all objects, says that the pullback of a weak equivalence along a fibration is a weak equivalence, but nothing in general says that the pullback of a cofibration is a cofibration. It’s true in simplicial sets, since there the cofibrations are the monomorphisms, and similarly in injective model structures on simplicial presheaves, but I don’t know of many other examples.

I think you’re right, though, that this construction is morally equivalent to simplicial resolutions. I wonder where the difference comes in that requires different assumptions.

“categorical semantics of ∞-toposes” – a definition that I completely understand and can write down precisely on half a page

Really? What is it??

In particular, how can you define any kind of “categorical semantics” before you know what the type theory is that you are talking about semantics of? Perhaps you mean the “internal language” of an $\infty$-topos… but then I would still like to see that half-page definition!

if I take “type theory with identity types” to mean precisely “categorical semantics of ∞-toposes”

I don’t think that’s right either, even if I knew what the internal language of an $\infty$-topos was. Type theory with identity types is more general; I don’t see why semantics for it valued in $\infty$-toposes should be complete.

Posted by: Mike Shulman on March 25, 2010 9:19 PM | Permalink | Reply to this

### Re: types as oo-groupoids

What they have is much closer to a weak factorization system than to a category of fibrant objects.

Yeah, probably closer. I took care to put in an “essentially” in my statement. Maybe I should have used something weaker.

But I don’t believe that the precise extra structure that we allow ourselves to have on the homotopical category should play much of a role. This internal fundamental $\infty$-groupoid should exist under rather general conditions. Of course details may depend.

“categorical semantics of $(\infty,1)$-toposes” – a definition that I completely understand and can write down precisely on half a page

Really? What is it??

Do I still not use the language correctly? What I mean is that everything that is stated at categorical semantics of type-theory I can straightforwardly generalize to $\infty$-category theory.

Posted by: Urs Schreiber on March 25, 2010 9:39 PM | Permalink | Reply to this

### Re: types as oo-groupoids

But I don’t believe that the precise extra structure that we allow ourselves to have on the homotopical category should play much of a role. This internal fundamental ∞-groupoid should exist under rather general conditions.

Well, this construction isn’t an $(\infty,1)$-categorical construction, or rather at the level of $(\infty,1)$-categories it’s trivial, since all the domain+codomain maps are weak equivalences, so in the $(\infty,1)$-category presented by our homotopical category this “internal $\infty$-groupoid” is just a discrete one. What’s interesting is that in some particular strict model for an $(\infty,1)$-category, like a model category or something weaker, we can construct an internal $\infty$-groupoid already at the point-set level. And I think that very well might depend on the chosen strict model. For instance, if you just have a category with weak equivalences, no factorizations at all, it seems to me you’re sunk.

Do I still not use the language correctly?

Sorry if I was a bit brusque there, I was rushing off and in a hurry. I also didn’t realize that the section in the type theory page was using “categorical semantics” to mean what I think should really be called the “internal language” of a category, which is a particular type theory which happens to have semantics in that category, in a tautological way. The general phrase “categorical semantics” refers to the process of interpreting type theories (which may be given just as type theories, rather than as the internal language of a category) in categories. I’ve fixed it for now by changing the uses of “categorical semantics” to “internal language,” but possibly it would be better to change it to talk about categorical semantics more generally, since we also have a page called internal logic.

What I mean is that everything that is stated at categorical semantics of type-theory I can straightforwardly generalize to ∞-category theory.

I’m not convinced of that. We can certainly interpret types as objects and terms as morphisms. But when we get to type constructors, it’s not so clear. The axioms for a product type, for instance, say that $\pi_1(\langle a,b\rangle) = a$ and so on. But what does this “=” mean in an $(\infty,1)$-category? It doesn’t make sense to talk about two morphisms being equal, only equivalent. Since we’re using identity types to represent path objects (which is also definitely something new in the higher world, not just a straightforward generalization of the 1-categorical world), instead of $\pi_1(\langle a,b\rangle) = a$ we can give ourselves a term of type $Id(\pi_1(\langle a,b\rangle),a)$. But that equivalence should also be coherent. The universal property of a cartesian product in an $(\infty,1)$-category says not just that every pair of morphisms factors through the product, but that every pair of 2-cells between morphisms also factors through, and every pair of 3-cells, and so on. So it seems like maybe we need infinitely many axioms. And we need to figure out how to make precise the semantics of all of this in an $(\infty,1)$-category for some particular model of the latter. And things get even worse when you try to work out other type constructors; I still haven’t figured out how to do quotient types, which is one that I really want to have.
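The contrast between the two kinds of equality can be sketched in Lean 4 syntax (a generic illustration on my part, not a rendering of any specific semantics under discussion):

```lean
-- In the 1-categorical semantics, the beta rule for products holds
-- definitionally, so reflexivity proves it:
example {A B : Type} (a : A) (b : B) : (a, b).1 = a := rfl

-- In the higher-dimensional reading, one would instead postulate a
-- *term* of the identity type,
--   beta : Id ((a, b).1) a
-- together with coherence data relating it to 2-cells, 3-cells, etc.
```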

I’m pretty sure we can do all of this; I’m just saying that it’s not really entirely straightforward; it requires a fair amount of work and thought.

Posted by: Mike Shulman on March 26, 2010 6:02 AM | Permalink | Reply to this

### Re: types as oo-groupoids

But I don’t believe that the precise extra structure that we allow ourselves to have on the homotopical category should play much of a role. This internal fundamental ∞-groupoid should exist under rather general conditions.

Well, this construction isn’t an (∞,1)-categorical construction, or rather at the level of (∞,1)-categories it’s trivial

Sure. But we can say “path object” in any category with weak equivalences and products. I’d think this is the important ingredient: that identity types behave as path objects.

Or do you think the weak factorization system that we get from type theory as in the article is also an intrinsic aspect of the type theory?

I also didn’t realize that the section in the type theory page was using “categorical semantics” to mean what I think should really be called the “internal language” of a category, which is a particular type theory which happens to have semantics in that category, in a tautological way. The general phrase “categorical semantics” refers to the process of interpreting type theories (which may be given just as type theories, rather than as the internal language of a category) in categories.

Thanks, that helps. That also clarifies a few other exchanges we had here about my usage of terminology. All right, so I’ll know how to say it correctly from now on.

What I mean is that everything that is stated at categorical semantics of type-theory I can straightforwardly generalize to ∞-category theory.

I’m not convinced of that. We can certainly interpret types as objects and terms as morphisms. But when we get to type constructors, it’s not so clear. The axioms for a product type, for instance, say that $\pi_1(\langle a,b\rangle) = a$ and so on. But what does this “=” mean in an (∞,1)-category?

I am not sure why you say this. The product in an $(\infty,1)$-category is well-defined up to equivalence, of course. I must be missing what you are after here now. I was thinking: the whole point of that “type-theoretic internal language of a category” is that it is precisely the category theory of that category, only phrased in different words. All the standard ingredients – objects, morphisms, products, pullbacks, left and right adjoints, etc. – we simply call by different names, indicating that there is another but equivalent way to think about them. To this extent, this “internal language” is the category theory of the given category. Hence to the extent that the latter generalizes, the former does.

But I see that there is a general subtle point here, as I think you are pointing out: when we talk about identity types, we are not exactly talking about something akin to the internal language of an $(\infty,1)$-category, but about a specific presentation of that $(\infty,1)$-category.

But that equivalence should also be coherent. The universal property of a cartesian product in an (∞,1)-category says not just that every pair of morphisms factors through the product, but that every pair of 2-cells between morphisms also factors through, and every pair of 3-cells, and so on. So it seems like maybe we need infinitely many axioms.

I suppose this tower of higher axioms is the one whose first step is indicated in that statement about “up to propositional equality” on page 1 of the “Types are weak $\omega$-groupoids” discussion?

Maybe I should say more explicitly what kinds of thoughts I have when I see this: my impression is that in type-theoretic language many things are being re-discovered here that were solved in higher category theory before. I feel like we could just jump over this process of re-discovering and simply declare that to the extent that ordinary categories are models for “ordinary” type theory, $(\infty,1)$-categories are models for full-blown type theory (or whatever you say for that) and deduce from that what all the correct terminology is. For instance by declaring that a product type constructor is what corresponds to the cartesian $(\infty,1)$-product.

This is the punchline I draw from the “Types are weak $\omega$-groupoids” article: that type theory with identity types is just another way of specifying certain homotopical categories with some extra structure. But we already know that these are not terribly intrinsic in themselves, but rather a means to present something else that is intrinsic. So let’s short-circuit this re-iteration of a historical process that we already went through before.

And things get even worse when you try to work out other type constructors; I still haven’t figured out how to do quotient types, which is one that I really want to have.

So in the spirit of what I just said I would say: quotient types in the internal language of an $(\infty,1)$-category are $(\infty,1)$-coequalizers. Just another word for that.

But probably all these comments of mine just show that I am still misunderstanding what type theory is really meant to be…

Posted by: Urs Schreiber on March 26, 2010 8:17 AM | Permalink | Reply to this

### Re: types as oo-groupoids

I wrote:

…(∞,1)-coequalizers…

I suppose it’s clear, but just for the record: by this I mean $(\infty,1)$-colimits over a homotopy coherent simplicial diagram.

Posted by: Urs Schreiber on March 26, 2010 9:54 AM | Permalink | Reply to this

### Re: types as oo-groupoids

But we can say “path object” in any category with weak equivalences and products.

We can sort of say it, but without a notion of “fibration” to require of the projection $Path(X) \to X\times X$, the diagonal itself $X\to X\times X$ will be a “path object,” but not produce anything interesting in terms of an internal $\infty$-groupoid. In particular, it won’t “capture the homotopy theory” in the sense that (for good objects $X$ and $Y$) maps $X\to Y$ induce homotopic maps of internal $\infty$-groupoids iff they induce equivalent morphisms in the presented $(\infty,1)$-category. So I’d be inclined to only say “path object” in a category with a notion of fibration, in addition to weak equivalences and products.
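For reference, the standard definition being invoked here: a path object for $X$ is a factorization of the diagonal

$X \overset{\simeq}{\longrightarrow} Path(X) \longrightarrow X\times X$

in which the first map is a weak equivalence and the second a fibration; dropping the fibration condition admits taking $Path(X) = X$ with the diagonal itself as the second map, which is the degenerate case described above.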

Or do you think the weak factorization system that we get from type theory as in the article is also an intrinsic aspect of the type theory?

It’s certainly an intrinsic part of the type theory, but I wouldn’t say that it’s an intrinsic part of the “$(\infty,1)$-category of contexts” of the type theory. Rather it’s structure on the 1-category of contexts that we can use to help us present the $(\infty,1)$-category of contexts, just like for any model-category-like notion. And the “internal $\infty$-groupoid” constructed from an object, in either this way or the simplicial-resolution way, is likewise a tool used in making that presentation tractable, which may or may not be possible in any particular presentation. For the reasons above, I wouldn’t expect it to be particularly helpful unless we have a notion of fibration.

I must be missing what you are after here now. I was thinking: the whole point of that “type theoretic internal language of a category” is that it is precisely the category theory of that category, only phrased in different words.

What I’m after is exactly how we do that rephrasing. Yes, we know what the product and quotient is in an $(\infty,1)$-category, but how do we state the axioms in type theory that correspond to the universal properties of those $(\infty,1)$-categorical things? Declaring that a product type constructor corresponds to the $(\infty,1)$-categorical product is all well and good, but then you have to make explicit exactly how that correspondence works. I don’t think anyone that I’ve talked to about this is reinventing things that already exist; we all know that what’s really going on is an $(\infty,1)$-category. The question is how to reflect the $(\infty,1)$-categorical structures in type theory, not whether to do so.

Posted by: Mike Shulman on March 26, 2010 1:48 PM | Permalink | Reply to this

### Re: types as oo-groupoids

without a notion of “fibration” […] not produce anything interesting in terms of an internal ∞-groupoid.

True. Which brings us back to the category of fibrant objects…

Or do you think the weak factorization system that we get from type theory as in the article is also an intrinsic aspect of the type theory?

[…] it’s structure on the 1-category of contexts that we can use to help us present the (∞,1)-category of contexts, just like for any model-category-like notion.

And as in any model-category-like context, this is just auxiliary structure.

Let’s see where the discussion of this point here came from:

I said, roughly, that this “extra structure on the 1-category of contexts” is essentially that of a category of fibrant objects, which allows us to have good path objects #.

You complained that this doesn’t mention the weak factorization system #.

I replied that this shouldn’t make much of a difference. But maybe I wrongly overstated this and made it sound as if the fibrations, too, made no difference #.

Anyway, after this detour, my impression is we effectively agree here on what’s going on.

Yes, we know what the product and quotient is in an (∞,1)-category, but how do we state the axioms in type theory that correspond to the universal properties of those (∞,1)-categorical things?

Good that we have isolated this point. Here is my question: why would it be of interest to rephrase the well-understood universal properties of $(\infty,1)$-universal constructions in a different language?

Posted by: Urs Schreiber on March 26, 2010 6:00 PM | Permalink | Reply to this

### Re: types as oo-groupoids

Anyway, after this detour, my impression is we effectively agree here on what’s going on.

Yes, if you mean that in general, having a notion of fibration tends to give us a notion of internal $\infty$-groupoid. I still think the difference between a weak factorization system (plus the Frobenius condition) and a category of fibrant objects is important for purposes of getting an algebraic model for this, as is done by Garner-van den Berg and Lumsdaine.

why would it be of interest to rephrase the well-understood universal properties of $(\infty,1)$-universal constructions in a different language?

Potentially, for the same reason that it is of interest to rephrase the well-understood universal properties of 1-categorical constructions in a different language: it may make them easier to work with. There are a number of facts in 1-topos theory which are tedious and fiddly to prove directly, but which can be proven quite easily by way of the internal logic. The stack semantics extends this phenomenon even further.

Posted by: Mike Shulman on March 27, 2010 5:05 AM | Permalink | Reply to this

### Re: Stack Semantics

John wrote:

One reason is that there are many games here, with many rules!

Urs replied:

Fine, let’s look at one particular set of rules then, that makes the adjunction that Toby described become a precise statement. Choose your favorite one. Which is it?

I have a bunch of favorites…

What helped me a lot was unwinding what the categorical semantics of any given category is, the way we (Mike, Toby, me) jointly wrote it down here.

That’s one of my favorite examples. It’s nice and simple, a great place to start. However, it’s a bit too simple to explain some buzzwords that type theorists love, like ‘type constructor’.

Basically, for any kind of category with extra bells and whistles, there will be a story like the one you, Mike and Toby went through.

A nice place to start is monoidal categories. For these we should take your story for categories and enhance it slightly. We need to say something sort of like this… I’ll copy the language from your story.

For any types $A$ and $A'$, there is a tensor product type $A \otimes A'$. This is just a secret way of saying that in a monoidal category we can tensor objects.

For any term $f(x)$ of type $A$ where $x$ is a free variable of type $B$, and any term $f'(x')$ of type $A'$ where $x'$ is a free variable of type $B'$, there is a term $f(x) \otimes f'(x')$ of type $A \otimes A'$.

But now you’ll see that this term has two free variables in it, $x$ and $x'$. So we need to enhance the story for categories a bit more. In that story, we only had one free variable at a time in expressions like this:

$x\colon B \vdash f(x)\colon A\,.$

Now we might have something like this:

$x \colon B, x' \colon B' \vdash f(x) \otimes f'(x') \colon A \otimes A'$

And so on… we also need to deal with all the usual morphisms a monoidal category has, like the associator. These are dealt with using ‘term constructors’. We must also describe all the equations these morphisms satisfy.
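To make the two-variable judgment above concrete, here is a tiny hypothetical sketch in Python (all names invented for illustration, not part of any fixed type theory) of a typing judgment over contexts with several free variables and a tensor term constructor:

```python
def tensor_type(ctx, term):
    """Compute the type of `term` in context `ctx`, or None.
    Terms are ("var", x) and ("tensor", s, t) for  s (x) t."""
    if term[0] == "var":                   # a variable declared in ctx
        return dict(ctx).get(term[1])
    if term[0] == "tensor":                # the term  s (x) t
        ts, tt = tensor_type(ctx, term[1]), tensor_type(ctx, term[2])
        if ts is None or tt is None:
            return None
        return ("tensor", ts, tt)         # its type is  A (x) A'
    return None

# The judgment  x : B, x' : B'  |-  x (x) x'  :  B (x) B'
ctx = [("x", "B"), ("x'", "B'")]
```

Here `tensor_type(ctx, ("tensor", ("var", "x"), ("var", "x'")))` gives `("tensor", "B", "B'")`, mirroring the displayed judgment.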

Mike Stay and I told a more precise version of this story for symmetric monoidal closed categories in our Rosetta paper, here.

A much more commonly told version of this tale concerns cartesian closed categories.

And another commonly told tale concerns topoi.

Now, you may at this point decide that a ‘type theory’ is just a complicated-sounding way of describing a kind of category (or $n$-category, or $(\infty,n)$-category) equipped with extra bells and whistles. And that description certainly covers lots of examples of type theories. But it’s worth noting what’s good about this complicated-sounding approach.

One thing is this: in our ‘usual’ approach to categories with extra bells and whistles, we describe them in a way that relies on a background of logic and set theory. In this type-theoretic approach we aren’t using any of that background: we’re going straight to syntax! In other words, we just describe a bunch of rules for writing down and manipulating strings of symbols. So, we can set up the theory of monoidal categories, or topoi, or whatever, ‘from scratch’ — without assuming that we already agree on what a set is, or whether we are using classical or intuitionistic logic. This is especially good for computer science, since computers haven’t been to school and don’t know what a set is.

So, to some extent type theory looks ‘more complicated’ because it’s doing more.

If you’re not interested in this ‘more’, the subject tends to look like an overly complicated way of doing what other people can do more efficiently. I sometimes get the impression that’s how you feel.

Warning: I am not a type theorist, and actual type theorists are likely to disagree with various things I said.

Posted by: John Baez on March 29, 2010 5:09 PM | Permalink | Reply to this

### Re: Stack Semantics

John wrote in part:

But now you’ll see that this term has two free variables in it, $x$ and $x'$. So we need to enhance the story for categories a bit more.

Actually, the story for categories that you cited is really for categories with all finite products, which allows us to assume that every term has exactly one free variable (rather than possibly fewer or more). You probably missed that this story made these assumptions because it was hidden in the parenthetical comment ‘with various subtleties’.

Posted by: Toby Bartels on March 30, 2010 7:58 AM | Permalink | Reply to this

### Re: Stack Semantics

I don’t know if you and Mike are aware how difficult it is, for outsiders, to figure out what many parts of this kind of discussion mean.

Believe it or not, I am, because I was once in your shoes. And I still am, in some respects; over at the dependent types discussion I was trying to figure out what Neel and Dan meant by “parametricity” and not making much progress. But now I find that even with the parts of it that I feel I understand, it’s very hard to explain them! The lack of general definitions is probably a significant part of the problem. That’s why I’m really glad we’re having this discussion, so we can hopefully hack together an explanation that will make sense to more people.

I think I understand what the categorical semantics of a category C is. Is that the same as Lan(C)?

The way you’ve been using it, I think so. But actually, I think it’s more correct to use “categorical semantics” to refer to the entire adjunction $Con\dashv Lan$, and just call $Lan(C)$ the “internal language of $C$.” In general, “a semantics” is a notion of interpretation or model, so “categorical semantics” means the process or study of models of type theories in categories. A particular model of a type theory $T$ in a category $C$ is a functor $Con(T)\to C$, or equivalently a translation of type theories $T \to Lan(C)$. Thus, there is a tautological model of $Lan(C)$ in $C$, and a tautological model of $T$ in $Con(T)$.
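In symbols, then (writing $TT$ for a hypothetical category of type theories and translations – hypothetical since, as noted above, the general definition of a type theory has not been fixed), the adjunction amounts to a natural bijection

$Cat(Con(T),\, C) \;\cong\; TT(T,\, Lan(C))$

under which the identity of $Con(T)$ corresponds to the tautological model of $T$ in $Con(T)$, and the identity of $Lan(C)$ to the tautological model of $Lan(C)$ in $C$.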

when things are no longer 1-things, the terminology ceases to be a helpful guide.

I can see the point you’re making. However, I think I would get a lot more confused having to remember to add/subtract one to the indexing all the time (and remember when to add and when to subtract). It’s much easier for me to remember that an $n$-type acts like an $n$-category, especially when I start thinking about (1,0)-types and (0,1)-types and so on.

And I still maintain that “a type” is not a 1-type or a 0-type or any kind of $n$-type, rather “a type” is a general notion which includes, as special cases, 0-types and 1-types and $\infty$-types and everything else (even types without any notion of equality). Just because we aren’t as used to thinking about 1-types as we are about 0-types or $\infty$-types doesn’t mean that the word doesn’t include them. (And 1-types have been studied by type theorists, for instance the original groupoid model of Hofmann-Streicher is over 10 years old now.)

Starting from this point of view, certainly a more precise thing to say would be “$n$-truncated type,” or maybe “$n$-extensional type.” Those are kind of long and clunky, and I’d hoped to be able to say “$n$-type” just as an abbreviation for them. But if it is really confusing, then maybe I shouldn’t. I really do want the indexing to be such that a type labeled with $n$ is $n$-truncated, even if the specific phrase used isn’t as convenient as “$n$-type.”

Posted by: Mike Shulman on March 25, 2010 12:38 AM | Permalink | PGP Sig | Reply to this

### Re: Stack Semantics

All this discussion of trying to explain type theory reminds me that one of the nice things about the stack semantics approach is that it means you don’t need to understand any type theory to work with the internal language of a category. The stack semantics version of the internal language is just a structural set theory.

Posted by: Mike Shulman on March 25, 2010 3:40 AM | Permalink | Reply to this

### Re: Stack Semantics

All this discussion of trying to explain type theory reminds me that one of the nice things about the stack semantics approach

Right, I am sorry for pushing the discussion off-topic so much.

I think when I read your stack semantics slides I didn’t understand much of it. This here is something like an attempt to get closer to seeing what all the language means.

Posted by: Urs Schreiber on March 25, 2010 9:48 PM | Permalink | Reply to this

### n-types are n-types

It’s much easier for me to remember that an n-type acts like an n-category, especially when I start thinking about (1,0)-types and (0,1)-types and so on.

And I still maintain that “a type” is not a 1-type or a 0-type or any kind of n-type, rather “a type” is a general notion which includes, as special cases, 0-types and 1-types and ∞-types and everything else

You know, I agree with you now. Thanks for patiently repeating your (good) point.

One cool thing about this (your) convention is that an $n$-type in the categorical semantics of an $(\infty,1)$-topos is then precisely the same as an $n$-type in the topos! Namely: the same as a homotopy $n$-type.

This agreement of the term “$n$-type” across two completely different regions of mathematics is a cute coincidence that one should make use of.

Posted by: Urs Schreiber on March 25, 2010 9:28 PM | Permalink | Reply to this

### Re: n-types are n-types

One cool thing about this (your) convention is that an n-type in the categorical semantics of an (∞,1)-topos is then precisely the same as an n-type in the topos! Namely: the same as a homotopy n-type.

Yes, I believe I pointed that out. (-:

I think I’ll continue using “$n$-type” as a short form, but I like “$n$-intensional type” as a more precise name; thanks Toby!

Posted by: Mike Shulman on March 26, 2010 6:06 AM | Permalink | Reply to this

### Re: n-types are n-types

Yes, I believe I pointed that out.

Hmm, and apparently there I used “categorical semantics” in the sense I’ve now been saying is wrong. Sorry for being sloppy and creating confusion!

Posted by: Mike Shulman on March 26, 2010 6:14 AM | Permalink | Reply to this

### Re: Stack Semantics

I like ‘$n$-extensional type’. The problem for me is that I know two kinds of type theory that I would think of as normal: extensional and intensional. For one of these, an ordinary type is (in your numbering) a $0$-type, while for the other it's an $\infty$-type, but never is it a $1$-type. But ‘$n$-extensional type’ makes it clear that we are discussing the degree of extensionality.

One thing though: I'd make it ‘$n$-intensional type’. Because then the fully intensional types are $\infty$-intensional, while the fully extensional types are $0$-intensional. (Although actually propositions are even less intensional, being $(-1)$-intensional types.) That terminology really makes sense to me, and then it awesomely matches the terminology for homotopy types.

Posted by: Toby Bartels on March 25, 2010 10:34 PM | Permalink | Reply to this

### Re: Stack Semantics

I’d make it ‘n-intensional type’. Because then the fully intensional types, are ∞-intensional, while the fully extensional types are 0-intensional. (Although actually propositions are even less intensional, being (−1)-intensional types.)

And a $(-2)$-type is a true proposition.

That terminology really makes sense to me, and then it awesomely matches the terminology for homotopy types.

And that of your $(-1)$-groupoids and $(-2)$-groupoids, of course.

Which fits the general definition of $n$-truncated object. (This should be cross-linked more with the entries on (-1)-groupoids etc.)

P.S.: …$n$-tensional…

Posted by: Urs Schreiber on March 25, 2010 11:25 PM | Permalink | Reply to this

### Re: Stack Semantics

P.S.: … $n$-tensional…

(^_^)

Posted by: Toby Bartels on March 26, 2010 12:38 AM | Permalink | Reply to this

### Example: the equational theory of a group

Here is a simple type theory, which we may call the equational theory of a group.

A word of warning before I begin: This is so simple that people might hesitate to actually call it a ‘type theory’ at all. The reason is that the types are all fixed in advance; there is no theory of how to build new types out of old. For that reason, people might prefer to call it a ‘typed theory’ (a theory with types) rather than a ‘type theory’ (a theory about types). In fact, not only are all of the types fixed in advance; there is only one type! So people might prefer to call it an ‘untyped theory’ (this is an example of the [[red herring principle]]: an untyped theory is actually a typed theory in which there is exactly one type). I start with such a simple example because it is one that you should already know, so it will help you to learn the language and notation.

We define everything recursively, so to tell you what (for example) a context is, I simply define what data you might use to build a context, and then give notation for that context. In fact, I will define through mutually recursive definitions what a context is, what a term in a context is, and what an equational identity in a context is; as a technical device, I will also define the number of free variables (which will always be a natural number) in a context.

One normally uses uppercase Greek letters for contexts. To indicate that $a$ is a term in the context $\Gamma$, we write $\Gamma \vdash a\colon G$. (The ‘$\colon G$’ indicates that $G$ is the type of the term $a$; since there is only one type in this theory, that bit is superfluous. But I include it so that we can generalise later.) An equational identity in a context will always be a pair of terms in that context; to indicate that $\Gamma \vdash a\colon G$ and $\Gamma \vdash b\colon G$ constitute such a pair, we write $\Gamma \vdash a = b\colon G$. Finally, to indicate the number of free variables in $\Gamma$, we write ${|\Gamma|}$.

So: a context in the equational theory of a group may be constructed in any of three ways:

• There exists a context, called the empty context and denoted $\epsilon$.
• For every context $\Gamma$, there exists another context, called the extension of $\Gamma$ by a new variable and denoted $\Gamma, \mathrm{x}_n\colon G$, where $n$ is the number of free variables in $\Gamma$.
• For every context $\Gamma$, every term $a$ in $\Gamma$, and every term $b$ in $\Gamma$, there exists a context, called the extension of $\Gamma$ by the equation of $a$ and $b$ and denoted $\Gamma, a = b\colon G$.

The notation for these recursively built contexts mirrors the notation for terms and equational identities in a context, which is no coincidence. However, notice that we have commas here, not ‘${\vdash}$’.

Since the only way to begin constructing a context is with the empty context, we usually leave out ‘$\epsilon$’ and the first comma: thus, $\epsilon, \mathrm{x}_0\colon G, \mathrm{x}_1\colon G$, for example, becomes $\mathrm{x}_0\colon G, \mathrm{x}_1\colon G$. Similarly, we abbreviate ‘$\epsilon \vdash \cdots$’ to simply ‘$\vdash \cdots$’.
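For concreteness, these three formation rules can be sketched as an inductive datatype. Here is one hypothetical encoding in Python (the constructor names and tuple shapes are just my own choice of notation, not anything canonical):

```python
# Contexts as nested tuples: one constructor per formation rule.
EMPTY = ("empty",)                 # the empty context, epsilon

def ext_var(ctx):
    # extension of ctx by a new variable x_n
    return ("var", ctx)

def ext_eq(ctx, a, b):
    # extension of ctx by the equation a = b (a, b should be terms in ctx)
    return ("eq", ctx, a, b)

# epsilon, x0 : G, x1 : G -- usually abbreviated  x0 : G, x1 : G
gamma = ext_var(ext_var(EMPTY))
print(gamma)  # ('var', ('var', ('empty',)))
```

Note that a context built this way really is a finite list read off from the inside out, matching the comma notation above.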

Since contexts are constructed recursively, I may define the number of free variables in a context recursively:

• The number of free variables in the empty context is $0$.
• For every context $\Gamma$, the number of free variables in the extension of $\Gamma$ by a new variable is $n + 1$, where $n$ is the number of free variables in $\Gamma$.
• For every context $\Gamma$, the number of free variables in the extension of $\Gamma$ by any equation is the same as the number of free variables in $\Gamma$.

So every time we extend a context by a new variable, we get one more free variable.
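The recursion above translates clause by clause into a function; a sketch, assuming contexts are encoded as nested tuples $\epsilon \mapsto$ `("empty",)`, variable extension $\mapsto$ `("var", ctx)`, equation extension $\mapsto$ `("eq", ctx, a, b)` (a hypothetical encoding of my own):

```python
def num_free_vars(ctx):
    tag = ctx[0]
    if tag == "empty":
        return 0                          # |epsilon| = 0
    if tag == "var":
        return num_free_vars(ctx[1]) + 1  # a variable extension adds one
    return num_free_vars(ctx[1])          # an equation extension adds none

# x0 : G, x1 : G, x0' = x1 : G  has 2 free variables
ctx = ("eq", ("var", ("var", ("empty",))), ("inv", ("x", 0)), ("x", 1))
print(num_free_vars(ctx))  # 2
```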

Next, a term in a context in the equational theory of a group may be constructed in any of four ways.

• For every context $\Gamma$ and every natural number $i \lt {|\Gamma|}$, there is a term in $\Gamma$, called the $i$th variable of $\Gamma$ and denoted $\mathrm{x}_i$.
• For every context $\Gamma$, there is a term in $\Gamma$, called the identity element and denoted $1$.
• For every context $\Gamma$ and term $a$ in $\Gamma$, there is a term in $\Gamma$, called the inverse of $a$ and denoted $a'$.
• For every context $\Gamma$ and terms $a$ and $b$ in $\Gamma$, there is a term in $\Gamma$, called the product of $a$ and $b$ and denoted $(a b)$.

As always in algebra, we tend to omit the outer pair of parentheses for a term constructed as a product, although we need to keep those parentheses if we continue to construct a new term out of it.
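The side condition $i \lt {|\Gamma|}$ in the variable rule is the only place where the context constrains term formation; everything else is closed under the operations. A sketch of the corresponding well-formedness check, with terms as tuples (again my own hypothetical encoding) and the context reduced to its number $n$ of free variables:

```python
def well_formed(term, n):
    tag = term[0]
    if tag == "x":                 # the i-th variable: needs i < |Gamma|
        return term[1] < n
    if tag == "1":                 # the identity element: fine in any context
        return True
    if tag == "inv":               # the inverse a'
        return well_formed(term[1], n)
    return (well_formed(term[1], n)        # the product (a b)
            and well_formed(term[2], n))

# (x0 x1)'  is a term in any context with at least 2 free variables
t = ("inv", ("mul", ("x", 0), ("x", 1)))
print(well_formed(t, 2), well_formed(t, 1))  # True False
```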

Finally, an equational identity in a context $\Gamma$ in the equational theory of a group is any pair of terms in $\Gamma$ that may be obtained in any of the following ways.

• For every context $\Gamma$ and term $a$ in $\Gamma$, we have the equational identity $\Gamma\vdash a = a\colon G$, by reflexivity at $a$.
• For every context $\Gamma$, terms $a$ and $b$ in $\Gamma$, and equational identity $\Gamma \vdash a = b\colon G$, we have $\Gamma \vdash b = a\colon G$, by symmetry at $a$ and $b$.
• For every $\Gamma \vdash a = b\colon G$ and $\Gamma\vdash b = c\colon G$, we have $\Gamma \vdash a = c\colon G$, by transitivity at $a$, $b$, and $c$.
• For every $\Gamma \vdash a = b\colon G$, we have $\Gamma \vdash a' = b'\colon G$, called substitution into an inverse at $a$ and $b$.
• For every $\Gamma \vdash a = b\colon G$ and $\Gamma \vdash c\colon G$, we have $\Gamma \vdash a c = b c\colon G$, called left substitution into a product at $a$, $b$, and $c$.
• For every $\Gamma \vdash a\colon G$ and $\Gamma \vdash b = c\colon G$, we have $\Gamma \vdash a b = a c\colon G$, called right substitution into a product at $a$, $b$, and $c$.
• For every term $\Gamma \vdash a\colon G$ in context, we have $\Gamma \vdash 1 a = a\colon G$, called the left unit law at $a$.
• For every term $\Gamma \vdash a\colon G$ in context, we have $\Gamma \vdash a 1 = a\colon G$, called the right unit law at $a$.
• For every term $\Gamma \vdash a\colon G$ in context, we have $\Gamma \vdash a' a = 1\colon G$, called the left inverse law at $a$.
• For every term $\Gamma \vdash a\colon G$ in context, we have $\Gamma \vdash a a' = 1\colon G$, called the right inverse law at $a$.
• For every $\Gamma \vdash a\colon G$, $\Gamma \vdash b\colon G$, and $\Gamma \vdash c\colon G$, we have $\Gamma \vdash (a b) c = a (b c)\colon G$, called associativity at $a$, $b$, and $c$.

That's it!

There are many variations that we can introduce for the rules handling variables. Here I've adopted the practice of numbering the variables; as you can see if you start building up a few contexts using the first two rules, the variables are simply numbered in order. Thus a typical context is $\mathrm{x}_0\colon G, \mathrm{x}_1\colon G, \mathrm{x}_0' = \mathrm{x}_1\colon G$, which has $2$ free variables. It is more common in practice to allow one to introduce arbitrary notation for the free variables; so this same context might be denoted $x\colon G, y\colon G, x' = y\colon G$ or $y\colon G, x\colon G, y' = x\colon G$ or $\alpha\colon G, \beta\colon G, \alpha' = \beta\colon G$, etc. It is possible to write down a formal definition that takes this into account and avoids the counting, but it is technically more complicated, because you have to keep track of which variable names are taken. (In particular, $x\colon G, x\colon G, x' = x\colon G$ is not allowed!) So for my purposes, all of the actual variable names are of the form ‘$\mathrm{x}_i$’ and anything else is an informal notation.

There are standard ways of symbolically stating all or most of the rules above. I'm not sure about how to put the rules for forming contexts or counting the number of free variables in a context; people would probably prefer to just say in words that a context is a list of variable declarations and equational hypotheses, leaving it to the reader to realise that you have to keep track of how many variables appear (or which variables appear, without repeating any) and that you can only insert an equation that goes between terms that are valid in the context so far. But the other rules may be written like this: $\array { \frac {} {\Gamma \vdash 1\colon G} \\\\ \frac {\Gamma \vdash a\colon G\qquad \Gamma \vdash b\colon G} {\Gamma \vdash a b\colon G} \\\\ \frac {\Gamma \vdash a\colon G} {\Gamma \vdash a'\colon G} \\\\ \frac {\Gamma \vdash a\colon G} {\Gamma \vdash a = a\colon G} \\\\ \frac {\Gamma \vdash a = b\colon G} {\Gamma \vdash b = a\colon G} \\\\ \frac {\Gamma \vdash a = b\colon G \qquad \Gamma \vdash b = c\colon G} {\Gamma \vdash a = c\colon G} \\\\ \frac {\Gamma \vdash a = b\colon G} {\Gamma \vdash a' = b'\colon G} \\\\ \frac {\Gamma \vdash a = b\colon G \qquad \Gamma \vdash c\colon G} {\Gamma \vdash a c = b c\colon G} \\\\ \frac {\Gamma \vdash a\colon G \qquad \Gamma \vdash b = c\colon G} {\Gamma \vdash a b = a c\colon G} \\\\ \frac {\Gamma \vdash a\colon G} {\Gamma \vdash 1 a = a\colon G} \\\\ \frac {\Gamma \vdash a\colon G} {\Gamma \vdash a 1 = a\colon G} \\\\ \frac {\Gamma \vdash a\colon G} {\Gamma \vdash a' a = 1\colon G} \\\\ \frac {\Gamma \vdash a\colon G} {\Gamma \vdash a a' = 1\colon G} \\\\ \frac {\Gamma \vdash a\colon G \qquad \Gamma \vdash b\colon G \qquad \Gamma \vdash c\colon G} {\Gamma \vdash (a b) c = a (b c)\colon G} }$ The idea is that if you have everything above a line, then you get the judgement below that line.
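One concrete way to see the ‘everything above the line gives you the judgement below the line’ reading: each equational rule is a (partial) function from judgements to judgements, and a derivation is a composite of such functions. A toy sketch of three of the rules, with an equational identity represented simply as a pair of terms and the context left implicit (my own informal encoding):

```python
def refl(a):
    # Gamma |- a = a : G
    return (a, a)

def sym(eq):
    # from Gamma |- a = b : G, conclude Gamma |- b = a : G
    a, b = eq
    return (b, a)

def trans(eq1, eq2):
    # from a = b and b = c, conclude a = c; the middle terms must match,
    # or the rule does not apply
    a, b = eq1
    b2, c = eq2
    assert b == b2, "middle terms must match"
    return (a, c)

eq = trans(sym(refl("a")), refl("a"))
print(eq)  # ('a', 'a')
```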

If you play around with this, you should find that you can start proving theorems in the equational theory of a group. For example, you can prove that the identity element is its own inverse as follows:

• As always, $\epsilon$ is a context (the empty context).
• As in any context, we have the term $\vdash 1\colon G$ (remember that this abbreviates $\epsilon \vdash 1\colon G$).
• Continuing, we have its inverse, the term $\vdash 1'\colon G$, and their product, the term $\vdash 1 1'\colon G$.
• By the right unit law at $1$, we have $\vdash 1 1' = 1\colon G$.
• By symmetry at $1 1'$ and $1$, we have $\vdash 1 = 1 1'\colon G$.
• By the left unit law at $1'$, we have $\epsilon \vdash 1 1' = 1'\colon G$.
• Finally, by transitivity at $1$, $1 1'$, and $1'$, we have $\vdash 1 = 1'\colon G$.

This formalises the equational proof $1 = 1 1' = 1' ,$ only noting explicitly where the properties of equality and the group axioms are used.
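Such a formal derivation guarantees that the equation holds in every group. As a sanity check (not itself a proof of anything), one can interpret closed terms in one particular group — say the integers under addition — and confirm that both sides denote the same element. A sketch, with terms as tuples in a hypothetical encoding of my own:

```python
def interpret(term, env=()):
    # interpret a term in the group (Z, +), reading free variables from env
    tag = term[0]
    if tag == "x":
        return env[term[1]]
    if tag == "1":
        return 0                       # the additive identity
    if tag == "inv":
        return -interpret(term[1], env)
    return interpret(term[1], env) + interpret(term[2], env)  # "mul"

# the two sides of  |- 1 = 1' : G  denote the same element
print(interpret(("1",)) == interpret(("inv", ("1",))))  # True
```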

I carried out this proof in the empty context, but I could just as well have done it in an arbitrary context $\Gamma$. Sometimes it's nice to do that explicitly; just write ‘$\Gamma$’ before every ‘$\vdash$’ in the explicit proof above. But also, there is a metatheorem that any equational identity in the empty context is valid in every context; more generally, any equational identity valid in some context $\Gamma$ is also valid in any context that is built from $\Gamma$ by any number of extensions. This metatheorem can be proved by structural induction on the ways in which equational identities (and hence terms and contexts) are constructed.

Here is another theorem, that the inverse of a product is the product of the inverses in the opposite order. I will formalise this argument:

$(x y)' = (x y)' 1 = (x y)' (x x') = ((x y)' x) x' = (((x y)' x) 1) x' = (((x y)' x) (y y')) x' = ((((x y)' x) y) y') x' = (((x y)' (x y)) y') x' = (1 y') x' = y' x' .$

• As always, $\epsilon$ is a context.
• Extending by a new variable twice, we have the context $\mathrm{x}_0\colon G, \mathrm{x}_1\colon G$; but I'll write this $x\colon G, y\colon G$ instead.
• Since ${|\epsilon|} = 0$, we have ${|x\colon G|} = 1$ and ${|x\colon G, y\colon G|} = 2$, so we have the terms $\mathrm{x}_0$ and $\mathrm{x}_1$ in the last context. But again, I'm writing these differently, so it comes out like this: $x\colon G, y \colon G \vdash x\colon G$ and $x\colon G, y \colon G \vdash y\colon G$.
• Now I can also build more terms:
• $x\colon G, y\colon G \vdash x y\colon G$;
• $x\colon G, y\colon G \vdash (x y)'\colon G$;
• $x\colon G, y\colon G \vdash 1\colon G$;
• $x\colon G, y\colon G \vdash (x y)' 1\colon G$;
• etc, ultimately building all of the terms that appear in the long equation above.
• Then I start building the equational identities:
• $x\colon G, y\colon G \vdash (x y)' = (x y)' 1\colon G$ by the right unit law and symmetry;
• $x\colon G, y\colon G \vdash x x' = 1\colon G$ by the right inverse law;
• $x\colon G, y\colon G \vdash 1 = x x'\colon G$ by symmetry;
• $x\colon G, y\colon G \vdash (x y)' 1 = (x y)' (x x')\colon G$ by right substitution into a product;
• etc, etc, etc, finishing with $8$ applications of transitivity.

But we could also formalise this by working in an arbitrary context $\Gamma$ and using any two arbitrary terms (not necessarily variables!) $\Gamma \vdash x\colon G$ and $\Gamma \vdash y\colon G$. Then the same argument works, only writing $\Gamma$ in place of the context $x\colon G, y\colon G$ (and skipping the beginning where I derive that $x$ and $y$ are terms in that context). Once again, there is a metatheorem that anything that can be proved one way can also be proved the other way.

Finally, consider the theorem that the inverse of a commutative product is the product of the inverses in the same order. That is, if $x y = y x$, then $(x y)' = x' y'$. The argument is $(x y)' = (y x)' = \cdots = x' y' ,$ where the dots are filled in the same way as in the previous argument (only with $x$ and $y$ swapped). To formalise this, we work in the context $x\colon G, y\colon G, x y = y x$; that is, our context now consists of two free variables and an equation stating that they commute. Alternatively, we formalise the argument in an arbitrary context $\Gamma$ with arbitrary terms $\Gamma \vdash x\colon G$ and $\Gamma \vdash y\colon G$, such that $\Gamma \vdash x y = y x\colon G$ is valid.

Of course, having just proved $x\colon G, y\colon G \vdash (x y)' = y' x'\colon G$, it's silly to go through the entire proof of $x\colon G, y\colon G, x y = y x\colon G \vdash (x y)' = x' y'\colon G$. Instead, we should use the previous result in the course of proving this new one. We actually cannot do so directly, since $(x y)' = y' x'$ is not quite the identity we need; it's $(y x)' = x' y'$ that we need. However, suppose that we take the broader view, in which we have proved that $\Gamma \vdash (x y)' = y' x'$ is valid for any context and any two terms in that context. Then we are OK: if we take $\Gamma$ to be $x\colon G, y\colon G, x y = y x\colon G$, take $x$ to be $y$, and take $y$ to be $x$, then we have $x\colon G, y\colon G, x y = y x\colon G \vdash (y x)' = x' y'\colon G$; substitution into an inverse applied to the hypothesis $x y = y x$ gives $(x y)' = (y x)'$, and a single application of transitivity gives us the new theorem.

In fact, we can think of the arbitrary-context versions of these three theorems as derived rules: $\array { \frac {} {\Gamma \vdash 1 = 1'\colon G} \\\\ \frac {\Gamma \vdash a\colon G \qquad \Gamma \vdash b\colon G} {\Gamma \vdash (a b)' = b' a'\colon G} \\\\ \frac {\Gamma \vdash a\colon G \qquad \Gamma \vdash b\colon G \qquad \Gamma \vdash a b = b a\colon G} {\Gamma \vdash (a b)' = a' b'\colon G} }$ Then we use the second derived rule in the proof of the third; more generally, we can use a derived rule to shorten any later proofs.
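The second derived rule can also be spot-checked semantically in a genuinely noncommutative group, such as the permutations of $\{0, 1, 2\}$ under composition; this only checks soundness in one model, nothing more. A sketch, with permutations encoded as tuples (my own choice of representation):

```python
def compose(p, q):
    # (p q)(i) = p(q(i)): the group product is composition of permutations
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    # invert a permutation by reading off where each value came from
    inv = [0] * 3
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

a = (1, 2, 0)   # a 3-cycle
b = (1, 0, 2)   # a transposition; note that a and b do not commute
print(compose(a, b) != compose(b, a))                              # True
print(inverse(compose(a, b)) == compose(inverse(b), inverse(a)))   # True
```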

Well, this is all very long and tedious, but that is not really the fault of type theory as such, but rather of formal logic. Any way of making completely formal the equational reasoning of elementary group theory would be just as long and tedious. One reason that people don't ever write down anything so long and tedious when describing group theory is that they take basic logic for granted. Here I have spelt out that logic — and it is only a very weak logic! (In my next post, I hope to go beyond the equational theory of a group to describe the first-order theory of a group, spelling out a more powerful logic that allows us to state things such as ‘Only the identity equals its inverse.’ as a hypothesis and prove consequences of it. You can't do that in the equational theory.)

The benefit of taking logic for granted is obvious. But the benefit of spelling it out in so much detail is flexibility. The equational theory of a group can be interpreted in any category with finite products, after all, while first-order logic requires more. And you can even take completely different approaches to logic, as with propositions as types or Mike's $2$-logic.

Posted by: Toby Bartels on March 25, 2010 5:15 AM | Permalink | Reply to this

### Re: Example: the equational theory of a group

That's it!

Oops, I left out one rule!

Just as every variable that appears in a context is also a term in that context, so every equation that appears in a context is also an identity in that context. For example, $x\colon G, y\colon G, x y = y x\colon G \vdash x y = y x\colon G$. Without this rule, the proof of $x\colon G, y\colon G, x y = y x\colon G \vdash (x y)' = x' y'\colon G$ doesn't work.

Posted by: Toby Bartels on March 25, 2010 5:24 AM | Permalink | Reply to this

### Re: Example: the equational theory of a group

I want to give the first-order theory of a group, but I need to go to bed before I finish it. So I will just indicate one highlight: while the equational theory of a group had only one type, $G$, for elements of the group, the first-order theory of a group will have two types: $G$, for elements of the group, and $Prop$, for propositions. We had propositions in the equational theory, but they were all equations between terms, so we could deal with them using the general concept of term without bringing in the general concept of proposition. But in first-order logic, we have means of combining arbitrary propositions, so we need a general type of propositions to talk about.

This will still fall short of a bona fide type theory as such, where one has means of combining arbitrary types. (It is a type theory, but a rather degenerate sort of one.)

Posted by: Toby Bartels on March 25, 2010 6:52 AM | Permalink | Reply to this

### Re: Example: the equational theory of a group

Here is a simple type theory,

Thanks, that’s very useful.

When I have some other things out of the way, I’ll get back to this…

Posted by: Urs Schreiber on March 25, 2010 6:31 PM | Permalink | Reply to this

Post a New Comment