

February 4, 2013

Presentations and Representations in Foundations

Posted by Mike Shulman

I’ve had several requests to write more introductory posts on type theory like the last one. Today I’m going to give it a go by adapting my notes from a talk I just gave here at IAS. So rather than trying to continue directly from the last post, today we’ll take a different tack. Namely, suppose we were designing a new foundational system from scratch. This system will probably be about some kind of “basic objects”, and it’ll probably also include some ways to construct such objects (often using other given objects as data). The question is, how do we describe or characterize these constructions?

To help get an idea what our options might be, let’s think about the structures we see arising in set-based mathematics. As an example, I’ll consider one familiar sort of structure: groups.

One way to describe or construct a group is what I’ll call the explicit approach: we say precisely what its elements are and what the group structure is. This might take the form of a multiplication table. But more generally, I’ll call a description “explicit” if it is as explicit as its inputs. For instance, the product of two groups $G\times H$ can be defined to have as elements the ordered pairs $(g,h)$ with $g\in G$ and $h\in H$, and multiplication defined by $(g,h)(g',h') = (g g', h h')$; this is fully explicit in terms of the structure of $G$ and $H$.

A characteristic of an explicit description is that the different “parts” of the structure can be defined more or less separately (subject to the rules governing the structure). For instance, if $G$ happens to act on $H$ in a specified way, then the set of ordered pairs $(g,h)$ can be given a different group structure $(g,h)(g',h') = (g g', h^{g'} h')$, yielding the semidirect product $G\ltimes H$.
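To see the two structures side by side, here is a minimal sketch in Lean 4; `Grp`, `prodGrp`, `sdprod`, and `act` are ad hoc names (not Mathlib’s `Group`), and the group laws are omitted so that only the explicit data is visible:

```lean
-- A bare-bones "group structure" carrier: just the operations,
-- with the associativity/unit/inverse laws omitted.
structure Grp (G : Type) where
  mul : G → G → G
  one : G
  inv : G → G

-- Direct product: componentwise multiplication on pairs.
def prodGrp {G H : Type} (gG : Grp G) (gH : Grp H) : Grp (G × H) where
  mul p q := (gG.mul p.1 q.1, gH.mul p.2 q.2)
  one := (gG.one, gH.one)
  inv p := (gG.inv p.1, gH.inv p.2)

-- Semidirect product: the same set of pairs, with the twisted
-- multiplication (g,h)(g',h') = (g g', h^{g'} h'),
-- where `act g' h` plays the role of h^{g'}.
def sdprod {G H : Type} (gG : Grp G) (gH : Grp H)
    (act : G → H → H) : Grp (G × H) where
  mul p q := (gG.mul p.1 q.1, gH.mul (act q.1 p.2) q.2)
  one := (gG.one, gH.one)
  inv p := (gG.inv p.1, act (gG.inv p.1) (gH.inv p.2))
```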

A different way to describe or construct a group is with a presentation, consisting of some “generating” elements and equations. An explicit description can be converted into a presentation in a fairly trivial way, but the converse is not always true: the word problem for groups is not solvable in general. This may seem to make presentations less useful than explicit descriptions, but in fact lots of interesting and important structures are known best (or only) by presentations.

So, in building our foundational system, we might consider using either explicit descriptions or presentations as ways to construct objects. For instance, I think it’s reasonable to say that material set theories (like ZFC) mostly take the explicit approach. Material-sets are determined by their elements, and most of the axioms of material set theory tell us how to construct particular sets by specifying their elements (in terms of other sets we already have).

On the other hand, both of these approaches are very reductionist, depending on the fact that our objects (such as groups) are structured sets. Thus, neither is very promising if we’re looking for a foundational system whose basic objects are not sets. So what else might we consider?

Category theory suggests another fundamental way to describe objects: with universal properties. More precisely, we can characterize something by exhibiting it as a representing object for a functor. Since functors can be covariant or contravariant, we obtain two dual sorts of characterizations: “left” universal properties and “right” universal properties. For instance, the cartesian product of groups $G\times H$ can be characterized as a (right) representing object for the functor $Hom(-,G)\times Hom(-,H)$, while the tensor product of abelian groups $G\otimes H$ can be characterized as a (left) representing object for the functor $Bilin(G,H;-)$.

There are close connections between (1) right universal properties and explicit descriptions, and (2) left universal properties and presentations. The latter is familiar to a category theorist: the group presented by generators $S$ and relations $R$ is the coequalizer of the pair $F R \;\rightrightarrows\; F S$, where $F$ denotes the free group functor; this is precisely a left universal property. (Note, though, that this is not to say that this universal property is the same as the intuitive meaning of a “presentation” in the sense of “all the elements are obtained by applying the operations to $S$ subject to relations $R$”.) Moreover, many or most left universal properties can be expressed as presentations: for instance, this is the case for the tensor product.

On the other hand, in most “categories of structures”, we can extract an explicit description fairly easily from a right universal property—because of the presence of certain special objects with left universal properties. For instance, in the case of groups we have the free group on one generator, $\mathbb{Z}$, such that $Hom(\mathbb{Z},G)$ is the underlying set of $G$. It follows immediately that if $G\times H$ is defined using the (right) universal property of a product, then its underlying set must be the cartesian product of those of $G$ and $H$; and so on. Many explicit descriptions can be expressed as right universal properties, but sometimes this is tricky; for instance, what is the universal property of the semidirect product? (There are answers; I’m just saying they are less obvious, and in other cases they may be even harder.)

So, returning to our foundational system, we might alternatively consider expressing constructions in terms of universal properties. This is pretty clearly the choice made by structural set theories such as ETCS. In fact, the axioms of ETCS are usually expressed exactly in terms of universal properties in the category of sets.

If we’re hoping to free ourselves from sets, this approach is more promising, but it still has a drawback: universal properties are usually expressed in terms of (natural) bijections of hom-sets. ETCS deals with this by shifting the statement of universal properties into the ambient first-order logic. That is, instead of characterizing the product of sets $A\times B$ by a natural bijection $Hom(-,A\times B)\cong Hom(-,A)\times Hom(-,B)$, we have instead axioms which say that “for every $f:X\to A$ and $g:X\to B$, there exists a unique $h:X\to A\times B$ such that …”. While this does avoid circularity, it more or less forces notions like “uniqueness” used in these universal properties to be those of the ambient first-order logic. Thus, if this logic is of the traditional sort, there is still a “set-like” structure hanging around somewhere.

Fortunately, we can make one further improvement that helps resolve this problem. It’s well-known that in a closed category (perhaps monoidal, or cartesian, or whatever is appropriate), universal properties can usually be “internalized”. For instance, the exponential object $Z^Y$ in a cartesian closed category is characterized by natural isomorphisms

$$Hom(X, Z^Y) \cong Hom(X\times Y, Z)$$

but this automatically implies that we also have isomorphisms of exponential objects

$$(Z^Y)^X \cong Z^{X\times Y}.$$

This latter property seems more amenable to being expressed in a foundational system without needing any ambient set-like structure. It’s a long way from there to actually finding such a foundational system, but fortunately the work’s already been done for us. Of course, as always, I’m talking about type theory.

In type theory, the basic objects are types and their elements. Traditionally, one thinks of types as being like sets, and that’s sometimes a very helpful guide when reasoning in type theory. However, the essential point today is that types don’t have to be like sets.

In order to explain how type theory expresses definitions by universal properties “internally”, I need to say something about how it handles variables. In set-theoretic foundations, if I have a variable $x$ and an expression involving $x$, I can substitute any value for $x$ and the expression will then denote a resulting value. For instance, the expression $x^2+1$ denotes $5$ when I substitute $2$ for $x$. Thus, an expression containing a free variable $x$ implicitly describes a “function” of sorts, from input values to output values.

This is also true in type theory, but because types might be more structured than sets are, an expression containing a free variable does not merely describe a function (between sets), but a morphism (between types). One way to think of this is that the “elements” of a type act more like generalized elements in category theory. Another way to think of it is that we should be able to “substitute” for a variable not just the “values” in its domain in the naive sense (e.g. the elements of the underlying set of a group), but also whatever structure there may be relating those values, to ensure that the “morphism” defined by any expression preserves that structure.

The practical upshot of this is that we can express rules for constructing types in terms of their “elements”, and then these rules will automatically characterize those types by a full universal property with respect to all “morphisms”. For instance, one of the rules for cartesian product types says that “if we have elements $a:A$ and $b:B$, then we can form an element $(a,b):A\times B$.” Since these elements $a$ and $b$ may involve free variables, this implies automatically that any pair of morphisms into $A$ and $B$ also yields a morphism into $A\times B$.
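Read with a free variable of type $X$, the rule is literally the pairing map; a minimal sketch in Lean 4 (`pairing` is an ad hoc name):

```lean
-- The pairing rule with a free variable of type X: two generalized
-- elements f : X → A and g : X → B yield one into the product.
def pairing {X A B : Type} (f : X → A) (g : X → B) : X → A × B :=
  fun x => (f x, g x)

-- The projections recover f and g: the product's universal property.
example {X A B : Type} (f : X → A) (g : X → B) (x : X) :
    (pairing f g x).1 = f x := rfl
```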

This should be contrasted with the treatment in material set theory, where $A\times B$ is defined explicitly in terms of its elements $(a,b)$, and in order to prove its universal property in the category of sets we have to say something like “given $f:X\to A$ and $g:X\to B$, for any $x\in X$ we have elements $f(x)\in A$ and $g(x)\in B$, hence we can form the pair $(f(x),g(x))$, and we define $h:X\to A\times B$ to send $x$ to $(f(x),g(x))$.” The latter argument depends on knowing that sets “have no structure”; if we wanted to do the same thing in the category of groups, we would have to add an additional part verifying that $h$ is a group homomorphism. In type theory, all the extra structure that types might have is handled automatically by the interpretation of variables.

Nearly every rule for constructing types in type theory can be regarded as expressing a universal property. The division into “left” and “right” universal properties is referred to by type theorists as the division into “positive types” and “negative types”, and called generically “polarity”. In type-theoretic language, we say:

  • A negative type is characterized by giving the basic ways to use an element of it. For instance, the ways to use an element of $A\times B$ are to extract its first or second component. The way to use an element of the function type $B^A$ is to apply it to an element of $A$ and obtain an element of $B$.

    Given these, it follows that the way to construct an element of such a type is to specify what happens when we use our putative element in all possible ways. For instance, the way to construct an element of $A\times B$ is to specify its two components. And the way to construct an element of $B^A$ is to give a way to make an element of $B$ under the assumption of an element of $A$, i.e. to give an expression of type $B$ containing a (new) free variable of type $A$.

  • A positive type is characterized by giving the basic ways to construct parts of it. For instance, the ways to construct parts of the coproduct $A+B$ are to inject an element of $A$, or to inject an element of $B$. And the ways to construct a natural number are either to take $0$, or to take the successor of some other natural number (this is an example of a “properly recursive” inductive type, for which some of the ways to construct new elements involve having other elements of the same type).

    Given these, it follows that the way to use an element of such a type is to specify what is to be done in all possible cases, depending on the ways that such an element might have been constructed. For instance, the way to use an element of $A+B$ is to divide into two cases, according to whether that element came from $A$ or from $B$. And the way to use a natural number is to do a proof by induction, or a definition by recursion: we divide into two cases according to whether we have $0$ or a successor, and in the latter case we can assume we’ve already dealt with the predecessor natural number. (Both polarities are sketched in code just after this list.)
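Here is the promised sketch of both polarities in Lean 4 syntax (a hedged illustration; `constFun`, `useSum`, and `double` are my own names):

```lean
-- Negative: an element of A × B is constructed by specifying what its
-- two ways of being used (the projections) will return.
example {A B : Type} (a : A) (b : B) : A × B := ⟨a, b⟩

-- Negative: an element of the function type A → B is an expression of
-- type B with a (new) free variable of type A.
def constFun {A B : Type} (b : B) : A → B := fun _ => b

-- Positive: to use an element of the coproduct `Sum A B`, case-split
-- on the two ways it could have been constructed.
def useSum {A B C : Type} (f : A → C) (g : B → C) : Sum A B → C
  | .inl a => f a
  | .inr b => g b

-- Positive: to use a natural number, recurse on the two constructors
-- `zero` and `succ` (definition by recursion).
def double : Nat → Nat
  | .zero => .zero
  | .succ n => (double n).succ.succ
```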

Recalling that “elements” of a type include the information of all “generalized elements”, I think it’s not too hard to see how the rule for constructing elements of, say, $B^A$, expresses its universal property. Morphisms $X\to B^A$ — that is, expressions of type $B^A$ containing a variable of type $X$ — are the same as morphisms $X\times A\to B$ — that is, expressions of type $B$ containing a variable of type $A$ in addition to the variable of type $X$. (Hopefully it’s not too hard to believe that a variable of type $X\times A$ is roughly the same as a variable of type $X$ and another variable of type $A$. There’s a subtle issue here involving things called contexts, but I don’t want to get into it.) Moreover, because expressions with free variables are also how we construct elements of function types, we have immediately the internalized isomorphism $(B^A)^X \cong B^{X\times A}$. The other examples are similar.
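The two directions of that internalized isomorphism are ordinary currying and uncurrying; a minimal sketch in Lean 4 (my names, with the isomorphism witnessed only up to the definitional check shown):

```lean
-- Currying: expressions of type B with free variables of types X and A
-- correspond to elements of the function type X → A → B.
def curry {X A B : Type} (f : X × A → B) : X → A → B :=
  fun x a => f (x, a)

def uncurry {X A B : Type} (f : X → A → B) : X × A → B :=
  fun p => f p.1 p.2

-- The round trip is definitional (function eta plus pair projections).
example {X A B : Type} (f : X → A → B) : curry (uncurry f) = f := rfl
```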

There’s an important difference between positive and negative types, however, coming from the asymmetry that “(generalized) elements” are “maps in” rather than “maps out”. To describe a positive type, we give the basic “ways to construct elements”, but nothing requires that these be the only elements of the resulting type. For instance, if types are groups, then the coproduct type $A+B$ will be the free product of $A$ and $B$, and this contains many more elements than those coming directly from $A$ or from $B$. This is despite the fact that the coproduct type is a positive type whose “basic elements” are those coming from $A$ or from $B$, and moreover that in order to use an element of $A+B$ it suffices to deal with the two cases when it comes from $A$ and when it comes from $B$. In the types-are-groups world, there are more elements of $A+B$ than these, but because all constructions must respect group structure, anything we do with a variable of type $A+B$ is uniquely determined by what happens to the images of $A$ and $B$.

(In case you’re wondering, yes, there is a type theory whose types are groups. The category of groups is regular, which is more than sufficient for it to have an internal type theory. This type theory doesn’t have as much structure as the ones we usually use as foundations for mathematics — for instance, there are no function types, since $Grp$ is not cartesian closed — but it’s a perfectly good type theory. Even more interesting is the category of abelian groups, whose internal type theory is a form of linear logic.)

I stress this point because there seems to be a common feeling among type theorists that “there should be nothing in an inductive (positive) type except what you put into it”. Probably this belief arises from thinking of types as like sets, where there are no “operations” in sight to make new stuff out of the generating stuff that you put in. But if types aren’t like sets, then all bets are off. Indeed, the great thing about type theory, as I’ve been saying, is that it frees us to consider theories whose basic objects aren’t set-like.

Now let’s bring in the homotopy theory. Suppose that we are higher category theorists, and we’d like a “natively higher-categorical” foundation for mathematics. To keep things easy, let’s look for a foundation whose basic objects are $\infty$-groupoids — more precisely, for a type theory whose types behave like $\infty$-groupoids. Surprisingly enough, it turns out that there’s not a whole lot of work to do: up to a point, plain old ordinary type theories can easily be interpreted as being about $\infty$-groupoids rather than (say) sets.

The essential thing to note is that when we do this, the “morphisms” are automatically functors between our $\infty$-groupoids, just as in the generic case discussed above. In particular, any expression with a free variable denotes a functorial operation. This is in harmony with what we would expect from a natively category-theoretic foundation of mathematics: everything you can do or say is automatically invariant (or, more precisely, “equivariant” or “covariant”) under equivalence.

One important new thing that we can add to this homotopy type theory is a new kind of positive type. The positive types in ordinary type theory are ones that make sense in any category, and are generally inspired by the category of sets (or other set-like categories). Since sets have only elements and nothing else, to give a presentation of a set is nothing but to give some elements and some equations between them.

However, $\infty$-groupoids have all sorts of higher structure, so a “presentation” of an $\infty$-groupoid can have “generators” of arbitrary dimension: not just points, but equivalences (or paths) between them, and higher equivalences between those, and so on. Moreover, when talking about weak $\infty$-groupoids, as we generally do, we can omit the “relations” entirely, since quotienting (weakly) by a relation at level $n$ is the same as adding a generator of level $n+1$. Thus, in homotopy type theory we should have positive types that correspond to presentations of this sort.

These are called higher inductive types and I think they are the most exciting thing in the world. (-: For instance, they give us a way to import all the “spaces” considered in classical homotopy theory into homotopy type theory, incarnated as type-theoretic presentations of their fundamental $\infty$-groupoids. The higher inductive type generated by a single point and a single path from that point to itself represents (the fundamental $\infty$-groupoid of) the circle, $S^1$. We similarly have $n$-dimensional spheres, tori, manifolds, CW complexes, etc. Moreover, fundamental homotopical constructions like truncation, suspension, localization, and stabilization can all be defined as higher inductive types. So we really do have a foundational system in which the “basic objects” are $\infty$-groupoids — or homotopy types — and in which all the constructions that we want to do with them are available and characterized in a convenient way.
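Lean has no native higher inductive types, so the following is only a hedged axiomatization of the circle’s presentation: it postulates the generators and a recursor without their computation rules, and since Lean’s `Eq` lives in the proof-irrelevant `Prop`, it can only illustrate the shape of the data, not genuine HoTT semantics.

```lean
-- The circle S¹, presented by one point generator and one loop.
axiom S1 : Type
axiom base : S1
axiom loop : base = base

-- Recursion principle (stated, not derived): to map S1 into a type C,
-- give a point of C together with a loop on it.
axiom circleRec {C : Type} (c : C) (p : c = c) : S1 → C
```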

Let me end by introducing the famous identity type from this perspective. The classical sort of “inductive type” allows us to define not only single types, but dependent types, i.e. families of types indexed by some other type. For instance, for any type $A$, there is a family of types $Vec$ indexed by the natural numbers, such that $Vec(n)$ is the type of lists of elements of $A$ of length $n$, i.e. $Vec(n) = A\times \cdots\times A$, with $n$ factors. This family can be presented by (1) a single generator of $Vec(0)$, called the “empty list”, and (2) for each $a:A$ and each $l:Vec(n)$, a generator of $Vec(n+1)$, called the “cons” of $a$ and $l$. This is a perfectly good (properly recursive) presentation, a.k.a. a positive type, which makes sense whether we regard types as sets or as $\infty$-groupoids.
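In Lean 4 syntax this family is the standard length-indexed vector; a sketch, with the element type as a parameter, the length as an index, and `n` auto-bound as an implicit argument:

```lean
inductive Vec (A : Type) : Nat → Type where
  | nil  : Vec A 0                      -- the empty list, in Vec A 0
  | cons : A → Vec A n → Vec A (n + 1)  -- the "cons" of a and l

-- A list of length 2.
example : Vec Nat 2 := .cons 1 (.cons 2 .nil)
```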

Identity types are another example of this sort of “inductively defined family”. Namely, given a fixed type $A$ and an element $a:A$, the identity type $Id_A(a,-)$ is a family of types indexed by $A$ itself, which is presented by a single generator of $Id_A(a,a)$, called $1_a$. Now if types are like sets, then there are never any “operations” to think about, so there is nothing in a presented type except what we put into it: thus $Id_A(a,b)$ is empty unless $a$ and $b$ are the same, and otherwise it contains only $1_a$. But in other cases, of course, there is no reason to expect this to be true.
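As an inductive family this is a one-constructor definition; a sketch in Lean 4, putting it in `Type` to match the post (Lean’s built-in `Eq` has the same shape but lands in the proof-irrelevant `Prop`):

```lean
inductive Id {A : Type} (a : A) : A → Type where
  | refl : Id a a

-- Path induction in action: pattern-matching on the single generator
-- refl is enough to reverse any path.
def Id.symm {A : Type} {a b : A} : Id a b → Id b a
  | .refl => .refl
```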

In particular, if types are $\infty$-groupoids, then a dependent type over $A$ is a functor from $A$ to the $\infty$-groupoid of $\infty$-groupoids (the universe type). This is the only thing it could possibly be; remember I said that in homotopy type theory, everything is functorial. That means that the identity type $Id_A(a,-)$ is the functor $A\to \infty Gpd$ which is freely generated by one element in $Id_A(a,a)$. What is that? By the Yoneda lemma, it’s exactly the hom-functor $Hom_A(a,-)$. In topological language, this is the space of paths in $A$ starting at $a$.

Thus, there is not really anything mysterious about identity types, nor should it be surprising that — if we regard types as $\infty$-groupoids — there is “more in the identity type” than we put into it, any more than it should be surprising that the free product of groups contains more elements than those coming directly from the two factor groups. Of course, it’s very important in working with the formal system of homotopy type theory that we have this “internalized” way to talk about the hom-functors, and so identity types naturally play a special role. But when you come down to it, they are just another sort of inductive (positive) type: a presentation of something with a left universal property.

Posted at February 4, 2013 4:36 AM UTC


32 Comments & 1 Trackback

Re: Presentations and Representations in Foundations

nice – you’ve made it clear how type theory and CT are two sides of the same coin: intro and elim rules (positive or negative) are essentially the same as universal properties (left or right). the point is that the constructions are determined by how they map into and out of other objects, rather than “what they are made of”.

now we just need to get the type theorists on board with the idea that a type is not just the collection of its closed terms.

Posted by: Steve Awodey on February 5, 2013 5:22 AM

Re: Presentations and Representations in Foundations

This is nice indeed but I would not characterize the restriction of a type to its collection of closed terms as “type theoretic” :-)

Type theorists have been concerned with the analysis of open terms in a type (where one looks at the elements of a type relativized to a context), e.g. for normalization proofs, and the type-theoretic idea of looking at elements of a type in a context is reminiscent of the notion of generalized element in category theory.

However, in an empty context, one would expect that each element of an inductive type reduces to one element in constructor form (something that you put inside the inductive type). I would not say that this comes from “thinking of types as sets” but from proof theory (Gentzen-Prawitz natural deduction). So a nice challenge would be to have a proof theoretic explanation of the univalence principle and higher-order inductive types.

Posted by: Thierry Coquand on February 6, 2013 8:18 PM

Re: Presentations and Representations in Foundations

Thanks for bringing this up! (Coincidentally, the computational behavior of proof theory came up on another thread just now too.) After thinking about this some more, I think I would say that what’s going on can be broken down into two steps.

  1. The proof theory leads us to expect that in an empty context, every term can be reduced to one in canonical form.
  2. Thinking of types as sets is what leads us to identify canonical forms with constructor forms.

This separation between canonical and constructor forms is exactly what Licata and Harper use to understand the computational content of 1-truncated homotopy type theory, and which is proposed to generalize to the untruncated case. The idea is that a term such as “$loop \circ loop$” in the identity type $Paths_{S^1}(base,base)$ should be regarded as canonical, even though it is not a constructor. (Alternatively, we could call path-concatenation a “constructor”, but I would prefer not to, since it’s not one of the constructors that the user specified when defining a type.)

More generally, I think the idea is that in negative types such as dependent function types, coinductive types, and the universe, we can specify exactly what the “canonical forms” of paths are. A canonical path between functions is a pointwise homotopy (function extensionality), a canonical path between coinductive objects is a bisimulation, and a canonical path between types is an equivalence (univalence). The (higher) groupoid operations on these types, like concatenation of paths, can be defined to compute on these canonical forms, e.g. the concatenation of paths coming from equivalences should come from the composition of equivalences.

By contrast, in positive types such as inductives and higher inductives, since all the higher structure should be freely generated, we consider a “canonical form” in an (iterated) path-type to be one obtained by applying the (higher) groupoid operations to constructor forms. (In general, because the word problem is undecidable, there will be “canonical forms” that are propositionally equal but not definitionally equal.) Then we may hope to recover the canonicity property of type theory, while still allowing for nontrivial terms in higher identity types that don’t come directly from constructors (such as the higher homotopy groups of spheres).

Posted by: Mike Shulman on February 8, 2013 12:16 AM

Re: Presentations and Representations in Foundations

One thing that should be stressed is that the notion of polarity in type theory/proof theory is not something which is fully understood. As I mentioned at your talk, though, it is most usefully understood as a property of connectives rather than of types—it extends to types by considering a type’s outermost connective. The original motivation for considering polarity (in Andreoli’s and Girard’s work) was from thinking about the way that connectives can be composed into new “macro connectives”. In particular, certain combinations of connectives (e.g., the ternary connective $-\otimes(-\oplus-)$) admit sound and complete inference rules, while others (e.g., $-\&(-\oplus-)$) do not.

The point is that for a particular type, it may not make sense to say that it is defined by its constructors or by its eliminators, because although its outermost connective (or outermost series of connectives) may have a particular polarity, its full definition may involve connectives of different polarity.

In fact, I am pretty sure that the right analysis (or at least, a good analysis) of higher inductives involves just such a polarity shift, because at least one interpretation of the coercion from positive to negative types (written $\uparrow$ in the linear logic literature) is of endowing a collection of values with some free algebraic structure.

Posted by: Noam Zeilberger on February 9, 2013 1:01 AM

Re: Presentations and Representations in Foundations

Although I talked about positive and negative “types”, I think if you read carefully you’ll see that I’m actually talking about “ways to construct types”, i.e. type-forming operations, which I think are the same as what you’re calling “connectives”.

Posted by: Mike Shulman on February 9, 2013 1:14 AM

Re: Presentations and Representations in Foundations

My point was that one of the conclusions of the work on polarity is that certain “ways to construct types” do not correspond to “logical” connectives in the full sense of the term, since they violate important proof-theoretic principles.

Take the ternary connective [&()][-\&(-\oplus-)] as an example. Suppose we try to give it left and right rules by composing the corresponding rules for &\& and \oplus. We get two right rules, ΓAΓBΓA&(BC)ΓAΓCΓA&(BC)\frac{\Gamma\vdash A \qquad \Gamma\vdash B}{\Gamma\vdash A \& (B \oplus C)} \quad \frac{\Gamma\vdash A \qquad \Gamma\vdash C}{\Gamma\vdash A \& (B \oplus C)} and two left rules, Γ,AΔΓ,A&(BC)ΔΓ,BΔΓ,CΔΓ,A&(BC)Δ\frac{\Gamma, A \vdash \Delta}{\Gamma, A \& (B \oplus C) \vdash \Delta} \quad \frac{\Gamma, B \vdash \Delta \qquad \Gamma,C\vdash \Delta}{\Gamma, A \& (B \oplus C)\vdash \Delta} Now, one of the conditions for being a logical connective is that one can derive initial axioms from atomic initial axioms—type-theoretically, this corresponds to η\eta-expansion, and is a source of canonical forms. So we need to be able to derive the sequent A&(BC)A&(BC)A \& (B\oplus C) \vdash A \& (B \oplus C) given derivations of AAA\vdash A and BBB\vdash B and CCC\vdash C. But in fact we can’t. In particular, the linear logic derivation AAA&(BC)ABBCCBCBCA&(BC)BCA&(BC)A&(BC) \frac{\frac{A\vdash A}{A \& (B\oplus C) \vdash A} \qquad \frac{\frac{B\vdash B \qquad C\vdash C}{B\oplus C\vdash B\oplus C}}{A \& (B\oplus C) \vdash B \oplus C}} {A \& (B\oplus C) \vdash A \& (B \oplus C)} cannot be replayed by the above “big-step” rules. Thus, although it makes sense to say that A&(BC)A \& (B \oplus C) is a negative type (since its outermost connective is negative), it does not make sense to say that [&()][-\&(-\oplus-)] is a negative connective—rather, it is a composition of negative and positive connectives.

My intuition is that higher inductives could be viewed in the same way, and that this is one way to explain the mismatch between “constructor forms” and “canonical forms”.

Posted by: Noam Zeilberger on February 9, 2013 2:49 AM

Re: Presentations and Representations in Foundations

Of course, if you compose something that has a right universal property with something that has a left universal property, the result isn’t generally going to have any universal property any more. But why should this tell us anything about higher inductives, which do have a left universal property?

Posted by: Mike Shulman on February 9, 2013 5:40 PM

Re: Presentations and Representations in Foundations

In proof theory, polarity is interesting because there are coercions between positive and negative types—either implicit, or explicitly written as unary connectives. Categorically, these coercions can be modelled as an adjunction between positive and negative types. (I recommend you read Melliès’s Dialogue categories and chiralities for an explanation of this.)

Under this interpretation of polarity, you do not have the equation “left universal = positive”. In particular, when you apply a left adjoint to an object with a left universal property, it retains a left universal property…but obviously living in a different category.

My suggestion (though it was only a suggestion) was that the freely generated higher structure of higher inductives might be modelled in this way.

Posted by: Noam Zeilberger on February 9, 2013 7:07 PM

Re: Presentations and Representations in Foundations

If I understand Mellies’ paper correctly, he’s talking about an adjunction between a category and its opposite (up to equivalence), corresponding to the “dualizing operation” $(-)^\perp$ of linear logic which turns positive things into negative things and vice versa, $(A\otimes B)^\perp = A^\perp \parr B^\perp$, and is contravariant and adjoint to itself. So indeed, when you apply a left adjoint $\mathcal{C} \to \mathcal{C}^{op}$ to something with a left universal property, you get something with a left universal property, but it’s a left universal property in $\mathcal{C}^{op}$, i.e. a right universal property in $\mathcal{C}$. Of course you can always interchange right and left universal properties by switching between a category and its opposite, but most categories arising “in nature” behave very differently from their opposites. The idea that objects with left universal properties come from “presentations” and “free generators” is of course only true in “algebraic” categories, and not in their opposites! But I think it gives the right intuition here, because $\infty$-groupoids are algebraic.

Posted by: Mike Shulman on February 9, 2013 9:15 PM

Re: Presentations and Representations in Foundations

If I understand Mellies’ paper correctly […]

I think mostly, but there’s some danger of being confused by notation, so let me try to clarify a bit. One of the questions that Melliès’s work tries to answer is how to transfer the ideas about duality in linear logic to more general settings beyond *-autonomous categories. As I understand it, historically one of the reasons polarities were studied was to address the difficulty in giving a computational interpretation to classical linear logic. Melliès’s work explains how the classical “unpolarized” models of linear logic (such as *-autonomous categories) arise by collapsing structure of the polarized models.

In the terminology of this paper, a dialogue chirality is a pair of monoidal categories $(\mathcal{A},\mathcal{B})$ equipped with a monoidal equivalence $(-)^* : \mathcal{A} \cong \mathcal{B}^{op(0,1)}$ and an adjunction $L \dashv R : \mathcal{B} \to \mathcal{A}$ together with some extra structure. The “coercions” I was referring to above correspond to $L$ and $R$—not to the dualizing operations. In ordinary linear logic they are invisible, while in the notation of polarized linear logic they are written with arrows $\uparrow$ and $\downarrow$.

Any *-autonomous category $\mathcal{C}$ gives rise to a dialogue chirality by taking $\mathcal{A} = \mathcal{B} = \mathcal{C}$ and $(-)^*$ as the dualizing operation of $\mathcal{C}$, together with $L$ and $R$ as the identity functors. But so too does any more general dialogue category $\mathcal{C}$ (i.e., a symmetric monoidal category with closures into some fixed object $\bot$) by taking $\mathcal{A} = \mathcal{C}$, $\mathcal{B} = \mathcal{C}^{op(0,1)}$ and the dualizing operation $(-)^*$ as the identity, together with $L$ and $R$ given by the (non-involutive) negation functors.

Finally, the requirement that $\mathcal{A} \cong \mathcal{B}^{op(0,1)}$ is not really essential and can be relaxed (this is discussed a bit in Section 8), in which case we have even more freedom in choosing an adjunction $L \dashv R$.

Posted by: Noam Zeilberger on February 10, 2013 4:17 AM

Re: Presentations and Representations in Foundations

for instance, there are no function types, since Grp is not cartesian closed

Well, I wouldn’t be so quick to conclude that. I’m trying not to speak nonsense as I’m completely not acquainted with non-abelian groups; nonetheless, it seems to me that group homomorphisms form a group (using pointwise operations). Hence there would be a function type. The fact that we cannot interpret the context to be a free product (or some kind of tensor product as in abelian groups) shouldn’t be relevant here, should it? As long as there is a notion of multiple-argument group morphism (I’m guessing the definition of bilinear functions of abelian groups can be directly reused).

It’s one of the things about type theory that we can have exponential types without products (like with multi-categories, I daresay, which are probably quite related to type theory).

That said, I’d like to understand the general topic of classifying type theories like we classify categories (and higher categories). The obvious method is to describe a type theory as the internal language of a family of categories and reuse the categorical denomination. But what if we want to describe properties directly in terms of type-theoretic constructs? Well, maybe someone knows something about that. I’m done ranting!

Posted by: Arnaud Spiwack on February 5, 2013 10:40 AM

Re: Presentations and Representations in Foundations

Did you check whether the pointwise product of two group homomorphisms is again a group homomorphism?

Posted by: Todd Trimble on February 5, 2013 12:12 PM

Re: Presentations and Representations in Foundations

Oh I see. I knew I must have been saying something stupid.

Still. The failure to have a function type is not really related to cartesian-closedness.

Posted by: Arnaud Spiwack on February 5, 2013 12:48 PM

Re: Presentations and Representations in Foundations

I don’t understand “not really related”. If a category “fails to have function types”, which I interpret here to mean “is not closed” or, more precisely, has no closed category structure, then this directly implies its not being cartesian closed.

Posted by: Todd Trimble on February 5, 2013 2:17 PM

Re: Presentations and Representations in Foundations

Ok, it wasn’t phrased properly. Not being cartesian closed is certainly a necessary condition for the absence of function space, I meant it wasn’t sufficient. Oh well.

Posted by: Arnaud Spiwack on February 6, 2013 9:43 AM

Re: Presentations and Representations in Foundations

In ordinary non-linear type theory (which is the only sort I was really talking about here), I think contexts really have to be interpreted by cartesian products, and function types really have to be interpreted by (weak) cartesian closure. In order to use a non-cartesian monoidal structure, you have to go to some kind of linear type theory.

Posted by: Mike Shulman on February 5, 2013 4:53 PM

Re: Presentations and Representations in Foundations

Is this really the case? The rules for Pi-types in dependent type theory are a bit weaker than categorical exponentials, and so it’s certainly conceivable that a category could fail cartesian closure, and yet still have Pi-types.

(The category of semi-simplicial sets looks, at least at first, like it might give an example of this, with the “geometric product” and its corresponding hom giving a candidate for a non-cartesian monoidal closed structure that could be used in a model of type theory. I’m no longer terribly confident that this can be made to work for a full model, but it’s at the very least suggestive of how non-cartesian Pi-types could potentially look.)

Posted by: Peter LeFanu Lumsdaine on February 5, 2013 5:29 PM

Re: Presentations and Representations in Foundations

You’ve said this before, but I don’t really understand what you have in mind. In nonlinear type theory, contexts have diagonal and projection maps coming from the weakening and contraction structural properties, and any monoidal category with diagonal and projection maps must be cartesian monoidal. And then if you include judgmental $\eta$-conversion, then I’m pretty sure that the rules for function types (let’s stick with the non-dependent version for simplicity) really do say exactly that they are cartesian-closure exponentials. If you leave out judgmental $\eta$, then I think you get weak exponentials, which satisfy the existence but not the uniqueness aspect of the universal property. Am I missing something?

Posted by: Mike Shulman on February 5, 2013 5:42 PM

Re: Presentations and Representations in Foundations

Well, if the context has to be interpreted by a product, then sure, almost by definition (I don’t know about Peter’s remark; I never thought about it, so let’s stay at the level of non-dependent exponentials).

But there is nothing preventing me from cooking up a multi-category with exponentials but no products (still with duplication and weakening and such). That would be a model of the (minimal) simply typed lambda-calculus, I guess.

I suppose there is a way to force it into a product (by considering a new category whose objects are lists of objects of the multi-category). So I’m not sure it matters. Just that, when thinking with type theory as a starting point, the products are not so important as in category theory. Or so it seems to me, at least.

Posted by: Arnaud Spiwack on February 6, 2013 10:03 AM

Re: Presentations and Representations in Foundations

Bart Jacobs discusses the relation between cartesian-closedness and product types in ‘Simply typed and untyped lambda calculus revisited’ (1992) available for download on his homepage.

Posted by: thomas holder on February 6, 2013 12:07 PM

Re: Presentations and Representations in Foundations

Thanks! Here’s a friendly hint (which has been building up inside me for a while, so don’t take it personally; it’s more of a general suggestion for all blog commenters). When you mention a paper that’s related to a blog discussion, I think people are much more likely to go have a look at the paper if you also say something about the content of the paper and how it’s related. For me personally, at least, it’s harder to motivate myself to read a paper if someone has just told me “you should read this paper” versus if someone’s told me about why I should read that paper and a bit about what I can expect to see in it. Furthermore, the fact that you’re mentioning the paper means that you already understand something about it, so it’s more efficient all around for you to give a brief summary, rather than everyone else having separately to go track down the paper and puzzle through it.

I’ve certainly been guilty of this myself in the past, and of course time is in short supply for all of us. But it’s something to strive for, if we have time.

Posted by: Mike Shulman on February 6, 2013 5:00 PM

Re: Presentations and Representations in Foundations

Indeed, you’re right; the most “natural” place to interpret (non-dependent, non-linear) type theory is a cartesian multicategory. But as soon as you have cartesian products (which the category of groups does), then I think the multimorphisms in a cartesian multicategory must be identified with morphisms out of cartesian products. (And, as you say, any cartesian multicategory can be embedded in a cartesian monoidal category.)

So you could have a nonlinear type theory with function types which is not cartesian closed, but it would fail to be cartesian closed only by virtue of not having cartesian products. The function types would still behave like the exponential objects in a cartesian closed category.

Posted by: Mike Shulman on February 6, 2013 4:47 PM

Re: Presentations and Representations in Foundations

Hi Mike. I’ve been really enjoying these posts. But while every sentence feels exactly right, in reality I have no idea what they actually mean. I think what I, and perhaps others, need now is not to read more general discussion but to see type theory in action, to see some familiar bits of mathematics re-expressed in type theory. I imagine a few well chosen examples, written out in complete detail (and involving more than replacing epsilons with colons), would do wonders. And after that I can start worrying about homotopy type theory.

Posted by: James Borger on February 5, 2013 11:29 AM

Re: Presentations and Representations in Foundations

I second this!

Posted by: Charles Rezk on February 5, 2013 3:11 PM

Re: Presentations and Representations in Foundations

If you can wait a few more weeks, there may be something just for you. (-:

Posted by: Mike Shulman on February 6, 2013 5:32 AM

Re: Presentations and Representations in Foundations

I look forward to it!

Posted by: James Borger on February 6, 2013 6:48 AM

Re: Presentations and Representations in Foundations

Here’s something I can imagine type theory handling gracefully, in a way that would improve some people’s lives. However, I don’t know whether type theory does handle this gracefully: hopefully Mike or Steve or Peter or someone will tell us.

Suppose we’re thinking about internal groups in some category $E$ (with finite products) other than $Set$. We’d like to be able to take identities from ordinary group theory and transplant them to $E$. For example, we’d like to know that identities such as

$$(x y)^{-1} = y^{-1} x^{-1}$$

and

$$[[x,y^{-1}], z]^y \cdot [[y, z^{-1}], x]^z \cdot [[z, x^{-1}], y]^x = 1$$

hold in $E$ simply because they hold in $Set$.

Without further technology, this seems to be a cumbersome task. First we have to express the equations, and then we have to do a huge diagram chase to prove them. (Even the first example appears to be quite laborious.) What we’d really like is a magic wand we could wave over the $Set$-based proof to turn it into a proof valid in $E$.

And there is such a wand. We discussed this a while ago, in a thread I’m too lazy to find right now, but there’s an explanation on pages 12–13 of my introduction to topos theory. It hasn’t really got anything to do with toposes. The key idea is “generalized element”, which is one of the basic ingredients of the so-called internal language of a category. Essentially, one takes the $Set$-based proof and re-interprets all the elements mentioned in the proof as generalized elements; and that’s that.
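(To make the first identity concrete, and anachronistically for 2013: in a modern proof assistant it is a one-line theorem, and the internal-language reading interprets the same proof for internal groups in any suitable $E$. A sketch, assuming Lean 4 with Mathlib, where the lemma is `mul_inv_rev`:

```lean
import Mathlib.Algebra.Group.Basic

-- Proved once from the group axioms; no diagram chase required.
example {G : Type} [Group G] (x y : G) : (x * y)⁻¹ = y⁻¹ * x⁻¹ :=
  mul_inv_rev x y
```
)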

As I said, this is the kind of thing I can imagine type theory taking very naturally in its stride. What made me think of this application was Mike’s mention of generalized elements and internal languages in his post. I don’t know enough about it, though, to pronounce confidently on this.

Posted by: Tom Leinster on February 6, 2013 6:32 AM

Re: Presentations and Representations in Foundations

Yeah, I’ve had the same impression. But I do have a question about this: is it necessary to specify the domain of generalized elements in type theory?

For instance, in classical algebraic geometry, you could talk about the function $C^2\to C$ defined by $y=z^2+w$. This is nice and easy. In scheme theory, one might talk about the map $A^2\to A^1$ defined by the algebra map $C[y]\to C[z,w]$ defined by $y\mapsto z^2+w$. This is essentially the same thing, but a bigger mouthful. In practice, people would say the first because everyone knows how to translate it into scheme language. Now we can also use generalized elements (aka ring-valued points aka the functor of points) to say something completely rigorous but closer to the classical language as follows: Let $A^2\to A^1$ be the morphism defined on $R$-points, for any algebra $R$, by $y=z^2+w$, for all $z,w\in R$. But what I’d like to do is to be able to suppress any mention of $R$. If we define an “element of $C$” to be an algebra $R$ plus an element of $R$, then we’re in trouble when we take two elements of $C$ because they could have different $R$’s. My sense is that this is not a problem in type theory, but I’m never sure if it’s because people are speaking loosely, essentially using classical language abusively, or if you don’t need to mention $R$ in some real sense.

Posted by: James Borger on February 6, 2013 7:08 AM

Re: Presentations and Representations in Foundations

Ah, yes, it’s coming back to me now. I distinctly remember you talking about the functor of points in connection with generalized elements.

I have the same question as you regarding domains.

Posted by: Tom Leinster on February 6, 2013 3:59 PM

Re: Presentations and Representations in Foundations

Indeed, this is exactly the point of the internal language. If you can prove a theorem in a flavor of type theory that interprets internally in some category, then that theorem must be true in that category.

Internal groups are also one of my own favorite examples. The type-theoretic theory of a group involves, among other things, a multiplication term that we can express as
$$(x:G),(y:G) \;\vdash\; (m(x,y):G).$$
This is type-theorist notation for a term of type $G$ containing free variables $x$ and $y$ of type $G$. When interpreted categorically, the comma on the left side of the $\vdash$ becomes a cartesian product (or, more generally, the comma in the domain of a multimorphism in a multicategory), and the $\vdash$ itself becomes an arrow, so this term becomes a morphism $G\times G\to G$. Similarly, the inversion operator will be
$$(x:G) \;\vdash\; (i(x):G).$$

Because propositions are types, the same thing happens to theorems. For instance, say we can prove in our type theory that if $G$ is a group object, then $(x y)^{-1} = y^{-1} x^{-1}$. What this means is that there’s some term $t$ we can construct with the type:
$$(x:G),(y:G) \;\vdash\; (t : i(m(x,y)) = m(i(y),i(x))).$$
In order to see what this means categorically, we need to know what $=$ means. Equality in type theory is, like everything else we might try to prove, a type—but it’s a dependent type, because the statement “$a$ is equal to $b$” involves elements $a$ and $b$ of some base type. In other words, for any type $A$ and $a,b:A$, there is a type called $a=b$, and to prove that $a$ is equal to $b$ is precisely to exhibit a term of type $a=b$. In particular, given free variables $x,y:G$ in our group, we have two further elements $i(m(x,y)):G$ and $m(i(y),i(x)):G$, so we can form the equality type $i(m(x,y)) = m(i(y),i(x))$, and proving the identity $(x y)^{-1} = y^{-1} x^{-1}$ means exhibiting an element of this type.

Now categorically, a type $B(a)$ dependent on an element $a:A$ means a morphism with codomain $A$, say $B\to A$. We think of this morphism as having the type $B(a)$ as the “fiber” over each $a:A$. Thus, the type $a=b$ dependent on $a,b:A$ must be some morphism with codomain $A\times A$ (two copies of $A$, one for each variable $a$ and $b$). It shouldn’t be too surprising that this morphism is the diagonal $A\to A\times A$.

Next, if we have an expression $a(x):A$ depending on a free variable $x:X$, then by substitution we get a dependent type $B(a(x))$ depending on an element of $X$. If you think about fibers and so on, then eventually you reach the conclusion that the morphism with codomain $X$ corresponding to $B(a(x))$ ought to be the pullback of the morphism $B\to A$ (corresponding to $B(a)$) along the morphism $X\to A$ (corresponding to $a(x)$).

In particular, if we have two terms $a_1(x), a_2(x):A$ involving a variable $x:X$, then the type $a_1(x)=a_2(x)$ is the morphism into $X$ obtained by pulling back the diagonal of $A$ along $(a_1,a_2):X\to A\times A$. This is otherwise known as the equalizer of the morphisms $a_1:X\to A$ and $a_2:X\to A$.

Finally, to give an expression $b(a):B(a)$ with a dependent type and involving a variable $a:A$ should mean to give a morphism $A\to B$, which moreover “sends each element $a:A$ to a point in the fiber over itself”. Of course, this means that this morphism $A\to B$ is a section (er, right inverse?) of the morphism $B\to A$ corresponding to the dependent type $B(a)$. Thus, to give an inhabitant of $a_1(x)=a_2(x)$ depending on a free variable $x$ means to give a section of the equalizer of $a_1:X\to A$ and $a_2:X\to A$, which is exactly to say that these morphisms are equal.

In particular, our term
$$(x:G),(y:G) \;\vdash\; (t : i(m(x,y)) = m(i(y),i(x)))$$
yields a section of the equalizer of the two morphisms $G\times G\to G$ that represent “$(x y)^{-1}$” and “$y^{-1} x^{-1}$”, and hence these morphisms are equal.

I think the answer to the question about domains is that an element in type theory corresponds categorically not to an arbitrary generalized element, but to the generalized element with “universal domain”. That is, a term with free variables $x:A$ and $y:B$, say, is a morphism with domain $A\times B$, and the universal property of a product means that any “generalized” version of this element must factor uniquely through this “universal” one. Hence, anything we prove about the universal element automatically carries over to all generalized elements.

(You can also express the categorical semantics of type theory directly in terms of generalized elements. This is usually called “Kripke-Joyal semantics.”)

Posted by: Mike Shulman on February 6, 2013 4:42 PM
Read the post Category Theory in Homotopy Type Theory
Weblog: The n-Category Café
Excerpt: Doing category theory in homotopy type theory allows us to satisfy the principle of equivalence automatically.
Tracked: March 5, 2013 4:19 AM

Re: Presentations and Representations in Foundations

Could it be that universal properties are not objects that are definable independently of coordinate systems and bases (or isomorphs)?

Posted by: Stephen Paul King on March 24, 2013 11:12 PM

Re: Presentations and Representations in Foundations

No, I don’t think it could.

Posted by: Mike Shulman on March 25, 2013 2:18 AM
