
March 11, 2011

Homotopy Type Theory, I

Posted by Mike Shulman

Last week I was at a mini-workshop at Oberwolfach entitled “The Homotopy Interpretation of Constructive Type Theory.” Some aspects of this subject have come up here a few times before, but mostly as a digression from other topics; now I want to describe it in a more structured way.

Roughly, the goal of this project is to develop a formal language and semantics, similar to the language and semantics of set theory, but in which the fundamental objects are homotopy types (a.k.a. \infty-groupoids) rather than sets. There are several motivations for this, which I’ll mention below, but the most radical hope (which has been put forward most strongly by Voevodsky) is to create a new foundation for all of mathematics which is natively homotopical—that is, in which homotopy types are the basic objects, rather than being built up out of other basic objects such as sets.

I find this idea extremely exciting; at times I think it has the potential to transform the practice of everyday mathematics in a way that no foundational development has done since (probably) Cantor. But the most exciting thing of all is that the essential core of this language already exists in a completely different area of mathematics: intensional type theory. All that needs to be done is to reinterpret some words and perhaps add some additional axioms. (I especially enjoy the coincidence of terminology which allows me to say “homotopy type theory” and mean both “the homotopy version of type theory” and “the theory of homotopy types”!)

One reason this is especially exciting is that intensional type theory is (with good reason) the foundation of some of the most successful proof assistants for computer-verified mathematics, such as Coq and Agda. Thus, the \infty-categorical revolution, if carried out in the language of homotopy type theory, will support and be supported by the inevitable advent of better computer-aided tools for doing mathematics. I never would have guessed that the computerization of mathematics would be best carried out, not by set-theory-like reductionism, but by an enrichment of mathematics to be natively \infty-categorical.

Obviously there are more facets to this story than will fit in one blog post. Today, I’ll lay the groundwork by talking about the correspondence between homotopical semantics and intensional type theory; this is due to Steve Awodey, Nicola Gambino, Richard Garner, Peter Lumsdaine, Benno van den Berg, Michael Warren, and perhaps others I have forgotten (the original inspiration can be traced back at least to Hofmann and Streicher). In subsequent posts, I’ll say a bit about what mathematics looks like when done in homotopy type theory, and about some homotopically motivated axioms we may want to add to intensional type theory; this is the direction of Voevodsky’s recent work.

To start with, let’s recall a bit about set-theoretic language. Consider the following three ways to formalize set-theoretic reasoning:

  1. Axiomatizing sets with a global membership relation between them, as in ZFC.

  2. Axiomatizing the category of sets and functions, as in ETCS.

  3. Axiomatizing sets and elements equipped with operations, such as constructions of products, exponentials, power sets, etc.

The first two have been called “material” and “structural” set theory, respectively, while the third is basically a funny way to describe extensional type theory. I’ll come back to the meaning of “extensional” in a bit; if you aren’t comfortable with type theory, you can think of extensional type theory as an “algebraicization” of the structure of a category, which says “type” instead of “object” or “set”. Where a category has a cartesian product determined up to isomorphism by a universal property, an extensional type theory has a specified type A\times B, with term-formation and computation rules that express essentially the same universal property, but in a set-like way involving elements. See also the nLab entry.

We also care particularly about dependent type theories, which we’ve talked about here before in general terms. Recall that this means that for any type A we have a notion of “A-indexed family of types,” so that if B is such a thing we have a type B(x) for each x\in A. The basic operations on dependent types are the dependent sum \Sigma_{x:A} B(x), which is the type of pairs (x\in A, y\in B(x)), and the dependent product \Pi_{x:A} B(x), which is the type of functions f such that for all x\in A, f(x)\in B(x). When B is constant (independent of x), the dependent sum reduces to the binary product A\times B, and the dependent product reduces to the function type B^A.
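As a concrete illustration, here is a sketch of these operations in Lean 4 syntax (one hypothetical choice of formal system; the helper names `sigmaToProd` and `piToFun` are mine, not standard):

```lean
universe u v
variable (A : Type u) (B : A → Type v)

-- The dependent sum: pairs (x, y) with x : A and y : B x
example : Type (max u v) := (x : A) × B x

-- The dependent product: functions f with f x : B x for every x
example : Type (max u v) := (x : A) → B x

-- When B is constant at C, the dependent sum gives the binary product A × C…
def sigmaToProd (C : Type v) : ((_ : A) × C) → A × C :=
  fun p => (p.1, p.2)

-- …and the dependent product gives the function type A → C (i.e. C^A)
def piToFun (C : Type v) : ((_ : A) → C) → (A → C) :=
  fun f => f
```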

In the “structural/categorial” approach we think of a type dependent on A as a morphism B\to A, whose fiber over x\in A is B(x). There are a couple of subtleties here. Firstly, in basic dependent type theory nothing requires that every morphism B\to A be interpretable as the projection of a dependent type. Thus, for a categorial theory of equal generality we should consider categories equipped with a special class of maps, often called display maps (and satisfying some axioms). Secondly, since substitution in type theory is strictly functorial, the display maps should have strictly functorial pullbacks. Of course strictly functorial pullbacks are hard to find in nature, but there are standard techniques for strictifying things.

The dependent sum and dependent product are modeled categorially by left and right adjoints to pullback. Thus, the “natural” semantics of (extensional) dependent type theory with dependent sums and products is in a locally cartesian closed category—although to be more precise we should consider the \Sigma and \Pi functors only acting on the display maps.
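In symbols: for a morphism f\colon A\to B in a locally cartesian closed category \mathcal{C}, pullback along f sits in an adjoint string

```latex
\[
  \Sigma_f \;\dashv\; f^{*} \;\dashv\; \Pi_f ,
  \qquad
  \Sigma_f,\ \Pi_f \colon \mathcal{C}/A \to \mathcal{C}/B ,
  \qquad
  f^{*} \colon \mathcal{C}/B \to \mathcal{C}/A .
\]
```

Taking f to be the unique map A\to 1 recovers the type-theoretic \Sigma_{x:A} and \Pi_{x:A} as operations sending a type dependent on A to a plain type.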


Now suppose we want to axiomatize \infty-groupoids instead of sets. It’s hard (for me) to imagine a way to do that based on a membership relation, but we could consider axioms on an (\infty,1)-category that make it behave like \infty Gpd, or we could extend the syntax of extensional type theory to describe \infty-groupoids rather than sets.

We do have a pretty good idea of what (\infty,1)-categorical axioms we should use, coming from the theory of Grothendieck (\infty,1)-topoi. Specifically, we have the \infty-categorical Giraud exactness axioms, and also the notion of an object classifier, and probably we’ll want a notion of local cartesian closure as well. However, (\infty,1)-categories (using any model) are somewhat technical to work with directly. Moreover, since they include infinitely many sorts of data (n-morphisms for all n), they can’t serve as a fully “elementary” foundational theory; they need a metatheory which at least knows about natural numbers.

However, in homotopy theory we have tools for working with (\infty,1)-categories in a simpler way: we present them using structures on 1-categories, such as model categories or other weaker things. Amazingly, one particular categorical structure of this sort turns out to correspond fairly closely to a particular already existing version of type theory, called intensional type theory.

The categorical structure in question looks roughly like this. We have a category equipped with a single weak factorization system, which we think of as like the (acyclic cofibration, fibration) factorization system in a model structure. The class of “fibrations” also plays the role of the “display maps” mentioned above, which represent dependent types. Since every type can be considered as dependent on the unit type 1 (or more precisely, dependent on the empty context), every morphism A\to 1 is a fibration/display-map, i.e. “every object is fibrant.” (We don’t assume the category has even finite limits or colimits, but we do assume it has a terminal object and pullbacks of fibrations. Combined with every object being fibrant, this implies it has finite products.) For a type theory with dependent products, we also want the category to be locally cartesian closed, or at least to have \Pi-functors acting on the fibrations.

This looks very much like a category of fibrant objects, except that we don’t postulate a separate class of “weak equivalences.” In fact, the only equivalences we’re interested in are the homotopy equivalences (which suggests that in a sense “all objects are cofibrant” as well, although we don’t actually have a notion of “cofibration”). The notion of “homotopy” here is defined in the usual way for a category of fibrant objects: every object X has a path object P X obtained by factoring the diagonal X\to X\times X as an acyclic cofibration followed by a fibration, and a homotopy f\sim g is a map X\to P Y which projects to (f,g)\colon X\to Y\times Y.
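Written out, the factorization and the resulting notion of homotopy are:

```latex
\[
  X \;\overset{\sim}{\rightarrowtail}\; P X \;\twoheadrightarrow\; X \times X
  \qquad \text{(factoring the diagonal } \Delta \colon X \to X \times X \text{)},
\]
\[
  f \sim g
  \quad\text{means there is}\quad
  H \colon X \to P Y
  \quad\text{with}\quad
  \big( X \xrightarrow{\,H\,} P Y \twoheadrightarrow Y \times Y \big) = (f,g).
\]
```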


Now, what is “intensional type theory” and how does it correspond to categories with weak factorization systems? Recall that one use of dependent types is to represent logic: a proposition \varphi(x) with a free variable x of type A can be represented by a type dependent on A, whose fibers might be subsingletons representing truth values, or they might contain more information, such as collecting all proofs of \varphi(x). In either case, the operations on dependent types correspond to logical operations on propositions: A\times B is “A and B,” B^A is “A implies B,” \Pi_{x:A} B(x) is “for all x\in A, B(x),” and \Sigma_{x:A} B(x) is either “there exists an x\in A such that B(x)” or “the type of all x\in A such that B(x)” (the latter being the type of witnesses, or proofs, of the former).
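The dictionary can be summarized in, say, Lean syntax (a sketch; here the Prop-valued forms are shown, alongside the subtype, which keeps the witnesses):

```lean
variable (A : Type) (B : A → Prop) (P Q : Prop)

example : Prop := P ∧ Q            -- the product:           “P and Q”
example : Prop := P → Q            -- the function type:     “P implies Q”
example : Prop := ∀ x : A, B x     -- the dependent product: “for all x, B x”
example : Prop := ∃ x : A, B x     -- the dependent sum:     “there exists x with B x”
example : Type := { x : A // B x } -- the type of all x : A such that B x
```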

Now one of the most basic propositions in mathematics is an assertion of equality, “x=y”. Thus, for any type A, we should have a type dependent on two variables x,y\in A expressing this proposition. This type is usually written as Id_A(x,y) and called an identity type. If each type Id_A(x,y) is always a subsingleton (i.e. if two things can be equal in at most one way), then the identity types are called extensional; otherwise they are intensional.

Type theory with extensional identity is the most appropriate for a correspondence with set theory and 1-category theory; in that case the identity type is represented by the diagonal A\to A\times A. However, extensional type theory is poorly behaved in ways that type theorists care about, so intensional type theory is also well studied, but for completely separate reasons from ours. What we care about right now is that the axioms for intensional identity types correspond very closely to saying that Id_A is a path object!

Specifically, the rules for identity types say:

  • For each x\in A, there is a specified element r_x \in Id_A(x,x) (every point is equal to itself in a canonical way); and

  • Given a type C(x,y,p) which may depend on two points x,y\in A and an equality p\in Id_A(x,y) between them, if we have a way to produce an element of C(x,x,r_x) for any x\in A, then we can “transport” it along any equality p\in Id_A(x,y) to produce a canonical element of C(x,y,p) (and in such a way that if we transport it along r_x then it doesn’t change).

The first rule says that we have a map r\colon A\to Id_A which factors the diagonal A\to A\times A as A\to Id_A \to A\times A. Since Id_A is a type dependent on A\times A, the second map in that factorization is a display map, a.k.a. fibration. And the “transport” property of the second rule is at least intuitively analogous to the “path-lifting” property of a fibration; it implies, among other things, that r\colon A\to Id_A satisfies the left lifting property with respect to all display maps. In fact, for any suitable weak factorization system, the path objects satisfy the rules of identity types, although there are tricky coherence questions to be addressed; see this overview by Awodey and Warren, Warren’s thesis, and more recently this paper by van den Berg and Garner. Conversely, Gambino and Garner showed that the identity type rules imply a whole weak factorization system.
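The two rules above are exactly the inductive definition of the identity type together with its eliminator (often called J). Here is a sketch in Lean syntax, defining a fresh `Path` type rather than reusing the built-in equality:

```lean
-- The identity type: the only specified elements are the reflexivities r_x
inductive Path {A : Type} : A → A → Type where
  | refl (x : A) : Path x x

-- The eliminator (“transport”): from c : C x x (refl x) for every x,
-- produce an element of C x y p over any equality p : Path x y
def Path.elim {A : Type} (C : (x y : A) → Path x y → Type)
    (c : (x : A) → C x x (Path.refl x)) :
    (x y : A) → (p : Path x y) → C x y p
  | _, _, Path.refl x => c x

-- The computation rule: transporting along refl doesn’t change anything
example {A : Type} (C : (x y : A) → Path x y → Type)
    (c : (x : A) → C x x (Path.refl x)) (x : A) :
    Path.elim C c x x (Path.refl x) = c x :=
  rfl
```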


To conclude, I should mention one further aspect of this correspondence. We know that any object X in a model category has a “simplicial resolution” R_\bullet X which encodes all paths, paths-between-paths, etc. in X. The simplicial object R_\bullet X is Reedy fibrant, which implies that it is “internally a Kan complex” in the sense that Hom(A, R_\bullet X) is a Kan complex for any cofibrant A. We can then use these \infty-groupoid hom-spaces to make the model category into an (\infty,1)-category.

We only need the one (acyclic cofibration, fibration) factorization system to construct R_\bullet X, but we do need finite limits and colimits, which the syntactic category of a type theory doesn’t necessarily have. I believe the pullbacks of fibrations that it has are sufficient to construct a “semi-simplicial” Kan complex (one lacking degeneracies), and since any semi-simplicial Kan complex admits a choice of degeneracies, this is probably enough to capture the homotopy theory. However, we can construct a similar reflexive globular resolution without using any colimits, and (using some special properties of the identity type weak factorization system) the resulting globular object admits the structure of a Batanin/Leinster \omega-groupoid. This was shown independently by Lumsdaine and by van den Berg and Garner. In his thesis, Peter Lumsdaine extends these hom-\infty-groupoids to make the category of types/contexts into a Batanin–Leinster (\infty,1)-category.

It should have been obvious that these are two viewpoints on the same thing, but I didn’t realize it until Urs pointed it out. People are now thinking about ways to relate them directly.

Posted at March 11, 2011 8:14 PM UTC


103 Comments & 2 Trackbacks

Re: Homotopy Type Theory, I

Very nice summary, Mike!

I guess part II of the story is about how Vladimir Voevodsky has been really pushing things forward lately. His Univalent Foundations page is here:

http://www.math.ias.edu/~vladimir/Site3/Univalent_Foundations.html

Posted by: Steve Awodey on March 11, 2011 9:43 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

I guess part II of the story is about how Vladimir Voevodsky has been really pushing things forward lately.

Yes, it will be! If anyone wants to read ahead, in addition to Voevodsky’s page I recommend Andrej Bauer’s page. His Github repository contains a link to Voevodsky’s, and also a re-worked development of the beginning of Voevodsky’s work using more idiomatic Coq techniques. His is also written using “Coqdoc” so that a PDF can also be generated from it, which contains some helpful introductory and explanatory remarks.

Posted by: Mike Shulman on March 12, 2011 4:44 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Thanks Mike. This was great!

Suppose, hypothetically, one knew some (excellent) undergraduates who might be interested in getting into this sort of thing. Where to start?

I’d be interested in learning

  • what sort of mathematical background they would do well to acquire;
  • what sources would provide an entry to some of the specific details of this program;
  • what sources might provide a good overview of this program to their mentors (whose logic background is comparatively weaker than the undergraduates’).
Posted by: Emily Riehl on March 11, 2011 10:49 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Hi Emily,

That’s a tricky question, and I’d welcome any input from anyone else. I think the necessary background has two very different parts: type theory and homotopy theory. You probably know where people can learn homotopy theory; and maybe one of the type theorists listening in can tell us a good book to learn type theory from. Type theory is a bit tricky (for me, at least) because type theorists seem to think very differently from other mathematicians, and it’s hard to find their way of thinking explained explicitly anywhere. At Oberwolfach I had the opportunity to read Per Martin-Löf’s book Intuitionistic Type Theory, which I thought had a very clear explanation of the philosophy (or one philosophy at least) behind the “usual” type theoretic approach to foundations.

In terms of introductions to homotopy type theory in particular, I don’t really know of a basic introduction to the whole project suitable for the undergraduates or their mentors; the whole thing is so new that people have been spending more time doing it than explaining it. (This sequence of blog posts is intended to be a rough approximation.) Probably most any of the papers I linked to above would be reasonable starting points to learn about the correspondence I sketched above between type theory and homotopy theory, although if your hypothetical undergraduates don’t want to spend a while understanding Batanin \omega-categories they should perhaps avoid the papers specifically about those. (But Peter Lumsdaine’s thesis, which is officially about classifying Batanin \omega-categories of type theories, looks like it contains some very good introductory material as well; and being one of the more recent papers, it has references to most of the others.)

Regarding actually doing math in homotopy type theory, eventually one will need to dive into Voevodsky’s Coq files, but as I said in response to Steve, I recommend starting with Andrej Bauer’s version, accessible from here. His also contains some introductory comments about Coq, although it’s not a substitute for a real book or tutorial.

Posted by: Mike Shulman on March 15, 2011 6:43 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Mike wrote:

type theorists seem to think very differently from other mathematicians

I agree! And it’s a barrier.

I have one humble suggestion if you ever find yourself explaining type theory to a novice:

use brackets.

Use brackets plentifully; use them where no type theorist would ever use them; use them.

For example, here’s a typical — even quite short — type-theoretic expression: x, y : X \vdash Id(x, y) : type. You can presumably look at that and immediately know where the brackets implicitly are — I mean, what the logical precedence is. But it’s not the same for a novice, who may read it as x, y : (X \vdash (Id(x, y) : type)) or x, y : (X \vdash Id(x, y)) : type or any of several other wrong possibilities. Or they may simply see a jumble of symbols and waste valuable mental energy figuring out how it’s supposed to be parsed. (You can tell I’m speaking from personal experience.) So, on behalf of all non-type-theorists in the world, I make a plea for (x, y : X) \vdash (Id(x, y) : type) — at least until everyone’s got the hang of it.

Posted by: Tom Leinster on March 15, 2011 7:25 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Thanks for that suggestion, Tom! I think that wasn’t a problem for me, since I came to type theory from already knowing some logic and was used to the idea that \vdash binds more loosely than just about anything. So I didn’t realize it might be difficult for someone unfamiliar with it.

What about (x:A), (y:B) \vdash (C(x,y) : Type)? Is it clear that the \vdash binds more loosely than the comma?

Posted by: Mike Shulman on March 15, 2011 4:07 PM | Permalink | PGP Sig | Reply to this

Re: Homotopy Type Theory, I

Hard for me to say: I’m too familiar with this notation now myself.

I thought a bit more about why this is a particular problem in type theory, as opposed to other branches of mathematics. Several reasons came to mind. One is that there are quite a lot of unusual symbols: logicians seem particularly happy to use relatively uncommon symbols or make up new ones. Other branches of mathematics are perhaps more likely to recycle familiar symbols, which in a way makes things easier because everyone knows how to parse them.

Another is really mundane: Latex seems to put a little more space around colons than turnstiles (or maybe it’s just the shape of the symbols that makes it look that way), suggesting that colons bind loosely, when e.g. in this example that’s misleading.

I was thinking that for a very long time, logic has scared people off with its syntax. (Think back to Russell and Whitehead.) Most people aren’t nearly so scared when they see, say, a PDE that takes up three lines. You might think it looks horrendous, but quite likely you know how to read and understand it at a superficial level. But logic often uses symbols that are less widely understood.

Posted by: Tom Leinster on March 16, 2011 12:17 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

logicians seem particularly happy to use relatively uncommon symbols or make up new ones

Especially in the foundations of mathematics, this may be because they don't want to reuse, in their metalanguage, any symbols that might already appear in the object language that they're studying.

Posted by: Toby Bartels on March 16, 2011 11:48 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Latex seems to put a little more space around colons than turnstiles

Yes, that’s also annoyed me in writing type-theoretic notation. We have \colon which adjusts the spacing for when colons are used in f\colon X\to Y; it would be nice to have another command which adjusts the spacing differently for use in typing declarations.

Most people aren’t nearly so scared when they see, say, a PDE that takes up three lines.

Isn’t that likely just because everyone learns calculus in university, but not everyone learns logic (sadly)?

Posted by: Mike Shulman on March 16, 2011 3:46 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

We have \colon which adjusts the spacing for when colons are used in f\colon X \to Y; it would be nice to have another command which adjusts the spacing differently for use in typing declarations.

But the colon in ‘f\colon X \to Y’ is a typing declaration! It's the example that most mathematicians know, which we can use to introduce them to it more generally. I always use \colon for typing declarations.

Posted by: Toby Bartels on March 16, 2011 11:47 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

But the colon in ‘f\colon X\to Y’ is a typing declaration!

I suppose it is, at that, or at least it can be thought of as one. Not sure why that slipped my mind. I’d still prefer a colon with even less space on both sides for typing declarations in type theory.

Posted by: Mike Shulman on March 17, 2011 4:42 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

I’d still prefer a colon with even less space on both sides

Less space on both sides than \colon? The space on the right is a thin space, which is the least space that TeX supports at a high level (although one can always hack a smaller one, of course). And there is no space on the left side at all! (Except the space within the boxes, that is; one could also hack the inter-box space negative to make the printed material directly touch, although this would be very difficult to program in general and tedious to do in each specific case.)

There does seem to be some space in the rendering through iTeX, MathML, my browser, and my font. Most of that seems to be due to the large amount of kerning space in the ‘f’. Compare it with ‘g’:

g\colon X \to Y

I wouldn't really want less space on the left than that!

Posted by: Toby Bartels on March 17, 2011 8:27 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

When I typeset x\colon A, y\colon B \vdash z\colon C in TeX, the spaces before the colons are about equal to the space after the comma, while the spaces after the colons are about equal to the space on either side of the turnstile. But the colons bind tighter than both the comma and the turnstile, so they should have less space around them. I suppose maybe it would be better to increase the spaces around the comma and turnstile, but I’d also expect the spaces on both sides of the colon to be equal (I’m not sure why, but I would). Also, I don’t know anything about the details of TeX spacing, but \mathord{:} has less space on both sides than \colon.

Posted by: Mike Shulman on March 18, 2011 4:34 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

That's really weird. In TeX (both plain TeX and LaTeX by default), \colon is of type punct, which is the same as ,. Therefore, the spacing around it should be the same (in the same context). If the spacing is different, then either you've redefined one of them, your typesetter has a bug, or (most likely but still surprising) there is more extra space around the printed material in the box for the character ‘:’ than for the character ‘,’. And if \mathord : has less space on the left side than \colon does, then even this possibility is ruled out; either you're using unusual symbol definitions, or you're using a program with a bug. (It should produce less space on the right side, however.)

Now in iTeX, it's another matter. For some reason, iTeX puts verythinmathspace to the left of \colon, while , gets no space there at all. (Neither should \mathord:, but iTeX doesn't support that anyway.) I see that it also puts more space to the right of , than of \colon (or even :); this is because MathML puts too much space after , by default (verythickmathspace) and iTeX makes no attempt to correct this.

Posted by: Toby Bartels on March 19, 2011 5:00 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Hmm, I guess what I wrote before was an illusion created by italicization and font serifs. Correcting for that, I see that the spacing around \colon is in fact the same as the spacing around a comma, as you say.

Experimenting with adding extra space (such as by making the comma into a \mathrel), I guess I think the most readable version for me is Tom’s suggestion: (x\colon A), (y\colon B) \vdash (z\colon C), although replacing the \colons with \mathord{:}s to get rid of the after-space comes in a close second. I think \mathord{:} looks a bit better when the types are single letters, but when they’re composites like B\to C or \Pi_{x\colon A} B(x) then \colon is maybe better.

I do like the parentheses, which have the additional virtue of matching the syntax of Coq and Agda.

Posted by: Mike Shulman on March 21, 2011 4:23 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Putting parentheses here seems like putting parentheses in (2x) + 3, or (which is maybe not quite so bad) in 3(x^2). We adopt conventions so that we don't have to write such things. On the other hand, in an introductory treatment, they can be useful reminders (and in any case it would help to explain the conventions).

Incidentally, to make x\colon A, y\colon B \vdash z\colon C more like 2x + 3 (and less like 3x^2, where the typography is less helpful), it would be nice to have a little extra space after the comma. One could get this in TeX with x\colon A,\, y\colon B \vdash z\colon C. (Here is where the large space after the comma in MathML and hence iTeX becomes a feature rather than a bug.) My standard TeX macros include \comma (defined as ,\,) for this and other uses.

Even more incidentally, when mixing logical symbols with a mathematical object language, I often put extra space in the metalanguage. Thus, if A, B, C, and z above are complicated terms containing mathematical symbols with spacing around them, I write x\colon A,\; y\colon B \;\vdash\; z\colon C. I also have macros for these (in this case, \pred for ,\; and \proves for \;\vdash\;). I notice that my practice implicitly considers the space after the comma to be real space, but not the space after the colon.

An even more obscure (because this only comes up when you mix mathematical object language with logical metalanguage in material which is printed inline rather than displayed) problem is that you need to allow line breaks after the comma, which you can do with \allowbreak. I actually have this in both \pred (whose definition is really as ,\;\allowbreak) and the type-declaration colon (for which I use the macro \:, defined as \colon\allowbreak). TeX already allows breaking after a relation symbol such as the turnstile, so that's fine as it is.
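Collected in one place, the macros described above might look like this (a sketch; since \: already means a medium math space in standard LaTeX, the type-declaration colon is given a fresh, hypothetical name here):

```latex
% Spacing macros for type-theoretic judgments, as described above
\newcommand{\comma}{,\,}                 % comma with a little extra space
\newcommand{\pred}{,\;\allowbreak}       % comma separating hypotheses, breakable
\newcommand{\proves}{\;\vdash\;}         % turnstile with extra space on both sides
\newcommand{\oftype}{\colon\allowbreak}  % breakable type-declaration colon

% Usage:  $x \oftype A \pred y \oftype B \proves z \oftype C$
```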

Posted by: Toby Bartels on March 21, 2011 6:40 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Just stumbled over Mike Shulman’s very relevant question

Isn’t that likely just because everyone learns calculus in university, but not everyone learns logic (sadly)?

which caught my eye as expressing a thought that I see as underemphasized. I would like to agree with his “sadly” and amplify it:

  • the ‘place’ of logic in the vast majority of undergraduate curricula for ‘mathematics majors’, and also in other ‘mathematical’ subjects, at universities around the world is the single most misguided anomaly in mathematics education these days.

A logic course (and a good one, taught by a professional logician) should be part of everyone’s first year. Currently, it is assumed that all students can pick up logic ‘on the job’, and if there is any logic worthy of note in the first year at all, then it most often does not progress much further than truth tables and some of the usual syntactic manipulations of the logical connectives, and even this is often done ineptly. Expert treatments of the syntax–semantics distinction are mostly lacking, not to speak of the completeness theorem for first-order predicate logic, or the incompleteness of arithmetic, let alone categorical semantics (which could at least be treated in some toy version, Heyting algebras for example). Any serious logic seems to be confined to ‘graduate courses’, if the university is lucky enough to have a person who can teach one.

I don’t think there is much intent, let alone any malice, in the current state of affairs. It is probably historical hazard, but a deplorable one, and a result of some ‘prioritizing’ of one kind or another.

In the middle ages, there was the educational concept of the trivium (https://en.wikipedia.org/wiki/Trivium), in which logic had a foundational role.

In this respect at least, universities should become more ‘medieval’.

Posted by: Peter Heinig on September 28, 2017 8:15 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Whoops! I forgot this introductory article by Steve Awodey! There are also some introductory talks linked from Steve’s page.

Posted by: Mike Shulman on March 15, 2011 6:49 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

does HomotopyTypeTheory.org do the trick?

Posted by: steve Awodey on March 23, 2011 5:33 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Thanks for the explanation, Mike. I saw Nicola Gambino shortly before the workshop (giving a talk about this stuff at Strathclyde) and Richard Garner shortly afterwards, so it’s especially nice to be told what actually happened there.

You wrote:

the most radical hope (which has been put forward most strongly by Voevodsky) is to create a new foundation for all of mathematics which is natively homotopical—that is, in which homotopy types are the basic objects

I’m not at all clear what this means. Perhaps I don’t really understand what people mean by phrases such as “foundation for all of mathematics”.

What about parts of mathematics in which homotopy types aren’t the basic objects? What about, say, numerical analysis? If one believes that there will be a new foundation of mathematics in which homotopy types are the basic objects, does one also have to believe that homotopy types are (maybe secretly) basic objects in all branches of mathematics? Is the assertion that there are no parts of mathematics in which homotopy types are not basic objects?

For example, today I’ve been editing a document (mentioned here) about generalized means. This is very elementary stuff, all about operations on the real numbers, and it’s hard for me to imagine what role homotopy types could have there. I’m not excluding the possibility, but I don’t currently see it.

These questions aren’t intended in an aggressive way. I fully understand that one can (should?) have big dreams, and while one is at the dreaming stage, curmudgeons are apt to come along and say “this has nothing to do with the mathematics I care about”. That’s not what I’m saying. Rather, I’d like to understand in more detail what’s envisioned in the passage quoted.

Posted by: Tom Leinster on March 11, 2011 11:02 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Tom wrote:

What about parts of mathematics in which homotopy types aren’t the basic objects?

I’m no expert on this, but here are my guesses:

From the viewpoint of homotopy theory a set can be seen as a homotopy 0-type, so one doesn’t lose access to set theory using homotopical foundations.

It’s just like how a foundation of mathematics based on \infty-categories can still handle sets, by working with \infty-categories that are equivalent to sets.

I think a foundational approach based on \infty-categories or homotopy types is extremely important. At first it will seem very complicated, but I believe that people can polish it down over a decade or two or three until it’s quite simple. And then \infty-category theory (or at least \infty-groupoid theory) will gradually cease to be an elaborate structure built on top of existing math; it’ll become a self-standing way to think.

(It already is a way to think… but right now, it’s a lot easier to think about \infty-categories than make these thoughts rigorous. Over time, this problem will fade away.)

Posted by: John Baez on March 12, 2011 12:35 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

That’s a good question, Tom! My answer is that homotopy types are strictly more general than sets. In particular, sets can be identified with homotopy 0-types. And the envisioned foundational axioms for homotopy types certainly ought to imply that the homotopy 0-types satisfy all the familiar properties of sets.

So, in all the areas of mathematics where homotopy theory doesn’t (yet) play any role, a foundation in which homotopy types are basic objects will be just as good as a foundation in which sets are the basic objects; in fact you can feel free to define “set” to mean “homotopy 0-type”. The other homotopy types will be hovering in the background waiting for you to need them, but you’ll be free to ignore them until you do.

In the next post(s), when I talk about the work Voevodsky has done in the direction of developing mathematics in homotopy type theory, I’ll mention how to define and work with homotopy n-types. Not only do you get set theory as a special case of homotopy type theory by looking at 0-types, you get logic as a special case of homotopy type theory by looking at (-1)-types!
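The hierarchy Mike describes can already be written down inside intensional type theory. Here is a minimal sketch in Coq (the names isProp, isSet, and isGpd are mine, chosen for illustration; only the standard identity type is used):

```coq
(* Truncation levels inside type theory.  A (-1)-type ("proposition")
   has at most one inhabitant up to equality; a 0-type ("set") has at
   most one equality proof between any two inhabitants; and so on. *)

Definition isProp (A : Type) : Type :=
  forall x y : A, x = y.

Definition isSet (A : Type) : Type :=
  forall (x y : A) (p q : x = y), p = q.

(* The pattern continues one dimension up: A is an (n+1)-type when all
   its identity types x = y are n-types. *)
Definition isGpd (A : Type) : Type :=
  forall (x y : A) (p q : x = y) (r s : p = q), r = s.
```

Each level quantifies one dimension higher over identity types, which is exactly the sense in which logic sits at level (-1) and set theory at level 0.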

Posted by: Mike Shulman on March 12, 2011 1:04 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Aahh, I see John beat me to it.

Posted by: Mike Shulman on March 12, 2011 1:06 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Thanks for your answers, John and Mike.

I still think I don’t really understand what people mean when they say “foundation of mathematics”. Of course this phrase is usually used about sets.

My thoughts on this aren’t clear, but I think I have some instinctive scepticism about the idea that sets are basic objects in all branches of mathematics. You could change “sets” to “categories” or “\infty-categories” or “homotopy types” in that sentence and it would be equally true.

To take my recent interests as an example again: I’ve been reading the book Inequalities of Hardy, Littlewood and Pólya. I don’t think anyone would challenge the statement that it’s been an important and influential work. And yeah, sure, it involves the set of real numbers, functions on the real numbers, etc. But the basic objects aren’t sets; I guess they’re real numbers, and inequalities between them. If one were to design “foundations” for that area of mathematics, it seems unlikely that sets would feature prominently.

But maybe I’m not understanding what people mean by “foundations”.

Posted by: Tom Leinster on March 12, 2011 2:01 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Before anyone else answers, let me try to stop the discussion from getting sidetracked. The question of what “foundations” means in the abstract is kind of interesting, and a lot’s been said about it, but it’s not really what I’m trying to find out here. Mostly, I want to know about this specific vision of foundations put forward by Voevodsky and others.

In what sense will this proposal provide “foundations for all of mathematics”? In the sense of logical safety? In the sense of providing a natural language for all of mathematics? In the sense of organizing all of mathematics? Or if none of those, in what sense?

Posted by: Tom Leinster on March 12, 2011 2:42 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Tom wrote:

In what sense will this proposal provide “foundations for all of mathematics”?

I don’t know what other people want, but I spent a bunch of time categorifying ideas in math ‘by hand’, and by now it’s clear there are huge wads of math where this process is quite systematic and should happen ‘automatically’, without us needing to explicitly write down endless lists of coherence laws. In fact, it should happen in the background, without us even having to think about it much.

In this vision, everything that’s true should be true only up to coherent homotopy… or if you prefer, up to coherent natural isomorphism.

People have discussed this for a long time using terms like ‘the homotopification of mathematics’. Concepts like homotopy algebras of operads and algebraic theories are steps in this direction. ‘Brave new algebra’ is a slightly different but deeply related approach, based on spectra rather than spaces—or if you prefer, \mathbb{Z}-groupoids rather than ω\omega-groupoids. Lurie’s book Higher Algebra systematizes and extends many of these ideas. Makkai has worked out an alternative, more explicitly ‘foundational’ approach where the key step is forbidding equality from the very start. Voevodsky has his ideas, and Mike here has listed others as well. Someday many of these ideas will be incorporated into a convenient framework… and then gradually this framework will become taken for granted and recede into the background. Then we can just do math in a world where everything is true up to coherent homotopy.

I think people will then be in a better position to think about pure math and fundamental physics—but also other things, too.

The main reason I don’t want to work on this stuff is that it seems so destined to happen, while still demanding so much energy, time, intelligence, and focus. It’s a bit like helping push an enormous boulder down a hill. If you don’t push, it will still gradually go down the hill, thanks to gravity and the natural process of erosion.

Maybe I’m just getting old and tired. Or maybe what once seemed so unlikely and amazing—the \infty-categorification of mathematics—has by now acquired an aura of inevitability.

I think Manin conveys a somewhat similar feeling here:

And so I don’t foresee anything extraordinary in the next twenty years. Probably, a rebuilding of what I call the “pragmatic foundations of mathematics” will continue. By this I mean simply a codification of efficient new intuitive tools, such as Feynman path integrals, higher categories, the “brave new algebra” of homotopy theorists, as well as emerging new value systems and accepted forms of presenting results that exist in the minds and research papers of working mathematicians here and now, at each particular time. When “pragmatic foundations” of mathematics are made explicit, usually in several variants, the advocates of different versions may start quarreling, but to the extent that it all exists in the brains of the working generation of mathematicians, there is always something they have in common. So, after Cantor and Bourbaki, no matter what we say, set theoretic mathematics resides in our brains.

The amusing part is of course that he begins by saying “I don’t foresee anything extraordinary in the next twenty years”, even though he describes a quite significant transformation of what he calls the “pragmatic foundations” of mathematics.

Posted by: John Baez on March 12, 2011 5:34 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

What do you think he means by the “codification” of “emerging new value systems and accepted forms of presenting results that exist in the minds and research papers of working mathematicians here and now”?

Posted by: Jamie Vicary on March 12, 2011 9:17 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Jamie wrote:

What do you think he means by the “codification” of “emerging new value systems and accepted forms of presenting results that exist in the minds and research papers of working mathematicians here and now”?

In 20 years you’ll be able to get tenure by writing really good blog entries on the n-Category Café.

Posted by: John Baez on March 14, 2011 2:50 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

People have discussed this for a long time using terms like ‘the homotopification of mathematics’. Concepts like homotopy algebras of operads and algebraic theories are steps in this direction. ‘Brave new algebra’ is a slightly different but deeply related approach, based on spectra rather than spaces—or if you prefer, ℤ-groupoids rather than ω-groupoids.

A quick note in my function as Lab elf:

I had forgotten about the page “homotopy algebras”; it wasn’t yet integrated into the larger body of entries we have built up in the meantime, mostly under the term “\infty-algebras”. The canonical starting point is the entry

And the page that discusses the higher algebraic theories and for instance Badzioch’s work that John linked to is

I hope it’s clear that the word “homotopy” and the words “higher categorical” mean the same thing. (Or rather, if they are meant to be different, then only insofar as the (n,r)-parameter is concerned.)

Finally a remark on actual content: I wouldn’t say that Brave New Algebra is a different approach. Rather, in direct generalization of the 1-categorical situation, one has a trinity of universal algebraic concepts

\array{ & \text{monad} \\ \text{algebraic theory} && \text{operad} }

and this generalizes to

\array{ & \infty\text{-monad} \\ \infty\text{-algebraic theory} && \infty\text{-operad} } \,.

Algebras over any of these three concepts are “\infty-algebras”, and to some extent all three notions are roughly equivalent; but in non-Cartesian contexts there is a tendency to look at algebras over \infty-operads, and “Brave new algebra” is mostly situated there.

Posted by: Urs Schreiber on March 12, 2011 6:47 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Urs wrote:
I hope it’s clear that the word “homotopy” and the words “higher categorical” mean the same thing. (Or rather, if they are meant to be different, then only insofar as the (n,r)-parameter is concerned.)

Please remember your audience. At the very least, specify at the beginning of a discussion/paper that we assume (n,r) =

For “homotopy algebra”, there is yet a third meaning, the Whitehead algebra of homotopy groups and generalizations.

Posted by: jim stasheff on March 12, 2011 1:35 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Thanks for trying to keep the discussion on track, but I’m finding it difficult to answer the particular question about homotopy type theory without somewhat addressing the larger question of foundations.

I think the answer to your question about Inequalities is that the whole point of foundations is to construct the basic objects of each area of mathematics out of the basic objects of the foundational language. E.g. we can construct real numbers out of sets using Dedekind cuts or Cauchy sequences or etc. It doesn’t mean the real numbers “really are” Dedekind cuts or Cauchy sequences, or that they’re any less basic objects in the particular fields where they’re used; it just means those fields could be translated into a different language with different primitives, like a high-level programming language being compiled into machine code.
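To illustrate the “compilation” metaphor: here is a minimal sketch in Coq of the Cauchy-sequence construction, with field names of my own choosing (not a serious development of the reals):

```coq
Require Import QArith Qabs.

(* A real number "compiled down" to more primitive objects: a rational
   sequence together with an explicit modulus of convergence, so that
   approx m and approx n agree to within 1/m + 1/n. *)
Record CauchyReal : Type := {
  approx    : positive -> Q;
  is_cauchy : forall m n : positive,
      Qabs (approx m - approx n) <= (1 # m) + (1 # n)
}.
```

Nothing about the analyst’s practice changes; the record is just the machine-code view of a real number, and a Dedekind-cut record would be an equally valid compilation target.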

And the only way I can think of to answer

In what sense will this proposal provide “foundations for all of mathematics”?

is “in exactly the same ways that other proposals do;” it’ll just be better in some ways. For instance:

  • As regards logical safety, the usual construction of \infty-groupoids in terms of sets, and the fact that we can recover sets as homotopy 0-types, should show that homotopical foundations are just as consistent as set theory.

  • As regards a natural language: it’ll be at least as natural as set theory, and more natural for some (growing) parts of mathematics (namely, homotopy theory and higher category theory, and anything which uses them).

  • As regards organizing mathematics: just as the realization that sets are everywhere in mathematics has an organizing effect, so does the realization that homotopy types are also everywhere (sometimes only in their incarnation as homotopy 0-types, aka sets, but often in full glory).

  • Another reason to care about foundations which you didn’t mention is that if we want to interpret mathematics “internally” in some constructed world, then if all mathematics can be translated into a particular foundational language, all we have to do is interpret that foundational language, rather than separately interpreting each new part of mathematics we care about doing internally. The internal language of a topos is the primary set-theoretic (or extensional-type-theoretic, which amounts to the same thing) example.

    However, a particular foundation may be better or worse adapted to a particular constructed world. We can build homotopy theory out of sets in the internal language of a 1-topos and thereby interpret a homotopical foundation internally to a 1-topos. But if we want an internal language of a general (\infty,1)-topos in which “internal homotopy theory” naturally captures the behavior of the (\infty,1)-topos, then it seems unlikely that anything of that sort will suffice, since the (\infty,1)-topos need not be “generated by 0-types” in any sense.

  • And yet another reason which you didn’t mention is that if we want to formalize mathematics in a computer, then it makes things much easier (some might even say “possible”) if we choose a simple foundational language to teach the computer about directly, then translate all the rest of mathematics into that language as we put it into the computer. The naturalness of the chosen foundational language for a particular part of mathematics then correlates strongly to the ease of formalizing it in the computer. Thus, homotopy type theory, and its strong connection to intensional type theory (which, as I said above, is already the foundational language used in some of the best computer proof assistants), will make the formalization of homotopical and higher-categorical mathematics much easier.

Let me also address how I think this relates to some of the other stuff that John mentioned. Firstly, I don’t regard the ideas I discussed above as substantially distinct from Voevodsky’s work. What I talked about above is the general interpretation of homotopy type theory in terms of things constructed from sets. Voevodsky’s work has been more in the direction of what I’ll talk about in later posts (developing mathematics in HoTT, and new axioms for it), but that’s really just other sides of the same coin.

Homotopy algebra, higher algebra, and brave new algebra are important aspects of homotopy theory and higher category theory, but in and of themselves they are not foundational — one should be able to do them in any reasonable sort of “homotopy theory,” whether that homotopy theory is a foundational theory in its own right or is constructed out of sets.

Finally, Makkai’s work is very interesting, but I think it suffers from the problems I mentioned above regarding directly axiomatizing \infty-groupoids (or \infty-categories, which is even harder): it’s quite complicated to work with directly, and it requires the natural numbers in the metatheory. Arguably, homotopy type theory also “forbids equality from the start” and is substantially easier to work with: Voevodsky has already formalized an impressive amount of homotopy theory in Coq using it.

It’s true, as David says, that we might eventually like a truly \infty-categorical, rather than merely \infty-groupoidal, foundation. I’ve thought about what “directed identity types” would be like, but they seem to be very difficult! Homotopy type theory can be said to be a foundational version of the important realization of the (\infty,1)-category theorists: it seems to make things easier to take the n in (n,r) up to infinity first, then bring the r along with it. It also has the advantage of matching up with the large existing theory of intensional identity types.

Posted by: Mike Shulman on March 13, 2011 3:37 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

I think the answer to your question about Inequalities is that the whole point of foundations is to construct the basic objects of each area of mathematics out of the basic objects of the foundational language. E.g. we can construct real numbers out of sets using Dedekind cuts or Cauchy sequences or etc. It doesn’t mean the real numbers “really are” Dedekind cuts or Cauchy sequences, or that they’re any less basic objects in the particular fields where they’re used; it just means those fields could be translated into a different language with different primitives, like a high-level programming language being compiled into machine code.

A minor comment: the Cauchy and Dedekind reals are not equivalent in constructive set theory, since the proof of equivalence requires choice. Now, choice is a theorem in Martin-Löf type theory, but offhand I don’t know whether the proof of equivalence goes through.

There are two suspicious things to look out for. First, watch out for any impredicative constructions. Second, track the passage between the two views of families of subsets over a set S: as predicates (i.e., functions P : S \to \mathrm{Set}) and as indexed families (i.e., elements F : \Sigma I : \mathrm{Set}.\; (I \to S)). The second thing is probably a place where homotopical ideas may become relevant, since passing between the two uses the equality predicate, and relies on something like UIP for subsets to have the appropriate cardinality.
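The two views Neel describes can be written out directly; a sketch in Coq, with names of my own (Predicate, Indexed, toPred, toIndexed), using Type where Neel writes Set:

```coq
(* A family of subsets of S, in two presentations. *)
Definition Predicate (S : Type) : Type := S -> Type.
Definition Indexed (S : Type) : Type := { I : Type & I -> S }.

(* Indexed-to-predicate uses the equality type: the "subset" at s is
   the fiber of the indexing map over s.  This is exactly where
   homotopical subtleties (UIP, truncatedness) enter. *)
Definition toPred {S : Type} (F : Indexed S) : Predicate S :=
  fun s => { i : projT1 F & projT2 F i = s }.

(* Predicate-to-indexed takes the total space of the predicate. *)
Definition toIndexed {S : Type} (P : Predicate S) : Indexed S :=
  existT _ { s : S & P s } (fun x => projT1 x).
```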

Stepping back a bit, my experience with constructive mathematics is that it is a bit like a microscope – it reveals distinctions in things that we thought looked the same. The test of whether new foundations for a subject are any use comes from whether they reveal distinctions which turn out to be relevant to the subject.

Homotopical ideas are of obvious importance to computer science, because they look like they might eventually explain how CS ideas like coercions should interact with equality types, and I would hope that they would be of relevance to numerical analysis through the computability-topology dictionary.

Posted by: Neel Krishnaswami on March 14, 2011 2:38 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Without choice Dedekind and Cauchy reals are different; see the Elephant.

Posted by: Bas Spitters on March 14, 2011 3:17 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Yes, I’m well aware that constructively, Cauchy and Dedekind reals are different. But this discussion was not about constructivism, and I just needed an example of how the “basic objects” in one field of math (real numbers) could be reduced to the “basic objects” in a foundational language (sets). In a constructive framework, one would of course have to pick whichever construction was better for one’s purposes.

(By the way, I believe choice is not a theorem in univalent foundations, since we interpret propositions as (-1)-types, rather than as arbitrary types.)

The test of whether new foundations for a subject are any use comes from whether they reveal distinctions which turn out to be relevant to the subject.

I think that’s a very narrow view of the usefulness of foundations. I mentioned a bunch of other reasons to care about foundations in my previous comment; probably there are even more that I didn’t think of.

Posted by: Mike Shulman on March 14, 2011 4:33 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Mike said:
…(By the way, I believe choice is not a theorem in univalent foundations, since we interpret propositions as (-1)-types, rather than as arbitrary types.)…

Not so fast! While “is a proposition” is a definable property of a type, the corresponding (-1)-types do not themselves form a type in which the propositions are valued (like Coq’s Prop). In general, the “univalent foundations” approach – i.e., the homotopy type theory interpretation – does use the usual propositions-as-types idea. Clearly, some logical constructions (like Id-types) are not in general (-1)-types! So the type-theoretic axiom of choice does hold as usual. The question of when AC holds under the “proof-irrelevant” propositions-as-(-1)-types interpretation is presumably related to Aczel’s results on logic-enriched type theory.
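The type-theoretic axiom of choice that Steve refers to is not an axiom at all but a definable term; a sketch in Coq (the name tt_choice is mine):

```coq
(* Under propositions-as-types, a proof that every a is related to
   some b already contains a choice function, so "AC" is a theorem. *)
Definition tt_choice (A B : Type) (R : A -> B -> Type)
  (H : forall a : A, { b : B & R a b }) :
  { f : A -> B & forall a : A, R a (f a) } :=
  existT _ (fun a => projT1 (H a)) (fun a => projT2 (H a)).
```

The subtlety reappears once propositions are required to be (-1)-types: truncating { b : B & R a b } discards exactly the witness that this term extracts.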

Posted by: Steve Awodey on March 14, 2011 6:28 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

While “is a proposition” is a definable property of a type, the corresponding (-1)-types do not themselves form a type in which the propositions are valued (like Coq’s Prop).

This is something I’ve been wondering about (in a slightly different sense than your remark above): could they form a type?

One of the interesting ideas I’ve heard floating around here is that the n-categorical stuff forms a more natural organizing hierarchy than the size stuff we usually use. Roughly:

  1. The ‘collection’ of all contractible things is a proposition
  2. The collection of all propositions is a set
  3. The collection of all sets is a category or a groupoid (perhaps depending on whether we’re interested in just equivalences or arbitrary maps)
  4. The collection of categories/groupoids is a 2-category/2-groupoid …

So the ideal would be to throw out the size hierarchy and make a dimensional hierarchy instead, and hopefully that Just Works. As a type theory maybe it looks like:

True, False : Prop = (-1)-type
Prop        : Set  = 0-type
Set         : Gpd  = 1-type
n-type      : (n+1)-type

C : (n+1)-type    X, Y : C (0-cells)
------------------------------------
     X -> Y, I_C X Y : n-type

...

I expect the right sort of theory for this is related to Mike’s 2-logic (only, made n- or something).

Anyhow, univalent foundations don’t take this approach. Instead we stick with the size hierarchy and define the dimensionality as properties within the theory. That’s very useful, because ITT is already well studied, and we need few additional axioms to get a widely applicable internal language (from what I gather). But, do you think in the future something more radical like the above might be a ‘better’ language that takes this n-categorical stuff seriously? Or is the current univalent foundations approach what you think is appropriate?

Am I out of my element?

Thanks for your insight.

Posted by: Dan Doel on March 14, 2011 10:02 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

  1. The ‘collection’ of all contractible things is a proposition
  2. The collection of all propositions is a set
  3. The collection of all sets is a category or a groupoid

If you're going to allow a category here, then you ought to make this decision one step earlier and allow the collection of all propositions to be a poset; then the collection of all posets is a 2-poset (aka a locally posetal 2-category or Pos-enriched category).

So the ideal would be to throw out the size hierarchy and make a dimensional hierarchy instead, and hopefully that Just Works.

For n-category theory with finite (but arbitrarily large) n, this should work. However, for \infty-category theory (or even just \infty-groupoid theory, and therefore certainly for homotopy theory), it does not work. This is because of line \infty in the progression above:

  • \infty. The collection of all \infty-groupoids is an \infty-groupoid.

This leads to the usual paradoxes.

Posted by: Toby Bartels on March 14, 2011 11:25 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

For n-category theory with finite (but arbitrarily large) n, this should work. However, for ∞-category theory (or even just ∞-groupoid theory, and therefore certainly for homotopy theory), it does not work.

Right, I’m aware of that problem. And as someone interested primarily in a nice type theory, I’m tentatively okay with saying, “lets limit ourselves to finite n.” Or roughly, our type theory might be some kind of internal language for the ∞-category of n-categories, but ∞-categories are not themselves dealt with in the language.

I realize this is kind of unsatisfactory for mathematicians who are actually interested in ∞-categories in their own right, though. So maybe the univalent stuff is significantly more attractive from that perspective.

Perhaps I should be more interested in ∞-categories than I am, too. But I tend to anticipate myself wanting (in my area of interest) to talk about n-categories for small n, and everything higher is just there to make things work nicely (kind of like how no one ever really uses more than 4 or 5 levels of Agda’s infinite universe hierarchy in practice). And from that perspective, I feel a lot more attracted to the inductive-ish definition of n-categories than jumping straight to ∞-categories and defining n-categories by truncation of some sort. That may be my lack of equipment for understanding the ∞-category definitions too, though.

Posted by: Dan Doel on March 14, 2011 11:52 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

I tend to anticipate myself wanting (in my area of interest) to talk about n-categories for small n, and everything higher is just there to make things work nicely

I think it’s important to distinguish here between the n and the r in (n,r)-category. I think what you say is true for most people with respect to the index r of directedness (although I await being proven wrong). But for the index n of groupoidalness, I think it’s definitely important to let it be all the way up at \infty. For instance, the 2-sphere S^2, regarded as an \infty-groupoid, is not an n-groupoid for any finite n.

Posted by: Mike Shulman on March 15, 2011 12:25 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

A related point is that, given an elementary theory of \infty-groupoids, we ought to be able to write down an elementary (albeit soon inconveniently complicated) finitistic definition of (\infty,r)-category for any finite r, but the notion of \infty-category will rely on the axiom of infinity (just as both \infty-categories and \infty-groupoids rely on infinity in a set-theoretic foundation).

Posted by: Toby Bartels on March 15, 2011 9:00 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

given an elementary theory of ∞-groupoids, we ought to be able to write down an elementary (albeit soon inconveniently complicated) finitistic definition of (∞,r)-category for any finite r

I would like that very much, but it’s not obvious to me that it’s true. Almost all the ways I know of to define (\infty,1)-categories in terms of \infty-groupoids already involve infinitely many \infty-groupoids. The only exception I can think of is that if you have a strict model for your \infty-groupoids, like simplicial sets, then you can talk about strictly enriched categories—but homotopy type theory, at least, doesn’t seem to be strict enough to phrase a definition like that. As far as I can tell. Do you have an idea?

Posted by: Mike Shulman on March 15, 2011 9:48 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

That would be a lovely picture. Unfortunately, it turns out to be very hard to avoid paradoxes, even for finite n. It’s okay to have a type of all (-1)-types (like Coq’s Prop, or for that matter the subobject classifier in a topos), but even trying to get a groupoid of all sets (where “set” means “discrete groupoid” with no “size” restriction) is very difficult, perhaps impossible.

Consider the following argument: suppose there is a groupoid of all sets. It has a sub-groupoid of all subsingletons, which is a subobject classifier. If we assume function types, then the category of sets is an elementary topos, and in particular has power objects, and thus a powerset endofunctor.

However, if there is a category of all sets, then it has all limits and colimits. (The usual construction of the limit or colimit of a diagram of sets always produces a set, i.e. a perhaps-large discrete groupoid, even if the diagram is large. But we have no size restrictions on our “sets.”) But by the Knaster-Tarski theorem, this implies that any endofunctor of Set has a fixed point, including the powerset endofunctor; but Cantor’s diagonal argument is valid in any topos, leading to a contradiction.
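The final step, Cantor’s diagonal argument, is constructively valid and can be checked directly; here is a Coq rendering (the statement says that no f : A -> (A -> Prop) takes the diagonal predicate as a value):

```coq
Theorem cantor (A : Type) (f : A -> A -> Prop) :
  ~ (exists a : A, f a = (fun x => ~ f x x)).
Proof.
  intros [a Ha].
  (* Evaluate the equation of predicates at the point a itself. *)
  assert (Haa : f a a = ~ f a a) by exact (f_equal (fun g => g a) Ha).
  (* From P = ~P, derive False without excluded middle. *)
  assert (np : ~ f a a).
  { intro h. assert (h' := h). rewrite Haa in h'. exact (h' h). }
  apply np. rewrite Haa. exact np.
Qed.
```

Combined with a fixed point of the powerset endofunctor, this yields the contradiction described above.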

Some more speculation along these lines can be found here. It’s conceivable that this sort of argument might be avoidable in an “intensional enough” world, but I kind of doubt it.

Posted by: Mike Shulman on March 15, 2011 12:22 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

where “set” means “discrete groupoid” with no “size” restriction

I would envision abandoning this. After all, we define set before groupoid in this picture, and a priori there's no reason why every discrete groupoid must be a set; in fact, your argument proves otherwise. So there are still size distinctions, but they're tied in with dimension in a way that we don't normally assume (and can't be extended to \infty).

Posted by: Toby Bartels on March 15, 2011 9:06 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Well, certainly you can have a hierarchy of universes stratified by both size and dimension. It doesn’t even contradict the univalence axiom. But I assumed from Dan’s phrasing “throw out the size hierarchy and make a dimensional hierarchy instead” that he wanted a hierarchy stratified only by dimension.

Posted by: Mike Shulman on March 15, 2011 9:47 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

I’d tentatively support Toby. Inasmuch as there are still size distinctions, they would, I think, be side effects of dimensionality, not explicitly recognized in the theory. That is, it may be that groupoids are simply capable of being ‘bigger’ somehow than sets, but we still do not have various size levels of sets and groupoids and whatnot (except as discrete versions of increasingly higher dimensional objects; this is a possible wart).

Roughly, my objection to your argument (what I understand of it, and I think it’s the same as Toby’s) is that sets are not defined as discrete groupoids. Sets are sets: objects of the groupoid of sets. Every set can be promoted to a discrete groupoid, perhaps, but there is no inverse operation ensuring that there is a set for every discrete groupoid.

If I’m not mistaken (and maybe this is what your argument does), an example would be that there is a discrete subgroupoid of the groupoid of all sets containing the same objects. If that discrete subgroupoid were a set, it would be a set of all sets, which we want to rule out.

The justification for this, as it were, is that even though discrete groupoids have only identity arrows or nothing between objects, these are still grouped in sets. So the distinction goes down one level, in that sufficiently trivial sets needn’t be classified as propositions. And so on. So there are one- and zero-element sets that are not classified in the theory as propositions, though they’d be isomorphic to the propositions as promoted to sets.

Anyhow, I don’t have anywhere near a fully baked theory of all this. I kind of thought other folks were working on it, and I’d leave it to the people with more category theoretic chops.

Posted by: Dan Doel on March 15, 2011 11:00 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

The wart is inevitable if you want something that looks like current category theory. Here it is in its essence:

There is a category of sets. Every category has a core, its underlying groupoid, in this case the groupoid of sets and bijections. Every core has a discrete reflection, its set of isomorphism classes, in this case the set of cardinal numbers. But a set of all cardinal numbers leads to a contradiction.

(By the way, does anybody know a really slick proof that a set of all cardinal numbers is contradictory? That is, given a set K and a full functor #\colon Set \to Disc(K), derive a contradiction. I can do this by recreating the Burali-Forti paradox, which always seems to work, but that's complicated. There ought to be some easy contradiction involving #K, but I can't find it.)

Mike's idea (‘Some more speculation along these lines […]’) breaks the step from a category to its core, but that's not normal category theory. The usual idea is to break the step from a groupoid to its discrete reflection unless the groupoid is small (or else the discrete reflection is only a proper class). I think that you still need to do this; it's just that you wouldn't invite people to talk about proper classes in the first place except as discrete groupoids.

Like Mike, I don't think that this really solves anything; it just suggests a natural way to use the language.

Posted by: Toby Bartels on March 16, 2011 11:42 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

By the way:

there is a discrete subgroupoid of the groupoid of all sets containing the same objects

This needn’t be true, and my argument didn’t use it. It’s true in classical set-theoretic foundations for category theory, but there’s no reason, in a theory where groupoids are the basic objects, that every groupoid should admit an essentially surjective functor from a discrete groupoid. In fact, I believe there are plenty of (2,1)-toposes where this fails.

What will be true, in a groupoid-based foundation, is that the category of sets has a core, i.e. a groupoid of sets. (In a category-based foundation instead, you can avoid that, but as Toby says, that takes us out of the realm of “normal category theory.”) That’s sufficient for a paradox, either by my argument or by Toby’s.

Note that Toby’s construction goes by way of producing a set of all cardinal numbers, which is not really the same as a set of all sets, unless you have a strong enough axiom of choice. A set of all cardinal numbers admits a full functor from the groupoid of all sets, whereas what I would want to call a “set of all sets” is one admitting an essentially surjective functor to the groupoid of all sets. If the functor from the groupoid of sets to the set of cardinal numbers admits a splitting, then the set of cardinal numbers can function as a set of sets. Mathieu Dupont has referred to the assertion “every essentially surjective functor from a groupoid to a set splits” as the axiom of 2-choice.

Posted by: Mike Shulman on March 17, 2011 4:40 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

I think I misunderstood what you wanted, but now I don’t understand at all what you want. If I have a hierarchy U_1 \in U_2 \in U_3 \in \cdots of universes based only on size, I can always define “proposition” to mean “a (-1)-type in U_1”, define “set” to mean “a 0-type in U_2”, and so on, and thereby pick out a hierarchy of universes U_1' \in U_2' \in U_3' \in \cdots that are stratified by dimension as well as by size. That’s basically what we do in current set-theoretic foundations, actually, with the exception that we have to construct the higher groupoids out of sets first.

It sounded to me like you were suggesting that having a hierarchy of universes stratified by dimension would solve some problem, but now I don’t know what problem; can you explain further?

One interesting point which came up at Oberwolfach is that there’s a natural way you might try to construct homotopy-quotients in terms of universes, analogously to how an elementary topos automatically has quotients. But this approach can fail if your universes are dimensionally-restricted: e.g. the quotient “ought to be” of higher dimensionality, but if you tried to construct it with a dimensionally-restricted universe, you’d only get its “truncation” to a lower-dimensional object. (I don’t know whether there are good axioms one can impose on the universes to ensure that it will always succeed.) So this might be one argument against postulating dimensionally-restricted universes to start with, if that’s what you’re suggesting, rather than defining them in terms of non-dimensionally-restricted ones as I suggested above.

Posted by: Mike Shulman on March 15, 2011 11:12 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

It sounded to me like you were suggesting that having a hierarchy of universes stratified by dimension would solve some problem, but now I don’t know what problem; can you explain further?

I’m getting confused myself. The problem, as it were, is that size hierarchies are motivated only by avoiding paradox, while a dimensional hierarchy would be motivated by the fact that propositions naturally form a set, sets naturally form a category, and so on.

But, you seem to be suggesting that if we forget about size, there is no difference at all between a discrete groupoid and a set. I’m no longer sure whether or not I agree with that.

Previously I would have said that’s wrong. A set has elements, 0-cells, and we’re allowed to talk about them up to equality (whatever that means). A groupoid has objects, 0-cells, and isomorphisms, 1-cells. But it does not have a set of objects. It has a set of isomorphisms between any two objects, but we are only allowed to talk about objects up to isomorphisms between them. And even if we throw away all the non-identity isomorphisms, we’re still not allowed to pretend the 1-cells don’t exist and talk about the groupoid having a set of objects/being a set.

But, is that restriction just sneaking size considerations in through the back door without saying so? Similar to your ‘a proposition is a (-1)-type in U_1’? I’m not sure any more.

Posted by: Dan Doel on March 16, 2011 12:41 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

A set has elements, 0-cells, and we’re allowed to talk about them up to equality (whatever that means). A groupoid has objects, 0-cells, and isomorphisms, 1-cells. But it does not have a set of objects. … And even if we throw away all the non-identity isomorphisms, we’re still not allowed to pretend the 1-cells don’t exist and talk about the groupoid having a set of objects/being a set.

Yes, I agree with all of that. However, a discrete groupoid (one with at most one isomorphism between any two objects) is not really distinguishable from a set in my book. We can call its objects “elements” and talk about them up to “equality”, where we define “equality” to mean “isomorphism.”

Actually, we can do that with any groupoid, and what we get is a set that is the discrete reflection (or \pi_0) of the groupoid. But if the groupoid started out discrete, then it is equivalent to its own \pi_0.
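For a groupoid presented by finite data, this discrete reflection is just “isomorphism classes”, computable with a union-find. A small Python sketch (the presentation by object and isomorphism lists is my own device, not anything from the discussion):

```python
def pi0(objects, isos):
    """Discrete reflection of a groupoid given by a list of objects and
    a list of isomorphism pairs: the set of isomorphism classes."""
    parent = {x: x for x in objects}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in isos:
        parent[find(a)] = find(b)  # merge the two classes

    classes = {}
    for x in objects:
        classes.setdefault(find(x), set()).add(x)
    return {frozenset(c) for c in classes.values()}

# a groupoid on objects a, b, c, d with isomorphisms a ≅ b and b ≅ c
components = pi0(['a', 'b', 'c', 'd'], [('a', 'b'), ('b', 'c')])
# components == {frozenset({'a','b','c'}), frozenset({'d'})}
```

If the input groupoid is discrete, each class is a singleton up to the given identifications, matching the remark that a discrete groupoid is equivalent to its own π₀.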

Posted by: Mike Shulman on March 16, 2011 3:53 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

the homotopy type theory interpretation… does use the usual propositions-as-types idea.

I disagree.

Clearly, some logical constructions (like Id-types) are not in general (-1)-types!

The way I see it, the perspective of homotopy type theory is that Id-types are not a “logical construction”, because they are not (-1)-types. That’s why we call them “paths” instead. The corresponding proposition would be \pi_{-1} of the Id-type.

Posted by: Mike Shulman on March 14, 2011 6:37 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

unless you enrich/extend the type theory in some way, \pi_{-1} is not a logically definable operation. You need either Aczel’s logic-enriched type theory, the [bracket types] that Andrej and I proposed, the type Prop of the Coq system, or something similar. Each of these has different consequences for the decidability of type-checking, etc., and the homotopical semantics are not fully clear (yet).

But the whole issue here is just about terminology - right? It’s a matter of what we decide to call “propositions”. I fully agree that there is an important difference between arbitrary types and (-1)-types (resp. between families of each), and it’s natural to call the latter “propositions” (resp. “propositional functions” or “relations”). But the proof-relevant interpretation of logic has its place, too – sometimes we do want to think of Id as a relation (with evidence), and of a \Sigma as an existential quantifier (with evidence). Vladimir’s rendering of contractibility as “there is a point such that every point is identical to it”, for example, relies on that reading.

Posted by: Steve Awodey on March 14, 2011 8:04 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

I’m well aware that constructively, Cauchy and Dedekind reals are different

Perhaps I'm getting even further into a minor point here, but this is a very misleading thing to say (and I know, Neel said it first).

If my understanding isn't out of date, most practising constructive mathematicians accept countable choice and therefore accept that the Cauchy real line is Dedekind-complete. Only constructivists can doubt this result, since it has another proof that uses excluded middle instead. Nevertheless, if you say the theorem fails constructively because some constructivists don't accept it, then you may as well just say that the theorem fails because some people don't accept it, and that would be misleading.

(For details of the mathematics, see [[weak countable choice]] and the reference there.)

Posted by: Toby Bartels on March 14, 2011 11:01 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

You must be excluding from “most practising constructive mathematicians” anyone who cares about doing mathematics which is valid in an arbitrary topos. I would have guessed that nowadays such people make up at least a substantial minority of the people who care about doing without excluded middle for one reason or another. I have a feeling that their presence may be even more pronounced around here. But I have no data to support my guess.

Regardless, I think it is just as valid for “constructively” to refer to “in constructive logic” as it is for it to refer to “in the mathematics which the majority of mathematicians who call themselves ‘constructivist’ would accept”.

Posted by: Mike Shulman on March 14, 2011 11:11 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

You must be excluding from “most practising constructive mathematicians” anyone who cares about doing mathematics which is valid in an arbitrary topos.

I'm excluding people who do applied topos theory but don't adopt a constructivist philosophy. I'm excluding anybody who says ‘Set is a boolean topos’ with a straight face and no internal twinge of guilt. A good example of the sort of person that I'm excluding is Peter Johnstone, who writes lots of great stuff about mathematics internal to an arbitrary topos but who also writes about topos theory using excluded middle.

Of course, doing mathematics which is valid in an arbitrary topos can lead one to become a constructive mathematician. For example, I don't think that I'm excluding you. (But I'd be more inclined to think of you as a pluralist than as a constructivist; I put myself in that same category.)

Regardless, I think it is just as valid for “constructively” to refer to “in constructive logic” as it is for it to refer to “in the mathematics which the majority of mathematicians who call themselves ‘constructivist’ would accept”.

I think that ‘intuitionistically’ is the usual term for this, although it has its own problems. (Brouwer accepted WCC and even AC_{0,0}, although not CC in general.) I would suggest saying ‘in constructive logic’ (or ‘in intuitionistic logic’), which is nearly as short, leaving ‘constructively’ available to substitute for the longer phrase.

Posted by: Toby Bartels on March 14, 2011 11:48 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Well, perhaps you’re right that “in intuitionistic logic” is the clearest. I’ll try to remember. (-:

Posted by: Mike Shulman on March 15, 2011 12:49 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

And ‘in an arbitrary topos’ might be even better if it's what you really mean. (I don't know if it was here.)

Posted by: Toby Bartels on March 15, 2011 9:33 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Hi Toby,

I apologize for my imprecision/misleading-ness. Let me phrase my comment as a pair of questions:

1. Does the proof of the Dedekind-completeness of the reals go through in Martin-Lof type theory?
2. If not, is there some weaker notion of equivalence that we can use to relate Dedekind cuts and Cauchy sequences?

There are two reasons the answer is not obvious to me. First, truth means something different in type theory (with props-as-types) than in topoi (which have a truth-value poset), and second, the usual constructions of the reals use quotients, which are tricky to emulate in type theory. (If the univalent view makes quotients easier, then that’s a big mark in its favor!)

Posted by: Neel Krishnaswami on March 15, 2011 12:44 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Yes, Martin-Löf can prove that the Cauchy reals are Dedekind-complete.

Of course, we don't have a set of Cauchy reals in the sense of set native to Martin-Löf; what the Cauchy reals form is sometimes called a ‘setoid’ (and is sometimes called simply a ‘set’ if the fundamental concept is called ‘type’ instead): a type \mathbb{R}_C together with an equivalence relation =_{\mathbb{R}} on it. A more serious issue is that a sentence beginning \forall L\colon \mathbb{R}_C \to Prop can't be directly formulated in Martin-Löf's predicative framework.

Nevertheless, we may prove that, in any context in which we have x\colon \mathbb{R}_C \vdash L[x]\colon Type and x\colon \mathbb{R}_C \vdash U[x]\colon Type (where \mathbb{R}_C is a type of the form \sum_{\alpha\colon \mathbb{N} \to \mathbb{Q} \times \mathbb{Q}} Cauchy(\alpha), details elided), we also have cut(L,U) \vdash \sum_{x\colon \mathbb{R}_C} \prod_{y\colon \mathbb{R}_C} (L[y] \leftrightarrow y \lt_{\mathbb{R}} x) \times (U[y] \leftrightarrow x \lt_{\mathbb{R}} y) (and can furthermore prove that this x is unique, up to =_{\mathbb{R}}). And we essentially do this by first proving cut(L,U) \vdash \prod_{k\colon \mathbb{N}} \sum_{x\colon \mathbb{Q} \times \mathbb{Q}} \cdots and using Martin-Löf's choice principle.

Or so I recall; it's been a while since I've done this, and I'm not sure that I could write out the proof.

Posted by: Toby Bartels on March 15, 2011 10:17 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Thinking semantically, it also seems to me that the answer should be yes. Semantically, the propositions-as-types logic amounts to embedding the category of types into its free regular or exact completion and using the resulting categorical logic (but restricted to objects coming from the original category). A free exact completion validates the presentation axiom, and in particular validates countable choice, so the usual topos-theoretic proof should go through there—or it would, if the exact completion were a topos. In general it won’t be, but you can still reason internally about arbitrary subobjects even if you don’t necessarily have an object classifying them.
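The setoid idiom under discussion can be sketched very concretely: carry the equivalence relation alongside the carrier, and check that operations respect it, which is exactly the proof obligation that a real quotient type would discharge for you. A toy Python version (the `Setoid` class and the Z/3 example are my own illustration):

```python
class Setoid:
    """A carrier together with an equivalence relation: the 'setoid'
    emulation of a quotient type."""
    def __init__(self, carrier, eq):
        self.carrier = carrier
        self.eq = eq

    def respects(self, f):
        # an endofunction f descends to the quotient iff it sends
        # equivalent elements to equivalent elements
        return all(f(a) in self.carrier and
                   (not self.eq(a, b) or self.eq(f(a), f(b)))
                   for a in self.carrier for b in self.carrier)

# Z/3 presented as the setoid of 0..8 under equality-mod-3
z3 = Setoid(range(9), lambda a, b: a % 3 == b % 3)
succ = lambda a: (a + 1) % 9            # respects the relation
collapse = lambda a: 0 if a < 5 else 1  # does not: 2 ~ 5 but 0 !~ 1
```

The bookkeeping is all in `respects`: in type theory that check must be carried as an explicit proof everywhere the “quotient” is used, which is the burden genuine quotient types (or a univalent treatment) would remove.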

the usual constructions of the reals use quotients, which are tricky to emulate in type theory. (If the univalent view makes quotients easier, then that’s a big mark in its favor!)

I haven’t quite managed to understand what is so difficult about quotients in type theory, so I can’t say anything yet about whether univalence makes them easier. It seems like passing to “setoids” in type theory, in order to “emulate” quotients, should just correspond to taking the free exact completion of the category of types. Why is that difficult?

Posted by: Mike Shulman on March 15, 2011 10:26 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Hi Tom,

as far as I understand, an important aspect of numerical analysis
is to carry error estimates through all methods and calculations.
This means doing arithmetic not just with points (real numbers
and vectors) but also with small intervals, boxes, and balls,
— contractible spaces.

Now of course the whole real line is also contractible, and we
don’t want to allow the error to escape to infinity, so there is
more to it than just homotopy 0-types. Perhaps some homotopy
theory of measured spaces and bounded maps.

Maybe my remark is off the point. Perhaps so far off the point
that the only way I can bound the error is to stop writing.

Cheers,
Joachim.

Posted by: Joachim Kock on March 12, 2011 3:58 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Your remark may be off the point, but it’s in a small neighborhood of the point.

Posted by: John Baez on March 12, 2011 5:52 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Hi Joachim. That’s interesting.

I wonder how things look when one is working with real numbers at fixed or finite precision. (Maybe one always is.) Then the continuous real line is just an approximation to the discrete reality. The error bars would fundamentally not be contractible spaces, but small discrete spaces.

Unfortunately I know little about these matters.

Posted by: Tom Leinster on March 12, 2011 6:50 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

[Another comment on something I know very little about…
I am a bit uneasy, posting it here on the n-Cat Café, such a
big visible place. I would rather post it to The Error Bar,
that small discrete space Tom is talking about.]

> I wonder how things look when one is working with real numbers at
> fixed or finite precision. (Maybe one always is.) Then the
> continuous real line is just an approximation to the discrete
> reality. The error bars would fundamentally not be contractible
> spaces, but small discrete spaces.

I think the discrete model (floating point arithmetic) F should be
sufficiently rich to support a homotopy theory (with a notion of
geometric realisation into R), so that even two consecutive floating
point numbers still span a useful interval — contractible of course,
just like the simplicial interval, which also has a discrete
appearance.

Naturally one has to study the shortcomings of the number system F,
and learn to control the errors. This is for example the numerical
study of exact methods, such as Gaussian elimination, where the main
concern is to minimise the rounding errors (by carefully choosing
where to pivot, in which order to multiply numbers, etc.). Here the
focus is on the number system.

But this is only the low-level aspect of numerical analysis. Quickly,
it seems, this aspect fades into the background: once it is realised
that it is an illusion to get exact solutions even with exact methods,
why not go natively approximate? Why spend time with looking for
roots of a characteristic polynomial to find exact eigenvalues subject
to rounding errors, when we can find just as good approximations with
inherently approximate techniques such as iteration? Once the
algorithm itself is approximative, it doesn’t matter so much whether
some errors originate in rounding or in measurement, or in previous
calculations — in any case, each result is just a hypothesis which
the following iteration will replace by an improved hypothesis (a sort
of Bayesian interpretation?). Now, the analysis of convergence of
such methods takes place in the classical R, it seems, not in F, even
though of course the implementation must eventually be in F. Rather
than saying that R then serves as an approximation to the discrete
reality, I think it is better to regard the whole method as pertaining
to R, independently of any discrete implementation, and just accept
that the actual implementation, in whatever F, may imply further
errors.

I think this is vaguely analogous to the situation in modern homotopy
theory: once you have accepted to work with homotopy types and you
happily replace a point by some ‘big’ EG suggested by the calculations
you are doing, it is not so important anymore whether your homotopy
types are implemented as topological spaces or simplicial sets —
again something discrete of nature — and whether this implementation
actually involves fiddling with anodyne extensions, from which you are
spared in the high-level homotopy theory.

Also, once you have realised that even classical 0-mathematics
inevitably leads to homotopies anyway — why not go natively
homotopic?

I realise that these analogies are not very watertight, and it is not
something I have thought a lot about. Sorry for the noise. (In the
dark smoke-filled corners of The Error Bar, this sort of halfbaked
speculation is quite common.)

Posted by: Joachim Kock on March 13, 2011 10:53 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

It seems likely to me (in fact, almost tautological) that once you start distinguishing between spaces that have the same homotopy type, you don’t want to be thinking of them as homotopy types but as actual topological (or uniform, or metric) spaces.

Posted by: Mike Shulman on March 13, 2011 4:07 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Regarding numerical analysis, there are a number of points I would like to make (some contradictory), but let me restrict to the following one.

We are using type theory (Coq) to implement exact numerical analysis. A number of operations (e.g. the modulus of continuity) do not naturally respect the set-theoretic equivalence on R, or even on Q. Hence, intensional type theory seems like a good framework for the implementation. Homotopy type theory gives a number of desirable axioms: proof irrelevance, function extensionality, the K axiom on sets, …

So, from an implementation point of view, HTT has its advantages.

Bas

Posted by: Bas Spitters on March 14, 2011 9:36 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Without having read all of the other answers yet, here’s my answer:

First of all, sets don’t underlie numerical analysis any more than homotopy types do. (One talks about certain sets of numbers in numerical analysis, but this is not the general set theory that Cantor gave us.) In fact, the most obvious foundation of numerical analysis is second-order arithmetic, a weak foundation which is underlain by numbers! Why pay more?

Of course, numerical analysis can also be formulated in set theory, and therefore (by interpreting sets as 0-types) in any foundational homotopy theory. So (if this programme works out as expected) homotopy theory can be a foundation for numerical analysis (and indeed all of mathematics), but numerical analysis is not a reason for considering that foundation. It’s not even a reason for considering foundational set theory, after all.

What, then, is the reason to consider homotopy theory as a foundation? Others may have their reasons, but for me, the reason is higher category theory, and by ‘higher’ I actually don't mean anything more than 1! This is because set theory, as normally construed, forces us to discuss category theory using language which is inherently ‘evil’, in that for two arbitrary objects a and b of a given category C, there must be a proposition (a = b) whose meaning is stronger than the existence of an isomorphism between a and b in C. But for category theory as I understand it (which has the category of groups as a basic example), there is no meaningful such proposition in general.

My usual approach to this is to use a foundation in which there is no inherent notion of equality, such as preset theory, intensional type theory without identity types, or Michael Makkai's FOLDS. However, another approach (and one which is now better developed) is to use a foundation whose inherent equality is rich enough to contain the entire structure of an \infty-groupoid (or just a 0-groupoid, but that wouldn't be enough for 2-category theory). Then for two arbitrary objects a and b of a given category (or higher category) C, there is something like a proposition (a = b), but this is actually a homotopy type (which when C is a 1-category will happen to be a homotopy 0-type), and it is not too strong, being simply the set (or \infty-groupoid) of isomorphisms (or equivalences) between a and b.

For numerical analysis, you don't need all of this fancy stuff. However, the basic notion of number can be formulated in this framework (at least given an axiom of infinity), so it still works.

Posted by: Toby Bartels on March 14, 2011 10:32 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

We just released a preprint:
Computer certified efficient exact reals in Coq.

Posted by: Bas Spitters on March 15, 2011 10:28 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Here’s a late comment from another dark, smoky corner of The Error Bar:

In fact, the most obvious foundation of numerical analysis is second-order arithmetic, a weak foundation which is underlain by numbers! Why pay more?

Here you’re of course talking about economy of concepts, but I’m sure you’re aware of the irony in another reading of this statement:

Second-order arithmetic (with full comprehension) is incomprehensibly much stronger than any proposed type-theoretical foundation in terms of proof-theoretical strength.

Ordinary mathematics can be done in vastly weaker systems, and numerical analysis itself can (and probably should) be done in recursive systems (perhaps even very weak recursive systems that guarantee small computational complexity of extracted programs).

Posted by: Ulrik on March 18, 2011 10:52 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

That's a good point. The ‘most obvious’ foundation of anything is often much stronger than one really needs (at least when it's strong enough). Still, I think that even a reasonably weak foundation for numerical analysis would most naturally be built using numbers as the basic concept. And I don't think that you're disagreeing with that.

Posted by: Toby Bartels on March 19, 2011 5:08 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Mike wrote

Roughly, the goal of this project is to develop a formal language and semantics, similar to the language and semantics of set theory, but in which the fundamental objects are homotopy types (a.k.a. \infty-groupoids) rather than sets.

Isn’t this still a limited vision for foundations in that, rather than just homotopy types, we should want fundamental objects to be directed? I.e., shouldn’t we hope for \infty-categories to be the proper foundation?

Posted by: David Corfield on March 13, 2011 11:02 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Now one of the most basic propositions in mathematics is an assertion of equality, “x = y”. Thus, for any type A, we should have a type dependent on two variables x, y \in A expressing this proposition. This type is usually written as Id_A(x,y) and called an identity type. If each type Id_A(x,y) is always a subsingleton (i.e. if two things can be equal in at most one way), then the identity types are called extensional; otherwise they are intensional.

This does not look correct to me. Whether a type theory, or its identity types, are intensional or extensional depends on what identities you can prove, not necessarily on what the identity types look like or how many inhabitants they have.

Intensional type theory and intensional identity types roughly follow the principle that two things are identical if they are defined identically. You can take this very literally, and say that 1 + 1 and 2 are not intensionally equal (prima facie), but most intensional type theories are a little more lax, and will say that two definitions are identical if they are convertible, which can be tested by reducing both to normal form (induction principles will get you a little beyond this, though). So, two values of an inductive type are identical if they reduce to the same tree of constructors, and two lambda terms are equal if they have the same beta (possibly eta) normal form (even eta is sometimes called ‘extensional’, though, as it is not purely a computational rule).
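The convertibility test described above (reduce both terms to normal form and compare) can be made concrete on a toy term language. This is my own illustrative representation, not any real type theory: terms are numerals and sums, and the only reduction rule evaluates sums.

```python
# Toy terms: ('lit', n) for a numeral, ('add', t1, t2) for a sum.
def normalize(t):
    """Reduce a term to its normal form, here always a single numeral."""
    if t[0] == 'lit':
        return t
    _, a, b = t
    return ('lit', normalize(a)[1] + normalize(b)[1])

def convertible(s, t):
    # definitional equality: identical normal forms
    return normalize(s) == normalize(t)

one_plus_one = ('add', ('lit', 1), ('lit', 1))
two = ('lit', 2)
# distinct terms, but convertible: both normalize to ('lit', 2)
```

So 1 + 1 and 2 are distinct as definitions yet definitionally equal, matching the “laxness” described above. Genuinely extensional principles such as function extensionality go beyond anything such a mechanical normalization test can decide.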

Now, by contrast, I think most type theorists would say that identity types are extensional if they allow two values to be proved equal by more appropriately mathematical standards. So, for instance, we don’t think of inhabitants of A \to B as mere lambda terms, but as functions, and so we might expect to be able to prove two functions equal using the usual mathematical rule:

(\forall x. f x = g x) \to f = g

Two functions are identical if they yield identical results for all inputs (or possibly, if they yield identical results for identical inputs). But this cannot be proved for intensional identity types/type theory, because the fact that two functions produce the same result for all inputs does not mean that they are convertible terms.
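As a loose analogy only (Python, not type theory): two function definitions can agree on every input we test while remaining distinct as definitions, and only the syntactic comparison is mechanically easy; full extensional equality would require checking infinitely many inputs.

```python
f = lambda x: x + x   # "doubling by addition"
g = lambda x: 2 * x   # "doubling by multiplication"

# extensionally they agree on every sampled input...
agree = all(f(x) == g(x) for x in range(100))

# ...but intensionally they are distinct definitions:
# the compiled code objects differ
distinct = f.__code__.co_code != g.__code__.co_code
```

This mirrors the gap described above: producing the same result on all inputs does not make two definitions convertible, which is why an intensional theory cannot prove the extensionality rule.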

What extensional type theories do specifically is not merely add the above sort of extensionality proofs, but give them force during type checking. They basically say that if two things are provably equal, then they are convertible. This is why they have undecidable type checking: to decide whether T[f] and T[g] are the same type expression, one has to decide whether f and g are the same, which in general entails deciding whether two arbitrary functions are equal on all inputs, and that is impossible.

There are, however, some type theories bouncing round (like observational type theory) where the equality/identity types are kept separate from the equality used for checking types. This allows the identity types to be extensional, which is useful, while the type checking is decidable, which is also useful. Both traditional intensional and extensional type theory keep the two sorts of equality tied together, sacrificing one of the useful properties. And this actually makes usual intensional type theory very unsatisfying as well, because the things we end up allowing to be proved identical are chosen for reasons of mechanical decidability, not because they actually make any kind of mathematical sense.

(My earlier mention of eta equality is an example of this. Coq does not do eta expansion during normalization, so \neg \forall f. f = (\lambda x \to f x). But Agda does, so \forall f. f = (\lambda x \to f x). But both are (arguably) intensional type theories.)
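The Agda side of this contrast can be checked directly; the following is a small sketch that Agda accepts because its definitional equality includes eta for functions.

```agda
open import Relation.Binary.PropositionalEquality

-- Agda's definitional equality includes eta for functions, so f and
-- λ x → f x are convertible, and refl suffices:
eta : {A B : Set} (f : A → B) → f ≡ (λ x → f x)
eta f = refl
-- In Coq (before v8.4's eta support), the analogous use of refl
-- is rejected, because the two terms are not convertible there.
```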

Anyhow, there are rules for identity types that are still considered intensional, but conflict with the homotopy stuff. For instance, Agda (by default) allows the K rule for equality, which looks like:

K : \Pi_{x : A} \Pi_{eq : Id_A x x}. P refl \to P eq

This rule is implied by the dependent pattern matching Agda normally allows. Essentially, it says that every proof of Id_A x x is refl. However, this causes a conflict if we want equality of sets to be isomorphism, because then there may be many isomorphisms between a set and itself, but K will say that all of them are the identity. Traditional Martin-Löf intensional type theory lacks K, but it is not this lack that makes it intensional specifically. Still, the fact that it does lack K puts it in the right spot to be the internal language of model categories, I guess, which is lucky.
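Here is how K falls out of dependent pattern matching, as a sketch in default Agda (this definition is exactly what --without-K later disallows):

```agda
open import Relation.Binary.PropositionalEquality

-- Matching eq : x ≡ x against refl is only justified if every such
-- proof *is* refl — which is precisely Streicher's K:
K : {A : Set} {x : A} (P : x ≡ x → Set) → P refl → (eq : x ≡ x) → P eq
K P p refl = p
```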

Extensional type theories are likely to have K as well, I imagine, since they might simply decide that the right notion of equality on identity proofs is, “they’re all equal.” And then it becomes so. It also wouldn’t surprise me if they have type formations like unions and intersections (and quotients?) that don’t respect isomorphism. So they may be more likely to fall afoul of the homotopy stuff.

As a final note: I said earlier that intensional type theory makes an unfortunate sacrifice. It ties the equality proofs to rules we have selected for decidability, not mathematical sensibility. The good news appears to be that homotopy type theory is actually not intensional in this sense. The example earlier was:

(\forall x. f x = g x) \to f = g

And Voevodsky has reportedly proved that that is implied by the univalence axiom. So the axioms added for homotopy type theory turn the intensional identity types into extensional identity types. And from what I can gather, one can see homotopy type theory in a similar way to the observational type theory I mentioned. The difference is that OTT builds up a proof irrelevant equality (at most one inhabitant), while univalent identity types may have computational content, being isomorphisms (or even higher equivalences) between types. But I would expect that a usable homotopy type theory would maintain the separation between (extensional) propositional identity types, and the equality used for type checking, because otherwise one would have to decide isomorphism of sets (for instance) to check types.

Sorry for rambling at such length. Hopefully some of the content is useful.

Posted by: Dan Doel on March 14, 2011 6:00 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Oh, darn, I knew I was going to get into trouble with this.

I was trying to make a distinction between extensional identity types and function extensionality, which I think I’ve gathered are different beasts. I was also trying to avoid delving into the distinction between definitional equality and propositional equality. But in the process I evidently started using words in the wrong way. Would it be correct if everywhere I said “extensional” I changed it to say “UIP” (the axiom of Uniqueness of Identity Proofs), or “propositional equality coincides with definitional equality,” instead?

Also, I did not realize that Agda’s dependent pattern matching proves axiom K! Are there natural fixes?

Posted by: Mike Shulman on March 14, 2011 7:13 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

By the way, I’ve been told that Coq v8.4 will do eta expansion. Eta expansion is, I would say, basically the same as “functional extensionality w.r.t. definitional equality”: it says that two functions which compute definitionally equal outputs on all possible inputs (like f and \lambda x. f x) are themselves definitionally equal. Similarly, “functional extensionality w.r.t. propositional equality”, i.e. path spaces (which is what can be proved from the univalence axiom together with eta expansion—I’ll get to that in a later post), says that two functions which compute propositionally equal (i.e. homotopic) outputs on all inputs are themselves propositionally equal (i.e. homotopic).

(From the homotopical viewpoint, “definitional equality” can be thought of as equality in the point-set model, while “propositional equality” can be thought of as “homotopy” or “equality in the homotopy theory”. This is not of course what the type theorists mean by them, but I believe it’s what they interpret to in the homotopy models. Thus, from this point of view, eta expansion says that the 1-category of types (the category on which we have the weak factorization system) has actual dependent products instead of just weak ones (that satisfy existence but not uniqueness), while propositional function extensionality says that the (\infty,1)-category presented by this homotopy theory has actual (\infty,1)-categorical dependent products, rather than just “weak” ones.)

Both sorts of function extensionality are, I believe, completely disjoint from the question I was trying to refer to by “extensional identity types”, which is about whether the path objects are monomorphisms, whether identity types are subsingletons, whether identity proofs are unique, whether propositional equality coincides with definitional equality, and whether the (\infty,1)-category is actually a 1-category. (In particular, this is about all identity types, not just identity types of function types.)

It seems logical to me to say that identity types are “intensional” if they allow us to distinguish between different proofs of the same equality (and otherwise “extensional”), in the same way that function types are intensional if they allow us to distinguish between different “functions” which nevertheless compute the same value on all inputs (and otherwise “extensional”). But it sounds almost as though you want “intensional” and “extensional” when applied to identity types to have the reverse meanings!

Anyway, is there a concise type-theoretic name for the thing I do want to talk about?

Posted by: Mike Shulman on March 14, 2011 7:46 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

I was trying to make a distinction between extensional identity types and function extensionality, which I think I’ve gathered are different beasts.

There are two ways you could take the phrase “extensional identity types.” Myself, I’d probably use it to mean identity types that admit proofs of function extensionality (and other types; extensional equality of codata via bisimulation for instance). Namely, there is a term:

ext_{A \to B} f g : (\Pi_{x : A}. Id_B (f x) (g x)) \to Id_{A \to B} f g

And the univalent stuff does this, which is good. The other way you could take it is that equality of the inhabitants of identity types is extensional, and since “they’re proofs” and “proofs should be irrelevant,” that extensional equality says all inhabitants of identity types are equal. This would be the thing that conflicts with the homotopy interpretation.

Would it be correct if everywhere I said “extensional” I changed it to say “UIP” (the axiom of Uniqueness of Identity Proofs)

Probably. Or something close. UIP truncates everything to the set (in the homotopy sense) level.
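For reference, UIP stated as an Agda type, in a sketch that only type-checks because default Agda's pattern matching bakes in K:

```agda
open import Relation.Binary.PropositionalEquality

-- Uniqueness of Identity Proofs: any two proofs of the same equality
-- are themselves equal. The second refl match requires K.
UIP : {A : Set} {x y : A} (p q : x ≡ y) → p ≡ q
UIP refl refl = refl
```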

or “propositional equality coincides with definitional equality,” instead?

Propositional equality sort-of coincides with definitional equality in intensional type theory, in that the latter determines what we can prove with the former. This is because whether

refl : Id_A x y

is well-typed is determined by whether x is definitionally equal to y. Induction principles get a little extra mileage out of propositional equality, because we can prove things like:

lemma : \Pi_{n:\mathbb{N}}. Id_{\mathbb{N}} n (n+0)

whereas a variable n is not definitionally equal to n+0.

However, whether we can demonstrate a term:

eq : Id_{A \to B} f (\lambda x. f x)

is entirely dependent on whether the definitional equality includes eta.

So, despite Martin-Löf having the idea to acknowledge definitional equality (or, as he actually called it, judgmental equality; he used “definitional” for notational aliases) as distinct from propositional equality, he kept them tied together in both his intensional and extensional type theories. Intensional type theory goes by the rule, “if two things are definitionally equal, they are propositionally equal.” Extensional type theory adds some ways of proving propositional equality, and then turns it around and says, “if two things are propositionally equal, they’re definitionally equal.” Leaving them disconnected seems to be a newer idea.

Also, I did not realize that Agda’s dependent pattern matching proves axiom K! Are there natural fixes?

Recently a flag --without-K was added that disallows certain pattern matching such that K is hopefully not derivable. I don’t think anyone’s done a rigorous proof that it certainly rules it out, though. And of course, the flag was added because this stuff is getting popular. :)

By the way, I’ve been told that Coq v8.4 will do eta expansion.

Yeah, now that you mention it, I think I heard that too. I don’t keep up with Coq as closely, so I guess it slipped my mind.

Eta expansion is, I would say, basically the same as “functional extensionality w.r.t. definitional equality”: it says that two functions which compute definitionally equal outputs on all possible inputs (like f and λx.fx) are themselves definitionally equal.

Yeah, I suppose it may make sense to say that, inasmuch as definitional equality has to deal with neutral terms (variables, for instance) in a more first class way. So if z is some variable of unknown value, the fact that f and g actually compute the same function does not imply that f z is definitionally equal to g z. Whereas f z is definitionally equal to (λx.fx) z.

But the really important thing (if you ask, say, Conor McBride, and I’m inclined to agree with him) is to not tie definitional and propositional equality together. Whether your definitional equality involves eta expansion (or other fancy processing; Epigram 2 tries to use identities like map \; id = id to reduce things, too) or not should only affect which terms/programs pass the type checker, not which terms can be proved propositionally equal. And I think the homotopy type theories do better in this regard than traditional intensional type theories.

It seems logical to me to say that identity types are “intensional” if they allow us to distinguish between different proofs of the same equality (and otherwise “extensional”), in the same way that function types are intensional if they allow us to distinguish between different “functions” which nevertheless compute the same value on all inputs (and otherwise “extensional”). But it sounds almost as though you want “intensional” and “extensional” when applied to identity types to have the reverse meanings!

Characterizations of type theories as intensional and extensional is all about how equality behaves. So, I don’t think a type theorist would say that function types are intensional or extensional. They would say that a type theory is intensional if equality of functions therein is intensional.

Where it is kind of unclear is that “extensional type theory” typically means that the definitional equality is extensional, and in the sense of f and g above, too, not just eta. In traditional extensional type theory, a term implementing merge sort is (supposed to be) definitionally equal to a term implementing insertion sort.

But if we’re serious about separating definitional and propositional equality, we can ask how the latter sees terms of other types. So I would say an intensional identity type would be unlikely to admit a proof that our merge sort function is propositionally equal to our insertion sort function. While an extensional identity type would admit such a proof. That is how I’d use the words, at least.

Posted by: Dan Doel on March 14, 2011 9:13 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Incidentally, one needn’t even get as fancy as merge sort vs. insertion sort to see unfortunate consequences of sticking with (traditional) intensional type theory. I noted that a variable n is not definitionally equal to n+0 (assuming that addition is defined by recursion on the left argument).

Using induction, we can prove:

\forall n : \mathbb{N}. Id_\mathbb{N} n (n + 0)

However, it is not possible to prove:

Id_{\mathbb{N} \to \mathbb{N}} (\lambda n. n) (\lambda n. n + 0)

in intensional type theory (say, Agda or Coq). We could, of course, make the definitional equality aware of some arithmetic identities on \mathbb{N}, in which case it would be provable. But that would just be ad-hoc, and we could come up with as many additional examples as we wish.
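The provable half of this example goes through by a one-line induction; a sketch in Agda:

```agda
open import Relation.Binary.PropositionalEquality
open import Data.Nat

-- Provable by induction, even though n and n + 0 are not convertible
-- when n is a variable (since _+_ recurses on its left argument):
lemma : ∀ n → n ≡ n + 0
lemma zero    = refl
lemma (suc n) = cong suc (lemma n)

-- By contrast, (λ n → n) ≡ (λ n → n + 0) has no proof here
-- without assuming function extensionality.
```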

Posted by: Dan Doel on March 14, 2011 9:38 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Propositional equality sort-of coincides with definitional equality in intensional type theory, in that the latter determines what we can prove with the former.

Yes, but there’s a big difference, especially constructively, between not being able to disprove something and asserting that it is true. To me saying “propositional equality coincides with definitional equality” means we can definitely assert that propositionally equal things are definitionally equal, instead of leaving the question open as intensional type theory does.

Recently a flag –without-K was added that disallows certain pattern matching such that K is hopefully not derivable. I don’t think anyone’s done a rigorous proof that it certainly rules it out, though.

Isn’t there a theory of dependent inductive types??

The other way you could take it is that equality of the inhabitants of identity types is extensional

Yes, that’s what I want. It seems natural to me to call that “extensional identity types,” since if “extensional identity types” means instead that the function types have extensional equality, then there’s no corresponding phrase left to mean that the identity types have extensional equality—whereas we can always say “extensional function types” to mean that the function types have extensional equality.

By the way, it seems curious to me to hear you call intensional function types “unfortunate,” because I’ve heard other people say that intensional equality for functions is intended, since in practice there’s no way to verify that (say) any two computer programs actually compute the same function or not.

Posted by: Mike Shulman on March 14, 2011 7:11 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Isn’t there a theory of dependent inductive types??

Yes. Agda does not use that theory directly like Coq does. Dependent pattern matching is a primitive. That is, definitions like:

g : (x y : A) -> x = y -> ...
g x .x refl = ...

are the primitive means of eliminating inductive types in Agda. There is also a theory of how to turn such definitions by dependent pattern-matching into definitions using eliminators (explained in Eliminating Dependent Pattern Matching), but it uses K. I don’t know off hand of any work on what would be the laxest set of restrictions to the above definitions that would ensure that you can translate them to just eliminators including J. I guess it may have been worked on in Coq, but I don’t follow it that closely.
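For contrast with K, the eliminator J can itself be written by pattern matching, in the restricted form that --without-K still permits (the endpoint y is a fresh variable, not forced to coincide with x in the type); a sketch:

```agda
open import Relation.Binary.PropositionalEquality

-- J (path induction): matching on refl here unifies y with x, which
-- is the benign kind of matching, unlike K's match on eq : x ≡ x.
J : {A : Set} {x : A} (P : (y : A) → x ≡ y → Set)
  → P x refl → (y : A) (eq : x ≡ y) → P y eq
J P p _ refl = p
```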

By the way, it seems curious to me to hear you call intensional function types “unfortunate,” because I’ve heard other people say that intensional equality for functions is intended, since in practice there’s no way to verify that (say) any two computer programs actually compute the same function or not.

I’m not sure how anyone could consider my \lambda n. n versus \lambda n. n + 0 example not to be somewhat ugly, unless they are an ultrafinitist or something. We are easily able to prove within the theory that they return the same value for every possible input, but we are not able to prove that they are the same function. Except, if we make our type checker reduce some common arithmetic identities during checking, then we will be able to prove that they are the same function.

It occurs to me that propositional vs. definitional equality in actual computer implementations of type theory is viewed a little differently than the propositional vs. judgmental dichotomy that Martin-Löf worked with. That is, he may have viewed the judgmental equality as the primary notion of equality of objects in the theory. Propositional equality is then a reification of the judgmental equality into a type. But what was really cared about was whether two things were judgmentally equal.

This makes the decision to reflect propositional equality back into judgmental equality in extensional theory a little more understandable as well. You still care about what things are judgmentally equal, but you want that equality to allow functional extensionality and whatnot. (And he wasn’t trying to implement it on a computer, so who cares if it’s undecidable?)

But, in computer implementations, the definitional equality is generally not something we care about. It is typically ‘just’ the equality that the type checker uses, and thus it is convenient if \lambda x. f x is definitionally equal to f, because then programs will pass the checker with less work, but it is frequently overly conservative, because it needs to be decidable. The equality I care about is the propositional identity type, and I would consider two terms (provably) equal if the identity type between them is inhabited.

But this means that, ideally, I want the behavior of the propositional equality to be specified on its own terms, and not to just inherit whatever concessions and tricks the definitional equality is using. Let’s decide whether we think f and \lambda x. f x are the same thing, and if we decide they are, it’d be nice if that held propositionally even if the type checker doesn’t do eta expansion (obviously, if we decide they are not, the definitional equality can’t incorporate eta).

The univalence axiom does this from what I gather. We are no longer just inheriting the definitional equality, because we also have access to, “if two terms are weakly equivalent, they are identical,” (though not stated that way directly). And that turns out to be a satisfyingly strong axiom.

Posted by: Dan Doel on March 15, 2011 1:17 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

My understanding is that in Coq, pattern matching is also the primitive. Inductive definitions generate the “recursor” functions that end in _ind and _rec and _rect, but if you look at those functions they’re defined in terms of the match primitive, rather than vice versa.

We are easily able to prove within the theory that they return the same value for every possible input, but we are not able to prove that they are the same function.

My point was that that’s only a problem if you accept the notion that a “function” should be determined by its values. If we write two computer programs, one which returns its input unmodified and one which adds zero to its input, then the two programs will not (unless we have a clever optimizing compiler) be identical as code objects, even though they return the same value on all inputs.

Posted by: Mike Shulman on March 15, 2011 2:01 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

The Equations extension adds agda-style pattern matching to Coq. It uses the K-axiom. This has the effect that the definitions do not reduce inside the system, but instead we only obtain rewrite rules.

There is a prototype of Coq with K. Hugo Herbelin even has a prototype of Coq that “enforces the level of inductive types to be above the levels of the indices (a.k.a. real arguments). For instance, identity x y is at a level higher than the level of the type of x and y. This complies with the homotopy approach of type theory.”

Posted by: Bas Spitters on March 15, 2011 9:51 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

enforces the level of inductive types to be above the levels of the indices

I don’t understand; what does “level” mean here, and what does that have to do with homotopy? Is it about universes?

Posted by: Mike Shulman on March 15, 2011 3:58 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

My point was that that’s only a problem if you accept the notion that a “function” should be determined by its values. If we write two computer programs, one which returns its input unmodified and one which adds zero to its input, then the two programs will not (unless we have a clever optimizing compiler) be identical as code objects, even though they return the same value on all inputs.

Well, in that case, I’d probably retort that that’s exactly what functions are. They’re the abstraction we use when we’re interested in reasoning about mapping inputs to outputs, but not really about how exactly that is accomplished, or how many resources it takes. That turns out to be a pretty useful abstraction.

I can certainly imagine it being useful to reason about algorithms at a more detailed level, and could believe that a type that allowed us to investigate and prove things about the above forgotten properties would be useful.

But intensional functions are not it. All we can do with them is apply them, so the interesting information has been thrown away, except for the, “these aren’t provably equal,” bit. They’re identical in all the ways we can actually use them,* but we can’t prove the, “these are interchangeable,” statement, even though said statement is the only place where they fail to be interchangeable.

So it seems like the worst of both worlds. :)

[*] This fact is actually how observational type theory works. If you eliminate the identity types from intensional type theory, then extensionally equal functions are actually interchangeable everywhere in the theory. So you can interpret a theory with identity types that treat functions extensionally into said theory, and everything works out.

From what I gather, a similar thing is true of homotopy type theory. You take a theory with identity types, and interpret into a theory without identity types, interpreting the identities into witnesses of the appropriate weak equivalence. And since it’s naturally impossible to talk about sets except up to isomorphism (for instance), it all works out.

Posted by: Dan Doel on March 15, 2011 5:51 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

This is amusing; I feel like I’m on the wrong side of the argument here. I agree with you—I like extensional functions. But I had the impression that some type theorists, at least, like intensional ones; otherwise why doesn’t everyone assume function extensionality? Why isn’t it built into Coq and Agda?

One other response I can think of is that plain intensional functions carry no data other than their extensional parts and their “not provably equal”-ness, but one could theoretically add other data to intensional type theory that can distinguish other properties of functions, which one could not add to type theory with extensional function types.

Posted by: Mike Shulman on March 15, 2011 6:13 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

This is amusing; I feel like I’m on the wrong side of the argument here. I agree with you—I like extensional functions. But I had the impression that some type theorists, at least, like intensional ones; otherwise why doesn’t everyone assume function extensionality? Why isn’t it built into Coq and Agda?

Well, I frequently just postulate extensionality in Agda, because it’s the only way to prove what I want (and I don’t want to muck with setoids). But that’s not really a good solution, because it gums up the way computation works. A term like:

subst P (ext f g ...) v

can’t reduce, because (ext ...) is an extra, non-canonical inhabitant of f = g. One of the significant results of the OTT folk is figuring out that you can have a propositional equality that is substitutive, computes properly, and incorporates all the extensional axioms we want. Making homotopy type theory behave properly is a similar challenge that isn’t met by just adding axioms to Coq/Agda/….
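A minimal sketch of that situation in Agda (the names P, pointwise, and v are placeholders, not from any library):

```agda
open import Relation.Binary.PropositionalEquality

postulate
  A B : Set
  f g : A → B
  pointwise : ∀ x → f x ≡ g x
  -- function extensionality as a bare axiom:
  ext : {h k : A → B} → (∀ x → h x ≡ k x) → h ≡ k
  P : (A → B) → Set
  v : P f

-- subst only computes when its equality argument is refl; `ext pointwise`
-- is a non-canonical proof, so this term is permanently stuck:
stuck : P g
stuck = subst P (ext pointwise) v
```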

But also, the idea of really separating the propositional and definitional equality is somewhat novel. There may be some prior work that inspired them, but if you asked most type theorists before OTT, they’d probably tell you you have two options:

  1. Intensional type theory, where we just reify the definitional equality into a type. The equality is weak, but the type checking is still decidable.
  2. Extensional type theory, where we add all our nice axioms, and then stick it back into the definitional equality, making it undecidable.

Plenty will probably still tell you that. And obviously the first tends to be preferable for most people because of the decidability thing.

I think Agda is waiting on the OTT guys to implement a proof of concept of their stuff in the form of Epigram 2 before they try overhauling the system to support it (and Coq tends to be much more conservative than Agda, so…). And while it has bursts of activity, it probably still has a ways to go (unless you want to mess with the pretty bare-bones, intermediate-language level system they currently have).

One other response I can think of is that plain intensional functions carry no data other than their extensional parts and their “not provably equal”-ness, but one could theoretically add other data to intensional type theory that can distinguish other properties of functions, which one could not add to type theory with extensional function types.

In theory, there’s probably no reason a theory couldn’t have both functions, which would be treated extensionally, and algorithms, which would be more intensional, but carry more interesting information about their computational nature. And there could be an operation that forgets the extra information, taking an algorithm to the function it computes.

In a sense, we already do this plenty in functional languages. We create term algebra representations for embedded languages, write programs in the embedded language (where an algorithm taking an A to a B is often something like a Kleisli arrow A -> M B, where the monad specifies the language; but it could also be more opaque like A ~> B), do what analyses we want, and interpret the result into appropriate functions. Making this distinction available for the terms of the language itself isn’t something you usually see, but it’d be interesting.

Posted by: Dan Doel on March 15, 2011 8:12 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Making this distinction available for the terms of the language itself isn’t something you usually see, but it’d be interesting.

It occurred to me that I have seen something like this before, but not as separate types (and as an aside, I think this argues for my usage of ‘extensional identity type’ :)). The relevant paper is: Intensionality, Extensionality and Proof Irrelevance in Modal Type Theory, by Frank Pfenning.

This is also the paper I was thinking of when I said you could get really strict with intensionality such that 1 + 1 \neq 2. In his type theory, he has three judgments:

  1. e is an expression of type A, written e :: A
  2. e is a value of type A, written e : A
  3. e is a proof of type A, written e \div A

So, whether we’re thinking of something intensionally, extensionally or irrelevantly depends not on the type, but on the context in which we’re considering it. We have quantifiers for all these typing connectives, so there’s \Pi_{x \div A}. B for quantifying over something we consider a proof. And there are inclusions between the categories. We can use an expression where a value is expected (I think), and a value where a proof is expected, but not going backwards.

Each of the above judgment types also has a corresponding equality:

  1. Intensional equality \equiv, which is straight syntactic equality, without even normalization
  2. Extensional equality =, which does normalization, and (I believe) allows us to prove f = g if they are pointwise identical
  3. Proof equality, where all terms of a type are equal, and which is too trivial to have a special symbol

These are all actually judgmental equalities, so a more Martin-loef mindset is probably appropriate. Also, confusingly, there are multiple judgments for certain equalities. That is:

M \equiv N : A \qquad M = N :: A

are both essentially “M is intensionally equal to N” judgments. And proof equality does have a judgment of the second form, M = N \div A. I think this is done to make the definition of each easier, so the M = N \star A form should probably be considered primary.

Anyhow, that’s the sketch. Obviously this doesn’t (yet) allow you to do much interesting stuff (resource analysis, etc.) on intensionally considered things. I’m honestly not too sure what the intensional portion of the language is good for as is. But it’s conceivable that it could be extended with more interesting facilities, and it shows it can be done in a language that also allows you to work with things extensionally (and even irrelevantly). Another good thing might be that since we’re achieving this by using the same types with different contexts, we might not lose any efficiency in our end product, which probably can’t be said of the embedded-language interpreter stuff I mentioned in my last message.

And, for my parenthetical point at the beginning, I don’t know how many proofs the propositional equivalents of each of the judgmental equalities is supposed to have. It may well depend on which equality is being used to judge their equivalence. The reason x = y :: A is called intensional equality in the paper is because it treats x and y intensionally, x = y : A treats them extensionally, and x = y \div A treats them proof-irrelevantly. And how x and y are treated doesn’t have to do with A, but with the typing connective, so there is no type of ‘intensional functions’ that doesn’t incorporate point-wise equality.

Posted by: Dan Doel on March 15, 2011 8:53 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

There seems to be a close connection between OTT and the univalence axiom. Function extensionality tells us what the equality on function types ought to be; OTT says it might not be that to start with, but you can fix that by redefining it. Univalence tells us what the equality on the universe ought to be; if it isn’t that, can we fix it by redefining it? I have thought for a while that using either “homotopy-exactness” or “homotopy-setoids” we ought to be able to construct a univalent universe from a non-univalent one—although we would probably need to assume function extensionality. Does OTT force UIP, or could we use that to get function extensionality first?

Posted by: Mike Shulman on March 15, 2011 9:56 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Does OTT force UIP, or could we use that to get function extensionality first?

I must admit, I haven’t read through the derivation of function extensionality from univalence, so I don’t know how the proof works (I tried a little to figure it out on my own, but didn’t come up with any good ideas).

OTT satisfies UIP by construction, though. Essentially, one translates equality of sets and values into more mundane types by (meta-)induction over the structure of the types involved. And the result is always 0, 1, or some exponential over them, which means that the equality types are translated to types with at most one value (extensionally). The translation looks something like:

TEq : Type -> Type -> Type
TEq 0 0 = 1
TEq 0 1 = 0
...

and term equality is heterogeneous:

VEq : (A : Type) -> A -> (B : Type) -> Type
VEq 0 _ 0 _ = 1
VEq 1 _ 1 _ = 1
VEq 2 false 2 false = 1
...
VEq (Pi S F) f (Pi T G) g =
      (x : S) -> (y : T) -> VEq S x T y -> VEq (F x) (f x) (G y) (g y)
...
VEq U S U T = TEq S T
...

and the off-diagonal is 0. This translation just assumes that merely isomorphic types aren’t the same, though.
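The flattening to at most one proof can be checked in a small toy model (my own sketch, not the actual OTT construction): type codes cover only 0, 1, 2, and binary products, and each translated equality type is represented by its set of proofs.

```python
# Toy model (an illustration, not the actual OTT translation): type
# codes '0', '1', '2', and ('x', S, T) for binary products. teq maps a
# pair of codes to the *set of proofs* of their translated equality
# type; on this fragment the result always has at most one element,
# which is UIP at the level of the model.
ZERO = frozenset()          # the empty type: no proofs
ONE = frozenset({'*'})      # the unit type: exactly one trivial proof

def teq(s, t):
    if s == t and s in ('0', '1', '2'):
        return ONE
    if isinstance(s, tuple) and isinstance(t, tuple):
        # product case: a proof is a pair of componentwise proofs
        return frozenset((p, q) for p in teq(s[1], t[1])
                                for q in teq(s[2], t[2]))
    return ZERO             # the off-diagonal is 0

assert teq('2', '2') == ONE
assert teq('0', '1') == ZERO
# even on the diagonal, never more than one proof:
assert len(teq(('x', '1', '2'), ('x', '1', '2'))) <= 1
```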

However, you can imagine a similar translation for univalence, where we define something more like:

TEq S T = <weak equivalence>

Then our translation no longer satisfies UIP. A recent talk on univalence by Thierry Coquand suggested (page 54) that such a translation is sensible, and is an extension of the OTT approach to proof-relevant identity types (whereas the original OTT approach translated everything to irrelevant types, isomorphic to 0 or 1).

Posted by: Dan Doel on March 16, 2011 1:17 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

The OTT translation of equality types also stops satisfying UIP when you add quotients and coinductive types to it in the obvious way.

The reason is that for a quotient type A/R, the proof that two terms x and y are equal at A/R is an element of R(x,y). So unless you only allow quotients by proof-irrelevant relations, UIP will fail.

For coinductive types, you want to take an intensional codata type, and take its setoid equality to be bisimulation. But the general pattern for doing this is to introduce a second codatatype of “proofs of bisimilarity”, which you cannot generally force to be proof-irrelevant. (It might be possible to use a double-negated proof of bisimilarity, which would be proof-irrelevant, but perhaps that would let Markov’s principle in…?)

Posted by: Neel Krishnaswami on March 16, 2011 11:55 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Interesting! From a homotopical perspective, “of course” quotients are going to give you types that aren’t sets. The natural thing to take a quotient by homotopically, though, is not an equivalence relation, but a groupoid with all higher coherences, since that’s the structure formed by the tower of identity types of a particular type.

I guess I don’t really know what quotients by “proof-relevant relations” look like in type theory; I’ve only seen quotient axioms written down in the presence of UIP. For instance, if p : R(x,y) and q : R(y,z), then we should have \bar{p} : Id_{A/R}(\bar{x},\bar{y}) and \bar{q} : Id_{A/R}(\bar{y},\bar{z}), and so we have a composite J_{\bar{q}}(\bar{x},\bar{y},\bar{p}) : Id_{A/R}(\bar{x},\bar{z}); is there anything that enforces this to be \overline{c(p,q)}, where c witnesses the transitivity (“is the composition operation”) of R? Even worse, if also r : R(z,w), then the triple composites in Id_{A/R}(\bar{x},\bar{w}) are provably (propositionally) equal (i.e. homotopic), whereas if R is only a “proof-relevant relation” then c(p,c(q,r)) and c(c(p,q),r) need not have anything to do with each other, right? So it seems like there would be a lot of stuff appearing in the quotient that isn’t determined by the input data.

What you say about coinductive types is particularly intriguing. With UIP, I think of coinductive types as terminal coalgebras for endofunctors. Does what you say imply that somehow the “homotopical” terminal coalgebra is different from the “set-theoretic” one? Can you give an example of a particular coinductive type where this happens?

Posted by: Mike Shulman on March 16, 2011 3:46 PM | Permalink | PGP Sig | Reply to this

Re: Homotopy Type Theory, I

Regarding quotients, I haven’t really seen “proof-relevant” quotients, either. Martin Hofmann’s model of quotient types uses a proof-irrelevant universe of propositions, as does OTT (which derives from that work). But just looking at the translation, it doesn’t seem to use proof irrelevance except to establish UIP.

The coinduction thing is the first thing everyone trips over when using coinductive definitions in Coq. Basically, you can’t prove that a codatatype is a terminal coalgebra, because the universality property is too extensional.

Suppose you have a functor F, and two elements x and y of the coinductive type \nu F. Coinduction tells us that if we can find an equivalence relation \sim which respects the coalgebra structure and relates x and y, then x and y are equal.

Now think about the introduction rule for \nu F, the principle of corecursion. This says that if you’ve got a type A which is an F-coalgebra (i.e., you can produce a map f : A \to F(A)), then you get a map \nu(f) : A \to \nu F.

But now we’re in trouble! Corecursion works fine in Coq, but coinduction doesn’t.

Concretely, take streams (i.e., take the functor F(X) = 2 \times X). Now here are two ways of generating the stream of booleans \top, \bot, \top, \bot, \top, \bot, \ldots. First, take the carrier to be 2, and the coalgebra to be the function f = \lambda b.\,(b, \neg b), so that bs = \nu(f)\;\top. Second, take the carrier to be \mathbb{N}, and the coalgebra to be the function g = \lambda n.\;(\mathit{even}?(n), n+1), so that bs' = \nu(g)\;0.

Coinduction tells us that bs = bs', but these are obviously not definitionally equal. Hence coinduction can’t be compatible with the intensional equality type.
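Modeling codata with lazy generators, the two corecursive definitions can be sketched as follows (my own paraphrase of the example, with `nu` unfolding a coalgebra into a stream):

```python
from itertools import islice

def nu(f, seed):
    """Unfold a coalgebra f : A -> (Bool, A) into an infinite stream."""
    while True:
        b, seed = f(seed)
        yield b

# First coalgebra: carrier 2, f b = (b, not b), seeded with top (True).
bs = nu(lambda b: (b, not b), True)
# Second coalgebra: carrier N, g n = (even? n, n + 1), seeded with 0.
bs_prime = nu(lambda n: (n % 2 == 0, n + 1), 0)

# The streams are bisimilar -- pointwise equal -- even though the two
# programs generating them are not definitionally equal.
print(list(islice(bs, 6)))        # [True, False, True, False, True, False]
print(list(islice(bs_prime, 6)))  # [True, False, True, False, True, False]
```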

The workaround for this is to give a second codatatype representing proofs of bisimilarity, so that such proofs can themselves be constructed by corecursion. Then you can define a setoid whose carrier is the underlying codatatype and whose equivalence relation is this second type; this setoid validates coinduction.

Posted by: Neel Krishnaswami on March 16, 2011 5:00 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Thanks, but by “this happening” I meant specifically the codata type of “proofs of bisimilarity” not being proof-irrelevant. Although now that I think about it some more, I guess that would obviously suffer from the same flaw, and what you’d really want is a third codata type of “proofs of bisimilarity between proofs of bisimilarity”, and so on… so that what you really need at the end is a homotopical quotient axiom, which can deal with a coherent higher groupoid as input.

Posted by: Mike Shulman on March 16, 2011 7:42 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

That sounds promising!

Posted by: Mike Shulman on March 16, 2011 4:26 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Today I finally heard some first-hand accounts of this – apparently historic – Oberwolfach meeting from the Utrecht delegation. Jaap van Oosten gave today’s colloquium talk on homotopy type theory.

One of the questions in the question session was: is it already conceivable that either homotopy theory or type theory would concretely profit from transferring actual results through the correspondence between the two?

For instance somebody voiced the claim that Voevodsky gave a type-theoretic analog of the proof of the long exact sequence in homotopy groups induced from a fibration, and that this proof is “remarkably simple”. Now the homotopy-theoretic proof (any of them) is not very complicated either, but maybe there is something in its type-theoretic version that would be of interest for homotopy theory/homotopy theorists?

Posted by: Urs Schreiber on March 17, 2011 8:07 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Do you mean historic? If not, I don’t know what you mean by “historical”.

Anyway, I think that’s an interesting question! Nothing jumped out at me as of interest to homotopy theorists separately, but there certainly might be something. I’m not as good at judging the opposite direction, not being a “real” type theorist myself, but what I’ve gathered seems to suggest that it isn’t so much the results which are interesting to them, but the ideas and methods. Some of them seem to think that the univalence axiom solves some long-standing problems with identity types.

Posted by: Mike Shulman on March 18, 2011 4:57 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Do you mean historic?

Ah, yes. Thanks.

Posted by: Urs Schreiber on March 18, 2011 8:36 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

This is fascinating stuff, and I can’t wait to hear more about it. Maybe someone could help me clear up a point of confusion though. I’m a homotopy theorist, so I’m trying to think in terms of the concrete models I know.

So according to this interpretation, it seems I’m supposed to think of a dependent type as a kind of fibration: the fibers over a point are telling me the type associated to that point. But in classical homotopy theory, the fibers of a fibration are all homotopy equivalent. So if I think of the factorization system (anodynes, Kan fibrations) in simplicial sets as my semantic model, do I not see interesting dependent types?

On the other hand, if I change my model to (left anodynes, left fibrations), things do become interesting: the fibers of a left fibration look like a family of spaces of varying homotopy type. But now the base is something more general than just a mere homotopy type. I really need to think of it as an \infty-category. So it feels like I have to widen my allowable “types” to see these interesting dependent types.

Is this right, or am I missing the point?

Posted by: Eric Finster on March 18, 2011 5:04 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Welcome, Eric! I’m very happy to have more homotopy theorists taking an interest! And I’m glad you asked that question.

in classical homotopy theory, the fibers of a fibration are all homotopy equivalent.

Well… at least if the base space is connected! I suppose maybe in “classical homotopy theory” all spaces are connected (at least, some classical homotopy theorists seem to operate under that assumption), but we definitely don’t want to assume that here.

Moreover, even if the base space is connected, so that all the fibers are equivalent, it doesn’t make the fibration uninteresting, since there can still be nontrivial monodromy. The classifying space of a group is connected, but that doesn’t make the universal fibration E G \to B G uninteresting, even though its total space is furthermore contractible! (-:

Posted by: Mike Shulman on March 18, 2011 7:02 PM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Ha! Right, there’s always those pesky non-connected spaces. So in this sense, we do get indexed families of types, one for each component, which I guess kind of nicely corresponds to how one thinks of them set-theoretically.

I think I have a couple more questions, but perhaps I’ll save them until after I read the next article …

Posted by: Eric Finster on March 18, 2011 9:38 PM | Permalink | Reply to this
Read the post Homotopy Type Theory, II
Weblog: The n-Category Café
Excerpt: Continuing from last time, we define sets and logic in terms of homotopy type theory.
Tracked: March 18, 2011 8:15 PM

Re: Homotopy Type Theory, I

When B is constant (independent of x), the dependent sum reduces to the binary product A×B, and the dependent product reduces to the function type B -> A.

So why are they called dependent sum and product instead of dependent product and exponential? (Of course, with this terminology, I can’t see any way to construct a dependent sum.)

Posted by: Aaron Denney on March 20, 2011 12:24 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Some people do name them that way (leading to confusion). And alternate notation like (x : A) \times B[x] and (x : A) \to B[x] fits with that. The latter is even typically used in programming languages based on dependent type theory because it’s more ASCII than \Pi.

However, there are a lot of similarities with sums and products that suggest that naming, and so I think that’s what people tend to use. Dependent sum and product types are natural generalizations of finitary sums and products to the type-indexed case. So, if we think about the definition of the former we have…

Given objects (types) A_n, the sum (product) \Sigma_N A_n (\Pi_N A_n) is an object together with injections \iota_n : A_n \to \Sigma_N A_n (projections \pi_n : \Pi_N A_n \to A_n) that is couniversal (universal) among such objects.

To get dependent sums and products, we replace the finite collection of objects and morphisms with indexed families of morphisms and objects, roughly speaking.

Or, to use some fancier category theory, finite sums and products form an adjunction with a diagonal functor like:

\Sigma_N \dashv \Delta_N \dashv \Pi_N

and the dependent types case has a similar adjunction that looks like:

\Sigma_f \dashv f^\ast \dashv \Pi_f

where f^\ast reindexes families along f. (This sort of adjunction shows up in other places, too, like categorical logic:

\exists \dashv weakening \dashv \forall

which can themselves be seen as infinitary versions of:

\vee \dashv \Delta_2 \dashv \wedge

similar to dependent sums/products vs. finite sums/products.)

And although we can recover \times from \Sigma and \to from \Pi, we can also recover + from \Sigma and \times from \Pi, by indexing over a finite set:

A + B = \Sigma_{b : Bool} (if\; b\; then\; A\; else\; B) \qquad A \times B = \Pi_{b : Bool} (if\; b\; then\; A\; else\; B)

or for general finite sums/products:

\Sigma_N A_n = \Sigma_{n : N} A_n

where N is taken to be the appropriate finite type.
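With finite sets standing in for types, the counting behind the names can be checked directly (a toy model of my own, not notation from the thread): the dependent sum over Bool has |A| + |B| elements, and the dependent product over Bool has |A| times |B|.

```python
from itertools import product

def sigma(index, family):
    """Dependent sum: the set of tagged pairs (i, x) with x in family(i)."""
    return {(i, x) for i in index for x in family(i)}

def pi(index, family):
    """Dependent product: choice functions picking one element per index."""
    idx = sorted(index)
    return {tuple(zip(idx, choice))
            for choice in product(*(sorted(family(i)) for i in idx))}

A, B = {'a1', 'a2', 'a3'}, {'b1', 'b2'}
family = lambda b: A if b else B   # "if b then A else B"

# A + B corresponds to Sigma_{b : Bool} (if b then A else B): 3 + 2 = 5
assert len(sigma({False, True}, family)) == len(A) + len(B)
# A x B corresponds to Pi_{b : Bool} (if b then A else B): 3 * 2 = 6
assert len(pi({False, True}, family)) == len(A) * len(B)
```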

And of course, this naming scheme coheres well with arbitrary sums and products in set theory, as well.

Posted by: Dan Doel on March 20, 2011 1:45 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

Because in Set, A \times B \simeq \coprod_{a \in A} B. But now if we replace B by B_a (i.e. depending on the elements of A), then \coprod_{a \in A} B_a is not simply A \times B, but something more akin to a bundle. But there is still a map \coprod_{a \in A} B_a \to A, sending b \in B_a to a.

And for dependent product, recall that the set A^B of maps from B to A is isomorphic to \prod_{b \in B} A. But if one is dealing with families of sets it is a little more complicated. Have a look at dependent product at the nLab (indeed the objection you raised is mentioned there). This page discusses the real definition of dependent sum and product.

Posted by: David Roberts on March 20, 2011 1:46 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

The binary sum and the binary product are both special cases of the dependent sum; the binary product and the exponential are both special cases of the dependent product.

A + B = \sum_{k\colon \{0, 1\}} (k ? B : A) \qquad A \times B = \sum_{k\colon B} A

A \times B = \sum_{k\colon \{0, 1\}} (k ? B : A) \qquad A^B = \sum_{k\colon B} A

I like to think of a curvilinear hierarchy from binary sums to dependent sums to binary products to dependent products to exponentials.

Posted by: Toby Bartels on March 20, 2011 6:11 AM | Permalink | Reply to this

Re: Homotopy Type Theory, I

I wrote the previous comment some time ago, so it's a bit late for a correction. But it's short and succinct, so one might want to refer to it; and its error is gross. So let me correct it:

A + B = \sum_{k\colon \{0,1\}} (k ? B : A) \qquad A \times B = \sum_{k\colon B} A \qquad A \times B = \prod_{k\colon \{0,1\}} (k ? B : A) \qquad A^B = \prod_{k\colon B} A

(where p ? X : Y is if p then X else Y).

Posted by: Toby Bartels on June 30, 2013 8:07 PM | Permalink | Reply to this
Read the post Homotopy Type Theory, IV
Weblog: The n-Category Café
Excerpt: Voevodsky's "univalence axiom" for universes in homotopy type theory, and the equivalent principle of "equivalence induction."
Tracked: April 7, 2011 5:21 AM
