

March 3, 2010

In Praise of Dependent Types

Posted by Mike Shulman

I’ve written before about “structural” foundations for mathematics, which can also be thought of as “typed” foundations, in the sense that every object comes with a type, and typing restrictions prevent us from even asking certain nonsensical questions. For instance, whether the cyclic group of order 17 is equal to the square root of 2 is such a nonsensical question, since the first is a group while the second is a real number, and only things of the same type can be compared for equality.

The structural axiom systems we’ve talked about, such as ETCS and SEAR, are presented as first-order theories whose objects of study are things with names such as “sets,” “functions,” and “relations.” Although such presentations look a bit different (and hopefully less syntactically imposing) than what is usually called type theory, it’s not too hard to write down more traditional-looking “type theories” that capture essentially the same models as ETCS and SEAR. However, these theories are missing an important aspect that is present in many type theories, namely dependent types.

If your introduction to type theory has been through topos theory and categorical logic, as mine was, it may take some time to really appreciate dependent types. At least, it’s taken me until now. The Elephant spends all of one section of one chapter (D4.4) on dependent type theory as the internal logic of locally cartesian closed categories, closing with the remark that the generalization from topoi to such categories “…has been won at the cost of a very considerable increase in the complexity of the calculus itself, and so we shall not pursue it further.” However, while dependent types are not necessary in any theory containing powersets (which, I think, includes all the foundational theories of interest to most mathematicians), I’m coming to believe that they are really a very convenient, natural, and ubiquitous notion.

Dependent types are really everywhere in mathematics. Wherever we speak about “a family of sets” indexed by some other set, that’s a dependent type. For instance, let $C$ be a small category and consider the notion of a $C$-diagram in $Set$, i.e. a functor $F\colon C\to Set$. This consists of a family of sets $F_0(c)$ for each $c\in ob(C)$, together with a family of functions $F_1(f)\colon F_0(c)\to F_0(c')$ for each $f\colon c\to c'$ in $C$, satisfying functoriality axioms. In other words, $F_0$ is a dependent type which depends on specifying an element of the set $ob(C)$.

Dependent types are also everywhere in natural language. One standard example is that for any month $m$, there is a type (or set) $D(m)$ of the days in that month. Since different months have different numbers of days, the cardinality of $D(m)$ is different for different $m$. Of course, because of leap years, $D$ actually depends not only on $m$ but also on a year $y$. David gave a couple of nice examples here of the need for dependent types in expressing quantifiers in everyday language.

In a material set theory such as ZFC, where everything is a set, including the elements of other sets, we can talk about “sets of sets” instead of “families of sets.” This isn’t quite a dependent type, though, since (in the functor example) we need each set $F_0(c)$ to be associated to the element $c$ of the previously given set $ob(C)$. But we can instead consider $F_0$ to be the set of ordered pairs $(c, F_0(c))$, which again is possible since the components of an ordered pair can themselves be sets. Another way of saying this is that $F_0$ is a “function” from $ob(C)$ to the proper class of all sets.

In a structural set theory such as ETCS or SEAR, dependent types have to be expressed differently, since the elements of sets and ordered pairs are not themselves sets. The standard trick is to model $F_0$ by a single set which is intuitively the disjoint union $\coprod_{c\in ob(C)} F_0(c)$, equipped with the obvious function $F_0 \to ob(C)$. The particular sets $F_0(c)$ can then be recovered (up to isomorphism, of course) as the fibers of this projection.

However, there’s something a bit unsatisfying about this; shouldn’t the notion of a family of sets exist prior to the assertion that any such family has a coproduct? The alternative adopted by DTT is to simply take “dependent types” as one of the basic notions of our theory. (For now, consider “type” to be a synonym for “set”.) Thus, for every type $A$ we have a basic notion of “$A$-indexed family of sets/types.” If $B$ is a type depending on $A$, then for each $a\in A$ we have a type $B(a)$. Of course, we can also have further types dependent on $B(a)$, and so on. We can then assert, as an axiom (or “type constructor”), the possibility of taking the coproduct of any dependent type—this is usually called a dependent sum and written $\Sigma_{a\in A} B(a)$. An element of $\Sigma_{a\in A} B(a)$ consists of an element $a\in A$ together with an element $b\in B(a)$. (In the months example, $\Sigma_{m} D(m)$ is the set of all pairs of a month and a day of that month, i.e. the days of the year.)

Adding dependent types doesn’t make the theory any more expressive than ETCS or SEAR, since as long as we have dependent sums, we can always interpret a dependent type $B(a)$ by $\Sigma_{a\in A} B(a)$ equipped with its projection to $A$. So if your overriding goal is parsimony of notions, then you’d reject dependent types as unnecessary. But the point I’m making is that in mathematics and in everyday language, dependent types are what we really talk about, and their encoding in ZFC or ETCS is just that—an encoding. And once you start looking at them in their own right, the theory of dependent types is really quite beautiful, and has other important applications; let me try to give you a feel for a few.

First of all, there is the dual notion of a dependent sum, called a dependent product $\Pi_{a\in A} B(a)$. An element of $\Pi_{a\in A} B(a)$ consists of a function which assigns to each $a\in A$ an element of $B(a)$. This is exactly the sort of thing that the axiom of choice says will always exist whenever each $B(a)$ is nonempty. Dependent sums and products are dual because when a type dependent on $A$ is interpreted as a projection to $A$, i.e. an object of the slice category $Set/A$ (or with some more exotic category replacing $Set$), then they are represented by left and right adjoints, respectively, to the pullback functor $A^*\colon Set \to Set/A$.

Dependent products include, as a particular case, function spaces, since if $B$ is a type not dependent on $A$ (i.e. $B(a)$ is independent of $a\in A$), an element of $\Pi_{a\in A} B$ is just a function $A\to B$. Similarly, dependent sums include binary cartesian products, since if $B$ doesn’t depend on $A$, an element of $\Sigma_{a\in A} B$ is just a pair $(a,b)$ with $a\in A$ and $b\in B$. So dependent sums and products together encode all the structure of a locally cartesian closed category.

Dependent types can also be used to represent predicate logic. Recall that we like to think of a truth value as a $(-1)$-category, or equivalently as a subsingleton set. Operations on truth values, like “and” and “or” and “implies,” can then be represented by operations in the poset of subsingletons. This is all well and good for propositional logic, but for predicate logic we need to talk about logical formulas with free variables, like $(x+y = y+x)$ for real numbers $x$ and $y$. But hey, if for each fixed $x,y\in \mathbb{R}$ the formula $(x+y = y+x)$ is a truth value, i.e. a subsingleton, then the whole formula $(x+y = y+x)$ is a dependent type which depends on $x,y\in \mathbb{R}$, each of whose individual types is a subsingleton.

Under this interpretation, called “propositions as types,” the operations on dependent types relate to quantifiers in predicate logic. For instance, if $\varphi(a)$ is a proposition dependent on $a\in A$, then an element of the dependent product $\Pi_{a\in A} \varphi(a)$ consists of a function assigning to each $a\in A$ an element of $\varphi(a)$, i.e. an assertion that $\varphi(a)$ is true. In other words, $\Pi_{a\in A} \varphi(a)$ is the subsingleton representing the universal quantifier $\forall a\in A.\, \varphi(a)$. If we have a particular $a_0\in A$ and an element $f\in \Pi_{a\in A} \varphi(a)$, then the evaluation $f(a_0)$ shows us that $\varphi(a_0)$ is true. It’s no coincidence that we use the same word for “applying a function” and “applying a theorem”!
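
To see the slogan in code, here is a minimal sketch in Haskell (the language several commenters use below; the function names are mine), in which implication is the function space, so modus ponens is literally function application:

-- Propositions as types in miniature: a term of type p -> q is a proof
-- that p implies q, so modus ponens is just function application.
modusPonens :: (p -> q) -> p -> q
modusPonens f x = f x

-- Likewise, "p and q implies p" is witnessed by the first projection.
weakenConj :: (p, q) -> p
weakenConj (x, _) = x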

The propositions-as-types interpretation is very convenient if you’re implementing a computer system for doing mathematics, since it means that you can essentially write one piece of code for dealing with dependent types, and hey-presto you’ve also got code that works for predicate logic. One could argue that this really recovers whatever parsimony was lost by introducing dependent types as a basic notion, since whatever the system you use, you’ve got to have predicate logic somehow.

It also admits of vast and wonderful generalizations, for once we’re interpreting propositions as types, who says the types always have to be subsingletons? We can instead interpret a proposition $\varphi$ by “the set of all proofs of $\varphi$” or “the set of all reasons why $\varphi$ is true”—a set which contains strictly more information than just whether $\varphi$ is true. You might have already noticed that dependent sums already take us out of the world of subsingletons: if $\varphi(a)$ is a proposition dependent on $a$, then $\Sigma_{a\in A}\varphi(a)$ isn’t just a subsingleton representing the proposition $\exists a\in A.\, \varphi(a)$ (which we might naively expect); rather, it’s the set of all pairs $(a,t)$ where $a\in A$ and $t\in\varphi(a)$ is an assertion/proof/reason why $\varphi(a)$ is true. In other words, it’s the set of witnesses to the assertion $\exists a\in A.\, \varphi(a)$, which is strictly more informative.

Another nice thing about dependent types is that they can be used to eliminate evil. The problem with a naive interpretation of “do no evil,” i.e. “never talk about two objects of a category being equal,” is that even in order to compose morphisms $f\colon a\to b$ and $g\colon c\to d$ in a category, we need to know that $b$ and $c$ are the same object; it’s not enough to know that they’re isomorphic. But with dependent types, we can say that a category consists of a type $Ob$ of objects, together with a dependent type $Hom(x,y)$, where $x,y\in Ob$, and composition operations $Hom(x,y)\times Hom(y,z)\to Hom(x,z)$. In other words, when we compose $f$ and $g$, the assertion that the target of $f$ equals the source of $g$ is a typing assertion, not a statement about equality of two separately given objects. I believe this really matches the way we talk about categories in ordinary mathematical language; we only ever consider a morphism in some category as a morphism in some particular hom-set. Saying that $f$ is a morphism from $a$ to $b$ isn’t an assertion that $s(f) = a$ and $t(f) = b$ for some functions $s,t\colon Arr\to Ob$; rather it’s a typing assertion saying to what set $f$ belongs. We can then eliminate evil by simply saying that the type $Ob$ has no equality predicate, while each type $Hom(x,y)$ does. This also leads to what I call 2-categorical logic, and, when taken to the limit, to (∞,1)-categorical logic which we’ve been discussing a bit here.

Finally, one can even argue that dependent types may be the future of computer programming. In general, proponents of strongly statically typed languages such as C++, Java, ML, and Haskell claim that unlike in dynamically typed languages such as Perl, Python, and PHP, static typing means that more errors can be caught at compile-time. And catching errors at compile-time is really what writing reliable code is all about—a “bug” is, by definition, an error in programming that escapes detection until run-time (the user’s run-time). For instance, in a dynamically typed language, we might have a function multiply_matrices(A, B) defined in one place, and somewhere else we could call that function like multiply_matrices("hello, world", -15) (maybe I was having an off-day when I wrote that). The compiler would notice nothing wrong until we ran the program and the function multiply_matrices complained about being given a string and an integer instead of a pair of matrices (and in practice, you’d be lucky if its error message were that explicit). If the offending call to multiply_matrices were embedded deep in some complicated logic in such a way that it only got invoked on the second Tuesday of August in a leap year, then every so often our users would notice an inexplicable crash that we might have a lot of trouble tracking down. But in a statically typed language, the function would be declared as multiply_matrices(matrix A, matrix B), so that when the compiler encountered the call multiply_matrices("hello, world", -15), it would complain to us, the programmers, about a typing error. We could then try to figure out what the heck we must have meant by multiply_matrices("hello, world", -15), without causing any such headaches to our users.
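
Here is a minimal Haskell sketch of the static version (the Matrix type and names are mine, not any particular library’s): the nonsensical call is rejected by the type-checker before the program ever runs.

import Data.List (transpose)

-- A bare-bones matrix type; dimensions are not tracked in the type (yet).
newtype Matrix = Matrix [[Double]]

multiplyMatrices :: Matrix -> Matrix -> Matrix
multiplyMatrices (Matrix a) (Matrix b) =
  Matrix [ [ sum (zipWith (*) row col) | col <- transpose b ] | row <- a ]

-- multiplyMatrices "hello, world" (-15)
--   is a compile-time type error: a string and an integer are not Matrices.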

None of that requires dependent types, but there’s a closely related (and, unfortunately, even more plausible) error that ordinary static typing won’t save us from. In order to multiply two matrices $A$ and $B$, we need to know that the number of columns of $A$ equals the number of rows of $B$. Presumably the function multiply_matrices will do some checking along these lines, but it can only do that at run-time, since the single type matrix has to include matrices of all sizes. So if it happens that buried somewhere in our complicated code we happened to call multiply_matrices on a pair of matrices whose dimensions don’t match (a much easier mistake to make than calling it on a string and an integer), then on the second Tuesday of August in leap years our users will be presented with the error “mismatch in matrix dimensions”.

However, if we’re programming in a dependently typed language, then you can have a type (matrix n m) which depends on a pair of natural numbers $n$ and $m$. (This is different from parametric polymorphism, in which you have a type like (List X) which depends on another type $X$; here the type (matrix n m) depends on two values $n, m$ of type Nat.) Then our multiplication function can be defined as multiply_matrices((matrix n m) A, (matrix m k) B), and when we call it elsewhere with a value $A$ of type (matrix 2 4) and $B$ of type (matrix 7 3), the compiler will complain about a type mismatch, sparing our users from experiencing the same error at run-time.
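
For a taste of how this looks in practice, here is a hedged sketch in the style of the encoding discussed in the comments below, written with GHC Haskell’s (later) DataKinds and GADTs extensions rather than a genuinely dependently typed language; the names are mine. A length mismatch in dot becomes a compile-time error.

{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

-- Length-indexed vectors: the length is part of the type.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- A matrix with n rows and m columns; multiplication would then be typed
--   multiplyMatrices :: Matrix n m -> Matrix m k -> Matrix n k
type Matrix n m = Vec n (Vec m Double)

-- The inner product requires the two lengths to agree, statically:
dot :: Num a => Vec n a -> Vec n a -> a
dot VNil         VNil         = 0
dot (VCons x xs) (VCons y ys) = x * y + dot xs ys

-- dot (VCons 1 VNil) (VCons 1 (VCons 2 VNil))  -- rejected at compile time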

If we take this to extremes and incorporate propositions-as-types as well, then we can actually define a function whose type includes the assertion that the behavior of the function is correct. For instance, suppose we’re defining a function to compute the smallest prime factor of some natural number. In an ordinary statically typed language, this function might be typed as $int \to int$. A more precise typing involving dependent types would be $\Pi_{n\in \mathbb{N}} \{ k \mid 0 \le k \le n\}$, since the smallest prime factor of $n$ must be no bigger than $n$. But with propositions as types, we can actually write a type for the function which is essentially “the type of all functions which take as input a natural number and output its smallest prime factor.” If our compiler does its type-checking job correctly, then if we manage to write a function with that type which compiles correctly, we will be guaranteed that the function does what we intended it to. This is what some people mean when they say that “all errors are type errors.”
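
For comparison, here is the ordinary (non-dependent) Haskell typing, with the intended specification recorded only in a comment rather than in the type; the name is mine.

-- Specification (not machine-checked here): for n >= 2, the result is the
-- least k >= 2 dividing n, which is automatically prime and satisfies k <= n.
smallestPrimeFactor :: Int -> Int
smallestPrimeFactor n = head [ k | k <- [2..n], n `mod` k == 0 ]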

Posted at March 3, 2010 6:18 PM UTC



Re: In Praise of Dependent Types

Technically, the matrix example does not require dependent types as such (although examples like it are probably the most widely used to illustrate dependent types). For instance, Haskell, with some extensions, can do it, despite not really having dependent types.

The price you have to pay is writing everything down twice. For the matrices, we might have:

datakind Nat = Z | S Nat

Matrix : Nat -> Nat -> *

where we have a kind of naturals (instead of a type), whose elements are at the type level, and thus can be used in type expressions. But if we wanted naturals at the value-level, there’d be a wholly separate:

datatype Nat = Z | S Nat

Or, if we have generalized algebraic datatypes, which allow some information to pass from the value level to the type level, we might choose to index the value-level naturals by the type-level ones:

datatype Nat : Nat -> * where
Z : Nat Z
S : Nat n -> Nat (S n)

Natural : *
Natural = exists n. Nat n

The idea here is that inspection of the GADTs at the value level refines the corresponding types, giving very similar effects to genuine dependent typing, while keeping a technical demarcation between values and types (which is useful from a compiler perspective, because it clearly delineates all the stuff you can check statically, and then erase). ATS, for instance, takes this tack, I believe.

Anyhow, I suppose that’s a bit of a digression, since that isn’t even necessary for the matrices. Only the datakind is. Where genuine dependent types really start to show their worth (as far as programming goes) is in the areas in your final paragraph, where we want to encode complex specifications in the types of our functions. We might have:

$sort : \Pi_{xs : List\,\mathbb{N}}\, \Sigma_{ys : List\,\mathbb{N}}\, Sorted\;ys \wedge xs \cong ys$

saying that given a list, we produce a list, together with proofs that it is sorted and a permutation of the original list. This is (I think) encodable into the digression scheme above (ATS has a verified sorting example), but the duplication gets more and more awkward (in my experience) the more you try to encode in the types.

Posted by: Dan Doel on March 3, 2010 8:13 PM

Re: In Praise of Dependent Types

Yes, that’s true, you can deal with the matrix example without “real” dependent types, if you have enough type-level programming to be able to duplicate the type you want to depend on at the kind level. I used to think this sort of thing in Haskell was wonderfully clever. Now it looks to me more like a hack to get around the fact that you don’t have what you really want, namely dependent types.

Posted by: Mike Shulman on March 3, 2010 8:57 PM

Re: In Praise of Dependent Types

Actually, I haven’t thought about this a lot, but is it clear that type-level naturals let you do everything that you could do with a dependent matrix type? For instance, with a dependent matrix type, you could have matrices whose dimensions are determined at run-time, as long as they are guaranteed to match up in the correct way, whereas it seems like with type-level naturals the dimensions of matrices would have to be fixed at compile-time. Am I wrong?

Posted by: Mike Shulman on March 4, 2010 12:59 AM

Re: In Praise of Dependent Types

In Haskell, it’s hard to say in general, because it’s a language that’s been grown one extension at a time. Your particular example is possible, because you can write polymorphic functions in Haskell. So you can write a function with a signature like:

$(\times) : \forall i\,j\,k.\; \mathit{matrix}\;i\;j \to \mathit{matrix}\;j\;k \to \mathit{matrix}\;i\;k$

As a matter of logical convention, the question about the distinction between type- and term-level natural numbers should really be turned the other way around: can term-level naturals emulate what type-level naturals can do?

The reason is that in the usual way of presenting first-order logic, making $\mathbb{N}$ a type is a really peculiar decision! According to the Curry-Howard correspondence, types are propositions, and terms are proofs. If $\mathbb{N}$ is a type, then that means it’s a true proposition whose proofs are $0, 1, 2, \ldots$. Normally, we don’t care that much about the identity of proofs, so it’s a bit odd to introduce a type which is equivalent up to provability with true.

So if we look at usual first-order logic, it’s a small type theory with two sorts, $o$ for propositions, and $\mathbb{N}$ for natural numbers. When we turn the Curry-Howard crank, we get a small programming language in which the natural numbers are not inhabitants of propositions. The induction principle corresponds to being able to write functions which are defined by primitive recursion over the natural numbers. So proofs and the data we quantify over are kept strictly separate, even though they can interact.
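
Concretely, in Haskell (a sketch; the names are mine), the term-level image of the induction principle is the primitive recursor:

data Nat = Z | S Nat

-- The non-dependent shadow of induction: to map out of Nat, give a value
-- for Z and a way to turn the result at n into the result at S n.
rec :: r -> (r -> r) -> Nat -> r
rec z _ Z     = z
rec z s (S n) = s (rec z s n)

-- Example: addition by recursion on the first argument.
plus :: Nat -> Nat -> Nat
plus m n = rec n S m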

Hongwei Xi et al.’s ATS language works according to this design principle. If I understand it properly, the motivation for this separation is a bit akin to the relationship between realizability and ML type theory – realizability lets you use any old computation (even using nonlogical features like state, IO, and control) to realize a proof, whereas MLTT requires proofs to be defined according to the typing rules.

A few more comments, in no particular order:

-*-*-*-

IMO, the idea that you can unify proofs and the data you quantify over is a real shock. It probably has some implications for the philosophy of mathematics, e.g. by giving logicists a purely logical notion of object.

-*-*-*-

The thing I find funny, though, is that one of the reasons that folks like you are interested in type theory – Martin-Löf’s identity type – is actually one of the main obstacles to the widespread use of type theory for programming! For programming, we want equality to be extensional at function type, since we want to use simple-minded mathematics to prove programs correct. E.g., we may want to treat a family of endofunctions as a monoid, which works extensionally but not intensionally. We also want quotient types and coinductive types, which are similarly problematic in intensional type theory.

I suppose learning homotopy theory will be good for me. :)

Posted by: Neel Krishnaswami on March 4, 2010 9:10 AM

Re: In Praise of Dependent Types

The reason is that in the usual way of presenting first-order logic, making $\mathbb{N}$ a type is a really peculiar decision! According to the Curry-Howard correspondence, types are propositions, and terms are proofs.

This is the sort of statement that turned me off to propositions-as-types for a long time. I think that when you explain it as I did, it makes perfect sense to say that propositions are (particular) types. But why would you insist that all types are propositions? I mean, it makes sense if what you want to study is the proof theory of intuitionistic propositional logic, which I’m sure is interesting, but if we want to do mathematics and write programs, surely we need types for which we care about the distinction between different terms?

You seem to be saying in your next paragraph that you want to call the types that are propositions “types” and the types that aren’t propositions “sorts.” Fine, but I find it much less confusing to say “proposition” and “type” respectively. In particular, the “types” in a programming language like Haskell are not propositions whose values are proofs; they are datatypes whose values are programs. So in that context it’s very natural to call the natural numbers a “type.”

Of course, there is still the question of whether term-level naturals (which, since types are datatypes, I think really are the most “natural” kind of natural) can emulate everything that type-level naturals can do. But if you really unify propositions and types, rather than keeping them existing at separate levels, then there doesn’t seem to really be any distinction between the two kinds of naturals. I find that to be in some sense the most appealing picture of all.

The thing I find funny, though, is that one of the reasons that folks like you are interested in type theory – Martin-Löf’s identity type – is actually one of the main obstacles to the widespread use of type theory for programming!

That is funny, isn’t it! Maybe it means that programmers need to “get categorified”. (-:O

We also want quotient types and coinductive types, which are similarly problematic in intensional type theory.

The homotopy theorists also want quotient types, although they are “homotopical” quotients rather than ordinary ones. Or at least, I want them—I’m not sure whether I’ve managed to convince any of the other homotopy-type-theorists of their importance yet.

Why are coinductive types problematic intensionally?

Posted by: Mike Shulman on March 4, 2010 7:13 PM

Re: In Praise of Dependent Types

The problem with coinductive types is that the only useful equality on them is extensional, via bisimulation (which isn’t very surprising, since coinductive types are all about what you can observe about their values, not about how their values are built). For instance, if we have the coinductive type of extended natural numbers in Agda, and we have the following two definitions:

i : $\bar{\mathbb{N}}$
i = suc i

j : $\bar{\mathbb{N}}$
j = suc i

then, intensionally, $i \neq j$, even though they appear to have the same definition. So, anything that requires appropriate equality over coinductive types (in an intensional theory) will have to mess around with extra equivalence relations/setoids, which is significantly less convenient than having the built-in equality be non-useless for such types.

Posted by: Dan Doel on March 4, 2010 8:43 PM

Re: In Praise of Dependent Types

That intensional equality for inductive types matches the usual desired extensional equality is related to the axiom of choice for such types. (Example: the axiom of countable choice for the inductive type of natural numbers.) If you want countable choice to be optional, then you may want to use a language in which all equality must be given explicitly, and then inductive types are as problematic as coinductive types in this respect.

The same thing comes up with dependent products vs coproducts (except that now the ‘co’ is on the other side). If you start with types in which intensional equality matches the desired extensional equality, then dependent sums (including binary products) also have this property, while dependent products (including exponential types) do not.

Of course, you can add extensionality rules (‘extensional type theory’) that state by fiat that extensionally equal terms of product types are intensionally equal. Presumably one could do this for coinductive types too. This is very natural from a mathematical perspective (although it justifies even stronger forms of the axiom of choice), but not from a computational one; in particular, this destroys the ability to detect all typing errors at runtime (type-matching may become undecidable).

Posted by: Toby Bartels on March 5, 2010 12:22 AM

Re: In Praise of Dependent Types

There is some hope for this in the form of observational equality. It provides an equality that can be used similarly to the equality in intensional type theory (the ones I’m familiar with, at least; it has an eliminator similar to the J rule, for instance), but it actually allows for extensionality. And the resulting type theory is still decidable. Most of the papers on it don’t mention coinductive types, but Conor McBride wrote one on the topic recently.

Unfortunately, the language that incorporates all that (Epigram 2) has been stalled for a while, although there’s been some renewed activity recently. Slow is better than nothing, of course.

Posted by: Dan Doel on March 5, 2010 1:15 AM

Re: In Praise of Dependent Types

Hi Toby,

Building on Dan’s comment about OTT, one way out of this dilemma is to take the distinction between convertibility and propositional equality a bit more seriously. Both the extensional and intensional variants of MLTT identify convertibility with equality. However, for the purposes of mathematics and program proof, what we really want is for *propositional* equality to be extensional – we don’t really care whether convertibility is extensional or not, as long as a proof that $e = e'$ is always sufficient to deduce a proof of $A(e')$ from a proof of $A(e)$.

And of course in practice it is possible to do all our work in setoids. The trouble, of course, is that we have to manage the equivalence relations manually, which is rather annoying – especially when you have basically no desire to *ever* use raw types. So Martin Hofmann (one of the guys who originated the groupoid model of MLTT) suggested that we might want to design a type theory with extensional equality types whose interpretation is a subcategory of a model of intensional type theory, corresponding to just the setoids. This might allow both decidable type checking and useful extensionality principles.

I understand OTT to be an attempt to implement this idea – though of course, doing it is harder than suggesting it. For example, the structure of the equality types in this kind of type theory is very different from the identity type.

Posted by: Neel Krishnaswami on March 5, 2010 8:47 AM

Re: In Praise of Dependent Types

Presumably the axiom of choice is only implied if you insist on using types to be propositions as well? I mean, in the usual model of dependent type theory in $Set$ (or in any $\Pi$-W-pretopos, for that matter), you have extensional equality, but perfectly good inductive types, whether or not you assume any choice.

Posted by: Mike Shulman on March 5, 2010 8:39 PM

Re: In Praise of Dependent Types

Presumably the axiom of choice [for an inductive type] is only implied if you insist on using types to be propositions as well?

Something in that direction, yes. Hence my weasel words ‘is related to’ and ‘you may want’.

You don't necessarily need full propositions as types. You just need some notion of operation (or pre-function) that satisfies an axiom of choice, in that any proof of $\forall x, \exists y, P(x,y)$ yields an operation $f$ such that $\forall x, P(x,f(x))$ holds. And you need some reasonable notion of identity (or pre-equality) proposition such that operations preserve identity, in that $f(x)$ is identical to $f(y)$ whenever $x$ is identical to $y$. (In a sufficiently impredicative theory, you can define a suitable identity by quantifying over operations: let $x$ be identical to $y$ if $P(x) \rightarrow P(y)$ holds for every $Prop$-valued operation $P$.)

So operations don't have to come directly from propositions as types, nor does identity have to have anything to do, a priori, with the usual equality on natural numbers and the like. As long as you have some features with the properties above, then you can prove (by induction) that equal natural numbers are in fact identical, so that an operation defined on natural numbers is in fact a function, making the automatic choice operation into a choice function.

I think that you know this, but I thought that it would be worth it to spell it out.

Posted by: Toby Bartels on March 7, 2010 5:10 PM

Re: In Praise of Dependent Types

The point is, I think, that a typical type former in dependent type theory has a simple introduction rule—which asserts the presence of a morphism into the type being constructed—but a complicated elimination rule which is defined by lifting properties against the introduction morphism. You might think that what happens is that your category of intensional types is embedded in some larger ambient category where all these constructions can be performed extensionally; and that the type formers in the intensional type theory are obtained by first doing them extensionally in the metatheory and then reflecting back down via some kind of “fibrant replacement”. This is pretty much exactly what happens in the Logical Framework presentation of Martin-Löf type theory (see, for example, the so-called “funsplit” version of the elimination rule for dependent products in the book of Nordström, et al.)

The thing is that for product types and coinductive types, the fibrant reflection process is really going in the wrong direction. At some level you would like the things in your type theory to be fibrant (for the Leibniz property) and cofibrant (so they are freely generated). For constructors like dependent sums, you expect them to take cofibrant things to cofibrant things, so that after forming a dependent sum you only need to take a fibrant replacement to get back inside the cofibrant-fibrant part. Conversely, when you form a dependent product space X => Y you would expect that to be fibrant, but not cofibrant; and so what you really need to take at this point is a cofibrant replacement. Similarly for coinductive types.

If you follow this idea to its conclusion, then you’re led to posit that dependent products and coinductive types should have a simple elimination rule—telling you how to map out of the type you are constructing—and a complicated introduction rule defined by a lifting property against the elimination morphism. The problem which seems to be largely intractable is working out what this lifting property should be. If you could figure this out, then I think you would get the “right” (i.e., extensional) equalities on dependent products and coinductive types. I believe Peter Hancock started work on this sort of thing with Martin-Löf in the 70s (“epistemological semantics for dependent products” or something like that) but don’t think it ever got to a fully satisfactory stage.

Posted by: Richard on March 5, 2010 11:30 AM

Re: In Praise of Dependent Types

I think that the contrast between simple introduction rules and simple elimination rules is clearest with the example of binary products. In an extensional theory, binary products are both sum-like (inductive), being a special case of dependent sums, and product-like (coinductive), being a dependent product over a $2$-element type; but in an intensional theory, you get (potentially) two versions.

The simple introduction rule for $A \times B$ is
$$\frac{\Gamma \vdash x\colon A;\;\; \Gamma \vdash y\colon B}{\Gamma \vdash (x,y)\colon A \times B}$$
and the corresponding complicated elimination rule is
$$\frac{\Gamma, z\colon A \times B \vdash C\colon Type;\;\; \Gamma, x\colon A, y\colon B \vdash w\colon C[(x,y)/z]}{\Gamma, z\colon A \times B \vdash conv_{\times}(x,y \mapsto w)\colon C}$$
where $\Gamma$ is any context in which $A$ and $B$ are types.

The simple elimination rules for $A \times B$ are
$$\frac{\Gamma \vdash z\colon A \times B}{\Gamma \vdash \pi z\colon A} \qquad \frac{\Gamma \vdash z\colon A \times B}{\Gamma \vdash \rho z\colon B}$$
and I don't know how to write down a corresponding complicated introduction rule.

Of course, one prefers simple rules, so naturally we are happy to adopt the simple introduction rule. But then the complicated elimination rule is stronger than the simple elimination rules, unless one has enough extensionality. (It is easy to define $\pi z$ and $\rho z$ as $conv_{\times}(x,y \mapsto x)$ and $conv_{\times}(x,y \mapsto y)$; but if we conversely define $conv_{\times}(x,y \mapsto w)$ as $w[\pi z/x, \rho z/y]$, we find that its type is $C[(\pi z,\rho z)/z]$, rather than $C$ as expected. So we need some sort of extensionality for products, if not the full eta-rule.)

I knew all this before, but your explanation in terms of (co)fibrant replacements helps me to understand it on another level; thanks.

Posted by: Toby Bartels on March 7, 2010 5:07 PM

Re: In Praise of Dependent Types

This reminds me of something I’ve thought about off and on in the past (although, I’m afraid Richard’s comment is about 70% meaningless to me; I’ve got learning to do).

One can distinguish between ‘strong’ and ‘weak’ inductive eliminators. Weak eliminators are what some folks refer to as ‘catamorphisms’. For Peano numerals, it’s:

$cata_{\mathbb{N}} : \Pi_{R : Type}.\; R \to (R \to R) \to \mathbb{N} \to R$

the strong eliminator is, of course, typed like:

$ind_{\mathbb{N}} : \Pi_{P : \mathbb{N} \to Type}.\; P\,0 \to (\Pi_{n : \mathbb{N}}.\; P\,n \to P(n+1)) \to \Pi_{n : \mathbb{N}}.\; P\,n$

Operationally, this type is interesting, because it is not just an embellishment on the catamorphism. Instead it looks more like what is known as a ‘paramorphism’, which in the non-dependently typed case would be:

$para_{\mathbb{N}} : \forall R.\; R \to (\mathbb{N} \to R \to R) \to \mathbb{N} \to R$

Which, notably, may be able to compute more efficiently if it’s implemented directly (for instance, computing the predecessor function using the catamorphism is O(n), while with an efficient paramorphism, it’s O(1)).
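
In Haskell, a sketch of the two (the names are mine):

data Nat = Z | S Nat

cata :: r -> (r -> r) -> Nat -> r
cata z _ Z     = z
cata z s (S n) = s (cata z s n)

-- The paramorphism also hands the step function the predecessor itself.
para :: r -> (Nat -> r -> r) -> Nat -> r
para z _ Z     = z
para z s (S n) = s n (para z s n)

-- Predecessor via cata must rebuild the predecessor as it goes:
predCata :: Nat -> Nat
predCata n = snd (cata (Z, Z) (\(m, _) -> (S m, m)) n)

-- Predecessor via para just returns the sub-structure directly:
predPara :: Nat -> Nat
predPara = para Z (\m _ -> m)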

So, turning to coinductive types, we have introduction via ‘anamorphism’, which is the dual of catamorphism.

$ana : \forall S.\; (S \to F\,S) \to \nu F$

But, there’s also a dual to paramorphisms, apomorphisms:

$apo : \forall S.\; (S \to F(\nu F + S)) \to \nu F$

So, I’ve wondered if there could be a ‘strong’ introduction rule that’s a fancier version of apo (and perhaps $\Pi$ types could have a similar rule). However, I haven’t been able to make much sense of the idea as yet.
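
For concreteness, here is a Haskell sketch of ana and apo specialized to streams, i.e. taking F X = (A, X) (the specialization and names are mine):

data Stream a = Cons a (Stream a)

-- Anamorphism: unfold a stream from a seed.
ana :: (s -> (a, s)) -> s -> Stream a
ana f s = let (a, s') = f s in Cons a (ana f s')

-- Apomorphism: each step may either continue from a new seed (Right) or
-- stop unfolding and splice in an already-built stream (Left).
apo :: (s -> (a, Either (Stream a) s)) -> s -> Stream a
apo f s = case f s of
  (a, Left done) -> Cons a done
  (a, Right s')  -> Cons a (apo f s')

-- Example: insert into a sorted stream, copying only a prefix and then
-- reusing the rest of the stream wholesale.
insertS :: Ord a => a -> Stream a -> Stream a
insertS x = apo step
  where
    step s@(Cons y ys)
      | x <= y    = (x, Left s)
      | otherwise = (y, Right ys)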

Posted by: Dan Doel on March 7, 2010 6:20 PM

Re: In Praise of Dependent Types

The answer is a somewhat qualified “yes”. What can be done requires (something like) the scheme I described with the value level naturals. For instance, presumably you can determine from the matrices what the type-level naturals are (meaning the matrices themselves are GADTs). But even if not, if we have the GADT naturals, we can do something like:

main = do {
  (t :: exists m n. (Nat m, Nat n, Matrix m n)) <- read
  open t as {
    (m, n, mat) in ...
  }
}

‘read’ can generate the matrix (and its dimensions) at runtime, which is why it hides their types as an existential. But the Nats it gives mirror the structure of the type, carrying the information at runtime, allowing us to write useful functions on the result.

However, I agree, this sort of thing is basically a hack to allow you to get really close to having dependent types without really having them, and I don’t think there’s a particularly compelling answer* to, “why don’t we just use real dependent types?” I can understand not reworking an existing language to have dependent types (like Haskell/GHC; or ATS, which was an outgrowth of Dependent ML, which had the same kind of type-value phase distinction), as it’s a more significant shift. But making new languages that use this same distinction just seems like making it needlessly awkward.

* One argument I have heard is that Haskell-like languages have something distinct to offer, in that their ‘forall’ and ‘exists’ quantifiers represent a kind of parametricity that is lost when you move to dependent types. In something like Haskell:

$\exists n.\; Matrix\;n\;n$

‘n’ is genuinely unknown, and it doesn’t even exist at runtime, while

$\Sigma n : Nat.\; Matrix\;n\;n$

actually carries the natural with it, and it can be inspected. We may add ways in which we can still program without knowing beforehand what ‘n’ is (which will look like we are recovering ‘n’ using other information), but it is actually forgotten, and this is genuinely different than dependent types.

However, it seems to me a language could have both, and that’d be great. Perhaps dependent types tend to get the spotlight, and lead to people not thinking about the parametricity angle, but they’re both useful, and writing hacks with one scheme to simulate the other seems sub-optimal.

Posted by: Dan Doel on March 4, 2010 9:25 AM

Re: In Praise of Dependent Types

(t :: exists m n. (Nat m, Nat n, Matrix m n)) <- read

Very clever, I didn’t think of that. Thanks!

In something like Haskell: $\exists n.\; Matrix\;n\;n$, ‘$n$’ is genuinely unknown, and it doesn’t even exist at runtime, while $\Sigma n : Nat.\; Matrix\;n\;n$ actually carries the natural with it, and it can be inspected.

Can you give a concrete example of this distinction, and in what situations you would want the former rather than the latter?

Posted by: Mike Shulman on March 4, 2010 6:49 PM

Re: In Praise of Dependent Types

Can you give a concrete example of this distinction, and in what situations you would want the former rather than the latter?

Well, from a purely performance-oriented perspective, you have static guarantees that you can erase the ‘n’ in the exists case. Haskell programs don’t carry any types around at runtime (aside from implementation-specific caveats).

However, on a more theoretical side, the quantifiers in a Haskell-like language have nice properties that pi types in general don’t have. One can prove that they are parametric (à la Reynolds’ abstraction theorem), and Wadler, for instance, uses this fact to get his free theorems where, knowing only the type of a function, you get an equational law that it obeys.

For a connection to category theory, if $F$ and $G$ are covariant functors/type constructors, then any function with the Haskell type:

$\forall a.\; F\,a \to G\,a$

is a natural transformation from $F$ to $G$. But presumably

$\Pi_{n : \mathbb{N}}\; F\,n \to G\,n$

doesn’t give similar guarantees. A Haskell-like existential quantifier should have similar abstraction properties, although I don’t really know what they would correspond to in category theory (although, for instance, $\exists a.\; F\,a\,a$ is supposed to be a coend of $F$). One use of existential types is in modeling modules and abstract types. Dependent sums, by contrast, don’t give the right kind of abstraction for that. Sigma types are transparent, where existential types are opaque (I don’t know if I can explain it well, but these lectures go into it. See the ones on dependent types, existential types, and translucent sums).
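
A small concrete instance of the naturality guarantee (a sketch; the function is the standard safe head):

-- With F a = [a] and G a = Maybe a, parametricity says any polymorphic
-- f :: [a] -> Maybe a satisfies fmap g . f == f . map g for every g,
-- whatever f does; e.g.:
safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x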

I don’t actually know if these parametricity results hold up when you add GADTs, but if they do, then that’s the sort of thing you lose by moving to real dependent types. Of course, one might still be able to prove parametricity results about pi types that quantify over types:

$\Pi_{a : *}\; F\,a \to G\,a$

but, with a separation like Haskell’s, even the type-level naturals are still types, and so you potentially get parametricity results you wouldn’t if you moved some of your constructs to genuine dependent types. I’m afraid I don’t really have examples to say whether this is a noticeable loss or not.

On a final note, there are systems that try to handle both types of quantification. Erasure pure type systems, for instance, tag pis, lambdas, and applications with erasability annotations, and presumably some sort of parametricity theorem holds for the erasable pis. So something like that might be able to get you the best of both worlds.

Posted by: Dan Doel on March 4, 2010 8:22 PM

Re: In Praise of Dependent Types

Thanks for trying to explain, but I still don’t understand the difference between $\forall a.\; F\,a \to G\,a$ and $\Pi_{a:Type}\; F\,a \to G\,a$. (It seems to me that $\Pi_{n:\mathbb{N}}\; F\,n \to G\,n$ is a whole different beast from either one.) The lectures you linked to use a lot of words that I don’t understand; can you bring it down to the level of a mathematician? (-:

Posted by: Mike Shulman on March 5, 2010 8:19 PM

Re: In Praise of Dependent Types

Off the top of my head, I think the existential case may be easier to argue, and this relates, I think, to wren’s example. Suppose we want a specification of an abstract set-of-integers datatype. It might look like:

abstract {
Set : Type
empty : Set
insert : Int -> Set -> Set
...
}

Now, we have two candidates for implementation. Existentials:

$\exists t.\; t \times (Int \to t \to t) \times \ldots$

and dependent sums, with a $\Sigma$ instead of an $\exists$. And the implementations are about the same, too. Suppose we use sorted lists:

ListSet : $\exists t.\; t \times (Int \to t \to t) \times \ldots$
ListSet = hide (List Int) ([], sortedInsert, …)

And similarly for the dependent sum implementation. Now, suppose we write some code that needs a set-of-integers type. And we decide to use the implementation in ListSet:

foo x y z = open ListSet in ... do stuff with empty, insert, etc.

Now, in the case of the existential, the parametricity guarantees require us to program to the ADT specification, because what ‘t’ is for ListSet is statically unknown outside of the ‘hide …’ expression. It is opaque. On the other hand, dependent sums are transparent, so because we know that we are using the ListSet implementation, we can see that the type used to implement the sets is ‘List Int’. This also means that we are able to (accidentally or on purpose) use the ADT operations on arbitrary lists of integers, even ones that may violate invariants upheld and assumed by the set implementation. The existential, by contrast, statically ensures that this can’t happen.
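
To make the packaging concrete, here is a hedged GHC rendering using the ExistentialQuantification extension; the names and the toList observer are mine:

{-# LANGUAGE ExistentialQuantification #-}
import qualified Data.List as L

-- The representation type t is hidden; only the packaged operations escape.
data SetADT = forall t. SetADT
  t                -- empty
  (Int -> t -> t)  -- insert
  (t -> [Int])     -- an observer, so clients can extract results

-- A sorted-list implementation; clients cannot see that t is [Int].
listSet :: SetADT
listSet = SetADT [] ins id
  where ins x s = if x `elem` s then s else L.insert x s

-- Clients can only program against the interface:
demo :: [Int]
demo = case listSet of
  SetADT empty ins toList -> toList (ins 3 (ins 1 (ins 3 empty)))  -- [1,3]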

One can, I suppose, delay this, by accepting the set implementation as a parameter. But there must be someone, somewhere who chooses what implementation to use, and there are fewer static assurances on what he can do if we’re using dependent sums for this purpose instead of existentials.

I’m afraid I don’t have a similar example for universals off the top of my head. I’ll have to think about it.

Posted by: Dan Doel on March 5, 2010 10:33 PM

Re: In Praise of Dependent Types

I’ve been thinking about it, and there may be no similar argument for pi types. The key difference between the existential and sigma types above is that the former are what are known as weak sums, while the latter are strong. The difference is in the eliminator. Weak sums get:

$weak : \Pi(R : *).\; (\Pi(x : A).\; B\,x \to R) \to (\exists(x : A).\; B\,x) \to R$

while strong sums get:

$strong : \Pi(C : (\Sigma(x : A).\; B\,x) \to *).\; (\Pi(x : A)(w : B\,x).\; C(x, w)) \to \Pi(p : \Sigma(x : A).\; B\,x).\; C\,p$

Alternately, one can look at strong sums having the two eliminators:

$\pi_1 : (\Sigma(x : A).\; B\,x) \to A$

$\pi_2 : \Pi(p : \Sigma(x : A).\; B\,x).\; B(\pi_1\,p)$

$strong$ and the two projections are interdefinable.

Anyhow, strong sums, as is obvious from the two $\pi$s, allow one to usefully project out the first component. We can use the contents of the second part with types that reference the first part. We can attempt to do this with weak sums:

$\pi_1 = weak\;A\;(\lambda x\,y.\,x)$
$\pi_2\,p = weak\;(B(\pi_1\,p))\;(\lambda x\,y.\,y)\;p$

But the latter will give an error, because $y$ has type $B\,x$ not $B(\pi_1\,p)$, which are not the same in that scope. This holds even if we know how $p$ is built, so although we can, in our set-of-integers example, do:

$T = \pi_1\,ListSet$

and know that $T = List\,Int$, we cannot also project out the ADT operations, and use them incorrectly with arbitrary lists. The only way to use the abstract type with the weak sum is to do:

$weak\;R\;(\lambda T\,(empty, insert, \ldots).\;\ldots)$

which enforces the abstraction. The other bad news is that adding strong sums to a language turns weak sums into strong sums via:

$spinach = weak\;(\Sigma(x : A).\; B\,x)\;(\lambda x\,y.\,(x, y))$
$\pi_{1\exists}\,p = \pi_1(spinach\;p)$
$\pi_{2\exists}\,p = \pi_2(spinach\;p)$

unless we have some fancy typing rules that disallow weak elimination into strong sums.

However, I am only aware of the above distinction making sense for sums, or more generally inductive types (strong induction principles vs. weak structurally recursive eliminators). I haven’t seen a similar distinction between strong and weak dependent products. In fact, dependent products are used in the calculus of constructions (for instance) to encode weak sums:

$\exists(x : A).\; B\,x = \Pi(R : *).\; (\Pi(x : A).\; B\,x \to R) \to R$

A straightforward Church encoding. I don’t think there is any difference in the pi types of a language with strong inductive types and one without, so there is no corresponding loss of abstraction.
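
In GHC, this encoding can be written with RankNTypes; a minimal sketch (names mine):

{-# LANGUAGE RankNTypes #-}

-- Church-encoded weak sum: the existential is identified with its eliminator.
newtype Exists b = Exists (forall r. (forall x. b x -> r) -> r)

pack :: b x -> Exists b
pack v = Exists (\k -> k v)

-- Elimination only lets us use the contents at an abstract x, so the
-- witness type stays opaque, exactly as in the weak rule above.
unpack :: Exists b -> (forall x. b x -> r) -> r
unpack (Exists e) k = e k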

The erasure pure type systems I mentioned do offer a way to extend the sort of parametricity from type-based to quantifier based, though. In those type systems, there are two different types:

$\forall x : \mathbb{N}.\; T$
$\Pi x : \mathbb{N}.\; T$

The latter is a normal dependent product. The former is like a dependent product, but xx is statically guaranteed to not appear in any places where it might be computationally relevant; the behavior of the function is independent of what xx is. This seems (to me) to be a genuine extension of the parametricity results one gets when quantifying over types. For a situation where this would matter, consider here, an inductively defined universe in Agda. I’ve defined two functions involving types in the universe. One is natural/parametric, and the other is not, but note that they both have the same type.

By contrast, below that, you’ll find a Haskell translation. The punchline is that the parametric function is typeable with a type that the non-parametric function is not. This is because the universe definition has to be split into a type part and a value part, and the natural function can quantify over the type part, with no need for the value part. However there’s no need to make this split type-directed (which results in the duplication you see in the Haskell); it could instead depend on the quantifier, in which case you could write code more like the Agda, but nevertheless statically determining certain parametricity conditions.

Posted by: Dan Doel on March 7, 2010 1:07 AM

Re: In Praise of Dependent Types

Okay, I think maybe I finally get it. Making the connection to strong vs weak sums, which I’ve seen before, is helpful.

However, I’m still not entirely clear on what it is, specifically, that you can do with the strong sum that you want to forbid with the weak sum. You talk about projecting out the ADT operations, so that you can use them with lists in ways that ListSet doesn’t want you to, but what does that actually buy you that wouldn’t be equally well obtained by having differently named functions explicitly operating on lists which do the same thing? It doesn’t seem to me like there should be any way to apply the projected-out ADT operations illegally to ListSets while still thinking of them as sets, since any context that claims to be working with sets qua sets should be parametrized over implementations of sets. What am I missing?

Posted by: Mike Shulman on March 7, 2010 4:46 AM

Re: In Praise of Dependent Types

Well, the context is figuring out a type system that captures what we want modules to accomplish, or having a type system that allows us to statically enforce the “abstract” part of “abstract datatype”, and prevent mistakes in that regard.

Of course we don’t have to do this. We can just say, by convention, “only lists that we’ve constructed with these functions count as sets.” But we can just as well say, in Scheme, “this function only accepts integers; don’t call it with anything else.” That doesn’t prevent us from mistakenly calling it with a list, though.

The idea with existential types would be to package up the operations into a box that then enforces the abstraction. In Haskell, when I want a set of Ints, I just say

import Data.IntSet

and despite knowing what implementation I’m using, I can only use operations that the module allows. In the case of Haskell modules, this is accomplished by not exporting constructors, so I can’t write my own functions on the underlying structure of the type, but it can also be accomplished by hiding the type in an opaque wrapper, and bundling it with operations on that opaque type. In either case, I don’t need to parameterize all my code by a set implementation to get the abstraction (which is just turning the code inside-out, and using universal quantification to enforce the abstraction instead of existential quantification).

I’m no longer certain if strong vs. weak sums is all there is to it, though (I think parametricity is distinct, and may also be part (perhaps the more important part) of enforcing the abstraction; see my reply to Neel down-thread). This conversation is helping me flesh out some ideas in this space; sorry if that makes my ramblings confusing.

Posted by: Dan Doel on March 7, 2010 4:56 PM

Re: In Praise of Dependent Types

I didn’t explain myself clearly; I didn’t mean for “thinking of” to imply a mere programmer convention but to actually refer to the precise typing. But I think that possibly I was confused. I’m still confused, but I’m getting a better sense for what the questions are, at least.

Posted by: Mike Shulman on March 8, 2010 2:08 AM

Re: In Praise of Dependent Types

Oh. In that case, yes, I suppose dependent types let you achieve the same goals in a different manner. We could, for instance, write something like:

$ListSet : Type$
$ListSet = \Sigma(l : List\,\mathbb{N}).\; Sorted(l)$
$insert : \mathbb{N} \to ListSet \to ListSet$

in which we use dependent types to express all the invariants that are expected to hold. Then we can use any list that we can prove fits the invariants with the relevant functions.

It might come down to taste. This is certainly a valid path to take. By contrast, the existential solution is more or less trying to create a situation where you have a bunch of operations that preserve the relevant invariants by construction (though, that they do so may not be machine verified as it is in the dependent types case), and then statically ensuring that you can only use those operations within the confines of the trusted API, so you can’t muck things up.

The latter is naturally popular in non-dependently typed languages, because the former is (more or less) impossible. But I’m not ready to suggest we throw abstract datatypes in the waste bin because we can bundle data structures with proofs that they satisfy all the expected invariants, so it’d be nice to have a language that incorporates both options. Having proper existential types is ‘just’ first-class support for that.

Posted by: Dan Doel on March 8, 2010 2:55 AM

Re: In Praise of Dependent Types

Here’s a concrete example, though it’s perhaps a bit meta. Consider for the moment a compiler for some functional language. In order to deal with capture of non-local names we use closures. So for example, the code (let y = 1 in \x -> x + y) will be compiled down to a pair of an environment saying y=1 and a function which takes the environment as an additional argument and has no free variables.

If the compiler is correct, then it will ensure that we only ever pass the correct environment to any given code pointer. But how do we guarantee the compiler is correct? If we have existential types in the target language, then we can give closures the type ∃A.(A, A→whatever). By parametricity, the only thing we can pass in as the first argument to the function is the environment it is paired with.
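In Haskell notation, a sketch of such a closure type (Closure, apply, and compiled are illustrative names):

{-# LANGUAGE ExistentialQuantification #-}

-- A closure pairs a hidden environment with code consuming it; the
-- existential ensures callers can do nothing with the environment
-- except hand it back to the code pointer it came with.
data Closure a b = forall env. Closure env (env -> a -> b)

apply :: Closure a b -> a -> b
apply (Closure env code) = code env

-- the compiled form of (let y = 1 in \x -> x + y):
compiled :: Closure Int Int
compiled = Closure (1 :: Int) (\y x -> x + y)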

But, with Σ-types (i.e., Σ(A:*).Σ(env:A).(f:A→whatever)), it’s possible to project out the type A and then (perhaps) construct some other value of that type and pass that in place of the real environment. The compiler may still be correct, but we can’t rely on type checking to convince us that it is.

Posted by: wren ng thornton on March 5, 2010 12:38 AM | Permalink | Reply to this

Re: In Praise of Dependent Types

Putting aside for the moment how strange it seems to me, as a mathematician, to have a function that you’re only allowed to apply to one argument (how is that different from just the value of that function on its unique argument?), I also didn’t follow this:

But, with Σ-types (i.e., Σ(A:*).Σ(env:A).(f:A→whatever)), it’s possible to project out the type A and then (perhaps) construct some other value of that type and pass that in place of the real environment.

How could you possibly construct another value of type A when all you know about A is that it’s a type and that env is a term of type A? I mean, for all you know, A might be the unit type and env its unique element. It seems to me that the same argument which shows that given a term of the type ∃A.(A, A→whatever), all you can do is apply the second component to the first, should also apply to the type Σ(A:*).Σ(env:A).(f:A→whatever).

Posted by: Mike Shulman on March 5, 2010 8:24 PM | Permalink | PGP Sig | Reply to this

Re: In Praise of Dependent Types

(how is that different from just the value of that function on its unique argument?)

It’s not, that’s the point :) Sometimes, when programming, the existential version of things is easier to work with (or to ensure the correctness of) than equivalent expressions.

To pick a mathier example, consider an abstract type defined by a set of operations, e.g. (nil,cons,uncons) of type ∃A.(A,Int→A→A,A→1+Int*A). By using the existential we can hide the concrete implementation and ensure that only the specified operations are used. This is related to the question of evil, but from an operational side: we’re free to substitute anything isomorphic with A, and clients can’t tell they’re not equal.
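A rough Haskell rendering of that package (IntStack, listStack, and top are made-up names; 1+Int*A becomes Maybe (Int, a)):

{-# LANGUAGE ExistentialQuantification #-}

-- the carrier type a stays hidden; only the three operations escape
data IntStack = forall a. IntStack a (Int -> a -> a) (a -> Maybe (Int, a))

-- one possible implementation, backed by a plain list
listStack :: IntStack
listStack = IntStack [] (:) unconsList
  where
    unconsList []       = Nothing
    unconsList (x : xs) = Just (x, xs)

-- clients can only thread the hidden state through the given operations
top :: IntStack -> Maybe Int
top (IntStack nil cons uncons) = fmap fst (uncons (cons 7 nil))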

How could you possibly construct another value of type A when all you know about A is that it’s a type and that env is a term of type A?

But that’s not all we know: with Σ-types we actually know which type A is. Which means we can do something awful like: let (A,env,f) = … in if A == Int then f 42 else f env.

The difference is in the typing rules for projections. Given a type A:* and a value x:B(A) we can construct the existential [A,x] : ∃a.B(a) and the product (A,x) : ΣA:*.B(A). The projection π2[A,x] has type B(a) where a remains abstract and cannot be refined (because for all we know A is not consistent with that refinement), whereas the projection π2(A,x) has type B(A) where we know the actual type A. Σ-types only generalize products, not existential quantification.
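Spelled out as rules (a standard presentation, not tied to any one system), the contrast looks like this. Strong (Σ) projections:

Γ ⊢ p : ΣA:*. B(A)
────────────────────────────────────
Γ ⊢ π₁ p : *      Γ ⊢ π₂ p : B(π₁ p)

Weak (∃) elimination:

Γ ⊢ p : ∃a. B(a)      Γ, a:*, x:B(a) ⊢ e : C      (a not free in C)
────────────────────────────────────────────────────────────────────
Γ ⊢ open p as [a,x] in e : C

The strong projections let the hidden type leak into later types; the weak eliminator’s side condition keeps the result type C from mentioning it at all.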

Posted by: wren ng thornton on March 7, 2010 9:32 AM | Permalink | Reply to this

Re: In Praise of Dependent Types

This is related to the question of evil, but from an operational side: we’re free to substitute anything isomorphic with A, and clients can’t tell they’re not equal.

I think this is the key point: parametricity lets us give a nonstandard, more extensional interpretation of the equality proposition in our type theory. That is, for reasons of modularity in programming, we want to be able to say (internally in our type theory) that an implementation of natural numbers in unary is equal to an implementation in binary, because we want to be able to swap equivalent implementations with each other.

This explains why strong sums are incompatible with parametricity. Suppose that Peano(X) means that the type X is a model of the Peano axioms. Then an implementation of the natural numbers is a sum type ∃α. Peano(α). Two different terms of this type might be t₁ = (unary, pf1) and t₂ = (binary, pf2).

We would like to be able to say that these two terms t₁ and t₂ are equal to each other. However, equality is substitutive, so with strong sums this equality would let us conclude that π₁(t₁) = π₁(t₂). And then you could use this derived type equality to make s₁(0₂) (i.e., using the unary successor on the binary 0) a well-typed expression, which is obviously unsound.

However, weak sums are compatible with this equality, since their elimination rule doesn’t let you construct the unsound equality of types.

Note that there may be an alternate route that gives equality type coercions more operational significance, which would be sound, in this example anyway. However, to work out this suggestion I think you’d have to be fluent both in groupoid/homotopic models of type theory and in type-theoretic technology like focalization and logical relations. This is a teeny-tiny set. :(

Posted by: Neel Krishnaswami on March 7, 2010 1:20 PM | Permalink | Reply to this

Re: In Praise of Dependent Types

We would like to be able to say that these two terms t₁ and t₂ are equal to each other. However, equality is substitutive, so with strong sums this equality would let us conclude that π₁(t₁) = π₁(t₂). And then you could use this derived type equality to make s₁(0₂) (i.e., using the unary successor on the binary 0) a well-typed expression, which is obviously unsound.

This seems to me to require stronger guarantees than just that sums are weak (which means perhaps I am still confused, and weak sums are not all there is to the opacity of existentials). For instance, I was experimenting with the Church encoding of weak sums in Agda, and I was able to write a π₁ using only weak elimination such that:

π₁[T,e] = T

And that fact is recognizable and usable. So if we expect all ∃α. Peano(α) to be equal, that is a problem even with weak sums, because we can get:

[Unary, pf1] = [Binary, pf2]
Unary = π₁[Unary, pf1] = π₁[Binary, pf2] = Binary

and that’s wrong regardless of whether we can project out pf1 and pf2. So if we want that equality, it seems as though we need

weak (λT e. T) [T,e]

to not actually compute to T, or we need to disallow that expression somehow.
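For comparison, here is the analogous Church encoding in Haskell (Dan’s experiment was in Agda; Exists and pack are illustrative names). Haskell cannot even express the offending π₁, which is exactly the dependent-theory difference:

{-# LANGUAGE RankNTypes #-}

-- Church-encoded weak sum: the eliminator is the entire interface
type Exists b = forall r. (forall a. b a -> r) -> r

pack :: b a -> Exists b
pack x k = k x

-- In a dependent theory the result r may itself be Type, so the
-- continuation \T e -> T is legal and recovers pi_1; in Haskell r
-- ranges only over ordinary types, so the hidden type cannot escape.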

Come to think of it, this may be a way in which parametricity genuinely fails for products and sums (whether strong or weak) in a dependent type theory. Namely, regardless of whether we have type case, there can be functions:

Π(A : Type). Type

which are not parametric in A. Going out on a limb, parametricity for functions of that type corresponds to something like:

Π(A : Type). K₁ A → K_Type A

being natural in A, where K_T is the constant functor for T. But:

f_A = λu. A

is not natural, because:

λu. A = K_Type g . f_A ≠ f_B . K₁ g = λu. B,   for g : A → B

and, in fact, I think the only parametric such f are

f = const (λu. T)

Presumably there’s a corresponding failure of parametricity for sums in a dependent type theory, whether those sums are strong or weak.
Posted by: Dan Doel on March 7, 2010 4:15 PM | Permalink | Reply to this

Re: In Praise of Dependent Types

Yes, you’re right – we can’t use the Martin-Löf identity type (whose only constructor is Refl) to interpret equality of types if we want parametricity axioms to be sound.

If we look at parametricity in terms of logical relations, it says that there is a relation between Unary and Binary, which Peano(Unary) and Peano(Binary) preserve. The fundamental theorem then says that all terms of the language preserve these relations. If we want this in dependent type theory, then clearly the proofs of the equality type at existentials must contain this data!

The point (I think) is that all this is completely type-directed, so some exotic variant of definitional equality “ought” to be able to shuffle all this information around from where we define it to where we use it, so that we don’t need to manage it explicitly. Of course, proving the soundness of any such algorithm is going to require some fancy semantics.

Posted by: Neel Krishnaswami on March 8, 2010 12:08 PM | Permalink | Reply to this

Re: In Praise of Dependent Types

Two different terms of this type might be t₁ = (unary, pf1) and t₂ = (binary, pf2). We would like to be able to say that these two terms t₁ and t₂ are equal to each other.

Huh. That seems to me bizarre and perverse if equality means equality, since they clearly aren’t equal, only isomorphic.

Of course, in the homotopical/groupoid world we should expect the "equality" relation on the type Type to mean isomorphism/equivalence, so in that world they should be "equal." But actually, unary and binary are themselves isomorphic, so they should also be equal as elements of Type in this world. But it doesn’t seem like that should make s₁(0₂) well-typed, since you have to eliminate on the isomorphism p : Id_Type(unary, binary) in order to make the types agree, so you could have s₁(J(0₂, p)) : unary or J(s₁, p)(0₂) : binary.

Is that related to what you meant by “giving equality type coercions more operational significance”?

Posted by: Mike Shulman on March 8, 2010 1:46 AM | Permalink | PGP Sig | Reply to this

Re: In Praise of Dependent Types

I don’t think it’s unary and binary that should be equal. They’re clearly different types if you can actually look at them.

What should be equal is every element of ∃α. Peano(α), because the only thing we should be able to tell about them (presumably) is that they contain some unknowable type that is a Peano algebra (initial? that depends on what exactly Peano says).

That in a dependent type theory we can project out the type component and know what it is, and thus derive unary = binary, is a bug (from this perspective) in dependent type theory.

Posted by: Dan Doel on March 8, 2010 2:08 AM | Permalink | Reply to this

Re: In Praise of Dependent Types

I don’t think it’s unary and binary that should be equal. They’re clearly different types if you can actually look at them.

But they’re isomorphic, aren’t they? If they’re both models of the Peano axioms, it seems that they’d better be isomorphic. So if “equal” means “isomorphic,” then they are equal.

Posted by: Mike Shulman on March 8, 2010 2:18 AM | Permalink | Reply to this

Re: In Praise of Dependent Types

Here’s another example that should hopefully be more enlightening:

∃α. α

One way of looking at this is that it is the type containing all values, because we can use any value (coupled with its type) to create a value of ∃α. α. However, if it’s a properly parametric existential, it should be isomorphic to the unit type with a single constructor. This is because we have no way of inspecting the values to determine any differences between them. All functions with types like:

(∃α. α) → T

are of the form (assuming totality, so we don’t have to worry about bottoms as in Haskell, which mucks this situation up with seq as well):

const t

Because, more or less, we have no way of verifying that [ℕ, 5] is a different element than [ℕ, 7], or even [Unit, unit]. So, we really have no way of verifying that these aren’t all sent to the same, single element of ∃α. α. However, naturally, this doesn’t mean that ℕ is isomorphic to Unit. The existential is just hiding the information that our ‘implementation’ types are not isomorphic. For instance, if we had:

Peano′(X) = X × (X → X)

then the two elements of ∃α. Peano′(α):

[ℕ, (0, suc)]
[Unit, (unit, const unit)]

would be expected to be equal in the same way. All we know is that we have some type that contains at least one element, and is closed under some unary operation. But since we’ve provided no way to observe the effects of applying the operation, we can’t tell the difference between these two implementations.

Weak sums in a dependent type theory don’t quite get us this far, because even with weak elimination, we can project out the type component, and notice that [ℕ, 0] is different than [Unit, unit], because ℕ is different from Unit. For an existential to be parametric, this distinction shouldn’t be possible.
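The first half of this renders directly in Haskell (Some and consume are illustrative names):

{-# LANGUAGE ExistentialQuantification, RankNTypes #-}

-- "the type of all values": anything can be packed in, nothing can
-- be observed coming out
data Some = forall a. Some a

-- a consumer must be parametric in the hidden type, so (assuming
-- totality) it can only ignore what it is given:
consume :: (forall a. a -> r) -> Some -> r
consume k (Some x) = k x

Every total continuation of type forall a. a -> r is constant, which is the sense in which Some collapses to the unit type.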

Posted by: Dan Doel on March 8, 2010 3:09 PM | Permalink | Reply to this

Re: In Praise of Dependent Types

Hi Mike,

Yes, eliminating the isomorphism is what I mean by giving equality coercions more significance, since an isomorphism can carry computational content. I think this might offer a way of implementing extensionality properties (what we want) on top of an intensional base theory (what we can implement). Hence my interest in groupoids – I need an intensional-enough model to start from. :)

However, in this particular case, I don’t mean that – I mean parametricity, which is an even stronger requirement! So, suppose we know that unary and binary are isomorphic (i.e., not necessarily equal). However, what we want to do is to say that (unary, pf1) and (binary, pf2) are indeed equal, since these two terms behave the same in all possible contexts. That is, we want to say that unobservable differences don’t matter for equality, so hiding the implementation behind an abstraction boundary lets us equate the terms. In other words, we want to go from the categoricity theorem for the Peano axioms to the equality of the elements of the existential type ∃α. Peano(α).

This is weird to modern eyes, but it’s not unnatural – in fact, it’s very similar to the argument Dedekind made in "Was sind und was sollen die Zahlen?" There, he defined a notion of formally infinite system, showed that it was inhabited, and then appealed to an operation of "abstraction" which erased unobservable differences to make any possible implementations the same. Set-theoretically, his argument fails, but it does make sense in terms of parametricity.

Posted by: Neel Krishnaswami on March 8, 2010 10:31 AM | Permalink | Reply to this

Re: In Praise of Dependent Types

Okay, now I think I see what you’re trying to do. I still maintain, however, that it’s perverse and bizarre, and perhaps doomed to failure for those reasons. (-: I think (and I think many of the people around here would agree with me, if they’re still listening to this discussion) that one of the fundamental insights of modern mathematics, and of category theory in particular, is that one shouldn’t try to force things to be equal which are naturally only isomorphic. Some of us believe that Dedekind’s sort of “abstraction” has actually found its true expression in “structural set theory,” where it doesn’t even make sense to ask whether two sets are equal, only isomorphic.

Posted by: Mike Shulman on March 8, 2010 8:23 PM | Permalink | Reply to this

Re: In Praise of Dependent Types

I’m not sure equality of sets/types is the point. The equality that’s being imposed isn’t unary = binary, for instance. It’s that values of type ∃α. Peano(α) are equal, and that those values contain types is somewhat incidental to parametricity, at least in a dependently typed theory. For instance, if we consider an extended existential:

∃n : ℕ. ℕ

then parametricity would (I believe) imply that:

∀m, m′, n : ℕ. [m, n] = [m′, n]

So, ∃n : ℕ. ℕ is isomorphic to ℕ, whereas in the case of the dependent sum, this is well known to be ℕ × ℕ.

So, going back to the Peano case, ∃α. Peano(α) is the type whose values are implementations of the Peano axioms. All elements are ‘equal’ in that the Peano axioms fully determine everything we could observe about each element. But, I don’t think it even matters if we can find an inhabitant of

Id [unary, pf1] [binary, pf2]

for some (intensional or extensional) identity type Id. What matters is that the type enforces the intended abstraction, and that there is no operation in the language that lets us see a difference between the two. Dependent sums don’t accomplish this because:

SSSSZ : π₁[unary, pf1]
100 : π₁[binary, pf2]

but not vice versa. For an example where this would actually matter, we should be able to implement coinductive types as:

νF = ∃s. s × (s → F s)

But, if we use dependent sums, there are potentially a lot of values that should be observationally equal (even if not provably), but aren’t, like, for νX. 1 + X:

inf_Unit = [Unit, unit, λu. i₂ u]
inf_ℕ = [ℕ, 0, λn. i₂ n]

* ∃n : ℕ. ℕ should be isomorphic to ∀R. (∀n : ℕ. ℕ → R) → R (or something like it), but the n there is computationally irrelevant/erasable, so we’re left with ∀R. (ℕ → R) → R, which is isomorphic to ℕ.

The above shows that in a sense, values of type ∃α. Peano(α) don’t really contain types at all (unless Peano specifies that they do, of course). Their values only contain values and functions over some unknown, unknowable type that satisfy the Peano axioms. This is true in a real sense if your purpose for introducing these parametricity conditions is erasure; ∃α. Peano(α) will have its first component erased at runtime.
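Concretely, a Haskell sketch of the coinductive encoding above, together with the two observationally equal but distinct packages (Nu, observe, and the rest are illustrative names; f = Maybe plays the role of X ↦ 1 + X):

{-# LANGUAGE ExistentialQuantification #-}

-- greatest fixed points as existential coalgebras: Nu f = ∃s. s × (s → f s)
data Nu f = forall s. Nu s (s -> f s)

-- one observation step
observe :: Functor f => Nu f -> f (Nu f)
observe (Nu s step) = fmap (\s' -> Nu s' step) (step s)

-- the two "infinite" values, packaged over different hidden states:
infUnit :: Nu Maybe
infUnit = Nu () Just

infNat :: Nu Maybe
infNat = Nu (0 :: Integer) Just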

Posted by: Dan Doel on March 11, 2010 8:38 PM | Permalink | Reply to this

Re: In Praise of Dependent Types

I still haven’t figured out what “parametricity” means, but to my intuition as a mathematician, it sounds like you’re saying that you want to have something (an implementation of the Peano axioms) which contains a type (since the zero element and successor function have to act on some type) but also doesn’t contain a type. And mathematically, I don’t see how that can mean anything. If you want to hide the type that it contains, that’s one thing, but the type still has to be there.

In membership-based set theory like ZFC, I could ask for a thing 0 and a set s with the property that there exists some set N such that 0 ∈ N and s is a function N → N, since in that case, "being a function from N to N" is a property that might or might not be possessed by some set s. But it seems to me that the whole point of type theory is that "being of type N → N" is not a property that might or might not be possessed by some thing s, but rather that before you can be given s at all, you have to be given its type. So I don’t understand what it means to talk about an implementation of the Peano axioms as consisting of 0 and s "such that there exists" some type α on which they live.

Posted by: Mike Shulman on March 12, 2010 2:28 AM | Permalink | Reply to this

Re: In Praise of Dependent Types

I’m not sure I’m capable of explaining parametricity any better than my previous attempts. I can link to papers by folks who almost certainly understand it better than I do. John Reynolds was, as far as I know, the first to formalize it in Types, Abstraction and Parametric Polymorphism. There’s also a part 2 that generalizes to categories other than Set (from what I gather, Reynolds proved that the second-order lambda calculus has no non-trivial classical set-theoretic model after he’d defined parametricity in terms of said classical set-theoretic understanding). Wadler’s Theorems for Free! is also a popular paper on the subject.

Parametricity was intended to capture the sort of polymorphism behind

map : ∀a, b. (a → b) → List a → List b

that each instantiation map_{A,B} is in some sense the ‘same function’, or that ‘what it does’ is independent of the particular values of a and b. One can look at this as the values of the two variables being hidden inside the function. Existentials turn this inside out, and hide the specific value of the quantified variable from everyone outside, and force any code you write to ignore what type is actually used to implement the existential.

The sense in which existentials might not really contain types is that, per the above, we’ve systematically ensured, with types, that our computations don’t make use of various pieces of information. The type component of an existential pair is one of these pieces of information, as are the type parameters and corresponding arguments for universals. So, once types have been checked, a smart compiler can simply erase all of the stuff we’ve systematically ensured that we don’t look at, because it won’t matter. This paper (and a related thesis) extends that hiding capability from just types to anything in the language, based on how terms are used instead of what type they have. So we can have a function whose type proves that it doesn’t look at some natural number it takes as a parameter, and thus that parameter may be erased. That paper talks mainly about “erasure”, but any time they say “erasability” you can instead think “parametricity” and so on.

Perhaps it was a mistake to suggest thinking that the values don’t really contain types (or whatever we’re abstracting over). If one thinks about them intensionally, presumably they do. Extensionally, one might question whether they actually do, since we cannot observe that they do. For instance, Wadler’s free theorem for

f : ∀α. (α → R)

for any concrete R can be used to derive:

∀x : A, y : B. f_A x = f_B y

But if we take R = ∃β. β, this is the type of its ‘constructor’, which suggests that every inhabitant of ∃β. β is in some sense ‘equal’. Now, I don’t know of any intensional systems that would give you that, and one could instead interpret this as a statement about what the typing rules hide from us, and allow us to observe. But from the latter perspective, what is the difference between a type whose values really hold a type and a value of that type, but don’t let us usefully see them, and a type which is only really inhabited by a featureless element? And the answer is, for some sufficiently extensional equivalence on ∃β. β, there’s no difference up to isomorphism.
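A tiny Haskell illustration of that free theorem, taking R = Int (the definitions are mine, for illustration): any total function of this type must ignore its argument.

-- parametricity forces every total f :: forall a. a -> Int to be constant
f :: a -> Int
f _ = 42

check :: Bool
check = f "hello" == f False   -- necessarily True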

Posted by: Dan Doel on March 12, 2010 8:45 AM | Permalink | Reply to this

Re: In Praise of Dependent Types

Hi Mike,

Maybe an example of how to interpret the types of System F would be helpful.

First, start with some model of the untyped lambda calculus. Let’s call it V. Because it’s a model of the lambda calculus, it’ll satisfy the isomorphism V ≃ V → V.

To interpret System F, we can interpret types via the category PER(V). A PER is a "partial equivalence relation" – i.e., a relation which is symmetric and transitive, but not necessarily reflexive. We’ll take the PERs as the objects, and take morphisms from A to B to be elements of V → V which preserve relations. That is, f is a morphism if for any A(v, v′), we have B(f(v), f(v′)). (We also quotient so that we don’t care about what happens outside the graph of A, and similar details.)

The idea is that the graph of the PER tells you the domain of the type, and the elements of the relation tell you how to equate the elements of the domain. (As an aside, note that this can be seen as a degenerate groupoid model, viewing a PER as a skeletal groupoid.) However, there is one property that PER(V) has, which arbitrary families of groupoids don’t: the set of PERs is a set, and it is closed under meets. This means that we can interpret a type ∀α. A(α) by varying α over all PERs, and then taking their meet.

This explains why the term for a universal product ∀α. A(α) doesn’t have to manipulate actual objects representing the type/set – the idea is that the actual computational object just has to behave sufficiently uniformly that it works regardless of the choice made. (Incidentally, this is related to an argument Carnap made back in the 30s about when impredicative definitions are permissible. It’s funny it was re-invented decades later to program computers.)

This is also why things like the map function Dan Doel mentions below are so interesting: they are individual terms which “contain the data” for a whole natural transformation (which is normally seen as a kind of indexed family). So universal products can give a different angle on various size issues related to category theory – for example, in System F an internal version of the Yoneda embedding can be written as an isomorphism

∀α, β. (α → β) ⇔ (∀γ. (γ → α) → (γ → β))

For the left-to-right direction, the actual lambda-term would look like:

Λα, β. λf : (α → β). Λγ. λg : (γ → α), v : γ. f(g v)

For the right-to-left direction, the actual lambda-term would look like:

Λα, β. λk : (∀γ. (γ → α) → (γ → β)). k [α] id_α

Note the absence of hom-sets – obviously this is useful for computerized proof assistants, because a type theory doesn’t necessarily have a set theory lying around to express stuff like this.
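For what it’s worth, the two terms transcribe directly into Haskell’s rank-2 fragment (a sketch; toYoneda and fromYoneda are my names):

{-# LANGUAGE RankNTypes #-}

-- the same isomorphism, with the universal product as a rank-2 type
toYoneda :: (a -> b) -> (forall g. (g -> a) -> (g -> b))
toYoneda f g = f . g

fromYoneda :: (forall g. (g -> a) -> (g -> b)) -> (a -> b)
fromYoneda k = k id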

(One detail I’ve slipped past is that the model construction I handwaved at doesn’t quite validate these kinds of isomorphisms – you need a slightly tighter interpretation of the universal product to achieve that. See Rosolini’s "Reflexive Graphs and Parametric Polymorphism," or Birkedal and Møgelberg’s "Categorical Models of Abadi-Plotkin’s Logic for Parametricity." These are honestly a bit above my categorical pay grade – I know some concrete constructions on PERs, which I use as a map to these papers.)

Posted by: Neel Krishnaswami on March 12, 2010 10:16 AM | Permalink | Reply to this

Re: In Praise of Dependent Types

Thinking on this some more, it occurred to me that in the case of an existential type, we have:

π₁ : (∃α. Peano(α)) → *

But this isn’t ruled out by a lack of type case in Haskell, or other languages like it. It’s ruled out simply because such a language doesn’t have the (*, [], []) Π-formation rule. That rule is exactly what dependent typing adds, though, and it lets us inspect parts of an existential that we want to be hidden. Further, the reason this extension is out of place is that the added functionality isn’t (guaranteed to be) parametric. I don’t actually know of a formal definition of parametricity for existentials, but presumably we can gain a little insight via:

(∃α. P α) → R ≅ ∀α. P α → R

where parametricity for the ∃ should match up with parametricity for the isomorphic ∀. But, when R = Type, we observed that

(λA P. A) : Π(A : Type). P A → Type

isn’t parametric. And this is exactly what corresponds to π₁.

So, perhaps when moving to dependent types, retaining existential/universal quantifiers that are subject to additional parametricity (/erasure) conditions is what is needed to avoid losing abstraction.

Posted by: Dan Doel on March 8, 2010 1:55 AM | Permalink | Reply to this

Re: In Praise of Dependent Types

The loss of parametricity is one of the biggest problems with dependent types, in terms of being usable in useful languages. Trying to find a way to make parametricity abide with dependency is one of the big open questions in type theory, since it’s very easy to become inconsistent.

I’ve actually been working on the problem lately and my working conclusion agrees with yours: we need to retain universal/existential quantifiers distinct from Π/Σ. Part of the problem, I think, is that non-dependent languages conflate a number of different things and call them all “products”; but when we move to the dependent setting that conflation falls apart and we end up with all these different versions of Σ. More particularly, while the ∀ of System F can be encoded by Π (and the ∃ by a convoluted impredicative construction), that’s just an encoding. The Π doesn’t give us the same reasoning ability and so does not, I think, actually include universal (parametric) quantification.

Posted by: wren ng thornton on March 8, 2010 7:48 AM | Permalink | Reply to this

Re: In Praise of Dependent Types

we can say that a category consists of a type Ob of objects, together with a dependent type Hom(x, y), where x, y ∈ Ob

(this is a repeat of a recent comment at the nForum, but probably more relevant here than to that discussion)

This is fine for large categories, but small categories we can (and probably should) define as categories internal to Set; having a set of objects and a set of arrows, we can compare arbitrary arrows for equality.

If one is given a small category via a set of objects (say, a type with an equality predicate) and dependent types as hom-sets, we can form the dependent sum and get an internal category.

Posted by: David Roberts on March 3, 2010 10:44 PM | Permalink | Reply to this

Re: In Praise of Dependent Types

You’ve convinced me. I’m now ready to join the Society for Dependent Types, though I’m worried other members may be clingy.

Now, to what extent can modal logic be seen as a dependent type theory? We had some related discussion back here.

Models of first-order modal theories will have a collection of worlds, where in each world there is a domain of individuals with certain properties. There is an accessibility relation between the worlds, so that I say P is necessary at my world if it holds at every accessible world, and P is possible if it holds at some accessible world. Different modal logics correspond to different accessibility relations, i.e., combinations of various properties: reflexive, symmetric, transitive, and so on.

So we were wondering about having a ‘metatype’ of worlds which was a groupoid (and we could allow different groupoids covering the worlds), and a typed logic at each world. So then there’s a type Dog(w), corresponding to the dogs at world w, i.e., Dog(w) is a dependent type.

Posted by: David Corfield on March 4, 2010 9:17 AM | Permalink | Reply to this

Re: In Praise of Dependent Types

I wonder what is the ‘typed modal logic’ referred to by Awodey and Kishida on p. 22 of Topology and Modality.

Posted by: David Corfield on March 5, 2010 11:40 AM | Permalink | Reply to this

Re: In Praise of Dependent Types

So then there’s a type Dog(w), corresponding to the dogs at world w, i.e., Dog(w) is a dependent type.

Makes sense to me. Then you should be able to define modal operators by quantifying over the accessibility relation on worlds.
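For instance (a standard Kripke-style rendering, not anything from the thread above): given a type W of worlds and an accessibility relation R, necessity and possibility at a world w become quantified dependent types:

□P(w) := Π(w′ : W). R(w, w′) → P(w′)
◇P(w) := Σ(w′ : W). R(w, w′) × P(w′)

so □ is a dependent product over accessible worlds and ◇ a dependent sum.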

Posted by: Mike Shulman on March 5, 2010 8:28 PM | Permalink | Reply to this

You lie!; Re: In Praise of Dependent Types

Great thread. Thank you! Now I need to re-examine my monograph-in-progress on belief revision in the dynamics of intelligent agents making public announcements, where agents may be various types of liars. "We can instead interpret a proposition φ by ‘the set of all proofs of φ’ or ‘the set of all reasons why φ is true’—a set which contains strictly more information than just whether φ is true. You might have already noticed that dependent sums already take us out of the world of subsingletons: if φ(a) is a proposition dependent on a, then Σ_{a∈A} φ(a) isn’t just a subsingleton representing the proposition ∃a∈A. φ(a) (which we might naively expect); rather, it’s the set of all pairs (a, t) where a∈A and t∈φ(a) is an assertion/proof/reason why φ(a) is true. In other words, it’s the set of witnesses to the assertion ∃a∈A. φ(a), which is strictly more informative."

The reasoning of an agent revising its belief about who is what kind of liar is a considerably deep problem (once we leave conventional puzzles in propositional logic behind), which I was tackling through Model Theory and Bisimulation. But the use of sets of witnesses to assertions may be a more “natural” approach, in the sense of how people really do it. Algorithms for belief revision are attempts to deal with combinatorial explosion. It is not feasible for each agent to model each other agent, including each other’s agent of each other’s agent, and so on recursively, to “Indra’s net” countable infinity. Can a quantum computer do this liar-identification belief revision feasibly?

Posted by: Jonathan Vos Post on March 4, 2010 6:20 PM | Permalink | Reply to this

Re: In Praise of Dependent Types

Vladimir Voevodsky was (is?) also a fan of dependent type systems:

http://video.ias.edu/webfm_send/734

Posted by: tryggth on March 5, 2010 4:55 AM | Permalink | Reply to this

Re: In Praise of Dependent Types

Vladimir Voevodsky was (is?) also a fan of dependent type systems

Yep, see also his web page.

Posted by: Mike Shulman on March 5, 2010 8:27 PM | Permalink | Reply to this

Re: In Praise of Dependent Types

All this discussion about programming languages is interesting (at least to me), but I think it may be obscuring the main point I wanted to make, which is that dependent types are really part of the natural way we talk about mathematics, and even about everyday life. Considerations like “extensional/intensional equality” and “parametricity” are interesting and important to people trying to design programming languages, but for purposes of the ordinary mathematician, or the ordinary philosopher, I don’t think they’re really that relevant. Mathematicians and everyday people are used to treating everything as extensional, and dependent types can work perfectly well that way, as a mathematical formalism on the same level as ordinary first-order logic, but one which is (I believe) more expressive and closer to the usual way we talk and think. And as Neel mentioned, the possibility of unifying proofs and data ought to have some interesting implications for the philosophy of mathematics.

Posted by: Mike Shulman on March 5, 2010 8:33 PM | Permalink | Reply to this

Re: In Praise of Dependent Types

For the record, I agree with this. The stuff on parametricity was kind of a tangent that ended up turning into a bigger deal than I initially intended.

I think the hom-sets example was a good one. Admittedly, a fair amount of my thinking about category theory has probably been colored by thinking about it and implementing it in type-theoretic/functional language settings. But it seems quite natural for hom to be a family of types, and for composition to be defined appropriately in that manner. I don’t think it would even occur to me in my usual setting for composition to be modeled as some partial function C₁ × C₁ → C₁ which tests for equality of certain components of its arguments. But that’s how it’s often defined in category theory texts.

So, absolutely, dependent types are great and useful, and mathematicians should jump on the bandwagon. :)

Posted by: Dan Doel on March 22, 2010 6:37 AM | Permalink | Reply to this

Re: In Praise of Dependent Types

Dependent types are already used all over math :) Think about a vector field on a manifold. It sends a point in the manifold to a vector in the tangent space at that point: F : (p : M) → T_p M. Or think about the type of the limit functor in category theory.

Posted by: Jules on September 16, 2015 10:04 PM | Permalink | Reply to this
