Propositions as Some Types and Algebraic Nonalgebraicity
Posted by Mike Shulman
Perhaps the aspect of homotopy type theory which causes the most confusion for newcomers (at least, those without a background in type theory) is its treatment of logic. With that in mind, I want to share some general thoughts about all the ways in which foundational systems can treat logic, and why I think HoTT makes the choice that it does. This even turns out to have interesting implications for the difference between “algebraic” and “non-algebraic” structures.
Naively, one might say that in mathematics we do two kinds of things: (A) define and construct things and (B) prove things about them. There are also, of course, all other sorts of less precise mathematical activity, such as looking for patterns and making conjectures, but in terms of formal activity that a foundational system has to be able to express, I think everything can be reduced to those two. Roughly, (A) deals with things (like sets, or homotopy types), whereas (B) deals with propositions. However, there are several ways that a foundational system can treat the relationship between “things” and “propositions”.
Propositions are basic. This is the approach of classical first-order logic, in which traditional set theories such as ZFC are formalized. With this approach, activity (A) is not a basic thing, but is reduced to activity (B). Our ability to perform certain basic constructions is not part of the formal system, but is expressed through axioms which are propositions. The rules of logic tell us how to derive new propositions from old ones. In ZFC, we never construct a set directly; rather, we prove a statement to the effect that a set with such-and-such properties exists (and, perhaps, is unique).
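For example (a standard formulation of the pairing axiom, not one quoted in the post), ZFC does not hand us the pair set of $a$ and $b$; it merely asserts, as a first-order sentence, that such a set exists:

$$\forall a\, \forall b\, \exists c\, \forall x\, \big( x \in c \leftrightarrow (x = a \vee x = b) \big).$$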
Things and propositions on an equal footing. This approach is sometimes called logic-enriched type theory; it is used in much writing on categorical logic. With this approach, both things (called types) and propositions (or more precisely, predicates over types) are basic objects of the theory. There are rules of type theory, which tell us how to construct new types from old ones (products, disjoint unions, function-types, etc.), and also rules of logic, which tell us how to derive new propositions from old ones. Of central importance, and perhaps most difficult to get used to for someone coming from a traditional mathematical background, is that the operation of forming the type $A \times B$ from the types $A$ and $B$, and the operation of forming the proposition $P \wedge Q$ from the propositions $P$ and $Q$, are on the same ontological footing (as opposed to in ZFC, where the existence of a set called the “cartesian product” of two other sets is a proposition that must be proven).
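Schematically (this notation is my gloss, not the post’s), such a system keeps two separate kinds of judgment side by side: type-theoretic judgments for constructing things, and logical sequents for deriving propositions about them:

$$\Gamma \vdash A\ \mathsf{type}, \qquad \Gamma \vdash a : A, \qquad \Gamma \mid \phi_1, \ldots, \phi_n \vdash \psi,$$

where $\Gamma$ is a context of typed variables and $\phi_1, \ldots, \phi_n, \psi$ are predicates over the types in $\Gamma$.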
Things are basic. In this approach, propositions are regarded as particular things (that is, “sets”, or usually “types”). When we regard a proposition as a type, the elements of that type are regarded as proofs or witnesses of that proposition; thus proving a proposition is identified with exhibiting an element of a type. Hence activity (B) is not a basic thing, but is reduced to activity (A). What makes this possible is the Curry-Howard correspondence, which observes that the basic operations on propositions (and, or, implies, …) correspond very closely to the basic operations on types (product, coproduct, function-type, …).
If we choose this approach, we have to make a further choice: which types are regarded as propositions? This affects the exact way in which the Curry-Howard correspondence is realized.
Any type can be a proposition. This is usually called propositions as types (although perhaps “types as propositions” would be more appropriate). In this case, we can reuse all type-operations as proposition-operations without change: the proposition $P \wedge Q$ is the type $P \times Q$, the proposition $P \vee Q$ is the type $P + Q$, etc. Once again the type-theoretic and logical operations are on the same ontological footing, but now it is because they are exactly the same thing.
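To make the Curry-Howard reading concrete, here is a minimal sketch in Lean 4 syntax (Lean is used only as a convenient stand-in for Martin-Löf-style type theory, and the names andComm and orComm are mine): proving “$P$ and $Q$ implies $Q$ and $P$” just means writing a program of the corresponding type.

-- Propositions-as-types: "and" is the product type, "or" is the sum type,
-- and "implies" is the function type, so a proof is literally a program.

-- "P and Q implies Q and P": a term of (P × Q) → (Q × P).
def andComm {P Q : Type} : P × Q → Q × P :=
  fun pq => (pq.2, pq.1)

-- "P or Q implies Q or P": a term of (Sum P Q) → (Sum Q P), by case analysis.
def orComm {P Q : Type} : Sum P Q → Sum Q P
  | Sum.inl p => Sum.inr p
  | Sum.inr q => Sum.inl q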
Only some types can be propositions, or propositions as some types. Usually, the restriction imposed is that a type can be regarded as a proposition if (and only if) it has at most one element. In other words, a proposition is something that may be true (if it contains an element), but there is no distinction between different “ways” in which it can be true. In this case, we can only reuse a type-operation as a proposition-operation if it preserves the property of having at most one element. Cartesian product does, so $P \wedge Q$ can be $P \times Q$, but disjoint union does not, so $P \vee Q$ must be some sort of “truncation” of $P + Q$ (see bracket type).
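Continuing the sketch in Lean 4 (only an approximation: Lean’s Prop and Nonempty play roughly the roles of h-props and the bracket/(-1)-truncation, and the examples are mine), the truncated reading of “or” lets any element of a type serve as evidence, but only lets us case-split when the goal is itself a proposition.

-- Nonempty A behaves like a bracket type: any element of A proves it...
example {A : Type} (a : A) : Nonempty A := ⟨a⟩

-- ...and a truncated value can be taken apart only when the goal is itself
-- a proposition, such as the disjunction ∨ (which lives in Prop).
example {A B : Type} (h : Nonempty (Sum A B)) : Nonempty A ∨ Nonempty B :=
  match h with
  | ⟨Sum.inl a⟩ => Or.inl ⟨a⟩
  | ⟨Sum.inr b⟩ => Or.inr ⟨b⟩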
(As an aside, I should mention that any of the above approaches can be used as a “meta-theory” to formalize any of the others as an “object theory”. In particular, it is very natural to consider a theory in first-order logic as sitting inside a typed system: this is perhaps less clear for a theory like ZFC which has only one type (the type of “sets”), but becomes important for theories with more than one type. However, we can also regard meta-theoretic reasoning about type theories (such as “in any type theory satisfying (blank) we can construct a type (blah) such that …”) as taking place in an ambient first-order logic, or an ambient logic-enriched type theory, or whatever we prefer.)
But which is the best approach? Each has its advantages. Propositions-are-basic has a certain economy and a long tradition, and enables the application of many techniques from first-order logic. Logic-enriched type theory is probably the most flexible, decoupling propositions from things; this seems the best way to talk about the logic of strong-subobjects in a quasitopos, or the logic of a hyperdoctrine or tripos. Things-are-basic has a pleasing parsimony of basic notions, which is a help when formalizing it in a computer, and also keeps witnesses around for later use rather than forgetting them once theorems are proven. (Aficionados of each of these should feel free to mention further advantages of each in the comments.)
If we were to set out, from scratch, to try to formulate a foundational theory in which the basic objects are ∞-groupoids, we might a priori choose any one of the above approaches. But I want to argue that propositions-as-some-types, which is the approach chosen by homotopy type theory, is the most natural choice in this setting. The fundamental observation is that the “types with at most one element” which arise in propositions-as-some-types are just the first rung on an infinite ladder: they are the (-1)-truncated types ((-1)-groupoids), called h-props. Why, then, should they be treated specially, as they are with propositions-are-basic or logic-enriched-type-theory?
In other words, homotopy theory tells us that the distinction between activities (A) and (B) is not binary, but many-layered. Activity (B) is the special case of constructing (-1)-truncated things, whereas we can also consider 0-truncated things, 1-truncated things, and so on, up to and including things that aren’t truncated at all. If we wanted to capture that with a logic-enriched sort of approach, we would need infinitely many basic notions, one for each level of truncation or lack thereof.
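Concretely, in the notation that later became standard in HoTT (my rendering, not a formula from the post), the ladder is defined inductively, starting by convention at level -2:

$$\text{is-}(-2)\text{-type}(A) :\equiv \mathrm{isContr}(A), \qquad \text{is-}(n+1)\text{-type}(A) :\equiv \prod_{x,y:A} \text{is-}n\text{-type}(x =_A y).$$

The (-1)-types are then exactly the types with at most one element (the h-props), and the 0-types are the h-sets.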
The homotopical view of identity types as path types also points us towards propositions-as-some-types. Recall that given a type $A$ in homotopy type theory with two points $x, y \colon A$, the “identity type” $\mathrm{Id}_A(x,y)$ represents the type of paths from $x$ to $y$. In general, this is not (-1)-truncated: there can be more than one distinct such path. However, if $A$ happens to be 0-truncated (an h-set), then $\mathrm{Id}_A(x,y)$ is (-1)-truncated, and under this view coincides with the classical proposition “$x = y$”. If propositions were not identified with (-1)-truncated types, then we would have the general construction of identity/path types, and we would also have, in the particular case when $A$ is 0-truncated, classical equality propositions of the form “$x = y$”, which in that case happen to be equivalent to inhabitation of the identity type. This would be unappealing and redundant.
There’s (at least) one more advantage to propositions-as-some-types in homotopy type theory, but to explain it I have to digress a bit. Consider the following two statements.
The axiom of choice: If for every $x \in A$ there exists a $y \in B$ such that some property $P(x,y)$ holds, then there is a function $f \colon A \to B$ such that for every $x \in A$ we have $P(x, f(x))$.
If for every $x \in A$ there exists a unique $y \in B$ such that some property $P(x,y)$ holds, then there is a function $f \colon A \to B$ such that for every $x \in A$ we have $P(x, f(x))$. This is variously called the axiom of unique choice, the axiom of non-choice, or the function comprehension principle.
Under propositions-are-basic, whether these statements hold depends on the formal system we set up inside first-order logic. In membership-based set theories like ZF, the axiom of choice is an axiom that we can assume or not, whereas the axiom of non-choice is almost always just true, essentially by the definition of a “function” as a certain kind of set of ordered pairs.
In logic-enriched type theory, however, even the axiom of unique choice is not automatically true. In fact, in the logic of strong-subobjects in a quasitopos that is not a topos, the axiom of non-choice is false! In that case, for a map $f \colon A \to B$, the truth of the assertion “for every $b \in B$ there exists a unique $a \in A$ such that $f(a) = b$” means that $f$ is both monic and epic, but this is not enough to make it split epic (since then it would be an isomorphism).
By contrast, under propositions-as-types, even the axiom of choice is automatically true! This is a classical observation dating back to Martin-Löf. The reason is that under the Curry-Howard correspondence, the quantifier “there exists a $y$ such that …” corresponds to the type-former “the type of all $y$ such that …”, and thus under propositions-as-types, they are identical. Therefore, asserting the hypothesis of AC, “for every $x$ there exists a $y$ such that …”, is the same as giving a function (since the quantifier “for every” corresponds to a dependent function space) mapping each $x$ to a $y$ with the desired property, which is exactly the conclusion of AC.
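In type-theoretic notation (my rendering), the propositions-as-types reading of AC is just the provable distributivity of dependent products over dependent sums:

$$\Big(\prod_{x:A} \sum_{y:B} P(x,y)\Big) \longrightarrow \sum_{f:A\to B} \prod_{x:A} P(x, f(x)),$$

given $g$ inhabiting the left-hand type, take $f(x) :\equiv \mathrm{pr}_1(g(x))$ and witness the property by $\mathrm{pr}_2(g(x))$.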
Finally, under propositions-as-some-types, we generally have the axiom of unique choice, but not necessarily the axiom of choice. Here, the quantifier “there exists a $y$ such that …” is the (-1)-truncation of “the type of all $y$ such that …”, so that the propositions-as-types argument for AC doesn’t go through. However, if there is at most one such $y$, then the defining property of the (-1)-truncation allows us to define a map from the type “there exists a $y$ such that …” (which is (-1)-truncated by definition) to “the type of all $y$ such that …” (which is (-1)-truncated by the assumption that $y$ is unique), making them equivalent, and hence allowing us to construct the desired function.
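With the (-1)-truncation $\|{-}\|$ playing the role of “there exists” (again my rendering), the axiom of choice instead reads

$$\Big(\prod_{x:A} \Big\|\sum_{y:B} P(x,y)\Big\|\Big) \longrightarrow \Big\|\sum_{f:A\to B} \prod_{x:A} P(x, f(x))\Big\|,$$

which is no longer provable in general; but when each $\sum_{y:B} P(x,y)$ is itself (-1)-truncated, the truncation in the hypothesis can be stripped off and the previous argument goes through, which is unique choice.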
Another way to express this is as follows. We cannot in general use “the type of all $y$ such that …” to represent the proposition “there exists a $y$ such that …”, since it is not (-1)-truncated. However, if in some particular case it turns out to be (-1)-truncated, then we can use it in this way, and thereby retain all the computational advantages of propositions-as-types. For example, for a function $f$ in extensional type theory (where types behave like sets), the “type of all inverses of $f$” is (-1)-truncated, and so represents the proposition “$f$ is an isomorphism” without needing a truncation applied to it.
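For instance (my notation), for $f \colon A \to B$ one can take the type of two-sided inverses

$$\mathrm{isiso}(f) :\equiv \sum_{g:B\to A} \big(g \circ f = \mathrm{id}_A\big) \times \big(f \circ g = \mathrm{id}_B\big),$$

and when $A$ and $B$ are h-sets, as all types are in extensional type theory, this type has at most one element, so it can serve directly as the proposition “$f$ is an isomorphism”.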
The extension of this example to the homotopical world is the genius of Voevodsky’s definition of the type $\mathrm{isequiv}(f)$ (which is shared by the various equivalent types expressing the same property) which I discussed back here: it contains within it the data of a homotopy inverse to $f$, yet it is always (-1)-truncated. Therefore, if we prove that some map $f$ is an equivalence, we are proving a proposition, and yet knowing that a map is an equivalence always gives us a specified homotopy inverse for it, since one can be extracted from any inhabitant of $\mathrm{isequiv}(f)$. (What the homotopy inverse is may depend on the inhabitant of $\mathrm{isequiv}(f)$, i.e. on “how” we know that $f$ is an equivalence. However, since $\mathrm{isequiv}(f)$ is (-1)-truncated, the space of such homotopy inverses is contractible, so up to homotopy, the dependence is trivial.)
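In the notation of the later HoTT literature (my rendering), Voevodsky’s definition says that every homotopy fiber of $f$ is contractible:

$$\mathrm{isequiv}(f) :\equiv \prod_{b:B} \mathrm{isContr}\Big(\sum_{a:A} \big(f(a) = b\big)\Big);$$

being a product of propositions, this is always (-1)-truncated, and from any inhabitant one extracts a homotopy inverse by sending each $b$ to the first component of the center of contraction of the fiber over $b$.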
This is essentially a “homotopical axiom of unique choice”. Note that it is a real departure from mathematics based on set theory that this can hold, and yet the ordinary axiom of choice may not. For instance, in traditional set-theoretic foundations, the statement “every fully faithful and essentially surjective functor between (1-)groupoids is an equivalence” implies the axiom of choice (unless we take “functor” to mean anafunctor). But in homotopy type theory, this statement is just true, whereas the axiom of choice we can assume or not as we please.
What does this have to do with algebraic and non-algebraic structures? Classically, we say that a definition (often, a definition of higher category) is algebraic if its operations are given by specified functions, and non-algebraic if the outputs of its operations are merely characterized and asserted to exist. A good example of the latter is a (complete) Segal space, a model for (∞,1)-categories consisting of a simplicial ∞-groupoid $X_\bullet$ such that each Segal map $X_n \to X_1 \times_{X_0} \cdots \times_{X_0} X_1$ is an equivalence (plus the “completeness” condition). Suppose we were to formalize this in homotopy type theory, so that each $X_n$ were a type. (We don’t yet know a good way to deal internally in type theory with the fact that there are infinitely many $X_n$’s, but let’s assume for now that we find some solution to that problem.) Then part of the definition would, quite naturally, be an assertion that each Segal map $X_n \to X_1 \times_{X_0} \cdots \times_{X_0} X_1$ is an equivalence.
However, since knowing that a map is an equivalence in homotopy type theory gives us a specified homotopy inverse to it, such a definition would automatically come with composition operations $X_1 \times_{X_0} X_1 \to X_1$. In other words, when we work in a system satisfying the homotopical axiom of unique choice, the difference between “algebraic” and “non-algebraic” structure vanishes: any non-algebraic structure comes with a specified algebraicization, which is algorithmically derivable from any proof that it satisfies the “non-algebraic” definition.
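Concretely (a sketch in simplicial notation, not spelled out in the post), composition is obtained by inverting the Segal equivalence for $n = 2$ and then applying the remaining face map:

$$X_1 \times_{X_0} X_1 \xrightarrow{\ \simeq\ } X_2 \xrightarrow{\ d_1\ } X_1,$$

where the first arrow is the specified homotopy inverse extracted from the proof that the Segal map $X_2 \to X_1 \times_{X_0} X_1$ is an equivalence.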
Re: Propositions as Some Types and Algebraic Nonalgebraicity
Hi Mike,
One of the deficiencies of pure type theory (the props-as-types view) is that its handling of real numbers is poor, from a computational point of view.
Computationally, you can think of a real number as a stream of rationals forming a Cauchy sequence, quotiented in some appropriate way. This construction works, and the theorems we want to prove generally work quite nicely (as Bishop shows in his book).
However, speaking as a computer programmer, it would be nice to have an operation that computes an approximation of a real number. For example, we may wish to print the answer after doing a computation! That is, I want to find rationals which are close to any given real number. This is easy enough to show in constructive set theory, since we can easily prove:

$$\forall x \in \mathbb{R}.\ \forall \epsilon \in \mathbb{Q}^{+}.\ \exists q \in \mathbb{Q}.\ |x - \iota(q)| < \iota(\epsilon)$$

That is, for every real $x$ and positive rational $\epsilon$, there is a rational $q$ that is within $\epsilon$ of $x$. (I am using $\iota$ to denote the embedding of the rationals into the reals.) Basically we can just chase down the Cauchy sequence until we get a rational that’s good enough.
However, this proof cannot be turned into a term of the corresponding type of Martin-Löf type theory: the reason is that such an inhabitant of this type must be a function, and so must give the same rational no matter which Cauchy sequence was used to represent the given real. This is really obnoxious!
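To make the contrast explicit in type-theoretic notation (my rendering of the point, not notation from the comment): the truncated statement

$$\prod_{x:\mathbb{R}} \prod_{\epsilon:\mathbb{Q}^{+}} \Big\|\sum_{q:\mathbb{Q}} |x - \iota(q)| < \iota(\epsilon)\Big\|$$

renders the mere existence proved constructively above, while the untruncated version

$$\prod_{x:\mathbb{R}} \prod_{\epsilon:\mathbb{Q}^{+}} \sum_{q:\mathbb{Q}} |x - \iota(q)| < \iota(\epsilon)$$

demands an actual assignment $x \mapsto q$, which is exactly the function that cannot be given independently of the representing Cauchy sequence.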
Can HoTT give a more helpful account of computing with higher-type objects like real numbers?