
January 12, 2012

Propositions as Some Types and Algebraic Nonalgebraicity

Posted by Mike Shulman

Perhaps the aspect of homotopy type theory which causes the most confusion for newcomers (at least, those without a background in type theory) is its treatment of logic. With that in mind, I want to share some general thoughts about all the ways in which foundational systems can treat logic, and why I think HoTT makes the choice that it does. This even turns out to have interesting implications for the difference between “algebraic” and “non-algebraic” structures.

Naively, one might say that in mathematics we do two kinds of things: (A) define and construct things and (B) prove things about them. There are also, of course, all other sorts of less precise mathematical activity, such as looking for patterns and making conjectures, but in terms of formal activity that a foundational system has to be able to express, I think everything can be reduced to those two. Roughly, (A) deals with things (like sets, or homotopy types), whereas (B) deals with propositions. However, there are several ways that a foundational system can treat the relationship between “things” and “propositions”.

  1. Propositions are basic. This is the approach of classical first-order logic, in which traditional set theories such as ZFC are formalized. With this approach, activity (A) is not a basic thing, but is reduced to activity (B). Our ability to perform certain basic constructions is not part of the formal system, but is expressed through axioms which are propositions. The rules of logic tell us how to derive new propositions from old ones. In ZFC, we never construct a set directly; rather, we prove a statement to the effect that a set with such-and-such properties exists (and, perhaps, is unique).

  2. Things and propositions on an equal footing. This approach is sometimes called logic-enriched type theory; it is used in much writing on categorical logic. With this approach, both things (called types) and propositions (or more precisely, predicates over types) are basic objects of the theory. There are rules of type theory, which tell us how to construct new types from old ones (products, disjoint unions, function-types, etc.), and also rules of logic, which tell us how to derive new propositions from old ones. Of central importance, and perhaps most difficult to get used to for someone coming from a traditional mathematical background, is that the operation of forming the type $A\times B$ from the types $A$ and $B$, and the operation of forming the proposition $(P \;and\; Q)$ from the propositions $P$ and $Q$, are on the same ontological footing (as opposed to in ZFC, where the existence of a set called the “cartesian product” of two other sets is a proposition that must be proven).

  3. Things are basic. In this approach, propositions are regarded as particular things (that is, “sets”, or usually “types”). When we regard a proposition as a type, the elements of that type are regarded as proofs or witnesses of that proposition; thus proving a proposition is identified with exhibiting an element of a type. Hence activity (B) is not a basic thing, but is reduced to activity (A). What makes this possible is the Curry-Howard correspondence, which observes that the basic operations on propositions (and, or, implies, …) correspond very closely to the basic operations on types (product, coproduct, function-type, …).

    If we choose this approach, we have to make a further choice: which types are regarded as propositions? This affects the exact way in which the Curry-Howard correspondence is realized.

    1. Any type can be a proposition. This is usually called propositions as types (although perhaps “types as propositions” would be more appropriate). In this case, we can reuse all type-operations as proposition-operations without change: the proposition $(P \;and\; Q)$ is the type $P\times Q$, the proposition $(P \;or\; Q)$ is the type $P\sqcup Q$, etc. Once again the type-theoretic and logical operations are on the same ontological footing, but now it is because they are exactly the same thing.

    2. Only some types can be propositions, or propositions as some types. Usually, the restriction imposed is that a type can be regarded as a proposition if (and only if) it has at most one element. In other words, a proposition is something that may be true (if it contains an element), but there is no distinction between different “ways” in which it can be true. In this case, we can only reuse a type-operation as a proposition-operation if it preserves the property of having at most one element. Cartesian product does, so $(P \;and\; Q)$ can be $P\times Q$, but disjoint union does not, so $(P \;or\; Q)$ must be some sort of “truncation” of $P\sqcup Q$ (see bracket type).
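To make the contrast between these last two options concrete, here is a minimal sketch in Lean 4, whose proof-irrelevant universe `Prop` plays the role of the “types with at most one element” (the names `and'` and `or'` are mine, for illustration only; Lean's `Prop` is proof-irrelevant by fiat rather than by a truncation condition, but the division of labor is the same):

```lean
-- Option 1 (propositions as types): logical operations simply
-- *are* type operations.  “P and Q” is a product, “P or Q” a sum.
def and' (P Q : Type) : Type := P × Q
def or'  (P Q : Type) : Type := P ⊕ Q

-- Option 2 (propositions as *some* types): Lean's `Prop` is a
-- universe of proof-irrelevant types.  Product preserves “at most
-- one element,” so `And` is still just pairing:
example (P Q : Prop) (hp : P) (hq : Q) : P ∧ Q := ⟨hp, hq⟩

-- But `Or` behaves like a truncation of the sum: a proof of P ∨ Q
-- can only be eliminated into other `Prop`s, so it does not
-- remember *which* disjunct held as reusable data.
example (P Q : Prop) (hp : P) : P ∨ Q := Or.inl hp
```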

(As an aside, I should mention that any of the above approaches can be used as a “meta-theory” to formalize any of the others as an “object theory”. In particular, it is very natural to consider a theory in first-order logic as sitting inside a typed system: this is perhaps less clear for a theory like ZFC which has only one type (the type of “sets”), but becomes important for theories with more than one type. However, we can also regard meta-theoretic reasoning about type theories (such as “in any type theory satisfying (blank) we can construct a type (blah) such that …”) as taking place in an ambient first-order logic, or an ambient logic-enriched type theory, or whatever we prefer.)

But which is the best approach? Each has its advantages. Propositions-are-basic has a certain economy and a long tradition, and enables the application of many techniques from first-order logic. Logic-enriched type theory is probably the most flexible, decoupling propositions from things; this seems the best way to talk about the logic of strong-subobjects in a quasitopos, or the logic of a hyperdoctrine or tripos. Things-are-basic has a pleasing parsimony of basic notions, which is a help when formalizing it in a computer, and also keeps witnesses around for later use rather than forgetting them once theorems are proven. (Aficionados of each of these should feel free to mention further advantages of each in the comments.)

If we were to set out, from scratch, to try to formulate a foundational theory in which the basic objects are $\infty$-groupoids, we might a priori choose any one of the above approaches. But I want to argue that propositions-as-some-types, which is the approach chosen by homotopy type theory, is the most natural choice in this setting. The fundamental observation is that the “types with at most one element” which arise in propositions-as-some-types are just the first rung on an infinite ladder: they are the $(-1)$-truncated types ($\infty$-groupoids), called h-props. Why, then, should they be treated specially, as they are with propositions-are-basic or logic-enriched-type-theory?

In other words, homotopy theory tells us that the distinction between activities (A) and (B) is not binary, but many-layered. Activity (B) is the special case of constructing (-1)-truncated things, whereas we can also consider 0-truncated things, 1-truncated things, and so on, up to and including things that aren’t truncated at all. If we wanted to capture that with a logic-enriched sort of approach, we would need infinitely many basic notions, one for each level of truncation or lack thereof.

The homotopical view of identity types as path types also points us towards propositions-as-some-types. Recall that given a type $A$ in homotopy type theory with two points $x,y\colon A$, the “identity type” $Id_A(x,y)$ represents the type of paths from $x$ to $y$. In general, this is not (-1)-truncated: there can be more than one distinct such path. However, if $A$ happens to be 0-truncated (an h-set), then $Id_A(x,y)$ is (-1)-truncated, and under this view coincides with the classical proposition “$x=y$”. If propositions were not identified with (-1)-truncated types, then we would have the general construction of identity/path types, and we would also have, in the particular case when $A$ is 0-truncated, classical equality propositions of the form “$x=y$”, which in that case happen to be equivalent to inhabitation of the identity type. This would be unappealing and redundant.

There’s (at least) one more advantage to propositions-as-some-types in homotopy type theory, but to explain it I have to digress a bit. Consider the following two statements.

  • The axiom of choice: If for every $x\in A$ there exists a $y\in B$ such that some property $P(x,y)$ holds, then there is a function $f\colon A\to B$ such that for every $x\in A$ we have $P(x,f(x))$.

  • If for every $x\in A$ there exists a unique $y\in B$ such that some property $P(x,y)$ holds, then there is a function $f\colon A\to B$ such that for every $x\in A$ we have $P(x,f(x))$. This is variously called the axiom of unique choice, the axiom of non-choice, or the function comprehension principle.
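For later reference, here is a standard homotopy-type-theoretic rendering of these two statements (the notation here is mine: $\Vert X \Vert$ is the $(-1)$-truncation of $X$, and $isProp$ says a type has at most one element):

$$\mathrm{AC}\colon\quad \Big(\prod_{x:A} \Big\Vert \sum_{y:B} P(x,y)\Big\Vert\Big) \;\to\; \Big\Vert \sum_{f:A\to B}\, \prod_{x:A} P(x,f(x))\Big\Vert$$

while unique choice reduces to the principle that a $(-1)$-truncated type can be untruncated,

$$\mathrm{AC!}\colon\quad isProp(Q) \to \Vert Q \Vert \to Q.$$

When each $P(x,y)$ is a proposition and the $y$ is unique, the type $\sum_{y:B} P(x,y)$ satisfies $isProp$, so the second principle applies to it pointwise; this is exactly the argument sketched further below.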

Under propositions-are-basic, whether these statements hold depends on the formal system we set up inside first-order logic. In membership-based set theories like ZF, the axiom of choice is an axiom that we can assume or not, whereas the axiom of non-choice is almost always just true, essentially by the definition of a “function” as a certain kind of set of ordered pairs.

In logic-enriched type theory, however, even the axiom of unique choice is not automatically true. In fact, in the logic of strong-subobjects in a quasitopos that is not a topos, the axiom of non-choice is false! In that case, for a map $p\colon B\to A$, the truth of the assertion “for every $x\colon A$ there exists a unique $y\colon B$ such that $p(y)=x$” means that $p$ is both monic and epic, but this is not enough to make it split epic (since then it would be an isomorphism).

By contrast, under propositions-as-types, even the axiom of choice is automatically true! This is a classical observation dating back to Martin-Löf. The reason is that under the Curry-Howard correspondence, the quantifier “there exists $y\in B$ …” corresponds to the type-former “the type of all $y\in B$ such that …”, and thus under propositions-as-types, they are identical. Therefore, asserting the hypothesis of AC, “for every $x\in A$ there exists a $y\in B$ such that …”, is the same as giving a function (since the quantifier “for every” corresponds to a dependent function space) mapping each $x\in A$ to a $y\in B$ with some property — which is exactly the conclusion of AC.
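To see how short Martin-Löf’s observation is when written out, here is a sketch in Lean 4 with the existential read literally as a $\Sigma$-type (the name `ac` and the variables are mine):

```lean
-- Under propositions-as-types, “∃ y, P x y” *is* the Σ-type
-- (y : B) × P x y, and “∀ x, …” is a dependent function type.
-- The “axiom” of choice is then just a reshuffling of data:
def ac {A B : Type} {P : A → B → Type}
    (h : (x : A) → (y : B) × P x y) :
    (f : A → B) × ((x : A) → P x (f x)) :=
  ⟨fun x => (h x).1,  -- the choice function: project the witness
   fun x => (h x).2⟩  -- ...and the evidence that it works
```

The hypothesis `h` already is a function choosing a witness for every `x`; nothing remains to be chosen.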

Finally, under propositions-as-some-types, we generally have the axiom of unique choice, but not necessarily the axiom of choice. Here, the quantifier “there exists $y\in B$ …” is the (-1)-truncation of “the type of all $y\in B$ such that …”, so that the propositions-as-types argument for AC doesn’t go through. However, if there is at most one such $y$, then generally, the defining property of the (-1)-truncation will allow us to define a map from the type “there exists $y\in B$ …” (which is (-1)-truncated by definition) to “the type of all $y\in B$ such that …” (which is (-1)-truncated by the assumption that $y$ is unique) making them equivalent, and hence allowing us to construct the desired function.
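Here is the same mechanism sketched in Lean 4, building a propositional truncation by hand out of the quotient primitive (the name `Trunc'` is mine; Lean and its libraries have their own variants of this):

```lean
universe u

-- A (-1)-truncation: quotient a type by the total relation.
def Trunc' (α : Type u) : Type u :=
  Quot (fun (_ _ : α) => True)

def Trunc'.mk {α : Type u} (a : α) : Trunc' α :=
  Quot.mk _ a

-- Recursion principle: to map out of the truncation, the map
-- must be constant, i.e. independent of the representative.
def Trunc'.lift {α β : Type u} (f : α → β)
    (h : ∀ a b, f a = f b) : Trunc' α → β :=
  Quot.lift f (fun a b _ => h a b)

-- “Unique choice”: if α has at most one element, the identity
-- map is constant, so we can escape the truncation.
def Trunc'.out {α : Type u} (h : ∀ a b : α, a = b) :
    Trunc' α → α :=
  Trunc'.lift id h
```

The hypothesis `h` is exactly “there is at most one such element,” and `Trunc'.out` is the promised map from “there exists …” back to “the type of all …”.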

Another way to express this is as follows. We cannot in general use “the type of all $y\in B$ such that …” to represent the proposition “there exists $y\in B$ …”, since it is not (-1)-truncated. However, if in some particular case it turns out to be (-1)-truncated, then we can use it in this way, and thereby retain all the computational advantages of propositions-as-types. For example, for a function $f\colon A\to B$ in extensional type theory (where types behave like sets), the “type of all inverses of $f$” is (-1)-truncated, and so represents the proposition “$f$ is an isomorphism” without needing a truncation applied to it.

The extension of this example to the homotopical world is the genius of Voevodsky’s definition of the type $isEquiv(f)$ (which is shared by the equivalent types $isAdjointEquiv(f)$ and $ishIso(f)$) which I discussed back here: it contains within it the data of a homotopy inverse to $f$, yet it is always (-1)-truncated. Therefore, if we prove that some map is an equivalence, we are proving a proposition — and yet, knowing that a map is an equivalence always gives us a specified homotopy inverse for it, since one can be extracted from any inhabitant of $isEquiv(f)$. (What the homotopy inverse is may depend on the inhabitant of $isEquiv(f)$, i.e. on “how” we know that $f$ is an equivalence. However, since $isEquiv(f)$ is (-1)-truncated, the space of such homotopy inverses is contractible, so up to homotopy, the dependence is trivial.)
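For reference, one standard formulation of Voevodsky’s definition (in the notation above) says that $f\colon A\to B$ is an equivalence when all its homotopy fibers are contractible:

$$isEquiv(f) \;:\equiv\; \prod_{b:B}\, isContr\Big(\sum_{a:A} \big(f(a)=b\big)\Big)$$

Being contractible is a mere proposition, so the whole type is $(-1)$-truncated; yet any inhabitant hands us, for each $b$, a point of the fiber over $b$, and these assemble into a homotopy inverse.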

This is essentially a “homotopical axiom of unique choice”. Note that it is a real departure from mathematics based on set theory that this can hold, and yet the ordinary axiom of choice may not. For instance, in traditional set-theoretic foundations, the statement “every fully faithful and essentially surjective functor between (1-)groupoids is an equivalence” implies the axiom of choice (unless we take “functor” to mean anafunctor). But in homotopy type theory, this statement is just true, whereas the axiom of choice we can assume or not as we please.

What does this have to do with algebraic and non-algebraic structures? Classically, we say that a definition (often, a definition of higher category) is algebraic if its operations are given by specified functions, and non-algebraic if the outputs of its operations are merely characterized and asserted to exist. A good example of the latter is a (complete) Segal space, a model for $(\infty,1)$-categories consisting of a simplicial $\infty$-groupoid $X$ such that each Segal map $$segal_n\colon X_n \to X_1 \times_{X_0} X_1 \times_{X_0} \cdots \times_{X_0} X_1$$ is an equivalence (plus the “completeness” condition). Suppose we were to formalize this in homotopy type theory, so that each $X_n$ were a type. (We don’t yet know a good way to deal internally in type theory with the fact that there are infinitely many $X_n$’s, but let’s assume for now that we find some solution to that problem.) Then part of the definition would, quite naturally, be an assertion that $isEquiv(segal_n)$.

However, since knowing that a map is an equivalence in homotopy type theory gives us a specified homotopy inverse to it, such a definition would automatically come with composition operations $$X_1 \times_{X_0} X_1 \times_{X_0} \cdots \times_{X_0} X_1 \xrightarrow{segal_n^{-1}} X_n \xrightarrow{diag_n} X_1.$$ In other words, when we work in a system satisfying the homotopical axiom of unique choice, the difference between “algebraic” and “non-algebraic” structure vanishes: any non-algebraic structure comes with a specified algebraicization, which is algorithmically derivable from any proof that it satisfies the “non-algebraic” definition.

Posted at January 12, 2012 12:09 AM UTC


13 Comments & 1 Trackback

Re: Propositions as Some Types and Algebraic Nonalgebraicity

Hi Mike,

One of the deficiencies of pure type theory (the props-as-types view) is that its handling of real numbers is poor, from a computational point of view.

Computationally, you can think of a real number as a stream of rationals forming a Cauchy sequence, quotiented in some appropriate way. This construction works, and the theorems we want to prove generally work quite nicely (as Bishop shows in his book).

However, speaking as a computer programmer, it would be nice to have an operation that computes an approximation of a real number. For example, we may wish to print the answer after doing a computation! That is, I want to find rationals which are close to any given real number. This is easy enough to show in constructive set theory, since we can easily prove:

$$\forall x \in \mathbb{R},\ \epsilon \in \mathbb{Q}.\; \exists y \in \mathbb{Q}.\; |x - \iota(y)| \leq \iota(\epsilon)$$

That is, for every real $x$ and rational $\epsilon$, there is a rational that is within $\epsilon$ of $x$. (I am using $\iota \colon \mathbb{Q} \to \mathbb{R}$ to denote the embedding of the rationals into the reals.) Basically we can just chase down the Cauchy sequence until we get a rational that’s good enough.

However, this proof cannot be turned into a term of the corresponding $\Pi\Sigma$ type of Martin-Löf type theory: $$\Pi x : \mathbb{R},\ \epsilon : \mathbb{Q}.\; \Sigma y : \mathbb{Q}.\; |x - \iota(y)| \leq \iota(\epsilon)$$ The reason is that such an inhabitant of this type must be a function, and so must give the same rational, no matter which Cauchy sequence was used to represent the given real. This is really obnoxious!
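To spell out the obstruction, here is a rough Lean 4 sketch; `CauchyReal` and `approx` are hypothetical names, and the Cauchy condition itself is omitted:

```lean
-- A real number presented by a Cauchy sequence of rationals
-- (the Cauchy condition is left out of this sketch).
structure CauchyReal where
  seq : Nat → Rat

-- On *representatives*, approximation is trivial: read off a
-- term far enough along the sequence.
def approx (x : CauchyReal) (n : Nat) : Rat :=
  x.seq n

-- The problem: approx is not invariant under “same limit.”
-- Two sequences representing the same real can differ at every
-- index, so approx cannot descend to the quotient; this is
-- exactly the failure described above.
```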

Can HoTT give a more helpful account of computing with higher-type objects like real numbers?

Posted by: Neel Krishnaswami on January 12, 2012 9:00 AM | Permalink | Reply to this

Re: Propositions as Some Types and Algebraic Nonalgebraicity

Neel, I’m really glad you asked that question!

To give a little more background, for any bystanders who may be confused: the issue is that any function out of $\mathbb{R}$ which is definable in constructive type theory must be continuous. One way to think of this is that anything definable directly in type theory must be computable, since type theory is essentially a programming language—and any “computable function” of reals must be continuous, since it can use only finitely much input (e.g. finitely many decimal places) to produce any given finite amount of output. Another way to think of it is that type theory admits models in arbitrary toposes, and there are toposes in whose internal intuitionistic logic “Brouwer’s theorem”, that all functions on the reals are continuous, holds (such as the topos of sheaves on $\mathbb{R}$).

This sometimes annoys classical mathematicians when it gets mentioned, since they expect any “foundation for mathematics” to be able to define discontinuous functions. But actually, it just means that type theory is more expressive than traditional foundations: you can always add excluded middle or choice as axioms and use them to define all the functions you want, and prove things about them mathematically to your heart’s content. The only thing is that functions defined in that way won’t compute, in the sense that you won’t be able to write “$f(3)$” and have the computer churn until it gives you an answer. That’s obviously an unreasonable thing to expect when $f$ is an arbitrary mathematical function, since it might not be computable! The virtue of type theory is that if $f$ is a computable function, it can be defined without using PEM or choice in such a way that the computer can compute it. This is a feature which a traditional foundation like set theory lacks entirely.

From this point of view, one might then say, why is it a problem that the “rational approximation” function can’t be defined computably? Why not just define it using PEM and AC? The answer, I believe, is that Neel is coming from the perspective that we want to write computer programs in type theory (such as Coq or Agda), and compile them to things that actually run. Hence the need to print the answer (or, more precisely, an approximation of the answer) after doing a computation!

I’ve been wondering about this question ever since Russell O’Connor brought it up on the Coq-club mailing list a couple of months ago. His argument was that this means we should consider the real numbers as a setoid, i.e. the type of Cauchy sequences equipped with the equivalence relation of “converging to the same thing”, rather than actually taking the quotient. That way, we are free to also define functions which don’t respect the equivalence relation (and hence can be “discontinuous”, or even ill-defined, as functions on real numbers).

It could be that that’s the right answer—that when “programming” we should use the reals as a setoid, but when doing “math” we should use them as a quotient—but if so, I wouldn’t be really happy about it. I’d like to see programming and math moving closer together, not apart.

The most interesting thoughts I’ve had so far are along the following lines. Think of the effective topos. This is a (non-Grothendieck) topos in which “all functions $\mathbb{N}\to \mathbb{N}$ are computable”, where $\mathbb{N}$ denotes the natural numbers object—and hence also all functions on the (internally defined) reals are continuous, and so on.

But it also contains the classical topos $Set$ as a subtopos (a left-exact-reflective subcategory) of “codiscrete” objects. In other words, for every object $X\in Eff$ there is a “codiscrete reflection” $\sharp X$, which is the “set of global points of $X$” equipped with the “codiscrete computational structure”, i.e. for which “all functions are computable”.

So one could describe the process of “displaying the result of a computation” inside $Eff$ as follows. Given an (automatically computable, hence continuous) function $f\colon \mathbb{R}\to \mathbb{R}$, and an input $x\colon \mathbb{R}$, we compute $f(x) \colon \mathbb{R}$, then take its image in $\sharp \mathbb{R}$. Denoting by $C$ the type of Cauchy sequences, we have an epimorphism $C \to \mathbb{R}$ (the quotient map), and therefore also an epimorphism $\sharp C \to \sharp \mathbb{R}$. But codiscrete objects satisfy the axiom of choice (since they are equivalent to the classical topos $Set$), so there is a section $\sharp \mathbb{R} \to \sharp C$. Thus, after “externalizing” with $\sharp$, we can extract a representing Cauchy sequence and use it to compute a rational approximation.

The problem is, how do we represent this computationally in practice? I don’t know enough about the internals of this sort of thing to say for sure, but I speculate that maybe one could do something of the following sort. When we write programs in a language like Coq or Agda, usually we “extract” them to a (somewhat) more traditional programming language like ML, removing at that stage all the proofs and other computationally irrelevant data, before compiling to machine code and using the resulting program. If we wanted to include quotients in our type theory, such as we would use to construct $\mathbb{R}$ from $C$, then I presume that since a traditional programming language has no such notion, for extraction we would have to extract the quotient type $\mathbb{R}$ to the type $C$ it is a quotient of. Similarly, we would extract any function defined on $\mathbb{R}$ to the corresponding function defined on $C$, and extract any $\mathbb{R}$-valued function to the corresponding $C$-valued function (since any computationally defined $\mathbb{R}$-valued function would have to be given by specifying representatives in $C$—is that right?). We would then extract $\sharp$ to the identity functor, and the section $\sharp \mathbb{R} \to \sharp C$, obtained in the type theory from AC for codiscrete types, would extract to the identity function on $C$. Thus, our “calculate and display the result” function $\mathbb{R} \to \sharp C$ would extract to exactly what we want.

Does that make any sort of sense at all? Obviously, some sort of restriction would have to be imposed on what can be extracted, since using AC for codiscrete objects we could define all sorts of uncomputable functions. In a sense, one could say that the idea is to use quotients in the type theory, but “compile them to setoids” internally.
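As a small type-theoretic rendering of the quotient-and-representatives picture above, here is a Lean 4 sketch (with `C` and `sameLimit` left abstract purely for illustration; this is not a sketch of any actual extraction mechanism):

```lean
-- The type of Cauchy sequences and the “same limit” relation,
-- taken as given for this sketch:
axiom C : Type
axiom sameLimit : C → C → Prop

-- The reals as a quotient of C:
def R : Type := Quot sameLimit

-- A function out of R must be given by a representative-level
-- function that respects the relation; extraction could then
-- implement R by C, erase the proof h, and keep f itself.
def liftToR {β : Type} (f : C → β)
    (h : ∀ a b, sameLimit a b → f a = f b) : R → β :=
  Quot.lift f h
```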

Posted by: Mike Shulman on January 12, 2012 5:42 PM | Permalink | Reply to this

Re: Propositions as Some Types and Algebraic Nonalgebraicity

With your indulgence I’d like to suggest two things:

Firstly, that the setoids of Cauchy Sequences up to Cauchy Equivalence (or of Continued Fractions up to Terminal Mutation, or …) are all ways analysts actually do think about real numbers, when they do need to think about what real numbers are; that we have to think about these things when trying to build Coq proofs about analytic theorems therefore isn’t a divergence from ordinary mathematical thought, it’s just something that has to happen when you try to understand a connected continuum in terms of such totally-disconnected things as Inductive and CoInductive Constructions seem to give.

Secondly, about digits and computations: what’s going on when you give a calculator a bunch of digits and it returns a bunch of digits to you — say, you tell it you want $e^{0.75754}$, goodness knows why, and it probably tells you it likes 2.1330255 — what’s happened can be thought of as one of two things: you may construe that the exact exponential of 0.75754 is very close to 2.1330255, or you may construe that there’s a number very close to 0.75754 whose exponential is exactly 2.1330255. Actually we know that both of these and more is true, due to the continuity and better of the exponential function; but it is, of course, wrong to say that if a number starts out “0.75754” then its exponential starts out “2.1330255”; even worse, if a number starts out 0.693, we can’t say that its exponential starts out 1.999, nor even 1.99, nor 1.9, nor 1 — just because closeness of reals doesn’t mean similarity of digit expansions; this is perhaps just another way of saying what Neel complained about, and Mike re-phrased by saying that digits are not continuous, and it shouldn’t be very alarming that they’re not. Another way of saying this, that appeals to me anyway, is that it only makes sense to ask what are the base-$b$ digits of a number if we already know whether it be $b$-adic or not; happily, it IS known that all $b$-adic numbers have non-$b$-adic logarithms, so that the bad examples like 0.693 can all be sensibly computed to any desired precision, for any desired precision of argument.

But getting back to the two ways of construing the you-calculator dialogue, (and I feel like mentioning again the paper of Braverman and Yampolsky, particularly see just what they mean by saying that certain Julia Sets are “computable”), you’ve between you constructed a point, a ten-adic point, very close to the graph of the exponential function, probably at a distance smaller than 0.0000005; and I think this is more like what is actually represented by a digit calculation: it’s a discrete process with a discrete argument, but it comes with a promise that good approximations result in good approximations. If you wanted to be very careful, you’d produce a certificate or formal proof of goodness, too.

And now I have a question of my own: Apart from Cauchy Sequences and various representations of the Cantor Set, I also think of Dedekind Cuts as being good models of the real line; certainly, given a point in the Cantor Set I can trivially produce a fine Cauchy Sequence together with proofs to witness Cauchyness; and given a Dedekind Cut, I’m sure I can produce a Cauchy Sequence directly (e.g. what’s usually called “bisection search”), or again a point in the Cantor Set; but I don’t think I can take a Cauchy Sequence and computably produce a Dedekind cut for it, in the sense that given a rational $r$ (or a ten-adic or what) I can’t discern what $\epsilon$ I need to decide whether $r$ is an under- or over-estimate. Does that sound right?

Posted by: Jesse C. McKeown on January 13, 2012 7:39 PM | Permalink | Reply to this

Re: Propositions as Some Types and Algebraic Nonalgebraicity

This is just a placeholder to let you know that I really appreciate this response. There’s a lot to think about here, and so it will take me a little while to process it. (You can estimate how much I have to think about from the fact that I was surprised to learn that Set is a reflective subcategory of Eff, though it seems “obvious” now that you’ve told me.)

Posted by: Neel Krishnaswami on January 14, 2012 11:08 AM | Permalink | Reply to this

Re: Propositions as Some Types and Algebraic Nonalgebraicity

the setoid of Cauchy Sequences up to Cauchy Equivalence … are all ways analysts actually do think about real numbers… therefore isn’t a divergence from ordinary mathematical thought

I absolutely agree that we have to think about Cauchy sequences when working with real numbers. But I don’t think this says anything about whether we should consider the real numbers to be a setoid or a quotient set. We work with quotient sets all the time in mathematics, but almost always the way we do it is by working with representatives. In fact, that’s basically the only way there is to work with quotient sets.

or you may construe that there’s a number very close to 0.75754 whose exponential is exactly 2.1330255

Well, that’s true in the case of the exponential for these particular numbers, but it won’t be true for an arbitrary function. For instance, if I ask my calculator for the exponential of -9999999 and it tells me “0”, I can’t conclude from that that there is a number very close to -9999999 whose exponential is exactly 0.

I’m not sure what this has to do with the question, though.

given a Dedekind Cut, I’m sure I can produce a Cauchy Sequence directly (e.g. what’s usually called “bisection search”), or again a point in the Cantor Set; but I don’t think I can take a Cauchy Sequence and computably produce a Dedekind cut for it

Actually, I think that’s backwards; constructively, the Cauchy reals inject into the Dedekind reals but not every Dedekind real is necessarily a Cauchy real. I think the problem is that to produce a Cauchy sequence from a Dedekind cut with “bisection search”, you need some sort of decidability for the Dedekind cut. Either LEM or countable choice suffices to show that every Dedekind real is a Cauchy real. But in the other direction, you can define $(L,R)$ where $L$ is the set of all rational numbers that the Cauchy sequence is eventually above, and $R$ those that it is eventually below.
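One way to make “eventually above/below” precise for a Cauchy sequence $(x_n)$ (my rendering of the construction just described):

$$L = \{\, q \in \mathbb{Q} \;\mid\; \exists \epsilon > 0,\ \exists N,\ \forall n \ge N,\ x_n \ge q + \epsilon \,\}, \qquad R = \{\, q \in \mathbb{Q} \;\mid\; \exists \epsilon > 0,\ \exists N,\ \forall n \ge N,\ x_n \le q - \epsilon \,\}$$

Note that no case analysis on the sequence is required, which is why this direction works constructively.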

One place the difference is discussed is in the discussion prior to D4.7.12 in Sketches of an Elephant. The nLab page Cauchy real number talks about some generalizations of Cauchy sequences that can hit all the Dedekind cuts.

Posted by: Mike Shulman on January 15, 2012 5:18 AM | Permalink | Reply to this

Re: Propositions as Some Types and Algebraic Nonalgebraicity

OK, exaggerating with the analytic functions with relatively large derivative was I… But I have to say, your calculator has no business telling you that $e^{-999999}$ is $0$; if it can’t at least come up with $10^{-434294}$, it should have the decency to say “too small”! On the other hand, my main point was that digits calculated from digits (if necessary) aren’t the most useful thing out there, that it makes more sense to think of these calculations as starting out with “is this point close to the graph of my function”; for example, there are (interesting, useful!) functions whose graphs are computable in the sense Braverman and Yampolsky discuss, but that aren’t themselves computable; Thomae’s example comes to mind.

But now, by your suggesting that “Dedekind cut” may be prefixed with “decidable”, I can tell that the unprefixed word means something to you it didn’t mean to me — to be precise, I was expecting (would have defined) Dedekind cuts to be decidable from the get-go; but I see that if you mean only a monotone map from rationals to truth values, then things will get much differenter indeed. I suppose, on the other side of things, that decidable Dedekind cuts will have much less in the way of completeness to them.

Posted by: Jesse C. McKeown on January 15, 2012 8:36 PM | Permalink | Reply to this

Re: Propositions as Some Types and Algebraic Nonalgebraicity

But I have to say, your calculator has no business telling you that $e^{-999999}$ is 0

I don’t know about you, but all the calculators I’ve ever used round results to the precision that they can express. Rounding $e^{-9999999}$ to 0 is no different from rounding $e^{0.75754}$ to 2.1330255. I just checked with the calculator application on my computer, and that’s what it does. (Actually, it gives 0 already for $e^{-30}$.)

the unprefixed word means something to you it didn’t mean to me

The standard constructive definition of Dedekind cuts can be found on the nLab.

Posted by: Mike Shulman on January 15, 2012 10:59 PM | Permalink | Reply to this

Re: Propositions as Some Types and Algebraic Nonalgebraicity

when we work in a system satisfying the homotopical axiom of unique choice, the difference between “algebraic” and “non-algebraic” structure vanishes: any non-algebraic structure comes with a specified algebraicization, which is algorithmically derivable from any proof that it satisfies the “non-algebraic” definition.

That’s a nice observation.

I didn’t have this clear picture, but I used to wonder about what should be an example of this general observation: the “identity types are weak $\omega$-groupoids” statement.

Namely, what is remarkable here is that on the one hand we realize that HoTT types are modeled by Kan complexes, hence by non-algebraic $\infty$-groupoids. On the other hand, when we analyze a given Kan complex from within the HoTT language via its higher identity types, then it comes back to us as an algebraic weak $\omega$-groupoid.

Posted by: Urs Schreiber on January 12, 2012 9:19 AM | Permalink | Reply to this

Re: Propositions as Some Types and Algebraic Nonalgebraicity

HoTT types are modeled by Kan complexes

I would say, rather, HoTT types can be modeled by Kan complexes. But it is, indeed, interesting that passage through HoTT “algebraifies” (and even “globularizes”) the notion of $\infty$-groupoid.

Posted by: Mike Shulman on January 12, 2012 4:55 PM | Permalink | Reply to this

Re: Propositions as Some Types and Algebraic Nonalgebraicity

HoTT types are modeled by Kan complexes

I would say, rather, HoTT types can be modeled by Kan complexes.

Not sure if I understand the language issue. We say “The category $C$ is a model of the theory $T$”. So its objects do model the types of $T$. Saying this does not exclude that some other objects in some other category also model the types of $T$, does it?

But let’s not get absorbed by such kind of discussions! Let me ask a more substantial question instead:

can you see if your observation can be turned into an actual algorithm that gives us algebraic globular models for $(n,n)$-categories? (here with finite first entry so that we don’t run into the problem that we don’t know how to code infinite structures in the type theory).

I am being reminded of Todd Trimble’s algorithm that made him produce the algebraic axioms for 4-categories. It is clear that bare hands of humans won’t get much beyond that. But maybe in HoTT one can produce an algorithm that spits out these coherence diagrams for algebraic $n$-categories. By first coding a geometric definition of $(n,n)$-category (say by $n$-fold $n$-truncated Segal spaces, as you indicate, and using truncation to stop the recursion at some point).

Posted by: Urs Schreiber on January 12, 2012 9:38 PM | Permalink | Reply to this

Re: Propositions as Some Types and Algebraic Nonalgebraicity

Not sure if I understand the language issue.

Hmm, now I’m not sure if I do either. Maybe my English teacher would say that “A models B” and “B is modeled by A” are equivalent active+passive forms of the same sentence. But somehow to me, “B is modeled by A” carries the connotation that A is the standard, canonical, or only model of B, whereas “A models B” does not. Maybe I’m weird.

can you see if your observation can be turned into an actual algorithm that gives us algebraic globular models for (n,n)-categories?

Good question. I guess I would speculate that just as there is a contractible globular operad which acts on the globular tower of identity types of any type, there is a contractible n-globular operad that acts on such algebraicized Segal thingies.

Posted by: Mike Shulman on January 13, 2012 8:28 AM | Permalink | Reply to this

Re: Propositions as Some Types and Algebraic Nonalgebraicity

On the language issue: I think it’s that “Noun1 verbs noun2” very often carries the additional implicature that noun2 is the only thing which noun1 verbs, or at least the main such thing — a typical example being “whales eat krill.”

So “M models T” may suggest that “T is the main theory that M models”; while “T is modeled by M” may suggest that “M is the main structure that models T”.

Posted by: Peter LeFanu Lumsdaine on January 16, 2012 7:33 PM | Permalink | Reply to this

Re: Propositions as Some Types and Algebraic Nonalgebraicity

Ah, this post tallies very nicely with some intuitions of the participants at a workshop I’m attending in Oberwolfach, that the usual logical representation of mathematics reduces two activities to one. I’ll see if I can sell your (2) or (3.2) to them.

Posted by: David Corfield on January 13, 2012 6:40 PM | Permalink | Reply to this
Read the post Freedom From Logic
Weblog: The n-Category Café
Excerpt: Towards a way of doing homotopy type theory informally, and a new opinion regarding propositions as types.
Tracked: November 10, 2012 2:57 AM
