## February 6, 2013

### The Title of This Post is False

#### Posted by David Corfield

Continuing our enquiry into what homotopy type theory can do for the formalisation of natural language, if we could make good sense of the phenomena of self-reference, that should pique the interest of those discontent with predicate logic. We won’t be explicitly involving the homotopy aspect of the type theory here, and yet who knows whether it might assert itself, as it did last time on the subject of ‘the’.

So, let’s take a look at the classic liar paradox:

This proposition is false.

Now, what rules of type formation would allow such a proposition-as-type,

$Id_{Prop}(This \: proposition, \bot)?$

Well, section 4.14 of Aarne Ranta’s book Type-Theoretical Grammar discusses indexicals:

$\frac{A: Type \qquad a : A}{this(A, a): A}.$

So to say ‘this proposition’ we had better already have established a type of propositions, $Prop$, as well as a term of that type which we intend to refer to as ‘this proposition’. The idea of the paradox is that we have such a term. ‘This proposition is false’ is supposed to be the term of type $Prop$:

$Id_{Prop}(This \: proposition, \bot) : Prop$

But we can’t form this term without forming ‘this proposition’. We seem to need

this($Prop$, this proposition): $Prop$,

but we can’t form this since we haven’t formed ‘this proposition’. Is it as simple as that – incorrect term formation?

Is there a whiff of fixed-pointness here, ringing inductive or coinductive bells? What of the Gödel sentence, which states of itself that it cannot be proved?

Posted at February 6, 2013 10:01 AM UTC


### Re: The Title of This Post is False

This has nothing to do with type theory or homotopy-theoretic foundations. You can perform exactly the same argument in ordinary first-order logic equipped with Russell’s descriptive operator ɩ, or in a number of other formal systems. They typically resolve the paradox in exactly the same way as you did, by observing that there is no way of forming “this proposition”.

The usual attempts at self-referential propositions involve fixed-point operators. A naive attempt to define your proposition would be as the fixed point of negation “fix (λ p : Prop . ¬p)”. But now we have to argue that such a fixed point exists, which it does not, so people typically change the rules of the game (by changing the logic, or negation, or in some other way).
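The non-existence of that fixed point can be made concrete by a finite check: on the booleans, negation has no fixed point, so “fix (λ p : Prop . ¬p)” has nothing to denote there. A minimal Python sketch (the names are mine, purely for illustration):

```python
# Negation on the booleans: a fixed point would be a p with (not p) == p.
neg = lambda p: not p

# Exhaustive check over both truth values: no fixed point exists.
has_fixed_point = any(neg(p) == p for p in (False, True))
print(has_fixed_point)  # False
```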

Gödel’s argument goes through thanks to the arithmetic fixed-point theorem, where the fixed point crucially passes through a level of encoding that makes everything magically tick. Think of the encoding as a layer of encapsulation that “freezes” the infinite descent into an abyss of self-reference. A programmer would say that we “suspend the recursive call by thunking it”.

So, no homotopy type theory here folks.

Posted by: Andrej Bauer on February 6, 2013 11:12 AM | Permalink | Reply to this

### Re: The Title of This Post is False

Thanks for your thoughts. Am I detecting something more than just a ‘you don’t need type theory for this’ point in what you wrote?

Even were it the case that other resolutions are possible, still we might be interested in what type theory, which we may like for a host of other reasons, has to say about self-reference.

Posted by: David Corfield on February 6, 2013 12:12 PM | Permalink | Reply to this

### Re: The Title of This Post is False

You are detecting a slight dissatisfaction with the idea that (homotopy) type theory is the new panacea. At the very least you should put in proper historical context a type-theoretic account of standard material which has been studied very satisfactorily to death in other settings. A youngling who is just learning type theory and is (rightfully) fascinated by it might think, after reading your post, that type theory is bringing something new to the subject of self-reference and paradoxical statements. I see absolutely nothing new here.

Posted by: Andrej Bauer on February 6, 2013 1:22 PM | Permalink | Reply to this

### Re: The Title of This Post is False

If that’s the impression I gave, thanks for redressing the balance. I approach these themes as someone historically suspicious of analytic philosophy’s formalizations. Just yesterday, someone in my department gave a talk on why ‘Logic is not mathematical’, arguing to this conclusion because Fregean-style logic does not handle mass nouns, self-reference and anaphoric expressions well. In the past, I would have cheered this along, but seeing the glimmer of hope that dependent type theory may not fare too badly here, I wanted to see what was possible.

I’m not sure who your ‘younglings’ are, but I do know if they come from philosophy they are very unlikely to have heard of dependent type theory. You have only to search the main online encyclopedia in English-language philosophy to see that little is generally known.

Posted by: David Corfield on February 6, 2013 3:23 PM | Permalink | Reply to this

### Re: The Title of This Post is False

I think Andrej might be a little concerned because recursive (i.e., self-referential) types have been a central object of study in programming language semantics from its origin, and while the type-theoretic approach certainly clarifies traditional accounts of self-reference, it does not really contradict them.

A recursive type $\mu \alpha.\;A(\alpha)$ is a type where $\alpha$ is the recursive self-reference. If you place no restrictions on how the self-referential variable $\alpha$ can occur in $A(\alpha)$, it is easy to show that all types are inhabited, by showing that a fixed-point combinator $(A \to A) \to A$ is typeable. Here’s a sketch:

Let $X$ be some type, and let $A \triangleq \mu \alpha.\; (\alpha \to X)$.

Next, define $h : (X \to X) \to A \to X$ as the term $\lambda f:X \to X, a:A.\; f(a a)$. To see that this is well-typed, note that since $a$ has the type $A \triangleq \mu \alpha.\; \alpha \to X$, it also has the type $A \to X$, and so it can be applied to itself.

Now that we have $h$, we can define the fixed point combinator $C : (X \to X) \to X$ as $\lambda f.\; (h\;f)\;(h\;f)$. Again, to see that this is well-typed, note that $h\;f$ has the type $A \to X$, and so by folding the recursive type up, it also has the type $A$.

This fixed-point combinator is, essentially, a proof term for Curry’s paradox. And indeed, you’ll find that most of the paradoxes of self-reference have a corresponding fixed point combinator in the typed lambda calculus.
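In an untyped language the same construction can simply be run: self-application plays the role of unfolding the recursive type $\mu \alpha.\; (\alpha \to X)$. Here is a Python sketch of the idea (names mine; the η-expansion inside `h` delays the self-application so it works under strict evaluation, making this the Z rather than the Y combinator):

```python
# h mirrors the term (lambda a. f (a a)); the inner lambda thunks a(a)
# so that strict evaluation does not loop forever.
def fix(f):
    h = lambda a: f(lambda v: a(a)(v))
    return h(h)  # corresponds to (h f)(h f), folding the recursive type

# Usage: tie the knot for factorial without any explicit recursion.
fact = fix(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # 120
```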

Here are some salient observations about this. None of them are particularly novel, but I think the type-theoretic view makes them especially clear.

1. The fact that the Liar involves falsehood is a red herring. Curry’s paradox doesn’t care what the type $X$ is, and so the paradox goes through even in minimal logic.

2. The derivation of the fixed-point combinator from recursive types was completely syntactic. As a result, any logic which validates (a) contraction (i.e., $A \implies A \wedge A$), (b) modus ponens, and (c) unrestricted self-reference is inconsistent.

3. Inconsistency in a logic corresponds to Turing-completeness. This is what originally motivated the study of recursive types in language semantics: models of the untyped lambda calculus (which is Turing complete) would need to satisfy the isomorphism $V \simeq V \to V$.

4. However, eliminating unrestricted contraction does block Curry’s paradox. In linear logic (without exponentials), it is possible to allow unrestricted self-reference without inconsistency. See, for instance, Kazushige Terui’s Light Affine Set Theory, in which he gives a set theory using linear logic as its ambient logic, for which naive, unrestricted comprehension is sound.

Posted by: Neel Krishnaswami on February 7, 2013 10:04 AM | Permalink | Reply to this

### Re: The Title of This Post is False

Thanks for bringing this up, Neel! Personally, I think your points 3 and 4 suggest one general way in which type theory does have something new to contribute to a discussion about self-reference and paradox. (I mean “new” to someone familiar only with set theory, not “new” in any absolute sense, since type theory has been around for quite a while now.) Namely, another big difference between type theory and set theory that hasn’t come up much in recent posts is that type theory is also a programming language: every expression, and in particular every proof, can be interpreted as a program and “executed”. And if we phrase your point 3 a little differently, it says that logical inconsistency is about nontermination of proofs.

I think this idea is intuitive even from a set-theoretic perspective (though not precise). Consider Russell’s paradox $R = \{ x \,|\, x\notin x \}$. In order to decide whether $R\in R$, we ask whether $R\notin R$, which requires us to know whether $R\in R$, which prompts us to ask whether $R\notin R$, in an “infinite spiral” which will never terminate.

Moreover, as you know (but others may not), the standard way that one proves consistency of a type theory is by showing that the execution of all proofs does terminate. More specifically, one shows that

1. Each step in the execution of a program preserves its type (“preservation”);
2. If a program has not yet yielded a “value” (a canonical form of its type), then it can be executed further (“progress”); and
3. The execution of every program terminates.

Therefore, if there were a proof (i.e. a program) of type False, it could be executed until it reached a value. But the type False is the positive type with no generators, hence has no canonical forms and no values, so no such program can exist.

Of course, by Gödel’s incompleteness theorem, this proof can’t be carried out in the same type theory: you need a metatheory strong enough to carry out the proof of termination. Such proofs are generally by induction, because roughly speaking, the way to prove termination is to assign some kind of “invariant” to each program in such a way that each execution step decreases the invariant with respect to some well-founded ordering. (I think this gives a nice “explanation” of why “logical strength is measured by the length of the well-founded inductions we can carry out”, which is roughly the same sort of thing that ZF-theorists have discovered to be true in using large (well-ordered) cardinals as a yardstick for consistency strength.)

One way to break such a consistency proof is to introduce “axioms” that don’t compute. This marks an important stylistic difference between type theory and set theory: in set theory where everything is already phrased in terms of axioms, one is free (from a technical perspective) to introduce arbitrary new axioms, with only the problem of convincing the reader that they are reasonable. In type theory, by contrast, one generally avoids asserting “axioms” since they cause the execution of proofs as programs to get stuck. Instead, we extend type theories by introducing new type-forming operations, which are always expected to adhere to certain patterns ensuring that programs can still be executed. (This is one of the biggest things we don’t understand yet about homotopy type theory: how to phrase univalence and higher inductive types in this sort of way.)

Another way to break such a consistency proof is, as you said, to introduce constructions that cause nontermination. Recursive types are one of these. Another is a self-containing universe “$Type:Type$”, which leads to Girard’s paradox (a type-theoretic version of Burali-Forti’s paradox) and Coquand’s paradoxes (type-theoretic versions of the Russell–Cantor paradox).

I think all this suggests a different philosophical perspective on self-reference and inconsistency, namely:

The problem is not self-reference; the problem is non-termination.

In other words, the devices we use to ensure consistency, such as replacing general recursive types with inductive and coinductive ones, or replacing a self-containing universe with a hierarchy of universes ($Type_0:Type_1$, $Type_1:Type_2$, …), should be viewed merely as technical devices to ensure termination, not fundamental or conceptual restrictions on the theory.

In particular, there are plenty of “self-referential” programs/proofs which are perfectly okay (in that their execution terminates), but whose termination is not “detected” by these devices. It’s well-known that there are general recursive programs which terminate on any input, but are not describable using the sort of higher-order primitive recursion allowed by an ordinary natural numbers type. (Indeed, the Halting Problem implies that this must be the case.) And the examples of NF and linear set theory show that a self-containing universe is not intrinsically inconsistent either; by imposing different restrictions on the allowable proofs about a self-containing universe, we can still ensure termination/consistency.

In particular, we can resolve the “problem of universes” in mathematical foundations, in a way which I find particularly attractive as a category theorist. Namely, we can believe (at least at a philosophical level) that there “really is” a set of all sets, or a type of all types, or a category of all categories. We just have to ensure somehow that when talking about these things, we only write down terminating proofs.

One convenient way to do this, of course, is by assigning a number to each reference to the universe, in such a way that whenever we use the fact that the universe contains itself, the “containing” copy has a larger number. From a practical perspective, this is just universe polymorphism with typical ambiguity. But philosophically it is different: rather than the universe levels “really being there” despite our desire not to have to talk about them, there is “really only one universe” while we are using a technical numbering device to ensure that our proofs terminate. The numbers assigned to each universe occurrence are not an intrinsic part of the proof, any more than the well-founded invariants assigned to different parts of a computer program by an automated termination-checker are an intrinsic part of the code.

Posted by: Mike Shulman on February 8, 2013 12:11 AM | Permalink | Reply to this

### Re: The Title of This Post is False

“The fact that the Liar involves falsehood is a red herring. Curry’s paradox doesn’t care what the type X is, and so the paradox goes through even in minimal logic.”

Is the statement

This statement is true.

a paradox? Can’t you just say that the statement is true? If it is not a paradox, then your system should allow “this statement is true” while not allowing “this statement is false”.

Posted by: Jeffery Winkler on February 12, 2013 10:12 PM | Permalink | Reply to this

### Re: The Title of This Post is False

It’s not a paradox, but it also doesn’t have a definite truth value. You’re free to make it either true or false, without producing a contradiction.

Posted by: Walt on February 12, 2013 10:58 PM | Permalink | Reply to this

### Re: The Title of This Post is False

If you have contraction, modus ponens, and unrestricted self-reference, you will be able to give a proof of “this statement is true”, because you can give a proof of the derivability of any proposition. The system is inconsistent!

To say something more useful than that, you need to restrict one of those three features. The usual thing people do is to keep contraction and modus ponens, and limit the kinds of self-referential expressions that are allowed.

The general recipe for doing this is to observe that in a proposition $\mu \alpha.\; A(\alpha)$, we are essentially taking the fixed point of a function on truth values $\lambda \alpha.\; A(\alpha)$. This means that you can take your favorite fixed point theorem from mathematics, and turn it into a restriction on the kinds of recursive definition that you want to allow.

Logicians tend to turn by reflex to the Knaster-Tarski theorem, which says that any monotone functional $f : L \to L$ on a complete lattice $L$ has a set of fixed points which form a complete lattice. (In particular, $f$ has a least and a greatest fixed point.) Now:

1. Note that the booleans are a complete lattice (placing true above false, say),
2. “this statement is true” corresponds to the proposition $\mu \alpha.\;\alpha$, and the identity function $\lambda \alpha.\;\alpha$ is trivially monotone,
3. and therefore $\mu \alpha.\;\alpha$ has a least and a greatest fixed point.

If you choose to interpret $\mu$ as a least fixed point, then the truth value of $\mu \alpha.\;\alpha$ is false. If you choose to interpret it as a greatest fixed point then its truth value is true. Both of these are consistent choices, just as Walt notes.

If your model of truth values is richer than the booleans, you might find yourself using other fixed point theorems (such as Banach’s theorem, which comes up a lot in temporal logic).

Posted by: Neel Krishnaswami on February 15, 2013 11:22 AM | Permalink | Reply to this

### Re: The Title of This Post is False

How would we encode the infinite family of sentences:

$(S_i)$: for all $k \gt i$, $S_k$ is false

Posted by: Stuart Presnell on February 6, 2013 1:01 PM | Permalink | Reply to this

### Re: The Title of This Post is False

No, it does not help at all. Yablo’s paradox just obscures the self-reference. While each statement in the sequence is not self-referential, the sequence itself is.

Posted by: Andrej on February 6, 2013 1:23 PM | Permalink | Reply to this

### Re: The Title of This Post is False

I like Yablo's Paradox, because it makes clear what self-reference really means. (Or maybe, with Yablo, we think that it doesn't involve self-reference, in which case we see what is really important instead.)

Everybody agrees that

$S_1$: $S_1$ is false.

is self-referential. Not much harder is that

$S_1$: $S_2$ is true.

$S_2$: $S_1$ is false.

is self-referential. And of course

$S_1$: $S_2$ is true.

$S_2$: $S_3$ is true.

$S_3$: $S_1$ is false.

is self-referential. And so on. So a loop of any length is self-referential.

Now, Yablo's paradox takes a limit of these paradoxes that has no loops. But it is still ill-founded. An ill-founded finite relation must have loops, but an ill-founded infinite relation need not have them.

So to avoid self-reference (or whatever you want to call this feature of potential paradox), you need the reference relation between sentences (where $S$ is related to $T$ iff $S$ is referred to by $T$) to be well-founded.

(This is potentially very interesting, because much of proof theory is about which proof systems are able to prove which relations are well-founded.)
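For finite families of sentences the criterion is even decidable: the reference relation is well-founded iff its graph has no cycle, which a depth-first search detects. A Python sketch (the encoding of sentences as a dict of reference lists is my own; for infinite families, as Yablo shows, absence of cycles is no longer enough):

```python
# refs[s] lists the sentences that sentence s refers to.
def is_well_founded(refs):
    # A finite relation is well-founded iff its graph has no cycle.
    WHITE, GREY, BLACK = 0, 1, 2
    color = {s: WHITE for s in refs}

    def visit(s):
        if color[s] == GREY:
            return False          # back edge: a reference loop
        if color[s] == BLACK:
            return True
        color[s] = GREY
        ok = all(visit(t) for t in refs[s])
        color[s] = BLACK
        return ok

    return all(visit(s) for s in refs)

liar = {"S1": ["S1"]}                            # S1: "S1 is false"
loop = {"S1": ["S2"], "S2": ["S1"]}              # two-sentence loop
chain = {"S1": ["S2"], "S2": ["S3"], "S3": []}   # well-founded chain
print(is_well_founded(liar), is_well_founded(loop), is_well_founded(chain))
# False False True
```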

Posted by: Toby Bartels on June 25, 2013 3:43 AM | Permalink | Reply to this

### Re: The Title of This Post is False

I wrote in small part:

Sorry, it's not actually a limit of the ones that I wrote down, but that's not essential to the point.

Posted by: Toby Bartels on June 25, 2013 8:45 PM | Permalink | Reply to this

### Re: The Title of This Post is False

On a similar topic, I came up with an interesting type-theoretical variant of Löb’s theorem:

Consider some sort of type theory. There is a set $\mathrm{Type}$ of types in this type theory and a dependent family of sets $\mathrm{Term} : \mathrm{Type} \to \mathrm{Set}$ for the terms of each type. Moreover, this type theory should be sophisticated enough to describe itself. Therefore, there should also be an $\mathrm{IntType} \in \mathrm{Type}$ and an $\mathrm{IntTerm}$ with $T : \mathrm{IntType} \vdash \mathrm{IntTerm} (T) : \mathrm{Set}$. Now, for any $T \in \mathrm{Type}$, there is a corresponding $\mathrm{Int} (T) \in \mathrm{Term} (\mathrm{IntType})$, and for every $t \in \mathrm{Term} (T)$, there is a corresponding $\mathrm{Int} (t) \in \mathrm{Term} (\mathrm{IntTerm} (\mathrm{Int} (T)))$. Now the theorem is this: given an $A \in \mathrm{Type}$, and given a function inside the type theory $f \in \mathrm{Term} (\mathrm{Int} (A) \to A)$, there is a term $\mathrm{fix} (f) \in \mathrm{Term} (A)$ such that $\mathrm{fix} (f)$ can be reduced to $f (\mathrm{Int} (\mathrm{fix} (f)))$.

Posted by: Itai Bar-Natan on February 6, 2013 1:42 PM | Permalink | Reply to this
