## September 27, 2011

### The Inconsistency of Arithmetic

#### Posted by John Baez

Faster-than-light neutrinos? Boring… let’s see something really revolutionary.

Edward Nelson, a math professor at Princeton, is writing a book called Elements in which he claims to prove the inconsistency of Peano arithmetic.

It’s a long shot, but I can’t resist saying a bit about it.

On the Foundations of Mathematics mailing list, he wrote:

I am writing up a proof that Peano arithmetic (P), and even a small fragment of primitive-recursive arithmetic (PRA), are inconsistent. This is posted as a Work in Progress at http://www.math.princeton.edu/~nelson/books.html

A short outline of the book is at:

http://www.math.princeton.edu/~nelson/papers/outline.pdf

The outline begins with a formalist critique of finitism, making the case that there are tacit infinitary assumptions underlying finitism. Then the outline describes how inconsistency will be proved. It concludes with remarks on how to do modern mathematics within a consistent theory.

Thanks to David Roberts and Andres Caicedo for pointing this out on Google+.

I have no idea if Nelson’s proof is correct! He has, however, done good mathematics in the past. For example, in his Ph.D. thesis he made major progress in constructive quantum field theory, by proving the self-adjointness of Hamiltonians for the $\phi^4$ theory in 1+1 dimensions (with a spatial cutoff, I believe).

His first sentence: “The aim of this work is to show that contemporary mathematics, including Peano arithmetic, is inconsistent, to construct firm foundations for mathematics, and to begin building on those foundations” reminds me of Gödel’s Theorem. Is it a fool’s quest to try to make a consistent arithmetic?

I replied, trying to avoid nuances that would confuse nonmathematicians (so give me a break):

Most logicians don’t think the problem is “making a consistent arithmetic”. Unlike Nelson, they believe the arithmetic we have now is already consistent. The problem is making a consistent system of arithmetic that can prove itself consistent.

Gödel’s theorem says that given certain technical conditions, any system of arithmetic that can prove itself consistent must be inconsistent.

So, the only way out is to develop a system of arithmetic that doesn’t obey those ‘certain technical conditions’. And since Nelson is no fool, this is what he’s trying to do.

One of those ‘certain technical conditions’ is the idea that the numbers 0, 1, 2, 3, 4, … obey the principle of mathematical induction. Namely, if

A) some property holds for the number 0,

and

B) given that this property holds for some number n, it holds for the next number n+1,

then it holds for all numbers. Nelson doubts the principle of mathematical induction, for reasons he explains in his book, so I’m sure his new system will eliminate or modify this principle.
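In symbols (my rendering, for readers who want the formula), conditions A) and B) combine into the usual induction schema:

$(A(0) \wedge \forall n\,(A(n) \implies A(n+1))) \implies \forall n\, A(n)$

where $A(n)$ is the property in question, ranging over formulas of the language.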

Needless to say, this is a radical step. But vastly more radical is his claim that he can prove ordinary arithmetic is inconsistent. Almost no mathematicians believe that. I bet he’s making a mistake somewhere, but if he’s right he’ll achieve eternal glory.

In his book Elements, in the chapter Against Finitism, Nelson explains his doubts about the principle of mathematical induction:

Induction is justified by appeal to the finitary credo: for every number $x$ there exists a numeral $\mathrm{d}$ such that $x$ is $\mathrm{d}$. It is necessary to make this precise; as someone once said, it depends on what you mean by “is”. We cannot express it as a formula of arithmetic because “there exists” in “there exists a numeral d” is a metamathematical existence assertion, not an arithmetical formula beginning with $\exists$.

Let $A$ be a formula of $P$ with just one free variable $x$ such that $\vdash_P \exists x \, A$. By the least number principle (which is a form of induction), there is a least number $x$ satisfying $A$. One might, thinking that every number is a numeral, assert that there exists a numeral $\mathrm{d}$ such that $\vdash_P A_x(\mathrm{d})$.

[Here $A_x(\mathrm{d})$ is $A$ with the free variable $x$ replaced by the numeral $\mathrm{d}$.]

But, as I learned from Simon Kochen, this does not work. The rank of a formula is the number of occurrences of $\exists$ in it. Let $B$ be the formula asserting that there is no arithmetized proof of a contradiction all of whose formulas are of rank at most $x$. Then each $B_x(\mathrm{d})$ can be proved in $P$ by introducing a truth definition for formulas of rank at most $\mathrm{d}$. But if $P$ is consistent then $\forall x B$ is not a theorem of $P$, by Gödel’s second theorem. Now let $A$ be $B \implies \forall x B$. Then $\exists x A$ is a theorem of $P$ since it is equivalent to the tautology $\forall x B \implies \forall x B$, but (if $P$ is consistent) there is no numeral $\mathrm{d}$ such that $\vdash_P A_x(\mathrm{d})$.

The finitary credo can be formulated precisely using the concept of the standard model of arithmetic: for every element $\xi$ of $\mathbb{N}$ there exists a numeral $\mathrm{d}$ such that it can be proved that $\mathrm{d}$ is equal to the name of $\xi$; but this brings us back to set theory. The finitary credo has an infinitary foundation.

The use of induction goes far beyond the application to numerals. It is used to create new kinds of numbers (exponential, superexponential, and so forth) in the belief that they already exist in a completed infinity. If there were a completed infinity $\mathbb{N}$ consisting of all numbers, then the axioms of $P$ would be true assertions about numbers and $P$ would be consistent.

It is not a priori obvious that $P$ can express combinatorics, but this is well known thanks to Gödel’s great paper on incompleteness. As a consequence, exponentiation $\uparrow$ and superexponentiation $\Uparrow$ can be defined in $P$ so that we have

(20) $x \uparrow 0 = S0$

(21) $x \uparrow Sy = x \cdot (x \uparrow y)$

(22) $x \Uparrow 0 = S 0$

(23) $x \Uparrow Sy = x \uparrow (x \Uparrow y)$

and similarly for primitive-recursive functions in general. For the schema of primitive recursion to be coherent, it is necessary that the values of the functions always reduce to a numeral, since they are defined only for 0 and iterated successors of numbers for which they have been previously defined. Finitists believe that primitive recursions always terminate; for example, that applying (15)-(18)

[namely, the axioms of Peano arithmetic, less mathematical induction]

and (20)-(23) a sufficient number of times,

$S S 0 \Uparrow S S 0 \Uparrow S S 0 \Uparrow S S 0 \Uparrow S S 0 \Uparrow S S 0 \Uparrow S S 0 \Uparrow S S 0 \Uparrow S S 0 \Uparrow S S 0 \Uparrow S S 0 \Uparrow S S 0 \Uparrow S S 0 \Uparrow S S 0 \Uparrow S S 0 \Uparrow S S 0$

reduces to a numeral. But the putative number of times these rules must be applied can only be expressed by means of a superexponential expression — the argument is circular.

The problem is not simply that some primitive recursions are too long. The problem is structural: there is a sharp division between two classes of primitive recursions. This results from work of Bellantoni and Cook; see also Bellantoni’s thesis. They indicate that their work was strongly influenced by previous work of Leivant, who subsequently gave a more elegant form to the characterization, and it is appropriate to call the result the BCL theorem.

While a number is being constructed by recursion, it is only potential, and when the recursion is complete, it is actual. What Bellantoni and Cook, and Leivant, do is restrict recursions so that they occur only on actual numbers. Then the theorem is that the class of functions computable in this way is precisely the class of polynomial-time functions. This is an astonishing result, since the characterization is qualitative and conceptually motivated, saying nothing whatever about polynomials. Bellantoni, Cook, and Leivant have revealed a profound difference between polynomial-time recursions and all other recursions. The recursions constructed by the BCL schema enjoy a different ontological status from recursions in general. In the former, recursions are performed only on objects that have already been constructed. In the latter, for example in a superexponential recursion, one counts chickens before they are hatched (and the chicks that they produce as well), all in the fond hope that this will eventually take place in the completed infinity by and by.

Not only is induction as a general axiom schema lacking any justification other than an appeal to $\mathbb{N}$ as a completed infinity, but its application to specific variable-free primitive-recursive terms lacks a cogent justification. We shall exhibit a superexponential recursion and prove that it does not terminate, thus disproving Church’s Thesis from below and demonstrating that finitism is untenable.

Note, in case it’s not obvious, that this is not a proof of the inconsistency of Peano Arithmetic. Nonetheless, exhibiting a recursion that doesn’t terminate sounds fascinating in its own right! I hadn’t been aware of this work of Bellantoni, Cook, and Leivant.
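For concreteness, the recursions (20)-(23) in the passage above can be transcribed directly into Python, reading numerals as ordinary nonnegative integers (which is exactly the completed-infinity reading Nelson questions); this is my illustration, not Nelson’s code:

```python
def up(x, y):
    """x up-arrow y per (20)-(21): x ↑ 0 = S0 and x ↑ Sy = x · (x ↑ y),
    i.e. ordinary exponentiation x**y built by iterated multiplication."""
    return 1 if y == 0 else x * up(x, y - 1)

def upup(x, y):
    """x double-up-arrow y per (22)-(23): x ⇑ 0 = S0 and x ⇑ Sy = x ↑ (x ⇑ y),
    i.e. superexponentiation: a tower of y copies of x."""
    return 1 if y == 0 else up(x, upup(x, y - 1))

print(up(2, 3))    # 8
print(upup(2, 4))  # a tower of four 2s: 2^(2^(2^2)) = 65536
```

Evaluating the sixteen-fold tower displayed in the quote by these rules is hopeless in practice, which is the substance of Nelson’s circularity complaint: the number of rule applications required can itself only be named by a superexponential expression.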

He outlines his claimed inconsistency proof later, in Section 18 — but alas, it’s too technical for me to follow. It uses the fact (or claim) that Robinson arithmetic “proves the quasitautological consistency of its relativization”. It uses the “Hilbert–Ackermann theorem” stating that “a quasitautologically consistent open theory is consistent”. It also uses Chaitin’s incompleteness theorem, and Kritchman and Raz’s “stunning proof without diagonalization or self-reference of Gödel’s second incompleteness theorem”. The only bit of this I understand is Chaitin’s incompleteness theorem.

Can anyone take a stab at explaining some of these ideas? You can see most of the proof sketch in his outline.

I should admit that Nelson and I had the same Ph.D. thesis advisor, so I probably take his ideas more seriously than if he were some random unknown guy. On the other hand, he turned me down when I asked him to supervise my undergraduate thesis, so it’s not like we’re best buddies or anything.

Posted at September 27, 2011 6:59 AM UTC

TrackBack URL for this Entry:   http://golem.ph.utexas.edu/cgi-bin/MT-3.0/dxy-tb.fcgi/2441

### Re: The Inconsistency of Arithmetic

Over on Google+, Terry Tao writes:

I have read through the outline. Even though it is too sketchy to count as a full proof, I think I can reconstruct enough of the argument to figure out where the error in reasoning is going to be. Basically, in order for Chaitin’s theorem (10) to hold, the Kolmogorov complexity of the consistent theory T has to be less than $\ell$. But when one arithmetises (10) at a given rank and level on page 5, the complexity of the associated theory will depend on the complexity of that rank and level; because there are going to be more than $2^\ell$ ranks and levels involved in the iterative argument, at some point the complexity must exceed $\ell$, at which point Chaitin’s theorem cannot be arithmetised for this value of $\ell$.

(One can try to outrun this issue by arithmetising using the full strength of $\mathrm{Q}_0^*$, rather than a restricted version of this language in which the rank and level are bounded; but then one would need the consistency of $\mathrm{Q}_0^*$ to be provable inside $\mathrm{Q}_0^*$, which is not possible by the second incompleteness theorem.)

I suppose it is possible that this obstruction could be evaded by a suitably clever trick, but personally I think that the FTL neutrino confirmation will arrive first.

Posted by: John Baez on September 27, 2011 8:44 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Thanks, John, for alerting me to this discussion. And I apologize for turning down your proposed senior thesis topic.

So far as I know, the concept of the “Kolmogorov complexity of a theory”, as opposed to the Kolmogorov complexity of a number, is undefined. Certainly it does not occur in Chaitin’s theorem or in the Kritchman-Raz proof.

I work in a fixed theory Q_0^*. As Tao remarks, this theory cannot prove its own consistency, by the second incompleteness theorem. But this is not necessary. The virtue of the Kritchman-Raz proof of that theorem is that one needs only consider proofs of fixed rank and level, and finitary reasoning leads to a contradiction.

Posted by: Edward Nelson on September 28, 2011 5:34 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

The obvious way to define the complexity of a recursive theory is to say it is equal to the complexity of the number which codes it.

Posted by: Monroe Eskew on September 28, 2011 6:13 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Let me try to clarify one last time. The one-line summary is that your assertion that one is working in a fixed theory $Q_0^*$ is incorrect; the argument works instead in multiple restricted versions of this theory, and this makes a huge difference. More on this below.

Here is the statement of Chaitin’s theorem:

(1) Given a theory $T$, there exists an $l$ with the property that, if $T$ is consistent, then there does not exist an $x$ such that $T$ can prove $K(x)\gt l$.

The key point here is that $l$ depends on $T$. (This is not a major concern in either Chaitin’s work or Kritchman-Raz’s work, who really are working just with a single theory $T$, but is absolutely critical in your argument, due to the multiple restricted versions of $Q_0^*$ that are in play.) To emphasise this, let me write $l(T)$ instead of $l$, thus:

(2) Given a theory $T$, there exists an $l(T)$ with the property that, if $T$ is consistent, then there does not exist an $x$ such that $T$ can prove $K(x)\gt l(T)$.

The dependence of $l$ on $T$ is apparent in the textbook proof of Chaitin’s theorem, including the one given in Nelson’s outline; one has to build a Turing machine of length less than $l$ that is capable of checking proofs in $T$, and so $l$ needs to exceed the number of bits needed to build a proof verifier in $T$. I’m going to call this the Kolmogorov complexity of $T$; this isn’t a term that is, strictly speaking, in the literature, but I think it is a clear analogue to the notion of a Kolmogorov complexity of a number.

So, $l(T)$ depends on $T$. Now, which $T$ is being used to define $l(T)$? From what you have said, it seems that you would like to take $T$ to equal $Q_0^*$, so that $l$ is $l(Q_0^*)$. As such, Chaitin’s theorem (2) says

(3) If $Q_0^*$ is consistent, then there does not exist an $x$ such that $Q_0^*$ can prove $K(x) \gt l(Q_0^*)$.

However, we cannot use this instantiation (3) of Chaitin’s theorem in your argument, because (as we both agree) we cannot use the hypothesis that $Q_0^*$ is consistent. Instead, it appears that your argument is trying to use the assertion

(4) If $Q_0^*$ at a given rank $\rho$ and level $\lambda$ is consistent, then there does not exist an $x$ such that $Q_0^*$ at that rank $\rho$ and level $\lambda$ can prove $K(x) \gt l(Q_0^*)$.

However, (4) is not a valid case of Chaitin’s theorem (2). Instead, Chaitin’s theorem gives

(5) If $Q_0^*$ at a given rank $\rho$ and level $\lambda$ is consistent, then there does not exist an $x$ such that $Q_0^*$ at that rank $\rho$ and level $\lambda$ can prove $K(x) \gt l(Q_0^*(\rho,\lambda))$,

where I am using $Q_0^*(\rho,\lambda)$ to denote the restriction of $Q_0^*$ to proofs of rank at most $\rho$ and level at most $\lambda$.

The difficulty is that the quantity

$l( Q_0^*(\rho,\lambda) )$

can be much larger than the quantity

$l( Q_0^* ),$

despite the fact that the former theory is a restricted subtheory of the latter.

This is because in order to build a proof verifier for $Q_0^*$ at rank $\rho$ and level $\lambda$, one must at some point encode $\rho$ and $\lambda$. (An unrestricted proof verifier for all of $Q_0^*$ is useless for this purpose unless one is allowed to assume that $Q_0^*$ is consistent.) Thus (barring some unusual encoding trick), the length of the proof verifier must at least equal the Kolmogorov complexity of $\rho$ or $\lambda$. But in the argument, one is playing with at least $2^{l(Q_0^*)+1}$ different values of $\rho$ and $\lambda$, so the Kolmogorov complexity of $\rho$ or $\lambda$ is guaranteed to exceed $l(Q_0^*)$ for at least one such choice of $\rho$ or $\lambda$. As such, (5) does not necessarily imply (4).
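A crude sketch of this encoding point (entirely my own illustration; `rank`, `level`, and `check` are hypothetical helper names, and real proof verifiers are of course not one-liners): a verifier restricted to given bounds must spell those bounds out somewhere in its own source, so its length grows with the number of digits needed to write them down.

```python
def make_restricted_verifier_source(rho, lam):
    # The bounds rho and lam appear literally in the generated program text,
    # so the source length grows with the digits needed to write them.
    # (rank, level, check are hypothetical helpers, not defined here.)
    return (
        "def verify(proof):\n"
        f"    return rank(proof) <= {rho} and level(proof) <= {lam} and check(proof)\n"
    )

short = make_restricted_verifier_source(3, 2)
long_ = make_restricted_verifier_source(10 ** 100, 2)
print(len(long_) - len(short))  # 100: a googol takes 100 more digits than 3 does
```

So, barring a clever encoding, a description of $Q_0^*(\rho,\lambda)$ is at least as long as a description of $\rho$ and $\lambda$ themselves.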

Basically, I think the conceptual error here is to believe that the quantity $l=l(T)$ provided by Chaitin’s theorem is monotone in the sense that if a given $l$ works for a theory $T$, then it would also work for all restricted subtheories of $T$. This is not the case, because a subtheory can in fact be much more complicated than the original theory in the sense that it requires a much longer proof verifier.

Posted by: Terence Tao on September 28, 2011 7:37 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

I agree with all your comments up to (3). But I don’t use (3). I use the Chaitin machine, or proof verifier, for the fixed theory Q_0^*. If \mu = 1 in the surprise examination paradox, then the Chaitin machine finds proofs of both K(x)>\ell and K(x)\le\ell *at a specific rank and level*. Then by finitary reasoning, this gives a quantifier-free proof of 0\ne0 in Q_0, and Q_0^* proves that this is impossible. And so on for higher values of \mu.

So I disagree with your parenthetical statement that “An unrestricted proof verifier for all of Q_0^* is useless for this purpose unless one is allowed to assume that Q_0^* is consistent.” It is useful because the proofs in the Kritchman-Raz argument are all of specific rank and level, together with the very special fact that Q_0^*, which is locally relativizable in Q_0, proves the quasitautological consistency of Q_0. The Hilbert-Ackermann algorithm plays an essential role.

I am engaged in writing up a much more detailed outline. This will take a few days, but then it will be posted at http://math.princeton.edu/~nelson/papers/inconsistency.pdf

Posted by: Edward Nelson on September 28, 2011 8:46 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

It is true that if one takes Chaitin’s theorem as a black box, all the proofs in the Kritchman-Raz argument are bounded in rank and level. But in the proof of Chaitin’s theorem, if one is using an unrestricted proof verifier, then the Chaitin argument can create proofs of unbounded rank and level, at which point one needs consistency of the full theory $Q_0^*$ and not just a bounded-rank-and-level restriction thereof, in order to derive a contradiction.

Suppose for instance that $\mu=1$ (I am omitting bars for sake of readability). I accept that the Kritchman-Raz argument then shows that there is a $\xi$ for which there is bounded rank-and-level proof of $K(\xi) \gt \ell$, where we will be taking $\ell = \ell(Q_0^*)$.

The next step, as I understand your argument, is to try to run Chaitin’s argument to find an $\eta$ for which

(1) there is a bounded rank-and-level proof that $K(\eta) \leq \ell$; and

(2) there is a bounded rank-and-level proof that $K(\eta) \gt \ell$,

thus contradicting (7) in your outline. Note that this $\eta$ will likely be different from $\xi$.

Now, how does Chaitin find this $\eta$? He builds a program to search for pairs $(\eta, \pi)$ such that $\pi$ is a proof that $K(\eta) \gt \ell$, returning $\eta$ when the first such pair is found. But here it becomes crucial to decide whether this program is going to use an unrestricted proof verifier (so that it accepts proofs of unbounded rank and level) or a restricted one (so that it only accepts proofs of rank at most $\rho$ and level at most $\lambda$).

If you use an unrestricted proof verifier, then one obtains (1), but does not obtain (2); instead, one only obtains

(3) there is a proof (of unbounded rank and level) that $K(\eta) \gt \ell$,

and (1) and (3) do not lead to a contradiction unless one knows that the entire theory is consistent.

If you use a restricted proof verifier, then one recovers (2), but one loses (1), for the reason I pointed out in my previous comment.

In either case, it does not appear to me that one can obtain a contradiction.

Posted by: Terence Tao on September 29, 2011 8:13 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

You are quite right, and my original response was wrong. Thank you for spotting my error.

I withdraw my claim.

Posted by: Edward Nelson on October 1, 2011 1:39 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Probably a stupid question, but if it were true that Peano arithmetic is inconsistent, then what did Gerhard Gentzen do wrong? I thought he showed the consistency of the Peano axioms (but with methods outside of Peano arithmetic)…

Posted by: wolfgang on September 27, 2011 2:52 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Gentzen uses induction on $\epsilon_0$ (on a restricted class of formulae) to prove the consistency of PA. PA is essentially induction on $\omega$ on an unrestricted class of formulae.

Posted by: Tom Ellis on September 27, 2011 3:23 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Gentzen proved the consistency of Peano arithmetic using some version of arithmetic plus an additional principle called ‘induction up to $\epsilon_0$’. Suppose the version of arithmetic he used were Peano arithmetic. (That would certainly work.) If Peano arithmetic were inconsistent, Peano arithmetic plus induction up to $\epsilon_0$ would also be inconsistent. In that case, one could prove anything using Peano arithmetic plus induction up to $\epsilon_0$—even the consistency of Peano arithmetic.

So, that’s one possible scenario.

But this makes Gentzen’s proof look rather silly. If you use an axiom system $B$ that’s stronger than the axiom system $A$ to prove the consistency of system $A$, there’s a sense in which you’re really not gaining anything.

In fact Gentzen’s proof is not so silly. Gentzen actually proved the consistency of Peano arithmetic using a weaker version of arithmetic, called ‘primitive recursive arithmetic’, together with induction up to $\epsilon_0$. This combination is not stronger than Peano arithmetic. (Nor is it weaker.)

Thanks to Gentzen’s proof, if Peano arithmetic turned out to be inconsistent, we would immediately know that primitive recursive arithmetic plus induction up to $\epsilon_0$ is also inconsistent.

So, Gentzen wouldn’t have done anything wrong, except using an inconsistent axiom system to carry out his proof!

There’s a pretty nice discussion of these issues, focusing on other aspects, on Wikipedia. See the section “Relation to Gödel’s proof”.

Posted by: John Baez on September 27, 2011 3:36 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

I agree with this summary of Gentzen’s argument. However, the attempted proof applies to primitive recursive arithmetic (and less). So in this context $\epsilon_0$ is not relevant. Finitary reasoning, according to Nelson, is already inconsistent, and THAT is where Gentzen’s proof would go wrong.

Posted by: Daniel Mehkeri on September 28, 2011 12:54 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

A bit of expansion on my previous remarks. (I was unable to figure out how to write in math mode on this platform, unfortunately; sorry.)

Nelson’s argument is an attempt to modify the Kritchman-Raz proof of the second incompleteness theorem, which in turn is based on the Chaitin incompleteness theorem. (I recommend reading the Kritchman-Raz Notices article, by the way.)

The Chaitin incompleteness theorem is a rigorous version of the Berry paradox. Roughly speaking, it says that if a number x has Kolmogorov complexity K(x) greater than l, and T is a consistent theory, then T will be unable to prove that K(x) is actually greater than l, *provided that l is sufficiently large depending on T* (in particular, l has to be larger than the complexity of T itself). In particular, it provides a simple proof of the first incompleteness theorem. The proof proceeds by running Berry’s “first uninteresting number” argument, where “number” should be replaced by a pair (x,p) where x is a number and p is a proof (using some reasonable ordering on pairs to define “first”), and “(x,p) is uninteresting” means “p is a proof in T that K(x) > l.” This gives a number x for which both K(x) \leq l (provided l is larger than the complexity of T), and for which T can prove that K(x) > l, demonstrating the inconsistency of T.

The Kritchman-Raz argument combines Chaitin’s theorem with a version of the unexpected hanging paradox to prove the second incompleteness theorem, that a reasonable consistent theory T cannot prove its own consistency. The idea is to take the number l provided from Chaitin’s theorem, and try to count the number \mu of the numbers less than 2^{l+1}+1 that have Kolmogorov complexity greater than l. Clearly \mu is at least 1, because there are only 2^{l+1} programs of length l or less. If \mu is exactly 1, then one can run all programs of length l or less in parallel until all but one of the numbers less than 2^{l+1}+1 is outputted, thus proving that the remaining number has complexity greater than l; but this contradicts Chaitin’s theorem. Thus \mu is at least 2. But if \mu is exactly 2, one can run all the programs of length l or less in parallel until all but two of the numbers less than 2^{l+1}+1 are outputted, proving (in T) that the remaining two numbers have complexity greater than l, again contradicting Chaitin’s theorem (here we have to implicitly use the fact that T can prove its own consistency). We can continue this argument 2^{l+1}+1 times to conclude that \mu > 2^{l+1}+1, which is absurd.
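The counting step here is pure pigeonhole, and can be sanity-checked numerically (a toy tally of my own; the exact count of binary programs of length at most l is 2^{l+1} - 1, which Tao rounds to 2^{l+1}):

```python
# Pigeonhole behind the Kritchman-Raz setup: binary programs of length at
# most l can describe at most 2^0 + 2^1 + ... + 2^l = 2^(l+1) - 1 numbers,
# but {0, 1, ..., 2^(l+1)} contains 2^(l+1) + 1 numbers, so at least two
# of them must have Kolmogorov complexity greater than l.
l = 10
num_programs = sum(2 ** k for k in range(l + 1))  # 2047 for l = 10
num_targets = 2 ** (l + 1) + 1                    # 2049 for l = 10
uncovered = num_targets - num_programs
print(num_programs, num_targets, uncovered)       # 2047 2049 2
```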

What Nelson is trying to do is to modify the Kritchman-Raz argument using a theory slightly weaker than PA that he calls Q_0^*. Of course, we know that this theory cannot prove its own consistency (if it is consistent), by the second incompleteness theorem. However, the theory is hierarchical, and roughly speaking contains a nested sequence Q_1 \subset Q_2 \subset Q_3 \subset \ldots of increasingly stronger theories, such that each Q_{i+1} can prove the consistency of the predecessor Q_i. The idea is to run the Kritchman-Raz argument to show that Q_1 can prove that \mu>1, that Q_2 can prove that \mu>2, and so forth, until one shows that Q_{2^{l+1}+1} (and thus Q_0^* and PA) can prove that \mu > 2^{l+1}+1, which is absurd.

Unfortunately, in order to make this work, all of the intermediate theories Q_1,Q_2,\ldots need to have Kolmogorov complexity less than l, in order to be able to invoke Chaitin’s theorem for these intermediate theories. But there are over 2^{l+1} of these theories, and so at least one of them has to have complexity greater than l, so the argument does not seem to work.

Posted by: Terence Tao on September 27, 2011 5:29 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Thanks again, Terry—and thanks, David, for getting the TeX in Terry’s post to work, here.

There’s a lot to say, but here I’ll just say: to get TeX to work here, you must choose a “Text Filter” other than the default one when posting your article. You’ll see a little menu when you post a comment here, saying “Text Filter”. The default choice is “Convert Line Breaks”. Any of the others will let you use TeX.

If you might want to use HTML, try “Itex to MathML with Parbreaks”. If you might want to use markup language like this for italics and so on, try “Markdown with itex to MathML”. If you don’t care about HTML or markup language, either of these should be okay.

If you couldn’t get this to work, the same must be true for lots of other people. I would like Jacques Distler to change the default to one where TeX actually works.

Posted by: John Baez on September 28, 2011 1:30 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

[Let me see if I can get the tex to work – DC]

A bit of expansion on my previous remarks.

Nelson’s argument is an attempt to modify the Kritchman-Raz proof of the second incompleteness theorem, which in turn is based on the Chaitin incompleteness theorem. (I recommend reading the Kritchman-Raz Notices article, by the way.)

The Chaitin incompleteness theorem is a rigorous version of the Berry paradox. Roughly speaking, it says that if a number $x$ has Kolmogorov complexity $K(x)$ greater than $l$, and $T$ is a consistent theory, then $T$ will be unable to prove that $K(x)$ is actually greater than $l$, provided that $l$ is sufficiently large depending on $T$ (in particular, $l$ has to be larger than the complexity of $T$ itself). In particular, it provides a simple proof of the first incompleteness theorem. The proof proceeds by running Berry’s “first uninteresting number” argument, where “number” should be replaced by a pair $(x,p)$ where $x$ is a number and $p$ is a proof (using some reasonable ordering on pairs to define “first”), and “$(x,p)$ is uninteresting” means “$p$ is a proof in $T$ that $K(x) \gt l$.” This gives a number $x$ for which both $K(x) \leq l$ (provided $l$ is larger than the complexity of $T$), and for which $T$ can prove that $K(x) \gt l$, demonstrating the inconsistency of $T$.

The Kritchman-Raz argument combines Chaitin’s theorem with a version of the unexpected hanging paradox to prove the second incompleteness theorem, that a reasonable consistent theory T cannot prove its own consistency. The idea is to take the number $l$ provided from Chaitin’s theorem, and try to count the number $\mu$ of the numbers less than $2^{l+1}+1$ that have Kolmogorov complexity greater than $l$. Clearly $\mu$ is at least $1$, because there are only $2^{l+1}$ programs of length $l$ or less. If $\mu$ is exactly $1$, then one can run all programs of length $l$ or less in parallel until all but one of the numbers less than $2^{l+1}+1$ is outputted, thus proving that the remaining number has complexity greater than $l$; but this contradicts Chaitin’s theorem. Thus $\mu$ is at least $2$. But if $\mu$ is exactly $2$, one can run all the programs of length $l$ or less in parallel until all but two of the numbers less than $2^{l+1}+1$ are outputted, proving (in $T$) that the remaining two numbers have complexity greater than $l$, again contradicting Chaitin’s theorem (here we have to implicitly use the fact that $T$ can prove its own consistency). We can continue this argument $2^{l+1}+1$ times to conclude that $\mu \gt 2^{l+1}+1$, which is absurd.

What Nelson is trying to do is to modify the Kritchman-Raz argument using a theory slightly weaker than PA that he calls $Q_0^{\ast}$. Of course, we know that this theory cannot prove its own consistency (if it is consistent), by the second incompleteness theorem. However, the theory is hierarchical, and roughly speaking contains a nested sequence $Q_1 \subset Q_2 \subset Q_3 \subset \ldots$ of increasingly stronger theories, such that each $Q_{i+1}$ can prove the consistency of the predecessor $Q_i$. The idea is to run the Kritchman-Raz argument to show that $Q_1$ can prove that $\mu \gt 1$, that $Q_2$ can prove that $\mu \gt 2$, and so forth, until one shows that $Q_{2^{l+1}+1}$ (and thus $Q_0^{\ast}$ and PA) can prove that $\mu \gt 2^{l+1}+1$, which is absurd.

Unfortunately, in order to make this work, all of the intermediate theories $Q_1,Q_2,\ldots$ need to have Kolmogorov complexity less than $l$, in order to be able to invoke Chaitin’s theorem for these intermediate theories. But there are over $2^{l+1}$ of these theories, and so at least one of them has to have complexity greater than $l$, so the argument does not seem to work.

Posted by: Terence Tao on September 27, 2011 8:05 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Just one brief comment (concerning what is presumably just a thinko): Q*_0 is not “slightly weaker” than PA but vastly weaker than PA. It does not prove the totality of exponentiation, and it is relatively interpretable in Q itself. What isn’t clear to me is the relation between Q*_0 and so-called “bounded arithmetic”, or I\Delta_0. They may be the same theory, but Nelson does not describe Q*_0 that way; at least, they are closely related.

I am no expert here, but I too had general worries about the bounds….

Posted by: Richard Heck on September 27, 2011 11:09 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

For what it’s worth, this is the happiest a post on here has made me in a decent amount of time.

Posted by: Michael Blackmon on September 28, 2011 9:40 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

The following remarks from my conversation on Google+ may be helpful for people who know less logic than I do. I’ll end with a question for people who know more.

Someone said:

The part of the outline I really really don’t understand is everything to do with the formal systems $Q_0$, $Q_0^*$ (roughly) of Robinson Arithmetic (said to be weaker than first-order Peano Arithmetic).

The system $Q$ is “Robinson arithmetic”, a version of arithmetic much weaker than Peano arithmetic because it’s finitely axiomatizable and lacks the axiom schema of mathematical induction… but still strong enough that Gödel’s theorem applies! There’s a nice introduction here:

On the other hand $Q_0$ is a slight variant of $Q$, apparently introduced by Nelson. In this variant Robinson’s axiom

$x \ne 0 \implies \exists y \, S y = x$

is replaced by introducing a unary function symbol $P$ (predecessor) and two axioms:

$P0 = 0$

$x \ne 0 \implies x = S P x$

The technical advantage is that now no axioms have quantifiers! Gödel’s theorem still applies.
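
In the standard natural numbers these quantifier-free axioms are easy to check mechanically; here is a minimal sketch (my own), interpreting $P$ as the truncated predecessor:

```python
# Checking Nelson's quantifier-free replacement axioms in the standard
# model, with S as successor and P as truncated predecessor.

def S(x: int) -> int:
    return x + 1

def P(x: int) -> int:
    return x - 1 if x > 0 else 0  # P0 = 0

assert P(0) == 0                   # axiom: P0 = 0
for x in range(1, 1000):
    assert x == S(P(x))            # axiom: x != 0  ->  x = S P x
```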

I don’t understand $Q_0^*$. The star stands for some kind of “relativization” process I never learned about. Could someone explain this?

Posted by: John Baez on September 28, 2011 8:41 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

You didn’t understand anything!

Only Gödel’s 1st incompleteness theorem applies to Q.

For the 2nd incompleteness theorem to work, one needs something not much weaker than PRA.

I\Delta_0 does not prove its own consistency, for a completely different reason (see Paris & Wilkie).

Posted by: Anonymous on September 28, 2011 9:24 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

It’s neither kind nor wise to say things like “You didn’t understand anything!” when someone makes a small mistake—or even a huge one, for that matter. I hope you are not planning to become a teacher. But if you do, you’ll have to change your attitude to succeed.

If I’d said ‘Gödel’s first incompleteness theorem’ instead of ‘Gödel’s theorem’, I believe my remarks would be correct. Thanks for the other information. I’d still like someone to answer my question.

Posted by: John Baez on September 28, 2011 9:37 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Anonymous, you are mistaken. Gödel’s second incompleteness theorem does hold for every recursively axiomatized extension of $Q$.

In fact, something even stronger is known: if $T$ is a consistent recursively axiomatized extension of $Q$, and $I$ is a definable cut in $T$, then there exists a constant $k$ such that $T$ cannot prove “there is no proof of inconsistency of $T$ in $I$ with cut-rank at most $k$”. This is a result of Pudlák.

Posted by: Emil Jeřábek on September 28, 2011 6:47 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Thanks, Emil. Perhaps I caved prematurely due to Anonymous’ bullying?

The Wikipedia discussion page for Robinson arithmetic is somewhat helpful. Someone named CBM wrote:

I noticed that Robinson arithmetic was the subject of some edits on the article. I was interested in that subject earlier this year, and when I looked into it this is what I found:

• Of course Q is strong enough for the standard proof of the first incompleteness theorem. This is what it was designed for, after all.
• The common belief among experts is that Q does not prove the Hilbert–Bernays derivability conditions that are needed for the standard proof of the second incompleteness theorem. However, there is no published proof that any particular derivability condition is not provable in Q. This was discussed on the FOM email list in September 2010.
• Nevertheless, there is an alternate proof that Q does not, in fact, prove Con(Q). In this limited sense, the second incompleteness theorem holds for Q. I believe that the first proof that Q does not prove Con(Q) is due to A. Bezboruah and J. C. Shepherdson, “Gödel’s Second Incompleteness Theorem for Q”, The Journal of Symbolic Logic Vol. 41, No. 2 (Jun., 1976), pp. 503-512.
Posted by: John Baez on September 29, 2011 2:53 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

There’s a technical point here that is obvious to experts but which may be worth stating explicitly because the experts sometimes forget to mention it. Namely, before discussing whether Q can prove Con(Q), we should ask what we even mean by “Con(Q)”. If you’re working with a sufficiently strong system, like primitive recursive arithmetic (PRA), this is pretty clear, because PRA is strong enough to prove all the basic facts you want to prove about manipulating strings to produce proofs. Q, however, is so ridiculously weak that you can write down a vast array of different statements that, in PRA, are all provably equivalent to Con(Q), but which are not equivalent if all you’re allowed to assume is Q. So we have a “will the real Con(Q) please stand up?” problem.

This issue is of course mentioned in the paper by Bezboruah and Shepherdson. What they show is that for a particular natural choice of “Con(Q)”, Q can’t prove Con(Q). This is an interesting technical result but it’s not all that interesting philosophically; Q can’t really be expected to “prove its own consistency” since it’s too stupid to even know what consistency really means.

Posted by: Timothy Chow on September 30, 2011 3:17 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

IIRC, Q^*_0 is close to I\Delta_0, a very weak theory. It can be interpreted in Q. I think for the sake of the argument you can ignore Q^*_0 and replace it with something like I\Delta_0. Stronger theories like S_2 are also interpretable in Q, but they are still quite weak, nothing like I\Delta_0+EXP or PA.

What Nelson means by relativization is that he starts with Q, defines an interpretation of a slightly stronger theory and a formula that defines a cut in each model of Q that would be the model of the stronger theory. The word relativization here comes from the fact that we are relativizing formulas to those cuts (e.g. if the formula defining the cut is C, we replace quantifiers like \forall x with \forall x \in C).
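
The transformation Kaveh describes can be sketched as a small syntactic operation on formulas (a toy representation of my own, with formulas as nested tuples and the cut predicate named C):

```python
# Relativizing every quantifier to a cut C: "forall x. phi" becomes
# "forall x. (C(x) -> phi)", and "exists x. phi" becomes
# "exists x. (C(x) and phi)".

def relativize(formula, cut="C"):
    """Formulas are atomic strings, quantifier tuples
    ("forall"/"exists", var, body), or (connective, sub, ...) tuples."""
    if isinstance(formula, str):          # atomic formula: unchanged
        return formula
    head, *rest = formula
    if head in ("forall", "exists"):
        var, body = rest
        guard = f"{cut}({var})"
        conn = "implies" if head == "forall" else "and"
        return (head, var, (conn, guard, relativize(body, cut)))
    # Connectives: recurse into each subformula.
    return (head, *(relativize(f, cut) for f in rest))
```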

It seems that the main result needed for the argument is that Q^*_0 can prove a form of its own consistency.

Posted by: Kaveh on September 28, 2011 12:02 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Q^* can in fact prove a *form* of its own consistency. (This is also discussed in chapter 7 of Sam Buss’s thesis for slightly stronger theories like S^i_2, which are still interpretable in Q. These theories are much weaker than PA.)

As far as I understand from his outline, a very high-level description of Nelson’s argument is as follows:

1. Q^* proves that Q^* is ‘consistent’.

(This is a theorem from his previous work, Predicative Arithmetic. It can also be proven in PRA.)

2. PRA can prove the KR result for Q^* (with consistency replaced with this weaker notion of ‘consistency’). So PRA thinks that Q^* cannot prove its own ‘consistency’.

(This seems to be what is new.)

3. PRA proves two contradictory results, therefore PRA is inconsistent.

Posted by: Kaveh on September 28, 2011 11:30 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

This is a copy of a personal e-mail message I sent to Edward Nelson today.

Dear Prof. Nelson,

here is what is wrong with your proof of inconsistency of PRA. (The mistake in the proof I’m going to describe below is the same one found by Terence Tao, by the way, but I will explain it in more detail.)

Formula numbers and page numbers refer to the outline:

http://www.math.princeton.edu/~nelson/papers/outline.pdf

Obviously, the $\ell$ in Chaitin’s theorem (formula (10)) depends on the theory $T$, so I will denote it by $\ell(T)$. (By the way, this $\ell(T)$ is what Terence Tao calls the “Kolmogorov complexity” of the theory $T$.)

Setting $\ell=\ell(Q_0^*)$, we then have: if $Q_0^*$ proves $K(\xi) \gt \ell$, then $Q_0^*$ is inconsistent.

Now let’s review the argument presented at the top of page 5. Working within $Q_0^*$, assume $\mu=1$. From this we obtain $\rho, \lambda, \xi$ and a $(\rho,\lambda)$-proof of $K(\xi)\gt \ell$. Now Chaitin’s theorem yields that $Q_0^*$ is inconsistent, i.e., that $Q_0^*$ proves (say) $not(0=0)$. But (and here is the catch) Chaitin’s theorem does not yield that we have a $(\rho,\lambda)$-proof of $not(0=0)$!! So the first displayed formula on page 5 (line 7) is not correct.

Here is what is going on. Chaitin’s theorem can indeed be used to establish the implication:

there is a $(\rho,\lambda)$-proof of $K(\xi)\gt \ell$ $\implies$ there is a $(\rho,\lambda)$-proof of $not(0=0)$

but the relevant $\ell$ here is not $\ell(Q_0^*)$, but some other number $\ell(\rho,\lambda)$ which, roughly speaking, is the $\ell$ of the “theory” obtained by restricting proofs in $Q_0^*$ to $(\rho,\lambda)$-proofs. (More precisely, the relevant $\ell(\rho,\lambda)$ is defined in terms of the length of the Chaitin Turing machine that only inspects $(\rho,\lambda)$-proofs, rather than all proofs in $Q_0^*$.)

Best regards,
Daniel Tausk

Posted by: Daniel Tausk on September 29, 2011 8:26 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

I’d like to make some comments about Nelson’s reasoning other than his proof.

Nelson objects to “believing in ω as a real object” but then goes on to believe in ω. For instance, he uses Gödel’s Completeness Theorem (p. 9, Elements), which depends on the existence of ω and is infinitary reasoning par excellence. And he believes in the totality of the successor function, which is where the ontological force of Peano Arithmetic really lies.

Nelson has no trouble believing in ω. All his complaints are simply a bait-and-switch for his worries about induction. Prima facie, induction does not depend on believing in ω as a real object, as it makes a claim about the numbers that there are, and nothing whatever about *what* numbers there are. How does Nelson connect the two? By claiming that “the induction axiom schema is usually justified as follows…” (p. 11). I’m not a master of mathematical history, but the little that I know makes me doubt that the schema is or has been usually justified “as follows”. I, for one, would never think of justifying induction in this way. Perhaps a footnote or two is in order here?

Finally, I have to disagree with the poster’s “The problem is making a consistent system of arithmetic that can prove itself consistent.” There are systems of arithmetic which can prove themselves consistent. There are natural systems of arithmetic which can prove themselves consistent. The problem, if problem there is, is developing a consistent, meaningful system of mathematics (e.g. including analysis) that can prove itself consistent.

Posted by: a on September 29, 2011 12:06 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Hello again!

When you say

Nelson has no trouble believing in $\omega$.

I think he’d disagree: I think he’s an ultrafinitist. For example, on page 50 of Predicative Arithmetic he doubts the totality of the exponential function, as follows:

The intuition that the set of all subsets of a finite set is finite—or more generally, that if $A$ and $B$ are finite sets, then so is the set $B^A$ of all functions from $A$ to $B$—is a questionable intuition. Let $A$ be the set of some 5000 spaces for symbols on a blank sheet of typewriter paper, and let $B$ be the set of some 80 symbols of a typewriter; then perhaps $B^A$ is infinite. Perhaps it is even incorrect to think of $B^A$ as being a set. To do so is to postulate an entity, the set of all possible typewritten pages, and then to ascribe some kind of reality to this entity—for example, by asserting that one can in principle survey each possible typewritten page. But perhaps it is simply not so. Perhaps there is no such number as $80^{5000}$.

These doubts are difficult for most mathematicians to sympathize with—including me. However, I’m glad someone is trying to see whether a consistent and workable approach to mathematics can be developed in the face of such radical doubts. I don’t know if it’s possible.

For example, when you write:

For instance, he uses Godel’s Completeness Theorem (p. 9, Elements), which depends on the existence of $\omega$ and is infinitary reasoning par excellence.

I don’t know what Nelson would say to that.

But when you write:

He believes in the totality of the successor function, where the ontological force of Peano Arithmetic really [lies].

I think I sort of understand what he might say. In Predicative Arithmetic he wrote that there is a

difference between sitting comfortably in camp and saying “If I have climbed part way up the mountain, then I can always take one more step up the mountain”—and actually climbing a mountain, when each successive step requires an act.

In other words: just because the successor function is total, that does not, to him, imply that $\omega$ exists as a completed object. Being always able to take one more step does not mean that the infinite sequence of steps exists: for the next step to exist you have to actually take it.

The passage of Elements that I quoted earlier makes a similar point:

Finitists believe that primitive recursions always terminate; for example, that applying [the axioms of Q] and [the definition of the superexponentiation function $\Uparrow$] a sufficient number of times,

$S S 0 \Uparrow S S 0 \Uparrow S S 0 \Uparrow S S 0 \Uparrow S S 0 \Uparrow S S 0 \Uparrow S S 0 \Uparrow S S 0 \Uparrow S S 0 \Uparrow S S 0 \Uparrow S S 0 \Uparrow S S 0 \Uparrow S S 0 \Uparrow S S 0 \Uparrow S S 0 \Uparrow S S 0$

reduces to a numeral. But the putative number of times these rules must be applied can only be expressed by means of a superexponential expression — the argument is circular.
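
For concreteness, superexponentiation can be sketched (my own toy version) as iterated exponentiation, i.e. a right-associated tower; note that the quoted expression applies $\Uparrow$ itself repeatedly, giving something far larger than any single tower this little function describes.

```python
# A tower of n copies of a, built by iterating exponentiation:
# supexp(a, 0) = 1 and supexp(a, n+1) = a ** supexp(a, n).
# Even tiny inputs explode, which is exactly Nelson's point about
# "sufficiently many" rule applications.

def supexp(a: int, n: int) -> int:
    """Right-associated exponential tower of n copies of a."""
    result = 1
    for _ in range(n):
        result = a ** result
    return result

assert supexp(2, 1) == 2
assert supexp(2, 2) == 4
assert supexp(2, 3) == 16
assert supexp(2, 4) == 65536  # 2^(2^(2^2))
```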

Finally, on another note, you write:

Finally, I have to disagree with the poster’s “The problem is making a consistent system of arithmetic that can prove itself consistent.” There are systems of arithmetic which can prove themselves consistent. There are natural systems of arithmetic which can prove themselves consistent. The problem, if problem there is, is developing a consistent, meaningful system of mathematics (e.g. including analysis) that can prove itself consistent.

Okay, good point. I’m sadly ignorant of the systems of arithmetic that can prove themselves consistent—how strong they can be, etcetera.

Posted by: John Baez on September 29, 2011 1:28 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Nelson: “Perhaps there is no such number as 80^5000.” I am more sympathetic to this view than almost anyone. But by the same skeptical attitude, one should be able to argue: perhaps there is such a number as 80^5000, but there is no such number as 80^5000 + 1. Maybe one can survey the number of pages of the first, but not the second (even though I think surveying is a fairly bogus way of determining it).

“just because the successor function is total, that does not, to him, imply that ω exists as a completed object. “

I’ll grant you that Nelson doesn’t accept that ω exists as a “completed object”. I honestly don’t know the difference between completed and potential, but never mind, call me a philistine and be done with it. My claim is that Nelson is committed to thinking that all the numbers, of standard-model fame, exist (that, as I would put it, ω exists); not that ω exists “as a completed object”.

But ok, I obviously don’t know what Nelson actually thinks. I’ll refine my criticism to this: if Nelson really did think the problem is with ω, and the existence of some numbers, then he should deny the only axiom whose primary thrust is ontological - namely, that every number has a successor (which is a number). He shouldn’t just be assuming the totality of the successor function as an obvious given, because his arguments against induction are no weaker, and can be stronger, when put against the totality of the successor function. Induction is, or should be, secondary in all this.

Posted by: a on September 29, 2011 8:51 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Dear a, I think Nelson is influenced not only by purely philosophical considerations, but also by work on computational feasibility by people like Bellantoni, Cook and Leivant.

Matters are perhaps clearest when considered intuitionistically, which doesn’t change consistency strength, but offers a more direct computational reading. In a realizability interpretation of Heyting arithmetic, the successor function can be realized by an operation that requires a constant additional amount of memory. Induction, on the other hand, must be realized by a recursive definition which may use enormous amounts of additional memory.

So if you are thinking in terms of feasible computability, the successor is relatively unproblematic, whereas induction needs to be strictly controlled to be well-behaved.
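
A rough sketch of this contrast (the names and representation here are my own): the successor is one constant-space step, while a realizer for induction is a recursion whose depth grows with the number it is applied to.

```python
# Toy realizers: succ needs constant extra memory; the induction
# realizer recurses n times, so its stack depth (memory) grows with n.

def succ(x: int) -> int:
    return x + 1

def by_induction(base, step, n: int):
    """Realize induction up to n: apply `step` to the realizer for n-1."""
    if n == 0:
        return base
    return step(n - 1, by_induction(base, step, n - 1))

# Example: realizing "for every n, n + n exists" by building 2n from n.
assert by_induction(0, lambda i, acc: succ(succ(acc)), 5) == 10
```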

Posted by: Neel Krishnaswami on September 30, 2011 7:26 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

I think Nelson’s reply would be that it is induction which allows us to prove $10^{10^{10^{10}}}$ exists in a relatively small number of words, without “really” showing it exists by constructing it with the successor function. Taking successors is a concrete procedure you can do at home, but superexponential functions require too much abstract reasoning, which cannot be replaced by concrete reasoning by humans on earth, or even Vulcans for that matter. But perhaps Data could do it, no?

Posted by: Monroe Eskew on September 29, 2011 9:37 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

I thought of writing some stuff, and referring you to a paper I thought you’d find interesting. Then it occurred to me you’re the one who wrote it. I should wait to see if there’s a prize for guessing who “A” is.

As for Nelson, remember he thinks PRA, and less, is inconsistent. That’s less than Sigma_1 induction. I’m not sure why he would think it was consistent to allow full induction but only assume successor up to some medium-sized numbers. Especially if we go to second-order like I think you want to.

Posted by: Daniel Mehkeri on September 30, 2011 5:42 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Herein (in one of the answers provided) you can find a link to a paper on a weak system of arithmetic which seemingly proves its own consistency. Basically, as far as I understand it, in this system even multiplication is not total, and that in turn prevents Gödel’s full-blown argument from applying:

http://mathoverflow.net/questions/9864/presburger-arithmetic

Anyway, I do not think that is the point.

The issue at stake (not in Nelson’s PA claim, but in the ultrafinitist’s program) is:

can we emancipate mathematics from N?

In other words, can we do away with all sorts of infinity (actual or potential), without depriving ourselves of its luxury?

By this I mean the following: if we built our mathematical world from, say, the initial segment {0, 1, …, 1000000000000000}, it wouldn’t be too good (too rigid, too crisp). But if, on the other hand, we had an arithmetical universe in which there are such beasts as finite inaccessible integers (yes, you have read the oxymoron correctly, it is not a typo), that would be way more interesting. Having parts (or perhaps all) of Cantor’s paradise squeezed down to the finitary realm would be, I trust, a real garden of Eden: for mathematicians, for mathematical physicists (so they stop trying to make space-time discrete while still using calculus), and finally it would be an excellent topic for Greg Egan’s next SF novel…

Posted by: Mirco Mannucci on September 29, 2011 4:28 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

I don’t know of any advantages to studying arithmetic by restricting it to initial segments, or to a particular initial segment with a large upper bound. Clearly a skeptic should allow their possibility, but if nothing is gained by the restriction, surely it is better simply to allow for both the possibility of initial segments and the standard model. I.e., the proper attitude is neither theism nor atheism, but agnosticism.

Posted by: a on September 29, 2011 7:57 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

As I get older I seem to be getting more and more relaxed about foundational issues. I’m happy to see people formalize and explore all imaginable attitudes toward the foundations of mathematics. I feel confident that the more interesting axiom systems will eventually attract more researchers, while the less interesting ones will remain marginal. I am not eager for one system to prevail over all others… nor do I feel any desire for systems I dislike to go completely extinct.

It’s a lot like my fondness for biodiversity. I enjoy the diversity of life, and am very happy there are tigers, and would be sad for them to go extinct, even though I wouldn’t want a bunch running around in my back yard.

In particular, I’m glad there are ultrafinitists, because I suspect that only someone with views like that could be motivated to prove the inconsistency of (say) Peano arithmetic, and seek plausible strategies for doing it.

If everyone believes Peano arithmetic is consistent, and it’s not, we’re in big trouble because it’ll take us a long time to discover it. So we need a few lonely people working on the other side of this issue. I don’t think they’ll succeed, but I’m glad they’re trying. Even if they don’t succeed, there could be some interesting concepts and theorems that only they are likely to find.

Finally, I don’t expect these people to take the same ‘relaxed, balanced’ attitude that I have. I suspect that only someone with strong opinions could possibly be motivated to spend a lot of time developing ultrafinitism, or trying to prove the inconsistency of Peano arithmetic. Expecting them to share my relaxed attitude is a bit like expecting a tiger to be an environmentalist.

Posted by: John Baez on September 30, 2011 3:20 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Expecting them to share my relaxed attitude is a bit like expecting a tiger to be an environmentalist.

All the tigers I know are environmentalists. They fiercely protect their habitat. They are not vegetarians.

Posted by: Eugene Lerman on September 30, 2011 3:44 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Personally I find it very discomforting to think that ideology should be a prerequisite to finding a certain kind of mathematical result. The fact that Nelson has this ultrafinitist ideology made me initially skeptical of the result, much like one would be skeptical of studies claiming no link between cigarettes and lung cancer produced by scientists employed by tobacco companies.

The fact that the inconsistency of PA would have such far-reaching implications also shows that in this case, ideology should not even be much help. For if PA is inconsistent, then so are ZFC, supercompact cardinals, Grothendieck universes, some type theories, etc. Indeed the inconsistency of PA would imply the inconsistency of most hypotheses of mathematicians, whether or not they study logic and formal axiom systems. The fact that Nelson’s ideology drives him away from these other methods and areas puts a handicap on his search for an inconsistency.

Posted by: Monroe Eskew on September 30, 2011 6:08 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

I, in turn, am uncomfortable with these remarks, which I think go too far. For one thing, Edward Nelson is likely reading these words, since he has already participated in this thread. It might help to bear this in mind, and also bear in mind that he is obviously no fool in logic. (He has done, for example, terrific work in Internal Set Theory.)

It goes too far to suggest that his “ideology drives him away from these other methods and areas and puts a handicap on his search for inconsistency”, because he is surely aware of these other areas, and undoubtedly knows much more about them than your average bear (or tiger). You have no idea of the inner workings of his mind, which cannot possibly be divined on the basis of this particular presentation, and you cannot possibly make a reductionist claim on the extent to which ideology drives him to do anything or miss anything. (More precisely, you could, but you’d be wrong to do that.)

I agree that Professor Nelson evidently has some strong opinions, and I agree with John that it would take that type of strength to pursue such a bold and lonely undertaking. Let’s just leave it at that. The status of his alleged proof of inconsistency will be sorted out in the fullness of time.

Posted by: Todd Trimble on October 1, 2011 2:02 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

I agree with Todd. Indeed, Nelson’s recent, gracious, and timely concession demonstrates that his convictions and beliefs have not compromised his mathematical objectivity or professionalism. (Incidentally, Nelson was my logic professor when I was a graduate student.)

In my view, the matter is now closed and we can move on to discussing other topics.

Posted by: Terence Tao on October 1, 2011 3:51 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

I apologize.

Posted by: Monroe Eskew on October 1, 2011 6:54 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Let me apologize again, because I realized that the analogy with tobacco companies could be taken as questioning Nelson’s honesty. Of course I have no reason to do that. I only meant to say that in principle, when someone claims to have scientific proof of a long-held, controversial philosophical position, it seems reasonable to worry, at least a little bit, that wishful thinking could be involved, and to approach it with a bit of skepticism.

As a general principle, I do not think the aim of science should be validating prior philosophical beliefs.

As for the statement about ultrafinitism leading one away from other methods, it was inspired by John Baez’s anecdote about Catherine Hennix, who said Baez’s work on analysis rested on a foundation of sand. I extrapolated too far from this.

Posted by: Monroe Eskew on October 1, 2011 9:28 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Thanks, Monroe. I don’t wish to dwell upon the topic much longer, but in response to

As a general principle, I do not think the aim of science should be validating prior philosophical beliefs

I agree with this as general precepts go, but it’s not at all clear to me that this accurately describes what Nelson is doing. It might be that he has been pursuing a certain intuition which is hard to see clearly (an experience all mathematicians have), and that the intuition can be partly summarized in terms of philosophy (this too is a very common experience for mathematicians), but all the same he is subjecting all of it to the strict rigors of logical analysis: his response makes that perfectly clear.

It may sound corny, although I really believe it: it takes all types to make the mathematical world go round. In the final analysis, there is no absolutely right way of being, and iconoclasts are good for the overall health of mathematics.

But I don’t want to speculate any further on this; let’s move on as Terry suggests.

Posted by: Todd Trimble on October 2, 2011 12:05 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Ultrafinitists don’t believe that really large natural numbers exist. The hard part is getting them to name the first one that doesn’t.

Richard Elwes replied, pointing out a conversation between the ultrafinitist Alexander Esenin-Volpin and the master of large infinities, Harvey Friedman. Friedman challenged Esenin-Volpin to name the first number that doesn’t exist:

I raised just this objection with the (extreme) ultrafinitist Esenin-Volpin during a lecture of his. He asked me to be more specific. I then proceeded to start with $2^1$ and asked him whether this is “real” or something to that effect. He virtually immediately said yes. Then I asked about $2^2$, and he again said yes, but with a perceptible delay. Then $2^3$, and yes, but with more delay. This continued for a couple of more times, till it was obvious how he was handling this objection. Sure, he was prepared to always answer yes, but he was going to take $2^{100}$ times as long to answer yes to $2^{100}$ than he would to answering $2^1$. There is no way that I could get very far with this.

I believe that after each question Esenin-Volpin was mentally counting to the desired power of 2, to verify that it really exists.

(In other words, he was checking that it could be constructed by repeatedly applying the successor operation to the number 0.)

On a digressive personal note: I met Esenin-Volpin back when I was a grad student at MIT. He is quite a jolly fellow, despite having been imprisoned by the Soviets three times for his dissident political views.

They locked him in a mental hospital — for “pathological honesty”, according to the famous dissident Vladimir Bukovsky. According to Wikipedia,

Volpin was the first dissident in the history of the Soviet Union who proclaimed that it is possible and necessary to defend human rights by strictly observing the law.

I know his student Christer (now Catherine) Hennix much better, and had many arguments with him, as he claimed all my mathematical research was founded on sand. At the time, my work involved a lot of analysis, based on principles that he disbelieved and argued against. I found that rather upsetting. I think I could handle it better now.

I much more enjoyed talking to Henry Flynt, the “cognitive nihilist” who founded concept art, but who seems to have decided that the only really worthwhile radical act is to undermine the foundations of mathematics. He had me explain Gödel’s theorem and other results from mathematical logic, a subject he was attempting to destroy. But somehow he didn’t hold my being a mathematician against me; he’s just too nice a guy.

I’m not exactly sure how I got hooked up with all these extremists, but it’s a darn good thing I did, since I met my wife Lisa through them. They were all part of the same art scene.

Posted by: John Baez on September 30, 2011 3:56 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

The problem is not the size of the number but its information content. On a pocket calculator, you can multiply 10^30 by 10^50, but you cannot add or multiply two numbers with more than 10 digits. In real life, you can do the super-exponentiation that Edward Nelson took as an example in his book under discussion, but you cannot use a sequence of more than 10^100 digits that lacks a finite expansion rule like 0.101010… or SUM 1/n^2.

Regards, WM

Posted by: WM on September 30, 2011 7:05 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Another extremist that C.C. Hennix probably knew was Richard Stallman, who also worked in the MIT AI Lab. When most of the lab left to work for private companies, Stallman, a strong advocate of hacker culture and the free sharing of code and ideas, started the free software movement:

He founded the GNU project to write free replacements for all the Unix tools common at the time. This resulted in a huge amount of useful software unencumbered with patents and restrictive copyrights, the most famous of which are GNU Emacs and the GNU/Linux operating system (commonly referred to as just “Linux”, which drives Stallman crazy).

I was reminded of Stallman by your remarks about ideologues and extremists pushing on the boundaries of our ideas. I think Stallman is one of these, and while I do not share his level of commitment to free software, it is this very idealism that has made him a positive force in the world.

Posted by: John Huerta on October 1, 2011 12:25 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

It’s funny that you should mention Richard Stallman, because Lisa knew him pretty well when she lived in Cambridge, Massachusetts — the same town where I met Flynt, Hennix and Esenin-Volpin. For a while she was the only one doing software documentation for Lisp Machines Incorporated, a company where Stallman worked. She helped put herself through grad school that way.

Posted by: John Baez on October 2, 2011 1:28 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

“Only math nerds would call 2500 finite” Leonid Levin

Oops!

Posted by: Allan E on October 9, 2011 6:22 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Apologies, I’m spoiled with auto-linking these days. Here is the link to what can go wrong when copying from Wikipedia

Posted by: Allan E on October 9, 2011 6:28 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

John wrote: “In other words, he was checking that it could be constructed by repeatedly applying the successor operation to the number 0.”

I’m not an expert so can someone explain what all the fuss is about whether we can imagine applying the successor function repeatedly? Why can’t we pretend that we can?
Sorry for my vague question.

Posted by: kim on September 30, 2011 7:01 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Kim wrote:

I’m not an expert so can someone explain what all the fuss is about whether we can imagine applying the successor function repeatedly? why cant we pretend that we can?

Most of us mathematicians do pretend that we can. We say that “in principle” we could write a number like $80^{5000}$ as $1 + 1 + 1 + \cdots + 1$, and that’s good enough for us. Most of us go even further and work comfortably with infinite sets. It’s just an extremely small minority, the ultrafinitists, who find it unsatisfactory to say you can do something “in principle” when you can’t actually do it.

Most mathematicians either haven’t heard of ultrafinitism or consider it unworkably extreme.
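To make the scale concrete, here is a quick sketch (my own illustration, not from the comment): Python’s arbitrary-precision integers compute the decimal form of $80^{5000}$ instantly, while its unary form is hopelessly out of reach.

```python
# How big is 80**5000? Its decimal form is easy to compute, but writing it
# in unary (1 + 1 + ... + 1) would need 80**5000 strokes -- vastly more
# than the roughly 10**80 atoms in the observable universe.
n = 80 ** 5000
digits = len(str(n))
print(digits)  # -> 9516 decimal digits
```

So the “in principle” here spans a gap of thousands of orders of magnitude between what we can denote and what we can enumerate.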

Posted by: John Baez on September 30, 2011 7:43 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Every part of the physical universe that can be used for computing purposes has an information storage capacity of less than 10^100 bits. So much for unary representation, or for representing numbers with larger information content.

But a simple argument shows that, independent of that, infinite sets cannot be applied in mathematics. We only believe that we apply an infinite sequence when we write, e.g.,
0.111…
But this is only a finite formula (made of 8 symbols) that generates as many digits as we like (potential infinity). If it were an actually infinite sequence, then we could distinguish it from all expressions of the sequence
0.1
0.11
0.111

and not only from every expression of this sequence.

Regards, WM

Posted by: WM on September 30, 2011 6:46 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

I have a question about Nelson’s paper, and a stupid one at that. Please note that I’m just a student, and my area is mostly mechanics and applied maths, but more abstract areas of mathematics have always fascinated me, even though I don’t fully understand, or have the time/desire to understand, a lot of things.

I haven’t had the time to read the whole paper, but anyway - in every step leading up to the proof of inconsistency of PA, does he use only theorems that are not based on PA? In my understanding, if the proof is based on theorems that can be derived only from the axioms of PA, the proof is invalid.

Sorry if I posted something stupid, but this question has been bugging me since I stumbled onto this page.

Posted by: George Oblapenko on September 30, 2011 8:41 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Actually I think what’s important is that the proof only use theorems that are based on PA!

If you use theorems based on some system other than PA to prove the inconsistency of PA, it’s possible that the system you’re using is to blame. If it’s inconsistent, it might prove that PA is inconsistent even though PA actually is consistent.

But if you can prove within PA that PA is inconsistent, then PA is inconsistent.

You don’t have to worry that ‘the proof could be invalid’. Sure, the proof is ‘invalid’ if PA is inconsistent… but that’s fine, because in that case you’ve already won.

It’s a bit like if I said: “I’m going to prove I’m inconsistent. One of my axioms is that 1+1 = 2. Adding 3 to both sides, I conclude 2 = 7. That’s a contradiction, and from a contradiction anything follows. Therefore, 1+1 is also not equal to 2. Thus, I’m inconsistent.” Now, you may complain that this argument is ‘invalid’, and it is, but it does show that I’m inconsistent.

Ain’t logic fun?

Posted by: John Baez on September 30, 2011 9:00 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Just a slight quibble here: you wrote, “if you can prove within PA that PA is inconsistent, then PA is inconsistent.” What I think you meant to say is that if, while engaging in reasoning that uses only assumptions that are no stronger than the axioms of PA, you stumble upon a contradiction, then PA is inconsistent. For then you can just mimic your argument formally to explicitly exhibit a proof of a contradiction from the axioms of PA. And once you explicitly exhibit such a formal proof of a contradiction, it doesn’t matter how you came up with it. In particular, even though you now no longer believe that your previous reasoning was “legitimate,” that doesn’t matter. PA proves a contradiction: look and see for yourself.

The reason I say I’m quibbling is that strictly speaking, exhibiting a PA-proof of “PA is inconsistent” (as opposed to a PA-proof of “1=0”) doesn’t automatically yield the inconsistency of PA. In fact, we know by Goedel’s 2nd that if PA is consistent then we can safely add “PA is inconsistent” to the axioms of PA without generating a contradiction, so in particular if “PA is inconsistent” happens to already be a theorem of PA, we’re not going to get an automatic contradiction. I think this is related to George’s concern. If we’re reasoning within PA and we stumble across the theorem “PA is inconsistent,” surely the only way we can immediately conclude that PA really is inconsistent is if we believe that the theorem is true? Yet whether the theorems we’re producing while reasoning within PA are true is precisely one of the issues in question. This worry is legitimate. That’s why Nelson is at pains to derive a contradiction using reasoning formalizable in PA, and is not content with just proving “PA is inconsistent” within PA.

Posted by: Timothy Chow on September 30, 2011 3:35 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Timothy wrote:

What I think you meant to say is that if, while engaging in reasoning that uses only assumptions that are no stronger than the axioms of PA, you stumble upon a contradiction, then PA is inconsistent.

Yes, thanks — that’s what I should have said, and if I’d been thinking carefully it’d have been what I “meant to say”. I know, or at least knew, about $PA + \neg Con(PA)$ and $\omega$-inconsistent systems. I find that since I haven’t thought about logic much for the last decade, my knowledge of details is slipping. The same is actually happening for a lot of the pure math I knew, now that I’m thinking a lot about carbon emissions, climate change, biology and so on. The old stuff pops back pretty quickly but it’s not there instantly. Accepting this with equanimity is one of the challenges of switching career directions. When I was younger the ego blow would have been too big. Now I can (just barely) take it. Thanks for being kind.

Posted by: John Baez on October 1, 2011 2:53 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

So John it seems that much of this is about self reference is this correct?

And in that wikipedia article it says:

‘Edward Nelson criticizes the classical conception of natural numbers because of the circularity of its definition’

But how can we ever ‘get started’ if we dont accept the axioms?

How difficult is it to understand Godels theorems?

Posted by: kim on September 30, 2011 2:02 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Kim wrote:

So John it seems that much of this is about self reference is this correct?

That’s an extremely vague question, because you didn’t say what ‘this’ is. You shouldn’t expect to get useful answers when you ask such vague questions. You’re forcing me to make up an interesting interpretation of the question as well as answer it. That’s a lot of work, and most people won’t bother.

(I’m only scolding you because you often do this. It turns out that asking precise questions is crucial to learning things quickly.)

But how can we ever ‘get started’ if we don’t accept the axioms?

There are many different choices of axioms for arithmetic. Nelson doesn’t accept the axioms for Peano arithmetic because he doesn’t accept the principle of mathematical induction, for reasons that he explains in the passage I quoted in this blog post.

However, there are other axiom systems that he would accept, and I suspect that the book he’s writing will discuss those.

How difficult is it to understand Gödels theorems?

Again, that’s too vague a question to elicit useful information. How difficult is it for whom? Me? You? The average citizen? I suppose you want to know how difficult it is for you, but obviously I can’t answer that.

I got the basic idea of Gödel’s incompleteness theorems by reading various pop math books when I was a kid. Later I took some mathematical logic courses in college and went through the proofs in detail twice.

The basic idea was beautiful and hard to forget once I learned it; there’s a good summary on Wikipedia so I urge you to start there. It probably has links to many other easy introductions.

There are also some technical details that take quite a while to fill in. You can either learn them, or not bother.

Posted by: John Baez on October 1, 2011 2:44 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

How difficult is it to understand Gödel’s theorems?

I don’t know what you’ve tried to read, but if you have the time and interest, you might really enjoy Hofstadter’s classic Gödel, Escher, Bach, where one of the major themes is the incompleteness phenomena. This is still largely at a pre-technical level, but there’s plenty of thought-provoking meat there nonetheless.

When you’re ready to get a bit more technical and learn about details of Gödel numberings and so forth, there’s a nice online article by David Speyer at the Secret Blogging Seminar, here, which you might enjoy.

Among the zillions of accounts out there, you might look into the publications of Raymond Smullyan, who has spent a lifetime figuring out neat and elegant ways of discussing these ideas, at both a technical and pre-technical level.

Posted by: Todd Trimble on October 1, 2011 12:32 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

I was going to suggest Gödel, Escher, Bach, which is wonderfully fun, but recently rereading it I realized that his discussion of Gödel’s theorems is so wrapped in layers of whimsy that someone who merely wants to understand the theorems and their proofs might find it rather inefficient. Ditto for Smullyan’s delightful books.

There have got to be some good, clear, unwhimsical, nontechnical introductions to the basic ideas behind Gödel’s incompleteness theorems. Unfortunately I don’t know them, except for the Wikipedia article. I could look them up but I still wouldn’t have a favorite.

Posted by: John Baez on October 2, 2011 3:35 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

The account that Hofstadter himself recommends is Nagel and Newman’s Gödel’s Proof.

In fact, he edited the new printing and wrote a foreword. I haven’t read it, but I’ve looked at it in bookstores. It looks fairly accessible.

Posted by: John Huerta on October 2, 2011 1:08 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

I would not recommend Nagel and Newman - they get many things quite wrong.

Instead, I would warmly recommend the more recent:

• Torkel Franzen, Gödel’s Theorem: An Incomplete Guide to Its Use and Abuse.

As it happens, I have reviewed it here:

http://www.ams.org/notices/200703/rev-raatikainen.pdf

Posted by: Panu Raatikainen on October 2, 2011 4:16 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Excellent! Thank you, Panu (and many thanks also to the late Professor Franzen).

Because I like direct links whenever possible, http://www.ams.org/notices/200703/rev-raatikainen.pdf.

Posted by: Todd Trimble on October 2, 2011 4:36 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Torkel Franzen’s book is one of my all-time favorites, but for someone who wants a friendly, but complete, presentation of the proof, I’d recommend Peter Smith’s book, An Introduction to Goedel’s Theorems. Franzen’s book may be too brief for a beginner, since his main goal in the book is to expose the way people have misused Goedel’s theorem, and that is therefore what most of the book is about.

Posted by: Timothy Chow on October 2, 2011 9:04 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

John, “But if you can prove within PA that PA is inconsistent, then PA is inconsistent.”

Gödel’s second incompleteness theorem implies that a consistent theory may prove its own inconsistency. For example, let $T = PA + \neg Con(PA)$. We know (assuming $PA$ is consistent), that $PA \nvdash Con(PA)$. Hence, $PA + \neg Con(PA)$ is consistent. In general, $PA$ also proves $Con(PA + \phi) \rightarrow Con(PA)$. So, $PA \vdash Con(PA + \neg Con(PA)) \rightarrow Con(PA)$. So, $PA + \neg Con(PA) \vdash \neg Con(PA + \neg Con(PA))$. So, $T \vdash \neg Con(T)$. But $T$ is consistent.

So, $T$ is a consistent theory which proves its own inconsistency. Of course, $T$ is not sound. (I.e., $\mathbb{N} \nvDash T$.) And $T$ is $\omega$-inconsistent, since $T$ proves $\exists x \, \mathrm{Prf}_{PA}(x, \ulcorner 0=1 \urcorner)$ while also proving $\neg \mathrm{Prf}_{PA}(\overline{n}, \ulcorner 0=1 \urcorner)$ for each numeral $\overline{n}$.

(If this example is not liked because it assumes the consistency of $PA$, then let $S$ be any consistent extension of $I \Sigma_1$. Then $S + \neg Con(S)$ is a consistent theory which proves its own inconsistency.)

So, a proof within $PA$ that $PA$ is inconsistent establishes only that $PA$ is $\omega$-inconsistent. But it might still be consistent.

Jeff

Posted by: Jeffrey Ketland on September 30, 2011 3:38 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Thanks for the answers - since I haven’t delved deeply into the book, I had only a slight idea of how Nelson plans on proving the inconsistency of PA. After reading the outline (the link for which I missed the first time), I now more or less understand that indeed, he’s “searching” for a contradiction.

Posted by: George on September 30, 2011 6:03 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Obviously I can’t speak for Nelson, but some quick points:

An ultra-finitist doesn’t need to deny the existence of the successor. It is a simple, finitely describable object: a simple operation (or algorithm, if you like) for obtaining the successor of a given number. There is no need to think of it as the graph of a function, which is an infinite object.

The fact that Nelson tries to present the material in a way that classical mathematicians can understand doesn’t mean that he needs to do it that way; the presentation doesn’t mean that he believes in the existence of those objects. E.g., you don’t need to talk about non-standard models of arithmetic at all.

An ultra-finitist does not need to give a natural number whose successor is not defined. If you can give a number (in unary) to an ultra-finitist, he can (probably) compute its successor (in principle); the problem is that you cannot give very large numbers in unary to him. Also, the time needed to compute the successor can increase with the size of the input. Let’s say I give you a string of symbols which is 10000 pages long and claim that it is a natural number in unary: it will take you some time even to check my claim that it really is a number in unary.

What is common in mathematical practice is to use symbols and expressions (if you prefer, terms of number type that are not in their normal form) in place of the canonical numbers corresponding to them, but you need to prove that you can turn them into their normal canonical form, and the proof is not trivial; it depends on the operations you are allowed to use. If we are talking about computing the successor operation, you need to give the input in its canonical unary form, which you may not be able to do. Probably no one will ever apply the successor operation to 10^10^1000, not because there is a problem with the successor but because no one is going to write 10^10^1000 in its canonical normal form.
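The point about the cost of canonical numerals can be made concrete with a toy sketch (my own illustration, not from the comment): even *checking* that a string is a unary numeral takes time proportional to its length, while applying the successor to an already-verified numeral is a single symbol.

```python
def is_unary_numeral(s: str) -> bool:
    """Check that s has the canonical form 'SS...S0'.
    The check must read every symbol, so its cost grows linearly with
    the length of the numeral -- already prohibitive for a numeral
    thousands of pages long."""
    return s.endswith('0') and set(s[:-1]) <= {'S'}

def successor(s: str) -> str:
    """One more application of S: trivial once s has been verified."""
    if not is_unary_numeral(s):
        raise ValueError('not a unary numeral')
    return 'S' + s

print(successor('SSS0'))  # -> 'SSSS0', the numeral for 4
```

The asymmetry is the whole point: the successor step is cheap, but presenting its input canonically is what blows up.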

About the amount of classical analysis that can be carried out predicatively (in Nelson’s sense), take a look at this. Note: BFTA is interpretable in Q.

Posted by: Kaveh on September 30, 2011 6:07 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Aside from other issues with this proof, given that it is part of an argument for something less than finitism, I’d at least expect it to be constructive.

To be constructive, it should give a particular derivation of “false” in the system(s) it claims to be inconsistent. If it can’t, then isn’t it just making vague, ungrounded statements about derivations without actually giving the concrete example of interest? How are we supposed to believe in such a thing if no one can tell us what it is?

Posted by: Rowan Davies on October 1, 2011 1:10 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

I think you’re setting your standards unreasonably high if you say that only a constructive proof of the inconsistency of (say) Peano arithmetic is worth bothering with. Even a nonconstructive proof would rock the world. If it then turned out nobody could find a constructive proof, well, that would be even more interesting.

Posted by: John Baez on October 1, 2011 2:37 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

If there were a non-constructive proof of a contradiction in PA, it could be transformed to a constructive proof. This follows from the Gödel-Gentzen double-negation translation.
http://en.wikipedia.org/wiki/G%C3%B6del%E2%80%93Gentzen_negative_translation
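The translation itself is a simple recursion on formulas. Here is a minimal sketch (my own toy encoding of formulas as nested tuples; not from the post): atoms get double-negated, disjunctions and existentials are rewritten via De Morgan, and the other connectives pass through.

```python
def neg_translate(f):
    """Gödel-Gentzen negative translation on formula ASTs.
    Formulas are nested tuples: ('atom', name), ('not', f),
    ('and', f, g), ('or', f, g), ('imp', f, g),
    ('forall', var, f), ('exists', var, f)."""
    op = f[0]
    if op == 'atom':
        return ('not', ('not', f))                # A  becomes  not-not-A
    if op == 'not':
        return ('not', neg_translate(f[1]))
    if op == 'and':
        return ('and', neg_translate(f[1]), neg_translate(f[2]))
    if op == 'or':                                # A or B  becomes  not(not-N(A) and not-N(B))
        return ('not', ('and', ('not', neg_translate(f[1])),
                               ('not', neg_translate(f[2]))))
    if op == 'imp':
        return ('imp', neg_translate(f[1]), neg_translate(f[2]))
    if op == 'forall':
        return ('forall', f[1], neg_translate(f[2]))
    if op == 'exists':                            # exists x A  becomes  not(forall x not-N(A))
        return ('not', ('forall', f[1], ('not', neg_translate(f[2]))))
    raise ValueError('unknown connective: %r' % (op,))
```

The relevant metatheorem is that PA proves a formula classically iff Heyting arithmetic proves its translation; since the translation of a contradiction is still a contradiction, a classical inconsistency proof yields a constructive one.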

Posted by: Panu Raatikainen on October 1, 2011 10:02 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Not that I hope that Nelson is right, but not everything that Tao says is correct either…

see my post in FOM

http://www.cs.nyu.edu/pipermail/fom/2011-October/015830.html

Posted by: Panu Raatikainen on October 1, 2011 4:57 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

I’m copying Panu’s post to FOM here. Note that you have to be a subscriber to post to FOM. So if this is a continuation of the conversation here, let’s not force people to go there, and then reply back here; that’s awkward.

The post follows:

Tao wrote:

Basically, in order for Chaitin’s theorem (10) to hold, the Kolmogorov complexity of the consistent theory T has to be less than l.

Nelson wrote:

So far as I know, the concept of the “Kolmogorov complexity of a theory”, as opposed to the Kolmogorov complexity of a number, is undefined.

Diamondstone wrote:

You can talk about the Kolmogorov complexity of anything that can be coded with a number, including any finitely axiomatizable theory (code the axioms with a number) or any computably axiomatizable theory (code the machine enumerating the axioms with a number).

This is true - but it does not make sense to talk about *the* Kolmogorov complexity of a theory, as it is totally relative to the particular way of arithmetization, and the choice is arbitrary. You can make the complexity of a theory T arbitrarily small, or large, with different choices.

In particular, Tao’s claim quoted above is false.

Best

Panu

Posted by: Todd Trimble on October 1, 2011 5:52 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Also, for easy reference, a direct link to Panu’s FOM post should be given. Here it is.

Posted by: Todd Trimble on October 1, 2011 6:00 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Tao’s claim is that the Chaitin machine must include a proof checker for T, which is right. He then stipulated a definition of the complexity of a theory T as the length of the smallest machine that checks proofs in T. Modulo some basic choices about coding Turing machines, which seem to be necessary for the Kolmogorov complexity of numbers to even make sense, Tao’s notion seems well-defined. Furthermore, we can assume these choices are held constant in the context of Nelson’s argument, in which I see no mention of variations on coding.

Posted by: Monroe Eskew on October 1, 2011 6:47 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

To Monroe Eskew:

What you say is more or less right (though a coding of Turing machines alone is not sufficient, as you seem to suggest: an arithmetization of the language of PA (or whatever) is also needed, and these are two separate issues; that was my initial point).

Anyway:

Chaitin’s theorem can be proved in many ways, and not all of them involve “a Chaitin machine”. (I give one alternative in my paper.)

A proof with a Chaitin machine provides, for a theory T, a sufficiently large l such that T cannot prove K(n) > l for any n. However, the constant l provided by this kind of proof may not be - and usually is not - the *minimal* such number. Sometimes, it is a far cry from the minimal! Ignoring this has caused a lot of confusion.

In the literature, we have called the *minimal* number c for T such that T cannot prove K(n) > c (for any n) “the characteristic constant” of T.

Now as Tao correctly notes later (let us fix an arithmetization), a relatively simple theory T may have a subtheory S which has an astronomically large Kolmogorov complexity.

But of course, being a subtheory, it cannot prove more cases of K(n) > m than T. So the minimal “characteristic” constant of S cannot be larger than that of T.

So Tao’s original claim, that “Basically, in order for Chaitin’s theorem (10) to hold, the Kolmogorov complexity of the consistent theory T has to be less than l ” is false, or at least ambiguous and unclear.

In sum, there is no connection between the Kolmogorov complexity of a theory and the number (always finite) of cases of K(n) > m that it can prove!

Posted by: Panu Raatikainen on October 2, 2011 5:19 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

And, I would like to add, it follows that what Tao says in a follow-up:

“Basically, I think the conceptual error here is to believe that the quantity l=l(T) provided by Chaitin’s theorem is monotone in the sense that if a given l works for a theory T, then it would also work for all restricted subtheories of T.”

seems to be misguided as well.

It *does* work for a subtheory - that is what it is to be a subtheory: it cannot prove anything the full theory cannot…

Posted by: Panu Raatikainen on October 2, 2011 8:19 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

I agree with most of what you say in your linked paper, Panu, but I would like to raise one small nitpick: on p. 580, you say “Closer reflection shows that the value of c_F is actually determined by the smallest (by its code) Turing machine which does not halt, but for which this cannot be proved in F. For consider [the smallest such machine, with code e]. It follows that it is impossible to prove in F that [the nth machine is the first machine to halt and output m], for any m and and n > e, since one cannot in F refute the possibility that [e outputs m].”

This seems to be mistakenly assuming that if F cannot prove “e halts”, then there is no m such that F can prove “e does not output m”. But this is generally not the case at all; there will be many instances in which, for example, a machine can be proven never to output the value 5, but whether it does in fact output a value other than 5 or instead runs forever without halting is undecidable (in F).

What you would need to refer to in this paragraph, it seems to me, is not simply a Turing machine which cannot be proved by F to halt, but, rather, a Turing machine e such that, for every potential output m, F cannot prove that e does not output m.

(Note that, though it may be less obvious that such a machine exists, you do essentially describe a way of constructing such a machine in your proof of theorem 5.1: just construct the program “Enumerate the theorems of F, search for the first one claiming that I do not output some particular value, and then output that particular value”, using the fixed point theorem for the self-reference)

[Actually, in full pedantry, the characteristic constant is of course not entirely determined by the code of the smallest machine with this property, but merely upper bounded by it. The possibility exists that there is an even earlier initial segment of the codes of Turing machines such that F cannot rule out, for any m, that some Turing machine among that initial segment outputs m, even though for each particular machine among that segment, there is some m such that F rules out the possibility that that particular machine outputs m.]

Anyway, as I said, this is all just a small nitpick, and I am greatly in sympathy with the overall message and demonstrations of your paper.

Posted by: Sridhar Ramesh on October 1, 2011 7:24 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

To Sridhar Ramesh:
You’re absolutely right! I should have mentioned that when I first linked the paper…

This was first noticed by Daniel Leivant immediately after the paper came out (and by several other people later on), but as Leivant noted, this little lacuna in no way affects my overall critical arguments.

A thorough analysis has been given (in an unpublished paper) by Aatu Koskensilta, and now also in Shingo Ibuka, Makoto Kikuchi and Hirotaka Kikyo: “Kolmogorov complexity and characteristic constants of formal theories of arithmetic”, Mathematical Logic Quarterly 57, Issue 5, 470-473.

Posted by: Panu Raatikainen on October 2, 2011 4:35 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Here’s another question from a nonmathematician. Suppose Nelson’s proof had turned out sound (or suppose that some other analogous proof does). What would then be the next steps for the mathematics community? What kinds of projects (whether salvage projects or reconstruction projects) would likely be called for? I have difficulty imagining that if PA were proven inconsistent that mathematicians would just throw out (when not specifically studying inconsistent systems) everything that depends on PA and say, “Oh, well, it was all good while the dream lasted.” In other words, if PA were proven inconsistent, what would be the next set of moves?

Posted by: Brandon Watson on October 1, 2011 5:48 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

My guess (from the viewpoint of applied maths) is that not much would change: a different set of axioms would be adopted (either they won’t fit the requirements of Gödel’s 2nd incompleteness theorem, and therefore could be proved consistent, or perhaps they will simply be adopted, like PA). But I’m doubtful about how big an effect it would have on existing areas of math. After all, maths, more or less as it is now, has been applied in practice since, well, nearly forever, and so far has served that purpose well enough.
As for more abstract fields - I’m not sure, but don’t they have a relation to less abstract fields of math, which have been successfully applied… (see above)?

At least, I hope so - I don’t want to have to study some “new” calculus, linear algebra, tensor analysis or whatever.

Posted by: George on October 1, 2011 6:32 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

The axioms of first order Peano Arithmetic (PA) are independent. This means no given axiom of PA can be derived from the other axioms. It also means, if PA is consistent, we can replace an axiom of PA with its negation and still have a consistent theory. Otherwise, the axiom wouldn’t be independent.

I have been studying a theory I call Modular Arithmetic (MA). MA has the same axioms as PA except that Ax (S(x) ~= 0) is replaced with its negation, Ex (S(x)=0). We can demonstrate that there are arbitrarily large finite models of MA based on modular or clock arithmetic. The smallest such model is defined by:

( S(0)=0 ) -> ( Ax(x=0) and 0+0=0 and 0*0=0 )

This is a theorem in the language of MA. The language of MA consists of the axioms of MA and the axioms of first order logic with equality. The statement above can be derived using the induction axiom. Since MA and PA have the same induction axiom schema, this statement is also a theorem of PA.

Asking whether MA can prove its own consistency depends a lot on what one means by “consistent”. We can prove there exist finite structures satisfying the axioms of MA using just the language of MA. MA can prove MA has finite models. MA can’t prove it has models “with context”. Just like PA, MA can’t prove it has an infinite model. PA is inconsistent if PA can prove PA has any model.

If one defines “consistent theory” as a theory with an infinite model then MA can’t prove it has an infinite model. Usually, we talk about models and consistency in a meta-theory like ZFC. ZFC assumes infinite sets exist. We can prove a lot of very strange things about MA in a meta theory like ZFC.

Using the compactness theorem we can prove MA has an infinite model. We can also prove there are infinite models of MA with no primes. I am pretty sure we can prove the universal set in a model of MA can’t be well ordered without additional axioms like the axiom of choice. If the universe can’t be well ordered, it can’t be well founded.

We can prove the set of all standard natural numbers is the universe of some model of MA using the Lowenheim-Skolem theorem. This allows us to prove MA is omega inconsistent.

A standard natural number is a number that can be represented by a numeral. A numeral is a term in the language of MA like SS…S(0) where there are a “finite” number of applications of the successor function. A numeral is a finite string. Of course, we have to assume we know a finite string when we see one. The following statements are theorems of MA (and PA).

(S(0)=0) -> Ax (x=0)
(SS(0)=0) -> Ax (x=0 or x=S(0))
(SSS(0)=0) -> Ax (x=0 or x=S(0) or x=SS(0))

Assume MA has a model where we have a standard natural number, z, and S(z)=0. Obviously, z is the “largest” natural number in the model. The theorems above prove if the successor of a standard natural number is zero then the universal set has a finite number of elements.

Assume the set of all standard natural numbers is the universal set of an infinite model of MA. This means we can prove Ax (S(x) ~= 0) where x is a standard natural number. By axiom we have Ex (S(x)=0). This is a contradiction.

As an ultra-finitist, I don’t see a lot of difference between omega inconsistent and inconsistent. Remember, MA doesn’t prove that MA has an infinite model. Assuming PA is consistent forces us to conclude that MA is omega inconsistent.

If PA is inconsistent then the axiom Ex (S(x)=0) is a theorem of the other axioms of arithmetic. If PA is inconsistent we can’t prove there are an infinite number of primes. We can’t even prove seven is prime because seven is not a prime in some models of MA. A modular universe isn’t at all like Cantor’s paradise.
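The claim that seven need not be prime can be made concrete in a finite modular structure: in Z/12Z we have 5 * 11 = 55 = 7 (mod 12), so the first-order sentence “seven is prime” fails there. A brute-force sketch (the function name is ours, for illustration only):

```python
# Evaluate the first-order sentence "p is prime", read as
#   Aa Ab (a*b = p -> a = 1 or b = 1),
# in the finite structure Z/nZ with multiplication mod n.

def is_prime_in_model(p, n):
    return all(a == 1 or b == 1
               for a in range(n) for b in range(n)
               if (a * b) % n == p % n)

# In Z/12Z seven factors nontrivially: 5 * 11 = 55 = 7 (mod 12).
assert (5 * 11) % 12 == 7
assert not is_prime_in_model(7, 12)
```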

Russell
- 2 many 2 count

Posted by: Russell Easterly on October 1, 2011 10:59 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

In my opinion, it would destroy core mathematical ideas going back to Euclid. Pure math would be decimated. We would still have to keep the applied stuff and formulate some new general theory that accounts for the applied math and fosters progress in applied math. But a substantial reworking of everything would be required.

Posted by: Monroe Eskew on October 1, 2011 7:08 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

No “new” calculus, linear algebra, or other maths is required. The development of relativity and quantum mechanics didn’t quite invalidate Newtonian physics, but rather demonstrated that Newtonian physics is not applicable in various regimes. Yet Newtonian physics continues to be taught and used by many people in the real world. Similarly, applied math and a lot of pure math will continue to be taught and used, regardless of what happens in Foundations of Mathematics.

Posted by: Paul AC Chang on October 1, 2011 8:19 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

I disagree. Many statements in perfectly ordinary mainstream math, including the more theoretical things in calculus, outright imply the consistency of PA. It would be nothing short of a crisis.

Posted by: Monroe Eskew on October 1, 2011 9:36 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Here is an example due to Harvey Friedman. The following statement implies Con(PA):

“Every bounded sequence of rationals contains a subsequence that converges at least as fast as {1/n}.”
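A concrete (and entirely unproblematic) instance of the statement, for readers who want to see it in action: for the bounded rational sequence a_n = (-1)^n (1/2 + 1/n), the even-indexed subsequence converges to 1/2 at least as fast as {1/k}. The sequence and names here are our own illustration, not Friedman's.

```python
from fractions import Fraction

# a_n = (-1)^n * (1/2 + 1/n) is bounded; its even-indexed subsequence
# a_2, a_4, a_6, ... satisfies |a_{2k} - 1/2| = 1/(2k) <= 1/k.

def a(n):
    sign = 1 if n % 2 == 0 else -1
    return sign * (Fraction(1, 2) + Fraction(1, n))

limit = Fraction(1, 2)
for k in range(1, 200):
    term = a(2 * k)                 # k-th element of the subsequence
    assert abs(term - limit) <= Fraction(1, k)
```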

Posted by: Monroe Eskew on October 1, 2011 9:42 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

PA could be inconsistent and a large part of homotopy theory might still survive. The theorem would be false, but all that means is that there is a sequence of rationals that has no fast-converging subsequence. Maybe the non-existence of a fast-converging subsequence for that specific sequence of rationals has no effect on homotopy theory!?

Do you know any major and fundamental theorem in homotopy theory that would be false because of inconsistency of PA?

Posted by: logician on October 1, 2011 9:59 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Sorry, I do not know.

To me, the most significant consequence would be that infinite sets cannot exist, at least modulo some weak fragment of set theory. Hence most of analysis would go out the window.

Posted by: Monroe Eskew on October 1, 2011 10:24 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Not necessarily. We might be able to recover a large part of analysis by restricting all objects to finitely describable ones (computable analysis?). Infinite sets that can be described by finite means may survive in such a universe. The exact statements of some results in analysis would no longer be true, but the constructions and concepts developed in analysis would remain useful.
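One way to picture the computable-analysis idea (a sketch under our own conventions, not a standard library): represent a real number by a finite description, namely a program that, given a precision k, returns a rational within 2^-k of the real; operations like addition then act on descriptions.

```python
from fractions import Fraction

def sqrt2(k):
    """A rational within 2^-k of sqrt(2), found by bisection."""
    lo, hi = Fraction(1), Fraction(2)            # invariant: lo^2 <= 2 <= hi^2
    while hi - lo > Fraction(1, 2 ** k):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return lo

def add(x, y):
    """Sum of two computable reals, as another finite description."""
    # Query each summand at precision k+1 so the errors total at most 2^-k.
    return lambda k: x(k + 1) + y(k + 1)

two_sqrt2 = add(sqrt2, sqrt2)
approx = two_sqrt2(20)
assert abs(approx * approx - 8) < Fraction(1, 1000)   # (2*sqrt(2))^2 = 8
```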

Let me state that, like you, I don’t expect anyone to find a proof of inconsistency in PA. There have been lots of mathematicians, and we have explored a considerable part of the consequences of ZFC; not even once have we had two theorems that contradicted each other. We would feel insecure if contradictions were popping up all over mathematics, and we are not in such a situation. Even if there is an inconsistency, it is probably well hidden; probably it is so long that no mathematician could ever find it, even in a zillion years. Now, does that mean what we prove in ZFC or PA is really true? No: consistency is very far from truth. We learned to live with incomplete systems (never having the full truth); maybe we can also learn to live with systems that have a small number of pathological false consequences.

(I am sure you are aware of Shelah’s paper Logical Dreams, but for others reading the blog it might be enlightening: http://arxiv.org/abs/math/0211398)

Posted by: logician on October 2, 2011 12:24 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

logician wrote:

Do you know any major and fundamental theorem in homotopy theory that would be false because of inconsistency of PA?

That question doesn’t make sense to me. If an inconsistency were discovered in Peano arithmetic, it would mean that the basic rules of classical logic, the basic rules governing addition and multiplication of natural numbers, and/or one of our most fundamental proof techniques—mathematical induction—are seriously flawed. We use these ideas all throughout mathematics. So we wouldn’t suddenly know that certain specific theorems here and there are false. All we’d know, at first, is that the whole framework of mathematics was rotten and untrustworthy. So we’d need to understand what went wrong, find new intuitions that we can trust, and see how much mathematics could be salvaged or redone.

Presumably Edward Nelson’s Elements was going to begin tackling this enormous task. In his outline he wrote:

To demonstrate that sophisticated modern mathematics can be done in so primitive a theory as $Q_0^*$ is an open-ended project, and I plan to make a bare beginning in the book.

Posted by: John Baez on October 2, 2011 3:24 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

If PA is inconsistent, it means that something in PA is wrong, not that everything we have done is wrong. It means that our intuition is wrong somewhere (as it has been in several cases in the history of mathematics), but that does not imply that what followed from those intuitions was also wrong. For most of history we were doing mathematics without a foundation, and at times we were doing mathematics in systems that we now know are inconsistent. I think mathematics did and can survive without a foundation and formalism. A foundation tries to be as general as possible, to contain everything that we have done or may do in the future, and this generality sometimes leads to problems, as it did with the full comprehension axiom in set theory; but for most tasks we don’t need to consider all the wild and strange objects that general foundations make us believe exist.

Posted by: logician on October 25, 2011 4:59 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

You should take a look at the research program called “Reverse Mathematics”:

http://en.wikipedia.org/wiki/Reverse_mathematics

Note then that ACA_0 is an extension of PA, and RCA_0 is an extension of PRA. If all these were inconsistent, and should be abandoned, …

Huge parts of ordinary analysis and algebra would also have to be jettisoned!

Monroe Eskew is absolutely right: “It would be nothing short of a crisis.”

Posted by: Panu Raatikainen on October 2, 2011 5:43 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Brandon Watson wrote: “I have difficulty imagining that if PA were proven inconsistent that mathematicians would just throw out (when not specifically studying inconsistent systems) everything that depends on PA and say, ‘Oh, well, it was all good while the dream lasted.’”

Not in the least.

Axioms were designed to put mathematics on a proper foundation, but if contradictions appear, they only show that these axioms and the rules of inference are erroneous. Mathematics does not depend on axioms. Mathematics has been adapted from reality - and reality is consistent. 1 + 1 = 2 can be proven with an abacus. For that purpose no axioms and no 40-page logic proof are required.

The Peano Axioms are believed to define the natural numbers. They don’t. If you take them literally, you will find that every sequence starting with 1 (respectively 0) and without repeating terms will comply with the axioms. That holds not only for 1, 1/2, 1/4, … (where the successor is always found in a very natural way, namely by halving the cake) or for 1, 1/2^2, 1/3^2, … (where we can even compute the sum pi^2/6 of all “natural numbers”). All rational numbers (in Cantor’s sequence) and all algebraic numbers (in Dedekind’s sequence) also comply with the Peano Axioms. Has anybody been troubled by that imprecision?
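For what it is worth, the sum mentioned above is Euler’s Basel series, 1 + 1/2^2 + 1/3^2 + … = pi^2/6; a quick numerical check of the partial sums (the standard tail bound 1/(N+1) < tail < 1/N is used in the assertion):

```python
import math

# Partial sums of 1 + 1/4 + 1/9 + ... approach pi^2/6; the tail beyond
# the N-th term is squeezed between 1/(N+1) and 1/N.
N = 100000
partial = sum(1 / n ** 2 for n in range(1, N + 1))
diff = math.pi ** 2 / 6 - partial
assert 1 / (2 * N) < diff < 2 / N
```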

In my books and in my math-lessons I use the axioms (although I never met a student who really needed them to understand |N)
(1) 1 is in M
(2) If n is in M, then n+1 is in M
(3) Every set M that complies with (1) and (2) contains |N
(of course |N must also comply with (1) and (2))

Some mathematicians object that +1 is undefined as long as the real numbers have not been introduced.
But is +1 “defined” by the usual axioms that fix commutativity and associativity of addition? Not in the least!

+1 must be known, and in fact is known, by every human. Without knowing it, you cannot “define” the natural sequence. And knowing it, you need no axioms at all; in particular you do not need the notion of a successor.

Axioms are not required to do mathematics. For that purpose (and for all practical purposes, FAPP, as John Bell used to say) you only need some good taste, hopefully acquired from a good teacher. Therefore we should at least adopt the motto from Ed Nelson’s book:

Mathematicians are a kind of Frenchmen: if you talk to them, they translate it into their own language, and then it is at once something entirely different.
(JOHANN WOLFGANG VON GOETHE)

Whether logicians prove PA consistent or inconsistent may be their personal enjoyment. It does not concern mathematics. The essential parts of mathematics are proven by comparison with reality, and in particular by computers.

Regards, WM

Posted by: WM on October 2, 2011 11:00 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

I totally agree with WM.

Applied math is an indispensable part of various engineering disciplines because its application and usefulness in predictive models have been validated against real-world conditions again and again and again.

If, for argument’s sake, PA were proven inconsistent, then math merely becomes a de facto natural science like biology or chemistry, in the sense that the “validity” of math no longer stems from axioms, but rather from validation against real-world conditions and observations.

Posted by: Paul AC Chang on October 2, 2011 8:21 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

You say that reality is consistent, but this isn’t really known for certain. All you can say is that “so far we have made no observations that indicate reality is inconsistent”, which is exactly the situation with the consistency of ZFC. That’s just abduction, not deduction. Perhaps a proof of the inconsistency of PA would indicate that reality itself is logically inconsistent, whatever that would mean. But if nothing else, it would certainly rock the foundations of all branches of science as well, and thus applied mathematics.

Posted by: Idran on October 12, 2011 7:58 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

I’d say that from the looks of recent comments, which are both speculative and imprecise, this thread has run its course.

The question of whether Nelson’s proof of inconsistency is valid has been settled, and unless anyone has something actually productive to add, it is time to move on.

Posted by: Todd Trimble on October 12, 2011 9:48 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

If Nelson’s proof had checked out, there would have been a philosophical battle to sort out what kind of changes were better suited to provide a proper foundation for mathematics. But it is pretty obvious to me, from reading Nelson’s Outline, that he thought the problem was not simply with the axiom scheme for induction, but with induction itself. Nelson’s book “Radically Elementary Probability Theory,” which is gorgeous and freely available, gives an idea of what might have happened.

Posted by: analyst on October 2, 2011 9:43 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Here is a preliminary analysis of what just happened. It’s a post by Catarina Dutilh Novaes, who is a philosophy professor at the University of Groningen. It appeared on M-Phi, a group blog “dedicated to mathematical philosophy”:

Posted by: John Baez on October 3, 2011 11:07 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

I proved the inconsistency of arithmetic in an entirely different way. The P vs. NP problem is a golden opportunity for any inconsistency conjecture. P or NP consists of an infinite number of languages, where each language consists of an infinite number of strings. A simple encoding of the Kleene-Rosser paradox brings the result (by diagonalization):

e(KR) in NP iff e(KR) not in NP.

See full papers at: http://kamouna.wordpress.com

Posted by: Rafee Kamouna on October 3, 2011 12:02 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

For an easy beginner’s intro to the work of Kritchman and Raz, see:

Also, Henry Flynt has alerted me to the existence of this paper on the arXiv, written by two of the ultra-finitists mentioned earlier:

The first sentence in the abstract seems to claim they’ve found an inconsistency in arithmetic, but I haven’t read the paper yet, and of course I make no claims for its validity. I mainly mention it because this blog entry is one of the few places one can find information on ultra-finitism.

Posted by: John Baez on October 6, 2011 4:14 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

I would like to report that, jointly with Dr Teodor J. Stepien, I delivered a talk “On consistency of Peano’s Arithmetic System” at the conference “2009 European Summer Meeting of the Association for Symbolic Logic, Logic Colloquium ’09” (July 31 - August 5, 2009, Sofia, Bulgaria). In this talk we presented a sketch of the proof of the consistency of Peano’s Arithmetic System (of course, the full proof was constructed by us before the mentioned conference). This proof is ABSOLUTELY elementary, i.e. ONLY the axioms of first-order logic and the axioms of Peano’s Arithmetic System are used. Hence, from the construction of this proof, it follows that Gödel’s Second Incompleteness Theorem is INVALID. The abstract of the talk mentioned above was published in “The Bulletin of Symbolic Logic”: T. J. Stepien and L. T. Stepien, “On the consistency of Peano’s Arithmetic System”, Bull. Symb. Logic 16, 132 (2010). The link to the Proceedings of the conference, where this abstract was placed: http://www.math.ucla.edu/~asl/bsl/1601-toc.htm

Posted by: Lukasz on September 26, 2012 3:45 PM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

Appendix: Teodor J. Stepien published his previous papers, as Teodor Stepien.

Posted by: Lukasz on October 8, 2012 10:34 AM | Permalink | Reply to this

### Re: The Inconsistency of Arithmetic

A proof that ZFC has no omega-models

http://www.cs.nyu.edu/pipermail/fom/2013-February/016986.html

Posted by: Jaykov Foukzon on February 16, 2013 8:49 PM | Permalink | Reply to this

Post a New Comment