## October 11, 2011

### Weak Systems of Arithmetic

#### Posted by John Baez

The recent discussion about the consistency of arithmetic made me want to brush up on my logic. I’d like to learn a bit about axioms for arithmetic that are weaker than Peano arithmetic. The most famous is Robinson arithmetic:
Robinson arithmetic is also known as Q, after a Star Trek character who could instantly judge whether any statement was provable in this system, or not.

Instead of Peano arithmetic’s axiom schema for mathematical induction, Q only has inductive definitions of addition and multiplication, together with an axiom saying that every number other than zero is a successor. It’s so weak that it has computable nonstandard models! But, as the above article notes:

Q fascinates because it is a finitely axiomatized first-order theory that is considerably weaker than Peano arithmetic (PA), and whose axioms contain only one existential quantifier, yet like PA is incomplete and incompletable in the sense of Gödel’s Incompleteness Theorems, and essentially undecidable.
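Concretely, the two inductive definitions Q keeps are just the primitive recursions for addition and multiplication. Here is a sketch of those defining equations as executable recursion (a Python illustration, of course not part of the formal system itself):

```python
def succ(n):
    # the successor function S is the only primitive operation besides 0
    return n + 1

def add(x, y):
    # Q's defining equations:  x + 0 = x,   x + S(y) = S(x + y)
    return x if y == 0 else succ(add(x, y - 1))

def mul(x, y):
    # Q's defining equations:  x * 0 = 0,   x * S(y) = (x * y) + x
    return 0 if y == 0 else add(mul(x, y - 1), x)
```

Everything Q proves about particular numerals boils down to unwinding these recursions.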

But there are many interesting systems of arithmetic between PA and Q in strength. I’m hoping that if I tell you a bit about these, experts will step in and tell us more interesting things—hopefully things we can understand!

For example, there’s primitive recursive arithmetic, or PRA:

This system lacks quantifiers, and has a separate function symbol for each primitive recursive function, together with an axiom recursively defining it.
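To make "an axiom recursively defining it" concrete, here is a primitive-recursion combinator (a Python sketch; the name `primrec` and the convention of putting the recursion variable first are my own choices):

```python
def primrec(base, step):
    """Build f from f(0, *xs) = base(*xs) and f(n+1, *xs) = step(n, f(n, *xs), *xs).

    Each function built this way corresponds to one function symbol of PRA,
    and the two defining equations are that symbol's defining axiom."""
    def f(n, *xs):
        acc = base(*xs)
        for i in range(n):
            acc = step(i, acc, *xs)
        return acc
    return f

# addition and multiplication as primitive recursive functions:
# add(n, x) = x + n,  mul(n, x) = x * n
add = primrec(lambda x: x, lambda i, acc, x: acc + 1)
mul = primrec(lambda x: 0, lambda i, acc, x: add(acc, x))
```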

What’s an interesting result about PRA? Here’s the only one I’ve seen: its proof-theoretic ordinal is ${\omega }^{\omega }$. This is much smaller than the proof-theoretic ordinal for Peano arithmetic, namely $\epsilon_0$.

What’s $\epsilon_0$? It’s a big but still countable ordinal which I explained back in week236. And what’s the proof-theoretic ordinal of a theory?

Ordinal analysis concerns true, effective (recursive) theories that can interpret a sufficient portion of arithmetic to make statements about ordinal notations. The proof theoretic ordinal of such a theory T is the smallest recursive ordinal that the theory cannot prove is well founded — the supremum of all ordinals $\alpha$ for which there exists a notation $o$ in Kleene’s sense such that T proves that $o$ is an ordinal notation.

For more details, try this wonderfully well-written article:

Climbing down the ladder we eventually meet elementary function arithmetic, or EFA:

Its proof-theoretic ordinal is just ${\omega }^{3}$. It’s famous because Harvey Friedman made a grand conjecture about it:

Every theorem published in the Annals of Mathematics whose statement involves only finitary mathematical objects (i.e., what logicians call an arithmetical statement) can be proved in EFA. EFA is the weak fragment of Peano Arithmetic based on the usual quantifier-free axioms for 0, 1, +, $×$, exp, together with the scheme of induction for all formulas in the language all of whose quantifiers are bounded.

Does anyone know yet if Fermat’s Last Theorem can be proved in EFA? I seem to remember early discussions where people were wondering if Wiles’ proof could be formalized in Peano arithmetic.

But let’s climb further down the ladder. How low can we go? I guess $\omega$ is too low to be the proof-theoretic ordinal of any theory “that can interpret a sufficient portion of arithmetic to make statements about ordinal notations.” Is that right? How about $\omega +1$, $\omega \cdot 2$, and so on?

There are some theories of arithmetic whose proof-theoretic ordinal is just ${\omega }^{2}$. One of them is called $I\Delta_0$. This is Peano arithmetic with induction restricted to predicates where all the for-alls and there-exists quantify over variables whose range is explicitly bounded, like this:

$\forall i\le n\phantom{\rule{thickmathspace}{0ex}}\forall j\le n\phantom{\rule{thickmathspace}{0ex}}\forall k\le n\phantom{\rule{thickmathspace}{0ex}}\left({i}^{3}+{j}^{3}\ne {k}^{3}\right)$

Every predicate of this sort can be checked in an explicitly bounded amount of time, so these are the most innocuous ones.
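Since every quantifier is bounded, a formula like the one above can be decided by sheer brute force. A Python sketch (I take the variables to range over positive integers, as in Fermat's equation; with $0$ allowed the displayed formula would be trivially false):

```python
def bounded_formula_holds(n):
    # decides  "for all i,j,k with 1 <= i,j,k <= n:  i^3 + j^3 != k^3"
    # by exhaustive search -- O(n^3) checks, an explicit bound on the work
    return all(i**3 + j**3 != k**3
               for i in range(1, n + 1)
               for j in range(1, n + 1)
               for k in range(1, n + 1))
```

No unbounded search is ever needed, which is exactly what makes these predicates so innocuous.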

Such predicates lie at the very bottom of the arithmetical hierarchy, which is a way of classifying predicates by the complexity of their quantifiers. We can also limit induction to predicates at higher levels of the arithmetical hierarchy, and get flavors of arithmetic with higher proof-theoretic ordinals.

But you can always make infinities bigger — to me, that gets a bit dull after a while. I’m more interested in life near the bottom. After all, that’s where I live: I can barely multiply 5-digit numbers without making a mistake.

There are even systems of arithmetic too weak to make statements about ordinal notations. I guess Q is one of these. As far as I can tell, it doesn’t even make sense to assign proof-theoretic ordinals to these wimpy systems. Is there some other well-known way to rank them?

Much weaker than Q, for example, is Presburger arithmetic:

This is roughly Peano arithmetic without multiplication! It’s so simple you can read all the axioms without falling asleep:

$\neg(0 = x+1)$

$x+1 = y+1 \;\Rightarrow\; x = y$

$x+0 = x$

$(x+y)+1 = x+(y+1)$

and an axiom schema for induction saying:

$(P(0)\;\&\;(P(x) \Rightarrow P(x+1)))\;\Rightarrow\;P(y)$

for all predicates $P$ that you can write in the language of Presburger arithmetic.

Presburger arithmetic is so simple, Gödel’s first incompleteness theorem doesn’t apply to it. It’s consistent. It’s complete: for every statement in Presburger arithmetic, either it or its negation is provable. But it’s also decidable: there’s an algorithm that decides which of these two alternatives holds!

However, Fischer and Rabin showed that no algorithm can do this for all statements of length $n$ in less than ${2}^{{2}^{cn}}$ steps, for some constant $c > 0$. So, Presburger arithmetic is still fairly complicated from a practical perspective. (In 1978, Derek Oppen showed that an algorithm with triply exponential runtime can do the job.)

Presburger arithmetic can’t prove itself consistent: it’s not smart enough to even say that it’s consistent! However, there are weak systems of arithmetic that can prove themselves consistent. I’d like to learn more about those. How interesting can they get before the hand of Gödel comes down and smashes them out of existence?

Posted at October 11, 2011 2:37 AM UTC


### Re: Weak Systems of Arithmetic

Does anyone know yet if Fermat’s Last Theorem can be proved in EFA? I seem to remember early discussions where people were wondering if Wiles’ proof could be formalized in Peano arithmetic.

I’ve heard Angus MacIntyre talking about this. He is working on a paper arguing that Wiles’ proof translates into PA. I say ‘arguing’ rather than ‘proving’ because all he plans to do is show that the central objects and steps can be formalised in PA, rather than translate the entirety of Wiles’ proof, which would be a ridiculously Herculean task. I don’t know if his paper is available yet, but there’s some discussion of it here.

Posted by: Richard Elwes on October 11, 2011 8:52 AM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

Richard wrote:

I’ve heard Angus MacIntyre talking about this. He is working on a paper arguing that Wiles’ proof translates into PA.

Hmm, that’s interesting! Sounds like a lot of work—but work that’s interesting if you really know and like number theory and logic. Of course one would really want to do this for the Modularity Theorem, not just that piddling spinoff called Fermat’s Last Theorem.

I say ‘arguing’ rather than ‘proving’ because all he plans to do is show that the central objects and steps can be formalised in PA, rather than translate the entirety of Wiles’ proof, which would be a ridiculously Herculean task.

Right. But by the way, I think most logicians would be perfectly happy to say ‘proving’ here.

Posted by: John Baez on October 11, 2011 9:20 AM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

I think most logicians would be perfectly happy to say ‘proving’ here.

Well, when I heard Angus talk he was keen to emphasise that it would not be a complete proof, but would only focus on the major bits of machinery needed. So it seems polite to echo the official line!

Of course one would really want to do this for Modularity Theorem, not just that piddling spinoff called Fermat’s Last Theorem.

Yes - my notes from the talk are elsewhere, but I think his main focus is indeed on the central modularity result (I don’t know whether he addresses the full theorem, or just the case needed for FLT).

In any case, he claims that it is effectively $Π^0_1$, and provable in PA.

Posted by: Richard Elwes on October 11, 2011 10:51 AM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

Regarding FLT, nLab has a short section on this, so any findings could be added there. It mentions Colin McLarty’s research.

I have also heard Angus MacIntyre on a sketch of a proof that PA suffices. He seems to have given a number of talks on this, e.g., here and here, the latter mentioning a discussion on FOM.

There’s a paper by Jeremy Avigad – Number theory and elementary arithmetic – which should interest you.

Posted by: David Corfield on October 11, 2011 9:48 AM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

McLarty has recently shown (I believe) that finite-order arithmetic is sufficient to define pretty much all of Grothendieck-style algebraic geometry necessary for arithmetic questions. ${n}^{\mathrm{th}}$-order arithmetic admits quantification over ${P}^{n}\left(ℕ\right)$, the $n$-times iterated power set for some given $n$. The $n$ needed depends on the problem in question, and the hope is that $n<2$ (PA or weaker) is sufficient for FLT, or even the modularity theorem (since there is a proof of the Modularity Theorem which is simpler than Wiles’ original proof of the semistable case).

The trick is defining derived functor cohomology for sheaf coefficients. All the algebra content is apparently very basic from a logic point of view.

Posted by: David Roberts on October 12, 2011 9:43 AM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

So, I just spent a bit playing around with $I{\Delta }_{0}$ to get a sense for it. I wanted to build a Gödel coding, and I found I needed the following lemma (quantifiers range over positive integers):

${\forall }_{n}{\exists }_{N}{\forall }_{k\le n}{\exists }_{d}\phantom{\rule{1em}{0ex}}kd=N$.

Easy enough in PA; it’s a simple induction on $n$. But in $I{\Delta }_{0}$ I can’t make that induction because there is no bound on $N$. (There’s also no bound on $d$, but I can fix that by changing the statement to ${\exists }_{d\le N}$; this is also true and trivially implies the above.) I can’t fix it by adding in $N\le {n}^{100}$ because that’s not true; the least such $N$ is of size $\approx {e}^{n}$. I can’t write $N\le {4}^{n}$ because I don’t have a symbol for exponentiation.

Anyone want to give me a tip as to how to prove this in $I{\Delta }_{0}$?

Posted by: David Speyer on October 11, 2011 8:59 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

That’s a great puzzle, David! I’m not very good at these things, so I hope someone materializes who can help you out. In the meantime here are some references that might (or might not) provide useful clues. At least I found them somewhat interesting.

First:

I’m guessing you could do what you want in $I{\Delta }_{0}+\mathrm{exp}$, but you’re struggling to do it in $I{\Delta }_{0}$.

A few intriguing quotes:

Of the commonly studied bounded arithmetic theories, $I{\Delta }_{0}+\mathrm{exp}$, the theory with induction for bounded formulas in the language of $0,S,+,×$ together with the axiom saying the exponential function is total, is one of the more interesting…

Wilkie–Paris have shown several interesting connections between $I{\Delta }_{0}+\mathrm{exp}$ and weaker theories. They have shown $I{\Delta }_{0}+\mathrm{exp}$ cannot prove $\mathrm{Con}\left(Q\right)$.

Despite the fact that $I{\Delta }_{0}+\mathrm{exp}$ is not interpretable in $I{\Delta }_{0}$, it is known if $I{\Delta }_{0}+\mathrm{exp}$ proves $\forall x\phantom{\rule{thickmathspace}{0ex}}A\left(x\right)$ where $A$ is a bounded formula then $I{\Delta }_{0}$ proves

$\forall x\,\left(\left(\exists y\; y={2}_{k}^{x}\right)\Rightarrow A(x)\right)$

Here ${2}_{k}^{x}$ is a stack of 2’s $k$ high with an $x$ at top.

Here $k$ depends on $x$ in some way. I guess he’s saying that while $I{\Delta }_{0}$ can be used to describe a relation deserving of the name $y={2}_{k}^{x}$, it can’t prove that exponentiation is total, so it can’t prove there exists a $y$ such that $y={2}_{k}^{x}$. So, we need to supplement its wisdom for it to prove something similar to $\forall x\phantom{\rule{thickmathspace}{0ex}}A\left(x\right)$. Or in his words:

Intuitively, this result says: given $x$, if $I{\Delta }_{0}$ knows a big enough $y$ exists then it can show $A\left(x\right)$ holds.

Of course you don’t want to resort to a trick like this!

Posted by: John Baez on October 12, 2011 5:05 AM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

Over on MathOverflow, Joel David Hamkins says that “any theory $I{\Sigma }_{n}$, even $I{\Sigma }_{0}$,” is able to “perform basic Gödel coding and simulate Turing machine computations”.

As far as I can tell, $I{\Sigma }_{0}$ is the same as $I{\Delta }_{0}$, since in the arithmetical hierarchy ${\Sigma }_{0}$ is just another name for ${\Delta }_{0}$.

(By the way: in the reference I just gave, you’ll see superscript 0’s as well as subscripts, but also an admission that it’s common to leave these superscript 0’s out.)

So, it sounds like Hamkins knows how to do Gödel coding in $I{\Delta }_{0}$, or at least knows someone or something who does.

Posted by: John Baez on October 12, 2011 5:19 AM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

I think you can bet your sweet bippy that he himself knows how to do it. :-) Hey, maybe someone should ask on Math Overflow!

Posted by: Todd Trimble on October 12, 2011 6:23 AM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

It looks like the conversation has moved on. In case anyone is still puzzled, let me spell out what Kaveh is saying:

My statement cannot be proved in $I{\Delta }_{0}$ because $I{\Delta }_{0}$ is polynomially bounded.

I think I understand what this means now. Suppose that $I{\Delta }_{0}$ proves

${\forall }_{n}{\exists }_{m}:\dots$

where the ellipsis is any grammatical statement about $n$ and $m$. Then there is some polynomial $p\left(n\right)$ such that, for each $n$, there exists an $m$ with $m\le p\left(n\right)$ satisfying the statement.

This is not true for my statement! The smallest valid $N$ is $\mathrm{LCM}\left(1,2,\dots ,n\right)$, which is $\approx {e}^{n}$. (The more obvious choice of $N$ is $n!$, which is even bigger.) So this is a great example of a sentence which is true (as a statement about ordinary integers) and grammatical in $I{\Delta }_{0}$, but not provable in $I{\Delta }_{0}$, on account of the fact that it involves a fast growing function.
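That growth rate is easy to check numerically (a Python sketch; `math.lcm` needs Python 3.9+):

```python
import math
from functools import reduce

def lcm_up_to(n):
    # the least N divisible by every k <= n is lcm(1, 2, ..., n)
    return reduce(math.lcm, range(1, n + 1), 1)

# log lcm(1..n) is Chebyshev's psi(n), asymptotic to n by the prime number
# theorem -- so the least valid N grows like e^n, beating every polynomial
# bound p(n), which is why I-Delta-0 can't prove the lemma.
for n in (10, 100, 1000):
    print(n, math.log(lcm_up_to(n)) / n)
```

The printed ratios drift toward 1 as $n$ grows, in line with $\mathrm{LCM}(1,\dots,n)\approx e^n$.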

This example really helps me understand the more complicated examples of statements which are known to be undecidable in PA because of fast growing functions, like the Paris–Harrington theorem. I always run into a psychological roadblock with examples like Paris–Harrington, because the encoding of those statements into formal language is so complex. This example is straightforwardly a number theoretic statement, so I think I’ll use it as my standard example of a statement which is undecidable for growth rate reasons in the future.

I’ll point out that there is plenty of stuff which is provable in $I{\Delta }_{0}$. I got through showing “if $x$ divides $y$ then $x\le y$”, “every positive integer is either of the form $2k$ or $2k+1$”, “if $a$ divides $c$ and $b$ divides $c$, then $\mathrm{LCM}\left(a,b\right)$ divides $c$”, and several other standard examples of induction in elementary number theory before trying this one.

Posted by: David Speyer on October 16, 2011 9:01 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

I have two naive questions:

On the wikipedia page “ordinal analysis”, RFA (rudimentary function arithmetic) is mentioned as having proof-theoretic ordinal omega^2, but nothing is said about it. Has anyone here heard of it? Is it EFA minus exponentiation?

Even if some systems may be too weak to be assigned proof-theoretic ordinals, is it possible to make sense of “if that system had a proof-theoretic ordinal in any reasonable sense, then this ordinal would be …”? In view of the wikipedia page on the Grzegorczyk hierarchy (which gives systems of strength omega^n), it is tempting to say that Presburger arithmetic “should” have strength omega.

Posted by: ben on October 12, 2011 5:26 AM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

While naive, these questions are not sufficiently naive for me to answer them. So, I hope someone else can! They’re interesting.

Posted by: John Baez on October 12, 2011 5:37 AM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

What’s an interesting result about PRA?

It is suggested that PRA is an upper bound for what Hilbert considered to be finitistic reasoning.

Is there some other well-known way to rank them?

There are (e.g. by the class of their provably total functions), but I guess you want something similar to ordinal analysis. In that case check Arnold Beckmann’s project on dynamic ordinal analysis.

How interesting can they get before the hand of Gödel comes down and smashes them out of existence?

For most purposes the bounded arithmetic theory ${V}^{0}$ (which is quite similar to $I{\Delta }_{0}$) is the natural starting point. The provably total functions of ${V}^{0}$ are exactly the ${\mathrm{AC}}^{0}$ functions (the smallest complexity class complexity theorists usually consider). For comparison, the provably total functions of $I{\Delta }_{0}$ are the Linear Time Hierarchy (LTH). ${V}^{0}$ is capable of talking about sequences using a better encoding than Gödel’s beta function (write the numbers in the sequence in binary, add 2 between each consecutive pair, read in base 4). It can also check whether a given number encodes the computation of a given Turing machine on a given input.
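That sequence coding is easy to make concrete. A Python sketch (my own rendering of Kaveh's description; the official $V^0$ coding may differ in details, and this version assumes positive entries, since a leading binary 0 would be lost):

```python
def encode_seq(nums):
    # write each entry in binary, put the digit 2 between consecutive
    # entries, then read the whole digit string in base 4
    return int("2".join(format(n, "b") for n in nums), 4)

def decode_seq(code):
    digits = ""
    while code:
        digits = str(code % 4) + digits
        code //= 4
    # split the base-4 digits on the separator 2; each chunk is binary
    return [int(chunk, 2) for chunk in digits.split("2")]
```

For example, `encode_seq([5, 3])` produces the base-4 reading of the digit string `101211`.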

But a more natural theory to work with might be ${\mathrm{VTC}}^{0}$, whose provably total functions are the complexity class ${\mathrm{TC}}^{0}$, which can also parse syntax. See Cook and Nguyen (draft) for more information.

I think self-verifying theories that can prove their own consistency (in the usual formalization) are artificial. For more information about them see:

Dan Willard, “Self Verifying Axiom Systems, the Incompleteness Theorem and the Tangibility Reflection Principle”, Journal of Symbolic Logic 66 (2001) pp. 536-596.

Dan Willard, “An Exploration of the Partial Respects in which an Axiom System Recognizing Solely Addition as a Total Function Can Verify Its Own Consistency”, Journal of Symbolic Logic 70 (2005) pp. 1171-1209.

I just spent a bit playing around with $I{\Delta }_{0}$ to get a sense for it. I wanted to build a Gödel coding …

You cannot prove that in $I{\Delta }_{0}$ because that would give an exponentially growing function while $I{\Delta }_{0}$ is a polynomially bounded theory.

On the wikipedia page “ordinal analysis”, RFA (rudimentary function arithmetic) is mentioned as having proof-theoretic ordinal ${\omega }^{2}$, but nothing is said about it.

Rudimentary sets are defined in Smullyan 1961. They are essentially ${\Delta }_{0}=\mathrm{LTH}$. I am not sure about the theory RFA but I would guess it is essentially $I{\Delta }_{0}$. EFA is $I{\Delta }_{0}+\mathrm{EXP}$ (where $\mathrm{EXP}$ expresses that the exponential function is total).

Even if some systems may be too weak to be assigned proof-theoretic ordinals …

See above (the part about Beckmann’s research).

Posted by: Kaveh on October 13, 2011 6:42 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

Thanks for posting this, John.

(Small quibble though - you have “$Q$ only has inductive definitions of addition, multiplication and exponentiation”, but $Q$ lacks the primitive recursive defn for exponentiation. Those axioms would be:

$\left({E}_{1}\right)$: $\forall x\left({x}^{0}=S\left(0\right)\right)$

$\left({E}_{2}\right)$: $\forall x\forall y\left({x}^{S\left(y\right)}=x×{x}^{y}\right)$

But in standard logic, simply assuming a function symbol more or less presupposes the totality of the corresponding function, i.e., exponentiation in this case. I.e., if we have a primitive binary symbol (i.e., ${x}^{y}$), then $\forall x\forall y\exists z\left(z={x}^{y}\right)$ is a theorem of logic! For if $f$ is, say, a 1-place function symbol, then $⊢\forall x\left(f\left(x\right)=f\left(x\right)\right)$. And this gives us $⊢\forall x\exists y\left(y=f\left(x\right)\right)$. Quick indirect proof: suppose $\exists x\forall y\left(y\ne f\left(x\right)\right)$. So, $\forall y\left(y\ne f\left(c\right)\right)$, by introducing a new constant $c$; which gives the contradiction $f\left(c\right)\ne f\left(c\right)$.)

When Joel David Hamkins says that any weak system in the hierarchy $I{\Sigma }_{n}$ simulates computation, I think (I am guessing) he just means that any recursive function is representable in $Q$ and its extensions. E.g., if $f:{ℕ}^{p}\to ℕ$ is a partial recursive function, then there is an ${L}_{A}$-formula $\varphi \left(y,{x}_{1},\dots ,{x}_{p}\right)$ such that, for all $k,{n}_{1},\dots ,{n}_{p}\in ℕ$,

if $k=f\left({n}_{1},\dots ,{n}_{p}\right)$, then $Q⊢\forall y\left(y=\underline{k}↔\varphi \left(y,\underline{{n}_{1}},\dots ,\underline{{n}_{p}}\right)\right)$

In particular, exponentiation, $f\left(a,b\right)={a}^{b}$, is recursive. So, there is an ${L}_{A}$-formula $\mathrm{Exp}\left(y,{x}_{1},{x}_{2}\right)$ such that, for all $n,m,k\in ℕ$,

if $k={n}^{m}$, then $Q⊢\forall y\left(y=\underline{k}↔\mathrm{Exp}\left(y,\underline{n},\underline{m}\right)\right)$

So, $\mathrm{Exp}\left(y,{x}_{1},{x}_{2}\right)$ represents exponentiation. However, $Q$ cannot prove it total. I.e., for any such representing formula $\mathrm{Exp}$,

$Q⊬\forall {x}_{1}\forall {x}_{2}\exists y\mathrm{Exp}\left(y,{x}_{1},{x}_{2}\right)$

It’s a long time since I worked through some of the details of bounded arithmetic, and my copy of Hájek and Pudlák is in Munich. So I can’t see immediately how to give a model for this. Still, $Q$ is very, very weak and here is a simple non-standard model $𝒜$. (From Boolos and Jeffrey’s textbook.) Let $A=\mathrm{dom}\left(𝒜\right)=\omega \cup \left\{a,b\right\}$, where $a$ and $b$ are new objects not in $\omega$. These will behave like “infinite numbers”. We need to define functions ${S}^{𝒜},{+}^{𝒜}$ and ${×}^{𝒜}$ on $A$ interpreting the ${L}_{A}$-symbols $S$, $+$ and $×$. Let ${S}^{𝒜}$ have its standard values on $n\in \omega$ (i.e., ${S}^{𝒜}\left(2\right)=3$, etc.), but let ${S}^{𝒜}\left(a\right)=a$ and ${S}^{𝒜}\left(b\right)=b$. Similarly, $+$ and $×$ are interpreted standardly on $\omega$, but one can define odd addition and multiplication tables for the values of $a{+}^{𝒜}a$, $a{+}^{𝒜}b$, $a{×}^{𝒜}b$, etc. Then one proves $𝒜\vDash Q$, even though $𝒜\ncong ℕ$. This model $𝒜$ is such that:

(i) $𝒜⊭\forall x\forall y\left(x+y=y+x\right)$

(ii) $𝒜⊭\forall x\forall y\left(x×y=y×x\right)$

So, this tells us that $Q$ doesn’t prove that $+$ and $×$ are commutative.
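For what it's worth, here is one consistent way to fill in those tables, spot-checked by brute force on a finite slice of the domain (the specific table values are my own choice, verified against Q's recursion axioms, not necessarily the ones in Boolos and Jeffrey):

```python
# A finite slice of a Boolos-Jeffrey-style nonstandard model of Q: the
# naturals plus two extra points a, b with S(a) = a and S(b) = b.
A, B = "a", "b"
INF = (A, B)

def S(x):
    return x if x in INF else x + 1

def add(x, y):
    if y in INF: return y            # x + e = e  (so a + b = b, b + a = a)
    if x in INF: return x            # e + n = e, forced by x + S(y) = S(x + y)
    return x + y

def mul(x, y):
    if y == 0:   return 0                        # x * 0 = 0
    if y in INF: return x if x in INF else y     # forced: a*b = a, b*a = b
    if x in INF: return x                        # e * n = e for n >= 1
    return x * y

dom = list(range(6)) + [A, B]
# spot-check Q's recursion axioms on the slice
assert all(add(x, 0) == x for x in dom)
assert all(mul(x, 0) == 0 for x in dom)
assert all(add(x, S(y)) == S(add(x, y)) for x in dom for y in dom)
assert all(mul(x, S(y)) == add(mul(x, y), x) for x in dom for y in dom)
# ... and commutativity fails at the infinite points
assert add(A, B) != add(B, A) and mul(A, B) != mul(B, A)
```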

I don’t think this simple model $𝒜$ with two infinite elements, $a$ and $b$, is enough to show that $Q$ doesn’t prove that exponentiation is total.

The idea would be to find a model $ℬ\models Q$ such that $ℬ⊭\forall {x}_{1}\forall {x}_{2}\exists y\varphi \left(y,{x}_{1},{x}_{2}\right)$, for any ${L}_{A}$-formula $\varphi \left(y,{x}_{1},{x}_{2}\right)$ that represents $f\left(a,b\right)={a}^{b}$ in $Q$. I don’t know off-hand what such a model $ℬ$ looks like though.

Jeff

Posted by: Jeffrey Ketland on October 13, 2011 8:20 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

Jeffrey writes:

Small quibble though - you have “Q only has inductive definitions of addition, multiplication and exponentiation”, but Q lacks the primitive recursive defn for exponentiation.

Whoops. I don’t know how I made that mistake. I’ve changed that in my blog post, and corrected a typo of yours in return. Thanks!

Q is indeed incredibly weak!

Posted by: John Baez on October 14, 2011 5:48 AM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

Thanks, John. Looking at Hamkins’s reply properly, my guess about it above was wrong; he is talking about Tennenbaum’s Theorem (which does apply to $I{\Delta }_{0}$; Richard Kaye (University of Birmingham, UK) has a nice paper on his webpages explaining these results).

Jeff

Posted by: Jeffrey Ketland on October 14, 2011 2:09 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

On one of Kaveh’s points, another property of PRA is that if $\varphi$ is a ${\Pi }_{1}$-sentence, then:

$I{\Sigma }_{1}⊢\varphi$ iff PRA $⊢\varphi$. (Parsons 1970)

Yes, Tait has argued that PRA represents the upper limit on what a finitist should “accept”. However, I think that Kreisel had argued earlier that it should be PA.

Jeff

Posted by: Jeffrey Ketland on October 13, 2011 8:51 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

Here’s another system (call it FPA) which proves its own consistency. Work in second-order logic, with predicative comprehension. Let 0 be a constant; let N be a (1-ary) predicate, meant to represent being a natural number; and let S be a (2-ary) relation, meant to represent the successor relation. Do *not* assume the totality of S. Instead assume

(1) S is functional, i.e. Nx and Sx,y and Sx,z implies y = z
(2) S is one-to-one, i.e. Nx and Ny and Sx,z and Sy,z implies x = y
(3) (for all n) not Sn,0
(4) Induction (full induction, as a schema)

Because the totality of S is not assumed, FPA has the singleton model {0}. It also has all the initial segments as models, as well as the standard model (well, whichever of those models actually exist). In a nutshell, FPA is “downward”; if you assume that a number n exists, then all numbers less than n exist and behave as you expect. Most of arithmetic is, in fact, “downward,” so FPA can prove most familiar propositions, or at least versions of them. It can prove (unlike Q) the commutative laws of addition and multiplication. It can prove Quadratic Reciprocity. It cannot prove that there are an infinite number of primes (it cannot even prove the existence of 2, after all), but it can prove that between n/2 and n for any n > 2, there always exists a prime. It’s not far-fetched to think that FPA can prove Fermat’s Last Theorem. So, mathematically anyway, it’s pretty strong. (Still it’s neither stronger nor weaker than Q. It’s incomparable, because it assumes induction, which Q does not, but does not assume the totality of successoring, which Q does.)
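A tiny sanity check that the initial segments really do satisfy axioms (1)–(3) with a partial successor relation (a Python sketch of the "downward" idea; full induction is a schema over all predicates and isn't machine-checked here, and the helper name is hypothetical):

```python
def is_fpa_successor_model(S):
    # S is a set of (x, y) pairs meaning "y is the successor of x"
    functional = all(y1 == y2 for (x1, y1) in S for (x2, y2) in S if x1 == x2)
    injective  = all(x1 == x2 for (x1, y1) in S for (x2, y2) in S if y1 == y2)
    no_succ_is_zero = all(y != 0 for (_, y) in S)
    return functional and injective and no_succ_is_zero

for n in range(5):
    # the initial segment {0, ..., n}: n itself has no successor, so S is partial
    S = {(x, x + 1) for x in range(n)}
    assert is_fpa_successor_model(S)   # includes the singleton model {0} at n = 0
```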

In particular FPA can talk about syntax because syntactical elements can be defined in a downward way. Something is a term if it can be decomposed in a particular way. Something is a wff if it can be decomposed in a particular way. Etc.

Now, to prove its own consistency, it suffices for FPA to show that the assumption of a proof in FPA of “not 0 = 0” leads to a contradiction. But a proof is a number (in Godel’s manner of representing syntactical elements) and, in fact, a very large number. This large number then gives enough room, in FPA, to formalize truth-in-the-singleton-model and to prove that any sentence in the inconsistency proof must be true. But “not 0 = 0” isn’t true. Contradiction! Therefore FPA has proven its own consistency.

Here’s a link to a book-long treatise, if it interests anyone:

It’s possible to formalize everything in a first-order system, if the second-order is bothersome for some.

Posted by: t on October 14, 2011 8:05 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

Wow, that’s quite interesting! Thanks!

Since this post grew out of our earlier discussions of ultrafinitism, I couldn’t help noting that this axiom system should be an ultrafinitist’s dream, since you can take any model of Peano Arithmetic, throw out all numbers $>n$, and be left with a model of this one!

Indeed I see Andrew Boucher writes:

Most sub-systems of Peano Arithmetic have focused on weakening induction. Indeed perhaps the most famous sub-system, Robinson’s Q, lacks an induction axiom altogether. It is very weak in many respects, unable for instance to prove the Commutative Law of Addition (in any version). Indeed, it is sometimes taken to be the weakest viable system; if a proposition can be proved in Q, then that is supposed to pretty much established that all but Berkleyan skeptics or fools are compelled to accept it.

But weakness of systems is not a linear order, and F is neither stronger nor weaker than Q. F has induction, indeed full induction, which Q does not. But F is ontologically much weaker than Q, since Q supposes the Successor Axiom. Q assumes the natural numbers, all of them, ad infinitum. So in terms of strength, F and Q are incomparable. In actual practice, F seems to generate more results of standard arithmetic; and so in that sense only, it is “stronger”.

One of the most important practitioners of Q has been Edward Nelson of Princeton, who has developed a considerable body of arithmetic in Q. While Nelson’s misgivings with classical mathematics seemed to have their source in doubts about the existence of the natural numbers, the brunt of his skepticism falls on induction, hence his adoption of Q. “The induction principle assumes that the natural number series is given.” [p. 1, Predicative Arithmetic] Yet it would seem that induction is neither here nor there when it comes to ontological supposition. Induction states conditions for when something holds of all the natural numbers, and says nothing about how many or what numbers there are. So a skeptic about the natural numbers should put, so to speak, his money where his doubts are, and reject the assumption which is generating all those numbers — namely the Successor Axiom — and leave induction, which those doubts impact at worst secondarily, alone.

He also mentions other systems capable of proving their own consistency:

A number of arithmetic systems, capable of proving their own consistency, have become known over the years. Jeroslow [Consistency Statements] had an example, which was a certain fixed point extension of $Q\vee \forall x\forall y\phantom{\rule{thinmathspace}{0ex}}\left(x=y\right)$. More recently, Yvon Gauthier [Internal Logic and Internal Consistency] used indefinite descent and introduced a special, called “effinite”, quantifier. And Dan Willard [Self-Verifying Axiom Systems] has exhibited several cases, based on seven “grounding” functions. These systems lack a certain naturalness and seem to be constructed for the express purpose of proving their own consistency. Finally, Panu Raatikainen constructed what is effectively a first-order, weaker variant of F; this system can prove that it has a model [Truth in a Finite Universe], but its weakness does not allow the author to draw conclusions about intensional correctness and so it seems to fall short of the ability to prove its own self-consistency.

Posted by: John Baez on October 15, 2011 7:43 AM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

I remember Andrew Boucher describing his theory $F$ on sci.logic years back; the problem is that it doesn’t interpret syntax (e.g., Tarski’s $\mathrm{TC}$). (The current state of play is that $\mathrm{TC}$ is interpretable in $Q$.)

The language ${L}_{F}$ is a second-order language with a binary relation symbol $S$ instead of the usual unary function symbol. Even with this small modification (so as to drop the automatic existence of successors), the syntax of ${L}_{F}$ is still that of a standard language, with, say, symbols $0$, $S$, $=$ and $¬$, $\to$, $\forall$ and variables $v,{v}^{\prime },{v}^{\prime \prime }$, etc. It is straightforward to prove, based merely on the description of ${L}_{F}$ and the usual assumptions about concatenation, that:

$\mid {L}_{F}\mid ={\aleph }_{0}$.

So the language ${L}_{F}$ itself is countably infinite. Denying the existence of numbers while asserting the existence of infinitely many syntactical entities is incoherent, for one of Gödel’s basic insights is that syntax = arithmetic.

Suppose we then try to interpret the syntax of ${L}_{F}$ in $F$ itself. Ignore the second-order part, as it introduces needless complexities. In the metatheory, suppose we assign Gödel codes as follows:

$#\left(0\right)=1$,

$#\left(S\right)=2$,

$#\left(=\right)=3$,

$#\left(¬\right)=4$,

$#\left(\to \right)=5$,

$#\left(\forall \right)=6$,

$#\left(v\right)=7$,

$#\left(\prime \right)=8$.

Incidentally, this already goes beyond $F$ itself, as the metatheory already implicitly assumes the distinctness of these numbers. How would this be done, given that one cannot even prove the existence of 1?

In ${L}_{A}$, we encode any string (sequence of primitive symbols) as the sequence of its codes, and we encode a sequence $\left({n}_{1},\dots ,{n}_{k}\right)$ of numbers as a sequence number, e.g., as

$⟨{n}_{1},\dots ,{n}_{k}⟩={p}_{1}^{{n}_{1}+1}\times \dots \times {p}_{k}^{{n}_{k}+1}$.

For example, the string $\forall \forall S$ is really the sequence $\left(\forall ,\forall ,S\right)$, and is coded as the sequence $\left(6,6,2\right)$, which becomes the sequence number ${2}^{7}×{3}^{7}×{5}^{3}$.
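Concretely, this coding is easy to mechanize. Here is a small Python sketch of the scheme just described (the function names are mine):

```python
def primes(k):
    """Return the first k primes by trial division (fine for tiny k)."""
    ps, n = [], 2
    while len(ps) < k:
        if all(n % p for p in ps):
            ps.append(n)
        n += 1
    return ps

# Gödel codes for the primitive symbols, as in the table above
CODE = {'0': 1, 'S': 2, '=': 3, '¬': 4, '→': 5, '∀': 6, 'v': 7, "'": 8}

def seq_number(string):
    """Code a string as the sequence number p1^(n1+1) × ... × pk^(nk+1)."""
    ns = [CODE[c] for c in string]
    out = 1
    for p, n in zip(primes(len(ns)), ns):
        out *= p ** (n + 1)
    return out

print(seq_number('∀∀S'))  # 2**7 * 3**7 * 5**3 = 34992000
```

The exponent is $n+1$ rather than $n$ so that distinct sequences always get distinct codes (a trailing symbol with code $0$ would otherwise be invisible).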

But what, in $F$, is the corresponding numeral for any expression of the language ${L}_{F}$? In the usual language ${L}_{A}$ of arithmetic, an expression $ϵ$ with code $n$ is assigned the numeral $\underline{n}$, written $\left[ϵ\right]$, which is $S\dots S0$. That is, $0$ prefixed by $n$ occurrences of $S$, where $S$ is a function symbol. (Can’t get “ulcorner” to work!)

How would this work in $F$? Since $S$ in ${L}_{F}$ is a relation symbol rather than a function symbol, there are no canonical numerals to assign. Consequently, $F$ does not numeralwise represent non-identity of syntactical entities.

For example, in syntax we have

$A$: “The quantifier $\forall$ is distinct from the conditional $\to$”.

Under the coding above, this becomes

${A}^{\circ }$: $\underline{6}\ne \underline{5}$,

which is trivially provable in $Q$.

Now it’s very unclear to me how one even expresses $\underline{6}\ne \underline{5}$ in ${L}_{F}$. But however it is done, we get that

$F⊬\underline{6}\ne \underline{5}$.

A requirement on a theory $T$ that interprets syntax is that, for expressions ${ϵ}_{1},{ϵ}_{2}$, we have unique singular terms $\left[{ϵ}_{1}\right]$, $\left[{ϵ}_{2}\right]$ such that,

if ${ϵ}_{1}\ne {ϵ}_{2}$ then $T⊢\left[{ϵ}_{1}\right]\ne \left[{ϵ}_{2}\right]$

But $F$ doesn’t give this. Instead, we have

$F⊬\left[\forall \right]\ne \left[\to \right]$.

So, alleged “agnosticism” about numbers has become “agnosticism” about syntax. Which contradicts the non-agnosticism of the description of the syntactical structure of ${L}_{F}$ itself.

There is no faithful interpretation of the syntax of ${L}_{F}$ into $F$. So, syntactical claims about the properties of $F$ cannot be translated into $F$. The meta-theory of the syntax of $F$ already assumes an infinity of distinct syntactical entities.

In particular, claims about consistency cannot be translated into $F$.

Jeff

Posted by: Jeffrey Ketland on October 15, 2011 2:59 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

Thanks for your comment. Unfortunately, I don’t think it’s quite right and F does indeed interpret syntax adequately, so that it does express notions of consistency.

First, just as F is agnostic about the infinity of the natural numbers, it is agnostic about the infinity of the syntax. The infinity of syntax comes from assuming that there are an infinite number of variables; F doesn’t make this assumption. I guess a stickler might say this is no longer second- (or first-) order logic because these assume that there are an infinite number of variable symbols. But I would hope most would agree this is not an essential feature of the logic.

JK: “Incidentally, this already goes beyond F itself, as the metatheory already implicitly assumes the distinctness of these numbers. How would this be done, given that one cannot even prove the existence of 1?”

While one cannot prove that 1 exists, it is possible to prove that anything which *is* one is unique (and so distinct). That is, it is possible to define a predicate one(x) as (Nx and S0,x). It is not possible to prove that there exists x s.t. one(x), but it *is* possible to prove that (x)(y)(one(x) and one(y) implies x=y). So proof of existence, no; proof of distinctness, yes. One can define two(x) as (Nx and there exists y such that one(y) and Sy,x). And so forth as far as one wants or has energy to go.

Moreover, one can define the concepts of odd and even. One defines even(x) iff Nx and (there exists y)(y+y = x). Again, no assertion that one can prove that even(x) or odd(x) for any x. But one *can* prove that there is no x such that both even(x) and odd(x). Again, existence no, distinctness yes.
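The “existence no, distinctness yes” pattern can be seen semantically in a small finite structure. A Python sketch (the bound M and these set comprehensions are my illustration, not part of F itself):

```python
# A hypothetical finite domain {0, ..., M}; we compute the extensions
# of the even/odd predicates defined above and check they are disjoint.
M = 10
dom = range(M + 1)

# even(x): ∃y (y + y = x);  odd(x): ∃y (S(y + y) = x)
even = {x for x in dom if any(y + y == x for y in dom)}
odd = {x for x in dom if any(y + y + 1 == x for y in dom)}

# "existence no, distinctness yes": no element is both even and odd
assert even & odd == set()
```

The point is that disjointness of the two predicates holds in every such initial segment, regardless of how far the domain happens to extend.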

So one can represent the syntax. Define predicates one, two, three, …, ten. Define Big(x) as Nx and not one(x) and not two(x) and … and not ten(x). Then x represents a left parenthesis if x = 0. x represents a right parenthesis if one(x). x represents the implication sign if two(x). x represents the negation sign if three(x). x represents the equal sign if four(x). And so forth. x represents a small-letter variable if Big(x) and even(x). x represents a big-letter variable if Big(x) and odd(x).

One gives the usual recursive definitions to syntactical entities like AtomicWff(x) and Proof(x). Again, one cannot show there exist any x such that AtomicWff(x). But one can show that, *if* AtomicWff(x), then x has all the properties that it should have.

So, given that F cannot prove there exist any syntactical entities, how can it prove its own consistency? Because consistency means there is no proof of “not 0 = 0”. So a proof of consistency is not a proof that something exists, but a proof that something does not exist. It *assumes* the existence of a syntactical entity, in this case a proof of “not 0 = 0”, and shows that the assumption of the existence of this entity leads to a contradiction. Thus F is able to prove a system’s consistency. (What F cannot do is prove that a system is inconsistent, because then it would have to prove that there exists something, namely a proof of “not 0 = 0”, and that it cannot do.)

Anyway, all this is described in gory detail in the link that I gave.

Posted by: t on October 15, 2011 5:54 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

“The infinity of syntax comes from assuming that there are an infinite number of variables; F doesn’t make this assumption.”

This is not correct. A propositional language $L$ with a single unary connective $¬$ and a single atom $p$ has infinitely many formulas. So,

$\mathrm{Form}\left(L\right)=\left\{p,¬p,¬¬p,\dots \right\}$

and

$\mid \mathrm{Form}\left(L\right)\mid ={\aleph }_{0}$.

The potential infinity here is a consequence of the implicit assumptions governing the concatenation operation $*$. Formulas are, strictly speaking, finite sequences of elements of the alphabet. It is assumed that sequences are closed under concatenation: if $\alpha ,\beta$ are sequences, then $\alpha *\beta$ is a sequence.

“I guess a stickler might say this is no longer second- (or first-) order logic because these assume that there are an infinite number of variable symbols.”

As noted, it has nothing to do with variables. The strings of the propositional language $L$ above form an $\omega$-sequence. In general, if $\alpha$ and $\beta$ are strings from the language $L$, then $\alpha *\beta$ is a string. This is simply assumed.

“But I would hope most would agree this is not an essential feature of the logic.”

That any standard language $L$ for propositional logic (containing at least one atom and one connective) or first-order logic has cardinality ${\aleph }_{0}$ is usually a preliminary exercise in logic.

Jeff

Posted by: Jeffrey Ketland on October 15, 2011 10:27 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

Well, obviously I wouldn’t assume the totality of the concatenation operator.

“As noted, it has nothing to do with variables.” This is not correct. Your language is infinitary if the number of variables is infinite.

“That any standard language L for propositional logic (containing at least one atom and one connective) or first-order logic has cardinality ℵ0 is usually a preliminary exercise in logic.”

Of course. But it’s not an essential feature of the logic, in the sense one could give an adequate description of the logic without this feature.

Posted by: t on October 16, 2011 10:50 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

Indeed, the infinitude of variable symbols is entirely a red herring. For those who want a finite alphabet, the standard (AIUI) solution is to have a symbol x and a symbol ' such that x is a variable and $v$' is a variable whenever $v$ is. (Thus the variable symbols are x, x', x'', etc.)

Posted by: Toby Bartels on October 27, 2011 3:02 AM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

t, “One gives the usual recursive definitions to syntactical entities like AtomicWff(x) and Proof(x).”

What, exactly, are these entities $\mathrm{AtomicWff}\left(x\right)$ and $\mathrm{Proof}\left(x\right)$? How many symbols do they contain? Are they distinct? How does one prove this? Have you ever tried to estimate how many symbols occur in the arithmetic translation of the sentence

“the formula $\forall x\left(x=x\right)$ is the concatenation of $\forall x$ with $\left(x=x\right)$”?

You’re assuming something that you then claim to “doubt”. You do not, in fact, “doubt” it: you assume it.

One never says, in discussing the syntax of a language, “if the symbol $\forall$ is distinct from the symbol $v$ …”. Rather, one says, categorically, “the symbol $\forall$ is distinct from the symbol $v$”. The claim under discussion amounts to the view that one ought to be “agnostic” about the distinctness of, for example, the strings $\forall x\left(x=0\right)$ and $\forall y\left(y\ne 0\right)$.

One can write down a formal system of arithmetic which has a “top” - called “arithmetic with a top”. But it is not as extreme as $F$. Such theories have been studied in detail by those working in computational complexity and bounded arithmetic (see, e.g., the standard monograph by Hajek and Pudlak, which I don’t have with me). See, e.g., this:

http://www.math.cas.cz/~thapen/nthesis.ps

Agnosticism about numbers = agnosticism about syntax. You can’t have your “strict finitist” cake, while eating your syntactic cake, as they’re the same cake!

Jeff

Posted by: Jeffrey Ketland on October 15, 2011 9:52 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

“What, exactly, are these entities AtomicWff(x) and Proof(x)?”

They are syntactical entities. I could write them down for you explicitly here, but as you can probably tell, I’m not gifted at writing down logical symbols in these comments. Or you can look at the top of page 110 and on page 111 of the link, where you will find them already written down explicitly.

“Are they distinct? How does one prove this?”

I’m not sure whether you are talking about meta-theory or theory. In the theory F, if you assume there exists something which represents AtomicWff(x) and another thing which represents Proof(x), then you would be able to prove these things distinct, because their ith symbols will be different for some i. But one doesn’t need to prove this, certainly not in the proof that the system is consistent. In the meta-theory the two syntactical entities are different, and you see this by writing them down.

“You’re assuming something that you then claim to ‘doubt’.”

No, I’m not. Again, you seem to be confusing meta-theory with theory, or assuming that there must be some tight connection between them. You can’t prove that 1 exists in F. You agree, right? So F makes no assumptions that I doubt. Sure, I can write down a formula in F which has more than one symbol. So? That has no bearing on what F does or does not assume. In any case my doubts are not that 1 exists, or that 10 exists, but that *every* natural number has a successor. And the fact that I can write down a formula with 1 million symbols (well, if you pay me enough) cannot erase my doubts, nor has any bearing on these doubts.

“One never says, in discussing the syntax of a language, “if the symbol ∀ is distinct from the symbol v …”.

Your manner of expression is again not clear. “One never says…” Are you talking theory, meta-theory, what? F can prove: “if the symbols ∀ and v exist (or to be more precise, if the numbers used to represent them exist), then they are distinct.”

“Rather, one says, categorically, “the symbol ∀ is distinct from the symbol v”.” Well, F cannot prove that the numbers representing the symbols exist. But, in order to prove the consistency of itself, F doesn’t need to. Proving the consistency of a system does not require F to show that anything exists. Rather, it has to show that something does *not* exist.

“The claim under discussion amounts to the view that one ought to be “agnostic” about the distinctness of, for example, the strings ∀x(x=0) and ∀y(y≠0).”

No, no, no. For some reason you are hung up on distinctness. F can prove distinctness. Again, it can prove that if these strings (or more precisely, the sequences representing them) exist, then they are distinct. So F is most certainly not agnostic about their distinctness. All that F cannot prove is: the strings exist.

“One can write down a formal system of arithmetic which has a “top” - called “arithmetic with a top”. But it is not as extreme as F. ” Again, you are making imprecise claims. F allows for the possibility of the standard model. Formal systems with a “top” do not. Everything that F proves will be true in PA. There are things that “top” formal systems prove that are false in PA. So what on earth does “extreme” mean?

Posted by: t on October 15, 2011 11:10 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

“So what on earth does ‘extreme’ mean?”

A theory of syntax that doesn’t prove that $\forall$ is distinct from $=$?

Posted by: Jeffrey Ketland on October 15, 2011 11:34 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

ROTFL. Ok, you win. I’ll grant you that F doesn’t “prove that symbols are distinct” in the sense of “prove that they exist.” And I’ll grant you that this means that its “theory of syntax” is “extreme.”

Still, in order to prove that a system is consistent, one can work with an “extreme” “theory of syntax” which doesn’t “prove that symbols are distinct” because, to prove a system is consistent, one needs to prove that something *doesn’t* exist, not to prove that something *does*. (In your terminology, would this be, “one needs to prove that something isn’t distinct, not to prove that something is”??) If you or anyone else thinks that F is inconsistent, then you must come up with a proof of “not 0=0”. And, by the mere fact of that proof supposedly existing, F can show that it is able to model truth-in-{0} for the statements in the proof and so that “not 0 = 0” cannot be a statement in the proof. Contradiction. Therefore you, or anyone else, cannot come up with a proof. And since all this reasoning can be done in F, F can prove its own consistency. It’s that simple.

Posted by: t on October 16, 2011 8:08 AM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

t, I see what you wish to do with this theory $F$. But you lack numerals, since $S$ is not a function symbol. So, instead, for example, one might express $0\ne 1$ by a formula

$\forall x\forall y\left(\left(\underline{0}\left(x\right)\wedge \underline{1}\left(y\right)\right)\to x\ne y\right)$,

where the formulas $\underline{n}\left(x\right)$ are given by a recursive definition

$\underline{0}\left(x\right)⇔x=0$

$\underline{n+1}\left(x\right)⇔\exists y\left(\underline{n}\left(y\right)\wedge S\left(x,y\right)\right)$

So, to assert the existence of the number $7$, for example, you have $\exists x\left(\underline{7}\left(x\right)\right)$. And, presumably, for all $k\le n$,

$F⊢\exists x\left(\underline{n}\left(x\right)\right)\to \exists x\left(\underline{k}\left(x\right)\right)$.

Then define $\left(\mathrm{NotEq}\right)_{n,k}$ to be the formulas

$\forall x\forall y\left(\left(\underline{n}\left(x\right)\wedge \underline{k}\left(y\right)\right)\to x\ne y\right)$

Then I believe one has: for all $n,k\in N$,

If $n\ne k$, then $F⊢\left(\mathrm{NotEq}{\right)}_{n,k}$

As for syntactic coding, since $\forall$ is coded as 6 and $=$ as $3$, then $F$ can define, e.g.,:

$\underline{\forall }\left(x\right)⇔\underline{6}\left(x\right)$

$\underline{=}\left(x\right)⇔\underline{3}\left(x\right)$

Then (I think), $F$ does prove the distinctness of $\forall$ and $=$ in a conditional manner, namely,

$F⊢\forall x\forall y\left(\left(\underline{\forall }\left(x\right)\wedge \underline{=}\left(y\right)\right)\to x\ne y\right)$
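These conditional distinctness facts can be checked semantically in a finite model where successor is partial. A Python sketch (the model — an initial segment with no successor for the top element — and the function names are mine; $S(y,x)$ is read as “$x$ succeeds $y$”):

```python
def numeral_ext(n, top):
    """Extension of the numeral predicate n̲(x) in the model {0, ..., top},
    where S(y, x) holds iff x = y + 1, so 'top' has no successor."""
    ext = {0}                  # 0̲(x) iff x = 0
    for _ in range(n):         # (n+1)̲(x) iff ∃y (n̲(y) ∧ S(y, x))
        ext = {y + 1 for y in ext if y + 1 <= top}
    return ext

def noteq(n, k, top):
    """(NotEq)_{n,k}: ∀x∀y ((n̲(x) ∧ k̲(y)) → x ≠ y)."""
    return all(x != y
               for x in numeral_ext(n, top)
               for y in numeral_ext(k, top))

top = 5
assert numeral_ext(3, top) == {3}     # ∃x 3̲(x) holds in this model
assert numeral_ext(7, top) == set()   # ∃x 7̲(x) fails: '7 does not exist'
assert noteq(3, 4, top)               # distinctness holds, genuinely
assert noteq(6, 7, top)               # distinctness holds, vacuously
```

This illustrates why the conditional form of distinctness is provable even though existence is not: in a model with a top, the existential claims simply fail while the universally quantified conditionals remain true.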

But no, I don’t accept that $F$ “proves its own consistency”. Just to begin with, one doesn’t have a proof predicate which strongly represents the proof relation for $F$.

And to return to the central issue, you are assuming the existence of a language ${L}_{F}$ whose cardinality (the cardinality of its set of formulas) is ${\aleph }_{0}$. You’re assuming this already in the metatheory. You already have ${\aleph }_{0}$ syntactical entities. What is the point of being “agnostic” about, say, the number $1$ if you are already assuming, in your informal metatheory, the existence of ${\aleph }_{0}$-many syntactical entities? In other words, I am doubting your “agnosticism”. You’re simply trying to have your syntactic cake while eating (i.e., professing) the “strict finitism” cake. It doesn’t work, because they are the same cake.

To repeat: from the point of view of ontology, interpretability, etc., syntax = arithmetic. The same thing. They can be modelled in each other. To “doubt” arithmetic while accepting syntax is incoherent.

To make it work, you need to develop a separate “strictly finite” syntax, for example à la Quine and Goodman 1947. It would have to drop the totality of concatenation on syntactical entities. It really isn’t worth bothering with, though, as it doesn’t work. At best, you simply end up reinventing, in a weird way, all the things that have been discussed countless times in the very rich research literature on nominalism. See, for example,

Burgess, J. and Rosen, G. 1997. A Subject with No Object. OUP.

Jeff

Posted by: Jeffrey Ketland on October 16, 2011 8:44 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

“(a lot of things snipped)”

You clearly haven’t read the linked paper, or even (I imagine) glanced over it, right? That doesn’t seem to faze you in the least, though, in making various definitive pronouncements.

“Just to begin with, one doesn’t have a proof predicate which strongly represents the proof relation for F.”

Well, you will have to give a reasoned argument why (the technical notion of) representability is essential to (the intuitive notion of) expressibility. Consider the simpler case of even(x), which can be defined in F as (there exists y)(y+y=x). Because of F’s ontological limitations, even(x) doesn’t represent evenness. Yet even(x) clearly expresses the notion of evenness. I think you can be most succinct in your point by noting that the Hilbert–Bernays provability conditions do not hold for the provability predicate in F. But as I mention in the linked paper, the Hilbert–Bernays conditions do not adequately capture the (intuitive) notion of provability.

“And to return to the central issue, you are assuming the existence of a language L F whose cardinality (the cardinality of its set of formulas) is ℵ0.”

If that’s the central issue, then you are wrong, as I am not. Look, you obviously haven’t read or thought hard about what I’ve done or written, so perhaps you should stop saying that I am making assumptions which I do not make. Right? That’s only fair, right?

“It would have to drop the totality of concatenation on syntactical entities.”

Obviously. I see now you have replied in another place about this, so I will now switch there.

Posted by: t on October 16, 2011 10:37 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

Sorry, it’s the end of the week-end, so this is the end of the road for my comments in this thread. If you’re interested, have a look at the link I put in my first comment. Thanks and bye.

Posted by: t on October 16, 2011 10:53 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

I’m trying to understand this discussion. It seems to me that Jeffrey Ketland is saying, roughly, that because our usual theory of syntax can prove that the system $F$ has infinitely many formulas, while $F$ has finite models (as well as infinite ones), the system $F$ is ‘incoherent’ as a theory of arithmetic. For example, he says:

To repeat: from the point of view of ontology, interpretability, etc., syntax = arithmetic. The same thing. They can be modelled in each other. To “doubt” arithmetic while accepting syntax is incoherent.

So the language ${L}_{F}$ itself is countably infinite. Denying the existence of numbers while asserting the existence of infinitely many syntactical entities is incoherent, for one of Gödel’s basic insights is that syntax = arithmetic.

But this is puzzling in two ways. First of all, I don’t think $F$ “denies the existence of numbers”: any model of Peano arithmetic will be a model of $F$, so you can have all the natural numbers you might want. There’s a difference between denying something and not asserting something.

But more importantly, I don’t really care whether $F$ is “incoherent” from the point of view of “ontology” due to some claimed mismatch between the syntax of theory $F$ (which has infinitely many formulas, according to standard mathematics) and the models $F$ has (which include finite ones). “Incoherent” and “ontology” are philosophical notions, but I’m a mere mathematician. So I’m much more interested in actual theorems about $F$.

If these theorems are proved in a metatheory that can prove $F$ has infinitely many formulas, that’s fine! — just make sure to tell me what metatheory is being used. And if someone has proved some other theorems, in a metatheory that can’t prove $F$ has infinitely many formulas — in other words, a metatheory that more closely resembles $F$ itself — that’s fine too! All I really want to know is what’s been proved, in what framework.

But I guess it all gets a bit tricky around Gödel’s 2nd incompleteness theorem. What does it mean for $F$ to “prove its own consistency”? I guess it means something like this. (I haven’t thought about this very much, so bear with me.) Using some chosen metatheory, you can prove

$F⊢\mathrm{Con}\left(F\right)$

where $\mathrm{Con}\left(F\right)$ is some statement in $F$ that according to the chosen metatheory states the consistency of $F$. The Hilbert-Bernays provability conditions are supposed to help us know what “states the consistency of $F$” means, but if you want to use some other conditions, that’s okay — as long as you tell me what they are. I can then make up my mind how happy I am.

Posted by: John Baez on October 18, 2011 11:06 AM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

t, yes, I’ve read your monograph. Actually, I read it several years ago, because I remember you discussing this quite a bit on sci.logic with Torkel and others.

Thanks for the comment, John. You’re right, I should have written “Being agnostic about numbers while asserting the existence of …” instead of “Denying the existence of numbers while asserting the existence of …”. But I’m not referring to $F$. I’m referring to the metatheory of $F$ and the views of the author! That is, $F$ is agnostic about, say, $1$, and is meant to represent the author’s agnosticism; however, the author’s metatheory is not agnostic about an $\omega$-sequence of syntactical entities, say,

$\left(0=0\right),¬\left(0=0\right),¬¬\left(0=0\right),\dots$

understood as types (i.e., finite sequences), not physical tokens.

John, “- just make sure to tell me what metatheory is being used. And if someone has proved some other theorems, in a metatheory that can’t prove $F$ has infinitely formulas; in other words, a metatheory that more closely resembles F itself - that’s fine too! All I really want to know is what’s been proved, in what framework.”

Right, agree 100%. I am arguing that the metatheory already contains arithmetic by interpretation. So the philosophical view, which is agnostic about numbers but accepts ${\aleph }_{0}$-many syntactical entities, doesn’t make sense, particularly if it is motivated by a philosophical scepticism about infinity. One cannot honestly pretend to be a strict finitist while adopting (as true) a finitary metatheory at the level of $\mathrm{PRA}$, say. Ok, that’s my sermon about the metatheory over!

Unfortunately, formalizing the metatheory would be hard: one would need to use a formal theory of concatenation, like Grzegorczyk’s $\mathrm{TC}$, along with some induction. It could be done. The resulting metatheory would be around the level of $I{\Sigma }_{1}$ (maybe $I{\Delta }_{0}$, maybe between).

(On the theory $\mathrm{TC}$, see, e.g.:

Visser, A. 2009. “Growing Commas. A Study of Sequentiality and Concatenation”, NDJFL 50.

There are various pieces of recent work on this too. $\mathrm{TC}$ is very closely related to $Q$.)

The properties of $F$ are a separate matter from the philosophical views and the metatheoretic assumptions. I’m a bit sceptical that $F$ proves its own consistency. This is because I’m sceptical that the ${L}_{F}$-formula being mooted to express consistency actually does express consistency of $F$! It might indeed prove this formula, but it might do so for some silly reason, rather than because it actually “thinks” that $F$ is consistent.

Normally, we have language $L$ which extends the usual first-order language ${L}_{A}$ of arithmetic (in particular, $S$ is a function symbol). Let $T$ be a recursively axiomatized theory in $L$. If $T$ is strong enough, there is a proof predicate ${\mathrm{Proof}}_{T}\left(y,x\right)$ which strongly represents (in $Q$, say) the recursive relation “$y$ is the code of a derivation in $T$ of a formula whose code is $x$”. One defines the provability predicate ${\mathrm{Prov}}_{T}\left(x\right)$ as $\exists y{\mathrm{Proof}}_{T}\left(y,x\right)$. One shows that ${\mathrm{Prov}}_{T}\left(x\right)$ satisfies the HB conditions. (This is where one usually waves one’s hands, and says, “$T$ must contain $I{\Sigma }_{1}$, or something like that”. One can get below $I{\Sigma }_{1}$ but it’s complicated.) One defines $\mathrm{Con}\left(T\right)$ as $¬{\mathrm{Prov}}_{T}\left(\left[\underline{0}=1\right]\right)$. Then the first incompleteness theorem can be formalized inside $T$, and so $T⊢\mathrm{Con}\left(T\right)\to G$; thus, if $T$ is consistent, $T⊬\mathrm{Con}\left(T\right)$.

But the language ${L}_{F}$ doesn’t have numerals for the simple reason that $S$ is not a function symbol, but a binary predicate (so, successor is not assumed to be total). So, there are no canonical numerals, and this absence makes expressing things very difficult, and it’s not clear what weak and strong representability of relations now means. It’s hard even to express $2=5$ or $2\ne 5$. To be provable, both have to be expressed using universal quantifiers and a sequence of formulas “$x$ is a $1$”, “$x$ is a $2$”, etc. Then the translation of $2\ne 5$ doesn’t seem to be the negation of the translation of $2=5$.

(I’m also pretty sure that $F$ does not have pairing either, i.e., a formula $\varphi \left(z,x,y\right)$ which means “$z$ is the ordered pair $\left(x,y\right)$”.)

Perhaps I am wrong. But all of this seems too extreme and probably pointless. There are other formal very weak systems for arithmetic that wouldn’t introduce such complexities but also have finite models of every non-zero cardinality. Andrew mentions such a system: a 2000 paper by Panu Raatikainen, “The Concept of Truth in a Finite Universe”, and Panu calls it “Truncated Arithmetic”. The axioms of $\mathrm{TA}$ are:

$\left(A1\right):¬\left(x<x\right)$

$\left(A2\right):x<y\vee x=y\vee y<x$

$\left(A3\right):\left(x<y\wedge y<z\right)\to x<z$

$\left(A4\right):¬\left(x<0\right)$

$\left(A5\right):S\left(x\right)=y\to \forall z¬\left(x<z\wedge z<y\right)$

$\left(A6\right):\left[x<y\vee x=y\right]\leftrightarrow x<S\left(y\right)$

$\left(A7\right):x+0=x$

$\left(A8\right):x+S\left(y\right)=S\left(x+y\right)$

$\left(A9\right):x×0=0$

$\left(A10\right):x×S\left(y\right)=\left(x×y\right)+x$

I.e., the usual successor axioms of $Q$

$S\left(x\right)\ne 0$

$S\left(x\right)=S\left(y\right)\to x=y$

have been dropped. But we have the standard numerals, $0$, $S\left(0\right)$, $S\left(S\left(0\right)\right)$, etc. This means that talk of representability of recursive functions and relations goes through as normal. But we can have a model $𝒜$ which satisfies $\exists x\left(S\left(x\right)=x\right)$. And a model $𝒜$ such that $𝒜\models \underline{2}=\underline{57}$.
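A concrete finite model of this kind, in which the successor “hits the wall” at a top element, can be sketched in a few lines of Python (this clamping interpretation is my illustration, not taken from Raatikainen’s paper):

```python
TOP = 100  # a finite structure on {0, ..., TOP}

def succ(x):
    return min(x + 1, TOP)  # so S(TOP) = TOP, and S is not injective

def add(x, y):
    out = x                 # A7: x + 0 = x
    for _ in range(y):
        out = succ(out)     # A8: x + S(y) = S(x + y)
    return out

def mul(x, y):
    out = 0                 # A9: x × 0 = 0
    for _ in range(y):
        out = add(out, x)   # A10: x × S(y) = (x × y) + x
    return out

assert succ(TOP) == TOP        # the structure satisfies ∃x (S(x) = x)
assert succ(99) == succ(100)   # Q's axiom S(x) = S(y) → x = y fails
assert add(100, 1) == 100      # '100 + 1 = 100' at the top
assert mul(10, 20) == 100      # multiplication also saturates at the top
```

Since addition and multiplication are defined by the recursion equations A7–A10, they saturate at the top along with the successor, which is exactly the “hits the wall” behaviour mentioned below.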

Jeff

Posted by: Jeffrey Ketland on October 18, 2011 7:32 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

Jeffrey wrote:

So the philosophical view, which is agnostic about numbers but accepts ${\aleph }_{0}$-many syntactical entities, doesn’t make sense, particularly if it is motivated by a philosophical scepticism about infinity. One cannot honestly pretend to be a strict finitist while adopting (as true) a finitary metatheory at the level of PRA, say.

Right. But this doesn’t concern me much, since I’m willing to countenance philosophical views of all sorts. I’d be happy to hear what an ultrafinitist metatheory could prove about ZFC, for example… and delighted if adding large cardinal axioms to some powerful metatheory could be used to prove extra results about this theory F. I view all mathematical and metamathematical systems as my friends. So I’m eager to hear what any one has to say about any other — but I don’t feel the need to agree with any of their opinions.

My reason for being particularly interested in ultrafinitist theories (by which I mean: theories where very large natural numbers either don’t exist, or might not exist) does not stem from a distaste for other kinds of theories. Instead, my main reason is that ultrafinitist theories seem particularly challenging to formulate in a nice way, and haven’t been studied as much as other alternatives. So, they’re open territory, relatively speaking — and I’m always drawn to open territory: that’s where the new stuff is!

More generally, weak systems of arithmetic seem worth studying for various reasons. First, only theories weaker than Q (say) have a shot at being decidable, and with computers being so important these days, it’s nice to know a lot of decidable theories. Second, this in turn might lead to weak but still interesting theories that evade the hypotheses of Gödel’s first and second incompleteness theorems, and that would be cool.

I’m a bit sceptical that $F$ proves its own consistency. This is because I’m sceptical that the ${L}_{F}$-formula being mooted to express consistency actually does express consistency of $F$!

I understand that worry. I hope a consensus arises on this issue, because I may not have the time or energy to figure it out myself!

It’s hard even to express 2=5 or 2≠5.

True, but those difficulties make perfect sense once we get used to a world where we can’t take the existence of these numbers for granted.

We can say various things like “if $x$ is the successor of the successor of zero, then $x$ is not the successor of the successor of the successor of the successor of the successor of zero”, or for short “if $x$ is 2 then $x$ is not 5”. And we can reason with them. We just have to stop taking for granted that “there exists $x$ such that $x$ is 2”.

… the translation of 2≠5 doesn’t seem to be the negation of the translation of 2=5.

I don’t think you should expect to find the translation of 2≠5 or 2=5 into this system. You’re taking ideas from a world where 2 and 5 are sure to exist, and trying to translate them into a world where that’s no longer the case. You can say “if $x$ is 2 then it’s 5”, and you can say “if $x$ is 2 then it’s not 5”. But these aren’t negations of each other—and that makes perfect sense, since the number 2 might not exist.

… all of this seems too extreme and probably pointless.

I have a different reaction: I think it’s fascinating to explore new ways of thinking, and ‘extreme’ ones can be very fun, and sometimes illuminating. The idea of arithmetic where you’re not sure that the number 2 exists—how refreshing!

There are other very weak formal systems of arithmetic that wouldn’t introduce such complexities but also have finite models of every non-zero cardinality. Andrew mentions such a system from a 2000 paper by Panu Raatikainen, “The Concept of Truth in a Finite Universe”; Panu calls it “Truncated Arithmetic”.

By the way, personally I find a system where very large numbers plus 1 equals themselves to be further removed from my intuitive picture of the natural numbers than a system where very large numbers just don’t exist. But I don’t mind people studying either system, and maybe we could even translate between the two: if one system ‘hits the wall’ and says 100+1 = 100, the other says 100+1 just doesn’t exist.

It’s like the difference between hitting a wall and continuing to walk in place, and walking off the edge of a cliff.

Posted by: John Baez on October 19, 2011 7:27 AM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

Thanks, John. I’m in agreement with much of that, but a quick comment on the “syntax objection to ultra-finitism”, which is a technical objection to a philosophical view. I’m arguing that the philosophical view is “incoherent”, not that the formal system $F$ is incoherent. How could a formal system be incoherent? After all, it’s simply a formal system, with various mathematical properties: for example, it’s extremely unclear whether it encodes syntax adequately at all (such theories are usually required to have pairing, be sequential, etc.).

So, I’ll try and explain the syntax objection to ultra-finitism again, as clearly as I can. This is an objection to a philosophical view that recommends some sort of scepticism for philosophical/ontological reasons. In fact, this objection was given long, long ago by both Quine and Church, and is occasionally mentioned later (e.g., Putnam, Wetzel), although it seems to have been quietly forgotten.

Unfortunately, different versions of ultra-finitism seem to be based on different considerations. (Nelson’s views are a bit unclear to me because he seems to accept the potential infinity of numbers, but is sceptical about a certain operation - exponentiation. One might call this exponentiation-finitism. This is why $Q$ embodies what he accepts, mathematically speaking. So, I think that Nelson does not endorse $\left(\mathrm{UF}\right)$ below. Nelson’s real beef seems to be with induction, not potential infinity. On the other hand, I really am not sure about this, or what Nelson does or doesn’t accept regarding syntax.)

Here are the two relevant claims:

$\left(\mathrm{UF}\right)$ One ought to be agnostic about there being ${\aleph }_{0}$-many anythings.

$\left(\mathrm{Syn}{\right)}_{{\aleph }_{0}}$ There are ${\aleph }_{0}$-many syntactical entities.

The first of these is the basic philosophical view involved. The second is a theorem of the metatheory assumed. I.e., $\mathrm{MT}\vdash \forall x\exists y\left(x\subset y\right)$, where $x\subset y$ means “$x$ is a proper initial segment of $y$” (see Visser’s paper on $\mathrm{TC}$ that I mentioned above for the detailed definitions of such notions in formalized syntax).

Then:

Syntax Objection: the two claims $\left(\mathrm{UF}\right)$ and $\left(\mathrm{Syn}{\right)}_{{\aleph }_{0}}$ are incoherent.

(They’re not strictly logically inconsistent, because $\left(\mathrm{UF}\right)$ is an epistemically normative claim. The incoherence is more like claiming to be agnostic about food while at the same time eating a cheese sandwich.)

So, an ultra-finitist who endorses (UF) needs to give their meta-theoretic syntax in an ultra-finitistically acceptable way. They cannot describe their favourite theory $T$ in a language $L$ with $\mid L\mid ={\aleph }_{0}$, for this is in tension with their philosophical view, $\left(\mathrm{UF}\right)$. They must somehow devise a notion of a language of finite cardinality, governed by axioms for concatenation which don’t imply $\left(\mathrm{Syn}{\right)}_{{\aleph }_{0}}$.

Jeff

Posted by: Jeffrey Ketland on October 20, 2011 2:30 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

For Nelson’s views, at least for views he held at some point, you can check his book Predicative Arithmetic. My impression is that he doesn’t accept what you are saying; in particular, he makes a clear distinction between generic and non-generic numbers.

You don’t need to consider a theory in the classical sense; we only consider the part that can be “realized” in the real physical world. Obviously this is not a precise statement: what do I mean by realizing in the real physical world? But I can try to give the feeling of what I mean. Think of a computer. There is a limit on the amount of memory it has. Only consider expressions in the language that can be stored in the memory of that computer. If you cannot state a formula, there is no point in talking about it from the ultrafinitistic point of view.

The ultrafinitistic viewpoint is incoherent only if you try to treat mathematical objects as you are used to in classical mathematics. You assume that the syntax is infinite. Why? Because you are using an unrestricted inductive definition for it, which from the ultrafinitistic point of view does not make sense. But there are formulas that I can store in the memory of my computer; rejecting the inductive definition of syntax does not mean rejecting syntax. I can talk about 2 and 5 because I can represent them. I can talk about the formula expressing consistency because I can write it down and encode it as a natural number of reasonable size. I cannot talk about arbitrarily large formulas or proofs or numbers; I can only talk about those which really exist.

This doesn’t rule out developing ultrafinitistic techniques that would show: whichever proof you realize and give to me, I will show that it is not a proof of inconsistency of my system; and as long as you haven’t realized such a proof, I don’t need to care about it. I think that is really what is intended by consistency. What is the point of showing consistency or soundness in the first place? The point is that my system works correctly, in the sense that what I derive in my system is correct. Do I care about possible non-realizable proofs of inconsistency? Obviously not. Why should I care?

Now, this doesn’t get rid of the subtleties of making ultrafinitism precise, but your objection about syntax is not a strong philosophical objection against ultrafinitism. I think real computers are a good demonstration of ultrafinitistic mathematics. Engineers deal with the reality of an explicitly bounded number of finite objects all the time; it is not easy to generalize that experience to a mathematical theory, but it is what happens in real life. From my point of view, ultrafinitistic viewpoints are attempts to bring the theory of mathematics closer to the reality of physical life. For a fixed computer with a fixed amount of memory there is a maximum number that it can store, and trying to compute its successor will end up in a result that is different from what mathematics tells us. It might overflow and go back to the smallest number, it might return the maximum number, or it might just throw an exception; which one happens is not that important. What is important is that it is not the successor of the number as defined in classical mathematics. Ultrafinitist theories try to capture this kind of physical reality, which is often very messy.
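The three behaviours mentioned above (wrap around, saturate, throw an exception) can be sketched concretely. A minimal illustration in Python for an 8-bit word, not tied to any particular machine; the function names are mine:

```python
# Three things a fixed-width machine might do when asked for the
# successor of its largest representable number (here 8-bit: max = 255).

MAX = 2**8 - 1  # largest number an 8-bit word can store

def succ_wrap(n):
    # overflow wraps around to the smallest number (modular arithmetic)
    return (n + 1) % (MAX + 1)

def succ_saturate(n):
    # overflow sticks at the maximum ("very large number plus 1 equals itself")
    return min(n + 1, MAX)

def succ_raise(n):
    # overflow is an error: the successor simply isn't there
    if n >= MAX:
        raise OverflowError("no representable successor")
    return n + 1

print(succ_wrap(MAX))      # 0
print(succ_saturate(MAX))  # 255
```

The third behaviour is closest in spirit to a theory like F, where the successor of a very large number may simply fail to exist.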

Posted by: logician on October 25, 2011 3:58 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

hi logician,

…we only consider the part that can be “realized” in the real physical world. Obviously this is not a precise statement, what do I mean by realizing in the real physical world?

Indeed. You mean physical tokens. This is a well-known notion. For example, see this nice recent book on the topic:

Wetzel, L. 2009. Types and Tokens.

You will find a nice discussion of similar things in various other places: e.g., Burgess $&$ Rosen 1997, A Subject with No Object. Or earlier stuff by Charles Chihara. Or Hilary Putnam.

But I can try to give the feeling of what I mean. Think of a computer. There is a limit on the amount of memory it has. Only consider expressions in the language that can be stored in the memory of that computer.

Of course.

If you cannot state a formula there is no point in talking about it from ultrafinitistic point of view. The ultrafinitistic viewpoint is incoherent if you try to treat mathematical objects as you are used to in classical mathematics.

Indeed. This kind of thing has already been studied. E.g., most famously,

Quine, W.V. $&$ Goodman, N. 1947, “Steps Toward a Constructive Nominalism”, JSL 12.

Weir, A. 2010. Truth Through Proof: A Formalist Foundation for Mathematics.

You can read a nice review of this by John Burgess.

Alternatively, you might read some of the literature on logics for resource-bounded agents.

There is a huge literature on all this stuff.

Jeff

Posted by: Jeffrey Ketland on October 25, 2011 7:48 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

Jeff,

Thanks for the references. I will check them.

Best Regards

Posted by: logician on October 25, 2011 10:16 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

On second thought, it might be more appropriate to consider Nelson a strict predicativist rather than an ultrafinitist.

Posted by: logician on October 25, 2011 4:16 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

logician wrote:

You assume that the syntax is infinite, why? Because you are using an unrestricted inductive definition for it which from ultrafinitistic point of view does not make sense.

I agree. It’s unfair to ultrafinitism to 1) choose a metatheory for discussing it in which one can prove the set of formulae is infinite, then 2) say this metatheory is philosophically incompatible with ultrafinitism and then 3) claim that somehow this is a problem with ultrafinitism.

Note: there’s no inherent problem with 1). It may be very productive to study ultrafinitism using the ordinary tools of metamathematics. If we do this, then 2) is true: our metatheory embodies different philosophical ideas than the theory we’re using it to study. That’s fine. The problem comes in step 3), which amounts to “blaming the victim”.

Jeffrey wrote:

They must somehow devise a notion of a language of finite cardinality, governed by axioms for concatenation which don’t imply $\left(\mathrm{Syn}{\right)}_{{\aleph }_{0}}$.

Just as the theory $F$ is unable to prove there are infinitely many natural numbers, we can expect the ultrafinitist to eventually develop a metamathematics that is unable to prove there are infinitely many formulae.

However, I don’t think it’s fair to demand a full development of metamathematics based on ultrafinitist principles before we begin studying ultrafinitist mathematics!

A comparison with topos theory might be helpful. People had to study intuitionism and develop topos theory for quite a while before they could develop metamathematics in a nice way based on intuitionist principles. Before that one might complain about using the principle of excluded middle in the metamathematical study of intuitionistic logic… but this would not be a reasonable objection to intuitionism. To blame intuitionism for the fact that we’re studying it using classical logic would again be “blaming the victim”.

Posted by: John Baez on October 25, 2011 4:49 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

John,

Just as the theory F is unable to prove there are infinitely many natural numbers, we can expect the ultrafinitist to eventually develop a metamathematics that is unable to prove there are infinitely many formulae.

Right, so, e.g., take a look at Quine $&$ Goodman 1947 (link above). Then we can consider some other ideas. Perhaps we fix an upper bound and consider, say, some kind of bounded arithmetic (these are very well-studied - see, e.g., the monograph Hajek/Pudlak 1993, with plenty of info about finite initial segments of models). Or we don’t, but we go modal (Hellman, Chihara). Or we don’t fix an upper bound, but we go vague somehow (Sazonov’s idea). Or …, etc. Such things - it might surprise you - have already been discussed!

However, I don’t think it’s fair to demand a full development of metamathematics based on ultrafinitist principles before we begin studying ultrafinitist mathematics!

I agree. At the moment, someone points out an incoherence. People should be free to do as they please. But whether it works or not, whether it succeeds, is a separate matter. When someone comes up with an idea, then maybe someone else criticizes it. The person who came up with the idea might then accept the challenge. Etc. So, here’s my criticism: the standard metatheory of even a very simple language has only infinite models. What’s the response? Well, maybe one can give a strictly finite theory in a strictly finite language, where one makes only strictly finite assumptions (about strictly finite proofs/derivations: e.g., of length $\le {2}^{n}$ - all binary sequences up to length $n$). Perhaps. Let’s see if that can be done. If so, that’s interesting. If not, why not? There is work to do.

For a comparison, a finitist demand on proof-theoretic reductions is that the reduction also be provable in, e.g., PRA. For example, ${\mathrm{ACA}}_{0}$ conservatively extends $\mathrm{PA}$, and this fact is provable in $\mathrm{PRA}$. This is a kind of inner consistency requirement.

There is already a detailed literature about types, tokens, reformulating mathematics without referring even to abstract entities, etc. So, this has already been studied, in huge detail. I am briefly describing the conclusions of people who have worked on this topic and analysed what’s possible (see some of the links I give in my reply to “logician”). I can give you more, if you like: if one “goes modal”, then one can give a metamathematics without assuming any mathematical entities: one assumes possible formula tokens, or possible numeral tokens, or possible entities more generally. This idea has been developed by Charles Chihara and Geoffrey Hellman - see the standard monograph analysing all of this: Burgess $&$ Rosen 1997 (A Subject with No Object). I recommend it.

Jeff

Posted by: Jeffrey Ketland on October 25, 2011 9:10 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

Jeffrey wrote:

There is work to do.

Okay, I agree with that. I’d been afraid you were trying to “nip ultrafinitism in the bud” by claiming some sort of internal inconsistency in the ultrafinitist worldview. So, I wanted to point out that ultrafinitist mathematics will look nicer when studied using an ultrafinitist metamathematics. I agree there’s more work to be done here. I don’t want to develop this metamathematics, or even think about it much, but I hope someone does.

In particular, I think the system F, where we don’t assert the existence of a successor to every natural number, will look nicest when studied using a metamathematics where we don’t assert the existence of the concatenation of two strings of symbols. In F we can’t prove there are infinitely many natural numbers; in such a metamathematics we can’t prove there are infinitely many strings of symbols drawn from a finite alphabet.

In short: “as above, so below”.

(In category theory this idea, namely that “concepts are happiest in a context that resembles themselves”, is sometimes called the “microcosm principle”. There it’s usually used to note that objects equipped with a given structure are often most elegantly defined in a category equipped with the same kind of structure: for example, monoid objects in a monoidal category. But now I’m thinking the same phenomenon shows up in metamathematics.

People can study an intuitionistic version of the natural numbers, a natural numbers object, in any topos. If someone invents a good general ultrafinitist theory of the natural numbers, it may make sense in any ‘ultrafinite topos’—a concept that hasn’t been defined.)

Posted by: John Baez on October 26, 2011 3:53 AM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

Thanks, John - the category theory stuff is way above my head, though! (Btw, Steve Awodey was here, in the office opposite, for a few months earlier this year, when there was this fuss about Voevodsky and PA.) More on the topic of this thread though, I’d meant to mention the following paper earlier, as it involves doing category theory inside a weak arithmetic theory,

Visser, A. “Cardinal Arithmetic in Weak Theories”.

Not sure if/when this was published. Visser has published a couple of things in this area in the last couple of years (two articles on similar material in Review of Symbolic Logic).

Also, there’s a paper by Richard Pettigrew on a certain kind of finitary set theory (due to John Mayberry) closely related to bounded arithmetic ($I{\Delta }_{0}+\mathrm{exp}$, I think):

Pettigrew, R. 2010. “The Foundations of Arithmetic in Finite Bounded Set Theory”. (In Roland Hinnion and Thierry Libert (eds), One Hundred Years of Axiomatic Set Theory, Cahiers du Centre de Logique, vol. 17.)

Jeff

Posted by: Jeffrey Ketland on October 26, 2011 9:23 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

Jeffrey Ketland wrote:

I’m arguing that the philosophical view is “incoherent”, not that the formal system $F$ is incoherent.

I understand. My reply, earlier, was “this doesn’t concern me much”. In other words, I don’t care very much if Prof. X says Prof. Y’s philosophical views are incoherent, my own being so incoherent and changeable that I’ve long since given up trying to formulate them precisely.

But it seemed to concern our friend t, so maybe he will be interested in this further clarification.

Posted by: John Baez on October 20, 2011 4:45 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

In other words, I don’t care very much if Prof. X says Prof. Y’s philosophical views are incoherent, my own being so incoherent and changeable that I’ve long since given up trying to formulate them precisely.

But this is a rather negative (meta)view, and it’s too self-deprecating! For it suggests that one can never give a precise analysis of certain views in either epistemology or metaphysics, and compare them, e.g., using techniques of mathematical logic. I am far more optimistic about making definite, specific progress on such matters—but one has to work hard to formulate views with as much precision as possible.

Here’s an example from something I’ve worked on a bit. Suppose Prof. X and Prof. Y advocate views $\left({V}_{1}\right)$ and $\left({V}_{2}\right)$ respectively:

$\left({V}_{1}\right)$ Accepting a theory $T$ consists in believing $T$ is empirically adequate.

$\left({V}_{2}\right)$ Accepting a theory $T$ consists in believing that $R\left(T\right)$ is true.

(where $R\left(T\right)$ is a certain operation applied to theories, called Ramsification)

Suppose further that one can prove a theorem:

$R\left(T\right)$ is true iff $T$ is empirically adequate.

To do this, one has to be careful about how theories are formalized, how to define “empirically adequate”, and $R\left(T\right)$. So, there is some wriggle room. That said, it seems that views $\left({V}_{1}\right)$ and $\left({V}_{2}\right)$ are quite close to being equivalent (not exactly, because of the epistemic operators involved).

Similarly, in other cases, one might succeed in establishing inconsistencies, implications and equivalences, but it requires careful formulation of the views in question. In this way, we make definite progress in understanding the relationships between the views in question.

Jeff

(PS - thanks for fixing the typo above! I’ve eventually realised that XHTML isn’t HTML.)

Posted by: Jeffrey Ketland on October 20, 2011 11:00 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

Jeffrey wrote:

But this is a rather negative (meta)view, and it’s too self-deprecating! For it suggests that one can never give a precise analysis of certain views in either epistemology or metaphysics, and compare them, e.g., using techniques of mathematical logic.

I was just explaining why I don’t care about this stuff; I wasn’t trying to say nobody should mess with it.

Yeah, sorry—this blog uses XHTML, which is more fussy than HTML.

Posted by: John Baez on October 21, 2011 1:31 AM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

John,

I was just explaining why I don’t care about this stuff.

Ok, let me try and get you to care about this stuff - i.e., mathematical methods in philosophy. For when one has set things up correctly, one gets certain technical results which are philosophy (in my view, they constitute the correct way to do philosophy). I’ll scribble down some technical results relevant to various philosophical questions - a couple concern applying mathematics in science.

The philosophical discussion above concerned the interpretability of a syntactic theory (e.g., $\mathrm{TC}$) in the theory discussed above, $F$. That’s a technical problem, but at the same time a philosophical one. I mentioned just above another case: giving precise definitions of the notion of “empirical adequacy”. Another is this: some philosophers are interested in defining the notion “$x$ is identical to $y$”.

Philosophical question 1. Can one reduce/define/analyse the notion of identity in terms of other notions?

Here’s a precise answer to that sort of question. Suppose $L$ is a first-order language with finitely many predicates and no function symbols. Let $M$ be an $L$-structure, where we do not assume that the relation $=$ is definable in $M$ (by $=$ I mean the diagonal $\left\{\left(x,x\right):x\in \mathrm{dom}\left(M\right)\right\}$). Then:

Theorem 1. Suppose $M$ is rigid. Then $=$ is definable in $M$.

Proof. Suppose $=$ is not definable in $M$. Then, in particular, a certain $L$-formula $x\approx y$ (intuitively, saying that $x$ and $y$ are indiscernible with respect to the finitely many predicates of $L$) does not define $=$. So, there are $a,b\in \mathrm{dom}\left(M\right)$ such that $a\ne b$ and $a{\approx }^{M}b$. Let

${\pi }_{\mathrm{ab}}:\mathrm{dom}\left(M\right)\to \mathrm{dom}\left(M\right)$

be the permutation that transposes $a$ and $b$. One can show that $a{\approx }^{M}b$ implies that ${\pi }_{\mathrm{ab}}\in \mathrm{Aut}\left(M\right)$. Hence, $M$ is not rigid.
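The contrapositive can be seen concretely in a toy example; a minimal sketch (the structure, with one unary predicate, is invented purely for illustration). Take domain {0, 1, 2} with $P=\{0\}$: elements 1 and 2 satisfy the same atomic facts, so the transposition swapping them is a nontrivial automorphism, $M$ is not rigid, and no formula can separate 1 from 2, so $=$ is not definable:

```python
from itertools import permutations

# Toy L-structure: domain {0, 1, 2}, one unary predicate P = {0}.
# Illustrates the contrapositive of Theorem 1: not rigid => "=" not definable.
dom = [0, 1, 2]
P = {0}

def preserves_P(pi):
    # pi is an automorphism iff it preserves the only predicate, P
    return all((x in P) == (pi[x] in P) for x in dom)

autos = [pi for pi in (dict(zip(dom, p)) for p in permutations(dom))
         if preserves_P(pi)]
nontrivial = [pi for pi in autos if any(pi[x] != x for x in dom)]

print(len(autos))       # 2: the identity and the transposition (1 2)
print(len(nontrivial))  # 1: so M is not rigid
```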

Here’s another example.

Philosophical question 2. Can $Q$ be refuted by an empirical statement? (E.g., about how many physical tokens can be inscribed, perhaps on molecules, using superconductor nanotechnology, etc.)

The answer to this is no, if $Q$ is consistent, because,

Theorem 2. Suppose $Q$ is consistent and let $\varphi$ be an empirical statement. Then if $\varphi$ is consistent, $Q\wedge {\varphi }^{U}$ is consistent.

Proof. (${\varphi }^{U}$ is a relativization of $\varphi$ to a predicate $U\left(x\right)$ meaning “$x$ is not a number”.) Use Robinson’s Joint Consistency Theorem, noting that ${L}_{Q}$ and ${L}_{\varphi }$ have only $=$ in common.

In other words, $Q$ is consistent with any such empirical fact. For example, suppose $F\left(x\right)$ means “$x$ is a molecule” or “$x$ is a region of space”. Then $Q\wedge {\exists }_{n}x\left(U\left(x\right)\wedge F\left(x\right)\right)$ is consistent, for all $n$.

Here’s another, related, example but concerning far more powerful mathematics. How are empirical facts relevant to a mathematical set theory that talks about sets of physical things? Even our most powerful such theory?

Philosophical question 3. Can set theory with urelements $\mathrm{ZFU}$ be refuted by an empirical statement?

The answer is no, if $\mathrm{ZFU}$ is consistent, because

Theorem 3. Suppose $\mathrm{ZFU}$ is consistent and let $\varphi$ be an empirical statement. Then if $\varphi$ is consistent, $\mathrm{ZFU}+{\varphi }^{U}$ is consistent. (Field 1980)

where ${\varphi }^{U}$ is a relativization to $U\left(x\right)$ meaning “$x$ is not a set”.

Here’s another example.

Philosophical question 4. Scientific theories are mathematicized. Can we eliminate reference to the mathematical objects?

Let $T$ be an axiomatized mathematicized scientific theory quantifying over concrete entities (e.g., points, regions) and mathematical ones (e.g., real numbers, sets and whatnot), formulated in a 2-sorted language $L$, one sort for concrete things and the other for mathematical things. Then,

Theorem 4. Suppose $T$ satisfies a certain coding condition. Then $T$ in $L$ is definitionally equivalent to a purely nominalistic theory ${T}^{\circ }$ in a language ${L}^{\circ }$ which is the purely nominalistic part of a definitional extension of $L$. (Burgess $&$ Rosen 1997)

Hopefully I haven’t goofed up the statements of these in writing them in this comment (the one whose proof is most difficult is Th. 3); but they’re all examples of mathematical philosophy, and I think that if mathematicians and physicists care about, e.g., how mathematics is applied in science, they should care about such things.

Jeff

Posted by: Jeffrey Ketland on October 21, 2011 10:58 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

John wrote:

By the way, personally I find a system where very large numbers plus 1 equals themselves to be further removed from my intuitive picture of the natural numbers than a system where very large numbers just don’t exist.

In JavaScript, that “very large” number is ${2}^{53}$, because that’s where the natural numbers stop all being exactly representable as 64-bit IEEE floating point numbers.

Posted by: Mike Stay on October 26, 2011 7:44 AM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

Okay, Mike, but you’re leaving me in suspense: what happens when you try to add 1 to ${2}^{53}$?

Posted by: John Baez on October 26, 2011 8:02 AM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

Okay, Mike, but you’re leaving me in suspense: what happens when you try to add 1 to ${2}^{53}$?

You get ${2}^{53}$ again; the least significant bits are discarded.
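Python’s float is the same 64-bit IEEE double, so the behaviour is easy to check directly (the analogous check in JavaScript is `Math.pow(2,53) + 1 === Math.pow(2,53)`):

```python
# 64-bit IEEE doubles have a 53-bit significand, so 2**53 + 1 is not
# representable: it rounds back down to 2**53, and the +1 is lost.
x = float(2**53)
print(x + 1 == x)  # True: adding 1 does nothing
print(x + 2 == x)  # False: 2**53 + 2 is still exactly representable
```

Above ${2}^{53}$ only every second integer is representable, so additions of 1 silently disappear while additions of 2 still register.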

Posted by: Mike Stay on October 27, 2011 1:19 AM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

“t, yes, I’ve read your monograph. Actually, I read it several years ago, because I remember you discussing this quite a bit on sci.logic with Torkel and others.”

It’s hard to fathom your comments otherwise.

“I’m referring to the metatheory of F and the views of the author! That is, F is agnostic about, say, 1, and is meant to represent the author’s agnosticism; however, the author’s metatheory is not agnostic about an ω-sequence of syntactical entities, say,

(0=0),¬(0=0),¬¬(0=0),…

understood as types (i.e., finite sequences), not physical tokens.”

But I am most certainly agnostic about that ω-sequence of syntactical entities. The formalization of the meta-theory in F is certainly agnostic. And while first- and second-order logic are usually described by assuming the ω-sequence, this way of describing them is not essential to the logic. I only need to assume whatever tokens and strings of tokens there are. I can then describe which of those strings are wffs, which are axioms, and which are proofs. There’s another paper where these issues are mentioned explicitly (at the beginning, for those who like to skim papers): www.andrewboucher.com/papers/godel.pdf.

BEGIN QUOTE (from the Godel paper):
A language is a set of symbols as well as a set of rules as to which sequence of symbols are certain linguistic types, such as logical constants, variables, terms, and well-formed formulas (wffs).
Unfortunately this is not how a language is often defined in logical texts. Often linguistic types are defined in such a way that, not only are rules enunciated as to which sequences are of the particular type, but also so that sequences of symbols are generated and asserted positively to exist ad infinitum, an assumption in essentials equivalent to the Successor Axiom.
For instance, Mendelson [Introduction to Mathematical Logic, 1st ed., p. 15] writes (I’m changing the logical notation to be consistent with the one used in the present paper):
(1) All statement letters (capital Roman letters) and such letters with numerical subscripts are statement forms.
(2) If A and B are statement forms, then so are (not A), (A and B), (A or B), (A implies B), and (A iff B).
(3) Only those expressions are statement forms which are determined to be so by means of (1) and (2).
This definition does two things. Firstly, it explains what something must be to be a statement form. Such explanation is, of course, the natural role of a definition. But it goes beyond explanation and beyond the normal job of a definition by, secondly, positively asserting the existence of objects. That is, the definition asserts that certain statement forms exist and indeed exist ad infinitum, e.g. (A), (¬A), (¬(¬A)), (¬(¬(¬A))), … The definition is, intentionally or otherwise, not only categorizing sequences of symbols, but positively asserting their existence. It has assertive force and is not neutral, as definitions should be.
Defining a language in this way therefore games the discussion; if the set of theories of such languages is not to be empty, then the Successor Axiom must be true. It is therefore to be avoided and an unbiased method sought. For instance, a neutral replacement of Mendelson’s definition would be:
Suppose A is a sequence of symbols. Then A is a statement form if there exists a sequence of statement forms A1,…,An
(for some n ≥ 1) such that:
1) An is A
2) for every i (1 ≤ i ≤ n), Ai is either:
a) a capital Roman letter, or
b) (not Aj), (Aj and Ak), (Aj or Ak), (Aj implies Ak), or (Aj if and only if Ak), where 1 ≤ j, k < i.

Remark that this definition does not assert the existence of A. Rather, it accepts A’s existence and says what condition must obtain in order for A to be in the category of statement forms. It is therefore legitimate as a definition; it categorizes but it does not assert.
END QUOTE
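The neutral definition quoted above can be read directly as a checking procedure: given candidate strings, it only verifies that a construction sequence certifies its last entry as a statement form, and never generates or asserts the existence of new formulas. A minimal sketch in Python (the function name and string conventions are mine):

```python
CONNECTIVES = ("and", "or", "implies", "iff")

def certifies(seq):
    """Check that seq is a construction sequence: each entry is either a
    capital Roman letter or built by a connective from *earlier* entries.
    This merely categorizes the given strings; nothing new is generated."""
    for i, a in enumerate(seq):
        earlier = seq[:i]
        atomic = len(a) == 1 and a.isalpha() and a.isupper()
        negation = any(a == f"(not {x})" for x in earlier)
        binary = any(a == f"({x} {c} {y})"
                     for c in CONNECTIVES
                     for x in earlier for y in earlier)
        if not (atomic or negation or binary):
            return False
    return True

print(certifies(["A", "(not A)", "(not (not A))"]))  # True
print(certifies(["(not A)"]))                        # False: A not given earlier
```

The checker accepts a formula only when handed an actual construction sequence for it, which matches the quoted point: the definition categorizes but does not assert.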

Posted by: t on October 23, 2011 5:01 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

Hi t,

(Right, that is the definition of a construction sequence.)

Have you studied the standard monograph on first-order arithmetic? I.e.,

Hajek/Pudlak 1993, Metamathematics of First-Order Arithmetic

Here you will find, pp. 86-89, a basic theory of arithmetic without function symbols, called ${\mathrm{BA}}^{\prime }$. It has models of any finite cardinality. You will find its application later in the book, discussing models of fragments of arithmetic and bounded arithmetic.

Consider your formal system $F$. What is its cardinality? How many axioms does it have?

If, instead, you wish to consider the set of finite sequences up to length $n$ over an alphabet $A$ of size $k$, then this is ${\bigcup }_{i=0}^{n}{A}^{i}$, and it has cardinality $O\left({k}^{n}\right)$. If you wish to consider proofs involving strings of length up to 80 over an alphabet of size 10, then you have roughly ${10}^{80}$ strings.
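The exact count here is a finite geometric series and is easy to check directly; a quick sketch (just the arithmetic, nothing specific to the system $F$ under discussion):

```python
# Exact count of strings of length at most n over a k-letter alphabet:
# |A^0 ∪ A^1 ∪ ... ∪ A^n| = sum_{i=0}^{n} k^i, a geometric series.

def num_strings(k: int, n: int) -> int:
    """Number of strings of length 0..n over an alphabet of size k."""
    if k == 1:
        return n + 1  # the closed form below would divide by k - 1 = 0
    return (k ** (n + 1) - 1) // (k - 1)

# Sanity check against direct summation:
assert num_strings(2, 3) == 1 + 2 + 4 + 8

# Jeff's example: alphabet of size 10, length up to 80.
# The dominant term is 10**80, as stated; the exact count is about 1.1 * 10**80.
print(num_strings(10, 80))
```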

Jeff

Posted by: Jeffrey Ketland on October 23, 2011 10:23 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

Here is a question that might be of interest to people here: are there any applications of restrained logical systems to ordinary mathematics? [NB What I wrote below grew a bit long. I hope it’s still OK to post here.]

Let me explain what appears to me to be a similar situation in algebraic geometry. (I know essentially zero about logic, so there’s a chance this parallel is completely ill conceived.) Usual, complex algebraic geometry is about studying complex solutions to systems of polynomial equations. Arithmetic algebraic geometry is about studying solutions in rings of arithmetic interest, most importantly the integers, but rings of derived interest are also important, like the rationals, finite fields, p-adics, etc. An important first step in finding arithmetic solutions to a polynomial system is understanding the nature of its complex solutions. When you have one polynomial in one variable, arithmetic algebraic geometry is really just classical algebraic number theory, and the same principle applies there — it’s just so simple you don’t notice it. This is because the qualitative behavior of the complex roots of a polynomial pretty much only depends on its degree. (I say pretty much because things are a bit subtler if you have multiple roots, but this is a degenerate case.) But as any student of Galois theory will tell you, if you want to study the roots of a polynomial, the first thing you need to know is its degree.

The parallel I’d like to draw, then, is between logical systems and algebraic geometry. Formal logic involves syntactic reasoning according to some rules, just like working in an algebraic structure, such as a commutative ring, does. Since algebraic geometry is pretty much just the study of commutative algebras slightly dressed up, I hope this analogy is reasonable. In this analogy, arithmetic algebraic geometry could be viewed as a restrained form of complex algebraic geometry because all the theorems have to be about systems of polynomial equations with integer (say) coefficients rather than complex ones. Similarly a restrained logic can’t use all the axioms of usual logic.

Now many mathematicians’ reactions to arithmetic algebraic geometry would be to say “Don’t tie my hands! I have no particular interest in number theory. I want to look at all solutions. I don’t really care about the ones that happen to be integral.” Similarly most mathematicians have no particular interest in restrained logics, even if they grant that they are valid mathematical objects. They just want to use logic, and so they want as much freedom with it as possible. So they add the axiom of choice, universes, etc.

This is a perfectly reasonable approach most of the time, but there are some exceptions on algebraic geometry side of the analogy, and what I’m wondering is whether there could be similar exceptions on the logic side. But there are really two kinds of exceptions, and I should say something about both.

The first is that sometimes in the world of complex algebraic geometry, structures of an arithmetic nature come up. A parallel in the logical world is that sheaves conform to a restrained logic. So even if you started out not caring about non-classical logics, you might decide it’s more efficient for you in your study of sheaves if you know something about restrained logics. The example I’m thinking of in algebraic geometry is in the study of abelian varieties. The endomorphism ring of an abelian variety is a ring which is finitely generated as an abelian group. Therefore it has an arithmetic nature, and knowing something about this will help in your study of abelian varieties. Such examples are only so interesting, I think, because even though the original objects of study did not have a restrained nature, there’s no reason to expect that objects derived from them would also not have such a nature. Even so, I’d be interested in knowing more examples in logic.

The second kind of exception is what I really want to talk about. Let me give an example in algebraic geometry. Suppose you want to study the complex solution set of a system of polynomials. If that system has integer coefficients, then you might be able to reduce everything modulo a prime, make some deductions about the solutions modulo that prime, and then draw some conclusions about the original system. For instance if you’re studying solutions to ${x}^{n}=1$, then you just have roots of unity, and I hope it’s plausible that some facts about complex roots of unity could be proved by working modulo primes. (This is in fact true.) But let me consider this from the point of view of the rings that come up in such arguments. The relevant ring if you’re studying complex solutions to ${x}^{n}=1$ is $C\left[x\right]/\left({x}^{n}-1\right)$. If you’re studying arithmetic solutions, you want to look at the subring $Z\left[x\right]/\left({x}^{n}-1\right)$. The subring has fewer elements. How could it be the case that confining ourselves to it is helpful? The reason is that precisely because there are fewer elements in $Z\left[x\right]/\left({x}^{n}-1\right)$, there are *more* maps out of it. In particular, you have maps to finite fields, rings which a complex die hard might consider exotic and non-classical.
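The “more maps out of it” point can be made very concrete: a ring homomorphism $Z\left[x\right]/\left({x}^{n}-1\right)\to {F}_{p}$ is exactly a choice of $n$-th root of unity in ${F}_{p}$. A minimal sketch counting such maps by brute force (an editorial illustration; the function names are made up):

```python
# Illustration: a ring homomorphism Z[x]/(x^n - 1) -> F_p must send x to
# a root of x^n - 1, so counting homomorphisms to F_p means counting
# n-th roots of unity mod p.

from math import gcd

def nth_roots_of_unity(n: int, p: int) -> list[int]:
    """All solutions of x^n = 1 in F_p (p prime), by brute force."""
    return [a for a in range(1, p) if pow(a, n, p) == 1]

# Since F_p* is cyclic of order p - 1, the count is gcd(n, p - 1).
for p in [5, 7, 13]:
    roots = nth_roots_of_unity(12, p)
    assert len(roots) == gcd(12, p - 1)
    print(f"maps Z[x]/(x^12 - 1) -> F_{p}: x can go to {roots}")
```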

So my first question is whether such a thing ever happens with restrained logics. Do restrained logics admit more specializations to other logics (in some sense), and could it happen that this is useful in studying classical logics?

I could stop here, but in fact the story continues. A complex die hard might say “OK, I concede that if the original polynomial equations have an arithmetic nature, then it can happen that things I care about can be proved using arithmetic. But this is exceptional. Indeed they form only countably many examples in a sea of uncountably many polynomial systems with complex coefficients.” This is a reasonable point of view, and I’d say that you have no right to expect otherwise. But amazingly sometimes arithmetic methods can still be used to do things when the original equations have arbitrary coefficients. The technique I have in mind is called “spreading”. Let $R$ be the subring of the complex numbers generated by all the coefficients in the system of polynomial equations. If one of the coefficients is transcendental, like $\pi$, then $R$ will be a ring like $Z\left[\pi \right]$, which isn’t really an arithmetic subring of $C$. But, and this is the point, $R$ is still a finitely generated ring — you just forget that $\pi$ is a complex number and treat it as a free variable. The original system of equations has coefficients in $R$, and you can reduce the system modulo primes simply by applying any homomorphism from $R$ to a finite field. (They will always exist.) Then, as before, you can use arguments over finite fields (involving counting or Frobenius, say) to draw conclusions which you can then try to lift back to $R$ and then to $C$.

Unless I’m mistaken, this is how Grothendieck proved the Ax-Grothendieck theorem. It says that if you have an injective map from the solution set of a system of complex polynomials to itself and the map is defined by polynomial equations, then the map is surjective. Over finite fields, the analogous statement is true by a counting argument, and over the complex numbers, you use the spreading technique. The reason this technique works really gets to the heart of what algebra is. It’s that in a system of polynomial equations, there are only finitely many symbols and hence only finitely many coefficients, even if those coefficients are transcendental. Therefore the ring $R$ is finitely generated, and therefore it has many maps to finite fields (and other rings).
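The counting argument over a finite field is just pigeonhole, and is elementary enough to check by machine; a sketch with a made-up polynomial map on ${F}_{p}^{2}$ (the specific map is an illustration added here, not one appearing in either proof):

```python
# Over a finite set, injective <=> surjective for self-maps (pigeonhole).
# This is the finite-field half of the Ax-Grothendieck argument; the
# polynomial map below is a made-up example, not one from either proof.

from itertools import product

def images_of(f, p: int):
    """Image list of a map F_p^2 -> F_p^2 given by f."""
    return [f(x, y, p) for x, y in product(range(p), repeat=2)]

def injective_iff_surjective(f, p: int) -> bool:
    imgs = images_of(f, p)
    injective = len(set(imgs)) == len(imgs)
    surjective = len(set(imgs)) == p * p
    return injective == surjective

# Sample polynomial map (x, y) |-> (x + y^2, y) mod p: it is injective
# (it has the polynomial inverse (u, v) |-> (u - v^2, v)), hence surjective.
shear = lambda x, y, p: ((x + y * y) % p, y)
assert all(injective_iff_surjective(shear, p) for p in [2, 3, 5, 7])
print("injective <=> surjective holds for the sample map")
```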

So my second question is whether something similar happens in logic, or whether it could. Any theorem or proof involves only finitely many symbols, and could it be the case that you can consider the minimal logic over which it makes sense and then, since this logic has a finite nature, specialize it to some exotic, restrained logic where completely different techniques are available?

I don’t know if this is completely crazy, but I’m somewhat heartened by the fact that the other person who proved the Ax-Grothendieck theorem, namely Ax, gave a proof using logic.

Posted by: James Borger on October 19, 2011 1:51 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

A parallel in the logical world is that sheaves conform to a restrained logic. So even if you started out not caring about non-classical logics, you might decide it’s more efficient for you in your study of sheaves if you know something about restrained logics.

Indeed! Can I paraphrase this part of your question as “is there a kind of category, definable inside classical foundations, whose internal logic is ultrafinitist”?

Do restrained logics admit more specializations to other logics (in some sense), and could it happen that this is useful in studying classical logics?

If I understand you correctly, this is certainly the case for intuitionistic logic. For instance, intuitionistic logic is consistent with the existence of nilpotent infinitesimals—giving rise to synthetic differential geometry—or with the “strong Church-Turing thesis” that all functions $ℕ\to ℕ$ are computable, or “Brouwer’s theorem” that all functions $ℝ\to ℝ$ are continuous. And this is of course related to the first point, since many of these statements have models in categories of sheaves.

By the way, I don’t know many good examples of either sort of thing for predicative theories. There don’t seem to be many pretoposes arising naturally inside classical mathematics that aren’t toposes, nor very many fruitful new axioms one would want to assume that are consistent with predicative mathematics but not with impredicativity. Most predicativists seem to be motivated more by philosophical considerations.

It would certainly be interesting to know whether there are interesting examples of either sort for super-weak logics. I recall chatting a little bit with Damir Dzafarov a few years ago regarding what a categorical model of RCA${}_{0}$ might look like, but we didn’t reach any very satisfying conclusions.

Posted by: Mike Shulman on October 19, 2011 6:00 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

1. Mike said: Can I paraphrase this part of your question as “is there a kind of category, definable inside classical foundations, whose internal logic is ultrafinitist”?

More or less. I was interested in *any* examples of auxiliary objects in the classical world that can be usefully thought about using some non-classical logic. But I like your version too!

2. He also said: If I understand you correctly, this is certainly the case for intuitionistic logic. For instance, intuitionistic logic is consistent with the existence of nilpotent infinitesimals—giving rise to synthetic differential geometry—or with the “strong Church-Turing thesis” that …

Are you saying that adding axioms to a logic is like adding relations to a ring and that the resulting logic being consistent is like the resulting ring being nonzero? If so, then that’s nice. Has such a “specialization” ever been used to prove anything interesting in classical logic? I fear the answer is no.

I was thinking of specialization in the ring world from the point of view of ring morphisms, rather than adding generators and relations. Is there an analogous concept in the logic world? Perhaps a morphism between logics would be a function from the set of valid proofs in one to that in the other, and this function would be required to have some “homomorphic” properties relative to some logical operations??

Posted by: James Borger on October 20, 2011 12:07 AM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

Are you saying that adding axioms to a logic is like adding relations to a ring and that the resulting logic being consistent is like the resulting ring being nonzero?

Exactly! This is what classifying toposes are.

A ring with a presentation as a quotient of a polynomial algebra, $R=ℤ\left[{x}_{1},\dots ,{x}_{n}\right]/\left({f}_{1},\dots ,{f}_{m}\right)$, can be thought of as the result of adding $n$ “formal elements” to $ℤ$, followed by imposing $m$ “axioms”. Its universal property says that ring maps $R\to S$ (or equivalently scheme maps $\mathrm{Spec}\left(S\right)\to \mathrm{Spec}\left(R\right)$) are given by choosing $n$ elements of $S$ which satisfy the $m$ specified axioms.

Similarly, the topos of sheaves on a site $C$ can be thought of as the result of adding some “formal objects” to $\mathrm{Set}$ (roughly, the objects of $C$), some “formal morphisms” between these objects (roughly, the morphisms of $C$), and imposing some “axioms” (the topology of $C$). (I say “roughly” because there is some flatness going on too, but that’s not so important for the basic idea.) Its universal property says that left-exact cocontinuous functors $\mathrm{Sh}\left(C\right)\to E$, for any topos $E$ (or equivalently, geometric morphisms $E\to \mathrm{Sh}\left(C\right)$) are given by choosing a collection of objects and morphisms in $E$ which satisfy the axioms.

As John suggested below, this is in fact exactly what set-theorists call “forcing” (modulo classical vs intuitionistic logic, which is really irrelevant for the main point—the only difference is that if you don’t insist on classical logic, then there are more axioms you can consistently add). The analogy to polynomial rings is described at the nLab page on forcing. Set-theorists use the word “generic” for the structures added to the topos of sheaves (which they call a “forcing model”).

(John is exactly right, though, that Mac Lane and Moerdijk aren’t very explicit about how to get back to a model of membership-based set theory from the topos of sheaves. The basic idea is the same as what they do in VI.10: construct ZF-like-sets out of membership graphs or trees, but now internally to a not-necessarily-well-pointed topos. There are papers by Fourman and Hayashi that go through this in some detail. More recently it has been reinterpreted by algebraic set theorists (such as in this paper) and by myself (in this paper).)

Posted by: Mike Shulman on October 20, 2011 10:07 PM | Permalink | PGP Sig | Reply to this

### Re: Weak Systems of Arithmetic

Mike wrote:

Similarly, the topos of sheaves…

And one can make this idea into more than a mere analogy, right? I mean, there’s a reason topoi were invented in algebraic geometry. I forget the details, but I think you can take a scheme and turn it into a ringed topos, namely the topos of sheaves on that scheme. Then a map of schemes gives a map of ringed topoi, and your first example becomes practically an example of your second one (but I guess with ringed topoi instead of mere topoi).

So I think there’s a very nice translation dictionary relating algebraic geometry, logic and category theory—but only people who understand the whole elephant of topos theory are privy to it.

Posted by: John Baez on October 21, 2011 2:07 AM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

And one can make this idea into more than a mere analogy, right?

Yes, indeed! Although it confuses me a little, because algebraic geometers don’t just consider the topos of sheaves on a scheme qua topological space, but also the sheaves on its etale site and any number of other sites associated to it. I’ve never quite figured out how all of that fits into topos theory.

Posted by: Mike Shulman on October 21, 2011 8:07 AM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

That’s very interesting, James. I don’t have anything to add, but I’m glad to have learned this way of thinking about restrained systems.

Posted by: Tom Leinster on October 19, 2011 9:29 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

James wrote:

Are there any applications of restrained logical systems to ordinary mathematics?

That’s a great question, and your examples of analogous tricks from algebraic geometry make me feel sure the answer will eventually be yes. But I don’t think this area has been explored enough yet, except in certain famous special cases, including these two you already know:

1) There are a lot of great applications of extremely restrained logical systems to topology and mathematical physics. I’m talking about things like algebraic theories, PROPs and operads. These might not even be considered ‘logic’ by some, since they correspond to setups where all you have are some function symbols and all you can assert are ‘equational laws’, which are implicitly universally quantified over all variables, such as:

$x×\left(y+z\right)=x×y+x×z$

or

$g\cdot {g}^{-1}=1$

where in a PROP or operad we add even further restrictions on the kinds of equational laws we allow. I guess most logicians would consider such highly restrictive systems part of ‘universal algebra’ rather than ‘logic’, but for me they’re all part of a hierarchy of expressive power with operads near the very bottom and classical first-order or second-order logic near the very top.

Slightly above PROPs we get ‘multiplicative intuitionistic linear logic’ or MILL, which is another way of thinking about symmetric monoidal closed categories—so when we hit this point of the hierarchy at least someone considers it to be ‘logic’. Mike Stay and I have written about the many applications of symmetric monoidal closed categories to physics, topology, logic and computation. One really should have lots of papers like this, for many different kinds of logic.

2) Up near the top of the hierarchy we have topoi. People have studied the heck out of these, and they have lots of great applications to ‘ordinary mathematics’—indeed, that’s where they came from in the first place! But I just wanted to mention that Mac Lane and Moerdijk sketch a supposedly more intuitive proof that $\mathrm{ZF}+¬C$ is consistent, using the idea of forcing but also topos theory. I never quite understood it, in part because they don’t seem to fully explain how to take the topoi they get and turn them into models of ordinary set theory. But this might count as an application of ‘restrained’ (namely intuitionistic) logic to classical logic.

Any theorem or proof involves only finitely many symbols, and could it be the case that you can consider the minimal logic over which it makes sense and then, since this logic has a finite nature, specialize it to some exotic, restrained logic where completely different techniques are available?

I don’t know the answer to this question, but since I have no idea how much logic you know, I’ll mention that it reminds me of the famous compactness theorem in first-order logic: a theory given by some collection of axioms has a model iff every theory given by some finite subset of those axioms has a model. This has all sorts of wonderful consequences, which you can see if you click on the link. For example, if some first-order statement is true in every field of characteristic zero, then there exists $p$ such that it holds for every field of characteristic larger than $p$.

From the above link:

One can prove the compactness theorem using Gödel’s completeness theorem, which establishes that a set of sentences is satisfiable if and only if no contradiction can be proven from it. Since proofs are always finite and therefore involve only finitely many of the given sentences, the compactness theorem follows.

Posted by: John Baez on October 20, 2011 6:16 AM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

(continued from the last comment.)
******
Suppose A is a sequence of symbols. Then A is a wff if there exists a sequence A1,…,An
(for some n ≥ 1) such that:
1) An is A
2) for every i (1≤i≤n),Ai is either:
a) a capital Roman letter; or
b) (not Aj), (Aj and Ak), (Aj or Ak), (Aj implies Ak), or (Aj if and only if Ak), for some j,k where 1 ≤ j,k and i > j,k.
******
Sorry, the copy-paste didn’t work perfectly in the last comment, so that’s the correct version.
******
A proof of S is then a sequence of wffs W1,…,Wn such that:

1) Wn is S
2) for every i (1≤i≤n), Wi is either:
a) an axiom; or
b) there exists j,k where i > j,k and Wj is (Wk implies Wi).
*******
That’s exactly the manner in which F defines its provability predicate, so it really does have one, and it really does prove its own consistency.
*******
Adding the row of stars seemed to do the trick to get the comment accepted. Apologies if it reduced the readability.
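Inductive definitions like the two above are, in effect, recognizers, and can be run directly; a toy sketch (hypothetical editorial code, not part of the system F under discussion) that checks a claimed construction sequence against clauses (a) and (b):

```python
# Toy recognizer: check that a list A1,...,An is a construction sequence
# in the sense of clauses (a) and (b) above, i.e. that it witnesses An
# being a wff.  Entries are plain strings like "(A implies B)".

CONNECTIVES = ("and", "or", "implies", "if and only if")

def is_construction_sequence(seq: list[str]) -> bool:
    for i, a in enumerate(seq):
        if len(a) == 1 and a.isupper():      # clause (a): a capital letter
            continue
        earlier = seq[:i]                    # clause (b): only j, k < i
        ok = any(a == f"(not {b})" for b in earlier) or any(
            a == f"({b} {conn} {c})"
            for conn in CONNECTIVES
            for b in earlier
            for c in earlier
        )
        if not ok:
            return False
    return True

# A witness that "(A implies (not B))" is a wff:
assert is_construction_sequence(["A", "B", "(not B)", "(A implies (not B))"])
# Not a witness: "(not B)" appears before B has been constructed.
assert not is_construction_sequence(["(not B)"])
print("recognizer behaves as the definition prescribes")
```

A proof-checker for the second definition would have the same shape, with axioms in place of capital letters and modus ponens in place of the connective clause.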

Posted by: t on October 24, 2011 4:09 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

To explain why t’s comment begins “(continued from the last comment)”: I deleted two of t’s earlier comments. The first contained insults. The second just corrected a mistake in the first.

We don’t allow insults here. Though people occasionally get passionate, this is a friendly, helpful, civilized place, and we want to keep it that way.

Posted by: Tom Leinster on October 24, 2011 5:16 PM | Permalink | Reply to this

### Re: Weak Systems of Arithmetic

Jeffrey, hmmmm… I have now answered what you claimed to be the “central” issue. One does *not* assume that the language has an ω-sequence of entities. Do you agree or not? Do you understand now that there is no incoherence in any of the claims?

“You wish to consider the set of finite sequences up to length n over an alphabet A of size k…”

I don’t wish to do anything of the sort. That’s you talking, not me, trying to fit what I’m saying into a little box. One does not make any, and does not need to make any, assumption about the cardinality of F being finite or infinite, just as one does not make any assumptions about the cardinality of N.

So let’s just review your objections.

First, you claimed that F did not distinguish between syntactic entities. OK, after a few exchanges, you had an epiphany, and you now realize that F does, in fact, distinguish between entities, unless of course you’re back to meaning “distinguish” as “proving that the entity exists”.

Second, you claim that the philosophy behind F is incoherent. Now the philosophy behind F is an agnosticism about the Successor Axiom and whether N goes on and on forever. You claimed this was incoherent because the language has an ω-sequence of entities. I presume now you understand that one does not make and does not need to make that assumption and so that the philosophy behind F is coherent.

Third, you claimed that, in any case, the system is “extreme and pointless.” Hmmm…. well, since the original question was about systems that can prove their own consistency, where’s the pointlessness? F proves its own consistency. And what’s more it’s a natural system, because it takes a well-known system, second-order deductive PA, and subtracts a few axioms. That’s about as natural as you get. And it does this and is *still* able to prove Quadratic Reciprocity and (probably) Fermat’s Last Theorem. That would, it seems, be a reasonably strong system which can prove its own consistency. Sounds interesting to me, but if you personally find it pointless, well, I can’t argue with that, because pointlessness is in the eye of the beholder.

The only possible objection is actually the one that John mentioned, which is whether F really is able to talk about its own consistency. On this it’s obvious that it can - just look at how it formulates the predicate Provable(x). Consider the case of a propositional logic, which is simpler (and so I can explain it here). In my last comment I explained how one defines what is a wff in a propositional logic:

(to be continued…)

Posted by: t on October 26, 2011 8:51 AM | Permalink | Reply to this

Post a New Comment