

May 23, 2019

Polyadic Boolean Algebras

Posted by John Baez

I’m getting a bit deeper into model theory thanks to some fun conversations with my old pal Michael Weiss… but I’m yearning for a more category-theoretic approach to classical first-order logic. It’s annoying how in the traditional approach we have theories, which are presented syntactically, and models of theories, which tend to involve some fixed set called the domain or ‘universe’. This is less flexible than Lawvere’s approach, where we fix a doctrine (for example a 2-category of categories of some sort), and then say a theory A and a ‘context’ B are both objects in this doctrine, while a model is a morphism f: A \to B.

One advantage of Lawvere’s approach is that a theory and a context are clearly two things of the same sort — that is, two objects in the same category, or 2-category. This means we can think not only about models f : A \to B, but also models g : B \to C, so we can compose these and get models g f : A \to C. The ordinary approach to first-order logic doesn’t make this easy.

So how can we update the apparatus of classical first-order logic to accomplish this, without significantly changing its content? Please don’t tell me to use intuitionistic logic or topos theory or homotopy type theory. I love ‘em, but today I just want a 21st-century framework in which I can state the famous results of classical first-order logic, like Gödel’s completeness theorem, or the compactness theorem, or the Löwenheim–Skolem theorem.

Paul Halmos’ polyadic Boolean algebras may come close to doing what I want!

Let me try to translate his idea into more modern language. I may screw up and get a concept that’s less useful, and I may not get exactly his concept. (These are two separate issues.)

The idea is roughly this. For any finite set S, which we think of as a ‘set of variables’, we have a Boolean algebra A(S) consisting of ‘predicates whose free variables are in the set S’.

For example, we might build these Boolean algebras A(S) freely, starting with a binary predicate P and a unary predicate Q. Then

A(\emptyset) = \{ \top, \bot, Q(\top), Q(\bot), P(\top,\top), \dots \}

since these are what we can build from P and Q using the operations of Boolean logic and no variables: here \top and \bot are the logical constants ‘true’ and ‘false’. We have

A(\{z\}) = \{ \top, \bot, P(z,z), Q(z), \neg P(z,z) \wedge Q(z), Q(z) \Rightarrow P(z,z), \dots \}

and so on. Similarly

A(\{x,y\}) = \{ \top, \bot, P(x,y), P(y,x), P(x,x), P(y,y), Q(x) \Rightarrow P(x,y), \dots \}
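To make this concrete, here is a minimal Haskell sketch (my illustration, not from the original post) of formulas representing elements of these free Boolean algebras; the type parameter v plays the role of the variable set S, and the Boolean algebra laws are not imposed, so distinct formulas may represent the same element:

    -- Formulas over the binary predicate P and unary predicate Q,
    -- with variables drawn from the type v (standing for the set S).
    -- Elements of the free Boolean algebra A(S) are equivalence
    -- classes of these under the Boolean algebra laws.
    data Formula v
      = Top
      | Bot
      | P v v                        -- the binary predicate
      | Q v                          -- the unary predicate
      | Not (Formula v)
      | And (Formula v) (Formula v)
      | Or  (Formula v) (Formula v)
      deriving (Show, Eq)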

Next, we want to include the ability to substitute variables. For example, if I take the function

f: \{x,y\} \to \{z\}

I want to be able to take any predicate involving the free variables x and y and get a predicate with z as its only free variable, obtained by substituting z for x (since f(x) = z) and also substituting z for y (since f(y) = z). This should give a map of Boolean algebras

A(f) : A(\{x,y\}) \to A(\{z\})

In my example, this map would send \neg P(x,y) \wedge Q(y) to \neg P(z,z) \wedge Q(z), and so on.
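In the representation sketched above, A(f) is just functoriality in the variable type:

    -- Substitution of variables along f : S -> T, i.e. the map A(f).
    substitute :: (s -> t) -> Formula s -> Formula t
    substitute _ Top       = Top
    substitute _ Bot       = Bot
    substitute f (P a b)   = P (f a) (f b)
    substitute f (Q a)     = Q (f a)
    substitute f (Not p)   = Not (substitute f p)
    substitute f (And p q) = And (substitute f p) (substitute f q)
    substitute f (Or  p q) = Or  (substitute f p) (substitute f q)

For the function f above, substitute (const "z") sends And (Not (P "x" "y")) (Q "y") to And (Not (P "z" "z")) (Q "z"), matching the example.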

So far what we’ve got is a functor

A : \mathsf{FinSet} \to \mathsf{BoolAlg}

We could generalize this to a functor

A : \mathsf{Set} \to \mathsf{BoolAlg}

if we wanted. This would be useful if we were studying infinitary logic, which allows formulas with infinitely many free variables. But let’s not go there! Not now, anyway.

Instead, I want to get quantifiers into the game. Here we can use Lawvere’s idea of quantifiers as adjoints to substitution. Each Boolean algebra is a poset, hence a category; similarly, each map of Boolean algebras is a functor. So, we can demand of

A : \mathsf{FinSet} \to \mathsf{BoolAlg}

that for each f : S \to T in \mathsf{FinSet}, the functor A(f) has both a left and right adjoint. I’m hoping that this is a reasonably good concept of a “theory of first-order classical logic”.

Just as a sanity check, if we’re doing the example above and we take g: \{x\} \to \{x,y\} to be the obvious inclusion, what’s the left adjoint of A(g)? For example, what happens if we apply this left adjoint to P(x,y)? We should get some predicate Q(x) such that

Q(x) \; implies \; S(x) \quad iff \quad P(x,y) \; implies \; S(x)

for every unary predicate S in A(\{x\}). And you can see that

Q(x) = \exists y \, P(x,y)

does the job. So the left adjoint to ‘putting a new variable y into our stock of variables’ is ‘existential quantification with respect to y’. The right adjoint should similarly give universal quantification.
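In symbols (my paraphrase, writing \exists_g and \forall_g for the left and right adjoints of A(g), and \le for the entailment ordering of the Boolean algebras), the two adjunctions read:

\exists_g(\varphi) \le \psi \;\text{ in } A(\{x\}) \quad \text{iff} \quad \varphi \le A(g)(\psi) \;\text{ in } A(\{x,y\})

\psi \le \forall_g(\varphi) \;\text{ in } A(\{x\}) \quad \text{iff} \quad A(g)(\psi) \le \varphi \;\text{ in } A(\{x,y\})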

So we seem to be getting quantifiers in our logic just by demanding that A(f)A(f) has a left and right adjoint when ff is an injection. This reminds me of something I heard about yesterday at SYCO4: Alexander Kurz was talking about nominal sets, and he said something like “the Schanuel topos is equivalent to the category of presheaves on the category of finite sets and injections”. I don’t understand what this means (unless it’s a definition), but it seems relevant.

So — slight change of tack here — let’s look at functors

A : \mathsf{FinInj} \to \mathsf{BoolAlg}

where each A(f) has both a left and right adjoint. Maybe classical first-order logic is the study of the category with such functors as objects, and natural transformations as morphisms. Let’s optimistically call this category \mathsf{Logic}, just for now. We call an object in here a theory and a morphism f : A \to B a model of A in B.

We can describe theories syntactically using generators and relations. That is, we can take a theory freely generated by some predicates, and then impose some equations, e.g. equations saying that some predicates are true (they equal \top), to get a more interesting theory. For example, we can write down the axioms of first-order Peano arithmetic this way and get a theory PA.

We can also describe theories more ‘semantically’, for use as ‘contexts’. For example, suppose I take some set V as my ‘universe’. Then for any finite set S, I can take

A_V(S) = P(V^S)

where P is the contravariant power set functor. This A_V is covariant, and I think it does what I want, though I’m worried about covariance and contravariance. Namely: A_V(n) is the set of all n-ary relations on V.
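Here is a small Haskell sketch of this semantic functor (my own, assuming the universe V is finite so quantifiers are finite joins): a predicate with variables in S is a subset of assignments S \to V, represented by its characteristic function, and substitution acts by precomposition.

    -- A predicate with free variables in s, over universe v:
    -- a subset of assignments s -> v, as a characteristic function.
    type Pred v s = (s -> v) -> Bool

    -- The covariant action A_V(f) = P(V^f): precompose assignments with f.
    substPred :: (s -> t) -> Pred v s -> Pred v t
    substPred f p a = p (a . f)

    -- Left adjoint along the inclusion s -> Maybe s that adds one fresh
    -- variable (Nothing): existentially quantify it out, ranging over
    -- the elements of the universe given as a finite list.
    existsVar :: [v] -> Pred v (Maybe s) -> Pred v s
    existsVar universe p a = any (\y -> p (maybe y a)) universe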

Then I believe a model of PA with universe V, as normally defined in model theory, is the same as what I’m calling a model of PA in A_V, meaning a morphism f: PA \to A_V in \mathsf{Logic}.

It seems to be working, but I haven’t carefully checked. Am I making a mistake? Has this already been studied?

(I feel sure one or the other of those will be true.)

Also: what happens if we instead look at functors

A : \mathsf{FinSet} \to \mathsf{BoolAlg}

where each A(f) has both a left and right adjoint? I haven’t checked, but I have a feeling that this is about first-order logic with equality.

And finally, just by the way, we can reformulate a functor

A : \mathsf{FinInj} \to \mathsf{BoolAlg}

as a presheaf on \mathsf{FinInj} taking values in Stone spaces. This is sort of cute.

Posted at May 23, 2019 10:56 PM UTC



Re: Polyadic Boolean Algebras

There is a syntax error after “For example, if I have a function…”

Posted by: Jeremy on May 24, 2019 1:08 AM | Permalink | Reply to this

Re: Polyadic Boolean Algebras

Fixed, along with dozens of other mistakes!

Posted by: John Baez on May 24, 2019 1:56 AM | Permalink | Reply to this

Trimble and Dolan

Mike has quite a vision of what an n-theory should be, but staying closer to your thoughts, you might recall Todd Trimble and James Dolan’s work (I and II).

Posted by: David Corfield on May 24, 2019 7:17 AM | Permalink | Reply to this

Re: Trimble and Dolan

Oh, great! The ideas in my post were based on dimly-remembered conversations with James Dolan (who usually complains when I credit him for my distorted accounts of his thoughts, so I didn’t). But I hadn’t remembered these writeups by Todd, which go quite a bit further than I just did.

Why didn’t I remember them? I wasn’t so interested in polyadic Boolean algebras back then. But now that I’m learning larger chunks of model theory, presented in the traditional way, I’m feeling that a more ‘doctrinal’ approach might clarify a puzzle that’s bugging me.

Three differences between Todd’s setup and mine:

  • I include left and right adjoints; he includes just left adjoints because right adjoints follow via \forall = \neg \exists \neg.

  • He includes the Beck–Chevalley condition; I hadn’t noticed that this is essential.

  • He uses functors A \colon \mathsf{FinSet} \to \mathsf{BoolAlg} while I switched to A \colon \mathsf{FinInj} \to \mathsf{BoolAlg} after noticing that my main example of ‘variable substitution’ only involves injections. Todd’s main example also involves just injections! I had a hunch that allowing general functions would give us first-order logic with equality, and Todd confirms this.

It would be fun to investigate both the A \colon \mathsf{FinSet} \to \mathsf{BoolAlg} and the A \colon \mathsf{FinInj} \to \mathsf{BoolAlg} approaches, and how they’re related; can you do a Kan extension to get a theory of the second kind from any of the first kind?

What I really want to do, though, is apply polyadic Boolean algebras to models of Peano arithmetic.

Posted by: John Baez on May 24, 2019 5:41 PM | Permalink | Reply to this

Re: Trimble and Dolan

I was going to respond in more detail, saying more about the hyperdoctrine aspect which Mike has commented on, but I didn’t find the time today. I can find more time later.

With regard to Peano arithmetic, etc., you might want to check out the nLab page on the Gödel incompleteness theorem (which needs further improvement, but I wrote a lot of it). It develops some of the ideas within the framework of hyperdoctrines.

In particular, there’s a claim on that page which I think is correct, although I haven’t tried to write out the details and I’ve never seen any, that says that primitive recursive arithmetic (PRA) can be characterized algebraically as initial among Lawvere theories whose generating object (the object X whose powers exhaust the objects of the theory) is a parametrized natural numbers object. If you like Lawvere theories, then that sort of description should have some appeal – assuming of course that it’s correct!

Posted by: Todd Trimble on May 24, 2019 9:38 PM | Permalink | Reply to this

Re: Trimble and Dolan

I really like that description of primitive recursive arithmetic as the initial Lawvere theory on a parametrized natural numbers object, so I’d like to either find a proof or help prove it. But first I should make sure I really understand it!

I don’t know what a “parametrized” natural numbers object is. Also, the definition of “natural numbers object” involves a certain universal property, a kind of initiality of its own. So to me, in my benighted state, “initial Lawvere theory on a natural numbers object” sounds a bit like “initial Lawvere theory on an initial monoid object”. I guess that does parse: there are various Lawvere theories for which the generating object X is a monoid; in some of these theories X is an initial monoid, and there may be an initial such theory. But it’s a bit scary… do we really want this sort of “double initiality”?

Posted by: John Baez on May 25, 2019 7:24 PM | Permalink | Reply to this

Re: Trimble and Dolan

If I were explaining the point of “parametrized” natural numbers object to a student, I might first set the exercise of constructing the addition operation on a natural numbers object in a topos (or at least a cartesian closed category), and then ask the student to attempt to do the same thing when we drop the assumption of exponentials. The point of the exercise is that the notion of natural numbers object isn’t all that useful outside of the cartesian closed context, but we can regain our losses if we modify the definition and work instead with a parametrized natural numbers object.

So you know what a natural numbers object is: it’s given by data (N, 0: 1 \to N, s: N \to N) such that for any other such data (X, x: 1 \to X, f: X \to X), there is a unique map \phi: N \to X which preserves the data. For the exercise, we know what to do in principle: addition is defined by recursion according to the rules

m + 0 = m, \qquad m + s(n) = s(m + n).

So we are defining functions - + 0 and - + s(n) according to these rules, which are elements in the function space N^N. Therefore, to get addition on N in a topos, take X to be the exponential N^N, with initial point the identity 1_N: 1 \to N^N, and with the endofunction

f = s^N: N^N \to N^N

which takes an element g: N \to N of N^N to s \circ g: N \to N. Then the unique “algebra map” \phi: N \to N^N is exactly the map which implements the recursive rules above. When we take its transform

N \times N \to N

using the product-exponential adjunction, we get exactly the addition operation on N. Similar tricks allow us to define multiplication, exponentiation, etc., on N in a topos.
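As a sanity check, here is a toy Haskell rendering of this derivation (my own sketch, not part of the original exercise): the NNO’s universal property becomes a fold, the exponential N^N becomes the function type Nat -> Nat, and currying plays the role of the transform.

    data Nat = Z | S Nat

    -- The universal property of the NNO: the unique algebra map out of
    -- (Nat, Z, S) into any algebra (x, point, endo).
    foldN :: x -> (x -> x) -> Nat -> x
    foldN point _    Z     = point
    foldN point endo (S n) = endo (foldN point endo n)

    -- phi : N -> N^N with phi(0) = id and phi(s n) = s . phi(n) ...
    phi :: Nat -> (Nat -> Nat)
    phi = foldN id (S .)

    -- ... whose uncurried transform N x N -> N is addition:
    add :: Nat -> Nat -> Nat
    add m n = phi n m   -- satisfies m + 0 = m and m + s(n) = s(m + n)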

However, if we deny ourselves the luxury of cartesian closure, we’re really not going to be able to get off the ground in the same way to define addition. But we can simulate the way we used cartesian closure in terms of just products, somewhat similarly to how Yanofsky (and I think Lawvere as well) show how to rephrase the Lawvere fixed-point theorem in a way that circumvents the need for exponentials.

Thus, in a category with finite products, a parametrized natural numbers object is again given by data (N, 0: 1 \to N, s: N \to N), such that for any objects A, X and morphisms f: A \to X and g: X \to X, there exists a unique \phi_{f, g}: A \times N \to X which makes a certain evident diagram commute:

\array{ A \times 1 & \stackrel{1_A \times 0}{\to} & A \times N & \stackrel{1_A \times s}{\to} & A \times N \\ & \searrow \mathrlap{f} & \downarrow \mathrlap{\phi_{f, g}} & & \downarrow \mathrlap{\phi_{f, g}} \\ & & X & \underset{g}{\to} & X }

(with a slight abuse of notation for the diagonal arrow). To some extent, this redefinition allows one to proxy for algebras of type X^A, although one can’t quite simulate the full power of what one can do in a topos with NNO. For example, one can’t quite create Ackermann functions just with parametrized NNO’s.
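In the same toy Haskell setting as above, the parametrized universal property is the following recursor (again only a sketch); the diagram says exactly that \phi_{f,g} is the unique function satisfying these two equations:

    -- The parametrized recursor phi_{f,g} : A x N -> X determined by
    -- f : A -> X and g : X -> X via the commuting diagram above.
    prec :: (a -> x) -> (x -> x) -> (a, Nat) -> x
    prec f _ (a, Z)   = f a
    prec f g (a, S n) = g (prec f g (a, n))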

Nevertheless, one can do primitive recursive arithmetic (and in fact if you look up the definition of primitive recursive function, it’s practically begging to be described in such diagrammatic terms). Regarding the “double initiality”, what I would say is that PRA, à la Lawvere theories, is the walking parametrized NNO in the doctrine of categories with finite products. So one just has to imagine a category with finite products that has a parametrized NNO, but with no extra structure or conditions whatsoever beyond that. The objects are just finite powers N^m, and the rest I leave to your imagination.

My intuition says this has to work to define primitive recursive arithmetic, but the usual presentations of PRA I’ve so far found a little too unpleasant to sit down and write out the proof in those terms. But maybe this discussion will goad me into giving it more of a try.

Posted by: Todd Trimble on May 25, 2019 11:31 PM | Permalink | Reply to this

Re: Trimble and Dolan

An example of a category which has a NNO, but where the NNO is not parametrized, is that of compact Hausdorff spaces. The NNO in this category is the Stone-Cech compactification of the discrete natural numbers. The book “Algebra in the Stone-Cech compactification” studies, in particular, this object.

Posted by: Martin Escardo on May 28, 2019 1:49 PM | Permalink | Reply to this

Re: Trimble and Dolan

Nice example; thanks. The category of compact Hausdorff spaces is a pretopos, but evidently not an arithmetic pretopos according to your comment. Paul Taylor made an interesting comment about this situation at MO.

Just a small note, maybe only to myself, that if C has an NNO N and F: C \to D is a left adjoint that preserves the terminal object, then F(N) is an NNO in D. This applies in particular to \beta: Set \to CompHaus, so that \beta \mathbb{N} is indeed the NNO in CompHaus.

Posted by: Todd Trimble on May 28, 2019 3:06 PM | Permalink | Reply to this

Re: Trimble and Dolan

Yes, that’s right, Todd. As you probably also realized, what is behind the lack of parametricity is the fact that the Stone-Cech compactification functor doesn’t preserve binary products. (Interestingly, Hewitt, in a 1948 publication, claimed that it would preserve them.) The book I mentioned doesn’t refer to natural numbers objects or categories, but it has all that is needed to derive the lack of parametricity. If I am not mistaken, the book also shows that it is not possible to extend addition on the natural numbers to the compactification in a commutative way. But with parametricity you would be able to define a commutative addition on any natural numbers object in any category.

Posted by: Martin Escardo on May 29, 2019 2:11 PM | Permalink | Reply to this

Re: Trimble and Dolan

That’s a really interesting example, Martin! Thanks.

Posted by: John Baez on May 29, 2019 5:06 AM | Permalink | Reply to this

Re: Trimble and Dolan

Thanks for the explanation of a parametrized natural numbers object, Todd! That’s very clear. It seems plausible that all you can get from such a thing is primitive recursive arithmetic. However, before I can claim to understand this I’ll need to see how to get the Ackermann function (or more precisely, the internalized analogue of that) out of a natural numbers object in a cartesian closed category.

My intuition says this has to work to define primitive recursive arithmetic, but the usual presentations of PRA I’ve so far found a little too unpleasant to sit down and write out the proof in those terms.

Maybe it’ll help to use Goodstein’s logic-free approach to PRA. I don’t know much about this, but Wikipedia says:

It is possible to formalise PRA in such a way that it has no logical connectives at all — a sentence of PRA is just an equation between two terms. In this setting a term is a primitive recursive function of zero or more variables. In 1941 Haskell Curry gave the first such system. The rule of induction in Curry’s system was unusual. A later refinement was given by Reuben Goodstein. The rule of induction in Goodstein’s system is:

{F(0) = G(0) \quad F(S(x)) = H(x,F(x)) \quad G(S(x)) = H(x,G(x)) \over F(x) = G(x)}

Here x is a variable, S is the successor operation, and F, G and H are any primitive recursive functions which may have parameters other than the ones shown. The only other inference rules of Goodstein’s system are substitution rules, as follows:

{ F(x) = G(x) \over F(A) = G(A)} \qquad {A = B \over F(A) = F(B)} \qquad {A = B \quad A = C \over B = C}

Here A, B and C are any terms (primitive recursive functions of zero or more variables). Finally, there are symbols for any primitive recursive functions with corresponding defining equations, as in Skolem’s system above.

In this way the propositional calculus can be discarded entirely. Logical operators can be expressed entirely arithmetically […]

I’m not sure how much this helps; it’s a further stripping-down of a quantifier-free approach to PRA that has a symbol for each primitive recursive function. I guess more generally my point is that people have already massaged primitive recursive arithmetic into various forms, and maybe one of these would reduce the amount of busywork required to prove that the beautiful categorical approach is equivalent to all the others. Avoiding quantifiers seems like a good thing if one is trying to turn something into a Lawvere theory.

Posted by: John Baez on May 26, 2019 3:40 AM | Permalink | Reply to this

Re: Trimble and Dolan

According to the nLab page (which, if wrong, is my fault), the Ackermann function A: N \to N is defined as follows: A(n) = A_n(n), where A_n is defined according to the recursive rules

  • A_0(m) = 2m.
  • A_{n+1}(m) = A_n^m(1), where the exponent m refers to iterated composition (transcribed in the sketch just below).
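Transcribed directly into Haskell (a sketch, using Integer for readability; recall that iterate f x !! m computes f^m(x)):

    -- a n m computes A_n(m) by the two rules above.
    a :: Integer -> Integer -> Integer
    a 0 m = 2 * m
    a n m = iterate (a (n - 1)) 1 !! fromIntegral m   -- A_{n-1}^m(1)

    -- The Ackermann function itself: A(n) = A_n(n).
    ackermann :: Integer -> Integer
    ackermann n = a n n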

It’s a really good but slightly tricky exercise to define this internally in a cartesian closed category with NNO. If you want to think about this some on your own, then read no further, but here is my solution.

To prevent it from looking really complicated, one should first prove the lemma that in a cartesian closed category, an NNO is a parametrized NNO. My previous comment hints at how the proof would go: given data f: A \to X and g: X \to X, form an algebra Y = (X^A, name_f: 1 \to X^A, g^A: X^A \to X^A), and then there’s a unique algebra map N \to X^A, which transforms to \phi_{f, g}: A \times N \to X, etc., etc.

By far the trickiest part of the exercise is figuring out how to define the map

N^N \times N \to N^N: (h, m) \mapsto h^m

using the NNO property. For this, put A = N^N, X = N^N \times N^N, and define f: A \to X by

A \times 1 = N^N \times 1 \stackrel{1_{N^N} \times name_{1_N}}{\to} N^N \times N^N = X

and g: X \to X by

(\pi_1, comp): N^N \times N^N \to N^N \times N^N

which takes a pair (h, k) to (h, h \circ k). The map \phi: A \times N \to X, according to the definition of parametrized NNO, must satisfy the recursion

\phi(h, 0) = (h, 1_N), \qquad \phi(h, m+1) = (h, h \circ \phi(h, m))

and the only thing that works is \phi(h, m) = (h, h^m). So then the composite

N^N \times N \stackrel{\phi}{\to} N^N \times N^N \stackrel{\pi_2}{\to} N^N

is the map (h, m) \mapsto h^m.

Then the map \psi: N^N \to N^N that takes h to \lambda m. h^m(1) (i.e. to m \mapsto h^m(1)) is the transform of the composite

N^N \times N \stackrel{\phi}{\to} N^N \times N^N \stackrel{\pi_2}{\to} N^N \stackrel{eval_1}{\to} N

and so we get an algebra structure (N^N, A_0: 1 \to N^N, \psi: N^N \to N^N). The unique algebra map \chi: N \to N^N for this structure takes n to A_n, according to the recursive definition of the A_n. This map transforms to a map \widehat{\chi}: N \times N \to N, and the Ackermann function is the composite

N \stackrel{\Delta}{\to} N \times N \stackrel{\widehat{\chi}}{\to} N.
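Here is a Haskell shadow of the whole construction (my transcription, reusing the Nat and foldN sketched earlier in the thread, with the discipline that a natural number is only ever consumed through foldN; everything else is cartesian-closed plumbing):

    -- phi, transformed: power h m = h^m, defined by the NNO property at N^N.
    power :: (Nat -> Nat) -> Nat -> (Nat -> Nat)
    power h = foldN id (h .)

    -- psi(h) = \m -> h^m(1): the step taking A_n to A_{n+1}.
    psi :: (Nat -> Nat) -> (Nat -> Nat)
    psi h m = power h m (S Z)

    -- chi(n) = A_n, with A_0(m) = 2m; the Ackermann function is n |-> A_n(n).
    ack :: Nat -> Nat
    ack n = foldN (foldN Z (S . S)) psi n n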

Posted by: Todd Trimble on May 26, 2019 4:31 PM | Permalink | Reply to this

Recursive functions

Todd wrote:

According to the nLab page (which, if wrong, is my fault), the Ackermann function A: \mathbb{N} \to \mathbb{N} is defined as follows…

There are quite a few different Ackermann-ish functions in common use, which all share the same key features… so if I can implement any one of these in a cartesian closed category with natural numbers object, I’ll be happy. Thanks for writing up a solution; I won’t look at it yet, since I should do this myself.

This raises a question. The class \mathcal{C} of functions f: \mathbb{N} \to \mathbb{N} definable using only the fact that \mathbb{N} is a natural numbers object in a cartesian closed category seems to be a very natural class lying between the primitive recursive functions and the recursive functions. We’ve just seen that this class \mathcal{C} is strictly bigger than the primitive recursive functions; is it strictly smaller than the recursive functions?

I have two contradictory intuitions here. On the one hand, the untyped lambda calculus is powerful enough to define all recursive functions, and the lambda calculus is all about cartesian closed categories. So we might jump to the conclusion that \mathcal{C} is all the recursive functions.

On the other hand, it’s undecidable whether a lambda-expression defines a total function, while the functions in \mathcal{C} are all total, and there seems to be no difficulty (in principle) in listing all these functions. Furthermore, the untyped lambda calculus is all about an object X in a cartesian closed category with a morphism X \to X^X, not a natural numbers object. We encode natural numbers in here as Church numerals, but that’s quite a different game.

So, currently I’m thinking the class \mathcal{C} lies strictly between the primitive recursive functions and the recursive functions. It seems like a very natural class. What is it?

Posted by: John Baez on May 27, 2019 6:43 PM | Permalink | Reply to this

Re: Recursive functions

Hi, if I recall correctly, the free ccc with an nno has System T as its internal language, and so the definable functions \mathbb{N} \to \mathbb{N} are the ones whose termination is provable in Peano arithmetic, i.e. its ordinal strength is \epsilon_0. E.g. the Goodstein function is not a morphism in the free ccc with an nno.

Posted by: Sam Staton on May 27, 2019 8:13 PM | Permalink | Reply to this

Re: Recursive functions

Super-cool observation, Sam!

Posted by: Todd Trimble on May 27, 2019 8:35 PM | Permalink | Reply to this

Re: Recursive functions

Excellent, Sam! This is just what I was hoping for: \mathcal{C} contains lots more recursive functions than just primitive recursive ones, but still not all.

I was just about to quote the article on ‘Categorical Logic’ in the Encyclopedia of Mathematics, written by Lambek, which says:

The arrows N^k \to N in the free Cartesian closed category with natural numbers object describe those partial recursive functions which can be proved to be universally defined.

That’s fatally ambiguous: “proved how?” Now I know… at least if Sam’s memory is to be trusted.

It’s really interesting how Peano arithmetic enters here, but not all functions definable in Peano arithmetic, just the partial recursive functions that are provably total in Peano arithmetic.

Posted by: John Baez on May 27, 2019 9:27 PM | Permalink | Reply to this

Re: Recursive functions

My general intuition is that it’s impossible to have any “algebraic” characterization of just the total recursive functions, because of the halting problem. As soon as you have a system that can define all the total recursive functions, it becomes subject to the halting problem and therefore also defines some non-total recursive functions. Conversely, therefore, any system in which all definable functions are total — such as a CCC with NNO — must exclude some total recursive functions.

Maybe this would be a good place to ask a related question: I have some vague memory that the “totality” of a partial recursive function is model-dependent. That is, that there are “codes” for partial recursive functions that turn out to be total in Set but might only be partial in some other categories. But I’ve been unable to reconstruct such an example or find a citation, and now I’m doubting whether my memory is true. Can anyone here help?

Posted by: Mike Shulman on May 27, 2019 10:38 PM | Permalink | Reply to this

Re: Recursive functions

BTW, I’ve seen this class of functions referred to as “higher-order primitive recursive” or perhaps “primitive recursive in higher types”, e.g. here. The idea being, of course, that the relevant notion of recursion is still “primitive”, while we get additional power by allowing the codomain to be something like N^N instead of N.

Posted by: Mike Shulman on May 27, 2019 10:42 PM | Permalink | Reply to this

Re: Recursive functions

Mike wrote:

My general intuition is that it’s impossible to have any “algebraic” characterization of just the total recursive functions, because of the halting problem. As soon as you have a system that can define all the total recursive functions, it becomes subject to the halting problem and therefore also defines some non-total recursive functions.

Right. This is why I felt sure (after some dithering) that this class \mathcal{C} of functions couldn’t be anywhere near all the total recursive functions. Plus, to get all total recursive functions it seems you need some ability to define them by ‘cases’, using some ‘if-then’ logic, and a cartesian closed category with natural numbers object doesn’t seem smart enough to do that.

I’m really happy to hear a name for this class \mathcal{C}, like “higher-order primitive recursive functions”. They seem like a really nice class of functions, smack dab between the primitive recursive ones and the recursive ones, but leaning toward the side of being “easy to deal with”: none of this unsolvable halting problem stuff intervenes.

Maybe this would be a good place to ask a related question: I have some vague memory that the “totality” of a partial recursive function is model-dependent.

That sounds believable. I’m having trouble cooking up a proof.

It reminds me of Joel David Hamkins’ result that any function f : \mathbb{N} \to \mathbb{N} is computable in some nonstandard model of the natural numbers:

Here, please note, I’m using \mathbb{N} to mean the standard natural numbers!!! So, I’m saying that for any function from the standard numbers to themselves, there’s some model of the natural numbers in which there’s some (perhaps nonstandard) number that’s the Gödel number of a Turing machine that computes f(n) (perhaps in a nonstandard number of steps) when you feed it a standard number n.

However, your question is different.

Posted by: John Baez on May 27, 2019 11:14 PM | Permalink | Reply to this

Re: Recursive functions

Hmm, reading about computing the uncomputable makes me wonder if something like this would work. Define a partial recursive function f as follows. To compute f(n), enumerate all proofs in ZFC that are shorter than n (of which there are finitely many). If any of them ends with the conclusion 0 = 1, enter an infinite loop; otherwise, set f(n) = 0. Then this is a total function if and only if ZFC is consistent. Since Con(ZFC) is true in some models and false in others, this function is total in some models and partial in others.
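Schematically, in Haskell (a sketch only: the proof-search ingredients are hypothetical stubs, since actually enumerating ZFC derivations is beside the point here):

    -- Hypothetical stubs: a real version would enumerate the finitely
    -- many ZFC derivations of length < n and check their conclusions.
    type Proof = String

    proofsShorterThan :: Int -> [Proof]
    proofsShorterThan _ = []          -- stub

    provesFalsum :: Proof -> Bool     -- does this proof conclude 0 = 1?
    provesFalsum _ = False            -- stub

    f :: Int -> Int
    f n | any provesFalsum (proofsShorterThan n) = f n   -- diverge forever
        | otherwise                              = 0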

Posted by: Mike Shulman on May 28, 2019 9:44 AM | Permalink | Reply to this

Re: Recursive functions

Your idea sounds good. I’d rather keep things simple and use PA.

There is a partial recursive function f: \mathbb{N} \to \mathbb{N} of the sort you mention, where f(n) is undefined if there’s a proof of length n that 0 = 1, and f(n) = 0 otherwise.

Assume PA is consistent, so it has models. Then there will be models where Con(PA) holds and models where it doesn’t.

In a model of PA where Con(PA) holds, for no n is there a proof of length n that 0 = 1, so f is total. In a model of PA where \neg Con(PA) holds, f is partial.

Well-known puzzle: if PA is consistent, how can there be a model in which for some n there’s a proof of length n that 0 = 1? Usual answer: this n is ‘nonstandard’.

Moving right along, consider the other case, where PA is inconsistent. In this case it seems clear that f is partial, since there’s a proof in PA that 0 = 1.

More precisely: in this case f is partial in all models. Quick proof: in this case PA has no models! So in fact f is both partial in all models and total in all models.

Posted by: John Baez on May 29, 2019 7:18 AM | Permalink | Reply to this

Cody

A little banal, but the set of functions \mathcal{C} is going to be the fast-growing functions from the Wainer hierarchy up to \varepsilon_0, which are defined almost identically to primitive recursive functions, but iterating the definition beyond \omega.
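At finite levels this is easy to sketch in Haskell; one common convention for the fast-growing hierarchy is F_0(n) = n + 1 and F_{k+1}(n) = F_k^{n+1}(n). Climbing up to \varepsilon_0 requires ordinal notations and fundamental sequences, which this sketch (mine, not Cody’s) omits.

    -- Fast-growing hierarchy at finite levels (one common convention);
    -- fgh k is roughly comparable to k-fold-nested primitive recursion.
    fgh :: Int -> Integer -> Integer
    fgh 0 n = n + 1
    fgh k n = iterate (fgh (k - 1)) n !! fromIntegral (n + 1)   -- F_{k-1}^{n+1}(n)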

Posted by: Cody on June 28, 2019 3:49 PM | Permalink | Reply to this

Re: Polyadic Boolean Algebras

I wonder if one or other axiomatisation of Quine’s predicate functor logic may be of use? It proves exactly what FOPC does and avoids reference to variables, which might be useful, since presumably it is in the context B rather than in the theory A that you might care what the ‘individuals’ are?

Posted by: degsy on May 24, 2019 10:01 AM | Permalink | Reply to this

Re: Polyadic Boolean Algebras

I am a model theorist, who tends to think in a moderately category-theoretic way. I currently have a PhD student who is generalising some theorems of model theory to a category-theoretic setting. However this is in “modern model theory”, in particular stability theory building on Shelah’s work, not in classical predicate logic. In stability theory I think people do often think in quite a category-theoretic way, even if they typically do not use category theoretic language. For example: definable sets are functors from the category of models to the category of sets, the definable subsets of a given definable set form a boolean algebra, and existential quantification corresponds to certain morphisms between these boolean algebras. This is more-or-less Halmos’s polyadic Boolean algebra setup. In examples it is very useful for seeing and using the geometry (for example complex algebraic or analytic geometry) implicit in some problem, but one side-effect is that you often lose the logical syntax, which can also be a useful tool.

On the other hand you seem to want to retain the syntax in some form, but what you end up with seems very similar to the syntactic category of a theory, which is basically the topos-theoretic approach.

Posted by: Jonathan Kirby on May 24, 2019 2:01 PM | Permalink | Reply to this

definable sets are functors from the category of models to the category of sets…

This sounds like the approach of Awodey and Forssell in First-Order Logical Duality, see, e.g., p. 8, in establishing their duality result between theories and groupoids of models.

This was taken in a more algebraic geometric direction by Spencer Breiner in Scheme representation for first-order logic.

Posted by: David Corfield on May 24, 2019 3:13 PM | Permalink | Reply to this

Re: Polyadic Boolean Algebras

Jonathan wrote:

On the other hand you seem to want to retain the syntax in some form, but what you end up with seems very similar to the syntactic category of a theory, which is basically the topos-theoretic approach.

I want the theories in first-order classical logic, and the contexts in which the theories have models, to be structures of the same sort. This means I need the ability to ‘present’ these structures syntactically, basically by giving generators and relations (that is, a signature and some axioms). However, I want the choice of presentation to be subsidiary to the thing itself — as in group theory, where the concept of ‘group’ comes before the concept of ‘presentation of a group’.

This attitude is straight out of Lawvere’s 1963 thesis, so I’d be amazed if someone hasn’t already done what I described in my blog post: think of first-order classical theories as functors

A : \mathsf{FinInj} \to \mathsf{BoolAlg}

such that each map A(f) has a left and right adjoint.

I’d be slightly happier if these theories were categories with some extra structure, but I’m not sure what kind of categories we’d get. Certainly not topoi, since those give higher-order logic.

Posted by: John Baez on May 24, 2019 5:14 PM | Permalink | Reply to this

Re: Polyadic Boolean Algebras

The standard categorical semantics for first-order intuitionistic logic is either a first-order hyperdoctrine or a Heyting category, depending on whether you want to demand that propositions be identified with subobjects (hence that “comprehensions” exist). For classical logic you can just impose excluded middle, getting a Boolean hyperdoctrine or Boolean category.

These are for multi-sorted first-order logic, which I would say is the categorically sensible level of generality. If you want to restrict to single-sorted logic, then you’d need to impose some “generated by a single object” condition analogous to the one that identifies Lawvere theories among categories with products (and props among symmetric monoidal categories). In the hyperdoctrine case (which is closer to syntax, making it easier), probably you could just say that the base category is a Lawvere theory (the equational theory of the function symbols in your theory). Since FinSet^{op} is the initial Lawvere theory, your version would amount to saying there are no function symbols.

And as you say, having adjoints to restriction along all morphisms in the base category gives you a hyperdoctrine with equality. If instead you require only adjoints to restriction along product projections (in the initial case, these are FinInj^{op} inside FinSet^{op}), you get quantifiers but not equality. But note that in the latter case, you still need restrictions along non-projections: you want to be able to substitute the same variable for more than one other variable.

Posted by: Mike Shulman on May 24, 2019 5:44 PM | Permalink | Reply to this

Re: Polyadic Boolean Algebras

Thanks for the tour of concepts! This helps a huge amount. For the particular project I’m fiddling with, I want a setup that has exactly the same “strength” as classical first-order logic, so I can freely apply results from that subject. I get the feeling the Boolean hyperdoctrines are such a setup.

Are Boolean categories? I see the conceptual advantages of introducing subobjects that correspond to propositions. But I wouldn’t want to increase the “strength” of my setup without noticing it. Presumably every Boolean hyperdoctrine freely generates a Boolean category. Can the latter be “stronger”?

Well, what does “stronger” mean? For one thing, you could ask if going to the Boolean category introduces more predicates on the objects that existed already. For current purposes, I’d hope not.

Posted by: John Baez on May 26, 2019 9:40 PM | Permalink | Reply to this

Re: Polyadic Boolean Algebras

I can’t think of a reference at the moment, but I expect that going to the Boolean category doesn’t introduce any more predicates. The objects of the Boolean category are formal comprehensions \{x:A \mid P(x)\}, and a morphism \{x:A \mid P(x)\} \to \{y:B \mid Q(y)\} in the Boolean category is a relation R(x,y) that’s total and functional and respects the predicates P and Q. If Q = \top and such a morphism is mono, then \exists x:A. R(x,y) (using logical notation for the hyperdoctrine adjoints) ought to be a predicate on B in the hyperdoctrine that’s equivalent to this subobject in the Boolean category.

Posted by: Mike Shulman on May 27, 2019 5:15 PM | Permalink | Reply to this

Löwenheim-Skolem

I recently heard “Guitart-Lair’s locally free diagram lemma” referred to as a categorial Löwenheim-Skolem (Ageron 1992, Guitart-Lair 1981), and am keen to try to make sense of this statement, but I have not had much luck so far in understanding the significance of a locally free diagram, not to mention the lemma itself. At the very least, it does seem like the framework of accessible categories is the right kind of setting for an LS-type theorem.

Posted by: Matt Earnshaw on May 24, 2019 6:05 PM | Permalink | Reply to this

Re: Polyadic Boolean Algebras

I wish I could add something constructive specific to the post, but don’t know enough CT to do that. That said, I know about polyadic Boolean algebras and (Tarski’s) cylindric algebras. Monk’s 1976 textbook Mathematical Logic describes the latter in some detail. There’s also Quine’s “predicate-functor logic” which eliminates variables (so formulas are generated from the n-ary predicates alone and various operators which implicitly add, delete, permute, etc. variables). There’s a large field of algebraic logic, a central notion of that being the Lindenbaum-Tarski algebra of a logic/theory. A paper that might be useful to take a look at, and that’s developed mostly category-theoretically, examining “concept algebras” (of theories; basically, their Lindenbaum algebras), categories of models and theories, and morphisms between them, is

T. Gergely 1981, “Algebraic representation of language hierarchies”, Acta Cybernetica.

(I don’t really understand it very well at all.)

Albert Visser has a 2004 paper

“Categories of theories and interpretations”

https://dspace.library.uu.nl/handle/1874/26909

and some of the recent papers have usually phrased these things categorically.

If I get the main drift of John’s post, the aim is to “invariantize” the concept of a theory – so it’s not dependent on some specific “arbitrary choice”. I’d say that the central obstacle to doing that is signature. A theory is usually a set of closed formulas in a language L_S over some signature S; but variables can be eliminated (using Quine’s method); so variables are an annoying arbitrary choice.

But this isn’t obviously so with signature. Even so, theories (and models) in different signatures can be interpreted into one another or be definitionally equivalent. This is why interpretability and bi-interpretability play a big role. For example, the system V_{\omega} of hereditarily finite sets and \mathbb{N} with + and × are definitionally equivalent; and the standard (incomplete) theories of them are definitionally equivalent (these are “finite ZF” and PA). Similarly, the free monoid on two generators is definitionally equivalent to arithmetic with + and ×, as are the two (incomplete) theories for them (these are the theory of concatenation over two symbols along with induction for strings; and PA). Related to this is Morleyization, where the starting signature is expanded to have primitive symbols for all complex formulas, so every “concept” available or definable is now expressed using a primitive predicate symbol. So, the choice of the original signature is to some extent arbitrary, since, loosely speaking, different choices of starting signature will lead to the same collection of definable sets.

Posted by: Jeff Ketland on May 25, 2019 9:30 AM | Permalink | Reply to this

Re: Polyadic Boolean Algebras

So poking around a little, there seems to be a decent literature already on the theme of nice categorical frameworks for things like Gödel incompleteness. Somewhat frustratingly though, some of the seminal documents, by Joyal and Wraith among others, remain unpublished.

Among the important players in this area (which also include Paul Taylor, Robin Cockett, Steve Vickers, probably also Martin Hyland, and also Roland and Morrison who wrote master’s theses in this area), a name which frequently comes up in more recent (published) studies is Maria Maietti, who has explored in considerable depth some details of Joyal’s program, the one dating from the lectures he gave in 1973, that were intended to give a categorical proof of Gödel incompleteness. Most of my summary below is in fact extracted from Maietti’s article Joyal’s arithmetic universe as list-arithmetic pretopos.

I learned from her articles that a category in which all objects are finite powers of a parametrized natural numbers object is often called a Skolem category, and also that the initial Skolem category (which I referred to above as the “walking parametrized NNO”, using the jazzy word “walking” which is probably not to everyone’s taste) was proven by Wraith in unpublished notes to be equivalent to PRA = primitive recursive arithmetic. If I understand correctly, Wraith’s notes were more generally a working out of the Joyal theory. But regarding the initial Skolem category being essentially the same as PRA, Maietti gives a proof of her own using type theory.

She describes Joyal’s program as first of all developing a theory of “arithmetic universes”, which roughly speaking are pretoposes in which one can do recursion. As you may know, a pretopos is a categorical environment in which to internalize first-order logic (as opposed to toposes which internalize higher-order logic): basically a pretopos is a category with the finitary exactness properties of a topos as given by Giraud’s theorem (finitely complete and cocomplete, extensive, Barr-exact) but not necessarily with power objects or function spaces which belong to higher-order logic.

What the general notion of “arithmetic universe” should be seems to be a subject of technical debate. Maietti’s thesis is that what she calls list-arithmetic pretoposes are the right notion. These are pretoposes where for every object A there is a parametrized list object A^\ast. These are a priori a little stronger than arithmetic pretoposes, which are pretoposes with a parametrized NNO, although it’s apparently an open question whether arithmetic pretoposes are list-arithmetic pretoposes. But what everyone seems to agree on is that what one crucially wants, at least to complete the development of Joyal’s program in some generality, is enough recursion to be able to construct free internal categories, and Maietti shows that her list-arithmetic pretoposes certainly get the job done. I think another of her points is that list-arithmetic pretoposes are local, i.e., slices of such are again such. Apparently arithmetic pretoposes are also local, but according to Maietti they might not be enough to do everything one wants in the Joyal theory.

Whatever they should be in general, Joyal shows how to construct an arithmetic universe starting from any Skolem category S (for example, the initial one PRA), as follows. First form its category of “predicates”. A predicate is more or less a morphism P: A \to N whose values are only 0 and 1; these can be defined as multiplicatively idempotent maps, P \cdot P = P, once multiplication on N has been defined. Secretly we think of such P as subobjects. Predicates on A are naturally ordered and form a Boolean algebra. (And so if you care, you could use these to manufacture a Boolean hyperdoctrine over S as base category.)

One may go on to define morphisms of predicates, thus defining a category Pred(S) starting from any Skolem category S, actually a Boolean category. Joyal proved that the category of predicates has some very nice properties:

  • It is a regular category with the additional nice property that the epis of (regular epi)-mono factorizations are split.
  • It is an extensive category, meaning it has disjoint coproducts which are stable under pullback.
  • It has list objects.

Then one can form the exact completion Pred(S)_{ex} to get a pretopos with list objects. I gather that Maietti additionally showed that the list objects are parametrized list objects, so that Joyal’s construction produces list-arithmetic pretoposes in her sense.

What does Joyal do with these things, with regard to Gödel incompleteness? I’m hazy on exactly what he does, but to quote from Maietti,

“He then intended to prove incompleteness by mimicking the diagonal argument of Cantor’s theorem within the initial arithmetic universe [Joy05]. Indeed, the initial arithmetic universe supports the amount of self-reference needed to perform the mentioned argument because it contains an internal copy of itself.”

I surmise from this quotation that the reason one wants arithmetic universes to permit the construction of free internal categories is exactly that: as a stepping stone to producing this internal copy of itself.

(The reference [Joy05] is just a 2005 abstract of a talk Joyal gave for Charles Ehresmann’s centennial. I guess you had to be there. (-: )

Posted by: Todd Trimble on May 27, 2019 5:22 PM | Permalink | Reply to this

Re: Polyadic Boolean Algebras

I guess you had to be there (-:

I was. But I don’t think I took notes.

What everyone who was at that conference remembers from it was a different incident altogether, which I will share with you when I see you in Edinburgh ;-)

Posted by: Tom Leinster on May 27, 2019 10:10 PM | Permalink | Reply to this

Re: Polyadic Boolean Algebras

I wanna know too! I’ll see you there.

Posted by: John Baez on May 27, 2019 11:19 PM | Permalink | Reply to this

Re: Polyadic Boolean Algebras

I’m a bit late to the party but just thought I’d point out for any future readers — hi! — that John’s description of the adjoint quantifiers is flipped: the existential quantifier is the left adjoint while the universal quantifier is the right adjoint.

To see this, consider a unary predicate S(x) promoted to a binary predicate on x, y via the inclusion \{x\} \to \{x,y\}. For an arbitrary binary predicate P(x,y), when does “P(x,y) implies S(x)” hold?

This implication holds if for each fixed pair x, y, P(x,y) implies S(x). If P(x,y) is false, then this automatically holds, but if there exists any y so that P(x,y) is true, then this condition says that S(x) must also be true. So we see that this is equivalent to “\exists y P(x,y) implies S(x)”, and thus the left adjoint carries the binary predicate P(x,y) to the unary predicate \exists y P(x,y).

Posted by: Emily Riehl on June 14, 2022 7:01 PM | Permalink | Reply to this

Re: Polyadic Boolean Algebras

Thanks, I’ll fix my article. Better late than never!

I often mix up right and left if I don’t think about them. This is even true when talking about driving. If I think about this, I think: existential quantifiers are like sums (you add up the truth values P(x) over all x to get the truth value of \exists x P(x)), and sums are coproducts, and coproducts are left adjoints. That’s more sloppy than your way of doing it, but it reinforces my mental network of “lefties”.

\exists \sim \sum
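Concretely, over a finite domain the slogan is a one-liner (a trivial sketch):

    -- Truth values ordered False <= True: existential quantification
    -- is the "sum" (join) of the truth values p x over the domain.
    exists :: [a] -> (a -> Bool) -> Bool
    exists domain p = or (map p domain)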

Posted by: John Baez on June 14, 2022 7:17 PM | Permalink | Reply to this

Re: Polyadic Boolean Algebras

I often mix up right and left if I don’t think about them. This is even true when talking about driving.

My wife and I are so bad at this that we have the expressions “true left”/”true right” to mimic true north, which means that we’ve given it some extra thought and really do mean left/right.

Posted by: David Corfield on June 15, 2022 1:06 PM | Permalink | Reply to this

Re: Polyadic Boolean Algebras

Interesting! It would be fun to do a survey to see if some kinds of mathematicians are more prone to left/right confusion than others. What about people who study duality for a living?

Posted by: John Baez on June 15, 2022 4:52 PM | Permalink | Reply to this
