August 24, 2006

Categories and Computation

Posted by John Baez

My student Mike Stay and I are working on category theory and quantum computation. But, I still have some catching up to do when it comes to category theory and ordinary computation!

The first two appear on Mark’s blog, Good Math, Bad Math.

I’m not going to explain this stuff here. I’m just going to dive in.

Here’s something Mike wrote, and my reply. He wrote:

JB wrote:

We’ve gotta figure out what the heck is really going on here.

I talked to Jim some more and we figured out what it is. You get programs for free in a lambda theory. As soon as you have one type, you get a whole bunch of programs as morphisms in the category, everything you can write using typed S and K combinators. These programs are proofs in intuitionistic logic.

The terms that we mention explicitly in a lambda theory are like ports on the outside of a black box. We know what we can plug in, but not what happens inside. So our programs can now call these things as well, but we don’t know what they *do* yet. From the “programs as proofs” view, they’re additional axioms in the intuitionistic logic. Then the axioms of the lambda theory become equivalences of proofs in the intuitionistic logic.

When you take a model of the theory in Set, you assign sets to the types and functions to the explicit terms, but both of these can (in principle) be uncomputable! For example, using the lambda theory of groups, you could take the free group over the set of all halting programs—this set is only computably enumerable, not computable. If you had the free lambda theory with one type N and one function f:N->N, then the image of N could be the natural numbers and f could be the function that produces the nth bit of an omega number. It would be a programming language with an oracle.

So if we want the sets and functions to be computable, we need to look at *computable functors* from the theory to Set!

I replied:

Good! I’m going to respond to this at the n-category cafe. As long as we’re discussing stuff that’s not “top secret” new research, and perhaps even when we are, I’d like to talk there. Some categorical logicians may come along and help us out.

We need a precise dictionary, something that starts maybe sorta like this:

COMPUTATION       LOGIC        CATEGORY THEORY
data type         proposition  object
program           proof        morphism

A program takes data of one type and spits out data of some other type. A proof takes a proposition as its assumption and spits out another proposition as conclusion.

Note, if an object x is a “data type”, specific data of that type is a morphism f: 1 → x. We can apply a “program” g: x → y to this data and get data of type y, namely gf: 1 → y.

In the “logic” column we say all this stuff differently. We say g: x → y is a proof of y from x, while f: 1 → x is a proof of x from “true”, i.e. just a proof of x. We can compose these to get a proof of y, namely gf: 1 → y.

Note that in the “logic” column, the product means “and”, and 1, or “true”, is the unit for “and”. Using “and” we can reduce a list of propositions to a single proposition.

Figure out what’s going on here in the “computation” column:

COMPUTATION       LOGIC        CATEGORY THEORY
data type         proposition  object
program           proof        morphism
??                and          product
??                true         1
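
For instance, here is the first corner of that dictionary rendered as a toy Python sketch (all names mine, purely illustrative): types as objects, programs as functions, and data of type x as a morphism from the one-element type 1.

```python
# Objects are types, morphisms are functions, and "data" of type x is a
# morphism from the terminal object 1 (here: the empty tuple) into x.
from typing import Callable

Unit = tuple  # the terminal object 1, inhabited only by ()

def f(_: Unit) -> int:          # data of type int: a morphism f: 1 -> int
    return 3

def g(n: int) -> str:           # a program g: int -> str
    return "*" * n

def compose(g: Callable, f: Callable) -> Callable:
    return lambda u: g(f(u))

gf = compose(g, f)              # gf: 1 -> str, i.e. data of type str
assert gf(()) == "***"
```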

Now, this stuff I sort of understood. I start getting mixed up when people say that computation proceeds by doing alpha-conversion and beta-reduction to a lambda-calculus expression. It actually seems we need a whole new column here:

LAMBDA-CALCULUS   COMPUTATION    LOGIC        CATEGORY THEORY
??                data type      proposition  object
??                program        proof        morphism
??                ??             and          product
??                ??             true         1
expression        ??             ??           ??
alpha-conversion  ??             ??           ??
beta-reduction    ??             ??           ??

Of course this new column is just the “syntax” for which the category theory column is the “semantics”. So, there might not be a simple 1-1 correspondence between each item of the first column and each item of the last, but our syntax-semantics theorem should tell us their relationship.

What is it?

And, what does it mean to think of computation as doing “lazy evaluation” repeatedly to a lambda calculus expression? I mean, what does this correspond to in the other columns? Programs as morphisms are fine, but where in the category-theoretic view do we see the program “grind away” as it does a computation step by step?

To keep things organized, let me suggest three threads of discussion. The first thread is solely for questions that stay within the existing theory of CCC’s and lambda calculus - for example, questions about formulating the untyped lambda calculus as a CCC, questions about the free CCC on one object, questions about PCF, etc.

The second is for attempts to see computation as a process, for example by seeing β- and η-reduction as 2-morphisms in some kind of “cartesian closed 2-category”.

The third is for attempts to generalize CCC’s for the purposes of quantum computation - mainly in the direction of symmetric monoidal categories, typically with other bells and whistles.

Posted at August 24, 2006 9:26 AM UTC

TrackBack URL for this Entry:   https://golem.ph.utexas.edu/cgi-bin/MT-3.0/dxy-tb.fcgi/905

Re: Categories and computation

One interesting aspect of untyped $\lambda$-calculus is that objects and morphisms are in a way identical.

Every (untyped) $\lambda$-expression is a little program that eats other little programs of its kind and spits out yet another program of that sort.

Better yet, the way all these programs work is beautifully self-referential, in that every program just encodes a way to evaluate the program it gets as argument on other programs.

So for instance the expression

(1)$\lambda f . \lambda g. \lambda x . \lambda y . f(x)(g(x)(y))$

takes four programs $f$, $g$, $x$ and $y$ as arguments and evaluates them on each other in the way indicated.

As you know, feeding two programs into this beast that are of the form

(2)$\lambda f . \lambda x . f(f(\cdots(f(x))\cdots))$

produces another program of this form, with the number of $f$s being the sum of the $f$s of the two inputs.

So the above program is addition of natural numbers. But you can feed any other program into it, one that does not encode a natural number, and the above program still produces a well-defined output. It provides a perfectly general notion of what it means to add two programs (of the sort that map programs to programs which map programs to programs which …).
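
This is easy to check concretely; here is a sketch in Python, with Church numerals of the form (2) as nested lambdas (the helper names zero, succ, and to_int are mine):

```python
# Church numerals: n is λf.λx. f(f(...f(x)...)) with n copies of f
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
to_int = lambda n: n(lambda k: k + 1)(0)     # decode by counting the f's

# the combinator from the comment: λf.λg.λx.λy. f(x)(g(x)(y))
add = lambda f: lambda g: lambda x: lambda y: f(x)(g(x)(y))

two = succ(succ(zero))
three = succ(two)
assert to_int(add(two)(three)) == 5
```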

When I was a kid I got pretty excited about this beautiful self-referentiality, which produces lots of interesting structure from virtually nothing.

Of course it is very familiar in computation, that you can regard code as data, but in the untyped $\lambda$-calculus this phenomenon appears in a particularly explicit form. Here one cannot distinguish code from data, even in principle. Unless, that is, you pass to typed $\lambda$-calculus.

I always wondered if untyped $\lambda$-calculus is somehow related to $\omega$-categories. It seems that in both cases there is a certain way in which objects and morphisms are identified.

But I cannot make this hunch precise, unfortunately.

And, it seems, my problem with doing so rests precisely in some of the question marks that you drew into your table above.

So I am very interested in what you will come up with.

But as a first step, I’d suggest that you start by replacing the first two question mark entries in the top left of your table both with the entry “expression”. In $\lambda$-calculus, $\lambda$-expressions are both data and programs.

This would let you remove the two bottom lines completely.

But that’s just my suggestion.

Posted by: urs on August 24, 2006 12:14 PM | Permalink

Re: Categories and computation

So you’re interested in the lambda-calculus too! Cool.

Right now Mike Stay and I are busy preparing an NSF grant proposal on quantum computation and the “quantum lambda calculus”. We need to get him some funding so he can spend more time on research. That’s one reason I’ve been thinking about this more than, oh, say, eliminating the fake flatness constraint by going from an n-connection to an (n+1)-connection. Anyway, one thing I really want to do is understand the untyped lambda calculus in terms of cartesian closed categories. I believe you just take the free CCC on one object X and then throw in isomorphisms between every object, demanding that they all commute.

So, you start by having a type

X

but also a type

hom(X,X)

and a type

X × X

and a type

hom(hom(X,X),X) × X

and so on… that’s what the free CCC on one object X is like. But then you say all these types are “the same” to get the category appropriate to the untyped lambda calculus.

By the way, you also made me notice a typo in my last little chart, so I’m going to correct that. This will make one of your comments make less sense, I’m afraid.

Posted by: John Baez on August 24, 2006 12:57 PM | Permalink

Re: Categories and computation

free CCC on one object X

OK. In this language, what is quantum lambda calculus?

Posted by: urs on August 24, 2006 1:14 PM | Permalink

Re: Categories and computation

If you take the free CCC over the category arising from an algebraic theory of, say, groups, you get a programming language with group-valued objects. Using C++/Java terminology, these objects have three methods, mult, inv, and unit, that satisfy given pre- and post-conditions. So an algebraic theory defines an interface for a class, and the functor from the CCC to Set provides the implementation.

Simply-typed quantum lambda calculus is the syntax in which we write simply-typed quantum lambda theories, each of which gives rise to a symmetric monoidal closed category (SMCC) instead of a CCC. Structure-preserving functors from a SMCC to Hilb are going to be implementations of the programming language.

A TQFT is a structure-preserving functor from 2Cob to Hilb, but 2Cob isn’t closed, so it can’t be the category arising from a quantum lambda theory. We can, however, view 2Cob as the category arising from the algebraic theory of a commutative Frobenius algebra. Taking a functor from the free SMCC over 2Cob to Hilb would give a quantum programming language with commutative-Frobenius-algebra-valued objects, and the operations on those objects would be implemented by the TQFT.

Re “programs as proofs”: the free CCC on one type has all products and exponentials of the type as objects, and has all programs you can write with typed S and K combinators as programs.

Urs probably knows this, but John needs to know: S, K, and I are pretty straightforward once you see what they’re used for. When applying a lambda term, you want to replace a free variable, say v, with some other expression. There are three possibilities:

1. (I) The term is v
2. (S) The term is an application of one term to another
3. (K) The term does not contain v

Sxyz = xz(yz), that is, “given an application x(y), do the replacement procedure in x and y separately.”

Kxz = x, that is, “given a term x that contains no v, the replacement procedure does nothing.”

Iz = z, that is, “given v, do the replacement here”
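
As untyped combinators, these are one-liners in Python; here is a quick sanity check (mine, for illustration) that SKK acts as the identity:

```python
# The untyped S, K, I combinators as Python lambdas
S = lambda x: lambda y: lambda z: x(z)(y(z))   # Sxyz = xz(yz)
K = lambda x: lambda z: x                      # Kxz  = x
I = lambda z: z                                # Iz   = z

# A classic check: SKK behaves exactly like I, since SKKz = Kz(Kz) = z
assert S(K)(K)(42) == I(42) == 42
```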

Urs: In the simply typed lambda calculus, you can’t just apply arbitrary terms to each other; they have to be of the right type. It’s instructive to work out what the types of each variable, and thus of S and K, have to be. The answer is on Mark’s blog.

Switching columns, the types of these S and K combinators become axioms in a minimal logic by changing the meaning of -> from application to implication. The lazy evaluation of S and K, which is the same as beta-eta reduction, tells us what the next step in the proof is.
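
Concretely, the principal types of K and S (a standard fact, stated here just for reference) are exactly the two axiom schemas of minimal implicational logic:

```latex
K \colon A \to (B \to A),
\qquad
S \colon \bigl(A \to (B \to C)\bigr) \to \bigl((A \to B) \to (A \to C)\bigr)
```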

All this is just from having a type in the lambda theory. We have built-in terms S and K whose types become rules in the minimal logic; adding new terms to the theory gives us new rules we can use in the logic. So if we add the term f, then we can use the type of f as a step in a proof.

Say we have a lambda theory L. If we want our terms to correspond to computable functions, then I think we have to take a structure-preserving functor from L to a lambda theory F with no (explicit) terms. Since all the morphisms in F are built out of S, K, and I, we have to pick such a combinator term as the image of each term in L. This assigns an actual program to each term, a computable implementation.

Returning to the quantum side, there ought to be reversible quantum combinators like S and K (which I bet we can find in or deduce from van Tonder’s work) that give all the morphisms in a SMCC arising from a quantum lambda theory with no terms. They would form the basis of a quantum minimal logic (though whether it would correspond to what’s now called quantum logic, I don’t know). The rest ought to follow in a similar way to the classical case.

Posted by: Mike on August 24, 2006 5:29 PM | Permalink

Re: Categories and computation

Mike: Unless I’m misunderstanding you (which is quite possible), your “quantum minimal logic” is the same as multiplicative intuitionistic linear logic (MILL).

If that’s correct, there’s a combinator presentation using three combinators B, C and I, with

Bxyz = x(yz)
Cxyz = (xz)y
Ix = x,

which are typable in the obvious way.
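
These can also be checked executably in Python (the arithmetic examples are mine); note the linearity: each of x, y, z is used exactly once.

```python
# The linear combinators B, C, I for MILL
B = lambda x: lambda y: lambda z: x(y(z))   # Bxyz = x(yz)
C = lambda x: lambda y: lambda z: x(z)(y)   # Cxyz = (xz)y
I = lambda x: x                             # Ix   = x

# B is composition: B(f)(g) is f after g
double = lambda n: 2 * n
inc = lambda n: n + 1
assert B(double)(inc)(3) == 8                     # double(inc(3))
assert C(lambda a: lambda b: a - b)(1)(10) == 9   # (10 - 1)
assert I("x") == "x"
```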

Posted by: Robin on August 24, 2006 6:33 PM | Permalink

Re: Categories and computation

Mike wrote:

A TQFT is a structure-preserving functor from 2Cob to Hilb….

No, a TQFT is a symmetric monoidal functor from nCob to Hilb. It’s only 2-dimensional TQFTs that are symmetric monoidal functors from 2Cob to Hilb.

2d TQFTs are pretty easy to understand - that’s why I started with them in my Winter 2001 and Fall 2004 quantum gravity seminar notes. The fun starts when you get 3d TQFTs, and it’s these that are relevant for the quantum computers that Microsoft’s Project Q is trying to build. I doubt 2d TQFTs can serve as universal computers; 3d TQFTs can.

One cool fact is that the construction of 3d TQFTs is just a categorified version of the construction of 2d TQFTs! I explained this in the Winter 2005 notes. Jeffrey Morton is working on this in his thesis.

For any n, nCob gives an example of what we’ve taken to calling a quantum lambda theory - which is the syntactic way of thinking about a symmetric monoidal category with duals, just like a lambda theory is the syntactic way of thinking about a cartesian closed category. An n-dimensional TQFT is a symmetric monoidal functor

Z: nCob → Hilb

So, it’s a kind of “model” of nCob in Hilb - but it may not preserve duals of morphisms. An n-dimensional TQFT is unitary if Z preserves duals. These are the physically important ones, and these are full-fledged models of nCob as a quantum lambda theory, because they preserve all the relevant structure - the symmetric monoidal category structure, and the duals.

I’m sorry to publicly pick on your small mistakes instead of focusing on all the new stuff you’re teaching me - like this SKI combinator stuff! But, you’re my grad student so that’s my job.

Posted by: John Baez on August 26, 2006 3:37 PM | Permalink

Re: Categories and computation

a TQFT is a symmetric monoidal functor from nCob to Hilb. It’s only 2-dimensional TQFTs that are symmetric monoidal functors from 2Cob to Hilb.

<snip>

An n-dimensional TQFT is unitary if Z preserves duals. These are the physically important ones

Yeah, when I was writing TQFT I meant 2d unitary TQFT, since that’s the kind of TQFT project Q is working with. (Of course, you already told me that it’s likely that there’s a 3d solid-state system with stringy anyons obeying the loop braid statistics you described with Derek.)

For any n, nCob gives an example of what we’ve taken to calling a quantum lambda theory

<snip>

and these are full-fledged models of nCob as a quantum lambda theory, because they preserve all the relevant structure - the symmetric monoidal category structure, and the duals.

But nCob isn’t closed, so it can’t be a quantum lambda theory, just a quantum algebraic theory, or as you and I were calling them earlier, a PROPer theory.

A quantum lambda theory is going to have exponential objects, so it will be a symmetric monoidal closed category. And according to Robin, the syntax for those is already known: it’s multiplicative intuitionistic linear logic.

So we’ve got to check that out and see what it does to our proposal.

Posted by: Mike on August 26, 2006 5:59 PM | Permalink

Re: Categories and computation

Robin: do you know if MILL has provisions for duals? If it doesn’t, then there’s still work to do as far as describing the simply-typed quantum lambda calculus is concerned.

Posted by: Mike on August 26, 2006 6:02 PM | Permalink

Re: Categories and computation

Mike: I’m sure there is plenty of work still to do here, though I reckon you’ll need to spend a bit of time getting up to speed on what’s been done, so you can see which parts need fleshing out.

If intuitionistic logic is the logic of CCCs, then MILL is the logic of symmetric monoidal closed categories, so of course it doesn’t have any provision for duals.

Similarly MLL is the logic of star-autonomous categories, which do have a certain kind of dualisation operation. The category of finite-dimensional Hilbert spaces is star-autonomous, but it’s a very special kind of star-autonomous category called compact closed. Of course there is a logic that corresponds specifically to compact closed categories, though I don’t think there is a standard name for it. (This randomly-chosen paper calls it CMLL, which is as good a name as any.) I don’t know who was the first to explicitly describe this logic, but you could (and I would) argue that it’s implicit in Kelly and Laplaza’s old paper Coherence for compact closed categories. (Warning: many people, including me, find this paper pretty hard to read.) Anyway…

The category of (finite-dimensional) Hilbert spaces is more than just compact closed. For one thing, it has biproducts. (Fairly recently I showed that products always collapse to biproducts in a compact closed category.) But it also has the “adjoint” operation, making it what Abramsky and Coecke called a strongly compact closed category. (Peter Selinger uses the more evocative term dagger compact closed, and this newer terminology seems to be gaining adherents.)

Abramsky’s student Ross Duncan has written a thesis on the corresponding logic, and it’s described in this paper.

(I know that John knows about quite a bit of this stuff, because I’ve seen him listening to talks about it!)

As you can see, a lot of this work is very recent, and there’s still loads to do. You seem to be interested in term calculi for these logics (a la lambda-calculus), and I don’t think anyone has figured out a term calculus for the categorical quantum logic (though people have invented various kinds of “quantum lambda calculus”, e.g. here and here). There is a substantial literature on term calculi for linear logic, which is an obvious starting point.

Posted by: Robin on August 26, 2006 7:42 PM | Permalink

Re: Categories and computation

Abramsky’s student Ross Duncan has written a thesis on the corresponding logic, and it’s described in this paper.

I’ve read all the papers you mention. In this case, I was under the impression that you could only describe quantum circuits (“protocols”) with it, but not loops and branching, which are necessary for Turing complete computation, let alone quantum Turing machines.

Thinking about it more now, though, are quantum circuits the “polynomials”?

though people have invented various kinds of “quantum lambda calculus”

van Tonder’s lambda calculus that you cited has an “untyped” type as well as a qubit type and a natural number type, so it seems rather ad-hoc. His next paper says

For simplicity, in this paper we restrict our attention to a purely linear fragment of the full quantum calculus. This fragment is not universal for quantum computation, but is at least as expressive as the quantum circuit model. Future work should address the full calculus.

This lends support to the idea that the quantum circuits are the only functions you can write in the simply-typed quantum lambda calculus.

Posted by: Mike on August 26, 2006 9:00 PM | Permalink

Re: Categories and computation

I’ve read all the papers you mention.

Cool! You’ve obviously already got pretty deeply into this. That probably means there isn’t much I can say that you don’t already know.

In this case, I was under the impression that you could only describe quantum circuits (“protocols”) with it, but not loops and branching, which are necessary for Turing complete computation, let alone quantum Turing machines.

Sure. Though the same is also true of the ordinary simply-typed lambda-calculus (it can’t represent loops or branching), so it’s not really surprising.

I guess the difference is that in the classical case we have PCF, a nice, simple language based on the simply-typed lambda-calculus that is Turing-complete. As far as I know we don’t yet have anything analogous for quantum computation. (Now there’s a challenge for you :-))

I can’t quite figure out where this discussion is going; it’s ranging pretty widely! We’re certainly getting away from categorical logic here. What I mean is that the connection with logic seems to be lost – or at least diluted – once you start adding enough features to the language that it becomes Turing complete. (For example, recursion/fixpoint looks pretty pathological if you try to interpret it logically: an operation that takes a proof of $A\to A$ and returns a proof of $A$?)

Posted by: Robin on August 26, 2006 10:29 PM | Permalink

Re: Categories and computation

OK, I just read your email where you explain that a symmetric monoidal category with duals is closed, and that $x^{\star} \otimes y = hom(x,y)$. I already knew this, but somehow forgot. It’s easy to see: a morphism $1\to x^{\star} \otimes y$ can be “unbent” into a morphism $x\to y$, and that’s what quantum gate teleportation is all about. I think Abramsky calls the first morphism the name of the second.
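
In the finite-dimensional case this “unbending” is just vectorization of a matrix; here is a small sketch in pure Python (the encoding and names are mine):

```python
# A linear map f: X -> Y is a matrix; its "name" is the same data read as
# a single vector in X* ⊗ Y (here: row-major flattening).
f = [[1, 2],
     [3, 4]]                                     # f: C^2 -> C^2

name_f = [entry for row in f for entry in row]   # a morphism 1 -> X* ⊗ Y

def apply_name(name, x):
    """Recover f(x) from the name via the evaluation pairing."""
    n = len(x)
    return [sum(name[i * n + j] * x[j] for j in range(n))
            for i in range(len(name) // n)]

assert apply_name(name_f, [1, 0]) == [1, 3]      # first column of f
assert apply_name(name_f, [0, 1]) == [2, 4]      # second column of f
```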

So everybody should ignore what I said earlier about “the free SMCC over 2Cob” since it already is one. A morphism from the disjoint union of no circles to two oppositely-oriented circles is a bent version of a morphism from one circle to another, both oriented the same way.

It looks like that means certain slices of the cobordism won’t be well-oriented, though. Is that OK?

Posted by: Mike on August 26, 2006 7:03 PM | Permalink

Re: Categories and computation

The fun starts when you get 3d TQFTs, and it’s these that are relevant for the quantum computers that Microsoft’s Project Q is trying to build. I doubt 2d TQFTs can serve as universal computers; 3d TQFTs can.

Wait, what? I thought 2d TQFT’s described (2+1)-d spacetime, or in project Q, that the circles correspond to anyons and the cobordisms to creation, annihilation, splitting them and colliding them, and moving them around each other.

Maybe I’m counting the dimensions on the wrong thing…

Posted by: Mike on August 26, 2006 6:11 PM | Permalink

Re: Categories and computation

JB wrote:

The fun starts when you get 3d TQFTs, and it’s these that are relevant for the quantum computers that Microsoft’s Project Q is trying to build. I doubt 2d TQFTs can serve as universal computers; 3d TQFTs can.

Mike Stay wrote:

Wait, what? I thought 2d TQFT’s described (2+1)-d spacetime, or in project Q, that the circles correspond to anyons and the cobordisms to creation, annihilation, splitting them and colliding them, and moving them around each other.

No, in an n-dimensional TQFT the dimension of spacetime is n. In a 2d TQFT the dimension of spacetime is 2. This means the dimension of space is 1, so space is a collection of circles, and as time passes these circles can split, join, be born or die - tracing out a 2-dimensional spacetime in the process.

It’s a pleasant fact that the symmetric monoidal category 2Cob is the free symmetric monoidal category on a commutative Frobenius monoid. As a consequence, a 2d TQFT

Z: 2Cob → Hilb

is the same as a commutative Frobenius algebra. Despite their long and scary name, commutative Frobenius algebras are not very complicated. So, one can work out everything in great detail. By the same token, the computational power of 2d TQFTs is probably quite limited (though I haven’t seen anyone prove this).
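
As a sanity check on how unscary these are, here is the smallest interesting example, the group algebra of $\mathbb{Z}/2$, verified to be a commutative Frobenius algebra in a few lines of Python (the encoding is mine, not from any of the papers mentioned):

```python
# The group algebra of Z/2: basis (e, a) with a*a = e;
# an element is a coefficient pair (u0, u1) meaning u0*e + u1*a.
def mult(u, v):
    # (u0 e + u1 a)(v0 e + v1 a) = (u0 v0 + u1 v1) e + (u0 v1 + u1 v0) a
    return (u[0]*v[0] + u[1]*v[1], u[0]*v[1] + u[1]*v[0])

unit = (1, 0)
def counit(u):                  # Frobenius trace: coefficient of the identity
    return u[0]

basis = [(1, 0), (0, 1)]

# commutativity, associativity, and the unit laws on basis elements
for x in basis:
    assert mult(unit, x) == x == mult(x, unit)
    for y in basis:
        assert mult(x, y) == mult(y, x)
        for z in basis:
            assert mult(mult(x, y), z) == mult(x, mult(y, z))

# the pairing <u,v> = counit(uv) is nondegenerate: its Gram matrix is invertible
gram = [[counit(mult(x, y)) for y in basis] for x in basis]
det = gram[0][0]*gram[1][1] - gram[0][1]*gram[1][0]
assert det != 0
```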

To build a computer, we probably need to go to 3d TQFTs. This was done in Freedman, Kitaev and Wang’s paper A modular functor which is universal for quantum computation - they showed that SU(2) Chern-Simons theory is computationally universal when q is a fifth root of unity.

Now for the root of your puzzlement:

In general a 3d TQFT describes a universe where space is a 2-manifold that wiggles around and changes topology as time passes, tracing out a 3-manifold. But, if we get fancier and consider a 3d extended TQFT, space can be a 2-manifold with boundary: for example, a surface with some open disks removed. These removed open disks - often called “punctures” - turn out to act like particles! In particular, they can merge and split, be born and die, and also braid around each other.

As you know, Jeff Morton, Derek Wise, Alissa Crans and I have been working on a class of 3d TQFTs called “BF theory” - a close relative of the class called Chern-Simons theory. A special case of BF theory is quantum gravity in 3 dimensions! In our paper Exotic statistics for strings in 4d BF theory, Derek, Alissa and I described how punctures in 3d quantum gravity act like particles of various types - known stuff, but nobody had ever explained how pathetically simple it was. Then we climbed up a dimension and looked at loop-shaped “punctures” in 4d BF theory.

Now, on to anyonic quantum computers….

The fractional quantum Hall effect involves electrons confined to a thin film. It’s described using a 3d extended TQFT of Chern-Simons type. At any moment, “space” is the film with certain disks removed; these disks act like particles, and they’re called “anyons”. This is the setup Freedman, Kitaev and Wang are trying to use in actually building a quantum computer.

If you want to dig into this aspect, you’ll need to think about extended TQFTs, which go beyond the math we’ve discussed here: instead of using a mere category of manifolds and cobordisms, they use a “double bicategory” of manifolds, cobordisms between those, and cobordisms between those! This is the topic of Jeff’s thesis, and he’s already written a paper on it: A double bicategory of cobordisms with corners.

All this stuff is supposed to fit into a grand project of categorifying logic, quantum logic, and quantum field theory, hand in hand. As Robin mentions, Seely has already been pushed towards categorifying the theory of cartesian closed categories to understand the lambda calculus more deeply!

But, I think we’ve got our work cut out for ourselves already, even without categorifying everything.

Posted by: John Baez on August 27, 2006 2:32 AM | Permalink

Re: Categories and computation

Urs wrote:

free CCC on one object…

OK. In this language, what is quantum lambda calculus?

Mike already answered this, but I can’t resist giving my own answer. The typed lambda calculus can be seen as a language for writing down “lambda theories”. A lambda theory is just a nitty-gritty syntactical way of specifying a cartesian closed category.

Similarly, the typed quantum lambda calculus is a language for writing down “quantum lambda theories”. A quantum lambda theory is just a nitty-gritty syntactical way of specifying a symmetric monoidal category with duals.

This idea is based on the fact that nCob, and Hilb, and many other “quantum-flavored” categories are all symmetric monoidal categories with duals. James Dolan and I emphasized this when stating the “tangle hypothesis” and “TQFT hypothesis” in our paper Higher-dimensional algebra and topological quantum field theory. Abramsky and Coecke later emphasized the same class of categories (under a different name) as crucial to understanding quantum computation in their paper A categorical semantics of quantum protocols. So, these seem like a good “quantum substitute” for cartesian closed categories.

However, at least when it comes to quantum computation, all this is very new and a bit tentative. In particular, the actual definition of the “quantum lambda calculus” is one thing Mike Stay is supposed to figure out in his thesis! van Tonder has already worked out something he calls A lambda calculus for quantum computation, but this probably doesn’t give the right syntax for symmetric monoidal categories with duals in the way that the typed lambda calculus gives the right syntax for cartesian closed categories. Getting things to match perfectly is the constraint that makes Mike’s job into a well-posed problem.

Posted by: John Baez on August 26, 2006 3:59 PM | Permalink

Re: Categories and computation

I thought the usual way to model the untyped λ-calculus is to find a CCC that has an object X which is isomorphic to X⇒X. Then you can represent an equivalence class of λ-terms as an endomorphism of X.

Is that equivalent to what you said?

Posted by: Robin on August 24, 2006 1:18 PM | Permalink

Re: Categories and computation

This is a subject that I ought to know something about, so let me have a go.

Normally when people talk about λ-calculus in the context of categorical logic, they mean the simply-typed version rather than the untyped one that urs discusses in his comment. If you want to make a precise connection with cartesian closed categories, you need product types as well, i.e. for any types σ and τ there is a type σ×τ, which you think of as being the type of pairs of a σ-thing with a τ-thing. Of course you need corresponding λ-terms as well, typically a pair constructor 〈s, t〉 and appropriate left and right projection primitives.
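
In executable form (an untyped Python sketch, names mine, but with the same shape), the pairing and projections look like this:

```python
# Product types via a pair constructor and left/right projections
pair = lambda s: lambda t: (s, t)
fst = lambda p: p[0]
snd = lambda p: p[1]

p = pair(3)("hello")               # a term of type int × str
assert fst(p) == 3 and snd(p) == "hello"
assert pair(fst(p))(snd(p)) == p   # surjective pairing (the η-rule for products)
```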

I’m not really sure what the difference is between the COMPUTATION and LAMBDA-CALCULUS columns in your table. Objects correspond to types, as you say. Morphisms correspond to αβη-equivalence classes of λ-terms.

If you want a terminal type, you need to explicitly introduce one. Conventionally the terminal type is called 1 and its unique inhabitant is called *. There’s an η-rule that says any variable of type 1 can be reduced to *.

Your most interesting question is the last one, where in the category-theoretic view do we see the program “grind away” as it does a computation step by step?

The simplest answer would just be to say “nowhere”, since after all a morphism is an equivalence class of terms. But since this is the n-category café, I don’t think that’s the answer you were looking for! I think it’s fair to say that there isn’t yet a better answer that’s universally agreed upon, but certainly people have thought about it. One obvious approach is to represent the (typed) λ-calculus by a lax cartesian closed category, where the 1-cells are just terms and the 2-cells represent computations. This was first suggested in print (AFAIK) by Robert Seely in his 1987 LICS paper Modelling computation: a 2-categorical framework, but sadly that idea has not (yet?) proved fruitful. Another approach is suggested by Turi and Plotkin’s Towards a mathematical operational semantics. I’m sure there is a lot more to be said about this, and I look forward to reading other people’s comments!

PS. Is there any way to prevent this system from converting Greek letters and mathematical symbols into question marks? I’ve had to manually encode them all as HTML entities.

Posted by: Robin on August 24, 2006 1:03 PM | Permalink

Is there any way to prevent this system from converting Greek letters and mathematical symbols into question marks?

When editing a comment, choose “itex to MathML” from the “TextFilter” pulldown menu that appears somewhere over the edit pane.

Then include LaTeX commands as usual in your comment, enclosed in “$…$” or “$$…$$”.

This allows you to use all these Greek letters, as well as \mathcal, \mathfrak, etc.

Some (but not all) \mathcal and \mathfrak symbols will be displayed as question marks, or be otherwise unreadable, unless the user installs one of the additional fonts by hand, as explained at the top right of the index page of this blog.

Posted by: urs on August 24, 2006 1:20 PM | Permalink

That’s useful to know, thanks!

A little more info on what I was trying to do: I have a custom keyboard layout (which I use for writing posts on my own blog) which makes the Greek alphabet and various mathematical symbols easily accessible from the keyboard. For example, to type a lambda I just press option-l. This is very convenient, and even faster than typing $\lambda$. It works on most web sites that I’ve tried it on. But this one seems to actually go out of its way to convert these “weird” characters into question marks, rather unhelpfully!

Anyway using TeX markup is a decent workaround, so thanks.

Posted by: Robin on August 24, 2006 1:28 PM | Permalink

For a list of the TeX commands supported here, look at this page. If there’s something that’s not supported, but you think should be, let me know.

Also, there are several TeX-enabled filters to choose from, both for comments and for posts (hint, hint, John!). A list, together with short explanations, is available on the comment-entry page. Personally, I tend to recommend the Markdown with itex to MathML filter, because markdown syntax is fairly intuitive for most people.

Posted by: Jacques Distler on August 24, 2006 4:45 PM | Permalink | PGP Sig

One alternative to using MathML, which requires downloading a plugin for some users, is jsMath. It’s a script that replaces TeX with typeset characters. It looks best when you download the TeX fonts, but it has several fallback modes, including using the browser’s built-in unicode fonts or, if all else fails, using images. Since it replaces the TeX with characters, you can select the formulas like normal text.

Here’s the site:
http://www.math.union.edu/~dpvc/jsMath/

Here are some examples:
http://www.math.union.edu/~dpvc/jsMath/examples/welcome.html

Posted by: Mike on August 24, 2006 5:38 PM | Permalink

Re: Categories and computation

The data type corresponding to 1 in C/C++ is void. For example, the C function prototype int f(void); corresponds to a lambda term $f:1\rightarrow int$.
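The same correspondence can be mimicked in Python, with a zero-argument function playing the role of a morphism out of 1 (a sketch; the name and return value are mine):

```python
# A "morphism 1 -> int": a function whose only possible argument is the
# empty one, mirroring the C prototype int f(void).
def f():
    return 42
```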

Morphisms correspond to αβη-equivalence classes of λ-terms.

Ack! Evil! I hope that 2-category paper had morphisms being actual programs and $\alpha$-, $\beta$-, and $\eta$-reduction as 2-isomorphisms. In fact, though you can probably interpret what I wrote to Urs in a forgiving way, I was mentally distinguishing between $\alpha\beta\eta$-equivalent expressions.

As a programmer, I don’t write equivalence classes of programs, I write particular representatives. Bubblesort is typically NOT a suitable replacement for quicksort, even though they’re extensionally equivalent.

Posted by: Mike on August 24, 2006 6:25 PM | Permalink

Re: Categories and computation

Ack! Evil! I hope that 2-category paper had morphisms being actual programs and $\alpha$-, $\beta$-, and $\eta$-reduction as 2-isomorphisms.

Well, I’d separate $\alpha$ from the other two. It’s usual to regard $\alpha$-equivalent terms as being syntactically identical, and I doubt there’s anything to be gained by distinguishing them. Also, if you want to represent the process of computation, then direction is important, so you don’t want the 2-cells to all be isomorphisms. Interestingly, it turns out that the category-theoretically natural choice is to define the 2-cells to represent $\beta$-reductions and $\eta$-expansions. There is some evidence that this choice of orientation has practical benefits too, e.g. The virtues of eta-expansion.

One reason this idea has never (or not yet) really taken off is that we don’t have any interesting examples of such 2-categories!

Posted by: Robin on August 24, 2006 6:46 PM | Permalink

Re: Categories and computation

Well, I’d separate $\alpha$ from the other two. It’s usual to regard $\alpha-$equivalent terms as being syntactically identical, and I doubt there’s anything to be gained by distinguishing them.

OK, sure.

Also, if you want to represent the process of computation, then direction is important, so you don’t want the 2-cells to all be isomorphisms.

So we’re moving from a groupoid-like thing to a 2-category. That’s good.

The virtues of eta-expansion

Thanks, I’ll take a look.

Posted by: Mike on August 24, 2006 8:08 PM | Permalink

Re: Categories and computation

You seem to be assuming that extensionally equivalent programs are necessarily $\beta\eta$-equivalent. That’s not really true in the sense you seem to mean. For example, I very much doubt that bubblesort is $\beta\eta$-equivalent to quicksort, if you write them as PCF programs.

(For anyone who’s unused to the jargon, PCF is simply typed $\lambda$-calculus with integers and booleans as base types, extended with primitives for arithmetic and recursion. It’s possibly the simplest typed language that is Turing complete.)
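To get a feel for what PCF’s recursion primitive buys you, here’s a Python sketch (names mine; since Python is strict, the Y combinator has to be written in its η-expanded form, the Z combinator):

```python
# PCF's recursion primitive Y, eta-expanded for a strict language.
def fix(h):
    return lambda n: h(fix(h))(n)

# Factorial PCF-style: one fixpoint over a non-recursive body, plus
# arithmetic and a conditional.
fact = fix(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
```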

There are models of PCF that have a lot of intensional content, such as Hyland and Ong’s category of games, where a morphism is a strategy for carrying out the computation.

Posted by: Robin on August 24, 2006 7:31 PM | Permalink

Re: Categories and computation

Isn’t $\eta$-equivalence extensional equivalence? They give the same answer on all inputs, so they’re extensionally equivalent. I’m positive that they’re not $\beta$-equivalent, but two things are $\beta\eta$-equivalent if they’re related by $\beta$-steps, $\eta$-steps, or both, right?

Posted by: Mike on August 24, 2006 7:53 PM | Permalink

Re: Categories and computation

It comes down to what you mean by “all inputs”. For example, consider the following two PCF programs of type $bool \to bool$:

$f := \lambda x:bool.$ if x then true else true fi
$g := \lambda x:bool.$ true

Do they behave the same on all inputs? They certainly behave the same on the ordinary boolean values true and false. But $f\Omega = \Omega$ whereas $g\Omega = true$.

Similarly you could write two different programs for deciding whether $a+b<c$, say, that behave the same on all actual natural numbers but which are distinguished by some term such as $succ(succ(\Omega))$, which is the “number” which will admit to being $>1$, but go into an infinite loop if you press it for any further information.
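One can watch this happen in a strict language by passing arguments as thunks; here’s a sketch where $\Omega$ “diverges” by raising an exception rather than looping, so the difference is observable (names hypothetical):

```python
# Omega, stood in for by a computation that raises instead of looping.
class Diverge(Exception):
    pass

def omega():
    raise Diverge

# f inspects its argument; g discards it. They agree on the ordinary
# booleans, but f(omega) diverges while g(omega) returns True.
def f(x):
    return True if x() else True

def g(x):
    return True
```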

Posted by: Robin on August 24, 2006 8:24 PM | Permalink

Re: Categories and computation

Yes; if they output different things on identical inputs, then they’re not extensionally equivalent. Whether quicksort and bubblesort are or not depends on what datatypes are available and how compare behaves on them.

Posted by: Mike on August 24, 2006 8:44 PM | Permalink

Re: Categories and computation

If you mean an “input” to be something you can write in the language, that’s still not true of PCF.

There’s a famous example, due to Gordon Plotkin, of two terms that behave the same in all syntactic contexts but which are different in the continuous domain model. It’s explained quite nicely in section 6.4 of Amadio and Curien’s book Domains and Lambda-Calculi, or I can try and explain it tomorrow if you prefer.

The moral is that the relevant “inputs” are arrows in the category, which may not be expressible in the language.

Posted by: Robin on August 24, 2006 11:47 PM | Permalink

Re: Categories and computation

There’s a famous example, due to Gordon Plotkin, of two terms that behave the same in all syntactic contexts but which are different in the continuous domain model. It’s explained quite nicely in section 6.4 of Amadio and Curien’s book Domains and Lambda-Calculi, or I can try and explain it tomorrow if you prefer.

Please explain. What’s the continuous domain model? I vaguely recall reading something about trying to find a set such that $X^X \cong X$, and that some particular construction uses continuous functions.

Posted by: Mike on August 25, 2006 7:09 AM | Permalink

Re: Categories and computation

Mike: I’ll see what I can do. :-)

The “continuous model” is just a certain cartesian closed category of posets. The objects are special posets called Scott domains, and the morphisms are the so-called continuous functions, i.e. monotone functions that preserve directed suprema. (The reason for calling them “continuous” is that every domain can be endowed with a certain topology called the Scott topology, and the continuous functions are the ones that are continuous with respect to the Scott topology.)

Since this is a CCC, of course it can be used to interpret simply-typed lambda-calculus. Also every continuous function has a least fixed point, which means (among other things) that it’s possible to represent a Y combinator operation, and so to interpret the whole of PCF.

The way that the base types are interpreted is interesting. The boolean type, for example, is interpreted as a three-element poset $\{\bot, true, false\}$ with $\bot\lt true$ and $\bot\lt false$. The natural number type is interpreted similarly: the numbers are pairwise incomparable, augmented with a bottom element $\bot$ that is below all of them in the order. Every type has a bottom element, and it’s used to interpret the non-terminating program $\Omega$.

Now, whilst every PCF-definable function is (of course) continuous, the converse fails in an interesting way. The canonical counterexample is the parallel or function

por $\bot$ $\bot$ = $\bot$
por $\bot$ true = true
por true $\bot$ = true
por false false = true

Intuitively, the reason this function is not PCF-definable is that it cannot be computed sequentially. There’s an obvious parallel implementation: evaluate both arguments in parallel, and as soon as one of them returns true, you return true. But in PCF you have to choose one of the arguments to evaluate first, and if that one doesn’t terminate then you’re stuck!

Even more interestingly, it’s possible to write two PCF functions that behave identically on all PCF-definable arguments, but behave differently on parallel or! This is Plotkin’s counterexample, that I mentioned in my earlier post. Here are the two functions, both of type $(bool \to bool \to bool)\to bool$:

$\lambda g.$ if (g true $\Omega$) then
if (g $\Omega$ true) then
if (g false false) then $\Omega$ else true fi
else $\Omega$ fi else $\Omega$ fi

$\lambda g.$ if (g true $\Omega$) then
if (g $\Omega$ true) then
if (g false false) then $\Omega$ else false fi
else $\Omega$ fi else $\Omega$ fi

The first one returns true if you feed it por, and the second one returns false if you feed it por. But they both diverge on any PCF-definable input. So we have two functions that behave the same on every input that you can write in the language, yet which are denoted differently in the continuous model. It follows that they must be denoted differently in the free model, since an equation that holds in a free category must hold in all such categories.
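To see the counterexample actually compute, one can model the flat boolean domain with $\bot$ as an ordinary value; here’s a Python sketch (names mine, and using the corrected final clause por false false = false):

```python
# The flat boolean domain {Bot, T, F}, with Bot an ordinary value.
BOT, T, F = "Bot", "T", "F"

def por(a, b):
    # Parallel or: continuous, but not sequentially computable in PCF.
    if a == T or b == T:
        return T
    if a == F and b == F:
        return F
    return BOT

def seq_or(a, b):
    # A left-sequential or, the kind PCF can define: it must inspect
    # its first argument before doing anything else.
    if a == BOT:
        return BOT
    return T if a == T else b

def if_b(c, t, e):
    # A bottom-propagating conditional.
    if c == BOT:
        return BOT
    return t if c == T else e

# Plotkin's two terms, of type (bool -> bool -> bool) -> bool.
def term1(g):
    return if_b(g(T, BOT), if_b(g(BOT, T), if_b(g(F, F), BOT, T), BOT), BOT)

def term2(g):
    return if_b(g(T, BOT), if_b(g(BOT, T), if_b(g(F, F), BOT, F), BOT), BOT)
```

Fed the parallel or, the two terms give different answers; fed any sequential or, both diverge.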

Does this make sense?

Posted by: Robin on August 25, 2006 12:55 PM | Permalink

Re: Categories and computation

Argh, the last line in the definition of the por function ought to say

por false false = false

Also to my description of the natural parallel implementation you should add “If both arguments return false, then return false”.

Sorry for the confusion.

Posted by: Robin on August 25, 2006 1:50 PM | Permalink

Re: Categories and computation

Does this make sense?

I think so. Let me try to say what I understand: If we augment the PCF lambda theory with a term por:bool$\rightarrow$bool, then we can choose a functor that maps the term por to the por function you described. Since in that model, the two lambda terms give different results, they must be different morphisms in straight PCF. Is that right?

Posted by: Mike on August 25, 2006 5:31 PM | Permalink

Re: Categories and computation

The por function takes two arguments, so its type is actually $bool \to bool \to bool$. I guess that was just a typo though.

More seriously, I’d be wary of putting it the way you did. You can augment the theory with a ‘por’ term of course, and that is an interesting thing to do. (It turns out that if you add por (and its infinitary cousin) to PCF, then the definable functions are exactly the continuous ones.)

On the other hand, for the argument I was making, I don’t think that’s the right way to think about it. The point is that there is a ‘por’ morphism in the category, i.e. the por function is continuous. If you now consider the denotations of the two functions I gave, they are both (of course) denoted as continuous functions from the domain of continuous functions ($bool\to bool\to bool$) to the three-element domain that represents the booleans.

Now, two functions are different iff they differ on some element of their source set. (I’m saying “source” rather than “domain” to avoid confusion with the other meaning of the word domain here.) Since the source domain (see!) contains the por function, and our two functions give different answers when presented with the por function, they are therefore different functions, i.e. different arrows in the category.

Am I making any more sense now?

Posted by: Robin on August 25, 2006 6:24 PM | Permalink

Re: Categories and computation

its type is actually bool→bool→bool. I guess that was just a typo though.

Yes, that was a typo.

Am I making any more sense now?

OK, yeah. The two lambda terms are definable in the lambda calculus, and so have images under the modelling functor. But those images can now apply to any continuous function, specifically por, and since they give different answers in the model, they must be different morphisms in the theory.

Thanks!

Posted by: Mike on August 25, 2006 7:46 PM | Permalink

Re: Categories and computation

Thanks for the long answer. I will need to digest that.

On your blog I found this:

The typical CCC in which models of a lambda theory are interpreted is $\mathbf{Set}$. The typical SMCC in which models of a quantum lambda theory will be interpreted is $\mathbf{Hilb}$.

I needed that piece of information. For starters.

If you take the free CCC over the category arising from an algebraic theory of, say, groups, you get a programming language with group-valued objects.

Are we talking about the free CCC generated by the category $\mathbf{Grp}$ of groups?

Using C++/Java terminology, these objects have three methods, mult, inv, and unit, that satisfy given pre- and post-conditions. So an algebraic theory defines an interface for a class, and the functor from the CCC to Set provides the implementation.

Interesting. The category-theoretic interpretation of object-oriented programming. I was wondering about that once.

But I don’t quite understand it yet.

Taking a functor from the free SMCC over $2\mathbf{Cob}$ to $\mathbf{Hilb}$ would give a quantum programming language with commutative-Frobenius-algebra-valued objects, and the operations on those objects would be implemented by the TQFT.

I am a little lost. I will say something, though, in the hope that it triggers corrections and explanations.

Objects in $2\mathbf{Cob}$ are disjoint unions of circles.

In order to form the free SMCC over $2\mathbf{Cob}$, I guess we have to throw in extra objects that represent the $\mathrm{hom}$-things between given input and output circles, i.e. the spaces of cobordisms with given inputs and outputs.

So, I guess, a functor from that free SMCC now assigns not just a Hilbert space to the circle, but one Hilbert space to each of these spaces of cobordisms.

Hm, is that right so far? Not sure in which sense we now get a quantum programming language.

Sorry for being so dense. Explanations are appreciated.

Posted by: urs on August 24, 2006 6:29 PM | Permalink

Re: Categories and computation

Are we talking about the free CCC generated by the category Grp of groups?

Of course I’m only guessing at what Mike meant, but my hunch is that he’s talking about the free CCC generated by the theory of groups (in Lawvere’s sense). Furthermore, when he says “free” I think he means it to be only relatively free, in the sense that the inclusion functor should preserve the existing products.

To put it another way, the CCC that you end up with will be the free CCC on a group object.

(Is that right, Mike?)

Posted by: Robin on August 24, 2006 7:41 PM | Permalink

Re: Categories and computation

If you take the free CCC over the category arising from an algebraic theory of, say, groups, you get a programming language with group-valued objects. Are we talking about the free CCC generated by the category Grp of groups?

No, by the category arising from the algebraic theory of a group. This category, Th(Grp), is a category with finite products. It has a distinguished object $G$, and all other objects are finite powers of it. In addition to all the morphisms every category with finite products has, it has three morphisms $m:G\times G\rightarrow G$, $\cdot^{-1}:G\rightarrow G$, and $e:1\rightarrow G$, along with all the morphisms you get from taking products of these and composing them. It has various commuting diagrams involving $m$, $\cdot^{-1}$, $e$, $\Delta:G\rightarrow G\times G$, $!:G\rightarrow 1$, and the isomorphisms $G\rightarrow G\times 1$ and $G\rightarrow 1\times G$, which encode the left- and right-unit laws, associativity, and left- and right-inverses.

Product-preserving functors from Th(Grp) to Set send the object $G$ to a set and the structure morphisms to functions that satisfy the group axioms. So “the models of the theory of groups are groups.” I think the functor category whose objects are product-preserving functors from Th(Grp) to Set and whose morphisms are natural transformations between them is equivalent to the category Grp.
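In programming terms, such a product-preserving functor is literally an implementation of the interface. A Python sketch (class and method names mine), with the integers under addition as the model:

```python
# The "interface" declared by Th(Grp): one method per structure morphism.
class GroupModel:
    def mult(self, a, b):   # m : G x G -> G
        raise NotImplementedError
    def inv(self, a):       # (.)^-1 : G -> G
        raise NotImplementedError
    def unit(self):         # e : 1 -> G
        raise NotImplementedError

# One implementation, i.e. one model: the integers under addition.
class IntAdd(GroupModel):
    def mult(self, a, b):
        return a + b
    def inv(self, a):
        return -a
    def unit(self):
        return 0
```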

Objects in 2Cob are disjoint unions of circles.

In order to form the free SMCC over 2Cob, I guess we have to throw in extra objects that represent the hom-things between given input and output circles, i.e. the spaces of cobordisms with given inputs and outputs.

So, I guess, a functor from that free SMCC now assigns not just a Hilbert space to the circle, but one Hilbert space to each of these spaces of cobordisms.

Hm, is that right so far? Not sure in which sense we now get a quantum programming language.

Sorry for being so dense. Explanations are appreciated.

Heck, I only figured out that stuff yesterday. What you’ve written looks right to me.

Let’s call the data type T. After adding this data type, but before imposing any structure on it, we should have a quantum programming language. Judging from one of Robin’s responses, the quantum programs are written using the typed combinators B, C, and I. So, for example, there’s a program that takes an endomorphism on a circle and produces a circle.

When we add the terms $m$, $\Delta$, $\iota$, and $\epsilon$, we get “new combinators” that we can use in the programs.

Posted by: Mike on August 24, 2006 7:47 PM | Permalink

Re: Categories and computation

This is a subject that I ought to know something about, so let me have a go.

Oh, good - someone who is actually an expert on this stuff! That’s why I decided to move this conversation with Mike out into the open. I should warn you, I’m very confused about certain things, but maybe not the things you think.

Normally when people talk about lambda-calculus in the context of categorical logic, they mean the simply-typed version rather than the untyped one that urs discusses in his comment.

Right. But, when people like Church or even modern folks like Mark Chu-Carroll explain how you can set up the natural numbers, the theory of recursive functions and so on using the lambda-calculus, they tend to use the untyped version. So, one thing that’s confusing me is: is that really necessary? The untypedness seems to allow for bizarro self-referential tricks like the Y combinator. But is the untypedness really needed?

Here’s another way to ask my question. Let’s not use this bizarre untyped business. Instead, start with the free CCC on one object, say X. Then I believe

hom(hom(X,X),hom(X,X))

is just the natural numbers - it’s a monoid, of course, but it consists of all the ways you could take a morphism

f: X -> X

and build a new morphism from X to X, and the only ones I can think of are of the form f^n for a natural number n.

So, let’s just make the abbreviation

N = hom(hom(X,X),hom(X,X))

Then, what’s

hom(N,N)

like? Any “point” in here

1 -> hom(N,N)

determines a function from the natural numbers to the natural numbers. But: what functions do we get? All recursive functions, or some smaller class?
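Here’s the intuition as runnable Python (a sketch; `church` is my name for the numeral constructor): an element of N can do nothing with a function f except iterate it, so the elements are exactly the maps f |-> f^n.

```python
# An element of N = hom(hom(X,X), hom(X,X)): given f : X -> X, all it
# can do is iterate it some fixed number of times.
def church(n):
    def numeral(f):
        def f_n(x):
            for _ in range(n):
                x = f(x)
            return x
        return f_n
    return numeral
```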

Morphisms correspond to alpha/beta/eta-equivalence classes of lambda-terms.

Thanks - I keep getting confused about this. This is why I need to complete that dictionary!

Anyway, if this is how the game goes, then yes, we’ll never see “the process of computation grind away step by step” in the category-theoretic view, since that process is a bunch of alpha and beta moves. But okay, so we go to a 2-category like you say. I don’t yet need to understand the details of how this might work… I just need to flesh out the basic dictionary, to keep from getting hopelessly befuddled. So: in the 2-categorical approach, the “process of computation” is a 2-morphism between morphisms that would be declared equal in the lowly categorical approach? How does this square with the idea that a morphism is a “program”?

Is computation a process going between isomorphic programs???

Posted by: John Baez on August 25, 2006 5:29 AM | Permalink

Re: Categories and computation

But: what functions do we get? All recursive functions, or some smaller class?

If I understand right, you get a larger one: any function $N\rightarrow N$ at all! So you could have a function that returns the $n$th bit of $\Omega$.

This is because the functor assigning functions to these morphisms only has to pick a function in Set. See the second-to-last paragraph in this post.

So: in the 2-categorical approach, the “process of computation” is a 2-morphism between morphisms that would be declared equal in the lowly categorical approach?

Yes.

How does this square with the idea that a morphism is a “program”?

Well, in the lowly approach, a morphism is an equivalence class of programs: a set of programs that are indistinguishable by their input-output relations. There’s some subtlety to this; hopefully Robin will address it in his explanation of the continuous domain model. But that’s the basic idea.

Is computation a process going between isomorphic programs???

Well, Robin suggests elsewhere that direction of computation is important, so you don’t want them to be isomorphic, but if you make all the 2-arrows into isomorphisms, then yes.

Posted by: Mike on August 25, 2006 7:23 AM | Permalink

Re: Categories and computation

Wow, what a mammoth thread! My laptop is on the verge of death here in Shanghai - I spent the afternoon taking it to an Asus repair store, which then said they couldn’t repair it because it wasn’t an off-the-shelf Asus. I’m actually keeping it going by putting it in the refrigerator now and then; my computer wizard says this can help the hard drive survive a bit longer, and it seems to be working (though I can’t help but wonder if he’s having a laugh at my expense). But anyway… it’s nice to come back and see that the theory of computation is developing rapidly, even though it’s not helping me.

A while back I showed how to define the natural numbers as an object of T, the free CCC on one object. I noted that elements of

hom(N,N)

could thus be interpreted as functions from N to N, and I asked which functions we get.

Mike suggested that we can get any function, if I take the right model of T. However, this doesn’t answer my question. I’m not asking about any old model of T!

John: Right, in lambda-jargon what you’re saying is that it’s possible to define a typed version of the Church numerals.

But it’s definitely NOT true that all recursive functions are definable in the simply-typed lambda-calculus when you use these numerals. The class of definable functions is very small; certainly the definable functions are all primitive recursive, so e.g. the Ackermann function is not definable, but it’s way smaller than that. You can’t even define the predecessor function, I don’t think.
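For contrast, here’s a Python sketch of two functions that are definable on these numerals by pure typed λ-terms: addition and multiplication (names mine; the numeral constructor just restates the Church encoding).

```python
# Church numerals: n |-> (f |-> f^n).
def church(n):
    def numeral(f):
        def f_n(x):
            for _ in range(n):
                x = f(x)
            return x
        return f_n
    return numeral

# add m n = \f. (m f) . (n f)   -- iterate f  m+n  times
add = lambda m, n: lambda f: lambda x: m(f)(n(f)(x))

# mul m n = \f. m (n f)         -- iterate "iterate f n times" m times
mul = lambda m, n: lambda f: m(n(f))
```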

For Mike’s sake let me restate my original question a bit more clearly. I’ll phrase it terms of models, since he seems to have models on his mind - even though my original statement didn’t mention models at all!

CCC’s usually have lots of models, but every CCC has a god-given model which assigns to any object x its set of “elements”. An element of x is simply any morphism

f: 1 -> x

Let El(x) be the set of elements of x. Then given a morphism

g: x -> y

we get a function

El(g): El(x) -> El(y)

simply by composing any element of x with g to obtain an element of y! If our CCC is called X, one can easily check that we get a functor

El: X -> Set

and if I’m not too confused, this functor is an example of what we’ve taken to calling a “model” of X.

Now, let T be the free CCC on one object. What is

El: T -> Set

like? Well, if we define

N = hom(hom(x,x),hom(x,x))

as an object of T, then I claim El(N) is the natural numbers! I explained why in my previous question: an element of N is just some way to turn a morphism f: x -> x into a new morphism f: x -> x, and all you can construct is f^n for some natural number n.

Similarly,

El(hom(N,N))

is some monoid of functions from the natural numbers to themselves, and I was hoping maybe it was all recursive functions - but it’s not, according to Robin. In fact all I’ve managed to get so far are polynomial functions, but I’m very slow at this and would be sort of surprised if that’s all one can get. I’d like to know exactly which functions one can get.

But, more importantly, it’s nice to hear that people using the typed lambda calculus don’t let this stop them; they just define a slightly fancier CCC that’s powerful enough to do recursion.

Posted by: John Baez on August 25, 2006 11:45 AM | Permalink

Re: Categories and computation

an element of $N$ is just some way to turn a morphism $f: x \to x$ into a new morphism $f: x \to x$, and all you can construct is $f^n$ for some natural number $n$.

Hm, there must be some reason why I am not allowed to choose any arbitrary function of sets $\mathrm{El}(\mathrm{Hom}(x,x)) \to \mathrm{El}(\mathrm{Hom}(x,x))$. You seem to be claiming that the only kind of function compatible with the functoriality of $\mathrm{El}$ is $f \mapsto f^n$.

Let me see:

What is $\mathrm{El}(x)$ if $T$ is the free CCC on $x$? How many morphisms $1 \to x$ are there in the free CCC on $X$?

All we know, probably, is that there is an object $\mathrm{Hom}(1,x)$ of such morphisms. And that $\mathrm{El}(x) = \mathrm{El}(\mathrm{Hom}(1,x)) = \mathrm{El}(\mathrm{Hom}(1,\mathrm{Hom}(1,x))) = \cdots$.

Is it really true that there is a canonical choice for this set? Or is maybe $\mathrm{El}$ only uniquely determined after this one choice has been made?

In any case, I know that $\mathrm{El}(x)$ is some set.

Given that, what is $\mathrm{El}(\mathrm{Hom}(x,x))$? Is there any reason why this shouldn’t be equal to $\mathrm{Hom}_\mathrm{Set}(\mathrm{El}(x),\mathrm{El}(x))$?

But if so, what is the reason that $\mathrm{El}(\mathrm{Hom}(\mathrm{Hom}(x,x),\mathrm{Hom}(x,x)))$ isn’t equal to all of $\mathrm{Hom}_\mathrm{Set}(\mathrm{Hom}_\mathrm{Set}(\mathrm{El}(x),\mathrm{El}(x)),\mathrm{Hom}_\mathrm{Set}(\mathrm{El}(x),\mathrm{El}(x)))$?

Posted by: urs on August 25, 2006 12:25 PM | Permalink

Re: Categories and computation

I think the canonical choice is for Hom(1,x) to be empty! Since there are CCCs in which Hom(1,x) is empty for some object x, it must be empty in the free CCC.

Then Hom(x,x) has one element (the identity), etc.

Posted by: Robin on August 25, 2006 1:31 PM | Permalink

Re: Categories and computation

I think the canonical choice is for Hom(1,x) to be empty!

[…]

Then Hom(x,x) has one element (the identity), etc.

But then John’s construction breaks down, doesn’t it?

If $\mathrm{Hom}(x,x)$ contains just the identity, then every endomorphism of it of the form $f \mapsto f^n$ is the identity endomorphism.

So in this case the “natural numbers” $N$ would consist just of the number 1, nothing else.

… unless, that is, you are now going to tell me that we have to work 2-categorically and distinguish $x \stackrel{\mathrm{Id}}{\to} x \stackrel{\mathrm{Id}}{\to} x$ from $x \stackrel{\mathrm{Id}}{\to} x$.

Posted by: urs on August 25, 2006 2:08 PM | Permalink

Re: Categories and computation

But then John’s construction breaks down, doesn’t it?

No, I don’t think so. Free categories are odd beasts.

A morphism exists in the free CCC just when it exists in all CCCs, and two morphisms of the free CCC are equal just when they’re equal in all CCCs. So there aren’t any morphisms $1\to x$, because there are CCCs that don’t have any such morphisms (e.g. the category of sets with x the empty set). However, for every natural number n, every CCC has a morphism $(X\Rightarrow X) \to (X\Rightarrow X)$ which “takes a morphism and composes it with itself n times”. Therefore corresponding arrows must exist in the free CCC. Furthermore there are CCCs in which all these are distinct (e.g. in the category of sets with X the natural numbers, they are distinguished by applying them to the successor function), hence they must be distinct in the free CCC.

Posted by: Robin on August 25, 2006 2:36 PM | Permalink

Re: Categories and computation

However, for every natural number $n$, every CCC has a morphism $(X \Rightarrow X) \to (X \Rightarrow X)$ which “takes a morphism and composes it with itself n times”. Therefore corresponding arrows must exist in the free CCC.

All right. Now apply $\mathrm{El}$. Don’t all these morphisms get sent to the identity by $\mathrm{El}$?

$\mathrm{El}(x)$ should be the empty set, you said.

Then $\mathrm{El}(\mathrm{Hom}(x,x))$ is the set $\{\mathrm{Id}_{\{\}}\}$. The set containing the identity function on the empty set. Right?

So then $\mathrm{El}(\mathrm{Hom}(\mathrm{Hom}(x,x),\mathrm{Hom}(x,x)))$ is (what else could it be?) the single element set

(1)$\left\{ \mathrm{Id}_{\mathrm{Id}_{\{\}}} \right\} \,.$

Let me know if I am wrong. Maybe I am lost in recursion.

Posted by: urs on August 25, 2006 2:51 PM | Permalink

Re: Categories and computation

Don’t all these morphisms get sent to the identity by El?

I think you’re right, and that John’s attempt to reformulate his claim in terms of models doesn’t quite work, for that reason.

(John? Are we missing something?)

Posted by: Robin on August 25, 2006 3:24 PM | Permalink

Re: Categories and computation

Urs wrote:

Don’t all these morphisms get sent to the identity by El?

Robin wrote:

I think you’re right, and that John’s attempt to reformulate his claim in terms of models doesn’t quite work, for that reason.

(John? Are we missing something?)

I’m in a rush now - gotta try to get my laptop fixed - so I could be making a mistake, but:

In the free CCC on one object x, there are definitely lots of different elements of

hom(hom(x,x), hom(x,x))

because for any n, in any cartesian closed category, there’s a morphism

n : hom(x,x) -> hom(x,x)

called “raising an endomorphism to the nth power”, and in the free cartesian closed category on one object these are all different. These all give different elements of

hom(hom(x,x), hom(x,x))

This is true even though hom(x,x) has only one element!

So, I think the mistake Urs is making is this. He’s assuming an isomorphism

El(hom(x,y)) = hom(El(x), El(y))

This isn’t true - and by the way, we don’t want this to be true in our definition of “model”. Putting this in the definition would eliminate all sorts of interesting and useful models of a CCC. E.g., we don’t want to require a model

Z: C -> Set

to have an isomorphism

Z(hom(x,y)) = hom(Z(x),Z(y))

This would force all functions from Z(x) to Z(y) to show up by applying Z to morphisms from x to y. We could never have models where only computable functions showed up!

So, we just want a map

Z(hom(x,y)) -> hom(Z(x),Z(y))

not necessarily an isomorphism. I’m sure this must be standard stuff, but I just noticed it recently.

Posted by: John Baez on August 26, 2006 3:12 AM | Permalink

Re: Categories and computation

Ah yes, of course. I should have spotted that!

A product-preserving functor $F:\mathbf{C}\to\mathbf{D}$ between two CCCs automatically induces a natural transformation with components

(1)$F(A\Rightarrow B)\to (FA\Rightarrow FB),$

but it’s not generally invertible, and in this case it isn’t.

Posted by: Robin on August 26, 2006 5:36 PM | Permalink

Re: Categories and computation

He’s assuming an isomorphism

(1)$\mathrm{El}(\mathrm{hom}(x,y)) = \mathrm{hom}(\mathrm{El}(x), \mathrm{El}(y))$

I was just assuming an inclusion.

(2)$\mathrm{El}(\mathrm{hom}(x,y)) \subset \mathrm{hom}(\mathrm{El}(x), \mathrm{El}(y)) \,.$

Together with $\mathrm{El}(x) = \{\}$ this would lead to the problem I indicated.

Assuming $\mathrm{El}(x) = \{\}$ we certainly find that $\mathrm{El}$ sends every morphism

(3)$g : x \to x$

to

(4)$\mathrm{Id}_{\{\}} \,.$

You seem to be saying that we still can have

(5)$\mathrm{El}(\mathrm{hom}(x,x))$

different from the 1-element set.

Posted by: urs on August 27, 2006 4:12 PM | Permalink

Re: Categories and computation

Urs wrote:

I was just assuming an inclusion. $El(hom(x,y)) \subset hom(El(x),El(y))$. Together with El(x)={} this would yield the problem I indicated.

Right. Assuming an inclusion is just as bad as assuming an isomorphism, here.

There are two questions, and we shouldn’t mix them up:

1. If CCC[x] is the free CCC on one object x, how does the “elements” functor

$El: CCC[x] \to Set$

behave? Is it faithful (an inclusion on homsets)?

2. If X is any CCC, what should the definition of a “model”

$Z: X \to Set$

be? Should we require it to be faithful?

The first question has a well-defined correct answer, and I think the answer is “no”. The second question is a matter of mathematical esthetics, so there’s a lot more room for argument, but I think the answer is also “no”.

We cannot use the answer to the second question to answer the first one. We really should agree on the first one before tackling the second!

But, I’ll say a word about the second one first. A “model” of a CCC in the category of sets should be the same thing as a morphism of CCC’s:

$Z: X \to Set$

just as a representation of a group G on a vector space V is a group homomorphism

$Z: G \to Aut(V)$

or an algebra of an operad is an operad homomorphism, etc. This way of thinking has proved itself over the years.

So, when we’re asking what a model of a CCC should be, we’re asking what a morphism of CCC’s

$Z: X \to Y$

should be. There are various possible answers. One is that it’s a functor preserving finite products and internal homs, so that we have an isomorphism

$Z(hom(x,x')) \cong hom(Z(x),Z(x'))$

for all x,x’ in X. This is very drastic, as I explained earlier. Another is that it’s a functor preserving finite products. As Robin points out, this implies we get morphisms

$Z(hom(x,x')) \to hom(Z(x),Z(x'))$

but not necessarily isomorphisms.

Another option is that we require that these morphisms are “inclusions” - i.e., monics. But this goes against the tao of mathematics! When defining morphisms between algebraic gadgets, one should usually either require that structure be preserved, or let it not be preserved; requiring that some map be an inclusion is too fussy and awkward. It’s better to put in such a condition later, when you want to pick out especially nice morphisms. For example, we say a representation of a group is faithful if

$Z: G \to Aut(V)$

is monic. The idea of a “faithful” model of a CCC sounds useful, but I’d be loath to demand that all models be faithful!

Since CCC’s have an internal hom, it may seem against the tao of mathematics to let morphisms of CCC’s fail to preserve this internal hom. It is a bit odd. However, there’s a precedent for it: there are two famously important morphisms of topoi, the “logical morphisms” and the “geometric morphisms”. The logical morphisms preserve everything, including the internal hom. The geometric morphisms don’t need to preserve the internal hom. The geometric morphisms turn out to be more important (they show up naturally when considering topoi of sheaves). We could be facing something similar here, especially since topoi are CCCs.

I’m sure someone has already figured this out, but it’s fun to do oneself.

Anyway, now we should talk about question 1, which is a question of fact rather than esthetics. But this post is getting too long. In fact, this whole thread is getting a bit unwieldy. I think I’ll turn off comments and start up a couple of new threads where we can discuss some of the different things we’re talking about.

Posted by: John Baez on August 28, 2006 2:41 AM | Permalink

Re: Categories and computation

In fact all I’ve managed to get so far are polynomial functions, but I’m very slow at this and would be sort of surprised if that’s all one can get. I’d like to know exactly which functions one can get.

As you’ve formulated the question, I rather suspect that is all you can get. Taking a slightly more generous notion of definability (as in Harold Simmons’s notes) it’s possible to go a bit further and define exponentiation, where the exponent argument has type $N \to N$ instead of just $N$. This sounds rather odd at first, but of course the Church numeral construction can be carried out at any type of the form $(T\to T)\to(T\to T)$ for some type $T$, so in particular you can define numerals of type $N\to N$.
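To see the higher-type exponent concretely, here’s a sketch in Python (illustrative only; the names are mine): exponentiation is just applying the exponent numeral to the base numeral, and type-checking that term forces the exponent to live at the higher type.

```python
def church(n):
    """Church numeral n: f |-> f composed with itself n times."""
    def power(f):
        def iterated(x):
            for _ in range(n):
                x = f(x)
            return x
        return iterated
    return power

def church_exp(base, expo):
    # base^expo = expo(base). In the simply-typed reading, if base has
    # type (T->T)->(T->T), then expo must have the *higher* type
    # (((T->T)->(T->T)) -> ((T->T)->(T->T))) -- Robin's point about
    # numerals of type N -> N.
    return expo(base)

print(church_exp(church(2), church(10))(lambda k: k + 1)(0))  # -> 1024
```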

Posted by: Robin on August 25, 2006 1:26 PM | Permalink

Re: Categories and computation

The usual technical notion of “slightly fancier CCC that’s powerful enough to do recursion” is a cpo-enriched CCC, i.e. a CCC whose hom-sets are complete partial orders and whose composition operation is continuous. The canonical example is of course the category of cpos itself, but there are many others.

I don’t know whether anyone has thought about what a free such thing is like. Presumably it’s something quite a lot like the syntactic model of PCF (without the arithmetic and if/then apparatus).
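For readers who haven’t met it: the extra thing cpo-enrichment buys you is a least-fixed-point operator, PCF’s `fix`. A minimal sketch (Python stands in for PCF here; the delayed self-reference plays the role of the limit of the chain of finite approximations):

```python
def fix(F):
    """Least-fixed-point operator: rec satisfies rec = F(rec).
    The inner def delays the self-reference, so the recursion
    unfolds only on demand, like the sup of finite unrollings."""
    def rec(*args):
        return F(rec)(*args)
    return rec

# General recursion, not available in the plain simply-typed calculus:
factorial = fix(lambda rec: lambda n: 1 if n <= 0 else n * rec(n - 1))
print(factorial(5))  # -> 120
```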

Posted by: Robin on August 25, 2006 1:43 PM | Permalink

Re: Categories and computation

Is there some way to get a tree view of the thread on this system? And when replying to a post, I’d prefer to have only that post appearing while I’m composing, or at least have it quoted in the compose box. And for those using Firefox, I’m working on a greasemonkey script to automagically select “markdown with itex to mathml” every time. Are there any other features you’d be interested in?

Mike suggested that we can get any function, if I take the right model of T. However, this doesn’t answer my question.

Yeah, sorry. I realized after I went home that you weren’t talking about adding a term of that type to the theory and looking at the models, but rather, what terms of that type you could construct from the typed S and K combinators.

Posted by: Mike on August 25, 2006 5:08 PM | Permalink

Re: Categories and computation

Is there some way to get a tree view of the thread on this system?

There’s a button at the top and bottom of the comments to swap between threaded and chronological views.

Posted by: Aaron Bergman on August 25, 2006 5:12 PM | Permalink

Re: Categories and computation

There’s a button at the top and bottom of the comments to swap between threaded and chronological views.

Yes; for me the view defaults to threaded. I just wanted to see less of the thread all at once. I guess I’m going to have my greasemonkey script modify the css of the page to have the nodes in the thread view expandable, so I can just expand the posts I want to see.

Posted by: Mike on August 25, 2006 7:52 PM | Permalink

Re: Categories and computation

Posted by: Mike on August 26, 2006 10:05 PM | Permalink

Re: Categories and computation

John: Right, in lambda-jargon what you’re saying is that it’s possible to define a typed version of the Church numerals.

But it’s definitely NOT true that all recursive functions are definable in the simply-typed lambda-calculus when you use these numerals. The class of definable functions is very small; certainly the definable functions are all primitive recursive, so e.g. the Ackermann function is not definable, but it’s way smaller than that. You can’t even define the predecessor function, I don’t think.

That’s why we invent languages like PCF that have explicit recursion, so that we can define all the recursive functions.

There are some notes by Harold Simmons which go into more detail about what functions you can and can’t define using simply-typed lambda-calculus and the Church numerals.

Posted by: Robin on August 25, 2006 10:25 AM | Permalink

Re: Categories and computation

I think this is the paper that first answered John’s question about which functions are definable:

H. Schwichtenberg: Functions definable in the simply-typed lambda calculus, Arch. Math Logik 17 (1976) 113-114

Posted by: Robin on August 25, 2006 10:31 AM | Permalink

Re: Categories and computation

I think this is the paper that first answered John’s question about which functions are definable:

H. Schwichtenberg: Functions definable in the simply-typed lambda calculus, Arch. Math Logik 17 (1976) 113-114

And according to Wikipedia,

Schwichtenberg showed in 1976 that in $\lambda^{\rightarrow}$ exactly the extended polynomials are representable as functions over Church numerals; these are roughly the polynomials closed up under a conditional operator.
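A sketch of the two ingredients, polynomials and the conditional, on Church numerals (Python for illustration; `ifzero` and `mul` are my names, not Schwichtenberg’s notation):

```python
def church(n):
    """Church numeral n: f |-> f composed with itself n times."""
    def power(f):
        def iterated(x):
            for _ in range(n):
                x = f(x)
            return x
        return iterated
    return power

def to_int(n):
    return n(lambda k: k + 1)(0)

def mul(m, n):
    # multiplication of numerals: compose on the function argument
    return lambda f: m(n(f))

def ifzero(n, then_val, else_val):
    # the conditional that makes the polynomials "extended": a numeral
    # applied to a constant function collapses to that constant --
    # unless the numeral is zero, in which case the start value survives
    return n(lambda _ignored: else_val)(then_val)

print(to_int(mul(church(3), church(4))))                  # -> 12
print(ifzero(church(0), 1, 2), ifzero(church(3), 1, 2))   # -> 1 2
```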

Posted by: Mike on August 25, 2006 5:21 PM | Permalink

Mockingbird

I am clearly the Oz among the Wizards here in this comment section, and I do need to get some work done this morning. But after that I’ll get back to you all with lots of stupid questions.

Sorry, that’s a lie; I have one stupid question already:

In

David Keenan,
To Dissect a Mockingbird:
A Graphical Notation for the Lambda Calculus with Animated Reduction

the author invents and demonstrates a graphical notation for computations in untyped lambda calculus.

My question: can we understand these diagrams somehow from the point of view of ordinary category theory diagrams for CCCs?

Posted by: urs on August 25, 2006 9:23 AM | Permalink

Re: Mockingbird

Every bird is a morphism in the category; there’s only one object up to isomorphism. The movies illustrate how to show that the starting frame and the ending frame are in the same equivalence class. (Or, if you want to be 2-categorical, they’re 2-morphisms.)
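Since the setting is untyped, an untyped language can write the birds directly; for instance the mockingbird $M = \lambda x. x x$ is plain self-application (Python used only as a stand-in for the untyped calculus):

```python
mockingbird = lambda x: x(x)   # M = \x. x x
identity = lambda x: x         # I = \x. x

# M I -> I I -> I, so applying the result to anything returns it:
print(mockingbird(identity)(7))  # -> 7
```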

Posted by: Mike on August 26, 2006 2:22 AM | Permalink

Re: Mockingbird

Every bird is a morphism in the category; there’s only one object up to isomorphism. The movies illustrate how to show that the starting frame and the ending frame are in the same equivalence class. (Or, if you want to be 2-categorical, they’re 2-morphisms.)

Maybe I didn’t phrase my question well. Clearly, every box (bird) represents an element in $\mathrm{Hom}(X,X)$, as you say, and the movies show how a “program grinds away”.

But the point of Keenan’s text is a special graphical calculus, which directs these movies.

I was wondering if one can derive this graphical calculus by starting with the usual graphics for computations in monoidal categories (“string diagrams”), specializing to a suitable monoidal category.

Let’s see.

The main thing is that move which is depicted by a fat bullet, with two lines coming in (one entering vertically, one horizontally) and one going out.

The corresponding string diagram is apparently this one

(1)$\array{ X \otimes X \\ \; \downarrow \bullet \\ X } \;\;\; := \;\;\; \array{ X & & X \\ \;\;\downarrow \mathrm{Id} && \sim \downarrow\;\; \\ X &\otimes& \mathrm{Hom}(X,X) \\ & \;\;\downarrow \mathrm{ev} \\ & X } \,.$

The other important move is the duplication of an argument

(2)$\array{ X \\ \;\;\downarrow \wedge \\ X \otimes X } \,.$

The $\omega$-“bird” is hence the element in $\mathrm{Hom}(X,X)$ given by composing these

(3)$\array{ X \\ \;\;\;\downarrow \omega \\ X } \;\;\; := \;\;\; \array { X \\ \;\;\downarrow \wedge \\ X \otimes X \\ \;\; \downarrow \bullet \\ X } \,.$

The only other thing needed is the braiding

(4)$\array{ X \otimes X \\ \;\;\; \downarrow \times \\ X\otimes X } \,.$

For instance the “crossed warbler” ($\lambda a . \lambda b . (ba)a$) is

(5)$\array { X && X \\ &\;\; \downarrow \times & \\ X && X \\ \wedge \downarrow\;\; && \;\;\downarrow \mathrm{Id} \\ X & X & X \\ && \;\; \downarrow \bullet \\ X & X \\ \;\; \downarrow \bullet \\ X } \,.$
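As a sanity check on that diagram, the crossed warbler written as an untyped closure (Python just for illustration): duplicate $a$ and feed both copies to $b$, the second after the first.

```python
crossed_warbler = lambda a: lambda b: b(a)(a)  # \a. \b. (b a) a

# With b a curried pairing-like function, both copies of a show up:
pair = lambda x: lambda y: 10 * x + y
print(crossed_warbler(3)(pair))  # -> 33
```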

Hm. Given this, doesn’t Keenan provide a very clear picture of

the process of computation grinding away step by step in the category-theoretic view?

Posted by: urs on August 27, 2006 3:51 PM | Permalink

Re: Categories and computation

I’m enjoying this discussion immensely, but it’s gotten so big that it’s a bit hard to follow all the branches. So, I’m going to try something a bit dictatorial, and we’ll see if it’s a good idea or not. I’ll turn off comments on this blog entry and make three new entries:

The first thread is solely for questions that stay within the existing theory of CCC’s and lambda calculus - for example, questions about formulating the untyped lambda calculus as a CCC, questions about the free CCC on one object, questions about PCF, etc.

The second is for attempts to see computation as a process, for example by seeing β- and η-reduction as 2-morphisms in some kind of “cartesian closed 2-category”.

The third is for attempts to generalize CCC’s for the purposes of quantum computation - mainly in the direction of symmetric monoidal categories, typically with other bells and whistles.

Posted by: John Baez on August 28, 2006 2:58 AM | Permalink
Read the post CCCs and the λ-calculus
Weblog: The n-Category Café
Excerpt: Let's discuss cartesian closed categories and the λ-calculus!
Tracked: August 28, 2006 4:59 AM
Read the post Categorifying CCCs: seeing computation as a process
Weblog: The n-Category Café
Excerpt: This blog entry is an excuse for discussing ways to categorify the notion of a cartesian closed category (CCC), so we can see computation in the λ-calculus as a 2-morphism, or something like that. To get a quick idea of...
Tracked: August 28, 2006 6:53 AM