

October 20, 2006

Classical vs Quantum Computation (Week 3)

Posted by John Baez

This week we had a guest lecture in our course on Classical versus Quantum Computation:

If you read this stuff you’ll learn what Star Trek has to do with categories - and you’ll learn how to play

chess^go

Last week’s notes are here; next week’s notes are here.

In a cartesian closed category, or CCC, we can take products of objects, and also take exponentials. For example, in the category of sets, the product A × B, or

AB

for short, is the usual cartesian product of the sets A and B, while the exponential

A^B

is the set of functions from BB to AA.

We can consider an abstract sort of CCC consisting of one object x and all the stuff we can build from it using the CCC axioms. This is called the free CCC on one object.

James Dolan’s great idea was to study this gadget using games. Recently he and Todd have pretty much completely worked out how. This is what Jim talked about. Todd’s notes cover the same material in a somewhat more formal way. But here I’ll give a less formal introduction, taken from "week240" of This Week’s Finds.

Let’s play a game. I have a set X in my pocket, and I’m not telling you what it is. Can you pick an element of X in a systematic way? No, of course not: you don’t have enough information. X could even be empty, in which case you’re clearly doomed! But even if it’s nonempty, if you don’t know anything about it, you can’t pick an element in a systematic way.

So, you lose.

Okay, let’s play another game. Can you pick an element of

X^X

in a systematic way? Here A^B means the set of functions from B to A. So, I’m asking if you can pick a function from X to itself in a systematic way.

Yes! You can pick the identity function! This sends each element of X to itself:

x |→ x

You don’t need to know anything about X to describe this function. X can even be empty.

So, you win.

Are there any other ways to win? No.

Now let’s play another game. Can you pick an element of

X^(X^X)

in a systematic way?

An element in here takes functions from X to itself and turns them into elements of X. When X is the set of real numbers, people call this sort of thing a functional, so let’s use that term. A functional eats functions and spits out elements.

You can scratch your head for a while trying to dream up a systematic way to pick a functional for any set X. But, there’s no way.

So, you lose.

Let’s play another game. Can you pick an element of

(X^X)^(X^X)

in a systematic way?

An element in here eats functions and spits out functions. When X is the set of real numbers, people often call this sort of thing an operator, so let’s use that term.

Given an unknown set X, can you pick an operator in a systematic way? Sure! You can pick the identity operator. This operator eats any function from X to itself and spits out the same function:

f |→ f

Anyway: you win.

Are there any other ways to win? Yes! There’s an operator that takes any function and spits out the identity function:

f |→ (x |→ x)

This is a bit funny-looking, but I hope you get what it means: you put in any function x |→ y, and out pops the identity function z |→ z.

This arrow notation is very powerful. It’s usually called the "lambda calculus", since when Church invented it in the 1930s, he wrote it using the Greek letter lambda instead of an arrow: instead of

x |→ y

he wrote

λ x.y

But this just makes things more confusing, so let’s not do it.

Are there more ways to win this game? Yes! There’s also a "squaring" operator that takes any function f from X to itself and "squares" it - in other words, does it twice. If we write the result as f^2, this operator is

f |→ f^2

But, we can express this operator without using any special symbol for squaring. The function f is the same as

x |→ f(x)

so the function f^2 is

x |→ f(f(x))

and the squaring operator is

f |→ (x |→ f(f(x)))

This looks pretty complicated. But, it shows that our systematic way of choosing an element of

(X^X)^(X^X)

can still be expressed using just the lambda calculus.

Now that you know "squaring" is a way to win this particular game, you’ll immediately guess a bunch of other ways: "cubing", and so on. It turns out all the winning strategies are of this form! We can list them all using the lambda calculus:

f |→ (x |→ x)

f |→ (x |→ f(x))

f |→ (x |→ f(f(x)))

f |→ (x |→ f(f(f(x))))

etc. Note that the second one is just a longer name for the identity operator. The longer name makes the pattern clear.
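In programming terms, this family of strategies can be sketched as follows (my own illustration, using Python functions as stand-ins for elements of X^X): each winning strategy is the operator that iterates its argument some fixed number of times.

```python
def iterate(n):
    """The operator f |-> (x |-> f(f(...f(x)...))), with n copies of f."""
    def operator(f):
        def g(x):
            for _ in range(n):
                x = f(x)
            return x
        return g
    return operator

# iterate(0) is f |-> (x |-> x), iterate(1) is a longer name for the
# identity operator, iterate(2) is the squaring operator, and so on.
```

For example, iterating the successor function three times starting from 0 gives 3.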

So far, all these methods of picking an element of (X^X)^(X^X) for an unknown set X can be written using the lambda calculus. There are other, sneakier ways. For example, there’s the operator that sends functions with fixed points to the identity function, and sends functions without fixed points to themselves. But from now on, to keep things simple, let’s just consider methods that can be written using the lambda calculus. By “systematic way”, that’s what we will mean.

Now let’s play another game. Can you pick an element of

X^(X^(X^X))

in a systematic way?

An element in here eats functionals and spits out elements of X. So, it’s called a functionalal on X. At least that’s what Jim calls it.

If I have an unknown set in my pocket, can you pick a functionalal on this set in a systematic way?

Yes! You just need to pick a recipe that takes functionals on X and turns them into elements of X. Here’s one recipe: take any functional and evaluate it on the identity function, getting an element of X.

In lambda calculus notation, this recipe looks like this:

f |→ f(x |→ x)

Can you think of other ways to win this game? I hope so: there are infinitely many! Jim and Todd figured out a systematic way to list them all.
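The recipe above can be played with directly (a toy sketch of my own; the sample functional is made up for illustration):

```python
def eval_at_identity(functional):
    """The functionalal f |-> f(x |-> x): feed the identity to a functional."""
    return functional(lambda x: x)

# A sample functional on the integers: apply the given function to 7.
sample_functional = lambda f: f(7)
```

Feeding `sample_functional` to `eval_at_identity` just applies the identity to 7, returning 7.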

Now let’s play another game. Can you pick an element of

X^(X^(X^(X^X)))

in a systematic way? The parentheses are getting a little obnoxious by now, so let’s adopt the convention that A^B^C

is short for A^(B^C), which lets us write the above expression as

X^X^X^X^X

A thing in here eats functionalals and spits out elements of X, so it’s called a functionalalal.

So, can you find a functionalalal on an unknown set in some systematic way?

The answer is no: you lose.

How about picking an element of

((X^X)^(X^X))^((X^X)^(X^X))

in a systematic way? Such a thing eats operators and spits out operators, so it’s called an operatorator.

The answer is yes: there are lots of ways to win this game. The real challenge is listing them! This is the sort of question Dolan and Trimble’s work answers: for any game of this sort, it lets you list all the ways to win.

In fact, instead of moving on to functionalators, operatorals, operatoralatorals, and so on, let me just tell you a trick for instantly deciding which of all these games you can win.

You just take your game, like this:

((X^X)^(X^X))^((X^X)^(X^X))

and evaluate it by setting X = 0. If you get 0, there’s no way to win. If you get 1, there’s at least one way to win.

To use this trick, you need to know that

0^0 = 1

This is something they don’t teach in school! In analysis, X^Y can approach anything between 0 and 1 when X and Y approach 0. So, teachers like to say 0^0 is undefined. But X^X approaches 1 when X → 0. More importantly, in set theory, A^B stands for the set of functions from B to A, and the number of elements in this set is

|A^B| = |A|^|B|

When A and B are empty, there’s just one function from B to A, namely the identity. So, for our purposes we should define 00 = 1.
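The counting rule can be checked directly on small finite sets (a sketch; `functions` is my own helper, not anything from the post):

```python
from itertools import product

def functions(B, A):
    """All functions from B to A, each represented as a dict b -> a."""
    B, A = list(B), list(A)
    return [dict(zip(B, choice)) for choice in product(A, repeat=len(B))]

# |A^B| = |A|^|B|; with A and B both empty there is exactly one
# function (the empty function), so for our purposes 0^0 = 1.
```

Note the asymmetry: there are no functions from a nonempty set to the empty set, but exactly one from the empty set to anything.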

Consider the case of functionals, which are elements of X^(X^X). If we evaluate this at X = 0 we get

0^(0^0) = 0^1 = 0

So, there are no functionals when X is the empty set. So, you can’t pick a functional on an unknown set in a systematic way. That’s why you lose when your game evaluates to 0. It’s more interesting to prove that for games evaluating to 1, there’s a way to win.
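The whole trick is ordinary arithmetic once we adopt 0^0 = 1 (a convention Python's `**` operator already follows):

```python
X = 0  # evaluate every game expression at X = 0

functional = X ** (X ** X)                # X^(X^X): evaluates to 0, no way to win
operator = (X ** X) ** (X ** X)           # (X^X)^(X^X): evaluates to 1, winnable
operatorator_game = ((X**X)**(X**X)) ** ((X**X)**(X**X))  # also evaluates to 1
```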

But we’d really like to understand all the ways to win. And for this, Dolan and Trimble needed the theory of holodeck games.

In Star Trek, the "holodeck" is a virtual reality environment where you can play various games.

On the holodeck, if you regret a move you made, you can back up to any earlier point in the game and make a new move.

Actually I’m deviating from the technical specifications of the holodeck on Star Trek, as explained here:

6) Wikipedia, Holodeck, http://en.wikipedia.org/wiki/Holodeck

So, if you’re a Star Trek purist, it’s better to imagine a video game where you can save your game at any state of play, and go back to these saved games whenever you want. And, you have to imagine being so paranoid that you always save your game before making a move. This allows games to go on forever, so we only say you have a winning strategy if you can win in a finite number of moves, no matter what the other player does.

To make this completely precise, we consider two-player games where the players take turns making moves. When a player can’t make a move, they lose. Any such game can be written as a "game tree", like this:

     | \/ 
   \ | /
    \|/ 

In this example, the first player has three choices for her first move. If she picks the middle branch, the second player has one choice for his first move. Then the first player has one choice for her second move. Then the second player has no choice for his second move - so he loses.

So, in this particular example the second player has no winning strategy.

A cool thing about such a game is that we can take its game tree and turn it into an expression built from some variable X using products and exponentials. To do this, just put an X at each vertex of the tree except the root:

     X  X  X 
     |   \/ 
 X   X   X
  \  |  /
   X X X
    \|/ 

Then blow on the tree with a strong westerly wind, so strong that the branches blow away and only the X’s are left:

        X   XX
   X   X   X
  X   X   X 

This is just a way of writing an expression built from X using products and exponentials:

X^X X^(X^X) X^(X^(XX))

Conversely, any such expression can be turned back into a tree, at least after we simplify it using these rules:

(AB)^C = A^C B^C

(A^B)^C = A^(BC)

For example, consider the set of operators:

(X^X)^(X^X)

If we simplify this, we get

X^(X X^X)

or

     X
  X X
 X
giving the tree
        X
       /
  X   X
   \ / 
    X
    |
or in other words
      /
   \ /
    |

And here’s a cool fact: if you take any expression built from X using products and exponentials, and evaluate it at X = 0, you can tell which player has a winning strategy for the game described by the corresponding tree! If you get 1, the second player has a winning strategy; if you get 0, they don’t.

It’s pretty easy to prove: try it.
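As a sanity check, the evaluation can be carried out mechanically (a sketch; the nested-tuple encoding of trees is my own): a vertex is the tuple of its subtrees, and a vertex with subtrees t1, ..., tn contributes X raised to the product of their values.

```python
def vertex_value(subtrees):
    """Value at X = 0 of X raised to the product of the subtree values."""
    p = 1
    for t in subtrees:
        p *= vertex_value(t)
    return 0 ** p  # Python already takes 0 ** 0 == 1

def game_value(root_subtrees):
    """1 if the second player has a winning strategy, 0 otherwise."""
    p = 1
    for t in root_subtrees:
        p *= vertex_value(t)
    return p

leaf = ()
lady_or_tiger = ((leaf, (leaf,)),)  # the operator game (X^X)^(X^X) = X^(X X^X)
```

The three-branch example tree from earlier (chains of length 2 and 3, plus a branch that splits) evaluates to 0, matching the claim that the second player has no winning strategy there.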

But if you’ve been paying attention, you’ll have noticed something weird.

I’ve told you two ways to get a game from any expression built from X using products and exponentials. First, the game of systematically picking an element of the resulting set. Second, the game we get by turning this expression into a game tree, like I just did.

For both these games, you can decide if there’s a winning strategy by evaluating the expression at X = 0.

But are they the same game? No! One is the holodeck version of the other!

Let’s look at the familiar example of operators:

(X^X)^(X^X) = X^(X X^X)

This evaluates to 1 at X = 0. So, if we turn it into a tree

      /
   \ /
    |
we get a game where the second player has a winning strategy.

This game is not very exciting, but it becomes more exciting if you call it "The Lady or the Tiger". In this game, the first player has only one first move: he takes the second player to a room with two doors, corresponding to the two branches of the above tree.

Then it’s the second player’s turn.

If he opens the left door, a beautiful lady pops out and they instantly get married and live happily ever after. If he opens the right door, the first player opens a tiger cage. Then the tiger jumps out and eats the second player.

In this game, the second player has just one winning strategy: on his first move he should choose the left door.

Next look at the game of defining an element of

(X^X)^(X^X) = X^(X X^X)

using the lambda calculus. We’ve seen there are infinitely many strategies for winning this:

f |→ (x |→ x)

f |→ (x |→ f(x))

f |→ (x |→ f(f(x)))

f |→ (x |→ f(f(f(x))))

and so on. These correspond to 2nd-player winning strategies for the holodeck version of The Lady or the Tiger.

What are these strategies?

One is just to play the game and win by choosing the left door.

Another is to choose the right door - and then, just when the tiger is about to eat you, back up and choose the left door!

Another is to choose the right door - and then, just when the tiger is about to eat you, back up and choose… the right door!

Then, when the tiger is about to devour you again, back up again, and this time choose the left door.

And so on: for each n, there’s a strategy where you choose the right door n times before wising up and choosing the left door.

Now, if you want a really nice math project, ponder the pattern relating all these strategies to the corresponding lambda calculus expressions:

f |→ (x |→ x)

f |→ (x |→ f(x))

f |→ (x |→ f(f(x)))

f |→ (x |→ f(f(f(x))))

etc.

Then, figure out how to prove that for any 2-person game, say:

     | \/
   \ | /
    \|/
there’s a 1-1 correspondence between winning strategies for the second player in the holodeck version of this game and ways of using the lambda calculus to define elements of the corresponding set:

X^X X^(X^X) X^(X^(XX))

Apparently this result goes back to work of Hyland and Ong in the early 1990s. Dolan rediscovered the idea, and Trimble and he have recently worked out a new proof.

If you get stuck, first try these notes from Dolan’s talks, for some hints:

Then try Trimble’s more rigorous, technical treatment, and the original paper by Hyland and Ong:

Posted at October 20, 2006 2:44 AM UTC

TrackBack URL for this Entry:   http://golem.ph.utexas.edu/cgi-bin/MT-3.0/dxy-tb.fcgi/997

20 Comments & 3 Trackbacks

Read the post Classical vs Quantum Computation (Week 2)
Weblog: The n-Category Café
Excerpt: An intro to closed monoidal categories.
Tracked: October 20, 2006 3:23 AM

Re: Classical vs Quantum Computation (Week 3)

Whenever I see trees for two-player games, I think of Conway’s construction of the surreal numbers (which were born out of Go endgames).

Posted by: Blake Stacey on October 20, 2006 2:33 PM | Permalink | Reply to this

Re: Classical vs Quantum Computation (Week 3)

Blake Stacey wrote:

Whenever I see trees for two-player games, I think of Conway’s construction of the surreal numbers (which were born out of Go endgames).

Yes, this stuff about games and CCCs is related to Conway’s work on combinatorial games. In a 1977 paper, André Joyal modified Conway’s work a bit and related it explicitly to category theory; an English version of his paper has been prepared by Robin Houston. Later, Martin Hyland studied Conway games and linear logic - though apparently the main reference here is an unpublished lecture - the kind of thing I’d like to get my hands on and make available at the n-Category Café! This triggered a lot of work on game semantics for linear logic, including a paper by Hyland and Ong.

It would be nice to relate all this stuff more closely to what Dolan and Trimble are doing. The obvious guy to do this is Robin Houston, a frequent patron of this café, since he wrote what seem to be the best introductions to this whole line of thought:

Posted by: John Baez on October 20, 2006 6:40 PM | Permalink | Reply to this

Re: Classical vs Quantum Computation (Week 3)

So many new things to read, so much to figure out — it’s a good time to be alive!

Posted by: Blake Stacey on October 22, 2006 9:00 PM | Permalink | Reply to this

Re: Classical vs Quantum Computation (Week 3)

In another comment (later today, perhaps), I’ll say something about how Dolan and Trimble’s ideas fit into the computer science tradition of game semantics. The “holodeck” operator is related to linear logic in a rather wonderful way. For now, I just wanted to mention the following paper:

Martin Hyland. Game semantics. In A. Pitts and P. Dybjer, editors, Semantics and Logics of Computation, pages 131–182. Cambridge University Press, 1997.

Not only is it a wonderful introduction to game semantics, but it also contains (Theorem 4.11) a gamey description of the free cartesian-closed category on a set of generators.

The category studied by Dolan and Trimble is there as well: if I’m not mistaken, it’s essentially the category WG of Definition 3.6. (To obtain Dolan and Trimble’s category precisely, you need only restrict the set of moves to a single element.)

[Note: The numbering above refers to a preprint version of the paper that I have on my computer. It’s possible that it’s different in the final version.]

Posted by: Robin on October 23, 2006 1:18 PM | Permalink | Reply to this

Re: Classical vs Quantum Computation (Week 3)

I’m not mistaken, it’s essentially the category WG of Definition 3.6.

Oh, but I am! So dreadfully mistaken. It’s more interesting than that. Dolan and Trimble’s category is actually a kind of minimal category of innocent strategies, without the question-answer discipline or the bracketing condition. The technique of defining innocence in terms of a “holodeck” construction has been described by Martin Hyland in an unpublished lecture called Categorical construction of innocence, of which there’s a very brief description in my continuation report. (This remark probably only makes sense to game semantics specialists. I’ll try to explain more expansively in the Other Comment I promised.)

Posted by: Robin on October 23, 2006 5:11 PM | Permalink | Reply to this

Re: Classical vs Quantum Computation (Week 3)

I’m afraid that your note gives a highly incomplete, and hence seriously misleading impression of the history of this subject.

Game semantics has now been extensively developed both for various logics and type theories, and for programming languages, since around 1992.
There were various precursors, among whom Lorenzen and his school on the proof theory side should be mentioned (and many others too). Joyal does seem to have been the first person to organize (Conway) games into a category, but his treatment was very brief, and did not lay the basis for a substantial approach. (It was also pretty inaccessible at the time - I didn’t get to see his short note till the mid 90’s).

Yves Lafont was possibly the first person to mention games in connection with Linear Logic, in what was actually an early paper on what are now called Chu spaces (extensively developed by Vaughan Pratt and his colleagues). I do recall Martin Hyland mentioning Joyal’s category of Conway games in a lecture in the early 90’s, but it was essentially a mention, rather than a substantial development. (And I recall Martin mentioning that the category was compact closed, hence not a good match for Linear Logic. Basically his stance appeared to be that it was a nice idea which didn’t seem to work out as one would wish).

An important development was a paper by Andreas Blass which appeared in 1992, on `Game Semantics for Linear Logic’. This is a fascinating paper, which built on Blass’s really pioneering work on games in a set-theoretic context from the early 1970’s. One of the fascinating things about it is that he builds a really natural seeming structure which fails to be a category because composition is non-associative! (This point is not discussed in Blass’s paper. You can find a detailed discussion in my paper `Sequentiality vs. Concurrency in Games and Logic’, available from my web pages - as are my other publications.)

It was Blass’s paper which got me into Game Semantics. My paper with Radha Jagadeesan on `Games and Full Completeness for Multiplicative Linear Logic’, written in 1992, had a number of rather basic contributions:
- A thorough-going syntax-free compositional approach, by organizing games and strategies into a structured category of the appropriate kind.
- In particular, we fixed the Blass problems with composition, and gave an analysis of composition of strategies as `parallel composition + hiding’ which has been basic to all subsequent work.
- We also picked out a sub-category of `history-free’ strategies and related it to Girard’s geometry of Interaction.
- We introduced the idea of Full Completeness as a way of characterizing the space of proofs of a logic, and proved the first such result.
(This can be seen as characterizing the free category of some kind corresponding to a logic).

There is a link to this paper in your note, but you evidently didn’t think it worthwhile to mention the authors by name. It is absolutely incorrect to say that this paper was in any sense `triggered’ by Martin’s lecture - whereas it was certainly directly prompted by Blass’s paper.

Subsequently, Martin Hyland and Luke Ong wrote a paper (on Full Completeness without Mix) very much following the same general lines as my paper with Radha, but with a nice tweak which meant that Full Completeness was obtained without admitting the MIX rule. (I think it would be quite accurate to say that their paper was `triggered’ by ours.)

After these results, it was clear that the most notorious issue in programming language semantics, the `full abstraction problem for PCF’, was in range. Remarkably enough, two different teams:
- Abramsky, Jagadeesan and Malacaria
- Hyland and Ong
produced really quite different constructions which yielded in the end the same result: a synthetic construction of the fully abstract model. (The technical issue in both cases was how to accommodate the linear exponentials, i.e. the possibility to copy and delete inputs to functions. It turned out there are two very different approaches which can be taken. The HO approach (also independently found by Hanno Nickau, incidentally) is quite related to the ideas of the Lorenzen school, but, crucially, done compositionally. The AJM approach is related to the Geometry of Interaction - but takes the quite demanding step of making an honest CCC out of it.)

After that, the next key step was to see that the whole space of programming languages and computational features opened up to a game theoretic analysis in a very systematic way, by varying the conditions on strategies. This step was taken by myself and my students, and has led to a substantial further development.

More recently, Luke Ong, Dan Ghica, Andrzej Murawski and myself have developed algorithmic game semantics, as a basis for compositional program analysis and verification, and - in Luke and Andrzej’s hands - as a beautiful meeting point between semantics and algorithmics.

There have of course been many other developments too, and many people have contributed. There have been recent workshops on these topics, e.g. in Seattle as part of the Federated Logic Conference in August.

I should say that I don’t think any of the above is at all controversial. I know all the main participants well.

Blogs are of course unfiltered by refereeing and the usual checks and balances. All the more reason for being careful when sketching the history of some ideas, particularly ones you are not familiar with.

Posted by: Samson Abramsky on October 26, 2006 2:33 PM | Permalink | Reply to this

Re: Classical vs Quantum Computation (Week 3)

Thanks for the detailed account. I’m sorry for any offense my sketchy remarks caused. I certainly didn’t intend to slight your well-known work on game semantics, but I see that what I wrote had that effect.

This café is nothing like a refereed paper. It’s a forum for free-wheeling chat sessions. So, we say plenty of things here that turn out to be wrong, misleading or otherwise suboptimal - just as in a normal conversation. The great advantage is that when I screw up here, anyone in the world can instantly give me the public knuckle-rapping I deserve!

Posted by: John Baez on October 27, 2006 2:14 AM | Permalink | Reply to this

Re: Classical vs Quantum Computation (Week 3)

Fine. Thanks for the gracious response.

On the matter of characterizing the free ccc, here is an additional reference.
In the paper `Axioms for definability and full completeness’,
http://web.comlab.ox.ac.uk/oucl/work/samson.abramsky/axiomatic.pdf
I give axioms which characterize `internally’ what it is for a ccc to be the free ccc, and show that suitable variants of both the HO and AJM games satisfy these axioms.

The axioms, and variants of them, have proved quite useful in proving various definability and full completeness results.

Posted by: Samson Abramsky on October 27, 2006 8:20 PM | Permalink | Reply to this

Re: Classical vs Quantum Computation (Week 3)

While people are explaining things, what does all this have to do with Martin’s theorem (that every “quasi-Borel” game [someone else please define(!)] has a winning strategy)?

I’ve read the start of Robin’s papers above and seen mention of the axiom of determinacy, which apparently goes hand-in-hand with the axiom of dependent choice (AC/DC?) to explain all one needs to know about “games you can win”

You are all (gentle contributors) hiding something from us (gentle readers) for sure – what is it?

Posted by: Allan Erskine on October 23, 2006 7:06 AM | Permalink | Reply to this

Re: Classical vs Quantum Computation (Week 3)

Allan Erskine:

You are all (gentle contributors) hiding something from us (gentle readers) for sure – what is it?

I don’t know anything about these game-theoretic interpretations of axioms of set theory, though I’ve heard of them. So, I’m not hiding something from you. There are lots of places to read about this kind of thing, starting with the Wikipedia, but I’ve never gotten very interested: it all seems too infinitary and ethereal for me.

Posted by: John Baez on October 27, 2006 1:50 AM | Permalink | Reply to this

Re: Classical vs Quantum Computation (Week 3)

After I wrote week240, from which the above material was taken, I found out that Dolan and Trimble’s work overlaps with previous work on the same subject.

I learned this from Dominic Hughes, who posted the following on the category theory mailing list - and permitted me to post it here:

This “backtracking game” characterisation has been known since around ‘93-‘94, in the work of Hyland and Ong:

  • M. Hyland and L. Ong, On full abstraction for PCF, Information and Computation, Volume 163, pp. 285-408, December 2000. [Under review for 6 years!]

(PCF is an extension of typed lambda calculus.) My D.Phil. thesis extended the lambda calculus (free CCC) characterisation to second-order, published in:

To characterise the free CCC on an arbitrary set {Z,Y,X,…} of generators (rather than a single generator, as you discuss), one simply adds the following Copycat Condition:

  • Whenever first player plays an occurrence of X, the second player must play an occurrence of X.

[Try it: see how X → Y → X has just one winning strategy.] Although the LICS’97 paper cited above appears to be the first place the Copycat Condition appears in print, I like to think it was already understood at the time by people working in the area. Technically speaking, winning strategies correspond to eta-expanded beta-normal forms. See pages 5-7 of my thesis for an informal description of the correspondence.

It sounds like you’ve reached the point of trying to figure out how composition should work. Proving associativity is fiddly. Hyland and Ong give a very elegant treatment, via a larger CCC of games in which both players can backtrack. The free CCC subcategory is carved out as the so-called innocent strategies. This composition is almost identical to that presented by Coquand in:

  • Proceedings of the CLICS workshop, Aarhus, 1992.

Dominic

PS A game-theoretic characterisation with an entirely different flavour (winning strategies less “obviously” corresponding to eta-long beta-normal forms) is:

  • Abramsky, S., Jagadeesan, R. and Malacaria, P., Full Abstraction for PCF, Info. & Comp. 163 (2000), 409-470. [Announced concurrently with Hyland-Ong, around ‘93-‘94.]

I’ve tried to rewrite week240 to take this information into account.

Posted by: John Baez on October 23, 2006 9:35 PM | Permalink | Reply to this

Re: Classical vs Quantum Computation (Week 3)

There’s a programming style called “continuation passing style.” It’s useful when building compilers for functional languages, since it allows you to turn recursion into iteration, which saves a lot of memory.

Continuations are the context of a computation: what has been computed before this point and what’s going to happen to the result after this point.

Chris Barker has written some papers on continuations in natural language. One example arises in the phrases “John saw Mary” versus “John saw everyone.” In the first phrase, “John saw” corresponds to something like

(1) (z \mapsto saw(John, z))

so that “John saw Mary” ends up as

(2) (z \mapsto saw(John, z))(Mary) = saw(John, Mary).

The second phrase causes trouble, though: we should end up with

(3) \forall y \; saw(John, y),

but there’s no z we can pass in to make that happen. Barker suggests that instead of applying the current state to the next input, we should continuize and pass the current continuation to the continuized input:

(4) Mary becomes (x \mapsto x(Mary))
(5) everyone becomes (x \mapsto \forall y \; x(y))

so we get

(6) (x \mapsto x(Mary))(z \mapsto saw(John, z)) = saw(John, Mary)

and

(7) (x \mapsto \forall y \; x(y))(z \mapsto saw(John, z)) = \forall y \; saw(John, y)
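These continuized denotations can be played with directly (a toy sketch of my own; the finite DOMAIN stands in for the universal quantifier, and all names here are made up for illustration):

```python
DOMAIN = ["Mary", "Sue", "Tom"]  # a toy universe of discourse

def saw(subj, obj):
    return ("saw", subj, obj)    # a sentence, represented as a tagged tuple

john_saw = lambda z: saw("John", z)      # the continuation z |-> saw(John, z)

mary = lambda k: k("Mary")               # x |-> x(Mary)
everyone = lambda k: {k(y) for y in DOMAIN}  # x |-> forall y. x(y), finitely
```

Passing `john_saw` to each yields saw(John, Mary) in one case and the set of all sentences saw(John, y) in the other.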

Continuations play a big role in the holodeck games. In the holodeck, the second player is given a continuation of the current game every time it’s his move.

Consider the game “Operator^Operator,”

(8) (X^{X X^X})^{(X^{X X^X})} = X^{X X^X X^{X X^X}},

as played by Alice and Bob—but player 1, Alice, is a holodeck character. In order to talk about each point of this game, I’ll rewrite it as

(9) A^{B C^E D^{F G^H}}.

Alice has only one move available, A, so she plays it. Bob is given the continuation of the game and has three choices: he can win immediately with B or play C or D.

If he plays C, then Alice must play E and in the usual game, Bob would lose—but this is the holodeck, so he still has the three choices from his continuation.

If he plays D, then Alice finally has a choice. If she plays F, Bob would lose in the usual game, but here he has his continuation to call on, so has three choices. If Alice plays G, then Bob gets the continuation of the current game and now has *four* choices: back up and play B, C, or D, or win with H.

If Bob chooses to go back and play C, and if Alice plays G again, then Bob is again given a continuation of the current game. He now has *five* choices: three from the first continuation, one from the second continuation (go back and win at H then) and one from the current continuation (win at H now).

The rules in Dolan’s talk tell how to keep track of the continuations at each point in the tree.

A strategy for a holodeck game consists of choosing a continuation along every branch until you win each one. The correspondence with lambda calculus arises when you notice that the continuations you’re given are exactly the unbound variables at each point in the lambda term, each branch is one of the inputs out of which the term must make a result of type X, and choosing a branch is applying the continuation.

In the game “Operator^Operator,” one can use currying and read it as a term that takes an operator D, a function C, and a variable B of type X and produces a variable of type X. Therefore, every lambda term from this tree starts

(10) D \mapsto (C \mapsto (B \mapsto \dots))

The function C takes an input E of type X and produces an output of type X.

The operator D takes a function G and an input F, while the function G takes an input H.

If Bob wants to win straight away, choosing B immediately, we get

(11) D \mapsto (C \mapsto (B \mapsto B))

If Bob chooses to echo each move of Alice—that is,

- D when Alice plays A

  - C if Alice plays G (note that when she plays G he gets a continuation allowing him to play the move H)

    - H when Alice plays E

  - B if Alice plays F

we get

(12) D \mapsto (C \mapsto (B \mapsto D(H \mapsto C(H))(B)))
(13) = D \mapsto (C \mapsto D(H \mapsto C(H)))
(14) = D \mapsto (C \mapsto D(C))
(15) = D \mapsto D

which is the identity operatorator.
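The reduction above can be spot-checked by writing the echo strategy as an ordinary Python lambda term and comparing it with the identity on sample inputs. This is only a sketch: here X is taken to be the integers, and the particular D, C, and B below are arbitrary test values.

```python
# The echo strategy as a Python lambda term (equation (12) above):
echo = lambda D: (lambda C: (lambda B: D(lambda H: C(H))(B)))

# The claimed normal form (equation (15)):
identity_op = lambda D: D

# Sample inputs: an operator D : (X -> X) -> (X -> X), a function C : X -> X,
# and a value B : X, with X = int.
D = lambda g: (lambda x: g(g(x)) + 1)
C = lambda x: 2 * x
B = 5

# Beta/eta equivalence shows up as equality on these sample inputs:
assert echo(D)(C)(B) == identity_op(D)(C)(B)
```

Of course a few numerical checks are not a proof; the proof is the beta/eta reduction in equations (12)-(15).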

Posted by: Mike Stay on October 23, 2006 9:39 PM | Permalink | Reply to this

Re: Classical vs Quantum Computation (Week 3)

I wrote

identity operatorator

I meant “meta-operator,” i.e. “operator^operator”

Posted by: Mike Stay on October 23, 2006 11:13 PM | Permalink | Reply to this

Re: Classical vs Quantum Computation (Week 3)

It was pointed out to me that when John wrote operatorator, he defined it to be a map from operators to operators. But Jim Dolan used a different set of conventions, so I used John’s convention, got confused, and then “corrected” it to Jim’s.

Anyway, here’s how Jim uses the terms:

value = X

function (or meta-value) = X^X

functional = X^{(X^X)}

functionalal = X^{\left(X^{(X^X)}\right)}

operator (or meta-function) = {(X^X)}^{(X^X)}

operatorator = {(X^X)}^{\left({(X^X)}^{(X^X)}\right)}

meta-operator = {\left({(X^X)}^{(X^X)}\right)}^{\left({(X^X)}^{(X^X)}\right)}

Posted by: Mike Stay on October 24, 2006 12:29 AM | Permalink | Reply to this

Re: Classical vs Quantum Computation (Week 3)

In the first version of what I wrote above, I claimed that all systematic ways of picking an element of (X^X)^{(X^X)} could be defined using the lambda calculus. I was disabused of this notion by Vaughan Pratt, who wrote on the category mailing list:

Hi, John,

In your Week 240 post to categories, you said

The moral of this game is that all systematic methods for picking an element of (X^X)^{(X^X)} for an unknown set X can be written using the lambda calculus.

What is unsystematic about the contagious-fixpoint functional? This is the functional that maps those functions that have any fixpoints to the identity function (the function that makes every element a fixpoint) and functions without fixpoints to themselves (thus preserving the absence of fixpoints). It’s a perfectly good functional that is equally well defined for all sets X, its statement in no way depends on X, and conceptually the concept of contagious fixpoints is even intuitively natural, but how do you write it using the lambda calculus?

Many more examples in this vein at JPAA 128, 33-92 (Paré and Román, Dinatural numbers, 1998). The above is the case K = {0} of Freyd’s (proper) class of examples.

Vaughan

Here Pratt is using “functional” to mean what I called an “operator”.
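For finite sets, Pratt’s contagious-fixpoint operator is easy to write down directly. Here is a small Python sketch, purely for illustration: a function on a finite set is represented as a dict from elements to elements.

```python
# A finite-set sketch of the contagious-fixpoint operator X^X -> X^X:
# functions with a fixpoint go to the identity (every element a fixpoint),
# functions without fixpoints go to themselves (preserving that absence).

def contagious_fixpoint(f):
    """Map a finite function (as a dict) per Pratt's description."""
    if any(f[x] == x for x in f):
        return {x: x for x in f}   # has a fixpoint: return the identity
    return dict(f)                 # no fixpoints: return f unchanged

has_fix = {0: 0, 1: 0, 2: 1}       # 0 is a fixpoint
no_fix = {0: 1, 1: 2, 2: 0}        # a 3-cycle, no fixpoints

assert contagious_fixpoint(has_fix) == {0: 0, 1: 1, 2: 2}
assert contagious_fixpoint(no_fix) == no_fix
```

The definition is uniform in the set, yet the case split on “has a fixpoint” is exactly what a closed lambda term of type (X^X)^{(X^X)} cannot express.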

I’ve fixed this in a quick-and-dirty way above, and a bit more carefully in week240.

Posted by: John Baez on October 24, 2006 7:58 AM | Permalink | Reply to this

Systematic elements.

Vaughan Pratt wrote in part:

What is unsystematic about the contagious-fixpoint [operator]? This is the functional that maps those functions that have any fixpoints to the identity function (the function that makes every element a fixpoint) and functions without fixpoints to themselves (thus preserving the absence of fixpoints).

This is systematic in a precise sense: it is canonical. That is, we have two functors from the groupoid of sets to itself, the constant functor X |-> 1 and the “operator” functor X |-> (X^X)^(X^X). (Notice that the latter is not a functor on the full category of sets, but such a requirement would go beyond being merely systematic.) The contagious-fixpoint operator is a canonical transformation (like a natural transformation, but only respecting isomorphisms) between these two functors.

I notice that (AFAICT) this operator cannot be defined constructively. (I suppose that Jim would say that this is because it is merely canonical, not natural … but the same is true of the various lambda-calculus terms, which are fine constructively.) IOW, this canonical transformation cannot be defined in an arbitrary topos.

Are all the canonical transformations (between functors given by types in the lambda calculus) definable in an arbitrary topos given by the lambda calculus? One might want to weaken topos here to LCC pretopos, on the grounds that this better reflects mathematical constructivism (or just because one wants to do centipede mathematics). Of course, if you weaken further to CCC, then we know that the answer is yes!

Posted by: Toby Bartels on October 27, 2006 8:00 AM | Permalink | Reply to this

Re: Systematic elements.

Pratt is talking about *dinatural* transformations X^X -> X^X, and there is no need to restrict to the groupoid. As I see it, the key point which allows one to construct these weird dinaturals is that if you have two functions f: X -> Y, g: Y -> X, then the composite fg has the same “looping” behavior as gf (e.g. if (fg)^2(y) = y then (gf)^2(x) = x where x = g(y); here x and y are part of the same “necklace”).
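The “necklace” observation (that fg and gf share their looping behavior, so in particular one has a fixpoint iff the other does) can be spot-checked on a small finite example. This Python sketch is purely illustrative, with f and g chosen arbitrarily.

```python
# Finite functions as dicts: f : X -> Y and g : Y -> X,
# with X = {0, 1, 2} and Y = {'a', 'b'}.
f = {0: 'a', 1: 'b', 2: 'a'}
g = {'a': 1, 'b': 0}

fg = {y: f[g[y]] for y in g}   # the composite f.g : Y -> Y
gf = {x: g[f[x]] for x in f}   # the composite g.f : X -> X

fg_has_fix = any(fg[y] == y for y in fg)
gf_has_fix = any(gf[x] == x for x in gf)

# One composite has a fixpoint iff the other does:
assert fg_has_fix == gf_has_fix
```

(In this example both composites are fixpoint-free: fg swaps 'a' and 'b', and gf cycles through {0, 1}.)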

Off-hand, it looks like the definition of the contagious fixpoint dinatural and others of that ilk just uses some obvious case-splitting definable in any Boolean topos or Boolean pretopos. OTOH, I’m tempted to make the wild guess that for a typical non-Boolean topos, like the topos whose objects are functions and whose morphisms are commutative squares, the only dinaturals X^X -> X^X are the ccc-definable ones. [But I’d be delighted to be proven wrong here.]

Posted by: Todd Trimble on October 27, 2006 3:32 PM | Permalink | Reply to this

Re: Systematic elements.

Pratt is talking about *dinatural* transformations X^X -> X^X, and there is no need to restrict to the groupoid.

But I would argue that it’s legitimate to consider canonical transformations between groupoid functors, which are more general; any of these should be accepted as a “systematic” definition.

Nevertheless, it’s a good point that Vaughan’s example is a dinatural transformation. (This did not occur to me; perhaps I should read the paper that he cited.)

I agree with all three statements (including all their modal operators) in your second paragraph.

Posted by: Toby Bartels on October 30, 2006 12:35 AM | Permalink | Reply to this

Re: Classical vs Quantum Computation (Week 3)

James gave a second talk on this stuff, available here:

Posted by: John Baez on October 25, 2006 5:50 AM | Permalink | Reply to this

Re: Classical vs Quantum Computation (Week 3)

James Dolan pointed out:

you describe holodeck strategies for “lady or tiger” where you take back “just when the tiger is about to eat you”, but that’s not the way it works. you take back just _after_ the tiger has eaten you.

(i guess that this is partially because of your lack of experience with computer games with a “saved game” feature. typically you die in the game and the computer plays some sort of funeral or at least funereal music; then you’re taken to the reincarnation gallery where you select one to return to from your catalog of previous lives. or something like that.)

Posted by: John Baez on October 25, 2006 6:14 AM | Permalink | Reply to this
Read the post Lauda on Topological Field Theory and Tangle Homology
Weblog: The n-Category Café
Excerpt: Aaron Lauda's thesis on topological field theory and tangle homology.
Tracked: October 25, 2006 10:02 PM
Read the post Classical vs Quantum Computation (Week 4)
Weblog: The n-Category Café
Excerpt: Currying and uncurrying, evaluation and coevaluation: basic aspects of the "quantum lambda-calculus".
Tracked: October 26, 2006 9:28 PM
