
September 8, 2009

The Pi Calculus II

Posted by John Baez

guest post by Mike Stay

John Baez has been struggling to understand the pi calculus — a formalism for modelling concurrency in computer science.

I’m going to try to explain it nice and slowly. I’ll use the “polyadic asynchronous” variant of the pi calculus.

In the pi calculus, there are mailboxes, each with a name like x. You should think of the name x as the key that opens the mailbox: if you don’t have the key, you can’t deposit or withdraw messages. Messages consist of tuples of keys: (x, z, x, y) is a typical message. I’ll get to how we deposit and withdraw messages below.

You can get a new box with the ν operator, but I’ll spell it “new”. So

  new (x) { ... }

creates a context in which the box x can be used.

The simplest thing you can do is make a deposit:

  new (x) { Deposit (x) in x. }

This puts a copy of the key for opening x in the mailbox. But there’s nobody to withdraw mail, so it’s pretty useless.

I’ve been saying “you”, but really there are lots of things going on at once, called “processes”. If two things P and Q are happening at the same time, we write P | Q. So a slightly more useful setup is:

  new (x,y) {

     Deposit (x) in x. |

     Wait until you can withdraw (z) from x,

      then deposit (z) in y.

  }

Here there are two mailboxes and two processes.

Now it’s time for some rewrite rules. In the example above, the obvious thing to do is let the deposit trigger the withdrawal; this rule is called “synchronization”. It says

  Deposit (z1, z2, ..., zn) in x |

  Wait until you can withdraw (y1, y2, ..., yn) from x,

  then P

  ⇒

  P { with z1, z2, ..., zn replacing y1, y2, ..., yn }

In this case, n = 1, so the previous setup evolves under synchronization to

  new (x, y) {

    Deposit (x) in y.

  }

A process can do any number of withdrawals, but only one deposit. So the language describing how to construct a term in the pi calculus is

  P ::=

    Deposit (y1, y2, ..., yn) in x.   |

    Wait until you can withdraw (y1, y2, ..., yn) from x, then P   |

    P | P   |

    new (x1, x2, ..., xn) { P }   |

    Replicate P

The last one of these process constructors is involved in a rewrite rule called replication. It says

  Replicate P ⇒ (Replicate P) | P

which means that we can repeat this to get arbitrarily many copies of this process.
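
For example (a toy term of my own, built only from the constructors above), a replicated withdrawal behaves like a little forwarding server:

  Replicate (Wait until you can withdraw (z) from x, then deposit (z) in y.) |

  Deposit (a) in x. |

  Deposit (b) in x.

Here a and b are free keys. Replication peels off one copy of the waiting process for each deposit, so both a and b eventually get forwarded to y, in either order, since nothing fixes which deposit synchronizes first.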

Now, of course, they had to make it hard to understand or they wouldn’t get published, so they usually use this shorthand (though I won’t here):

Deposit (y1, y2, ..., yn) in x.
becomes x̄y₁y₂...yₙ.

Wait until you can withdraw (y1, y2, ..., yn) from x, then P
becomes x(y₁, y₂, ..., yₙ)P.

new (x1, x2, ..., xn) { P }
becomes ν(x₁, x₂, ..., xₙ)(P).

Replicate P
becomes !P.
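
For example (my own translation, using the dictionary above), the two-mailbox setup from earlier,

  new (x,y) { Deposit (x) in x. | Wait until you can withdraw (z) from x, then deposit (z) in y. }

becomes the much terser

  ν(x, y)(x̄x | x(z)ȳz)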

The pi calculus is just as powerful as the lambda calculus; in fact, there’s a straightforward translation of lambda calculus into the pi calculus. The first step of the translation is to move to the “name-passing” variant of lambda calculus.

In the lambda calculus, we have three ways to make a term (here I’m ignoring types, but it’s easy to figure out where they should go):

  t ::=

    x (a variable, the base case) |

    λx. t (abstraction) |

    t1 t2 (application)

In the name-passing variant, the term constructors look like this:

  t ::=

    x (a variable, the base case) |

    λx. t (abstraction) |

    t x (“small” application) |

    Let x be t1 in t2 (new name)

Note that we can only apply terms to variables, not to arbitrary terms. Why might this be useful? Well, imagine if t1 is some enormous term; it would save a lot of space to use x as an abbreviation for or reference to t1. It’s not too far off to call x the “address” of t1. Only when x is being applied to some other variable—like in x y—do we replace it by t1.

To translate from the usual lambda calculus into the name-passing variant, we use the first two term constructors as they are. An application

  t1 t2

translates to

  Let x be t2 in (t1 x).
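
For instance (my example), the application (λy.y) (λz.z) translates to

  Let x be (λz.z) in ((λy.y) x)

so the possibly-large argument term sits in one place, and the short name x stands in for it wherever it’s needed.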

Next, we apply each term constructor to a dummy name u:

  x u

  (λx.t) u

  t x u

  (Let x be t1 in t2) u

Finally, we do this embedding into pi calculus:

  [[ x u ]] = Deposit (u) in x.

  [[ (λx.t) u ]] = Wait to withdraw (x, v) from u, then ([[ t v ]])

  [[ t x u ]] = new (v) { [[ t v ]] | Deposit (x, u) in v. }

  [[ (Let x be t2 in t1) u ]] =

     new (x) {

     [[ t1 u ]] |

    Replicate (Wait to withdraw (v) from x, then [[ t2 v ]])

  }
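
To watch the encoding in action (a derivation of my own, using only the clauses above), take t to be an abstraction λy.s in the small-application clause:

  [[ (λy.s) x u ]]

    = new (v) { [[ (λy.s) v ]] | Deposit (x, u) in v. }

    = new (v) { Wait to withdraw (y, w) from v, then [[ s w ]] | Deposit (x, u) in v. }

    ⇒ [[ s w ]] { with x, u replacing y, w }

    = [[ s{x/y} u ]]

A single synchronization on the private mailbox v thus simulates one β-reduction; afterwards v occurs nowhere, so the new (v) can be dropped.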

This has a strong resemblance to the continuation passing style transform, also known as the Yoneda embedding.

Posted at September 8, 2009 4:04 PM UTC


Re: The Pi Calculus II

Wow — that was ‘nice and slow’? It flew by so fast I wouldn’t even have noticed it, except for the sonic boom and the lingering smell of burnt neurons!

You have to realize that as a mathematical physicist I deal with familiar, comfortable things like Lagrangians and spinors, not mysterious abstractions like ‘mailboxes’ and ‘addresses’. And since I programmed mainly back when I was in high school, using BASIC in a little room with a line printer, all modern pseudocode looks like extraterrestrial graffiti to me.

But I think I’m sort of roughly getting the point, in a vague impressionistic manner… sort of like how I can see the trees in a picture reflected in water, upside-down and woozily distorted.

Let me try to think of some questions…

I’m trying to understand this weird ‘ν’ or ‘new’ operator, as in ‘new (x)’.

At first glance new (x) looks like a relative of quantifiers I know and love, like ∃x or ∀x or λx, but apparently it’s an operator that opens a new ‘mailbox’ named x.

What’s a ‘mailbox’? It’s a place where you can store and presumably remove ‘messages’, whatever those are. Well, okay: you say a message consists of a tuple of ‘keys’, where a ‘key’ is the name of a mailbox. Suspiciously circular… but not viciously so.

Anyway, here comes my question. You say:

new (x) { Deposit (x) in x. }

puts a copy of the key for opening x in the mailbox. It’s a bit weird to put a key for opening a mailbox inside that very same mailbox — especially if it’s locked — but maybe you’re just trying to freak me out with a self-referential example. So my question is: would

new (x) { Deposit (y) in x. }

be a legal thing to say?

I guess that

new (x) { Deposit (x) in y. }

would not be legal, since by saying new (x) you’ve opened a mailbox called x, but not one called y.

Posted by: John Baez on September 8, 2009 5:48 PM | Permalink | Reply to this

Re: The Pi Calculus II

We need to figure out what boost to apply, because Lagrangians involving spinors give off Cherenkov radiation as they hit my brain.

So my question is: would

new (x) { Deposit (y) in x. }

be a legal thing to say?

Yes, just like in the lambda calculus term λx.yx, where x is bound and y is free. In lambda calculus, the only way to get a bound variable is with the λ operator, while in pi calculus, you can bind x either using new (x) or Wait until you can withdraw (x), then P.

I guess that

new (x) { Deposit (x) in y. }

would not be legal, since by saying new (x) you’ve opened a mailbox called x, but not one called y.

This is a legal term, but like before, y is a free name.

Posted by: Mike Stay on September 8, 2009 6:44 PM | Permalink | Reply to this

Re: The Pi Calculus II

Not saying that I am anywhere near the point where I understand this, but to temper the abstractions a bit, consider this:

With lambda calculus, we can encode the natural numbers using “only” abstractions, using the Church numerals. Then we have:
0 = λf . λ x . x
1 = λf . λ x . f x
2 = λf . λ x . f (f x)
and so on. Addition can be done by
plus = λm . λn . λf . λx . m f (n f x)
If we use this, we’ll do something like
plus 1 0 =
(λm . λn . λf . λx . m f (n f x)) (λf . λ x . f x) (λf . λ x . x) = (inserting into m)
(λn . λf . λx . (λf . λ x . f x) f (n f x)) (λf . λ x . x) = (inserting into n)
λf . λx . (λf . λ x . f x) f ((λf . λ x . x) f x) = (applying the inserted n to the available f)
λf . λx . (λf . λ x . f x) f ((λ x . x) x) = (applying the result to the available x)
λf . λx . (λf . λ x . f x) f x = (applying the f and x available through the next embedded lambda expression, from the m)
λf . λx . f x = 1

It is all unwieldy, and pedantic, but it brings us where we expect to go. Indeed, 1+0=1.

Now, what the pi calculus, in the exposition here, gets us is a way to phrase this whole thing, and more, as a matter of message passing between entities, with tags identifying both message contents and access rights to messages. It is a way of thinking that’s not very close to the model you’d be used to if you’re more mathematically inclined, but it is close to how, say, Smalltalk or MPI think about the world.

Here, we could - I’m guessing - carry over the idea of the Church numeral, or just use Peano numerals instead. With the given embedding, the first few Church numerals would be:

0 = λ f . λ x . x = 
Wait for (f, v) from (somewhere) then {
  Wait for (x, w) from (somewhere) then {
    (x w)
  }
}
1 = λ f . λ x . f x = 
Wait for (f, v) from (somewhere) then {
  Wait for (x, w) from (somewhere) then {
    (f (x w))
  }
}

Where I’m being deliberately vague, since the lambda expressions in the Church numerals are meant to sit around and consume something once you feed them what they need to execute as lambda expressions.

All this is more speculation than solid explanation - since I, too, am trying to understand pi calculus from the blog posts here - but it unravels at least one of the thoughts I had while reading.

Posted by: Mikael Vejdemo Johansson on September 8, 2009 7:00 PM | Permalink | Reply to this

Re: The Pi Calculus II

Mikael, what are the “v” and “w” in your examples? They seem to come out of nowhere.

Both you and Mike seem to be giving a whole lot of grammar but almost no meaning. Maybe some people can talk in terms of pure grammar, but I think of a noun as a representation of a thing, not as something which can be modified by adjectives and which can stand as the subject of a verb.

Posted by: John Armstrong on September 8, 2009 8:40 PM | Permalink | Reply to this

Re: The Pi Calculus II

I tried to make it replete with nouns, with mailboxes and messages and people sorting mail!

Let me try again, more nounfully: we’ve got an infinite mailroom, and each mailbox is labeled with a natural number. There’s a counter that says how many mailboxes have been put to use. The mailroom is staffed by magical elves, each with a set of instructions. Since the instructions have to be coordinate-free, they use nicknames like x and y to refer to the boxes; each elf keeps a list of (nickname, mailbox number) pairs around to keep them all straight. Elves live until they deliver one message, and then they die.

When an elf sees an instruction Do P and Q in parallel, he gets very excited, clones himself and his list of nicknames; like all replicating creatures, elves find this very pleasurable. Then the original follows the instructions P and the clone follows the instructions Q. So even if a particular elf dies, it’s possible for one of his clones (or great-great-grandclones or whatever) to go on living.

When an elf sees the instruction new (x) { P }, he checks to see if he’s already got a box nicknamed x. If so, he crosses that pair out; if not, he does nothing. Next, he increments the counter and writes down the pair (x, current counter value) on his list. After doing all that, he follows the instructions P.

When an elf sees the instruction Deposit (y1, ..., yn) in x, he mournfully takes a slip of paper, looks up each of the numbers whose nicknames are y1, ..., yn and writes those numbers in order on the paper. Then he looks up the number whose nickname is x, deposits the slip in that box, and vanishes in a puff of sparkly dust.

When an elf sees the instruction Wait until you can withdraw (y1, ..., yn) from x, then do P, he first crosses out the pairs containing the nicknames y1, ..., yn. Then he looks up the number whose nickname is x, and checks it repeatedly for mail. Some elves have been checking their mailboxes their entire lives; others find the box full of messages and draw one out at random. When an elf gets a message, he writes down the numbers next to their nicknames and follows the instructions P.

Finally, when an elf sees the instruction Replicate P, he squeals with joy, because he’s just been told he’s going to live forever and spend it very pleasurably. He spends the rest of eternity cloning himself and his list, then sending his clones off to do P.

Is that any better?

Posted by: Mike Stay on September 8, 2009 9:29 PM | Permalink | Reply to this

Re: The Pi Calculus II

Is that any better?

Some, but the sarcasm is really turning me off. Just because I ask for meaning doesn’t mean I need to be talked down to.

Todd comes a lot closer in glossing my question as being about semantics. Here, you’re turning “point, line, plane” into “table, chair, beer mug”, and your only improvement in explanation is taking more time to do it (and talking a tiny bit more about the process than you did at first).

I understand that there may not really be a “Hello World” here, but Mikael’s λ-calculus examples illustrate using the formalism to do something. Even then, though, I’d be confused without having seen his examples worked through elsewhere.

You may be able to just process abstract strings in your head with no referents whatsoever, but when I first learned the λ-calculus I was thinking “λx.xy” is a function that takes its first argument in and applies it to y. I understand that this is an interpretation of the λ-calculus and not the λ-calculus itself, but it gives meaning and semantics to the bare syntax, and without a semantic hook I just can’t hold a big collection of BNF rules in my head.

Posted by: John Armstrong on September 9, 2009 12:25 AM | Permalink | Reply to this

Re: The Pi Calculus II

Some, but the sarcasm is really turning me off. Just because I ask for meaning doesn’t mean I need to be talked down to.

I’m sorry if I offended you; I certainly can’t talk down to you from where I’m standing! I really wanted to know if my explanation was any better (clearly not!) The elves were a rather poor attempt to inject some lightheartedness into the very tedious, dense notation.

You may be able to just process abstract strings in your head with no referents whatsoever, but when I first learned the λ-calculus I was thinking “λx.xy” is a function that takes its first argument in and applies it to y. I understand that this is an interpretation of the λ-calculus and not the λ-calculus itself, but it gives meaning and semantics to the bare syntax, and without a semantic hook I just can’t hold a big collection of BNF rules in my head.

I thought I was giving a semantics: x̄y is a process that deposits the number y refers to in the box labeled by the number x refers to.

I understand that there may not really be a “Hello World” here, but Mikael’s λ-calculus examples illustrate using the formalism to do something. Even then, though, I’d be confused without having seen his examples worked through elsewhere.

If you’d like, I can give an example term and the steps in its reduction.

Posted by: Mike Stay on September 9, 2009 12:39 AM | Permalink | Reply to this

Re: The Pi Calculus II

So how does putting messages in mailboxes compute something?

Giving step-by-step examples that actually do something would, indeed, help.

Posted by: John Armstrong on September 9, 2009 2:37 AM | Permalink | Reply to this

Re: The Pi Calculus II

I posted an example here.

Posted by: Mike Stay on September 9, 2009 6:32 PM | Permalink | Reply to this

Re: The Pi Calculus II

I haven’t been following this discussion too closely yet, but I have a sense of what John A is asking: what is π-calculus about, exactly? (Or maybe I’m just projecting a question I have myself.) For example, at a naive level the λ-calculus is about a world of functions in which the inputs and outputs of functions are other functions. At a more sophisticated level, it’s about cartesian closed categories with an object X for which there are isomorphisms

X ≅ X × X        X ≅ X^X

I think I might like what you’ve written above once I have a chance to look more closely, but my sense is that you are describing the syntax in vivid concrete terms, but some people, let’s say John, would like to understand the intended semantics. It would be sort of like describing how free groups (or free somethings) work in terms of colorful metaphors of magical elves, and having John ask: yes, but what’s a group?

Usually a logical syntax is really describing a free thingamajig of some sort, and one wants to have a sense of what are thingamajigs themselves. Was this basically what John B was asking in his earlier post?

Posted by: Todd Trimble on September 8, 2009 10:39 PM | Permalink | Reply to this

Re: The Pi Calculus II

Todd wrote:

Usually a logical syntax is really describing a free thingamajig of some sort, and one wants to have a sense of what are thingamajigs themselves. Was this basically what John B was asking in his earlier post?

Yes, I was trying to figure out what the π-calculus is about. Both in the naive sense of:

HUH?

and also in the more sophisticated sense of:

DON’T JUST GIVE ME SYNTAX! GIVE ME SEMANTICS!

In particular, having finally understood the sense in which it’s true — and also the sense in which it’s not true — that

THE λ-CALCULUS IS ABOUT CARTESIAN CLOSED CATEGORIES

I was hoping for some deus ex machina to descend from on high and tell me something like

THE π-CALCULUS IS ABOUT ———— CATEGORIES

Of course even after people told me that the λ-calculus was about cartesian closed categories, I didn’t understand it until I did a lot of work. But in the case of the π-calculus, the work seems harder, because I think that nobody knows quite how to fill in the blank.

So it’s up to us.

Robin Milner’s book on the π-calculus is actually very nice and helpful, but I’m impatient so I’m demanding help from everyone I meet.

Posted by: John Baez on September 8, 2009 11:07 PM | Permalink | Reply to this

Re: The Pi Calculus II

I’m not sure I’d offer very good odds on you finding a simple term to insert in

  THE Π-CALCULUS IS ABOUT ———— CATEGORIES,

unless it be ‘Baez-Stay’ once you’ve found the right definition.

I certainly lost the beauty contest as to posts. This one already has 33 comments (34 once I’ve finished), to the round 0 for mine from the day before, but if I could draw your attention to a paper I referred to – Pi-Calculus in Logical Form, in section 5 you’ll see how they want to model π-calculus processes using coalgebras for a complicated functor on a certain presheaf category.

That functor looks mighty specific, not a ‘natural’ choice with an air of inevitability to it. In fact, part of the coalgebraic message is to show that you can construct the semantics piecemeal by building it up from known semantics for the components of the complicated functor.

Maybe then at best the next step functor for the π\pi-calculus belongs to the collection of functors which are generated from a set of fairly simple components, and maybe categories of coalgebras for all such functors have a nice description. But maybe not.

Posted by: David Corfield on September 9, 2009 9:12 AM | Permalink | Reply to this

Re: The Pi Calculus II

Benjamin Pierce said of the pi calculus

The lambda-calculus holds an enviable position: it is recognized as embodying, in miniature, all of the essential features of functional computation. Moreover, other foundations for functional computation, such as Turing machines, have exactly the same expressive power. The “inevitability” of the lambda-calculus arises from the fact that the only way to observe a functional computation is to watch which output values it yields when presented with different input values.

Unfortunately, the world of concurrent computation is not so orderly.
Different notions of what can be observed may be appropriate for different circumstances, giving rise to different definitions of when two concurrent systems have “the same behavior”: for example, we may wish to observe or ignore the degree of inherent parallelism of a system, the circumstances under which it may deadlock, the distribution of its processes among physical processors, or its resilience to various kinds of failures. Moreover, concurrent systems can be described in terms of many different constructs for creating processes (fork/wait, cobegin/coend, futures, data parallelism, etc.), exchanging information between them (shared memory, rendezvous, message-passing, dataflow, etc.), and managing their use of shared resources (semaphores, monitors, transactions, etc.).

This variability has given rise to a large class of formal systems called process calculi (sometimes process algebras), each embodying the essence of a particular concurrent or distributed programming paradigm…

(Benjamin C. Pierce, Foundational Calculi for Programming Languages, CRC Handbook of Computer Science and Engineering, Chapter 136, CRC Press, 1996.)

So you’re probably right.

Posted by: Mike Stay on September 9, 2009 6:25 PM | Permalink | Reply to this

Re: The Pi Calculus II

Perhaps this variability is indicated by people out there who are not at all convinced about the merits of the π-calculus, e.g., as regards Web Service Composition Languages see Pi calculus versus Petri nets: Let us eat “humble pie” rather than further inflate the “Pi hype”.

I see this sparked a workshop attended by the author, van der Aalst, and Robin Milner.

For an interview with someone using the π-calculus in practice, see this.

Posted by: David Corfield on September 10, 2009 9:26 AM | Permalink | Reply to this

Re: The Pi Calculus II

The van der Aalst article correctly points to an expressivity gap in the pi-calculus. Pi-calculus communication has binary synchronisation, i.e. each action of receiving a message receives exactly one message. Petri-nets (caveat: there are many different notions of Petri-nets; what I say below may not hold for all of them) allow (a form of) n-ary synchronisation, which means n messages are consumed in each act of receiving messages. This cannot be expressed (see my definition in a different post regarding the difficulties defining the concept of expressivity) using binary communication. This was proved by Iain Phillips (I don’t think he published this result). On the other hand there are many things Petri-nets cannot express that pi-calculus can, so the expressivity of pi-calculus and petri-nets is incomparable.
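
(To illustrate in the notation of Mike’s post, as my own gloss of the point: each pi-calculus withdrawal consumes exactly one message, so an atomic two-message step like the hypothetical

   Wait until you can withdraw (a) from x and (b) from y in one atomic step, then P

has no direct pi-calculus counterpart, whereas a Petri-net transition with two input places does exactly this.)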

It is straightforward to generalise pi-calculus interaction to n-ary interaction.

The precise form of interaction a calculus uses has deep effects on the expressivity the calculus offers.

Posted by: Martin Berger on September 10, 2009 1:42 PM | Permalink | Reply to this

Re: The Pi Calculus II

The interview David mentioned is quite interesting to me, since as an outsider I really want to understand what all this abstract stuff is good for. I can’t resist quoting a bit… but I highly recommend reading the whole thing:

Steve Ross-Talbot: I left SpiritSoft in about 2001, because, like all start-ups, you have to focus on the things that you can deliver immediately, and SpiritSoft needed to focus on messaging, not on the high-level stuff that I very much wanted to do.

I got a dispensation to leave and do more work on event-condition-action, whereupon I met Duncan Johnston-Watt, who persuaded me to join a new company called Enigmatec.

At Enigmatec we went back to fundamentals. This really is the thread upon which the pi-calculus rests for me. When you do lots of event-condition-actions, if the action itself is to publish, you get a causal chain. So one event-condition-action rule ends up firing another, but you do not know that you have a causal chain—at least the system does not tell you.

It troubled me, for a considerable time, that this was somewhat uncontrollable, and certainly if I were a CIO and somebody said they were doing stuff and it’s terribly flexible, I’d be seriously worried about the fragility of my infrastructure with people subscribing to events and then onward publishing on the fly.

So causality started to trouble me, and I was looking for ways of understanding the fundamentals of interaction, because these subscriptions to events and the onward publishing of an event really have to do with an interaction between different services or different components in a distributed framework.

Many years before I did any of this, I studied under Robin Milner, the inventor of the pi-calculus, at Edinburgh University. I came back to the pi-calculus at Enigmatec and started to reread all of my original lecture notes, and then the books, and finally started to communicate with Robin himself. It then became quite obvious that there was a way of understanding causality in a more fundamental way.

One of the interesting things in the pi-calculus is that if you have the notion of identity so that you can point to a specific interaction between any two participants, and then point to the identity of an onward interaction that may follow, you’ve now got a causal chain with the identity token that is needed to establish linkage. This answered the problem that I was wrestling with, which was all about causality and how to manage it.

At Enigmatec, we told the venture capitalists we were doing one thing, but what we actually were doing was building a distributed virtual pi-calculus fabric in which you create highly distributed systems and run them in the fabric. The long-term aim was to be able to ask questions about systems, and the sorts of questions that we wanted to know were derived from causality. For example: Is our system free from livelocks? Is our system free from deadlocks? Does it have any race conditions?

These are the sorts of things that consume about half of your development and test time. Certainly in my experience the worst debugging efforts that I’ve ever had to undergo had to do with timing and resource sharing, which showed up as livelocks, deadlocks, and race conditions. Generally, what Java programmers were doing at the time to get rid of them, when they were under pressure, was to change the synchronization block and make it wider, which reduced the opportunity for livelocks and deadlocks. It didn’t fix the problem, really; what it did was alleviate the symptom.

Stephen Sparkes: Just made it slightly less likely to occur?

SR-T: That’s it. And I became obsessed with perfection, because I really felt you must be able to do this.

SS: It should be an absolute proof.

SR-T: Completely. We can prove something is an integer. Surely, we should be able to prove something about interaction. I started looking at other papers that leveraged the work of Robin Milner and the pi-calculus and found some fundamental work by Vasco Vasconcelos and Kohei Honda. Their work looked at something called session type, which, from a layman’s perspective, is this notion of identity tokens pre-pended to the interactions.

Layman’s perspective, eh? Anyway, I like the following passage a lot:

If you have that, then you can establish a graph—a wait-for graph or a causality graph—and statically analyze the program to determine whether it will exhibit livelocks, deadlocks, or race conditions.

I started to work with Kohei Honda on putting this into systems. That’s when the Choreography Working Group was established, and that’s when I became involved in the W3C.

It was quite natural to get the academics involved, which is why Robin Milner became an invited expert on that working group, along with Kohei Honda and Nobuko Yoshida. The three of them really formed the fulcrum for delivering these thoughts of very advanced behavioral typing systems, which the layman—the programmer—would never see. You would never need to see the algorithm. You would never need to read the literature—unless you were having trouble sleeping at night.

That’s the only reason I can think of for people reading it, unless you want to implement it. It should be something that’s not there, in the same way that we don’t have to know about Turing completeness and Turing machines and lambda-calculus to write software. That’s all dealt with by the type system in general.

I managed to establish early on a bake-off between Petri net theory and the pi-calculus within the Choreography Working Group, and clearly the pi-calculus won the day. It won because when you deal with large, complex, distributed systems, one of the most common patterns that you come across is what we call a callback, where I might pass my details to you, and you might pass it to somebody else, in order for them to talk back to me. In Petri net theory, you can’t change the graph: it’s static. In the pi-calculus, you can. That notion of composability is essential to the success of any distributed formalism in this Internet age, where we commonly pass what are called channels in the pi-calculus, but we might call them URLs, between people. We do it in e-mails all the time, and we’re not conscious of what that may mean if there is automated computing at the end of the e-mail doing something with our URL that was just passed. There’s a real need to understand the formal side of things in order to have composability across the Internet, let alone within the domains of a particular organization.

Posted by: John Baez on September 10, 2009 3:58 PM | Permalink | Reply to this

Re: The Pi Calculus II

More answers to ‘what is the π-calculus good for?’: Process algebras in systems biology and The Stochastic Pi Machine.

Posted by: David Corfield on September 11, 2009 2:41 PM | Permalink | Reply to this

Re: The Pi Calculus II

The Bonsangue and Kurz Pi-Calculus in Logical Form work is very impressive, but it considers the pi-calculus only up to strong bisimilarity. This relation (strong bisimilarity) is far away from being a contextual notion of equality of processes, i.e. it distinguishes processes that could not be distinguished by other processes. To me this says that it is not pi-calculus that is really modelled in this paper but rather the syntax of pi-calculus. The coalgebraic approach to modelling processes has a problem with reaching contextual notions of process equality.

Posted by: Martin Berger on September 10, 2009 1:24 PM | Permalink | Reply to this

Re: The Pi Calculus II

There’s some work, I think, trying to capture weaker bisimulation for some systems using trace semantics. In some cases this can be understood as coinduction/final coalgebras in a Kleisli category.

Ah yes, slide 10 here shows an instance of weaker bisimulation. I’ve no idea if anything similar could be used to model the right bisimulation for process calculi.

Posted by: David Corfield on September 10, 2009 3:41 PM | Permalink | Reply to this

Re: The Pi Calculus II

Martin, is the preferred bisimilarity what is called open bisimilarity? If so, does the approach of Ghani et al. in Relationally Staged Computations in Calculi of Mobile Processes satisfy you?

We apply the recently developed techniques of higher order abstract syntax and functorial operational semantics to give a compositional and fully abstract semantics for the π-calculus equipped with open bisimulation. The key novelty in our work is the realisation that the sophistication of open bisimulation requires us to move from the usual semantic domain of presheaves over subcategories of Set to presheaves over subcategories of Rel. This extra structure is crucial in controlling the renaming of extruded names and in providing a variety of different dynamic allocation operators to model the different binders of the π-calculus.

Posted by: David Corfield on September 11, 2009 2:04 PM | Permalink | Reply to this

Re: The Pi Calculus II

Thanks, David, for the pointer. The problem is only partially which bisimulation is right; it’s even more complicated: which labelled transition system is correct? If you look at asynchronous π-calculi, which are the most studied and used variant of π-calculi, then any bisimulation based on the transition systems proposed by Milner et al. is ‘wrong’ in the sense of being too discriminating: it distinguishes processes that cannot be distinguished contextually. Consider the following example.

   !x(v).x̄v

(Here ! is replication, x(v).P is the usual input, and x̄v is output of v along x.) This process cannot be distinguished from 0 (the inactive process) by any process running in parallel, or even by any context, in asynchronous calculi; intuitively, since outputs are asynchronous, no observer can tell whether its message sat in the mailbox untouched or was withdrawn and immediately re-deposited. Yet this process is not (weakly or strongly) bisimilar to 0 using the usual labelled semantics.

It took about a decade after the asynchronous π-calculus was proposed by Honda, Tokoro and Boudol before Merro came up with a labelled semantics such that bisimulation isn’t too discriminating.

Posted by: Martin Berger on September 11, 2009 3:50 PM | Permalink | Reply to this

Re: The Pi Calculus II

Category theoretic work on modelling name-passing process calculi continues apace; see e.g. Sam Staton’s Relating coalgebraic notions of bisimulation with applications to name-passing process calculi and thesis summary Name-passing process calculi: operational models and structural operational semantics.

Posted by: David Corfield on September 18, 2009 11:53 AM | Permalink | Reply to this

Re: The Pi Calculus II

David wrote:

I’m not sure I’d offer very good odds on you finding a simple term to insert in

THE π-CALCULUS IS ABOUT ———— CATEGORIES

unless it be ‘Baez–Stay’ once you’ve found the right definition.

At this point that’s what I’m sort of hoping. Or Armstrong–Baez–Bartels–Corfield–Johansson–Shulman–Trimble categories, if that’s what it takes.

I certainly lost the beauty contest as to posts.

Sorry — I should have replied to yours by saying “Huh??? That flew by so fast I wouldn’t have noticed it except for the sonic boom and the lingering smell of burnt neurons!” and then asking some really basic questions. In fact we should probably take turns doing this to each other’s posts. It might reduce people’s inhibitions and make for more interesting conversations.

… if I could draw your attention to a paper I referred to – Pi-Calculus in Logical Form, in section 5 you’ll see how they want to model π-calculus processes using coalgebras for a complicated functor on a certain presheaf category.

I’ll look at it sometime, but right now I just want to understand the π-calculus, and I have the feeling this won’t help much yet.

Hmm: I just dared to peek at the paper, and — perhaps unsurprisingly — it looks a bit more accessible than your description made it sound. But even so, I think what I need most are elves and mailboxes.

Maybe then at best the next step functor for the π-calculus belongs to the collection of functors which are generated from a set of fairly simple components, and maybe categories of coalgebras for all such functors have a nice description. But maybe not.

I get the feeling that something roughly like this is true. Not so much the business about coalgebras (I just don’t know), but the business about the π-calculus being just one of a collection of ‘process calculi’, without a strong air of inevitability about it. Sort of like ‘compact symmetric monoidal cocomplete abelian categories’: not a concept you should try to define all in one go; rather, a concept that’s built from very natural concepts that deserve to be defined separately.

Posted by: John Baez on September 9, 2009 5:27 PM | Permalink | Reply to this

Re: The Pi Calculus II

My post was one of those hanging on by the fingertips ones. I have a reasonable grip on adjunctions between syntax and semantics for propositional logic, e.g., Boolean algebras and sets of models or valuations. And I kinda get the coalgebraic trick of sticking in endofunctors on each side of the adjunction to capture modalities. And I get a small glimpse of the adjunction between first order theories and their groupoids of models of Forssell. So a very distant flicker suggests adding endofunctors to Forssell’s adjunction to capture first-order modal logic.

But then Steve Awodey has modelled first-order modal logic with sheaves. So I figured that some useful triangulation could be done. The sort of thing some of the regulars could figure out while completing their tax returns.

Posted by: David Corfield on September 10, 2009 9:16 AM | Permalink | Reply to this

Re: The Pi Calculus II

but the business about the π-calculus being just one of a collection of process calculi, without a strong air of inevitability about it. Sort of like compact symmetric monoidal cocomplete abelian categories: not a concept you should try to define all in one go; rather, a concept that’s built from very natural concepts that deserve to be defined separately.

Here is my take on this. The π-calculus, especially its asynchronous variant, is a minimal model of something every (reasonably powerful) theoretical model of interaction has to do. Usually you need to add further concepts (e.g. a lot of computing systems use timing to steer their operations) if you are producing a model of something that interacts, but you’ll always find the π-calculus inside. The key concepts the π-calculus introduces are:

  • The algebra: parallel composition, input, output, hiding.
  • Names as interaction points.
  • Names as the only data communicated between processes.
  • Scope extrusion.

Of those, the last two were introduced by π-calculus.

Posted by: Martin Berger on September 10, 2009 1:57 PM | Permalink | Reply to this

Re: The Pi Calculus II

Roughly, processes are things that send and receive messages, and change behavior according to the messages they see. They’re a model of computation that encompasses all computable functions, but tosses in some nondeterminism to model the race conditions you find in actual physical systems. If I send out two checks but only one can clear because I don’t have enough money in the account to cover both, which check gets satisfied? The rewrite rule isn’t confluent. In the same way,

x(y)P | x(y)Q | x̄z

can use the synchronization rule to become either

P{z/y} | x(y)Q

or

Q{z/y} | x(y)P

Nobody seems to know the “right” description of what the pi calculus is. We’ve asked here and on the categories mailing list,

“Lambda calculus is to cartesian closed categories as pi calculus is to what?”

and didn’t get any authoritative answers. People are using double categories and presheaves and linear logic and coends and all kinds of stuff to try to understand it, but there’s no nice proof that says “the bicategory of sorted pi theories is equivalent to the bicategory of cartesian closed categories equipped with a thingamabob” or anything of that sort.

So part of the reason for this post is to try to figure out “what the heck is a process?” from a category-theoretic point of view.

What I know is that in untyped lambda calculus we have

  • one type X, with X ≅ X^X,

  • lots of terms

  • confluent rewrite rules

and when we mod out the terms by the rewrite rules, we get a cartesian closed category.
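
(For instance, my gloss: after modding out, the β-rewrite is an honest equation between terms-as-morphisms,

  (λx.t)y = t{y/x}

whereas without the quotient it is only a directed rewrite.)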

In the pi calculus, we have

  • one type X

  • lots of terms

  • nonconfluent rewrite rules

If we mod out by these rewrite rules, we’ll lose a lot of the structure. So I figure that a “pi theory” will be some kind of bicategory, perhaps even cartesian closed. Also, there’s an embedding of the lambda calculus into the pi calculus.

Posted by: Mike Stay on September 8, 2009 11:28 PM | Permalink | Reply to this

Re: The Pi Calculus II

“We’ll lose a lot of the structure” is an understatement; the very idea of modding out by non-confluent rewrite rules gives me the willies. It feels like trying to quotient a group by an equivalence relation that isn’t invariant under the group structure: you ain’t gonna get nothing good. I mean, non-confluence means that one term P could reduce to “morally different” terms Q and R, but modding out by rewrites would then identify Q and R. I wouldn’t be much surprised if modding out by rewrites turned out to make everything trivial.

So maybe before finding a good semantics for the π-calculus we first need to really understand the models for lambda calculus where we don’t mod out by the rewrites.

Posted by: Mike Shulman on September 9, 2009 1:44 AM | Permalink | Reply to this

Re: The Pi Calculus II

So maybe before finding a good semantics for the π-calculus we first need to really understand the models for lambda calculus where we don’t mod out by the rewrites.

That’s why we’re looking at symmetric monoidal closed bicategories!

Posted by: Mike Stay on September 9, 2009 6:12 AM | Permalink | Reply to this

Re: The Pi Calculus II

Michael Shulman wrote:

So maybe before finding a good semantics for the π-calculus we first need to really understand the models for lambda calculus where we don’t mod out by the rewrites.

That’s exactly what Mike Stay’s thesis is about — so don’t anyone dare think about it!

Posted by: John Baez on September 9, 2009 5:34 PM | Permalink | Reply to this

Re: The Pi Calculus II

Could you elaborate on that? I mean in particular, how will you use symmetric monoidal closed bicategories to keep track of rewrites?

Posted by: Tobias Heindel on September 16, 2009 3:58 PM | Permalink | Reply to this

Re: The Pi Calculus II

Could you elaborate on that? I mean in particular, how will you use symmetric monoidal closed bicategories to keep track of rewrites?

Objects are types, morphisms are lambda terms (not equivalence classes of terms!), and 2-morphisms are rewrite rules.
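
(A minimal illustration of my own, under the usual reading where a term with one free variable of type X is a 1-morphism out of X: the two terms below are parallel 1-morphisms, and the β-step between them is a 2-morphism.)

  x : X → X        (λy.y)x : X → X        β : (λy.y)x ⇒ x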

Posted by: Mike Stay on September 16, 2009 11:38 PM | Permalink | Reply to this

Re: The Pi Calculus II

we first need to really understand the models for lambda calculus where we don’t mod out by the rewrites.

It might be interesting in this respect to consider the subcalculi of π-calculus that correspond precisely to (typed) λ-calculi. The fragments of π-calculus are well behaved and have canonical characterisations via typings. Here is a good starting point for this kind of work. I can supply more references if you are interested.

Posted by: Martin Berger on September 10, 2009 2:06 PM | Permalink | Reply to this

Re: The Pi Calculus II

http://hal.inria.fr/inria-00073220/en/
A 95-page tutorial on the lambda and pi calculi.

Abstract:
“This paper is concerned with the relationship between lambda-calculus and the pi-calculus. The lambda calculus talks about functions and their applicative behaviour. This contrasts with the pi-calculus, that talks about processes and their interactive behaviour. Application is a special form of interaction, and therefore functions can be seen as a special form of processes. We study how the functions of the lambda calculus (the computable functions) can be represented as pi-calculus processes. The pi-calculus semantics of a language induces a notion of equality on the terms of that language. We therefore also analyse the equality among functions that is induced by their representation as pi-calculus processes.”

Posted by: Stephen Harris on September 14, 2009 7:51 AM | Permalink | Reply to this

Re: The Pi Calculus II

Mikael, what are the “v” and “w” in your examples? They seem to come out of nowhere.

They're dummy variables.

Posted by: Toby Bartels on September 8, 2009 10:41 PM | Permalink | Reply to this

Re: The Pi Calculus II

First answer: See Toby above.

Second answer: Lambda calculus is all about constructing anonymous functions that represent things. A lambda expression is something that waits for an input until it can do something with it. Hence, Church numerals only make sense as unapplied functions, waiting to get applied to something (for instance the pair of inputs succ and 0, to generate a Peano numeral out of a Church numeral), and thus a Church numeral is kinda unresolved on its own.

So when I try to translate the Church numerals into pi calculus constructs, using Mike’s dictionary above, I end up pulling things out of the particular thin air that the Church numeral is hovering around, waiting for. Probably, this all means that Church numerals are ill-suited for translation into the pi calculus; but that’s what I did, and that’s why it looks odd.

The reason _I_ don’t give any semantics is that I’m still digesting the syntax, the grammar. And I tried to give an answer in the vein of lambda calculus, since John was complaining about the chosen metaphors being indigestible.

Posted by: Mikael Vejdemo Johansson on September 8, 2009 11:30 PM | Permalink | Reply to this

Matlab

My weapon of choice for rapidly coding up models is Matlab. What you just described reminds me of “function handles” and (probably not coincidentally) “anonymous functions”.

Given a function f, we get its function handle @f using the “@” symbol, i.e.

h = @f

is a function handle. Feeding the function handle an argument x gives the same thing as f(x), i.e.

h(x) = f(x).

Anonymous functions (in Matlab) are useful when you need a function of one argument g(x) from a function of two arguments f(x,y). In this case, the syntax is

y = something;
h = @(x) f(x,y);

Now we can use h as a function of one argument and

h(z) = f(z,y).

This comes in particularly handy when you are trying to do things like optimize f with respect to x for some y that you don’t know until the code is running.

Does this ring any bells? Is this at all related to this π-calculus?

If so, it would be fun to do a tutorial on π-calculus for Matlab users (of which there are a lot).

Posted by: Eric Forgy on September 9, 2009 12:17 AM | Permalink | Reply to this

Re: Matlab

Pi calculus channels are in many ways like Unix pipes, which Matlab (or at least, Octave) users have access to via the popen command. In Unix, though, there’s only one process that can write to any given pipe and only one that can read from it. In pi calculus, any number of processes can read and write to the pipe; the channel name is like the file handle.

Posted by: Mike Stay on September 9, 2009 1:01 AM | Permalink | Reply to this

Re: Matlab

Unix pipes can be read from or written to by multiple processes – it’s just that when they’re originally opened, only the process that first opened them has access to them. This is in the form of “file descriptors”, which are “abstract handles” that the process can use to refer to the pipes while communicating with the kernel. This mapping is per-process, so knowing the name another process uses for a pipe doesn’t let you read from it or write to it. If a process spawns other processes (or “fork”s), it can let its child keep the pipe open. This is, in fact, how the shell (a process like any other) sets up pipelines. In some Unixes it can even send special messages which let other processes open a given pipe.

Overall, Unix processes map fairly closely to the pi calculus, but this isn’t terribly apparent to the end user – only to Unix programmers.

Posted by: Aaron Denney on September 9, 2009 8:44 AM | Permalink | Reply to this

Re: Matlab

EF: the syntax is y = something; h = @(x) f(x,y); Now we can use h as a function of one argument and h(z) = f(z,y). This comes in particularly handy when you are trying to do things like optimize f with respect to x for some y that you don’t know until the code is running. Does this ring any bells? Is this at all related to this π-calculus?

It is related to the lambda calculus. @(x) is essentially the same as λx. !
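
(Side by side, my rendering of the claimed correspondence:)

h = @(x) f(x,y);        ↔        h = λx . f(x,y)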

Posted by: Arnold Neumaier on September 16, 2009 9:54 AM | Permalink | Reply to this

Re: The Pi Calculus II

So when I try to translate the Church numerals into Pi calculus constructs

This paper (long version, page 17) analyses the representation of Church numerals in the π-calculus. Maybe you find it interesting.

Posted by: Martin Berger on September 10, 2009 2:11 PM | Permalink | Reply to this

Re: The Pi Calculus II

Actually, your (somewhere)s should be filled in like this:

0 = \fx.x =>  (definition of 0)

(\fx.x) u =>  (translation to name-passing)

Wait to withdraw (f, v) from u, then [[ (\x.x) v ]] =>

Wait to withdraw (f, v) from u, then
  Wait to withdraw (x, w) from v, then [[ x w ]] =>

Wait to withdraw (f, v) from u, then
  Wait to withdraw (x, w) from v, then 
    Deposit (w) in x.

and

1 = \fx.fx => 

(\fx.fx) u =>

Wait to withdraw (f, v) from u, then [[( \x.f x) v]] =>

Wait to withdraw (f, v) from u, then 
  Wait to withdraw (x, w) from v, then [[f x w]] =>

Wait to withdraw (f, v) from u, then 
  Wait to withdraw (x, w) from v, then 
    new (y) { [[ f y ]] | Deposit (x, w) in y. } =>

Wait to withdraw (f, v) from u, then 
  Wait to withdraw (x, w) from v, then 
    new (y) { Deposit (y) in f | Deposit (x, w) in y. }

Posted by: Mike Stay on September 9, 2009 12:29 AM | Permalink | Reply to this

Re: The Pi Calculus II

Here’s one question, though, getting closer to the calls for Actually Doing Things with pi-calculus:

With the Church numerals, I can see how to actually get Peano numerals out of them, by doing something like

3 (succ) (Nil)
to get
(λ f . λ x . f (f (f x))) (succ) (Nil) =
succ (succ (succ Nil))

and thus making it all look like a slightly more familiar arithmetic.

How do we do this with the translated Church numerals in pi calculus, though? If I have our 1, above, given as

Wait to withdraw (f, v) from u, then
Wait to withdraw (x, w) from v, then
new (y) { Deposit (y) in f | Deposit (x, w) in y. }

is there some way of depositing things in the right places so that we end up with
succ Nil
somewhere?

Posted by: Mikael Vejdemo Johansson on September 9, 2009 11:34 AM | Permalink | Reply to this

Re: The Pi Calculus II

So here’s the embedding of one = λfx.fx:

Wait to withdraw (u) from one, then 
  Wait to withdraw (f, v) from u, then 
    Wait to withdraw (x, w) from v, then 
      new (y) { Deposit (y) in f | Deposit (x, w) in y. }

That first step is only necessary if you want to give it a particular name, like “one”. Now let’s put that term in parallel with

new (t) { 
  Deposit (t) in one | 
  new (k) { 
    Deposit (succ, k) in t. | 
    Deposit (zero, m) in k.
  }
} | 
[[succ]]

Here, succ is the nickname of a mailbox at which the process that computes the successor is waiting, and k is the nickname of the mailbox at which the continuation of the current process should wait. Similarly, zero is the nickname of a mailbox at which a process encoding the numeral “0” is waiting, and m is where the process getting the result of this whole computation should wait. The first rule that applies is scope migration, which moves the “new (t)” outward around both the deposit and the withdrawal.

new (t) { 
  Wait to withdraw (u) from one, then 
    Wait to withdraw (f, v) from u, then 
      Wait to withdraw (x, w) from v, then 
        new (y) { Deposit (y) in f. | Deposit (x, w) in y. } |
  Deposit (t) in one | 
  new (k) { 
    Deposit (succ, k) in t. | 
    Deposit (zero, m) in k.
  }
} | 
[[succ]]

Now the first withdrawal synchronizes with the deposit in one, t replaces free occurrences of u, and we get

new (t) { 
  Wait to withdraw (f, v) from t, then 
    Wait to withdraw (x, w) from v, then 
      new (y) { Deposit (y) in f. | Deposit (x, w) in y. } |
  new (k) { 
    Deposit (succ, k) in t. | 
    Deposit (zero, m) in k.
  }
} | 
[[succ]]

Next, another scope migration; “new (k)” moves outward:

new (t) { 
  new (k) { 
    Wait to withdraw (f, v) from t, then 
      Wait to withdraw (x, w) from v, then 
        new (y) { Deposit (y) in f. | Deposit (x, w) in y. } |
    Deposit (succ, k) in t. | 
    Deposit (zero, m) in k.
  }
} | 
[[succ]]

Now the first withdrawal synchronizes with the deposit in t, (succ, k) replace free occurrences of (f, v), and we get

new (t) { 
  new (k) { 
    Wait to withdraw (x, w) from k, then 
      new (y) { Deposit (y) in succ. | Deposit (x, w) in y. } |
    Deposit (zero, m) in k.
  }
} | 
[[succ]]

Since t isn’t referred to anywhere anymore, we drop the “new (t)”:

new (k) { 
  Wait to withdraw (x, w) from k, then 
    new (y) { Deposit (y) in succ. | Deposit (x, w) in y. } |
  Deposit (zero, m) in k.
} | 
[[succ]]

The withdrawal and deposit synchronize:

new (k) { 
  new (y) { Deposit (y) in succ. | Deposit (zero, m) in y. }
} | 
[[succ]]

Since k isn’t referred to anywhere anymore, we drop the “new (k)”:

new (y) { Deposit (y) in succ | Deposit (zero, m) in y. } |
[[succ]]

Scope migration:

new (y) { 
  Deposit (y) in succ. | 
  Deposit (zero, m) in y. |
  [[succ]]
}

Here, the first deposit tells succ where to find the pair (zero, m), just like the first deposit in one told that process where to find (f, v). The term above effectively reduces to

new (y) { 
  Deposit ([[succ(zero)]]) in m.
}

and since y is no longer used, we get

Deposit ([[succ(zero)]]) in m.

There’s a “linked list” pattern that shows up in lambda term embeddings:

[figure: the “linked list” pattern, a chain of (input, continuation) pairs]

A process reads a pair consisting of the first input and the continuation, or pointer, to the next input/continuation pair, and so on, then outputs the result to the final continuation.

Posted by: Mike Stay on September 9, 2009 6:09 PM | Permalink | Reply to this

Re: The Pi Calculus II

The Church Numerals embed the naturals into the lambda calculus as terms with no free variables. Is there such an embedding into the pi calculus? Your example above seems to have lots of free variables.

Posted by: Adam B on January 3, 2016 7:53 PM | Permalink | Reply to this

Re: The Pi Calculus II

The Church Numerals embed the naturals into the lambda calculus as terms with no free variables. Is there such an embedding into the pi calculus? Your example above seems to have lots of free variables.

The example above was very sloppy with respect to the scope of names.

There are different embeddings that encode different reduction strategies, but all of them use a single “free name” on which the result of the computation will be sent.

See this slide deck for call-by-name and call-by-value embeddings.

Posted by: Mike Stay on January 4, 2016 11:04 PM | Permalink | Reply to this

Re: The Pi Calculus II

Hello, I’m a long-time lurker. I’m coming at things from the (wide) angle of linguistics and type theory, so most of the physicsy category theory here goes flying past me, but this end of things is what I know so I figured I should jump in.

The ν quantifier is, as you say, fairly similar to other quantifiers; especially the λ quantifier. The standard story is about message passing and mailboxes (or “channels”), but there’s another way to think about things, namely with continuations.

When we have ν(x).P we can think of x as being the continuation for some parallel processes, and P is a process which will manipulate that continuation. By using the ν binder we bring the continuation into scope (or create it), which allows us to define it (by “putting something in it”) and allows us to hand it off to another process (by “putting it in another channel”). There’s a separation here between bringing a name/variable into scope and giving it a definition, much like the dataflow variables in Oz or futures in AliceML.

Of course these aren’t exactly continuations in the same sense as continuations in the λ-calculus. When a process is waiting on a channel it will eventually receive some message on that channel, and in this sense the process is waiting to continue with (whatever comes in on) that channel/continuation. The messages themselves are tuples of these “continuations” that the process can then either use (listen on) or hand off to another channel.

A more operational way to think about it is that ν(x) allocates some memory and returns its address in x. Then the two operations are for looking up the contents at x, or for storing something (i.e. other addresses) at x. Unlike pointers in C, there are some additional safety measures built in. For example, any process that tries to read from an address which hasn’t been written to yet will block (again, much like dataflow variables). And each address is really a stream, so writing to it repeatedly will push more elements onto the end of the stream (this is often rephrased by discussing tuples instead of streams). It is because of these streams that the new dummy variables get introduced when encoding the λ-calculus.
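For readers who think in Haskell, that operational reading is loosely the behavior of a Chan from Concurrent Haskell. A small sketch of my own, purely for illustration:

import Control.Concurrent.Chan

-- ν(x) "allocates some memory and returns its address in x":
-- reads block on an empty stream; writes push onto its end.
demo :: IO ()
demo = do
  x <- newChan                 -- ν(x)
  writeChan x "first"          -- extend the stream
  writeChan x "second"
  readChan x >>= putStrLn      -- pops "first" (would block if empty)
  readChan x >>= putStrLn      -- pops "second"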

You’re right that it’s a bit strange to put the key for the mailbox into the mailbox, since anyone who could open the mailbox (could listen on the channel) must already have the key (to know where to listen). But there’s nothing wrong with it per se, it’s just creating some overhead in the system.

Both of your examples are well-formed despite the free variables. Just like in the λ-calculus these free variables are only a problem for determining the interpretation of the expression, not for determining well-formedness. That is, they must be bound somewhere, or else the program will be malformed or meaningless.

Posted by: wren ng thornton on September 9, 2009 5:21 AM | Permalink | Reply to this

Re: The Pi Calculus II

Is lambda calculus a prerequisite? If so, I’m out. My neurons are burnt as well :)

Posted by: Eric Forgy on September 8, 2009 7:19 PM | Permalink | Reply to this

Re: The Pi Calculus II

The λ-calculus isn't intrinsically a prerequisite for the π-calculus, but Mike wrote his explanation for me. He knows I understand the λ-calculus, so he wanted to show how everything you can do in the λ-calculus, you can also do in the π-calculus. I then persuaded him to let me post his explanation.

Why is it so great that "everything you can do in the λ-calculus, you can also do in the π-calculus"? It's because you can do everything in the λ-calculus: it serves as a universal programming language.

The idea, for those with a few remaining unburnt neurons, is that

λx.f(x)

means

the function that, when you feed it the variable x, spits out f(x)

where f(x) stands for any sort of expression with some number of occurrences of the variable x in it.

I’m being pretty sloppy, but I hope this gets the idea across.

Of course you might ask "Whaddya mean, 'the function that, when you feed it the variable x, spits out f(x)'? Why are you beating around the bush so much? Why don't you just call it f?"

And indeed there’s a rule in the λ\lambda-calculus that says

λx.f(x) = f

But then you might ask "Well if it's just f, what's the point???"

And the point is that where I'm writing f(x), I could have written any sort of expression with some number of occurrences of an x in it — more than one, or one, or even none!

So

λx.2

would mean

the constant function that equals 2

while

λx.x(y)

would mean

the function that takes any function x and evaluates it at y

and so on. And using such tricks we can write any program whatsoever, as Mike has begun to explain with his example of Church numerals.
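For the curious, here is a minimal Haskell sketch of the first few Church numerals (the spelling of the names is mine):

-- The Church numeral n means "apply f to x exactly n times".
zero, one, two :: (a -> a) -> a -> a
zero _ x = x             -- λf.λx.x
one  f x = f x           -- λf.λx.f(x)
two  f x = f (f x)       -- λf.λx.f(f(x))

-- The successor applies f one more time.
suc :: ((a -> a) -> a -> a) -> (a -> a) -> a -> a
suc n f x = f (n f x)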

Posted by: John Baez on September 8, 2009 10:51 PM | Permalink | Reply to this

Re: The Pi Calculus II

you can do everything in the λ-calculus

To be fair, this is because the definition of 'everything you can do' is ‹what you can do in the λ-calculus›. But it is telling that this is equivalent to ‹what you can do with a Turing machine› and a few other definitions of 'everything you can do' that people came up with independently. The claim that any (hence all) of these definitions are correct is the Church–Turing thesis.

Here is an example of something that you cannot do in the λ-calculus: given an expression in the λ-calculus, decide whether, whenever it is applied to a Church numeral, the result may be reduced to a Church numeral. But since there are infinitely many Church numerals and no obvious uniform method of testing, it's not clear that you can do this at all!

Posted by: Toby Bartels on September 8, 2009 11:10 PM | Permalink | Reply to this

Re: The Pi Calculus II

Ok. Something just clicked. The "λ" in λ-calculus is the same thing as (or very similar to) the "@" in Matlab.

When you write

λx.f(x) = f

you are describing an anonymous function.

The same thing in Matlab would be written

@(x) f(x) = f.

Cool.

Posted by: Eric Forgy on September 9, 2009 12:35 AM | Permalink | Reply to this

Re: The Pi Calculus II

Yes, exactly. All functions in lambda calculus are anonymous functions. And as you noted, you can just replace 'λ' by '@'.
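For comparison, the same anonymous-function idea in Haskell, which spells 'λ' as a backslash (the squaring function is just an invented example):

-- λx. x*x in three notations:
--   λ-calculus:  λx. x*x
--   Matlab:      @(x) x*x
--   Haskell:     \x -> x*x
square :: Double -> Double
square = \x -> x * x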

Posted by: Mike Stay on September 9, 2009 12:51 AM | Permalink | Reply to this

Re: The Pi Calculus II

Thanks, Mike! Now I understand the Π-calculus (not deeply, of course, but I followed you).

I'm afraid that the BNF bit may be a little confusing, since it's using | in two different ways (once for the Π-calculus, the other times for the BNF formalism). I hope that people could follow that, or perhaps you should turn the Π-calculus's | into words just like you turned its other constructions into words.

Posted by: Toby Bartels on September 8, 2009 8:00 PM | Permalink | Reply to this

Re: The Pi Calculus II

Thanks, Mike! Now I understand the Π-calculus (not deeply, of course, but I followed you).

Oh, good. I’m glad someone did.

I’m afraid that the BNF bit may be a little confusing, since it’s using ∣ in two different ways (once for the Π-calculus, the other times for the BNF formalism). I hope that people could follow that, or perhaps you should turn the Π-calculus’s ∣ into words just like you turned its other constructions into words.

I had formatted the BNF differently, with all the pipes lined up on the right, but somehow the formatting got messed up. But sure, instead of P | Q we could write

Do both P and Q in parallel.
Posted by: Mike Stay on September 8, 2009 8:33 PM | Permalink | Reply to this

Re: The Pi Calculus II

Mike wrote:

I had formatted the BNF differently, with all the pipes lined up on the right, but somehow the formatting got messed up.

That’s my fault. It took a lot of work to format your blog entry to look at all right, perhaps because I was doing something dumb.

Luckily, from my limited previous exposure to this stuff I already knew that | was being used both to separate different ‘cases’ for definitions in Backus–Naur form, and as a connective to build a new process from old ones by running both concurrently in the pi-calculus.

So that was one of the few things I had no problem with.

Posted by: John Baez on September 8, 2009 10:31 PM | Permalink | Reply to this

Re: The Pi Calculus II

This all looks very much like the approach to concurrency that I’m familiar with in Haskell (and this is presumably no accident). My first instinct upon reading this description was to think of the following translation into Concurrent Haskell and MVars:

pi-calculus term                               |  Haskell value of type IO ()
-----------------------------------------------|----------------------------------------
new (x) { P }                                  |  newEmptyMVar >>= (\x -> P)
Deposit (y) in x.                              |  putMVar x y
Wait until you can withdraw (y) from x, then P |  takeMVar x >>= (\y -> P)
P | Q                                          |  forkIO P >> Q
Replicate P                                    |  sequence (repeat (forkIO P))

Note that \x -> P is the Haskell notation for the λ-calculus term λx.P, and >> and >>= are the "monadic bind" operations for the IO monad. (If anyone isn't familiar with this use of monad, yes, it is actually a monad in the categorical sense, and figuring out how is a fun exercise. Do we have any nCafe discussions about functional programming monads I can link to?)
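As a quick sanity check of this dictionary, here is how one small term translates. This is a sketch of my own; the recursive Key type is a workaround for the fact that a mailbox's messages are keys to mailboxes of the same kind, which Haskell's type checker won't accept without a newtype:

import Control.Concurrent

newtype Key = Key (MVar Key)

-- new (x,y) { Deposit (y) in x. |
--             Wait until you can withdraw (z) from x, then deposit (z) in z. }
example :: IO ()
example = do
  x <- newEmptyMVar
  y <- newEmptyMVar
  _ <- forkIO (putMVar x (Key y))    -- Deposit (y) in x.
  Key z <- takeMVar x                -- wait, then withdraw (z) from x,
  putMVar z (Key z)                  -- then deposit (z) in z.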

That can't be quite correct, though, because it doesn't deal correctly with depositing multiple things in the same box—but I don't fully understand how depositing multiple things in the same box is supposed to work. What happens when I try to deposit something in a box that already contains something? Under the above translation, I would have to wait until someone else empties a box before putting something into it, but from your elf description it sounds like in the intended semantics of the π-calculus I just add it to the box.

A specific question along these lines is, how is

Deposit (y) in x | Deposit (z) in x

related to

Deposit (y,z) in x

? Are they equivalent? Also, what happens when I then withdraw one or more things from a box that may contain multiple things? Specifically, what happens when I run either of the above instructions in parallel with

Wait until you can withdraw (u) from x, then P

? I'm guessing that u gets set to y or z (in P), but we can't predict which, with x then still containing whichever one u didn't get set to. Is that right? And if I try to withdraw two things from a box containing only one, presumably I have to wait until someone puts a second thing in the box? If so, and someone actually comes along and puts two things in the box at once, so that there are then three, which two do I end up with? Is that unpredictable too, or am I guaranteed to have the one that's been there longer?

Anyway, if some translation like this is true, it suggests that there should be a mathematical semantics for the π-calculus along the same lines as the well-known semantics for monadic computations (such as premonoidal categories). Have people looked into this?

Posted by: Mike Shulman on September 9, 2009 2:36 AM | Permalink | Reply to this

Re: The Pi Calculus II

Do we have any nCafe discussions about functional programming monads I can link to?

No, but we should!

Posted by: Toby Bartels on September 9, 2009 3:24 AM | Permalink | Reply to this

Re: The Pi Calculus II

Each channel is a stream. Reading from the channel pops the head off the stream or, on an empty stream, blocks until it can. Writing to the channel adds something to the end of the stream.

“deposit(y) in x | deposit(z) in x” will nondeterministically have (y,z) or (z,y) in x. Whereas “deposit(y,z) in x” aka “deposit(y) in x, deposit(z) in x” will always have (y,z) in x. Similarly, if you have parallel processes withdrawing from the same channel it’s nondeterministic who gets which elements. To enforce determinism while allowing multiple access it’s common to set up a secondary channel which acts as a lock and alternates between containing the key for the primary channel or being empty (and the key for the primary channel is otherwise unavailable).
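To make that locking idiom concrete, here is a minimal Concurrent Haskell sketch in the spirit of the MVar translation above (withLock and all the names are my own, purely illustrative):

import Control.Concurrent

-- The lock channel either holds the key to the primary channel
-- or is empty while some process is using it.
withLock :: MVar (MVar a) -> (MVar a -> IO b) -> IO b
withLock lock body = do
  chan <- takeMVar lock      -- withdraw the key; other processes block
  r    <- body chan          -- exclusive, hence deterministic, access
  putMVar lock chan          -- return the key
  return r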

Posted by: wren ng thornton on September 9, 2009 5:39 AM | Permalink | Reply to this

Re: The Pi Calculus II

For those that don’t know, Haskell’s I/O (“input/output”) monad is a functor that maps

  • a data type X to the data type X × (E → E), where E represents the "environment" of the computation
  • an arrow f : X → Y to the arrow IO f : IO X → IO Y.

The image of IO is equivalent to the Kleisli category whose

  • objects are datatypes and
  • morphisms f : X → Y are Kleisli arrows f : X → IO Y.

Composition of Kleisli arrows f : X → IO Y and g : Y → IO Z is given by

μ_Z ∘ IO g ∘ f

(where μ is the monad multiplication). In Haskell, applying this composite to x is written f x >>= g; Kleisli composition itself is spelled g <=< f.

The multiplication of the monad takes (x, e₁, e₂) ↦ (x, e₁ ∘ e₂)—that is, it chooses the order in which the effects (i.e. environment endomorphisms) occur. This way a programmer can say, "First print 'What is your name?' and then read in characters from the keyboard."
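Read as Haskell, this description suggests a writer-style monad over environment endomorphisms. The following is a sketch of my own reading, not GHC's actual IO type:

-- X × (E → E): a value together with an effect on the environment.
newtype Eff e a = Eff (a, e -> e)

instance Functor (Eff e) where
  fmap f (Eff (x, g)) = Eff (f x, g)

instance Applicative (Eff e) where
  pure x = Eff (x, id)
  Eff (f, g) <*> Eff (x, h) = Eff (f x, h . g)

instance Monad (Eff e) where
  -- the multiplication composes the two effects in sequence
  Eff (x, g) >>= k = let Eff (y, h) = k x in Eff (y, h . g)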

In the pi calculus, you have no control over the order of events except for causal dependencies.

Do we have any nCafe discussions about functional programming monads I can link to?

I don’t think so, but the paper “Monads for functional programming” by Wadler is excellent.

That can’t be quite correct, though, because it doesn’t deal correctly with depositing multiple things in the same box—but I don’t fully understand how depositing multiple things in the same box is supposed to work.

There's one way to put a message in a box with n messages, and n ways to draw a message out of a box with n messages. The pi calculus emphatically leaves it unspecified how to determine which of the n gets drawn.

A specific question along these lines is, how is

Deposit (y) in x | Deposit (z) in x

related to

Deposit (y,z) in x

? Are they equivalent?

No. The first deposits two messages that consist of a single number each, while the second deposits a message that consists of two numbers. A well-sorted pi term will never try to read off more or fewer numbers than what is written on the message. This pi term is not well sorted:

Deposit (y) in x. | Wait until you can withdraw (y, z) from x, then do P

because it expects two numbers on the message but only gets one.

Also, what happens when I then withdraw one or more things from a box that may contain multiple things? Specifically, what happens when I run either of the above instructions in parallel with

Wait until you can withdraw (u) from x, then P

?

It’s unspecified, nondeterministic. That’s called a “race condition”: which deposit will synchronize first with the withdrawal?

Posted by: Mike Stay on September 9, 2009 6:08 AM | Permalink | Reply to this

Re: The Pi Calculus II

Mike Stay wrote:

For those that don’t know, Haskell’s I/O (“input/output”) monad is a functor that…

… maps X to X × Y^Y. Is this the same as what people call the 'state monad', or (maybe more loosely) the trick for incorporating 'state' in functional programming? Or is that some other monad?

A well-sorted pi term will never try to read off more or fewer numbers than what is written on the message. This pi term is not well sorted:

Deposit (y) in x. | Wait until you can withdraw (y, z) from x, then do P

because it expects two numbers on the message but only gets one.

So what does it do? Deposit (y) in x, sit around for someone to deposit (y,z), and then do P if this happens?

Or: deposit (y) in x, sit around for someone to deposit (z), and then do P if this happens?

I’m still confused about the difference between two messages with one number on them and one with two — that is, how much difference it makes.

Posted by: John Baez on September 9, 2009 6:27 PM | Permalink | Reply to this

Re: The Pi Calculus II

Is this … the trick for incorporating ‘state’ in functional programming?

Yep.

So what does [the ill-sorted term] do?

That's like asking what g ∘ f does, where f : X → Y and g : R → S. An ill-sorted term just doesn't make sense.

I’m still confused about the difference between two messages with one number on them and one with two — that is, how much difference it makes.

Well, one of those is ill-sorted. Even ignoring that, a message with multiple numbers on it is an ordered list.

Posted by: Mike Stay on September 9, 2009 6:59 PM | Permalink | Reply to this

Re: The Pi Calculus II

Oh, so we're talking about the typed π-calculus? I thought, since you didn't say anything about types at first, that we were talking about an untyped (or single-typed) calculus. How does the typing work? Is each box labeled with a type, and you can only put or withdraw things of that type from that box? That's what the Haskell translation would do, since MVar is a type operator.

Posted by: Mike Shulman on September 9, 2009 9:26 PM | Permalink | Reply to this

Re: The Pi Calculus II

Not exactly typed, just well-sorted. We don't say that each mailbox has an arity on it, and only messages of that length are permitted—although that would suffice, it's not necessary. All we require of a term for it to be well-sorted is that whenever a process tries to withdraw a message of arity n, it must be the case that all the messages in the box at that time have arity n.

Posted by: Mike Stay on September 9, 2009 10:15 PM | Permalink | Reply to this

Re: The Pi Calculus II

I am used to using “type” and “sort” interchangeably, but I think I get what you are saying. However, how can the well-sortedness of a term involve a word like “whenever” which is time-dependent, when the semantics is nondeterministic about what order things happen in? For example, is the following term well-sorted?

Deposit (y) in x.  |  Deposit (z1, z2) in x.  |  Wait until you can withdraw (w) from x, then ...

If the first deposit happens first, then the withdrawal, and then the second deposit, it seems that everything works out fine. But if it happens in any other order, then we have a sort violation.

Posted by: Mike Shulman on September 10, 2009 2:49 AM | Permalink | Reply to this

Re: The Pi Calculus II

I am used to using “type” and “sort” interchangeably, but I think I get what you are saying. However, how can the well-sortedness of a term involve a word like “whenever” which is time-dependent, when the semantics is nondeterministic about what order things happen in? For example, is the following term well-sorted?

Deposit (y) in x.  |  
Deposit (z1, z2) in x.  |  
Wait until you can withdraw (w) from x, then ...

If the first deposit happens first, then the withdrawal, and then the second deposit, it seems that everything works out fine. But if it happens in any other order, then we have a sort violation.

Yeah, the definition of well-sorted I gave isn’t decidable. The particular example you gave is not well sorted because there exists (as you noted) an order in which the messages can be delivered that causes a problem. If, on the other hand, you conditioned the second deposit on finding a counterexample to the Riemann hypothesis, then we couldn’t say whether it’s well-sorted or not.

Assigning an arity to each mailbox is a decidable way of determining well-sortedness.
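For concreteness, here is a toy version of that decidable check; the Use type and everything else in it is invented for illustration:

import qualified Data.Map as M

-- Each use of a mailbox (a deposit or a withdrawal) mentions
-- the mailbox's name and the arity of the message.
data Use = Use { mailbox :: String, arity :: Int }

-- Decidably well-sorted: every use of a mailbox carries the
-- single arity assigned to that mailbox.
wellSorted :: [Use] -> Bool
wellSorted = go M.empty
  where
    go _ [] = True
    go seen (Use x n : rest) =
      case M.lookup x seen of
        Just m  -> m == n && go seen rest
        Nothing -> go (M.insert x n seen) rest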

Posted by: Mike Stay on September 10, 2009 11:41 AM | Permalink | Reply to this

Re: The Pi Calculus II

The multiplication of the [IO] monad … chooses the order in which the effects (i.e. environment endomorphisms) occur.

In the pi calculus, you have no control over the order of events except for causal dependencies.

This juxtaposition sounds like you’re saying that for this reason IO can’t model the pi calculus. Is that what you’re saying? Because I don’t think it’s true, since the nondeterminism can happen in the environment. Saying forkIO P >> Q creates a single environment endomorphism whose result is nondeterministic. You can then choose to sequence this environment endomorphism with other ones, deterministic or not, as you choose.

There's one way to put a message in a box with n messages, and n ways to draw a message out of a box with n messages. The pi calculus emphatically leaves it unspecified how to determine which of the n gets drawn.

So it sounds like every box has a type (such as "number" or "pair of numbers" or maybe "key to a mailbox containing pairs of numbers" or even "key to a mailbox containing keys to a mailbox containing keys to a mailbox containing …"), and at any time a box of type X contains a multiset of objects of type X. Adding something to a box inserts it into the multiset, and withdrawing something from a nonempty box takes something out in an unspecified way, but blocks if the box is empty. Is that right? (It contradicts what wren said above.)

Are terms in the typed pi calculus typed, or are only the boxes typed? I don’t see a way to assign meaningful types to the terms… they all look like IO ().

Posted by: Mike Shulman on September 9, 2009 9:42 PM | Permalink | Reply to this

Re: The Pi Calculus II

This juxtaposition sounds like you’re saying that for this reason IO can’t model the pi calculus. Is that what you’re saying? Because I don’t think it’s true, since the nondeterminism can happen in the environment. Saying forkIO P >> Q creates a single environment endomorphism whose result is nondeterministic. You can then choose to sequence this environment endomorphism with other ones, deterministic or not, as you choose.

OK, sure. I really don’t know if a nondeterministic monad suffices to describe concurrent systems. Can you get deadlock using the IO monad?

Pi calculus gets nondeterminism in two ways. The first way deposits two messages in the same box, and it’s undetermined which will be withdrawn:

x̄y. | x̄z. | x(v).P

The other way has a single message in the box and two processes trying to withdraw it, and it’s undetermined which process succeeds:

x(y).P | x(y).Q | x̄z

This pi term is deadlocked:

x(y).z̄y. | z(y).x̄y

Each process is waiting for the other to deposit something. What’s the corresponding Haskell program?

Posted by: Mike Stay on September 9, 2009 11:15 PM | Permalink | Reply to this

Re: The Pi Calculus II

Using some syntactic sugar for monads to make it readable:

-- assuming: import Control.Concurrent
do x <- newEmptyMVar :: IO (MVar ())
   z <- newEmptyMVar
   forkIO (takeMVar x >>= putMVar z)
   takeMVar z >>= putMVar x

I just tried it; it really does deadlock. (Actually, Haskell is smart enough to detect this simple deadlock, so the program doesn’t actually hang; instead it exits saying “thread blocked indefinitely”.)

Posted by: Mike Shulman on September 10, 2009 3:03 AM | Permalink | Reply to this

Re: The Pi Calculus II

> (It contradicts what wren said above.)

For what it’s worth, there are a few different versions of the π-calculus. The version I’m most familiar with (which is the one described on Wikipedia) doesn’t have tuples and thus must rely on sequential writes and locking-channels to ensure enough determinism to be useful. The tuples in the OP’s version resemble the polyadic π-calculus, which can be encoded in my version (via sequential writes and locking-channels). The variant of π-calculus the OP had in mind may differ in other regards however.

Do note that in the basic version of the π-calculus, even though technically each channel contains a stream of values, in the absence of a locking technique we can only really reason about the stream as if it were a multiset since we aren’t ensuring enough determinism to prohibit other readers and thus can’t make use of the stream properties.

Posted by: wren ng thornton on September 10, 2009 4:31 AM | Permalink | Reply to this

Re: The Pi Calculus II

Why do we kill off the poor elves as soon as they deposit something? Conversely, mightn’t we ever want to kill off an elf without having him deposit anything? It would make more sense to me to have separate instructions

Deposit (y) in x, then P

and

Die.

than to force the two to always happen together.

Posted by: Mike Shulman on September 9, 2009 2:42 AM | Permalink | Reply to this

Re: The Pi Calculus II

Todd wrote:

It would be sort of like describing how free groups (or free somethings) work in terms of colorful metaphors of magical elves, and having John ask: yes, but what’s a group?

Just to be clear: I like Mike’s elves. That explanation made about 10 times more sense to me than his previous one! I have plenty of rigorous definitions of the pi-calculus at my fingertips, but I don’t understand them. I need semantics to help me digest syntax — but it’s perfectly fine if the semantics involves elves.

For one thing, I like metaphors, the more colorful the better.

For another, I have no qualms about being talked down to.

For another: I’m a wizard, I can handle elves.

And for another: to me, computer programs seem a lot more like elves than mathematical structures. Instead of just sitting there looking pretty, they run around, and whisper to each other, and do all sorts of sneaky stuff when I’m not looking.

Like the mathematical structures I usually deal with, programs follow rules — but unlike them, it's really useful to think of programs as 'agents' that do stuff like 'send messages', or 'open mailboxes', or 'replicate', or 'die'.

For this reason, I’m not necessarily expecting the pi-calculus or other process calculi to have a semantics that falls neatly into one of the frameworks that’s proved so successful for dealing with mathematical structures like sets or groups. I’m actually expecting something a bit more like physics, where you have particles running around, interacting, and decaying. However, there are also lots of differences.

So: bring on the elves!

Posted by: John Baez on September 9, 2009 5:03 PM | Permalink | Reply to this

too many elves; Re: The Pi Calculus II

My wife’s teaching a Physics class right now, so I can’t double check with her, nor was Google helpful just now. But I seem to recall a clueless reviewer of “At the Movies” saying that a Lord of the Rings film had “too many elves.”

Posted by: Jonathan Vos Post on September 9, 2009 7:34 PM | Permalink | Reply to this

Re: too many elves; Re: The Pi Calculus II

Posted by: Mike Stay on September 9, 2009 7:53 PM | Permalink | Reply to this

Re: The Pi Calculus II

Don’t get me wrong. I like elves too. Elves are cool. And Mike did a good job with the metaphors.

But it’s still a way of describing the calculus (read: logical syntax) of processes, whereas we still don’t understand what the semantics should be. That’s all I meant.

Posted by: Todd Trimble on September 10, 2009 1:09 AM | Permalink | Reply to this

Re: The Pi Calculus II

But it’s still a way of describing the calculus (read: logical syntax) of processes, whereas we still don’t understand what the semantics should be. That’s all I meant.

I think you and John Armstrong must be assigning different semantics to the term “semantics” than I am! :D If by “semantics”, you mean “a model in Set”, then I doubt one exists: as I explained here, I expect a pi theory to be a bicategory, not a category. So unlike lambda calculus, where you map (equivalence classes of) terms to functions, pi terms have to map to things with nontrivial 2-morphisms between them.

I’m using the term “semantics” in the following way: the syntax of the pi calculus consists of the rules for manipulating strings, whereas the semantics is a (perhaps physical) system whose dynamics are homomorphic to those rules. The semantics, in my description of the pi calculus, involve elves, mailboxes, paper slips, counters, natural numbers, etc., that never appear in the syntax. The semantics give an implementation of the syntax.

Posted by: Mike Stay on September 10, 2009 1:35 AM | Permalink | Reply to this

Re: The Pi Calculus II

I think you and John Armstrong must be assigning different semantics to the term “semantics” than I am!

What I mean is that you’re telling us how you can manipulate terms. Whether you’re writing terms as symbol strings or magic elves, you’re still telling us how to manipulate terms. What we’re asking is why to manipulate the terms. Why does putting messages in mailboxes “compute” something?

The improvement with the elves wasn’t the change in language, but just spending more time on the same ground.

Posted by: John Armstrong on September 10, 2009 2:28 AM | Permalink | Reply to this

Re: The Pi Calculus II

Todd wrote:

But it’s still a way of describing the calculus (read: logical syntax) of processes, whereas we still don’t understand what the semantics should be. That’s all I meant.

Mike Stay wrote:

I think you and John Armstrong must be assigning different semantics to the term “semantics” than I am! :D If by “semantics”, you mean “a model in Set”…

I doubt Todd would be so uncool as to demand a model in the category Set! He’s an expert on categorical logic and topos theory, after all. I presume he just wants some mathematical structure that’s described by the π-calculus. Maybe the problem is that his notion of ‘mathematical structures’ doesn’t include elves — he seems to think of them as a ‘colorful metaphor’. Of course elves obeying precise mathematical rules are mathematical structures — but anyway, I should stop trying to guess what he’s thinking, and let him say.

For me, all I really want is to ‘understand’ the π-calculus, and for understanding I seem to need some sort of semantics — but I’m okay with models in Elf, as well as models in Set.

Posted by: John Baez on September 10, 2009 2:42 AM | Permalink | Reply to this

Re: The Pi Calculus II

John A. wrote:

What we’re asking is why to manipulate the terms. Why does putting messages in mailboxes “compute” something?

I think we should be wary of saying what “we” want. It’s quite possible that Todd wants something different from you: he never asked why putting messages in mailboxes “computes” something.

As for me, I'm perfectly willing to believe that a sufficiently complex — and indeed, not all that complex — system of rules for putting messages in mailboxes will be able to compute not just "something" but in fact "everything".

Indeed, since Mike said how to embed the λ-calculus in the π-calculus, I know the latter must be able to function as a universal computer. (I don't fully understand how it works yet, but it's in Milner's book, so I'm not just trusting Mike.)

While I definitely want to understand this someday, I get the impression from various people that it’s just icing on the cake: the main goal of the π-calculus is not to model a universal computer, but to serve as a process calculus able to model concurrency. And so that is what I most want to understand: not how putting messages in boxes and taking them out can be used to compute something, but merely what the π calculus has to say about putting messages in boxes and taking them out. If I could understand what Mike has already said well enough to remember it for more than a day, that would probably count as success for me.

Posted by: John Baez on September 10, 2009 2:56 AM | Permalink | Reply to this

Re: The Pi Calculus II

Note that a Turing machine is just a "system of rules for putting messages in mailboxes", and we know that (assuming the Church–Turing thesis) it computes everything.

Posted by: Mark Biggar on September 10, 2009 11:31 PM | Permalink | Reply to this

Re: The Pi Calculus II

Mike Stay wrote:

If by “semantics”, you mean “a model in Set”, then I doubt one exists: as I explained here, I expect a pi theory to be a bicategory, not a category.

I didn't say anything about Set. I do grasp why you're contemplating structured bicategories.

John Baez more or less correctly interpreted my intended meaning back here. If typed lambda calculus is a language interpretable in cartesian closed categories, and if untyped lambda calculus is interpretable in cartesian closed categories with an object X isomorphic to both X × X and X^X, then what sort of algebraic structure is the language of π-calculus interpretable in?

What I was trying to suggest is that quite typically, a logical syntax allows one to construct a free model (or term model, or syntactic category or syntactic bicategory, however you wish to put it), and it looks like the story of the elves may be about such a free or syntactic model, so it gives one semantics, yes, but doesn’t yet explain the nature of other models.

Posted by: Todd Trimble on September 10, 2009 3:05 AM | Permalink | Reply to this

Re: The Pi Calculus II

I didn’t say anything about Set.

Yes, I'm sorry; I was just trying to figure out why you didn't think the elf model was a semantics for the syntax, and Set is the place algebraic theories and lambda theories are usually interpreted.

…what sort of algebraic structure is the language of π-calculus interpretable in?

I have no idea! That’s one of the reasons for this post and John’s earlier one. But I understand now that what you mean by “semantics” isn’t a particular model of the syntax, but rather a characterization of all possible models.

Posted by: Mike Stay on September 10, 2009 5:16 PM | Permalink | Reply to this

Re: The Pi Calculus II

Montanari and his colleagues have tried to understand the π-calculus as a free algebraic gadget using double categories.

I think they obtained rather good results for CCS, a process algebra which is somewhat simpler than the π-calculus, but they only gave a definition without results for the π-calculus itself, here, and here a pdf file.

Posted by: Tom Hirschowitz on September 11, 2009 3:31 PM | Permalink | Reply to this

Re: The Pi Calculus II

Thanks, Tom! I have no strong commitment to the pi-calculus; I'll be happy to understand any process calculus, so if there's one that's been analyzed with the help of n-categories that's very good news.

Posted by: John Baez on September 11, 2009 5:12 PM | Permalink | Reply to this

Concurrency, Causality, and Space-time

The idea of concurrency has crossed my mind a few times, but I still know very little about the field of study (not for lack of interest). I’m interested in computing stuff. And “secretly” I’m interested in the idea that the universe computes stuff. So I believe that understanding the universe involves understanding how to compute stuff (which provides me with a fun hobby). The fact that all these topics appear in the context of category theory seems perfectly natural (not to mention cool) to me.

I’m going to share a thought which is probably naive…

With "process calculus" we obviously have "processes" and "processors". What a processor does may depend on the order in which it receives information from some channel. I like the analogy of writing two checks for more than the bank balance, where the outcome depends on which check is cashed first.

What I propose (without any claim of originality) is to try to think of “process calculus” arrow theoretically where traversing each arrow requires a step through time. In other words, the picture should be that of a directed graph that looks reminiscent of a spacetime diagram. There should be a clear global concept of the “flow of time”.

The significance of this is that statements like the following

Processor A sends a message to processor B over channel x. Processor B sends a response back to processor A over the same channel.

can no longer be made.

Instead, to be explicit, you have to say something like:

Processor A at time t₀ sends a message to processor B at time t₁ over channel x_{A(t₀)→B(t₁)}. Processor B at time t₂ responds to processor A at time t₃ over channel x_{B(t₂)→A(t₃)}.

With spacetime diagrams as a motivation, instead of saying "Processor A at time t₀", we should think of the processor as a "spacetime event" or "node" in our directed graph. Then "Processor A at time t₀" is completely distinct from "Processor A at time t₃" since they are different points in spacetime and can have completely different behaviors depending on what has occurred between t₀ and t₃.

We have a bias toward thinking in terms of "space" and not so much about "spacetime". The internet is a good example. You have computer A talking to computer B over channel x. You can take a picture of it and label it accordingly.

However, for computer A to talk to computer B, it has to send information, which takes time. The "channel" is not really a "spacelike" link. It is a "lightlike" link. Even though there is just one wire x connecting the two computers, there is really a series of channels, one for each time step. For example, at time t₀ there are two distinct channels

x_{A(t₀)→B(t₁)} and x_{B(t₀)→A(t₁)}.

Therefore, the “spatial” picture of two processors with one channel

A • ↔ • B

needs to be replaced by the “spacetime” picture

A(t₃) •       • B(t₃)
         ↗ ↖
A(t₂) •       • B(t₂)
         ↗ ↖
A(t₁) •       • B(t₁)
         ↗ ↖
A(t₀) •       • B(t₀)

Note: This is a binary tree (or 2-diamond) with every even/odd transverse node identified, which can be thought of as a discrete version of the directed space S¹ × I⃗.

As time evolves, the directed graph evolves: channels (directed edges) exist or not depending on the availability of the processor node they're connecting to.

Sorry for this stream of consciousness. To understand the π-calculus categorically, it would seem that a natural thing (and probably equivalent thing) to do is to understand it arrow theoretically. This was my initial attempt to maybe rephrase the question and start drawing some arrows.

Posted by: Eric Forgy on September 9, 2009 9:07 PM | Permalink | Reply to this

Re: Concurrency, Causality, and Space-time

I think there’s something to what you say, Eric, but I think that most of these process calculi want to avoid assuming a ‘global clock’ that enables distant guys to agree on what time it is. So, no variable tt denoting time.

In a typical CPU you have a clock that keeps all the processes synchronized. But on the internet, things are more disorganized. So, most process calculi take a quite different attitude to time than you’re proposing. And an important aspect of the π-calculus is that you can say stuff like “wait until you hear from me, then do blah-di-blah”.

That's my impression at least. There are also asynchronous CPUs that don't have a central clock, but apparently they haven't caught on.

Posted by: John Baez on September 10, 2009 4:45 AM | Permalink | Reply to this

Re: Concurrency, Causality, and Space-time

I was a little nervous writing that, so thanks for being kind and taking the idea semi-seriously despite my presentation.

Whether or not the events are synchronized is not so important to the idea I was trying to outline, I don't think, but I do think that a notion of time is important for concurrency problems (from what little I understand). Things are always moving forward. Causality is manifest, I think. Processor B cannot receive a signal from processor A before A sends it, etc.

So although we could drop the 'global clock', I don't know if we should drop time altogether. Although distant guys may not agree on what time it is, they should agree that event A happened before event B. For this we simply need a preorder (which is niftily encoded in a directed graph).

So that I can kick the idea around without distracting from the conversation here, I created Process Calculus. Anyone interested in helping, please feel free to drop any notes there as well as here.

Posted by: Eric Forgy on September 10, 2009 6:30 AM | Permalink | Reply to this

Re: Concurrency, Causality, and Space-time

Just two tiny notes: firstly, for practical programming applications the goal is to find programs (and ideally programming "styles") such that, under any time ordering which respects the causal relationships specified by the program, equivalent results happen. (Equivalent just means that, just as system "inputs" need only respect causal order, program "outputs" need only respect a causal order and not a strict temporal ordering.)

Secondly, I'd put the constraint purely "locally": given two entities x and y that can both see events A and B, they agree (after taking into account propagation time) on the relative ordering of A and B (and compositionality/modularity means they probably shouldn't be able to see very much). This implies that there must be at least one global preorder with which all these are compatible, but not necessarily that any individual entity can see all of it. (Really modern chips are just at the point where they're trading in a truly global clock for a clock within each computing unit, along with mechanisms for these units to send data between themselves, partly due to signal propagation time and partly because a global clock requires a lot of power. And as John says, the internet makes this look ordered.)

Posted by: bane on September 11, 2009 4:34 PM | Permalink | Reply to this

Re: Concurrency, Causality, and Space-time

Eric wrote:

So although we could drop the ‘global clock’, I don’t know if we should drop time altogether.

No, we shouldn’t. As you say, it seems that a fairly primitive notion of time is all we need: namely, a preorder. There’s clearly some sort of concept of ‘event’ lurking in the pi-calculus, and these events form a preorder. In my so far quite feeble attempts to understand process calculi using the language of category theory, my physics background naturally tempts me to cook up some sort of category where morphisms are (roughly) bunches of events — and among other things, I’d like any bunch of events to have a preorder structure on it, which expresses causality.

But I think it’s best if I start by understanding a bit of what people have already done, and what it’s good for…

Posted by: John Baez on September 10, 2009 5:24 PM | Permalink | Reply to this

Re: The Pi Calculus II

I’m having difficulty getting a clear picture of the rules involved in Mike’s explanation in the main article. On the assumption that I’m not the only one having this difficulty, I’ll write out what I think is going on – please correct me where I’ve gone wrong, so that what I’ve written below might evolve into a correct presentation of these ideas!

So, we have named mailboxes, and we can create new mailboxes with the “new” command; this command takes one argument, which is the name of the new mailbox. Every mailbox has exactly one name – the one it was given at its creation – and cannot subsequently be given additional names (e.g. created with the name “Hesperus” and then given the additional name “Phosphorus”) or have its name changed.

Being mailboxes, we can deposit messages in them, and subsequently withdraw these messages from them. Of course, a particular message can only be withdrawn from a mailbox after it has been deposited there (and messages can only be deposited in a mailbox after that mailbox has been created), so we need to have some notion of time-ordering of actions. We’ll return to this point later.

What are these “messages”? For now, let’s allow them to be any string we like: “Hello, World”, or “foo bar baz”, or “You might be interested in what’s in the mailbox called ‘Oscar’”. As it turns out (I think), this freedom is unnecessary: for the purposes of doing computation, we can restrict the messages to be nothing but the names of mailboxes (or rather, tuples of names of mailboxes). But to get a picture of what’s going on, we won’t impose that restriction in the following discussion. [NB: This may be a huge mistake, a symptom of my failure to understand something fundamental about what’s going on. Oh well, let’s carry on…]

To deposit a message in a mailbox, we use the “Deposit” command, which takes two arguments: the message to be deposited, and the mailbox in which to deposit it. For clarity we’ll use the following syntax:

  • Deposit “Hi, Mike!” in Mike’sMailbox

  • Deposit “Buy more milk” in ShoppingListReminders

Before we come on to withdrawing messages, we should ask: what happens if we deposit two messages into the same mailbox? What’s the status of the mailbox ‘Fred’ at the end of the following sequence of commands?:

  • new(Fred)

  • Deposit “A” in Fred

  • Deposit “B” in Fred

Well, it’s up to us to decide! We could choose to work with a system in which mailboxes hold at most one message at a time, in which case mailbox ‘Fred’ will contain message “B” at the end, having forgotten all about message “A”. Or we might decide that our mailboxes should hold queues of messages, or stacks of them, or big unsorted jumbled piles of them. The exact form of the calculus we end up with will depend on what choices we make here (and, correspondingly, on what we decide the rules for withdrawals should be). However, it might turn out that the computational or expressive power of the system doesn’t change – i.e. that single-message mailboxes are sufficient to compute everything.

Q: Which choice for Deposit behaviour does Mike’s original example use? Is it the same as the streams in wren’s example? Is my above speculation, about single-message mailboxes being sufficient, correct?

[NOTE: This is the point at which I become even less confident in the correctness of what I’m saying. But I’ll press on regardless, in the hope that someone can confirm or correct what I’ve written…]

Putting this multi-deposit question to one side for now, let’s look at withdrawing messages. This is where the analogy with mailboxes starts to break down a little. With real physical mailboxes, the protocol for withdrawing messages is something like ‘go to a specific mailbox, take out whatever messages are in it, and read them to find out what they say’. This is a protocol that compulsive email-checkers follow several hundred times a day! However, reading your messages in the pi calculus works rather differently.

Whereas in the real world case reading messages is a process of discovery, with pi calculus mailboxes there are no surprises. There are no mystery messages whose contents we only learn once we’ve withdrawn them – on the contrary, the protocol requires that we know exactly what message we’re asking for! In fact, the syntax for withdrawal looks just like the one for Deposit:

  • Withdraw “Hi, Mike!” from Mike’sMailbox

  • Withdraw “The diamonds are on the move” from CrimeSyndicateBox

So what’s the point of passing messages if we already have to know them in order to get access to them? This brings us back to the point made above, that messages can only be withdrawn after they have been deposited. What happens if we’re given a command to withdraw message X from mailbox Y, but we find that no-one has made the corresponding deposit yet? We don’t just give up, or panic, or skip to a different instruction instead; what we do is wait. We hang around by mailbox Y, doing nothing, until the message we’ve been waiting for turns up. Only then do we carry on with whatever else was on our list of commands. So the instruction we’re writing as

  • Withdraw … from …

might be better expressed as

  • Wait until … is in …

By now the point of all this message-passing should hopefully be getting a little clearer: we’re not passing messages in order to convey information in the messages’ contents, but rather using them as triggers or guards to control when other activities can begin. If I don’t want the payment for my new yacht to go out of my bank account until I can be sure there are sufficient funds to cover it, I can guard the payment instruction with a wait command:

  • Wait until “I have won the lottery” is in MyMailbox

  • {Instructions to pay for the yacht go here}

Since the commands are processed in sequence, I can’t get to the payment instructions until I’ve passed through the Wait instruction, which guarantees that the payment won’t go out until after the money has arrived.
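In the MVar translation proposed elsewhere in this thread, such a guard is just a blocking takeMVar. A sketch with invented names:

import Control.Concurrent

payForYacht :: MVar () -> IO ()
payForYacht myMailbox = do
  takeMVar myMailbox                  -- wait until the message arrives
  putStrLn "paying for the yacht"     -- instructions to pay go here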

Now, as a language to write instructions for a single agent, the above set-up wouldn’t make much sense. Sure, it would allow us to guarantee that certain actions couldn’t be taken before certain other actions, but it would be very prone to coming to a complete standstill! If there’s only one agent taking any action, then if the agent ever comes to a Wait command whose message isn’t yet present, it will just wait there forever (since there’s no-one else to put the required message into that mailbox, and the waiting agent can’t do anything else while it waits).

But as a model of multiple agents working concurrently, this kind of language makes a lot of sense: it allows them to co-ordinate their actions without needing to synchronise themselves, or get into a dialogue, or even exchange any information directly between them at all. One agent can happily proceed with carrying out its instructions, safe in the knowledge that (thanks to well placed guards) it’s not going to inadvertently mess up what any other process is doing – not going to spend money that another agent needed, or over-write a block of memory that another agent was in the middle of using, for example.

Up until now we’ve been thinking (at least, I’ve been thinking!) only about strictly time-ordered sequences of instructions, in which we have to finish one command before we can move on to the next. But the whole point of concurrency is that we often explicitly don’t care about the order in which some actions take place. This allows us to have multiple agents each doing their own thing in their own time; but as long as each agent’s instructions are still linear and sequential, we’re not exploiting concurrency to its fullest.

That’s why we want a command like “P|Q” (where P and Q are blocks of commands) that says ‘next I want you to do these two things P and Q, but I don’t care about the order you do them in’. When our agent comes to a command like this it can just spawn a new pair of agents, sending them off to carry out P and Q respectively, and waiting for them both to finish before killing them off and proceeding to its next instruction. Now, it might turn out that, because of Wait instructions, P can’t be finished until Q is done, in which case the execution of “P|Q” will actually proceed somewhat sequentially. But the benefit of having a language like this is that we don’t need to know about that; issuing the instruction “P|Q” means that I don’t care about the ordering of P and Q, not that I explicitly want them to proceed at the same time. In concurrent programming we should be able to demand only as much control over the time-ordering of actions as we need, and the rest is left unstated.

Finally, we get into the question of universality and the relation to the lambda calculus, and this is where I get stuck. Mike illustrates how we can embed lambda calculus into pi calculus, but I don’t entirely follow this. The questions I haven’t resolved above include: How can we build up more complex commands from the simple instructions provided? Why is it sufficient to allow only mailbox names as our messages, rather than arbitrary strings (and what’s the significance of passing the name of one mailbox rather than another, if the messages themselves are just triggers)? But I don’t want to try getting into those issues until I’m more sure I’ve correctly understood the more basic stuff I’ve described above.

Posted by: Stuart on September 10, 2009 12:21 PM | Permalink | Reply to this

Re: The Pi Calculus II

Hurrah! All the things you’re wondering about, Stuart, are things I was wondering about. Except that for me, these and dozens of other puzzles piled on top of each other so fast as I read Mike’s post that my brain simply short-circuited, leaving me unable to phrase questions in any clear way.

I hope someone answers your questions in a comprehensible manner; then I might someday try to take your post and expand it into a full-fledged explanation of The Pi-Calculus for Dummies, Mathematicians and Physicists.

Here’s another small question to add to the heap. Mike gave a rewrite rule:

Now it’s time for some rewrite rules. In the example above, the obvious thing to do is let the deposit trigger the withdrawal; this rule is called “synchronization”. It says

  Deposit (z1, z2, ..., zn) in x |

  Wait until you can withdraw (y1, y2, ..., yn) from x,

  then P

  ⇒

  P { with z1, z2, ..., zn replacing y1, y2, ..., yn }

If you’re waiting for (y1, y2, ..., yn) to be in the mailbox, why should you do anything after someone has put something else, namely (z1, z2, ..., zn), into the mailbox?

Or does “Wait until you can withdraw (y1, y2, ..., yn) from x,” simply mean “wait until you can withdraw any string of length n from the mailbox”.

And what does “P { with z1, z2, ..., zn replacing y1, y2, ..., yn }” mean? Is P supposed to represent any pi-calculus expression with free variables y1, y2, ..., yn ?

And why do computer scientists still write code in a crappy teletype font like this? Are they trying to impress us with the venerable history of their profession? Math goes back a lot further — should I start writing equations in Roman numerals, or cuneiform?

Posted by: John Baez on September 10, 2009 4:58 PM | Permalink | Reply to this

Re: The Pi Calculus II

If you’re waiting for (y1, y2, ..., yn) to be in the mailbox, why should you do anything after someone has put something else, namely (z1, z2, ..., zn), into the mailbox?

“Wait until you can withdraw...” is a binding operator like ν or λ or ∃ or ∀. This confusion is the same as that of kids who don't know what to do with the expression f(y²) when all they have is the definition of f(x).

Or does “Wait until you can withdraw (y1, y2, ..., yn) from x” simply mean “wait until you can withdraw any string of length n from the mailbox”.

In the elf model, if you define “string of length n” to be “n-tuple of natural numbers”, then yes—if in addition the elf binds each of the nicknames y1 … yn to the corresponding numbers.

And what does “P { with z1, z2, ..., zn replacing y1, y2, ..., yn }” mean? Is P supposed to represent any pi-calculus expression with free variables y1, y2, ..., yn ?

Yes.

And why do computer scientists still write code in a crappy teletype font like this? Are they trying to impress us with the venerable history of their profession?

Because we use indentation to visually align blocks of code as well as to put ASCII art in the code. Otherwise, if you have a system where you're adding markup to the code to make it pretty (like HTML or TeX) you not only have to debug your code, you have to debug the markup as well, and you have to use two different viewers (like a web browser or a postscript viewer). There are tools like JavaDoc that postprocess the code, extract and parse comments, and render webpages that document the purpose, pre- and postconditions. But they're just for convenience and documentation's sake; the comments should be readable right there in the source even without running JavaDoc over them.

Posted by: Mike Stay on September 10, 2009 5:57 PM | Permalink | Reply to this

Re: The Pi Calculus II

John asked

And why do computer scientists still write code in a crappy teletype font like this? Are they trying to impress us with the venerable history of their profession? Math goes back a lot further — should I start writing equations in Roman numerals, or cuneiform?

I think it's closer to the reason why mathematicians already write formulae in

Ελληνικα

and offset funny. It's what they're used to code looking like while they tinker and fiddle with it until it compiles.

Posted by: some guy on the street on September 10, 2009 7:23 PM | Permalink | Reply to this

Re: The Pi Calculus II

I’m having difficulty getting a clear picture of the rules involved in Mike’s explanation in the main article. On the assumption that I’m not the only one having this difficulty…

Yeah, it was lacking. Try the elves version.

So, we have named mailboxes, and we can create new mailboxes with the “new” command; this command takes one argument, which is the name of the new mailbox. Every mailbox has exactly one name – the one it was given at its creation – and cannot subsequently be given additional names (e.g. created with the name “Hesperus” and then given the additional name “Phosphorus”) or have its name changed.

This isn’t quite right. For example, given two parallel processes

Deposit (z) in x. |
Wait until you can withdraw (y) from x, then 
  deposit (v) in y.

the first process refers to a mailbox by z, whereas (after the withdrawal succeeds) the second refers to it by y. In the story involving the elves, each mailbox has a unique number that doesn’t change, but that’s just one possible implementation.

Posted by: Mike Stay on September 10, 2009 5:03 PM | Permalink | Reply to this

Re: The Pi Calculus II

Looking at the Milner essay a couple comments down, it looks like yes, you've got the "control" aspect down; but you're using a much more elaborate system than what Mike described. In particular, your messages have stringy type, while Mike's messages are "untyped" in a sense similar to that in "untyped λ-calculus" —

Hmm… perhaps this alternate semantics will have a more immediate familiarity?

In Mike’s syntax,

  • New(x) { Q }

is like setting up a new blog with blogpresspad.com, and then going off to do Q; YOU will think of the new blog/blog address as “x”.

  • deposit x in y

is like leaving a comment on the blog you think of as “y”, mentioning the address of the blog you think of as “x”.

  • wait until you can withdraw x from y

is like lurking on the blog you think of as “y” until someone leaves a new comment mentioning another blog you will later think of as “x”, and you bookmark the address for later use.

  • P | Q

is about doing lots of things in different tabs of your browser; or being two people… This is clearly a superhuman feat.

Now, I don’t really get what happens if many people try to leave comments in the same place at the same time; maybe we need some more info about these “rewrite rules”? What’s the proper rewrite of this: … ?

new (x,v) {

new (y) { put y in x | withdraw z from y, then Q } |

new (w) { put w in x | withdraw z from w, then R } |

withdraw z from x, then put v in z

}

I can’t decide if this is a worse problem for me than λ-calculus applications that can’t converge to abstractions. But then, it’s hard to be Turing complete AND have a solvable halting-problem!

I’d also like to ask (as others have) why deposit is the last thing a concurrent process does? The wikipaedian doesn’t acknowledge this restriction, but lets depositing be followed by further waiting, while Mike seems to think stopping at deposit is important.

Posted by: some guy on the street on September 10, 2009 5:30 PM | Permalink | Reply to this

Re: The Pi Calculus II

I like your blogging metaphor! In fact, before the elf example, I tried writing one up using a wiki metaphor.

  • wait until you can withdraw x from y

is like lurking on the blog you think of as “y” until someone leaves a new comment mentioning another blog you will later think of as “x”, and you bookmark the address for later use.

True, except you also have to delete that comment from the page; only one lurker ever sees any given comment!

What’s the proper rewrite of this?

new (x,v) {
new (y) { put y in x | withdraw z from y, then Q } |
new (w) { put w in x | withdraw z from w, then R } |
withdraw z from x, then put v in z
}

Well, first we have to do some scope migration so the withdrawal from x is in the same scope as the deposit:

 new (x,v,y,w) {
   put y in x | withdraw z from y, then Q |
   put w in x | withdraw z from w, then R |
   withdraw z from x, then put v in z
 }

Now we have two deposits in x and one withdrawal, so there are two possible rewrites: either y gets drawn:

 new (x,v,y,w) {
   withdraw z from y, then Q |
   put w in x | withdraw z from w, then R |
   put v in y
 }

and then we synchronize on y:

 new (x,v,y,w) {
   Q {with v replacing free occurrences of z in Q} |
   put w in x | withdraw z from w, then R
 }

or w gets drawn:

 new (x,v,y,w) {
   put y in x | withdraw z from y, then Q |
   withdraw z from w, then R |
   put v in w
 }

and we synchronize on w:

 new (x,v,y,w) {
   put y in x | withdraw z from y, then Q |
   R {with v replacing free occurrences of z in R}
 }
Posted by: Mike Stay on September 10, 2009 6:22 PM | Permalink | Reply to this

Re: The Pi Calculus II

Oh dear! Rewrites aren’t deterministic!? Goodness… I hope the fragment of Pi that computes “everything” is deterministic. Is there a quantum Pi-calculus in vogue somewhere?

Posted by: some guy on the street on September 10, 2009 8:12 PM | Permalink | Reply to this

Re: The Pi Calculus II

“Some guy on the street” wrote:

quantum pi calculus

I don’t know, but it seems to me like a pi term is analogous to a state of a system of quantum harmonic oscillators, weakly coupled by rewrite rules that annihilate the previous term and create the next one.

Posted by: Mike Stay on September 10, 2009 9:36 PM | Permalink | Reply to this

Re: The Pi Calculus II

Oh dear! Re-writes aren’t deterministic!? goodness…

Even in the λ-calculus, rewrite rules aren't deterministic, in that a given expression, say (λx.ax)((λy.by)c), may reduce to two different expressions, in this case a((λy.by)c) and (λx.ax)(bc). But they're still deterministic in the weaker sense that they're confluent, as can be seen in this example since both possible intermediate expressions reduce to (and only to) a(bc).

But the Π-calculus is not confluent. This is meant to reflect the fact that, when you hook up computers in a network, what happens may depend on unpredictable conditions such as which parallel process happens first.

I hope the fragment of Pi that computes “everything” is deterministic.

It should be confluent; if we're lucky, somebody will cite some theorem to that effect. Actually, that theorem should be a corollary of some more general theorems:

  • given λ-calculus expressions a and b, which become Π-calculus expressions A and B when translated as described above, A reduces to B in the Π-calculus if and only if a reduces to b in the λ-calculus;
  • given a λ-calculus expression a, which becomes a Π-calculus expression A when translated, and a Π-calculus expression B, if A reduces to B in the Π-calculus, then a reduces to some λ-calculus expression b that becomes B when translated.

Are these theorems true?

Posted by: Toby Bartels on September 10, 2009 11:48 PM | Permalink | Reply to this

Re: The Pi Calculus II

Toby wrote:

Are these theorems true?

Yes; this was proved by Milner.

Posted by: Mike Stay on September 11, 2009 5:08 AM | Permalink | Reply to this

Chaos in Distributed computation; Re: The Pi Calculus II

This sensitivity to “unpredictable conditions” is precisely why such metasystems may be chaotic.

See, for example:

T. Hogg, B. A. Huberman, J. M. McGlade, The stability of ecosystems, Proceedings of the Royal Society of London, Series B, 1989.

B. A. Huberman, T. Hogg, Distributed computation as an economic system, The Journal of Economic Perspectives, 1995.

X. Li, Y. Y. Jin, G. Chen, Complexity and synchronization of the world trade web, Physica A: Statistical Mechanics and its Applications, 2003.

M. T. Matache, J. Heidel, Random Boolean network model exhibiting deterministic chaos, Physical Review E, 2004.

Posted by: Jonathan Vos Post on September 11, 2009 5:56 PM | Permalink | Reply to this

Re: The Pi Calculus II

Are these theorems true?

Yes; this was proved by Milner.

Excellent! Then some guy on the street may feel relieved.

Posted by: Toby Bartels on September 11, 2009 6:20 PM | Permalink | Reply to this

Re: The Pi Calculus II

Well, about as relieved as when I hear that program X outputs PS or PDF conforming to Adobe Document Structuring Conventions.

I’m not sure that the well-behavedness of a notational system should effectively be left to depend on “coding style”, but I’m getting more used to the idea that Pi-terms describe snapshots of a stochastic system rather than instructions for “doing stuff”. It is encouraging that a basically-stochastic thing is still structured enough to support a “control theory”, for building reliable structures out of unreliable bits.

Posted by: some guy on the street on September 11, 2009 6:31 PM | Permalink | Reply to this

Re: The Pi Calculus II

I’d also like to ask (as others have) why deposit is the last thing a concurrent process does?

It’s not necessarily true in all variants of pi calculus, just in this one, called “polyadic asynchronous” pi calculus. I’m not really sure of the reason for it.

There are LOTS of variants of pi calculus. In fact, I was originally going to use a slightly more complicated version where there are two flavors of names and each mailbox has an arity. There’s one flavor of name for depositing messages and one for withdrawing them, just like public key cryptography. Then the new operator would look something like new (x-in, x-out, 5) { P }, and if you wanted a process to be able to both deposit into and withdraw from a mailbox, you’d have to send it both names.
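
Just as a sketch of how that variant might look (this is my own illustration, not anything from the main article; reply is assumed to be a deposit-flavored name shared beforehand):

  new (log-in, log-out, 1) {
    Deposit (log-in) in reply. |
    Wait until you can withdraw (msg) from log-out, then P
  }

Anybody who withdraws log-in from reply can deposit into the new box, but only this process, which kept log-out to itself, can ever withdraw from it.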

Posted by: Mike Stay on September 10, 2009 6:38 PM | Permalink | Reply to this

Re: The Pi Calculus II

Is it really only about synchronization, though?

You can only withdraw a MESSAGE from a mailbox by knowing about the mailbox. But that message may well be (and, as we saw in Mike’s version of the Church numerals, often is) not the mailbox itself, but the mailbox where we can get hold of something else we want to go look at instead. Hence, linking from mailbox to mailbox by depositing mailbox names in mailboxes may very well be a good way to get things done beyond just synchronizing parallel processes.

Posted by: Mikael Vejdemo Johansson on September 10, 2009 5:32 PM | Permalink | Reply to this

Re: The Pi Calculus II

Whereas in the real world case reading messages is a process of discovery, with pi calculus mailboxes there are no surprises. There are no mystery messages whose contents we only learn once we’ve withdrawn them – on the contrary, the protocol requires that we know exactly what message we’re asking for!

I think that this is your only mistake (although hopefully somebody will confirm this). As Mike said in his reply to John's reply to you, the Withdraw command introduces a new variable (sort of like New does) rather than using a previously extant term (which is what Deposit does).

Because of this, the desire to withdraw does more than just force one to wait until one can withdraw (although it does that too); it also gives you a referent for the variable when it appears in the next command. And that is why we like to put mailbox names in these variables; the next command is liable to tell us to deposit or withdraw something in mailbox x, where x is a name that we have previously withdrawn.
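
A tiny example of the contrast, in the longhand syntax of the main article:

  new (x, y) {
    Deposit (y) in x. |
    Wait until you can withdraw (z) from x, then deposit (z) in z.
  }

The Deposit uses the already-bound names x and y, while the Withdraw binds the fresh name z; after synchronization the continuation becomes “deposit (y) in y”, so z really did acquire a referent.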

Posted by: Toby Bartels on September 10, 2009 6:39 PM | Permalink | Reply to this

Re: The Pi Calculus II

There’s a difference between a mailbox address or message contents (i.e. values) and the names referring to them (i.e. variables). When “new(x)” is executed, it does not create the mailbox “x”; rather, it creates some anonymous mailbox (e.g. the mailbox “foobar”) and then stores that mailbox into the variable x. Similarly, the execution of “withdraw(x) from y” will wait for y to have some contents and then will bind those contents to the name x. This doesn’t mean that the message contents are “x”, but rather that whatever the message is, we’ll call it x to keep it distinct from other messages in our local naming scope.

This is just like when we say “f(x) = 5 + x” we do not mean that only “x” can be passed to f, but rather that whatever we pass to f will be called x in the body of the function.

However, your discussion of guards and triggers does mostly hold up despite the confusion above. Because the receiving of messages blocks when there is no message yet, this creates a (weak-)synchronization which can be used to guard a process until the appropriate trigger. For example, we can use the template

ν(startP). ( startP(ignored). P | … )

to spawn a process P which will wait until some message is sent to startP (in the ellipses) before running. The reason to pass one channel instead of another is that passing a channel gives other processes the ability to write to or read from it, so this allows us to control access to different channels.
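
Spelled out in the longhand syntax of the main article, and with a hypothetical trigger-sender of my own filled in for the ellipsis (fired immediately here just for illustration; in real use it would sit somewhere else in the system), the template might read:

  new (startP) {
    Wait until you can withdraw (ignored) from startP, then P |
    Deposit (startP) in startP.
  }

P stays blocked until the deposit happens, and the withdrawn name is simply discarded; any key would do as the dummy message, so here startP itself is used.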

Posted by: wren ng thornton on September 11, 2009 4:30 AM | Permalink | Reply to this

Re: The Pi Calculus II

Thanks to everyone for clearing up the mistakes I made in my comments last week, the most serious of which was a value-vs.-variable confusion regarding the names of mailboxes and the contents of messages. So, here’s a second attempt to clear up (in my own mind) how this works. This time I’m going to try thinking about how we might implement a pi calculus system.

First, we have an unlimited supply of mailboxes – these are like the memory registers in your computer. Your computer’s memory registers each have permanent addresses, like “0x0013:0005”, which the operating system knows about. However, when we’re writing programs we don’t generally get access to these “true names” of the memory registers – instead, we use variables like “x” and “y”, and some behind-the-scenes magic takes care of remembering that a particular name “x” corresponds to a particular block of memory.

When we declare a new variable “x” in a program, we’re asking the computer to set aside a section of memory and to associate the name “x” with it. That is, we want to establish that any future uses of the name “x” will be understood to refer to that section of memory. In pi calculus, we do this with the “new” command – the computer finds a new mailbox that’s not being used, and establishes that the name we passed as an argument to “new” will from now on refer to that mailbox.

We can deposit messages into a mailbox with the “Deposit” command, as long as we have a name that refers to that mailbox. In what I wrote last week I allowed messages to be arbitrary strings, but let’s skip that stage this time: what we’re calling “messages” are in fact just names for mailboxes. When I issue the command:

  • Deposit(M1, M2)

I’m asking the computer to look up the “true name” of the mailbox that I refer to by the name “M1”, and put it into the mailbox that I refer to by the name “M2”.

We can also withdraw messages from a mailbox – again, only if we have a name that refers to that mailbox. When I issue the command:

  • Withdraw(y, M3)

I’m asking the computer to go to the mailbox that I refer to by the name “M3” and extract a message from it; from the above description of “Deposit”, we know that this message will be the true name of some mailbox; the computer doesn’t tell me this true name, but establishes that my future use of the name “y” will be understood to refer to that mailbox. (Note that, as in last week’s draft, if we can’t immediately make a withdrawal we simply wait until a message arrives.)

Now we can have multiple agents (or processes), each with their own vocabulary of names. If processes A and B both know about some common mailbox M, then A can use it to tell B about other mailboxes it knows of too. Let’s say A uses the name “M5” to refer to some mailbox, whose true name is “0x0013:0005”; A’s command Deposit(M5, M) puts “0x0013:0005” into mailbox M; after B executes Withdraw(z, M), it gets the name “z” added to its vocabulary, which name now also refers to mailbox “0x0013:0005”.
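
In the longhand notation of the main article, that exchange is just (a sketch; the A: and B: labels and the surrounding processes are elided):

  A:  Deposit (M5) in M.
  B:  Wait until you can withdraw (z) from M, then ...

and the synchronization rule makes B’s z refer to whatever mailbox A’s M5 referred to; the “true name” 0x0013:0005 never appears in either program.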

There are a couple of subtleties I haven’t addressed. First, I’ve been assuming for simplicity that messages must be single names of mailboxes, whereas Mike’s presentation allowed for tuples of names. If we allow that, we have to ensure that any withdrawal command knows what tuple-length to expect. For example, if we’re expecting a 2-tuple – we’re doing Withdraw((y,z), M), say – but the only message in mailbox M is a 5-tuple, then this constitutes an error. It’s not clear to me what advantage we get from allowing tuples, so I’ve stuck to single-name messages.

The second issue is the question of multiple messages in a single mailbox. As noted in my previous draft, this is something we’re free to choose in defining our pi calculus: we can decide that mailboxes will hold messages in a queue, or a stack, or an unordered jumble. If we choose the latter option, as Mike’s example does, then we get a source of non-determinism – when we make a withdrawal from a mailbox that contains multiple messages, it’s indeterminate which of those messages gets read.

I hope this time I’ve described the elementary mechanics of mailboxes, messages and names correctly. If so, I might get on to understanding how Mike’s demonstration of the embedding of lambda calculus into pi calculus works, and how we can use all this machinery to do computation!

Posted by: Stuart on September 17, 2009 10:35 AM | Permalink | Reply to this

Re: The Pi Calculus II

Some remarks about the ‘difference’ between polyadic deposits and single deposits: there isn’t an expressive difference between them, and working this out was a useful exercise!

There are doubtless many variations on this available.

I’m going to assume that the full environment of processes follows some Calling Convention, so that every process waiting for a message on any given channel is effectively ready to handle the same protocol. But this echoes my earlier concern about coding convention vs. language structure: Mike’s language supports a nearly-built-in notion of well-sortedness which should be easy to verify (at least for the immediate future), but this implementation obfuscates that notion.

So, suppose process P wants to tell process Q “(a,b,…)”; but all P really knows about Q is that it’s waiting for input on some particular channel. For simplicity, I’ll assume they both use the same name “w” for this one channel (‘w’ for ‘waiting’).

Now, to make sure that the same process gets all the message terms “a,b,…” in the right order, we can’t just dump them down “w”, because there may be several (interchangeable) processes waiting for input on “w”, and we can’t tell which of them will get which bits of our message. For emphasis, instead of Q, I’ll write Replicate Q

What we do instead is take

Deposit (a,b,…) in w

as short-hand for

new(pst!){ Deposit pst! in w | withdraw OK from pst! … }

— where we’ll fill in “…” later — and

Withdraw (α, β, ...) from w

as short-hand for

Withdraw wakeup from w . new(goAhead){ Deposit goAhead in wakeup | … }

What this allows us to do is unlock (wake up) a single process Q waiting for input on “w”; afterwards P and Q establish two private channels, rather like exchanging PGP keys, for talking exclusively to each other (until one of them spills the beans outside, anyway).

And what follows will look similar. I’m going to introduce another abbreviation, though, because those { } brackets are going to get messy:

{ deposit a in b | Q’ }

will look like

deposit a in b \ Q’

Then the two processes together look like

new(pst!) {

deposit pst! in w \
withdraw OK1 from pst! . deposit a in OK1 \
withdraw OK2 from pst! . deposit b in OK2 \ … } |

Replicate withdraw wakeup from w . new(goAhead) {

deposit goAhead in wakeup \
withdraw α from goAhead . deposit goAhead in wakeup \
withdraw β from goAhead . deposit goAhead in wakeup \ … }
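
If I’m reading the protocol right, a run goes like this: the sender drops its private pst! into w; one copy of the replicated receiver withdraws it (binding wakeup to pst!), mints a fresh goAhead, and deposits goAhead into pst!; the sender withdraws that as OK1 and answers with a on goAhead; the receiver withdraws a as α and re-deposits goAhead into pst!, prompting b; and so on, one message part per round trip. Since pst! and goAhead stay private to this pair, no other copy of the receiver can intercept a later part of the message.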

Another thought that occurred to me while working this out is that your earlier example of

wait for “You have won the lottery” in MyMailbox, then GoBuyYacht

was only a little bit backwards, after all: a Pi-term closer to the present set-up might look like this:

wait to withdraw “newSafetyDepositBoxKey” from “YouHaveWonTheLotteryAnnouncements”, then deposit “newSafetyDepositBoxKey” in “YachtSalesman”.

Posted by: some guy on the street on September 17, 2009 4:00 PM | Permalink | Reply to this

Re: The Pi Calculus II

Mike Stay wrote:

The pi calculus is just as powerful as the lambda calculus; in fact, there’s a straightforward translation of lambda calculus into the pi calculus.

I would like to expand on this statement. I’d say that the pi-calculus is more powerful than the lambda-calculus. The reason is that pi-calculus can ‘express in a simple manner’ much more computational behaviour than the lambda-calculus. This seems an innocent statement, but there’s much subtlety to it: the key problem is that the notion of “expressing in a simple manner” is not well understood. In fact I doubt that there is one notion of expressivity. Rather there are many, depending on what properties translations between programming languages ought to preserve.

The weakest understanding of this concept is the Church-Turing thesis (CTT). In the sense of the CTT, lambda-calculus and pi-calculus are equally expressive. But the CTT doesn’t seem to be very useful here because it is not discriminating enough. Other, related theses like the Cobham-Edmonds thesis have not yet been studied in the context of concurrency. The notion of expressivity that concurrency theorists study is compositional full abstraction (CFA), which might be augmented with other preservation properties.

What does CFA mean? Something like this: given two programming languages L and L’, a mapping f from L to L’ is CFA if f is a homomorphism (ignoring for simplicity issues of bound names) and two L-programs M, M’ are equated by the chosen notion of L-equality if and only if their translations f(M) and f(M’) are equated by the chosen notion of L’-equality. I believe that pi-calculus is more expressive than lambda calculus in the sense of CFA.
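
In symbols, writing ≃_L and ≃_{L’} for the chosen notions of equality, the condition on f is:

  M ≃_L M’  if and only if  f(M) ≃_{L’} f(M’),  for all L-programs M, M’.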

It is interesting to compare different variants of pi-calculi w.r.t. expressivity (of whatever form). C. Palamidessi has shown (simplifying somewhat) that asynchronous pi-calculi are less expressive than the synchronous pi-calculus, because the latter can be used to do a form of symmetry breaking that the former cannot express.

This has a strong resemblance to the continuation passing style transform, also known as the Yoneda embedding.

Most of the existing translations of lambda-calculi into (fragments of) pi-calculus are straightforwardly understood as continuation passing style transforms.

Posted by: Martin Berger on September 10, 2009 1:13 PM | Permalink | Reply to this

Re: The Pi Calculus II

What’s in a name? – five gentle pages by Milner.

Posted by: David Corfield on September 10, 2009 1:31 PM | Permalink | Reply to this

Public Disinformation Announcement Logic; Re: The Pi Calculus II

I’ve been wrestling with some of these issues, even in a simplified universe where all messages are public (i.e. to all agents, no private mailboxes/channels).

Abstract:
Public Disinformation Announcement Logic (PDAL) is an extension of multiagent epistemic logic with dynamic operators to model the informational consequences (including belief revisions) of announcements (some by liars) to the entire group of agents. We propose an extension of public announcement logic with a dynamic modal operator that expresses what is true after any announcement: ◊φ expresses that there is a truthful announcement ψ after which φ is true. This logic gives a perspective on Fitch’s knowability issues: for which formulas φ does it hold that φ → ◊Kφ? We give various semantic results and show completeness for a Hilbert-style axiomatization of this logic. There is a natural generalization to a logic for arbitrary events. Issues of computational complexity are raised. It would be interesting if PDAL were computationally infeasible with classical computers, but feasible with quantum computers. Future directions are indicated.

Posted by: Jonathan Vos Post on September 10, 2009 3:55 PM | Permalink | Reply to this

Re: The Pi Calculus II

A largely tangential thought, but perhaps of interest for unification purposes: can we think of the “new” operator as a channel that every process listens to, and that exactly one external process writes on, just listing (e.g.) all the available channels in some order? Because then the question of reflecting Pi in n-Cat has one primitive fewer to worry about…

Posted by: some guy on the street on September 11, 2009 6:21 PM | Permalink | Reply to this

Grand Central

Incidentally, this article popped up on my reading list this morning:

Grand Central Now Open to All

Apple today made the source code of Grand Central Dispatch available under an Apache open source license. One of the new technologies for concurrency added to Mac OS X 10.6 Snow Leopard, Grand Central is Apple’s attempt to help developers deal with the rise of multi-core.

[snip]

Of course, this is also very interesting for scientific developers. It may be possible to parallelize code in the not too distant future using Grand Central Dispatch, and run that code not only on Macs, but also on clusters and supercomputers.

Posted by: Eric Forgy on September 11, 2009 10:09 PM | Permalink | Reply to this

Re: The Pi Calculus II

It occurred to me that the lambda-calculus might have a coalgebraic flavour to it. Term models are usually initial algebras for some functor, elements being constructed from primitives and constructors. As such, they’re isomorphic to their image under the functor. Reversing the isomorphism shows that they’re also coalgebras, and so sit on the side of semantics as well.

I just came across Jacobs and Rutten discussing Böhm trees for (untyped) lambda terms in their A Tutorial on (Co)Algebras and (Co)Induction. On page 7 they show that Böhm-tree formation, BT, is a kind of coalgebraic destructor/observation-type operation.

For a λ-term M, if M does not have a head normal form, then BT(M) consists of a single, unlabeled node. If M does have a head normal form, say of the form λ x_1 … x_k . y M_1 … M_n, then BT(M) is a tree with root labelled λ x_1 … x_k . y and n branches labeled BT(M_i).

The operation BT(−) of taking Böhm trees can then be seen as a function from the set of lambda terms (modulo β-equivalence) to an appropriate space of observations (consisting of possibly infinite trees labeled by sequences of variables with finitely many branches).

Then they note that, in personal communication, Henk Barendregt, author of The Lambda Calculus: Its Syntax and Semantics, said that

…at the time of writing this definition of Böhm trees he felt slightly uncomfortable about the nature of the definition. He saw it as a possibly infinite process, which is well-defined at every finite stage. He emphasised…that it is certainly not an inductive definition. Indeed, it is a coinductive definition!
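
A concrete example (mine, not Jacobs and Rutten’s): the fixed-point combinator Y = λf.(λx.f(xx))(λx.f(xx)) has head normal form λf.f M, where M = (λx.f(xx))(λx.f(xx)) in turn has head normal form f M. So BT(Y) is the infinite tree λf.f(f(f(…))): well-defined at every finite stage, but only coinductively as a whole.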

Posted by: David Corfield on September 17, 2009 11:10 AM | Permalink | Reply to this

Re: The Pi Calculus II

For much more on Böhm trees for the lambda-calculus as coalgebras see chapter 4 of Jeroen Ketema’s thesis – Böhm-Like Trees for Rewriting.

Posted by: David Corfield on September 17, 2009 4:58 PM | Permalink | Reply to this

Re: The Pi Calculus II

Thanks! I’m ashamed to admit I haven’t gotten around to understanding Böhm trees yet… and I’m too busy today. But you’re making me more eager to do so.

Posted by: John Baez on September 17, 2009 6:19 PM | Permalink | Reply to this

Re: The Pi Calculus II, Concurrent Combinators

I found this paper to be very clear and well-structured in explaining asynchronous pi-calculus (and some relatives) and also how to model its semantics with a small set of simple atoms linked into graphs. Does this look similar to string diagrams for monoidal categories and what Bob Coecke is doing for quantum systems?

Raymond Hu
Concurrent Combinators for Mobile Processes

The original paper about the concurrent combinators is:

Kohei Honda and Nobuko Yoshida. Combinatory representation of mobile processes.

Posted by: Marcus Verwiebe on September 25, 2009 12:34 AM | Permalink | Reply to this

Re: The Pi Calculus II

A post by Bob Walters as to why, since semantics should come before syntax, process algebras are not the way to approach concurrency. And more problems and more.

Posted by: David Corfield on January 7, 2010 9:40 PM | Permalink | Reply to this
