

April 29, 2011

Higher Categories for Concurrency

Posted by John Baez

guest post by Mike Stay

Way back in 2006 when this blog started, we had a discussion about cartesian closed categories and the lambda calculus. There we learned about Lawvere’s invention of categorical semantics and how modern programming languages use his ideas for defining and implementing datatypes. Basically, a datatype declaration is a presentation of a cartesian closed category. Implementations of the datatype are cartesian closed functors from that category to Set; these pick out a set of values for the datatype along with functions that act on those values and satisfy the given relations. The datatype declaration is “syntax”, and the implementation functor assigns “semantics” in Set to the syntax.

In Robin Milner’s 1991 Turing Award lecture, he explained how Set doesn’t work as a target for implementing concurrent programs. He showed how the two programs

{A: x = 1; x = x + 1;}

and

{B: x = 2;}

when run individually, assign to x the same value. But if we run them concurrently, x may end up either 2 or 3, depending on how the statements of the two programs interleave: if B's assignment runs between the two statements of A, the final value is 3; otherwise it is 2. So we need to look elsewhere for a categorical semantics for concurrent programs.
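
Milner's example can be checked by brute force. The following sketch (the encoding is mine, not Milner's) models each program as a list of atomic assignment steps and enumerates every interleaving that preserves each program's internal order:

```python
def interleavings(p, q):
    # yield every merge of p and q that preserves each list's internal order
    if not p:
        yield list(q)
        return
    if not q:
        yield list(p)
        return
    for rest in interleavings(p[1:], q):
        yield [p[0]] + rest
    for rest in interleavings(p, q[1:]):
        yield [q[0]] + rest

# program A: x = 1; x = x + 1   program B: x = 2
A = [lambda x: 1, lambda x: x + 1]
B = [lambda x: 2]

outcomes = set()
for schedule in interleavings(A, B):
    x = 0
    for step in schedule:
        x = step(x)
    outcomes.add(x)

print(sorted(outcomes))  # [2, 3]
```

Only the schedule A1, B1, A2 produces 3; the other two interleavings produce 2, so the set of outcomes is {2, 3}.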

Milner contributed a great deal to the study of concurrent programming; the award cited the following:

  1. LCF, the mechanization of Scott’s Logic of Computable Functions, probably the first theoretically based yet practical tool for machine assisted proof construction;

  2. ML, the first language to include polymorphic type inference together with a type-safe exception-handling mechanism;

  3. CCS, a general theory of concurrency. In addition, he formulated and strongly advanced full abstraction, the study of the relationship between operational and denotational semantics.

In CCS, the topology of interacting processes was fixed; Milner, Parrow, and Walker extended it and proposed a concurrent programming language with a dynamic topology called pi calculus, which we’ve discussed before. Bruni and Montanari gave a categorical semantics for the pi calculus in terms of cartesian closed double categories.

Gérard Boudol’s blue calculus is an elegant fusion of typed lambda calculus and pi calculus; there are faithful embeddings of both into the blue calculus. I’ve been trying to find a higher category whose objects are types, whose morphisms are blue terms with a single free variable, and whose 2-morphisms are the rewrite rules. I think I’ve succeeded, and I’d like to lay out the parts of this construction for comments. I’m in the process of writing it up as part of my thesis.

There are various features we want in the higher category:

  1. Since we can embed lambda calculus into it, it should be at least symmetric monoidal closed.

  2. Since concurrent processes can compete for resources, there’s the opportunity for nondeterminism: when two withdrawals hit the same account and there’s not enough to cover both, it matters which one happens first! In the blue calculus, this happens at the level of rewrite rules; unlike the lambda calculus, rewrite rules in the blue calculus are nonconfluent, so we can’t mod out by them without losing a lot. This means we’ll need at least a bicategory.

  3. One way to think of a bunch of concurrent processes is as a “chemical soup” of terms where any two compatible terms may engage in a “chemical reaction”, interacting to become a new term. In the chemical world, such a reaction would look something like this:

    \displaystyle \begin{array}{c}Na^+ \;|\; Cl^- \\ \Downarrow \\ NaCl\end{array}

    The simplest concurrent interaction looks like this:

    \displaystyle \begin{array}{c}\text{send } y \text{ on the channel } x \;\big|\; \text{receive } z \text{ on the channel } x\text{, then do } P \\ \Downarrow \\ P[y/z] \quad (y \text{ replaces } z)\end{array}

    In the blue calculus, the monoidal product arising from the lambda calculus embedding is typed differently from running two processes concurrently, so we’ll need separate operations for the two.

  4. The blue calculus has a notion of names; in actual programming practice, these are often called references or pointers or addresses. Names should have something to do with natural numbers and allocating resources.
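
The account race in item 2 can be made concrete with a small sketch (the function names here are mine, purely for illustration): each withdrawal is atomic, and the two possible interaction orders leave the account in observably different states, which is why the two rewrites cannot be identified.

```python
def withdraw(balance, amount):
    # an atomic withdrawal: succeed only if the funds cover it
    if balance >= amount:
        return balance - amount, "ok"
    return balance, "declined"

def run(order, balance=100):
    # apply the withdrawals in the given interaction order
    results = []
    for amount in order:
        balance, status = withdraw(balance, amount)
        results.append(status)
    return balance, results

print(run([70, 50]))  # (30, ['ok', 'declined'])
print(run([50, 70]))  # (50, ['ok', 'declined'])
```

Both orders decline one withdrawal, but the final balances differ, so the order of interaction is observable.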

Symmetric monoidal closed bicategories

Paddy McCrudden defined symmetric monoidal bicategories; by defining A \multimap - to be right pseudoadjoint to - \otimes A, we get symmetric monoidal closed bicategories. Simon Ambler’s thesis gives a term calculus for symmetric (weakly) monoidal closed categories; by treating the rewrites as 2-morphisms, it should provide a 2-term calculus for symmetric monoidal closed bicategories.

Tensor product versus concurrency

In the blue calculus, one can run any two terms of the same type concurrently and get a new term of that type:

\displaystyle \frac{\Gamma \vdash T:X \quad \quad \Gamma \vdash T':X}{\Gamma \vdash (T|T'):X},

where \Gamma is a typing context. Since morphisms are just terms with a single free variable, this suggests that we should be enriching over the monoidal bicategory SymMonCat so that the category Hom(A,B) is symmetric monoidal: the tensor product is | and the unit is the do-nothing process 0. In a symmetric monoidal closed bicategory enriched over SymMonCat, the tensor product \otimes distributes over |, which is analogous to how multiplication distributes over addition. To emphasize this, I’ll talk about the tensor product of objects, 1-morphisms, and 2-morphisms and the concurrent sum of 1-morphisms and 2-morphisms. I think this is a critical feature that has been missing from previous attempts to give categorical semantics to concurrent calculi.
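
In symbols, the analogy reads as follows (a sketch in my own notation; the coherence 2-cells witnessing the distributivity are omitted). For 1-morphisms f, g, h of the appropriate types:

\displaystyle f \otimes (g \,|\, h) \;\cong\; (f \otimes g) \,|\, (f \otimes h), \quad \text{just as} \quad a(b + c) = a b + a c.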

Aside: physics

Physicists can get some intuition of categories enriched over SymMonCat using an example that John Baez and I came up with in Singapore. 2Hilb—the category of 2-Hilbert spaces, linear functors, and linear natural transformations—is an example of a symmetric monoidal closed bicategory enriched over SymMonCat. Concurrent processes correspond to sums of linear functors acting on the same 2-Hilbert space, whereas parallel processes act on different 2-Hilbert spaces. If we allow formal sums of cobordisms that have the same source and target manifolds, then nCob_2 is also one of these kinds of bicategories: the tensor product is horizontal juxtaposition and the concurrent sum is the formal sum. An extended TQFT is a structure-preserving functor between such bicategories.

Names

Ian Stark’s work on the nu calculus shows that we can model names by introducing a natural-number object N and then introducing the monad N \multimap -, where

\displaystyle \begin{array}{rrcl}\mu_X: & N \multimap N \multimap X & \to & N \multimap X \\ & f & \mapsto & \lambda n.f(n, n+1) \\ \eta_X: & X & \to & N \multimap X \\ & x & \mapsto & \lambda n.x \end{array}

Given such a monad, we can then define the inference rule

\displaystyle \frac{\Gamma, n:N \vdash T:X}{\Gamma \vdash (\lambda n.T)(\mathrm{fresh}):X}

where the fresh number comes from the monad. In the nu calculus, this curry-then-apply-to-fresh operation is written

\displaystyle \frac{\Gamma, n:N \vdash T:X}{\Gamma \vdash (\nu n.T):X}

In many programming languages, the \nu operator is called “new”; the Greek “nu” is a pun.
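
Operationally, this “curry, then apply to a fresh name” reading of \nu is just a gensym: the body is abstracted over the name, then applied to a counter that is bumped on every allocation. A minimal sketch (my own encoding, not Stark’s model):

```python
import itertools

_counter = itertools.count()  # the supply of fresh names

def fresh():
    # allocate a name that has never been handed out before
    return next(_counter)

def nu(body):
    # (nu n. T) is read as (lambda n. T)(fresh)
    return body(fresh())

a = nu(lambda n: n)
b = nu(lambda n: n)
print(a != b)  # True: each nu-binding receives a distinct name
```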

Blue calculus

We’re now in a position to introduce the blue calculus:

\displaystyle \begin{array}{rl}T, T' ::= & x \\ \mid & T x \\ \mid & \lambda x.T \\ \mid & (x = T) \\ \mid & (x \Leftarrow T) \\ \mid & T|T' \\ \mid & \nu x.T\end{array}

Note that the second term constructor says that unlike in the lambda calculus, one can only apply terms to variables. To enforce this, we use an endofunctor of bicategories P in our typing rules; whereas the simple term x:X is merely a value with no potential for side effects, we read T:P(X) as being a process of type X that may result in the creation of new names. A standard choice for P is N \multimap -.

\displaystyle \frac{}{x_1:X_1, \ldots, x_n:X_n \vdash x_i:X_i} \quad 1 \le i \le n
\displaystyle \frac{}{x:X \vdash x:P(X)}
\displaystyle \frac{\Gamma \vdash T:P(X \multimap Y) \quad \quad \Gamma \vdash x:X}{\Gamma \vdash T x:P(Y)}
\displaystyle \frac{\Gamma, x:X \vdash T:P(Y)}{\Gamma \vdash (\lambda x.T):P(X \multimap Y)}
\displaystyle \frac{\Gamma \vdash x:X \quad \quad \Gamma \vdash T:P(X)}{\Gamma \vdash (x=T):P(Y)}
\displaystyle \frac{\Gamma \vdash x:X \quad \quad \Gamma \vdash T:P(X)}{\Gamma \vdash (x \Leftarrow T):P(Y)}
\displaystyle \frac{\Gamma \vdash T:P(X) \quad \quad \Gamma \vdash T':P(X)}{\Gamma \vdash (T|T'):P(X)}
\displaystyle \frac{\Gamma, x:N \vdash T:P(Y)}{\Gamma \vdash (\nu x.T):P(Y)}

We should model these inference rules as (di)natural transformations (of functors between bicategories of this kind).

An interaction between terms in a concurrent sum looks like this in the blue calculus:

\displaystyle \begin{array}{c}x y \;|\; (x \Leftarrow T) \\ \Downarrow \\ T y \end{array}

If T is a term of the form \lambda z.T', then at this point beta reduction happens just as in the lambda calculus. Interactions and the beta and eta rules should be modeled as modifications.
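
The interaction rule can be mimicked as one step of “chemical” rewriting on a soup of terms. This is a toy encoding of my own, not Boudol’s syntax: a sender and a receiver on the same channel name react, leaving the receiver’s body applied to the message.

```python
def react(soup):
    # perform one interaction  x y | (x <= T)  ~~>  T y, if any is possible
    for send in [t for t in soup if t[0] == "send"]:
        for recv in [t for t in soup if t[0] == "recv"]:
            if send[1] == recv[1]:  # same channel name
                _, _, y = send
                _, _, T = recv
                soup.remove(send)
                soup.remove(recv)
                soup.append(("value", T(y)))
                return soup
    return soup  # no matching sender/receiver pair: the soup is inert

soup = [("send", "x", 41), ("recv", "x", lambda y: y + 1)]
print(react(soup))  # [('value', 42)]
```

A lone sender, like a lone receiver, is inert: `react` returns it unchanged, matching the remark later in the thread that x y | x z has no rewrites without a receiver on x.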

Conclusion

Symmetric monoidal closed bicategories, enriched over the monoidal bicategory SymMonCat and equipped with a datatype and a monad for names, provide

  1. the ability to embed lambda calculus

  2. the ability to model nonconfluent rewrite rules

  3. concurrent sums different from the monoidal product

  4. interactions on named channels

which together, I believe, are sufficient for modeling Boudol’s blue calculus.

Posted at April 29, 2011 2:48 AM UTC

TrackBack URL for this Entry:   http://golem.ph.utexas.edu/cgi-bin/MT-3.0/dxy-tb.fcgi/2393

118 Comments & 0 Trackbacks

Re: Higher Categories for Concurrency

Hi Mike,

I am not quite following yet. Maybe you can help me a bit.

I see that you say that you need a category enriched in symmetric monoidal categories, but I do not quite follow why exactly. I do get the rough idea that somehow the tensor product is supposed to take care of the parallelism, which makes a lot of sense, but that alone would just suggest to enrich just in abelian monoids = symmetric monoidal 0-categories, no? How is the 2-categorical aspect related to the concurrency? Sorry, maybe I am missing the whole point. But then, maybe you can expand.

Sorry for the dumb question. I think what would help me is if you could systematically, in a textbook kind of fashion, tell me what the 2-categorical semantics is that you are thinking of. You do define above the blue calculus (why “blue”?), and I gather there is now supposed to be a set of rules for how to interpret that in some kind of 2-category with monoidal hom-categories. (Did you even say what kind of such 2-category precisely you need?)

I would enjoy if you could spell out that 2-categorical semantics (at least a fragment of it) explicitly. Thanks.

Posted by: Urs Schreiber on April 29, 2011 8:23 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

I see that you say that you need a category enriched in symmetric monoidal categories, but I do not quite follow why exactly. I do get the rough idea that somehow the tensor product is supposed to take care of the parallelism, which makes a lot of sense, but that alone would just suggest to enrich just in abelian monoids = symmetric monoidal 0-categories, no? How is the 2-categorical aspect related to the concurrency?

The concurrent sum of two terms of the same type gives a new term of that type:

(1)\; \frac{\Gamma \vdash T:X \quad \quad \Gamma \vdash T':X}{\Gamma \vdash (T|T'):X}.

At the rewrite level, we have a corresponding rule:

(2)\; \frac{\alpha:T \Rightarrow U \quad \quad \alpha':T' \Rightarrow U'}{(\alpha \,|\, \alpha'):(T|T') \Rightarrow (U|U')}.

There are also rewrite rules that are not of this form. Just as the controlled-NOT is fundamentally a two-qubit gate—it can’t be written as the tensor product of 1-qubit gates—the rewrite rule for interactions

(3)\; \begin{array}{c}x y \;|\; (x \Leftarrow T) \\ \Downarrow \\ T y\end{array}

is fundamentally a two-term rewrite rule.

I think what would help me is if you could systematically, in a textbook kind of fashion, tell me what the 2-categorical semantics is that you are thinking of.

Here are four different ways to give semantics to the blue calculus. (By the way: anyone know how to get HTML tables in a comment?)

why “blue”?

I have no idea. It was the name Boudol gave it, and he offered no etymology. I like to think that he was referring to Borges’ short story “Blue Tigers”:

“These are the stones that spawn!” he exclaimed. “There are many of them now, but they can change. Their shape is that of the moon when it is full, and their colour is the blue that we are permitted to see only in our dreams. My father’s father spoke the truth when he told men of their power.”

[…]

Naturally, the four mathematical operations - adding, subtracting, multiplying, and dividing - were impossible. The stones resisted arithmetic as they did the calculation of probability. Forty discs, divided, might become nine; those nine in turn divided might yield 300. I do not know how much they weighed. I did not have recourse to a scale, but I am sure that their weight was constant, and light. Their colour was always that same blue.

These operations helped save me from madness. As I manipulated the stones that destroyed the science of mathematics, more than once I thought of those Greek stones that were the first ciphers and that had been passed down to so many languages as the word “calculus”. Mathematics, I told myself, had its origin, and now has its end, in stones. If Pythagoras had worked with these…

(Back to Urs’ questions again):

I gather there is now be supposed to be a set of rules for how to interpret that in some kind of 2-category with monoidal hom-categories.

Yes:

  • types get interpreted by objects,

  • terms with one free variable as morphisms (though there’s some subtlety in how you compose them),

  • rewrite rules as 2-morphisms,

  • function types with the internal hom \multimap,

  • the monoidal product type with the tensor product in the symmetric monoidal closed bicategory (so that - \otimes A is left pseudoadjoint to A \multimap -),

  • and the concurrent sum with the tensor product in the symmetric monoidal Hom category.

(Did you even say what kind of such 2-category precisely you need?)

Yes, a symmetric monoidal closed bicategory enriched over SymMonCat.

Posted by: Mike Stay on April 29, 2011 11:52 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

rewrite rules as 2-morphisms,

Which of these 2-morphisms are taken to be invertible, which not?

Posted by: Urs Schreiber on April 30, 2011 12:43 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

I think what would help me is if you could systematically, in a textbook kind of fashion, tell me what the 2-categorical semantics is that you are thinking of.

Here are four different ways to give semantics to the blue calculus.

That page just names four symmetric monoidal 2-categories. I tried to ask for the analog of, say, section D1.2 in Johnstone’s Elephant.

But never mind, I think I can guess it now. But what I still don’t clearly see is to what degree the 2-categorical level is necessary for the interpretation of concurrency. What fails if you just have monoidal structure on the hom-sets?

Posted by: Urs Schreiber on April 30, 2011 1:00 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

D1.2 in Johnstone’s Elephant

Defn 1.2.1 talks about \Sigma-structures in a category C with finite products, where \Sigma is a signature. I haven’t yet come up with a precise definition of a “higher signature”, but it will have sorts, function symbols, rewrites, and relations. In addition to product sorts, there will also be internal homs. The rewrites will be able to map between concurrent sums of function symbols. The relations will relate rewrites rather than function symbols.

Which of these 2-morphisms are taken to be invertible, which not?

The invertible rewrites are the ones corresponding to “structural congruence”: rewrites asserting that the concurrent sum is associative and commutative; scope migration

(1)\; P \;|\; \nu x.Q \quad \equiv \quad \nu x.(P|Q) \quad \text{if } x \text{ is not free in } P;

distributivity of application over the concurrent sum; etc.

What fails if you just have monoidal structure on the hom-sets?

This term has two processes x y and x z competing for the mutex (x \Leftarrow T):

(2)\; x y \;|\; x z \;|\; (x \Leftarrow T).

It has two inequivalent rewrites,

(3)\; T y \;|\; x z

and

(4)\; x y \;|\; T z.

If we enrich over a monoid rather than a monoidal category, we’re forced to identify these.
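
The point can be seen by enumerating the one-step rewrites of such a term directly. In this sketch (the encoding is my own, purely illustrative), each sender on the receiver’s channel gives one rewrite, and the two results differ in which sender is left behind; a semantics with only a monoid of terms, and no category of rewrites, has nowhere to record that difference.

```python
def rewrites(senders, receiver):
    # each sender on the receiver's channel yields one possible rewrite;
    # the result pairs the reduct with the senders left behind
    channel, T = receiver
    results = []
    for i, (ch, arg) in enumerate(senders):
        if ch == channel:
            rest = senders[:i] + senders[i + 1:]
            results.append((T + " " + arg, rest))
    return results

print(rewrites([("x", "y"), ("x", "z")], ("x", "T")))
# [('T y', [('x', 'z')]), ('T z', [('x', 'y')])]
```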

Posted by: Mike Stay on May 2, 2011 3:32 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

This term has two processes xyx y and xzx z competing for the mutex

Ah, thanks, now I get it. And now I understand that you addressed just this simple point already way above where you mentioned Milner: the point here is that the programs are not deterministic. Given some state of the program, there may be several successor states that may be inequivalent. Hence we are in a genuine category of states (terms), not a discrete set, and also not a groupoid.

Would it nevertheless make sense to separate this issue from concurrency? In the sense that concurrency is one source of non-deterministic programs, but there might be other sources?

I am just trying to get a clear picture of what 2-logic is like, not just in the case of (2,1)-logic that is itself a special case of the (\infty,1)-logic that has been discussed a fair bit here, but in the fully general case of (2,2)-logic.

So right now I am thinking: ah, (2,2)-logic is about non-determinism, and one major example for this is concurrency.

Posted by: Urs Schreiber on May 2, 2011 4:12 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Would it nevertheless make sense to separate this issue from concurrency? In the sense that concurrency is one source of non-deterministic programs, but there might be other sources?

I don’t think you can remove nondeterminism from concurrency, but I can easily imagine other sources of nondeterminism.

In his Turing award talk, Milner mentioned nondeterminism without concurrency:

Plotkin dealt with nondeterminism by means of his powerdomain construction, a tour de force of domain theory. It provides, for any suitable domain D, the powerdomain P(D) whose elements are subsets of D. So with nondeterminism in mind we can redefine the meanings of programs as

\text{Program Meanings} \;=\; \text{Memories} \to P(\text{Memories})

essentially relations over memories. This semantics is perfectly compositional for the kind of nondeterministic language which you get by adding “don’t care” branching to a sequential language.

 

But concurrency has a shock in store; the compositionality’s lost if you can combine subprograms to run in parallel, because they can interfere with one another…
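
Plotkin’s set-valued reading of nondeterminism, as quoted above, is easy to sketch. In this illustration (the encoding is mine; memories are modeled as frozensets of variable/value pairs), a program means a function from a memory to the set of reachable memories, and “don’t care” branching is the union of the branches’ outcomes:

```python
def assign(k, v):
    # deterministic assignment: overwrite k in the memory
    return lambda mem: {frozenset({pair for pair in mem if pair[0] != k} | {(k, v)})}

def seq(p, q):
    # run p then q, collecting every reachable memory
    return lambda mem: {m2 for m1 in p(mem) for m2 in q(m1)}

def branch(p, q):
    # "don't care" branching: the meaning is the union of the outcomes
    return lambda mem: p(mem) | q(mem)

# a program's meaning is a function Memories -> P(Memories)
prog = branch(assign("x", 2), seq(assign("x", 1), assign("x", 3)))
print(sorted(dict(m)["x"] for m in prog(frozenset())))  # [2, 3]
```

This is compositional for sequential-plus-branching languages, exactly as the quote says; the shock comes only when parallel subprograms can interfere through shared memory.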

Posted by: Mike Stay on May 2, 2011 6:50 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Maybe one kind of (2,2)-logic is about non-determinism, but there’s another kind which is much more like (2,1)-logic, only it talks about categories instead of groupoids.

Posted by: Mike Shulman on May 3, 2011 6:09 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Maybe one kind of (2,2)-logic is about non-determinism, but there’s another kind which is much more like (2,1)-logic, only it talks about categories instead of groupoids.

Can you expand on this? What are these different “kinds” of (2,2)-logic?

Posted by: Urs Schreiber on May 3, 2011 6:40 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

I don’t yet understand the kind which is supposedly about non-determinism, but the one I do understand is here.

Posted by: Mike Shulman on May 3, 2011 9:34 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

I don’t yet understand the kind which is supposedly about non-determinism, but the one I do understand is here.

Thanks for that link, this is the kind of 2-categorical semantics discussion that I tried to ask Mike Stay to point me to. (I probably once knew that you have this page, but must have forgotten.) It’s good to see how exactly which 2-limits are supposed to be used where in 1-categorical semantics we have ordinary limits: I am happy to see the inserter appear in the interpretation of hom-types.

But then further down the list, when you talk about the 2-categorical semantics of logical conjunctions and of quantifiers, you just say “subobject”, “union of subobjects”, “image”. Is it unabmbiguous what this means in a (2,2)(2,2)-category? A subobject I gather you take to be an equivalence class of (-1)-truncated morphisms. What precise definition of “image” of a morphism in a 2-category do you mean?

Concerning “non-deterministic kinds of 2-logic”: “non-determinism” seems to be a sensible way to think about the categories of terms in (2,2)-logic: you describe the hom-types in (2,2)-logic; in (2,1)-logic these are thought of as “types of proofs of equality”, or “types of programs that equivalently rewrite one term to another”. In that vein, (2,2)-logic has types of programs that rewrite terms, where you cannot necessarily tell from the domain term what the equivalence class of the output term will be. So it’s “non-deterministic” in this sense.

Posted by: Urs Schreiber on May 4, 2011 7:52 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Well, that page is just part of the 2-categorical logic project, so it doesn’t define all of its terms. Subobjects and images are defined on the page regular 2-category.

Posted by: Mike Shulman on May 4, 2011 7:29 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Is (N \multimap -) really a monad? It seems like one of the composites (N \multimap X) \to (N \multimap N \multimap X) \to (N \multimap X) would send f to \lambda n. f(n+1).

Posted by: Mike Shulman on April 30, 2011 10:14 AM | Permalink | PGP Sig | Reply to this

Re: Higher Categories for Concurrency

That’s my question, too.

I haven’t seen this model of the nu-calculus before. If it works it’s much simpler than the ones I remember from Stark’s thesis.

The simplest model in Stark’s thesis was a functor-category model, in which the category of names was the category I of finite sets and injections (modelling renaming and name generation). Types were functors F : I \to Set, and terms were the natural transformations between these functors.

Usually in denotational semantics, a naive model of allocation uses a functor like T(A) = \mathbb{N} \to (A \times \mathbb{N}). Then allocation uses the argument and returns an incremented counter, so that the next allocation will be fresh. However, this is not a commutative monad, which is why I thought Stark had to turn to a functor-category semantics.
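
That naive counter-threading monad can be sketched directly (an illustration under my own naming, not Stark’s construction): a computation of type T(A) takes the next-free counter and returns a result together with the updated counter. The two allocations below happen in a fixed order, which is exactly the sense in which the monad fails to be commutative.

```python
def unit(a):
    # return a without touching the counter
    return lambda n: (a, n)

def bind(t, f):
    # thread the counter through t, then through f's continuation
    def run(n):
        a, n1 = t(n)
        return f(a)(n1)
    return run

def alloc(n):
    # hand out the current counter value as a fresh name, then bump it
    return n, n + 1

two = bind(alloc, lambda a: bind(alloc, lambda b: unit((a, b))))
print(two(0))  # ((0, 1), 2)
```

Swapping the two `alloc`s swaps which computation receives which name, so the order of effects is observable.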

Posted by: Neel Krishnaswami on May 2, 2011 10:02 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Yes, you’re both right. I’d gotten confused between what Stark requires and what the blue calculus requires. In the blue calculus, you can only apply terms to variables, not to arbitrary terms. I’m trying to enforce that using typing constraints, so I need a functor that maps from a type X for a variable to a type P(X) for an arbitrary term. The typing rule for currying

(1)\; \frac{\Gamma, x:X \vdash T:P(Y)}{\Gamma \vdash \lambda x.T:P(X \multimap Y)}

implies that I need a map lift : (X \multimap P(Y)) \to P(X \multimap Y). The functor P X = N \multimap X works fine in this setup, but it’s not a monad.

Stark’s monad for names is different; it needs to be a strong monad, and the “state” monad Neel mentioned is the usual choice.

Posted by: Mike Stay on May 2, 2011 3:40 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

What is the intended semantics of (T | T')? Is it “run T and T' in parallel and take the output of whichever one finishes first?”

Posted by: Mike Shulman on May 2, 2011 5:50 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

What is the intended semantics of (T∣T’)? Is it “run T and T’ in parallel and take the output of whichever one finishes first?”

Not exactly. A useful division of the kinds of terms is into senders and receivers; the term x y can be thought of as “sending y to the address x”, while the term (x \Leftarrow T) can be thought of as “the process T waiting for messages delivered to the address x”.

So the concurrent sum of processes is just listing the current set of senders and receivers, and the meaning is something more like “run T and T’ in parallel and let them interact with each other”. For example,

(1)\; x y \;|\; (x \Leftarrow T)

rewrites to

(2)\; T y,

so it’s not like one of them finished first. Also,

(3)\; x y \;|\; x z

just says there are two messages waiting to be delivered to the address x; it has no rewrites unless you sum it with a receiver of the form (x \Leftarrow T).

To have the two processes T and T' race, you’d need a mutex U for the “finish line”. The simplest example of that kind of competition is letting T = x y, T' = x z and then forming

(4)\; \nu x.(x y \;|\; x z \;|\; (x \Leftarrow U)).

This has two inequivalent rewrites,

(5)\; \nu x.(U y \;|\; x z)

and

(6)\; \nu x.(x y \;|\; U z).

If x is not free in U, then by scope migration we have

(7)\; U y \;|\; \nu x.x z \quad \text{and} \quad U z \;|\; \nu x.x y.

Since there are no reduction rules that apply to \nu x.x y or \nu x.x z no matter what you sum them with, these processes are equivalent to the “do-nothing” process 0, which is the identity for the concurrent sum. We’re left with

(8)\; U y \quad \text{and} \quad U z,

respectively.

Posted by: Mike Stay on May 2, 2011 4:10 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Huh, okay. So what does it mean to say that a term of that sort has a “type”? I’m having trouble imagining what sort of “type” a collection of senders and receivers can have, such that if we have two different sets of senders and receivers and let them interact with each other the resulting process would have the same type.

It also seems kind of inelegant to me to make use of a syntactic notion like “variable names” to describe how processes communicate with each other. I would tend to prefer something that’s more invariant under alpha-equivalence. My first inclination would be an “operadic” sort of “concurrent sum” which specifies exactly which input/output pipes of the two processes get plugged into each other, rather than depending on variable names to describe that.

Posted by: Mike Shulman on May 3, 2011 6:13 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

So what does it mean to say that a term of that sort has a “type”?

Well, you can look at the typing rules for details, but the key is that receivers can have any type you want; they simply throw away whatever they’re applied to (note the Y!):

(1)\; \frac{\Gamma \vdash x:X \quad \quad \Gamma \vdash T:P(X)}{\Gamma \vdash (x \Leftarrow T):P(Y)}

So

(2)\; \begin{array}{c}(x \Leftarrow T)y \\ \Downarrow \\ (x \Leftarrow T)\end{array}

Since application distributes over concurrent sum, this lets you write things like

(3)\; \begin{array}{c}((x \Leftarrow T) \;|\; x)y \equiv ((x \Leftarrow T)y \;|\; x y) \\ \Downarrow \\ (x \Leftarrow T) \;|\; x y \\ \Downarrow \\ T y\end{array}

It also seems kind of inelegant to me to make use of a syntactic notion like “variable names” to describe how processes communicate with each other.

It’s not so much the variables’ names as their values that govern the interaction. Often programmers think of variables as naming places in memory and will say that a variable “contains” a value: x y means “send to the address contained in x” and (x \Leftarrow T) means “T listens for messages at the address contained in x”.

I would tend to prefer something that’s more invariant under alpha-equivalence.

Blue terms are equivalent under alpha substitution:

(4)\; \lambda x.T \equiv \lambda x'.T[x'/x] \quad \text{if } x' \text{ is not free in } T;
(5)\; \nu x.T \equiv \nu x'.T[x'/x] \quad \text{if } x' \text{ is not free in } T.
Posted by: Mike Stay on May 4, 2011 1:58 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

My first inclination would be an “operadic” sort of “concurrent sum”

Like this? ;)

Bugs and Elmer and Wagner

Posted by: Mike Stay on May 4, 2011 2:36 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Well, I was hoping to get you to explain something about the typing at an intuitive level, since I don’t have time to read the whole thing myself right now. What does “throwing away whatever they’re applied to” have to do with “having any type you want”? I can define a constant function of any type, which throws away whatever it’s applied to, but it still has a particular type, namely the type of the constant that it returns.

Blue terms are equivalent under alpha substitution

Sure, but you also said that x y | (x \Leftarrow T) reduces to T y, whereas I bet that x y | (x' \Leftarrow T) doesn’t. Obviously this is something different from global alpha-equivalence—it’s maybe formally analogous to the difference between \lambda x.x and \lambda x.x'—but I guess what I’m saying is that it seems weird to me to enforce that separate processes (or sets of processes) use the same variable names to refer to the same memory addresses.

Posted by: Mike Shulman on May 4, 2011 4:58 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

What does “throwing away whatever they’re applied to” have to do with “having any type you want”? I can define a constant function of any type, which throws away whatever it’s applied to, but it still has a particular type, namely the type of the constant that it returns.

Let y:Y, x:Y \multimap Z, T:Y \multimap Z. Then

(1)\; \begin{array}{c}(x \Leftarrow T)y \\ \Downarrow \\ (x \Leftarrow T)\end{array}

In the top term, the subterm (x \Leftarrow T) has type Y \multimap Z, whereas in the bottom term it has type Z.

I guess what I’m saying is that it seems weird to me to enforce that separate processes (or sets of processes) use the same variable names to refer to the same memory addresses.

Nu allocates a new channel for sending and receiving messages and sticks the address of the channel in a variable; how would you get that address to two different processes other than by giving that variable to both of them? The processes themselves don’t have to use that name in their definition; they can abstract out the address with a lambda. For example, \lambda x.(x \Leftarrow T) is a process that reads in an address and then listens on it for messages, while \lambda z.z y reads in an address and sends the message y to it. If we sum these concurrently, nothing happens:

(2) (\lambda x.x\Leftarrow T) \;|\; (\lambda z.z y).

But if we apply the sum to a variable v, the processes can start interacting and we get

(3) \begin{array}{c}((\lambda x.x\Leftarrow T) \;|\; (\lambda z.z y))v\\\Downarrow\\(\lambda x.x\Leftarrow T)v \;|\; (\lambda z.z y)v\\\Downarrow\\(v\Leftarrow T) \;|\; v y\\\Downarrow\\ T y. \end{array}
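Here’s a toy Python sketch of that interaction, with closures standing in for blue-calculus terms. The names `channels`, `listen` and `send` are invented for this illustration; they aren’t part of any real blue-calculus implementation:

```python
# A toy model of reduction (3): a listener registered on an address waits
# for a message, and sending a message to that address hands it to the
# listener's continuation. All names here are invented for the sketch.
channels = {}

def listen(addr, T):
    # (addr <= T): register T as the process listening on addr
    channels[addr] = T

def send(addr, msg):
    # addr msg: deliver msg to whoever listens on addr, yielding T msg
    T = channels.pop(addr)
    return T(msg)

# lambda x.(x <= T) and lambda z.(z y), both applied to the same address 'v':
T = lambda y: ('T', y)       # a stand-in for T applied to its message
listen('v', T)               # (v <= T)
result = send('v', 'y')      # v y  reduces to  T y
print(result)
```

The point is that the two processes only interact once they are handed the same address, just as in (3).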
Posted by: Mike Stay on May 4, 2011 9:08 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Mike Stay wrote:

Nu allocates a new channel for sending and receiving messages and sticks the address of the channel in a variable; how would you get that address to two different processes other than by giving that variable to both of them?

When I was first trying to understand this stuff (and in some sense I still am), concepts like “message”, “channel” and “address” really threw me. These are not concepts that most mathematicians ordinarily use, at least not in their work on math. We’re pretty comfortable with concepts like “variable”, “free variable”, “bound variable” and the like… but it seems to me that this work on concurrency is trying to formalize a bunch of ideas that are familiar at an intuitive level among computer scientists, but not to most mathematicians or even logicians.

I don’t know if this is what is bothering Mike. But I thought I should say this, at least to reassure any mathematicians reading this discussion who find words like “message”, “channel” and “address” as baffling as I once did.

Posted by: John Baez on May 5, 2011 6:21 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

John said:

I don’t know if this is what is bothering Mike. But I thought I should say this, at least to reassure any mathematicians reading this discussion who find words like “message”, “channel” and “address” as baffling as I once did.

Isn’t it a little bit “cruel” to say that it is baffling, but that you do understand it now - and then not to explain it?

Posted by: Tim van Beek on May 5, 2011 2:31 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

It’s Mike Stay, not me, who really understands this stuff. I just said it’s less baffling to me than it once was.

I understand SymMonCat-enriched symmetric monoidal closed bicategories—that’s a piece of cake, because I know nice examples and have thought about category theory a lot. But stuff like “messages”, “channels” and “addresses”—that’s stuff you computer guys have more intuition for. I have some vague idea about how computers work, and I’ve spent a little time trying to understand how people try to formalize that, abstracting away a lot of the details. But my hands-on programming experience didn’t bring me into much contact with the concepts that are formalized by the blue calculus. So, it remains much more elusive to me than the λ-calculus—and even that I understood only after teaching a class on it.

I wanted Mike to write a thesis that treated the λ-calculus using bicategorical ideas. It turned out Seely, Hilken and others have already done a lot of work here. I still think there’s a lot more to do. But Mike Stay does actual computer programming, and he found the blue calculus a lot more relevant to modern-day issues. The bicategories are also more useful here, due to the lack of confluence (mentioned in his post above).

Posted by: John Baez on May 5, 2011 2:52 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Hm, here is how I would explain it; I hope that’s correct in the given context, and that Mike et al. do not use some of the buzzwords in a different sense:

computer memory is a finite sequence of bits:

memory := (b_n), n = 1, ..., N

An address is an index of a bit. A typed variable is a tuple (name, address, type). The name is really just a name that lets us talk about the tuple, so you could write as well name := (address, type). But what’s the type?

When we (or rather processes) need to know the value of the variable “name” starting at the address b_n, we have to look at the bit b_n and have the type tell us how to interpret the following bits. For example, “type” could be “4 bits denoting a positive natural number with big-endian encoding”, which means the most significant bit comes first. So if we read

1010

that would mean that the variable “name” has the value

1·8 + 0·4 + 1·2 + 0·1 = 10

as a decimal number. A “message” is a tuple of typed variables; a “channel” means that one process can send a message to another process, so a channel could be identified again by some unique name and the two processes it links, like (channelname, writingprocess, listeningprocess).
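In code, the decoding step just described is the usual most-significant-bit-first fold; a minimal Python sketch of the example above:

```python
# Interpret the 4 bits 1010 as a big-endian natural number (most
# significant bit first): 1*8 + 0*4 + 1*2 + 0*1 = 10.
bits = [1, 0, 1, 0]
value = 0
for b in bits:
    value = 2 * value + b   # shift left, bring in the next bit
print(value)  # → 10
```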

Concurrent programming is all about having more than one process (or thread, which is synonymous in theory, but not in practice) with shared state.

“Shared state” can mean that there is a variable that at least two processes write to and read from (then the processes share the same part of the memory, they both have access to it). “Shared state” can also mean that a process sends a message to another process and vice versa, which means that they don’t share the same memory: Each process has its own allocated memory, and the receiving process will store (duplicate) the received message in its own share of the memory.

(BTW: “Allocation of memory” means that a process tells the operating system “I need memory to store a 4-bit variable”, and the operating system, when it is nice, answers with an address m, which means that the process may use the bits b_m, ..., b_{m+3} to store its variable.)

This may look like a trivial distinction, but it does make a lot of difference in practice: Java’s concurrency model is based on shared memory, which is very difficult to handle in practice. Thinking in exchanged messages seems to be much easier for humans; it’s therefore the model that Scala has adopted.
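The two flavors of shared state can be seen in a few lines of Python: `threading` plays the role of Java-style shared memory, and `queue.Queue` plays the role of a message channel. This is a toy sketch of the distinction, not a model of how Scala’s actors actually work:

```python
import threading
import queue

# Shared state via memory: two threads write to the same variable. Without
# the lock, the read-modify-write sequences could interleave and lose an
# update.
counter = 0
lock = threading.Lock()

def bump():
    global counter
    with lock:
        counter += 1

threads = [threading.Thread(target=bump) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # → 2

# Shared state via messages: sender and receiver share only the channel;
# the receiver stores its own copy of the message.
channel = queue.Queue()
channel.put(('x', 2))            # send the typed variable ('x', 2)
name, received = channel.get()   # receive and copy it
print(name, received)
```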

HTH

Posted by: Tim van Beek on May 5, 2011 3:50 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Sure: here you’re treating Set as a one-object bicategory. The terms (with only one object, there’s no difference between open and closed terms) are spaces of potential configurations for the computer and the rewrite rules are functions acting on those configuration spaces. If you have two parts of the system A and B in a configuration where they both want to write to the same spot in memory, there are two ways to let both do it: one where A goes first and another where B goes first. These are represented by different functions between the sets and lead to different configurations.

Posted by: Mike Stay on May 6, 2011 2:38 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

John wrote:

I wanted Mike to write a thesis that treated the λ-calculus using bicategorical ideas.

Why? Is there a tagline for people who are new to this topic, something you’d tell a journalist? (Or a member of Google’s higher management :-)

I’ve been a member of developer teams of several multithreading programs that are in use, so I can offer some practical experience. In my experience, there isn’t a language that is both sophisticated and abstract enough to talk about concurrent systems. If (higher) category theory is a language of languages, or a theory for theories, a naive question would be: Is Mike after the creation of a formalized language that can describe concurrent systems?

Example: I’m mostly sure that it should be possible to determine the possibility of deadlocks via a static code analysis (for a programming language that supports the explicit locking of resources), but haven’t heard of any attempt to develop a tool capable of this (one that is useful for realistic systems, that is).
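One standard static check of this kind builds a lock-order graph: add an edge L1 → L2 whenever some code path can hold L1 while acquiring L2, and look for cycles; a cycle means a deadlock is possible. Here is a toy Python sketch, where the `paths` input is hypothetical (extracting the acquisition orders from real code is the hard part):

```python
# Hypothetical output of a static analysis: the order in which each code
# path acquires locks.
paths = [
    ['A', 'B'],   # this path takes lock A, then lock B
    ['B', 'A'],   # this one takes B, then A: the classic deadlock pattern
]

# Edge (l1, l2): some path can hold l1 while acquiring l2.
edges = set()
for path in paths:
    for i, l1 in enumerate(path):
        for l2 in path[i + 1:]:
            edges.add((l1, l2))

def has_cycle(edges):
    # Depth-first search with white/grey/black coloring; reaching a grey
    # node again means a cycle, i.e. a possible deadlock.
    nodes = {n for e in edges for n in e}
    color = {n: 0 for n in nodes}  # 0 = unvisited, 1 = in progress, 2 = done
    def dfs(n):
        color[n] = 1
        for a, b in edges:
            if a == n and (color[b] == 1 or (color[b] == 0 and dfs(b))):
                return True
        color[n] = 2
        return False
    return any(color[n] == 0 and dfs(n) for n in nodes)

print(has_cycle(edges))  # → True: a deadlock is possible
```

This only detects the *possibility* of a deadlock from lock ordering; it says nothing about whether the racing paths can actually run concurrently in a realistic system.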

Posted by: Tim van Beek on May 6, 2011 7:45 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Is Mike after the creation of a formalized language that can describe concurrent systems?

Yes.

Example: I’m mostly sure that it should be possible to determine the possibility of deadlocks via a static code analysis (for a programming language that supports the explicit locking of resources), but haven’t heard of any attempt to develop a tool capable of this (one that is useful for realistic systems, that is).

Greg Meredith has made such a system, and has a much better understanding of concurrency than I do; you might be interested in talking to him.

Posted by: Mike Stay on May 7, 2011 1:17 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

John wrote:

I wanted Mike to write a thesis that treated the λ\lambda-calculus using bicategorical ideas.

Tim wrote:

Why? Is there a tagline for people who are new to this topic, something you’d tell a journalist?

Sure: we need bicategories to describe the process of computation!

When I first heard that the lambda calculus was related to category theory, I got excited because I like the idea of a computation as a morphism: any sort of process should be a morphism in some category.

But then I was shocked to discover that in the usual story relating the lambda calculus to categories, the process of computation is completely neglected!

In the lambda calculus, the process of computation consists of taking a ‘term’ and repeatedly ‘rewriting’ it using certain rules until it’s as simple as possible. This mimics a program running until it’s done.

But these ‘rewrites’ are not morphisms in the category that people usually look at.

Instead, they look at a category where objects are ‘types’ in the simply typed lambda calculus, and morphisms are equivalence classes of terms, where two terms are considered equal if you can get from one to another using rewrites.

In computer science this corresponds to the following: the objects are datatypes x, y, z, …, but the morphisms f : x → y are equivalence classes of programs that take data of type x and output data of type y. And this equivalence relation is so coarse that a program that computes

2 × (3 + 5)

and then prints out the answer is considered equivalent to one that simply prints out

16

They differ by rewrites, so they’re considered the same.

So, the process of computation is being neglected.

The answer is to introduce a bicategory where we also have 2-morphisms corresponding to rewrites.

I taught a class about this here, on the n-Category Café, starting in the fall of 2006 and going on to the spring of 2007. The idea of this class was to set up some basic ideas; I hoped Mike would write a thesis fleshing out the details.

However, he wants to go further and consider the blue calculus. The lack of confluence in the blue calculus, typical of situations where ‘concurrency’ is important, makes it completely absurd to identify terms that differ by rewrites: you’d wind up identifying really different things, like the numbers 3 and 7. So here the bicategorical approach is not merely better than the categorical one: it’s the only one that makes sense.

I still, however, love the lambda calculus and wish someone would take some of the deeper results in this theory and translate them into the language of bicategories. I think this could lead to some new insights. Barnaby Hilken made a very good start here:

However, my friend Paul-André Melliès knows interesting further results concerning ‘standardization’ that seem to cry for a bicategorical or even tricategorical treatment.

Note, I’m not interested in this project because I think it will help computer science, at least not right away. I’m sure it would eventually, because it would help bridge the gap between the worldview of mathematics and the worldview of computer science. But my main motivation is just to clarify an important point. When Lambek noticed the relation between the lambda-calculus and cartesian closed categories, a lot of category theorists and highly theoretical computer scientists seem to have gotten the impression that everything truly interesting about the lambda-calculus was captured by cartesian closed categories. This is far from true, but it’s a beautiful dream and it’s on the right track: going to a cartesian closed bicategory gets us much closer, by putting computations into the picture.

Posted by: John Baez on May 7, 2011 1:27 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

So, the process of computation is being neglected.

The answer is to introduce a bicategory where we also have 2-morphisms corresponding to rewrites.

[…] I hoped Mike would write a thesis fleshing out the details.

However, he wants to go further and consider the blue calculus.

Which sounds like a good idea if one really wants to bring genuine 2-categories into the game. For higher categorical plain lambda-calculus it seems that (2,1)-categories, hence homotopy type theory, would be sufficient.

Posted by: Urs Schreiber on May 7, 2011 11:21 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Urs wrote:

For higher categorical plain lambda-calculus it seems that (2,1)-categories, hence homotopy type theory, would be sufficient.

I don’t think so. People using lambda-calculus to model computation often choose a preferred direction to the rewrite rules, so that computations will go from complicated expressions to simpler ones and eventually terminate in a ‘normal form’. This is one of the things people are trying to understand by modelling the lambda calculus 2-categorically.

Posted by: John Baez on May 7, 2011 3:32 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Let me guess Urs’ response. I could be totally wrong, but I would suspect Urs’ response would be something like, “Yes, but a (2,1)-category is directed.”

I know he would say (∞,1)-categories are directed.

Posted by: Eric on May 7, 2011 4:07 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

I could be totally wrong

Yes, you are off by one level. The discussion is about the directedness of the hom-objects:

In the game being played here

  • a type is an object T;

  • a term of type T is a morphism

    \array{ f \\ \downarrow^{F} \\ T }

    (a generalized element of T)

  • a rewrite r from a term G to a term F is a 2-morphism

    \array{ f &&\to&& g \\ & {}_{\mathllap{F}}\searrow &\swArrow_r& \swarrow_{\mathrlap{G}} \\ && T } \,.

So

  • in a 1-category terms related by rewriting rules are identified

  • in a (2,1)-category terms related by a rewriting rule are taken to be isomorphic;

  • in a category enriched in categories with terminal object, there are non-reversible rewrites, but there is a guaranteed essentially unique (“normal”) form of the term at which the rewriting stops;

  • in a category enriched in posets with terminal object, there are non-reversible rewrites, but there is a guaranteed essentially unique (“normal”) form of the term at which the rewriting stops; and moreover rewriting always progresses and cannot turn back;

  • in a general 2-category rewrites may lead to completely unrelated different forms of the term.

Posted by: Urs Schreiber on May 7, 2011 8:21 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Having actual terminal objects in the hom-categories says more than that: it says that any two terms of the same type have the same normal form, which makes the system rather boring. I think maybe you want disjoint unions of categories with terminal objects, or maybe even weakly terminal objects (can there be nonequivalent rewrite sequences leading to the same normal form?)

Posted by: Mike Shulman on May 8, 2011 4:06 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

In my 2007 course notes on 2-categories of computation, I defined what it means for a category to be ‘confluent and terminating’. This is often (though certainly not always!) what we want the hom-categories associated to the 2-λ-calculus to be like.

Hilken gives a much deeper treatment of confluence here:

Posted by: John Baez on May 8, 2011 5:19 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Having actual terminal objects in the hom-categories says more than that: it says that any two terms of the same type have the same normal form,

Only if they also have the same domain. But, sure, for the usual choice of domains (contexts) I should have said “disjoint union of categories with terminal object”.

In any case, I still think that blue calculus is in a way intrinsically 2-categorical whereas λ-calculus is not. λ-calculus can be interpreted in a suitable (1,1)- or (2,1)-category, even though only something more general will see rewriting as a process. But blue calculus cannot be interpreted in a (2,1)-category at all, because it has rewrites that are genuinely irreversible.

It seems to me that there are two different kinds of directedness here, one intrinsic, the other being more of an additional layer of information.

For instance, isn’t it right that there are different notions of what counts as “normal form” for a λ-term? And some terms won’t have a normal form in some of these senses at all. For instance

(\lambda x . x x) (\lambda x . x x)

can’t be reduced to a term with only a single λ-abstraction. So it would seem that which of the directions of rewrites of λ-calculus terms is preferred is a matter of convention. Is that not right?

Posted by: Urs Schreiber on May 8, 2011 10:50 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Urs wrote:

For instance, isn’t it right that there are different notions of what counts as “normal form” for a λ-term?

Yes, this is a big subject. The classic approach corresponds to saying you’ve hit a normal form when you can’t do β- or η-reduction anymore. This is called a βη normal form.

And some terms won’t have normal form in some of these senses at all.

Your example here, a classic one, comes from the so-called ‘untyped’ λ-calculus. In the language of category theory, this calculus is talking about a cartesian closed category with an object X such that X^X = X. In this context the Church-Rosser theorem says the βη normal form is unique if it exists.

The ‘simply-typed’ lambda-calculus is better-behaved in many ways. In the language of category theory, this calculus is talking about the free cartesian closed category on some set of objects. In this context the Church-Rosser theorem says the βη normal form exists and is unique.

I translated this latter version of the Church-Rosser theorem into the language of 2-categories here. It says that all the hom-categories in some 2-category are terminating and confluent.

Of course, for any computationally universal form of the λ-calculus, we have to have some nontermination, thanks to the Halting Problem issue.

So it would seem that which of the directions of rewrites of λ-calculus terms is preferred is a matter of convention. Is that not right?

Yes, there’s a really interesting case to be made for taking η-expansion as a morphism rather than η-reduction. This is one of the classic papers on the 2-λ-calculus:

With η-expansion we don’t have terminating hom-categories, but we still get something interesting, which they explain. The relevant buzzword is ‘long βη-normal form’.

Posted by: John Baez on May 8, 2011 11:52 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Okay, so then I want to re-iterate my above statement:

I think that for the purpose of establishing 2-categories in the context of programming it is good that Mike Stay passed from the λ-calculus to the blue calculus:

in the former one can introduce directedness of rewrites by hand, and I can see that it can be useful, but in the latter directedness is god-given and no convention in the world can remove it. There is potential room to argue about whether the study of λ-calculus benefits from using 2-categorical language (at least there is something to be demonstrated here, not that I doubt it can be done), whereas there is no wiggle room for the blue calculus: some of its rewrites have a god-given direction.

Posted by: Urs Schreiber on May 8, 2011 1:45 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

I agree with you completely now, Urs.

And your comments made me change my opinion slightly. I now think it may be optimal to separate studies of the λ-calculus into two parts: the ‘god-given’ or ‘logical’ part which is about a certain cartesian closed (2,1)-category canonically associated to a λ-theory, and a ‘by hand’ or ‘computational’ part in which one introduces irreversibility into the 2-morphisms somehow.

There are various ways to introduce irreversibility:

  • We can take a sub-2-category with fewer 2-morphisms, which is what I’ve been talking about so far.  
  • Or, we can introduce the concept of a deterministic evaluation strategy, which associates to each 1-morphism (or ‘term’)

    f : x → y

    a certain 2-morphism (or ‘rewrite’)

    α_f : f → F(f).

    Here F(f) is the result of rewriting f according to the chosen strategy. There’s a pretty obvious concept of a rewriting strategy being ‘terminating’: F^{n+1}(f) = F^n(f) for all f for sufficiently large n. We can also hope that α is a pseudonatural transformation from the identity to F, though I’m not sure how often this occurs.

  • Or, we can think about nondeterministic evaluation strategies, where we allow ourselves a choice about how to rewrite terms.
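A deterministic evaluation strategy in the second sense is easy to sketch in Python if we use arithmetic expression trees in place of λ-terms (an illustrative toy, not the λ-calculus itself): F performs one leftmost-outermost rewrite, and ‘terminating’ means iterating F reaches a fixed point.

```python
# Expressions are either ints (normal forms) or tuples ('+', a, b).
def F(t):
    # One step of a deterministic strategy: rewrite the leftmost redex.
    if isinstance(t, int):
        return t                      # already normal: F(t) = t
    op, a, b = t
    if isinstance(a, int) and isinstance(b, int):
        return a + b                  # the redex itself
    if not isinstance(a, int):
        return (op, F(a), b)          # go left first
    return (op, a, F(b))

def normalize(t):
    # Iterate the strategy; termination is F^{n+1}(t) = F^n(t).
    while True:
        t2 = F(t)
        if t2 == t:
            return t
        t = t2

print(normalize(('+', ('+', 1, 2), ('+', 3, 4))))  # → 10
```

Each pass through the loop is one of the 2-morphisms α_f above; the chain of them is the whole computation.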
Posted by: John Baez on May 9, 2011 3:54 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

But is the process of rewriting in classical lambda-calculus really a good model of the process of computation in an actual computer?

It seems to me that there are roughly two levels to models of computation: describing what can be computed, and describing how well (how quickly, how effectively) it can be computed. For the first level, it suffices to identify terms that are related by rewrites. For the second, we need a model which reflects what a computer does.

In practice, concepts belonging to the second level, like complexity classes, are usually defined using alternate models of computation like Turing machines or circuits. This is arguably due to cultural and historical factors rather than mathematical ones, since I’ve been told that one can just as well define such concepts using lambda calculus—but my understanding is that in order to do this, one has to be at least a little careful in defining the relevant “measure of complexity” of a lambda term. In particular, one doesn’t just count the number of rewrites it takes to reduce to a normal form.

So would a categorical model of the process of lambda rewriting really say anything about the process of computation, which is what we’re really interested in?

Posted by: Mike Shulman on May 7, 2011 8:00 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Mike wrote:

But is the process of rewriting in classical lambda-calculus really a good model of the process of computation in an actual computer?

An actual computer?

Probably not. Actual computers are pretty complicated! I’d love to understand them, but I’m still stuck back somewhere around the 1930s, when Church developed the lambda calculus as an abstract model of computation, before von Neumann took it all literally and actually built one of the darn things.

So would a categorical model of the process of lambda rewriting really say anything about the process of computation, which is what we’re really interested in?

There’s lots of imaginable processes of computation: those carried out by all the actual computers that have been built, the simplified abstract ones that theoretical computer scientists like to think about, and lots more. Actual computers keep getting more complicated. But the lambda-calculus is famous because it’s an extremely simple model of computation that looks almost like ‘logic’, while still managing to compute all recursive functions. So, you can see processes of computation in a very primitive form grinding away:

\begin{aligned}
(\lambda f x .f (f (f x )))(\lambda g y.g (g y))(\lambda z.z+1)(0)\\
\to(\lambda x .(\lambda g y.g (g y))((\lambda g y.g (g y))((\lambda g y.g (g y))x )))(\lambda z.z+1)(0)\\
\to(\lambda x .(\lambda g y.g (g y))((\lambda g y.g (g y))(\lambda y.x (x y))))(\lambda z.z+1)(0)\\
\to(\lambda x .(\lambda g y.g (g y))((\lambda g y.g (g y))(\lambda w.x (x w))))(\lambda z.z+1)(0)\\
\to(\lambda x .(\lambda g y.g (g y))(\lambda y.(\lambda w.x (x w))((\lambda w.x (x w))y)))(\lambda z.z+1)(0)\\
\to(\lambda x .(\lambda g y.g (g y))(\lambda y.(\lambda w.x (x w))(x (x y))))(\lambda z.z+1)(0)\\
\to(\lambda x .(\lambda g y.g (g y))(\lambda y.x (x (x (x y)))))(\lambda z.z+1)(0)\\
\to(\lambda x .(\lambda g y.g (g y))(\lambda w.x (x (x (x w)))))(\lambda z.z+1)(0)\\
\to(\lambda x .(\lambda y.(\lambda w.x (x (x (x w))))((\lambda w.x (x (x (x w))))y)))(\lambda z.z+1)(0)\\
\to(\lambda x .(\lambda y.(\lambda w.x (x (x (x w))))(x (x (x (x y))))))(\lambda z.z+1)(0)\\
\to(\lambda x .(\lambda y.x (x (x (x (x (x (x (x y)))))))))(\lambda z.z+1)(0)\\
\to(\lambda y.(\lambda z.z+1)((\lambda z.z+1)((\lambda z.z+1)((\lambda z.z+1)\\
\quad((\lambda z.z+1)((\lambda z.z+1)((\lambda z.z+1)((\lambda z.z+1)y))))))))(0)\\
\to(\lambda y.(\lambda z.z+1)((\lambda z.z+1)((\lambda z.z+1)((\lambda z.z+1)\\
\quad((\lambda z.z+1)((\lambda z.z+1)((\lambda z.z+1)(y+1))))))))(0)\\
\to(\lambda y.(\lambda z.z+1)((\lambda z.z+1)((\lambda z.z+1)((\lambda z.z+1)((\lambda z.z+1)((\lambda z.z+1)(y+2)))))))(0)\\
\to(\lambda y.(\lambda z.z+1)((\lambda z.z+1)((\lambda z.z+1)((\lambda z.z+1)((\lambda z.z+1)(y+3))))))(0)\\
\to(\lambda y.(\lambda z.z+1)((\lambda z.z+1)((\lambda z.z+1)((\lambda z.z+1)(y+4)))))(0)\\
\to(\lambda y.(\lambda z.z+1)((\lambda z.z+1)((\lambda z.z+1)(y+5))))(0)\\
\to(\lambda y.(\lambda z.z+1)((\lambda z.z+1)(y+6)))(0)\\
\to(\lambda y.(\lambda z.z+1)(y+7))(0)\\
\to(\lambda y.y+8)(0)\\
\to(0+8)\\
\to 8
\end{aligned}

This is a little ‘program’ that computes 2^3. This kind of stuff appeals to certain people. I like thinking of each step of this computation as a kind of morphism. So I was bummed when Lambek’s famous result about the lambda-calculus and cartesian closed categories turned out to be based on treating all the expressions above as equal.

Bummed? Well, for about 5 seconds—but then I was pleased, because I saw that this was another great opportunity for categorification to work its wonders. All the stuff in Lambek and Scott’s book really wants to get boosted up to cartesian closed 2-categories!

Of course it turned out I wasn’t the first to have this idea:

And more recently:

There’s still a lot of stuff in Barendregt’s magisterial tome on the lambda-calculus that deserves to be understood in 2-categorical terms. If you want to really affect the world, it’s probably better to apply higher categories to actual computers, or actual modern computer languages. But that’d require more expertise and focus than I have.
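Incidentally, the big reduction above can be run directly in any language with first-class functions; here is a Python sketch of the same program, using Church numerals with the final λz.z+1 and 0 as ordinary arithmetic:

```python
# Church numerals: a numeral n takes f and returns f composed with itself
# n times.
two   = lambda f: lambda y: f(f(y))       # λg y. g (g y)
three = lambda f: lambda x: f(f(f(x)))    # λf x. f (f (f x))
succ  = lambda z: z + 1                   # λz. z + 1

# Applying three to two composes two with itself three times, which squares
# an operator three times: f ↦ f^2 ↦ f^4 ↦ f^8. So three(two) is the
# Church numeral for 2^3.
eight = three(two)
print(eight(succ)(0))  # → 8
```

Python of course performs the β-reductions for us eagerly; the 2-categorical point is precisely that each of those hidden steps deserves to be a morphism.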

Posted by: John Baez on May 7, 2011 9:35 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

John wrote:

Actual computers are pretty complicated! I’d love to understand them, but I’m still stuck back somewhere around the 1930s, when Church developed the lambda calculus as an abstract model of computation, before von Neumann took it all literally and actually built one of the darn things.

You don’t have to understand the details of the hardware or the electronics; it’s enough to understand higher-level programming languages that abstract away a lot of the hard stuff. A programming language that is executed in a virtual machine (Python, Ruby, Java, C#) does not depend on details of the operating system either. And just in case there is no language that everyone in the room understands, people usually make up a pseudo-language that abstracts away the syntactic peculiarities of existing languages. So actually it is enough to understand that, and the world of theoretical computer science is open to you.

Posted by: Tim van Beek on May 9, 2011 8:19 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

I’ve been slowly and tentatively entering the world of theoretical computer science, but since I don’t program using any modern language, I lack the familiarity with this world that I have with, say, physics. Helping Mike Stay with his thesis has been a wonderful way to start overcoming this obstacle. But someday I should bite the bullet and do a bit more programming.

Computer science should eventually be a big part of my network theory project, not only because computation and communication involves networks, but because people are trying to write programs that simulate other kinds of networks, like biological systems. A good understanding of networks should help.

Posted by: John Baez on May 9, 2011 8:41 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

But is the process of rewriting in classical lambda-calculus really a good model of the process of computation in an actual computer?

It’s pretty good, yes. You can actually build a machine for which the symbols of a lambda term are the entire internal state of the computer. The process of computation would be rewriting those symbols, as John illustrated.

It’s useful to think of a computer as a 2-category internal to Set:

  • You have a set of objects, your types. These are largely formal; the types are just there for making sure compositions of morphisms are well-formed. You can, however, think of a type T as corresponding to the set of its elements, the closed terms of type 1 → T.

  • You have a set of morphisms, the terms. Each element of that set is some internal state of the computer.

  • You have a set of 2-morphisms, rewriting steps that update the internal state of the computer. In lambda calculus, when a normal form exists, these rules are confluent and terminating. In blue calculus, they are certainly not confluent in general.

(Back to Mike’s comments.)

It seems to me that there are roughly two levels to models of computation: describing what can be computed, and describing how well (how quickly, how effectively) it can be computed. For the first level, it suffices to identify terms that are related by rewrites. For the second, we need a model which reflects what a computer does.

I agree that this is true for lambda calculus; your first level corresponds to 1-morphisms, while your second to 2-morphisms.

We can’t identify blue terms related by rewrites without saying that, for example, 2=3. What is being computed depends on how it is computed; the result of a computation is not path-independent like in the lambda calculus.

In practice, concepts belonging to the second level, like complexity classes, are usually defined using alternate models of computation like Turing machines or circuits. This is arguably due to cultural and historical factors rather than mathematical ones, since I’ve been told that one can just as well define such concepts using lambda calculus—but my understanding is that in order to do this, one has to be at least a little careful in defining the relevant “measure of complexity” of a lambda term. In particular, one doesn’t just count the number of rewrites it takes to reduce to a normal form.

Really? I was unaware of that. Why don’t you just count the number of rewrites?

Part of my motivation for looking at blue calculus was Baez’s note “Action as a Functor”, where paths through configuration space get assigned actions. Here, paths through configuration space are 2-morphisms; each should be assigned a cost; for example, on modern CPUs they talk about the number of cycles each instruction takes. It then makes sense to talk about how the cost scales with the size of the input.

So would a categorical model of the process of lambda rewriting really say anything about the process of computation, which is what we’re really interested in?

A 2-categorical model like Hilken’s would talk about the process of computation. And I’m suggesting that a 2-categorical model of blue calculus is necessary because you can’t identify terms related by rewrites.

Posted by: Mike Stay on May 9, 2011 9:40 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Why don’t you just count the number of rewrites?

I’m not an expert here, but here are a couple of links that I found by Googling “lambda calculus complexity theory”, and which agree with the general impression I had formed previously:

Posted by: Mike Shulman on May 10, 2011 3:10 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

OK; it seems the major objection is that beta reduction has to do replacements along the whole term, so it should have a cost proportional to the length of the term. Beta reduction in the 2-lambda calculus or in blue calculus is done using 2-morphisms that are particular instances of beta reduction, and can be assigned costs individually.

Posted by: Mike Stay on May 10, 2011 5:48 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

But even a single instance of beta reduction can perform many substitutions, one for each occurrence of the variable being substituted for.

$$(\lambda x. f x x x x x x)(a b c) \to_\beta f (a b c)(a b c)(a b c)(a b c)(a b c)(a b c)$$

is a single beta reduction. Right?

Posted by: Mike Shulman on May 10, 2011 11:56 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Yes, by assigning costs to instances separately, it’s no longer “just counting beta reductions”; that instance of beta reduction you mention would have a cost of 6*3=18, whereas

$$(\lambda x. f x x)(d(e f)) = f(d(e f))(d(e f))$$

would have a cost of 2*5=10.
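A toy version of this cost accounting (my own sketch, not taken from any paper): charge each instance of beta reduction the number of occurrences of the bound variable times the token size of the argument.

```javascript
// Toy cost model for one instance of beta reduction (illustrative only):
//   cost = (occurrences of the bound variable in the body)
//        * (size of the argument, counted in tokens).
function betaCost(bodyTokens, variable, argTokens) {
  const occurrences = bodyTokens.filter(tok => tok === variable).length;
  return occurrences * argTokens.length;
}

// (\x. f x x x x x x)(a b c): six occurrences of x, argument of size 3.
console.log(betaCost(["f","x","x","x","x","x","x"], "x", ["a","b","c"])); // 18

// (\x. f x x)(d(e f)): two occurrences of x, argument of size 5.
console.log(betaCost(["f","x","x"], "x", ["d","(","e","f",")"])); // 10
```
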

Posted by: Mike Stay on May 11, 2011 1:00 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

My impression is that it’s not that straightforward. If it were, why would people be spending so much effort on it? But I’d be happy to be proven wrong.

Posted by: Mike Shulman on May 11, 2011 2:33 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

I recently started working a bit in this area, and I wrote a blog post describing a less-well-known-than-it-should-be categorical model of bounded space usage in the lambda calculus, due to Martin Hofmann.

As usual, when you start tracking more detail, the available structure increases – space-bounded computation has a very pretty model which is a doubly closed category: it’s got both monoidal and cartesian closed structure. It’s also the simplest nontrivial (not posetal) example of such a category I know. All of the other ones I know come via Day’s construction, whereas this one is just a concrete category.

(Hofmann and his collaborators have done the same thing for time, or a mix, as well as other complexity classes, though I don’t talk about that in my blog post.)

Posted by: Neel Krishnaswami on May 10, 2011 8:09 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

a 2-category internal to Set

I think most people would just call that “a 2-category”…

You have a set of morphisms, the terms. Each element of that set is some internal state of the computer.

Okay, this is coming closer to answering my question. I’m guessing from the above comment and from this one that you mean to think of a term of the blue calculus, instead of being a function coded in some programming language the way I think of a term of the lambda calculus, as a “state of a machine”, which might include things like (1) several programs in memory at different places, in different stages of being executed in parallel (including perhaps a list of “program counters” marking which programs where in memory are in what stage of being executed), and (2) data stored in certain memory locations which might be readable or writable by one or more of these programs.

This helps. Now I can guess, for instance, that $t_1 | t_2$ means that we simply place the machine states represented by $t_1$ and $t_2$ at different places in memory. And if $t_1$ and $t_2$ share some variable names, then we identify their corresponding memory locations, so that the processes in $t_1$ and $t_2$ can communicate with each other. (Though it still seems to me as though the memory locations which we identify should be notated as part of the “|” operation, rather than specified implicitly by sharing of variable names.)

If that’s correct, then I have to return to my question “what does it mean to assign a type to a blue-calculus term”? And more basically, what is a “type” in the blue calculus? For instance, if I have one term which represents a program in the process of computing an integer, and another term which represents a program in the process of computing a real number, and I place these two machine states side by side with the programs prepared to run in parallel, then what is the type of the resulting term?

We can’t identify blue terms related by rewrites without saying that, for example, 2=3. What is being computed depends on how it is computed; the result of a computation is not path-independent like in the lambda calculus.

Yes… but parallelism is irrelevant for the question of “what can be computed”, since anything that you can compute by a parallel computation could also be computed by a sequential one (only more slowly). So it seems to me that the blue calculus is only interesting when we ask the question of how things are computed.

Posted by: Mike Shulman on May 10, 2011 3:24 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

I’m guessing from the above comment and from this one that you mean to think of a term of the blue calculus, instead of being a function coded in some programming language the way I think of a term of the lambda calculus, as a “state of a machine”.

Yes.

If that’s correct, then I have to return to my question “what does it mean to assign a type to a blue-calculus term”?

Pretty much the same thing as it means to assign a type to a lambda term, except that we’re not taking equivalence classes of terms. While there’s some subtlety about “generalized points” depending on the bicategory in which you interpret the calculus, you can pretty much think of a type $T$ as consisting of the set of terms $1 \to T$ that have no reductions.

And more basically, what is a “type” in the blue calculus? For instance, if I have one term which represents a program in the process of computing an integer, and another term which represents a program in the process of computing a real number, and I place these two machine states side by side with the programs prepared to run in parallel, then what is the type of the resulting term?

It depends what you mean by “parallel”. If you put them in parallel using $\times$, where $- \times A$ is left pseudoadjoint to $A \multimap -$, then you have a result of type $Int \times Real$. If you mean run them concurrently, then you can’t: it’s not a well-typed term. You can only sum terms of the same type:

$$\frac{\Gamma \vdash T:X \qquad \Gamma \vdash T':X}{\Gamma \vdash (T \,|\, T'):X}.$$

There’s an “untyped” blue calculus where you can do concurrent sums of arbitrary terms, but then you’ve thrown out typing.

Yes… but parallelism is irrelevant for the question of “what can be computed”, since anything that you can compute by a parallel computation could also be computed by a sequential one (only more slowly). So it seems to me that the blue calculus is only interesting when we ask the question of how things are computed.

Sure. Concurrency doesn’t give you a larger set of computable functions. It does, however, let you ask security questions like “if computer A runs this program and computer B can only send these messages to these channels, is there any way for computer B to cause changes to this part of A’s memory?”

Posted by: Mike Stay on May 10, 2011 5:44 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Pretty much the same thing as it means to assign a type to a lambda term, except that we’re not taking equivalence classes of terms.

Unfortunately, that doesn’t mean anything to me. We’ve just concluded that unlike lambda terms, which can be regarded as functions which compute a value from inputs, a blue-calculus term is a “state of a machine”. We assign a type to a lambda term by looking at the type of the value it computes. In what way can we do the “same” thing to a blue calculus term, which doesn’t “compute a value”?

Posted by: Mike Shulman on May 10, 2011 11:47 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Unfortunately, that doesn’t mean anything to me. We’ve just concluded that unlike lambda terms, which can be regarded as functions which compute a value from inputs, a blue-calculus term is a “state of a machine”.

Sorry, I misread what you wrote. I meant to say that blue calculus terms AND lambda terms are “states of a machine”—that’s what I meant when I wrote in the other recent message that

You can actually build a machine for which the symbols of a lambda term are the entire internal state of the computer. The process of computation would be rewriting those symbols, as John illustrated.

You can interpret the state of the lambda calculus machine as specifying a function, whereas a state of a blue calculus machine specifies something more general. This generalized function thing still maps one set to another, but there are issues of nondeterminism and concurrency.

Posted by: Mike Stay on May 11, 2011 12:56 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

This generalized function thing still maps one set to another

This is what I’m failing to understand! In what sense does “a state of a machine” “map one set to another”?

Posted by: Mike Shulman on May 11, 2011 2:36 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

This is what I’m failing to understand! In what sense does “a state of a machine” “map one set to another”?

It works in just the same way it does in the lambda calculus. Consider the lambda calculus term

$$P = \lambda x y z. z x y.$$

The term $P$ is literally the state of the computer; you have memory cells that can hold a variable name or a dot or a lambda or a parenthesis, and you have a circuit that takes that memory as input and either outputs a flag saying it’s time to stop or writes new values as output. In this case, there’s nothing to reduce, so it halts.

We can think of a type $T$ as being the set of “elements” of $T$, i.e. the closed terms of type $T$ a.k.a. the morphisms of type $1 \to T$. In $P$ there are no free variables, so it’s an element of type $X \multimap Y \multimap (X \multimap Y \multimap Z) \multimap Z$.

But it also specifies a function: if you apply $P$ to an element of $X$, you get an element of $Y \multimap (X \multimap Y \multimap Z) \multimap Z$. If we have a term $t:X$, then

$$P t = (\lambda x y z. z x y) t \to_\beta \lambda y z. z t y.$$

This new state of the computer also specifies a function, one that maps elements of $Y$ to elements of $(X \multimap Y \multimap Z) \multimap Z$.

In the blue calculus, we can also think of $T$ as being the set of “elements” of $T$, the closed terms of type $T$. Since we can sum any two terms of the same type concurrently, all blue calculus types are monoids in this picture. The internal state of the blue calculus computer is a closed term. There’s a circuit that either signals that there are no more available rewrites or outputs a new state, but the circuit may be nondeterministic; for instance, it may depend on the environment of the computer rather than just the internal state of the computer.

Posted by: Mike Stay on May 11, 2011 8:42 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

It sounds like you’re saying that the “type” of a blue-calculus term has no meaning at all; it’s just something we assign formally to the term based on its lexical structure. Is that right?

Posted by: Mike Shulman on May 12, 2011 6:00 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

It sounds like you’re saying that the “type” of a blue-calculus term has no meaning at all; it’s just something we assign formally to the term based on its lexical structure. Is that right?

Yes, but the same could be said of lambda calculus terms. A Turing machine is a nonreversible but deterministic dynamical system, where there’s a fixed update rule for the set of states. Untyped lambda calculus allows for the possibility that multiple update rules might apply to a state, but they’re confluent when they terminate. Typed lambda calculus is a formal “oidification” of untyped lambda calculus based on the lexical structure of the term.

  • Types are formal.
  • Terms are elements of a set.
  • Rewrites are endofunctions on that set.

In blue calculus, these rewrites aren’t confluent.
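Milner’s shared-variable example from the post can be phrased in exactly these terms (a minimal sketch; the state representation is my own invention): rewrites are endofunctions on a set of states, and interleaving the same steps in different orders reaches different final states.

```javascript
// Rewrites as endofunctions on a set of machine states. With a shared
// variable x, the rewrites fail to be confluent: different interleavings
// of the same steps give different results.
const setX = v => state => ({ ...state, x: v });
const incX = state => ({ ...state, x: state.x + 1 });

// Program A: { x = 1; x = x + 1; }   Program B: { x = 2; }
const a1 = setX(1), a2 = incX, b1 = setX(2);

// A completes before B's statement runs: x ends up 2.
const s = b1(a2(a1({ x: 0 })));
// B's statement runs between A's two statements: x ends up 3.
const t = a2(b1(a1({ x: 0 })));
console.log(s.x, t.x); // 2 3
```
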

Posted by: Mike Stay on May 12, 2011 4:55 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Typed lambda calculus is a formal “oidification” of untyped lambda calculus based on the lexical structure of the term.

I suppose that’s one way to look at it, but I think that’s a bizarre way to look at it. We assign types to lambda terms because we are thinking of lambda terms as representing functions. The whole idea of typing arises from this conception, and the typing rules are motivated by it. I’ve been saying all along that what I want is an intuitive explanation of the types in blue calculus, not just their formal rules. You seem to be saying that there is no such explanation, only formal rules. Okay, well, I can live with that, I guess, although it makes the system seem much less natural to me. But let’s not pretend that the lambda calculus is similarly lacking in motivation!

Posted by: Mike Shulman on May 13, 2011 12:39 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

It is bizarre, because it wasn’t designed by a mathematician. It models how programmers actually use computers. They run multithreaded programs that share memory. They have computers send messages back & forth over networks. To me, that kind of talk is intuitive, because it’s what I was trained in and what I do for a living. I’m sorry it’s taking me so long to figure out what would be an intuitive explanation for a mathematician.

Let me try to turn blue calculus into a category instead of a bicategory. If we say blue calculus types are sets, then we follow Plotkin and say that terms with free variables in a typing context are relations. “Rewriting” is deciding which of the outcomes for a particular input actually appears; by taking all possible rewrites we hope to ignore the 2-morphism level. These compose in the obvious way. There’s the usual tensor product of relations.

There’s also the concurrent sum. To sum two terms of the same signature, you use alpha equivalence to give them the same free variables. Then you consider all possible interleavings of the terms’ rewrites. A relation doesn’t have this information; this was the point Milner was making when he mentioned Plotkin. You have to equip the relation with more information that says how they interact with each other.

I’m hoping this is more intuitive:

  • types are sets
  • terms are relations that map an element to one of several outputs plus some extra information involving the rewrite rules that says how to run them concurrently
Posted by: Mike Stay on May 13, 2011 6:37 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

I wasn’t saying that blue-calculus or concurrency is bizarre; I was saying that the assignment of “types” to blue-calculus terms without any intuitive meaning behind it is bizarre. I doubt that working programmers think of “types” as things assigned to their code purely formally on the basis of syntactic analysis. I would expect them to think of the type of “integers” as collecting all the 16- or 32-bit-sequences that represent “integers” on the computer, regardless of whether they are programming sequentially or in parallel.

If we say blue calculus types are sets, then we follow Plotkin and say that terms with free variables in a typing context are relations.

Okay, this is helpful! So while a lambda calculus term computes a definite value of some type on every input and thus represents a function, a blue calculus term is trying to compute a value of some type (the same sort of type) but its value might depend on the order in which its different threads execute; thus it is only a “multi-valued function”, i.e. a type of relation.

I still don’t understand the concurrent sum, though. If I have one term that computes a collection of possible outputs, and a second that computes some second collection, then how are those two collections combined when I do the concurrent sum? In other words, exactly what operation on the homsets of $Rel$ corresponds to the concurrent sum of blue-calculus terms?

Note, by the way, that the category $Rel$ of sets and relations is actually a bicategory, although somewhat degenerate: its hom-categories are partial orders. It seems possible to me that blue-calculus rewrites would map to reverse inclusions of relations: every time we choose a rewrite, we restrict the potential outcomes (by ruling out sequences of rewrites that don’t begin with the one we’ve chosen). Does that sound plausible?

Posted by: Mike Shulman on May 14, 2011 2:32 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

I still don’t understand the concurrent sum, though. If I have one term that computes a collection of possible outputs, and a second that computes some second collection, then how are those two collections combined when I do the concurrent sum? In other words, exactly what operation on the homsets of Rel corresponds to the concurrent sum of blue-calculus terms?

I don’t think there is one on the homsets of Rel; that’s why I said “relations plus some extra stuff”. Sometimes they don’t interfere with each other at all, and the result is just the product of the two collections you mention. Let

$$P = \nu x.(x y \;|\; x z \;|\; x \Leftarrow T)$$

and

$$Q = \nu w.(w y \;|\; w z \;|\; w \Leftarrow U).$$

By itself, $P$ evolves into

$$T y \quad \text{or} \quad T z$$

while $Q$ evolves into

$$U y \quad \text{or} \quad U z.$$

If $T$ and $U$ are both senders or both receivers, then no interaction is possible, and we get four possible outcomes for $P \;|\; Q$:

$$T y \;|\; U y \quad \text{or} \quad T z \;|\; U y \quad \text{or} \quad T y \;|\; U z \quad \text{or} \quad T z \;|\; U z$$

However, if we let

$$T = \lambda v. v \Leftarrow \nu u.(u r \;|\; u s \;|\; u t \;|\; u \Leftarrow S)$$

and

$$U = \lambda v. v p,$$

then in the first and last case, the summed terms interact and produce three more possibilities. Here’s how it goes for $T y \,|\, U y$:

$$y \Leftarrow \nu u.(u r \;|\; u s \;|\; u t \;|\; u \Leftarrow S) \;|\; y p$$
$$\Downarrow$$
$$\nu u.(u r \;|\; u s \;|\; u t \;|\; u \Leftarrow S)\, p$$
$$\Downarrow$$
$$S r p \quad \text{or} \quad S s p \quad \text{or} \quad S t p$$

I haven’t worked out yet what the extra stuff is, exactly, that we need to attach to the relation, since I haven’t been thinking about this model for very long. But it will have something to do with the structure of the term and the rewrites that occurred.
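The “all possible interleavings of the rewrites” bookkeeping can at least be sketched by enumerating the ways two independent step sequences can be merged (a toy helper of my own; it deliberately ignores the interaction steps, which are what make the concurrent sum more than this):

```javascript
// Enumerate every interleaving of two independent sequences of rewrite
// steps (interactions between the two terms are ignored here).
function interleavings(xs, ys) {
  if (xs.length === 0) return [ys];
  if (ys.length === 0) return [xs];
  const takeX = interleavings(xs.slice(1), ys).map(r => [xs[0], ...r]);
  const takeY = interleavings(xs, ys.slice(1)).map(r => [ys[0], ...r]);
  return [...takeX, ...takeY];
}

// Two independent two-step reductions admit C(4,2) = 6 schedules.
console.log(interleavings(["p1", "p2"], ["q1", "q2"]).length); // 6
```
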

By the way, CCS, Milner’s first concurrent calculus, is rather like a Petri net for terms, where instead of wolves eating rabbits and reproducing, we have terms interacting and producing new terms. However, pi and blue calculus allow the topology of the net to change as well as the populations. It’s vaguely like quantum gravity in that way: the topology governs how the messages flow, but the messages can alter the topology.

Posted by: Mike Stay on May 16, 2011 4:37 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Note, by the way, that the category $Rel$ of sets and relations is actually a bicategory, although somewhat degenerate: its hom-categories are partial orders.

That’s interesting; assuming we can make this idea work, it would suggest some level of structure beyond the rewrites.

It seems possible to me that blue-calculus rewrites would map to reverse inclusions of relations: every time we choose a rewrite, we restrict the potential outcomes (by ruling out sequences of rewrites that don’t begin with the one we’ve chosen). Does that sound plausible?

If it were just parallelism rather than concurrency, then yes. The concurrent sum, however, complicates things. It’s true that if you have two rewrites $P \Rightarrow P'$ and $Q \Rightarrow Q'$, then you’ll have a rewrite $P|Q \Rightarrow P'|Q'$. However, you may also have more rewrites than that, as I illustrated in my previous message.

Posted by: Mike Stay on May 16, 2011 4:50 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Let’s just think about the case where there is no interaction, for now. Suppose that $P$ and $Q$ are terms which don’t interact, and whose semantics are given by relations $R$ and $S$ from a set $X$ to a set $Y$, so that $R(x,y)$ means that $P$ might compute $y$ given input $x$, and similarly $S(x,y)$ means that $Q$ might compute $y$ given input $x$. What is the relation from $X$ to $Y$ giving the semantics of $P|Q$? You say the “product” of the values of $P$ and $Q$, which suggests to my mind something like “$T(x,(y_1,y_2)) = R(x,y_1)$ and $S(x,y_2)$”, but this $T$ is no longer a relation from $X$ to $Y$, rather a relation from $X$ to $Y \times Y$.

Posted by: Mike Shulman on May 16, 2011 10:32 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Say we have two constant terms $y_1$ and $y_2$ of type $Y$. We think of these as relations from the terminal set to the set $Y$ that pick out a single value. Then $y_1 | y_2$ has to also pick out a single value of $Y$; the value has to be insensitive to the order of the terms and to associativity and to summing with the morphism 0. That’s equipping $Y$ with the structure of a commutative monoid, so I guess datatypes have to be commutative monoids; if we really want to think of them as sets, a morphism $X \to Y$ could be a relation between the free commutative monoids on $X$ and $Y$.

Then to answer your question, we have $T : X \to Y \times Y$ followed by addition in $Y$.

Posted by: Mike Stay on May 17, 2011 6:28 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

I think requiring our types to be commutative monoids defeats the whole purpose of having a model like $Rel$, which was to let “types” be the sort of types that we’re familiar with from lambda calculus and actual programming. But thinking about free monoids, in a Kleisli sort of way, makes a little more sense. Now a 1-morphism $X \to Y$ is a relation between elements of $X$ and elements of the free (commutative?) monoid on $Y$: we allow our “functions” to output not just single elements of $Y$, but collections of such elements, and then we make our functions many-valued because the choice of execution order might affect which collection of elements of $Y$ they output.

Maybe this is finally a semantics I can live with. At least it’s relatively precise. On the other hand, it seems kind of far removed from actual programming. I usually write a function that computes “an integer”, not “an element of the free commutative monoid on the set of integers”. Is concurrent programming really so different as all that?

Posted by: Mike Shulman on May 18, 2011 5:17 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Now a 1-morphism $X \to Y$ is a relation between elements of $X$ and elements of the free (commutative?) monoid on $Y$: we allow our “functions” to output not just single elements of $Y$, but collections of such elements, and then we make our functions many-valued because the choice of execution order might affect which collection of elements of $Y$ they output.

Almost: the morphisms have to be relations from $F(X)$ to $F(Y)$, because there need to be terms that do things like map $x_1 + x_2$ to $y$.

I usually write a function that computes “an integer”, not “an element of the free commutative monoid on the set of integers”. Is concurrent programming really so different as all that?

Define a “memory of type $X$” to be a word in the free commutative monoid on the set $X$. A typical memory of type integer would be $2 | 3$. It means, roughly, that the computer has 2 in one part of memory and 3 in another part. I say “roughly”, because this “memory” is usually spread out between cores on a processor or between different computers on a network. $2|3$ should be thought of as “one computer gave the answer 2, while another gave the answer 3”—for example, when requesting airfares from Orbitz.
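Here’s one way to make that concrete (a hypothetical encoding, just for illustration): represent a memory of type $X$ as a multiset of values, so that the concurrent sum “|” is multiset union and the order of the parts is invisible.

```javascript
// A "memory of type X" as a multiset (a word in the free commutative
// monoid on X): the memories 2|3 and 3|2 are equal.
function memory(...values) {
  const counts = new Map();
  for (const v of values) counts.set(v, (counts.get(v) || 0) + 1);
  return counts;
}

// Two memories are equal when every value occurs the same number of times.
function sameMemory(m1, m2) {
  if (m1.size !== m2.size) return false;
  for (const [v, n] of m1) if (m2.get(v) !== n) return false;
  return true;
}

console.log(sameMemory(memory(2, 3), memory(3, 2))); // true: 2|3 = 3|2
console.log(sameMemory(memory(2, 3), memory(2)));    // false
```
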

I tend to picture nondeterministic computation rather like a game of dominoes. Each domino has an input term and an output term, and a “program” is some starting domino. Each player can play some set of solitaire games by himself, building off of that domino. Parallelism is when we consider all the solitaire games that can be played. Concurrency thinks about the set of games that can be played when the players can build on each other’s boards; we have to think about all the possible interleavings of rewrites. Interactions in concurrent domino games are like strangely shaped dominoes that connect $n$ inputs to $m$ outputs. Types show up as the colors painted on the sides of the dominoes, so they can be joined side-to-side as well as end-to-end.

Posted by: Mike Stay on May 19, 2011 6:13 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Okay, thanks, that’s helpful. I think I am starting to understand how to think about this model. But I still have one big question. Why do we want to consider a program which might result in “one computer gave the answer 2, while another gave the answer 3” as having the same output type as a program which might result only in “this computer gave the answer 2”?

Posted by: Mike Shulman on May 21, 2011 6:21 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Why do we want to consider a program which might result in “one computer gave the answer 2, while another gave the answer 3” as having the same output type as a program which might result only in “this computer gave the answer 2”?

Replication: $(x = T)$ is equivalent to $(x = T)\;|\;(x \Leftarrow T)$, so it doesn’t make sense in that case to ask how many computers are involved; the answer is always “enough”.

Posted by: Mike Stay on May 22, 2011 4:20 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

I wasn’t thinking about the number of computers involved, but rather about the number of answers produced.

Posted by: Mike Shulman on May 22, 2011 11:06 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Sorry, the “number of computers” was an irrelevant distraction; we don’t consider them explicitly in blue calculus. All we have are “memories of type $X$”. Since $x \Leftarrow T$ counts as an “answer”, the number of answers is indeterminate once you have replication $x = T$ as part of your term.

One might similarly ask why a diagonally polarized state is the same “type” as a vertically polarized state. One can “change basis” and see $x = T$ as the concurrent sum of two terms, or many, or one.

Posted by: Mike Stay on May 23, 2011 8:13 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

I’m not counting $x \Leftarrow T$ as an “answer”. To me, an “answer” is an output, not a constituent of the term. Two equivalent terms will necessarily have the same output, and therefore the same answer(s). Notice my original question:

Why do we want to consider a program which might result in “one computer gave the answer 2, while another gave the answer 3” as having the same output type as a program which might result only in “this computer gave the answer 2”?

It’s irrelevant whether the first program is “one computer gave both answers 2 and 3” — the point is that there are two outputs.

Posted by: Mike Shulman on May 23, 2011 11:30 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

The only concurrent system I’ve ever written a real program in is JavaScript. It has a type system with types like “number”, “string”, “boolean”, etc. You can play with a read-eval-print loop here. You can write functions that take in a single value and spit out a single value, just like with lambda calculus.

JavaScript also has side-effects; in that sense, the program maps between states of the computer’s memory. For example:

function doubleCount(x) {
  // Increment a global counter 
  counter = counter + 1;
  // Return an "output"
  return 2 * x;
}

This program simply declares a function; it doesn’t do anything with it. Running this program will go from the initial state to a new one where there’s some compiled code bound to the symbol “doubleCount” in the computer’s memory. If we extend the program to the following:

function doubleCount(x) {
  // Increment a global counter 
  counter = counter + 1;
  // Return an "output"
  return 2 * x;
}
counter = 0;
doubleCount(5);

then running it will go from the initial state to one in which there’s a symbol “doubleCount” bound to some code and a symbol “counter” bound to 1. The output that doubleCount returned, 10, gets thrown away.

JavaScript programs map from the initial state of the computer’s memory to some new state; in that way we can think of programs as functions from the set of states of the computer to itself. This set of states doesn’t show up anywhere in JavaScript’s type system.

Concurrency is implemented as an extra layer on top of that. You can contact other servers on the internet and send them text messages. The computer gets a message and invokes a JavaScript “function” that acts on the text and generally modifies current internal state of the computer. During processing, the program can contact other servers and send out a bunch of messages. If you want some other machine to do a computation for you, you have to send it a message and have it send you one back.

So there are three different type systems involved:

  1. The JavaScript type system that talks about what types variables are;
  2. The type “computer’s memory”—a non-communicating program maps between values of this type;
  3. The type “collection of computer memories and messages on the network”—a bunch of programs running concurrently map between values of this type.
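To make level 2 concrete, here’s a toy sketch in JavaScript itself (the state object and the function names are my own invention, not part of any real API): model a memory state as a plain object and a program as a function from states to states. Milner’s two programs from the post then give different answers depending on the schedule:

```javascript
// Memory states as plain objects; programs as functions state -> state.
const setA  = (s) => ({ ...s, x: 1 });         // x = 1
const progA = (s) => ({ ...s, x: s.x + 1 });   // x = x + 1
const progB = (s) => ({ ...s, x: 2 });         // x = 2

// Program A run as a whole: {A: x = 1; x = x + 1;}
const runA = (s) => progA(setA(s));

// Run alone, A and B both leave x = 2:
runA({ x: 0 }).x;    // 2
progB({ x: 0 }).x;   // 2

// Interleaved, the result depends on the schedule (level 3):
progB(runA({ x: 0 })).x;         // 2  (B runs after A finishes)
progA(progB(setA({ x: 0 }))).x;  // 3  (B runs between A's two statements)
```

The last two lines are exactly the two interleavings from the post: composing the same endofunctions in different orders lands in different states.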

Boudol’s type system for blue calculus is working at level 3, whereas your intuition is about level 1.

Posted by: Mike Stay on May 27, 2011 1:53 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Thanks for the clarification. Now I feel like we’re back here again.

  • Wouldn’t it be less confusing to use a word other than “type” to refer to levels 2 and 3?

  • What does it mean to have “more than one type” in levels 2 and 3? You only mentioned “the type” in those two cases. I can imagine that two different computers, or networks of computers, would have different “types”, but then what would it mean for a (bunch of) program(s) to map from one of these types to a different one?

An aside: I think Javascript isn’t a great example for my intuition of level 1, because being “dynamically typed”, it really only has one type. In a real, static, type system that honestly has more than one type, types are associated with inputs and outputs rather than with values, so that “a function returning int” is different from “a function returning a string”. But I get the picture.

Posted by: Mike Shulman on May 27, 2011 7:22 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Wouldn’t it be less confusing to use a word other than “type” to refer to levels 2 and 3?

Probably. What other term would you suggest?

What does it mean to have “more than one type” in levels 2 and 3? You only mentioned “the type” in those two cases. I can imagine that two different computers, or networks of computers, would have different “types”, but then what would it mean for a (bunch of) program(s) to map from one of these types to a different one?

I think of it in terms of layers of emulation. For an extreme example, if you go here you can boot Linux in JavaScript and compile and execute C code. Of course, in the end, it’s always just the type “electrons in microchips,” but it’s useful to think of it as being a different kind of “collection of networked computer memories” than the previous layer, just as you can emulate typed lambda calculus using untyped lambda calculus.

Posted by: Mike Stay on May 27, 2011 8:30 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

What other term would you suggest?

Maybe once I understand what these level-2 and level-3 “types” are supposed to mean, I’ll be able to suggest something.

I think of it in terms of layers of emulation. For an extreme example, if you go here you can boot Linux in JavaScript and compile and execute C code.

I don’t quite see what that has to do with a function mapping from one level-3 type to another. What are the two level-3 types and what is the function which maps one of them to the other one?

Posted by: Mike Shulman on May 28, 2011 1:10 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

I don’t quite see what that has to do with a function mapping from one level-3 type to another. What are the two level-3 types and what is the function which maps one of them to the other one?

Sorry, those were two level-2 types. For level-3 types, you’d have one network simulating another. It’s still an emulation, just one with the notion of asynchronous messages being passed around between components.

Posted by: Mike Stay on May 31, 2011 11:24 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Sorry, those were two level-2 types.

Okay, s/3/2/g in my previous question.

Posted by: Mike Shulman on June 1, 2011 1:09 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

I think Turing machines might be helpful here. A Turing machine has an infinite tape where each cell has one symbol in it, taken from a specified alphabet. Given a table of states, a current state, and a position for the read-write head, we get a partial computable function from tapes to tapes.

Some Turing machines can simulate other ones: they represent the simulated machine, with its distinct alphabet, tape, table, and states, using some encoding. It is useful to think of such machines as mapping from one type of tape to another one.
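Here’s a minimal sketch of that point of view (the configuration shape and the rule encoding are illustrative choices of mine): the transition table determines a partial function from configurations to configurations, and a simulating machine would encode such configurations on its own tape.

```javascript
// A configuration is { state, head, tape }; the transition table is a
// partial function, so step() returns null when the machine halts.
function step(table, conf) {
  const symbol = conf.tape[conf.head] ?? "_";   // "_" = blank off the end
  const rule = table[conf.state + "," + symbol];
  if (!rule) return null;                       // no rule: halt
  const tape = conf.tape.slice();
  tape[conf.head] = rule.write;
  return { state: rule.next, head: conf.head + rule.move, tape };
}

// A one-rule machine that overwrites a 0 with a 1 and halts:
const table = { "start,0": { write: "1", move: 1, next: "halt" } };
let conf = { state: "start", head: 0, tape: ["0"] };
conf = step(table, conf);   // { state: "halt", head: 1, tape: ["1"] }
```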

Posted by: Mike Stay on June 3, 2011 6:01 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

You still haven’t actually explained what you mean. Maybe you mean that if T is a Turing machine which simulates another Turing machine M, then there is a function from tapes-of-M to tapes-of-T which specifies, given a tape for M, what the corresponding tape for T would be that simulates it? And this “translation” function is itself computable by some third Turing machine? But then this function from tapes-of-M to tapes-of-T is a different sort of thing from the function from tapes-of-T to tapes-of-T, since the latter describes the action of a single state of T, while the former is about the entire process of simulation. So maybe you mean something else.

Posted by: Mike Shulman on June 3, 2011 8:10 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

S. Abramsky, Retracing some paths in Process Algebra summarizes Milner’s work and talks about processes as transducers and resumptions, two words I hadn’t known in this context.

Posted by: Mike Stay on June 18, 2011 1:09 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Mike Shulman wrote:

On the other hand, it seems kind of far removed from actual programming. I usually write a function that computes “an integer”, not “an element of the free commutative monoid on the set of integers”. Is concurrent programming really so different as all that?

Maybe you’re just a big fan of those elements in the free commutative monoid on the set of integers that are formal products of just one integer.

I am too. Maybe Mike Stay can tell/remind me how I’m supposed to think about a program whose output is a formal product of two integers? What’s that like?

I haven’t been paying careful attention to your conversation, so I’m sort of mixed up with this business about ‘relations from X to the free commutative monoid on Y’. After all, that’s the same as ‘functions from X to the power set of the free commutative monoid on Y’. And the power set on a set S is the free commutative monoid on S with the extra relations s^2 = s for all s: multiplication is union, and having s in a subset twice is the same as having it in once. So there’s a similarity between ‘free commutative monoid’ and ‘power set’ that’s getting me really confused.

In fact, I’m reminded of quantization, where you sometimes take the free commutative algebra on the free commutative algebra on a vector space. That kind of iteration can be lots of fun.

But here the ‘power set’ and ‘free commutative monoid’ are supposed to have radically different interpretations, right? The ‘power set’ business, i.e. the idea of working with relations instead of functions, is supposed to capture the idea of nondeterminism: instead of a specific output lying in S, a program might have various possible outputs, with the possibilities lying in some subset of S.

But the ‘free commutative monoid’ business is supposed to capture the idea of… what, exactly?

By the way, I’ve been resisting mentioning this so far, but I can’t resist anymore: the natural numbers are the free commutative monoid on the set of primes, so Mike Shulman probably does write programs whose output lies in the free commutative monoid on the set of primes.

Another thing I can’t resist mentioning is that Mike Stay should learn to engage in lengthy discussions without constantly generating new vertical lines at the left of this page when it’s read in ‘threaded’ form. There’s an absurd number of them here, and it doesn’t really help.

Each time Mike Stay replies to Mike Shulman’s comment, an extra vertical line appears. But when Mike Shulman replies to Mike Stay, that doesn’t happen! Clearly one of the Mikes knows something the other doesn’t. But the other Mike, being a computer scientist, can figure it out.

Posted by: John Baez on May 18, 2011 5:27 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Maybe Mike Stay can tell/remind me how I’m supposed to think about a program whose output is a formal product of two integers? What’s that like?

I explained this somewhat over here. It’s like having one computer giving back the answer 2 and a different one giving back the answer 3. Think about requesting airfares from different airlines serving the same route: the formal product of two terms f|g is like the two different services the airlines are running. However, they might both be reselling seats on the same plane, so there are different outcomes (here’s where the relations come in) depending on what order the services end up interleaving their operations and trying to grab seats on the plane.

Another thing I can’t resist mentioning is that Mike Stay should learn to engage in lengthy discussions without constantly generating new vertical lines at the left of this page when it’s read in ‘threaded’ form. There’s an absurd number of them here, and it doesn’t really help.

Each time Mike Stay replies to Mike Shulman’s comment, an extra vertical line appears. But when Mike Shulman replies to Mike Stay, that doesn’t happen! Clearly one of the Mikes knows something the other doesn’t. But the other Mike, being a computer scientist, can figure it out.

Huh: I had the opposite reaction. I was thinking that people here should learn to respond to the proper post and not to its parent—the indentation is all wrong!

The indentation is actually helpful for me, because I have a script that collapses everything except the post I’m replying to, but inserts little + buttons so I can go back and read context. It’s very similar to the tree view on any usenet reader produced in the last two decades. I can’t imagine why a feature like that isn’t already part of the software running this site. This is what my view of the thread looks like:


What my n-category cafe threads look like


Posted by: Mike Stay on May 19, 2011 6:50 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

I had the opposite reaction. I was thinking that people here should learn to respond to the proper post and not to its parent—the indentation is all wrong!

Yes, I have been thinking this for a long time. It is true that (at least without extra scripts as you say you have installed) the behaviour of the blog software here is not well-suited for long discussions with lots of nesting.

But instead of hacking the system by working around this problem by hand, and thus making the situation worse (we are working by hand against the software, instead of the software working for us), one should do something else.

Posted by: Urs Schreiber on May 19, 2011 8:47 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

What we should do is put my script into whatever template is used to produce this page; I’d be happy to do so, but I’d need access to the server. Before anyone lets me do that, though, they should check with Distler to see what the potential maintenance issues are.

Posted by: Mike Stay on May 19, 2011 5:27 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

But it looks like your script doesn’t solve the problem of the ugliness of all the vertical lines!

Another reason I don’t like the “nesting” view with everyone replying directly to the previous post is that most of the time when we have a discussion, it makes sense to view all the posts in that discussion in sequence, rather than in a tree. This is also a problem when looking at usenet posts and mailing list archives. Fortunately, Gmail does the right thing and collapses all replies in a single thread into a linear view. Too much focus on the “tree” way of looking at things can lead you to do things like say the same thing in reply to two different comments, like you did here and here, rather than only needing to say it once.

Only occasionally does it happen that a discussion branches off in more than one direction (such as is happening now, with this discussion about blog features), and then a tree view is useful. (In email, you can do this by changing the subject line.) So I think the current system where you can choose to reply to the comment itself (creating a branch) or to its parent (continuing the linear discussion) is just about right, although its implementation is kludgy and should ideally be better supported in the software.

Posted by: Mike Shulman on May 19, 2011 6:38 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Fortunately, Gmail does the right thing and collapses all replies in a single thread into a linear view.

I could modify the script to put everything with the same subject line into a linear view. People would have to be careful about creating a new subject line when they branch the discussion, but that seems a rare event.

Posted by: Mike Stay on May 19, 2011 11:03 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

And the power set on a set S is the free commutative monoid on S with the extra relations s^2 = s for all s

… at least if S is finite. If S is infinite (like the integers), then it’s a bit bigger than that. But it’s conceivable that we only care about finite subsets here, anyway.

But here the ‘power set’ and ‘free commutative monoid’ are supposed to have radically different interpretations, right?

It occurs to me that maybe the ‘power set’ is capturing ‘or’, in the way that you said (a program might output this or that or …, depending on what evaluation strategy we choose), while the ‘free commutative monoid’ is capturing ‘and’ (the program outputs this and that and … all at the same time). Then an element of the power set of the free commutative monoid is like a statement in disjunctive normal form: “(A and B and C) or (D and E and F) or (G and H)”.

This sort of explains why we allow repetitions in the ‘and’ part but not the ‘or’ part. A program which outputs “2” twice is noticeably different from one which outputs “2” once. But if there are two different ways for the program to run, both of which output “2” twice, then a user probably won’t notice the difference.
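A little sketch of that reading (the encoding is my own, purely for illustration): represent each possible run by the multiset of its outputs (the ‘and’ part) and the program by the set of such multisets (the ‘or’ part). Then two runs with the same outputs collapse, but outputting “2” twice stays distinct from outputting it once:

```javascript
// Encode a multiset as a canonical sorted string, so that equal
// multisets compare equal and a Set can deduplicate them.
const multiset = (...xs) => xs.slice().sort().join("|");

// runs: one array of outputs per possible schedule ("or" over "and"s)
function outcomes(runs) {
  return new Set(runs.map((r) => multiset(...r)));
}

// Two schedules that both output "2" twice collapse to one outcome,
// but outputting "2" twice differs from outputting it once:
outcomes([[2, 2], [2, 2]]);   // Set { "2|2" }
outcomes([[2], [2, 2]]);      // Set { "2", "2|2" }
```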

It’s less clear to me from this perspective why we want the free commutative monoid. After all, a program which outputs “2” and then “3” is different from one which outputs “3” and then “2”, right? Moreover, it’s also not clear to me why we would want to consider a program which outputs “2” once as having the same type as a program which outputs “2” twice. Surely the first one should have output type “int” and the second one should have output type “int × int”. So maybe this is not the right way of thinking about it.

Posted by: Mike Shulman on May 18, 2011 6:14 PM | Permalink | PGP Sig | Reply to this

Re: Higher Categories for Concurrency

the natural numbers are the free commutative monoid on the set of primes

Au contraire! The positive integers are the free commutative monoid on the set of primes. The natural numbers, on the other hand, as everyone knows, also include that lovely number zero. (-:

I can’t recall offhand the last time I wrote a program to compute a positive integer, rather than a natural number, but I’m sure it’s happened.

And an unrelated question: out of curiosity, Mike Stay, what are your favorite “real-world” programming languages?

Posted by: Mike Shulman on May 18, 2011 6:41 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

I’m a fan of programming languages that make it easy to reason about consequences. Functional programming languages can’t have any effects and can’t depend on any state other than the things you explicitly pass in, so they’re very easy to test, and it’s very easy to see what other parts of the system they can possibly affect and be affected by. Haskell is a prime example.

A weaker requirement than being a functional language is being capability secure; a language with objects is capability secure if the only way to get a reference to an object is to receive it as a parameter in a method or to create it yourself. Then you can reason about the authority to cause effects by thinking about what references an object may come to hold. E is the principal example of this, but it hasn’t seen wide adoption. It has, however, inspired many “tamings” of other languages, restrictions that turn them into capability-secure languages. Joe-E, CaPerl, and Emily are capability-secure subsets of Java, Perl, and ML, respectively.
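For concreteness, here is roughly what the capability discipline looks like in plain JavaScript (a toy example of my own, not code from any of those languages): authority flows only along references that are explicitly passed.

```javascript
// A counter whose state is reachable only through the two
// capabilities (closures) it hands out.
function makeCounter() {
  let n = 0;
  return {
    increment: () => { n += 1; },
    read: () => n,
  };
}

// This component receives only the increment capability, so we can
// reason that it cannot read or reset the counter:
function component(increment) {
  increment();
  increment();
}

const counter = makeCounter();
component(counter.increment);
counter.read();   // 2
```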

The new version of JavaScript is called ECMAScript 5 (“ECMAScript” rather than “JavaScript”, since Java is a trademark of Sun Microsystems); it includes a “strict mode” that allows capability-secure programming. The changes made to the language to enable this grew directly out of the work my team did at Google; it enables a webpage owner to make “mashups”, i.e. take third-party code and limit its authority to make changes to the page it runs in, while at the same time allowing it to communicate easily with other components on the page. For a platform like Google Sites, where there are tens of thousands of authors but all of their pages are served from the same domain, it’s the only way to allow client-side scripting at all. The project is called “Caja”, which is both an acronym for “CApabilities JAvascript” and a Spanish word meaning “vault” or “box”.

Posted by: Mike Stay on May 19, 2011 7:12 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Interesting, thanks! I am also a fan of functional languages, and while I haven’t read anything about capability-secure languages before, they sound like a good idea.

I am an even bigger fan of strongly statically typed languages with expressive type systems, like Haskell and ML and (at the extreme) Agda and Coq. The reason I asked is that it struck me that the perspective whereby a “type” is something assigned to a “term” based on a formal lexical analysis has certain similarities with so-called “dynamically typed” languages like Perl, Python, and Ruby. In a statically typed language, you set out to write a function with given input and output types, and the compiler/typechecker tells you whether you succeeded—whereas in a dynamically typed language, the “input and output types” of a function, insofar as they exist, can only be inferred by analysis of its code.

On the other hand, some strongly statically typed languages can do type inference and guess the exact type of a function by analysis of its code, without you having to write down the type explicitly. So perhaps the analogy is false (and maybe even insulting—in which case I beg your pardon!).

Are there any actual programming languages with type systems that are in any way reminiscent of the typed blue calculus?

Posted by: Mike Shulman on May 19, 2011 7:18 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Are there any actual programming languages with type systems that are in any way reminiscent of the typed blue calculus?

Sure, Pict is syntactic sugar on top of the pi calculus plus a nice type system.

Posted by: Mike Stay on May 19, 2011 11:04 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Sure, Pict is syntactic sugar on top of the pi calculus plus a nice type system.

Okay, thanks – maybe if I play around with that for a while then I can get some better intuition for what this stuff is supposed to mean.

Posted by: Mike Shulman on May 20, 2011 6:05 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Thanks, John and Tim, for trying to help. (-: I think what’s bothering me is that I don’t know what a term in the blue calculus is, fundamentally. I feel like I know what a term in the lambda calculus is: it’s an element of some type, which is more or less like a set. That set might happen to be a set of functions, in which case I could take a lambda term of that type and apply it to a term of some other type (the input type of the type of functions in question) to get a term in some third type (the output type of the functions).

Well actually I guess that’s about closed terms. If a term has a free variable x, say, then that variable x itself has a type, and so the term t is a term-in-context x:X ⊢ t:Y, which is to say that t represents a morphism from X to Y. The distinction between such a t representing a morphism, and the corresponding closed term λx.t as an inhabitant of the function type X → Y, is just the cartesian-closedness adjunction of a ccc, and under the propositions-as-types correspondence it goes over to the distinction between “entailment” and “implication”.

So what is a (closed or open) term in the blue calculus?

Posted by: Mike Shulman on May 5, 2011 11:30 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

So what is a (closed or open) term in the blue calculus?

A closed term is still an element of a type, but the types are part of a bicategory rather than a category, so you can’t think of open terms (in general) as function-like. Set can become a bicategory with trivial 2-morphisms, but that destroys a lot of the structure of the blue calculus.

I guess you could think of types as being monoids, open terms as monoid homomorphisms, closed terms as being elements of monoids, the concurrent sum of closed terms as being the product of the elements, and the rewrites as bimodules. I think that’s as close as you can get to the usual way of thinking of types as having a set of values.

One way of getting a bicategory out of Set is to treat it as a one-object bicategory. Types map to the one object; terms map to sets of internal states of the computer, and rewrite rules map to state update functions. In fact, this is typically how Turing machines are presented; there’s no notion of type there, just configurations of the tape and what state the machine’s in, together with the state transition table. This same way of thinking works for the blue calculus.

Posted by: Mike Stay on May 6, 2011 2:29 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Mike Stay wrote:

I guess you could think of types as being monoids…

Instead of talking about how you guess we could think of types in the blue calculus, how about talking about how we should think of them? Or how you do think of them? We may need to learn some computer science, but that’s okay! The blue calculus is about computer science, after all.

Posted by: John Baez on May 6, 2011 3:59 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

I think of types as objects!

When pressed, I think of the model I’ve mentioned twice, where the computer has a set of configurations and there are various endofunctions that can be applied. It’s a nondeterministic dynamical system. In that case there’s only one object, but you can “oidify” it and get types.

You really can’t think of the configurations as functions, but programmers want to. They want their types to be sets and their programs to be functions. They’ve invented mutexes and locks and producer-consumer models to try to get into a situation where they can ignore the rest of the system. It’s hard to do; deadlock, unanticipated interleavings of actions, etc. are all problems that arise from trying to shoehorn concurrency into lambda calculus.

One nice form of concurrency, called “event-loop concurrency”, models concurrency with a bunch of “vats”, each of which consists of a lambda term and a queue of messages. On each “turn”, the vat pops the top message off the queue and applies the lambda term to it. The result is a list of outgoing messages and a new term. In order to be “outgoing”, they need to refer to some other vat somehow, which requires a reference type. References could be natural numbers; on the internet, they’re typically URLs. “Turns” only make sense locally to a vat; there’s no universal clock that says “OK, everyone pop off the next message.” Messages get delivered at any time.
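A toy version of a vat might look like this in JavaScript (the shapes of the messages and behaviors are my own simplification, not E’s actual API): a behavior plus a queue, where each turn pops one message, applies the behavior, and collects the outgoing messages.

```javascript
// A "vat": a behavior (a function) plus a message queue. Each turn
// pops one message; the behavior returns a replacement behavior and
// a list of outgoing messages.
function makeVat(behavior) {
  const queue = [];
  return {
    send: (msg) => { queue.push(msg); },
    turn: () => {
      if (queue.length === 0) return [];
      const { next, outgoing } = behavior(queue.shift());
      behavior = next;
      return outgoing;
    },
  };
}

// A behavior that keeps a running total and addresses a report back
// to whoever asked:
function adder(total) {
  return (msg) => ({
    next: adder(total + msg.amount),
    outgoing: [{ to: msg.replyTo, body: total + msg.amount }],
  });
}

const vat = makeVat(adder(0));
vat.send({ amount: 5, replyTo: "caller" });
vat.send({ amount: 3, replyTo: "caller" });
vat.turn();   // one outgoing message with body 5
vat.turn();   // one outgoing message with body 8
```

Note that the functional part (the `adder` behavior) is pure, and all the concurrency lives in when `send` and `turn` happen relative to other vats.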

This way of programming lets the programmer do all the functional stuff on types within a vat and push all the concurrency to the message exchanges between queues. The problem with that system from a formal point of view is that it’s very complicated, not nearly as pretty as lambda calculus.

Blue calculus suffices to represent event-loop concurrency, but the construction is large.

Posted by: Mike Stay on May 7, 2011 2:04 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Hi,

I think you can probably decisively answer Mike Shulman’s question by explaining how the 2-categorical semantics of the blue calculus differ from the 2-categorical semantics of the lambda calculus.

For example, in Seely’s model,

  • Objects are types
  • Morphisms A → B are terms of type B, with one free variable of type A. Terms are equated up to α-equivalence.
    • The identity morphism id_A is x:A ⊢ x:A
    • Composition of morphisms is substitution.
  • 2-cells are reductions

Can you explain what the analogous dictionary for the blue calculus should be? This is not at all an obvious thing – for example, I can’t guess whether morphisms should be terms up to α-equivalence, or terms up to structural congruence.

Posted by: Neel Krishnaswami on May 7, 2011 12:56 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Part of the problem I’m trying to work out, but keep waffling on, is how composition ought to work in the blue calculus. Blue calculus doesn’t allow application of terms to other terms directly. I suspect that this is an artificial distinction, rather like the way that lambda calculus, even though it presents cartesian closed categories, doesn’t explicitly mention products (traditionally one uses λxyz.zxy as a pairing combinator).

I’m hoping that the table you wrote will apply in exactly the same way to the blue calculus:

  • objects are types.

  • morphisms are terms with one free variable in a typing context, equivalent up to structural congruence (which includes alpha equivalence of the free variable). I write (x,T) below for the morphism x:X ⊢ T:Y, where x is free in T. I elide the types for readability:

    • The identity is (x,X)

    • I’ve been thinking that composition should work like this:

(1)   (y,U) ∘ (x,T) = (x, νy.(y = T | U))

so that every occurrence of y in U can be replaced by T when it gets applied to something. It’s not associative; the two ways of parenthesizing are only nearly related by an associator. I say nearly because there are issues related to scope migration which mean there is no rewrite rule relating the two ways of parenthesizing the composition of three morphisms. I’m pretty sure, however, that they are operationally indistinguishable. This is something I need to prove or come up with a counterexample for.

If they are operationally indistinguishable, then restricting the calculus so that terms can only apply to variables is an annoying restriction that just complicates the process of finding a model. John has suggested several times that I should just present a term 2-calculus for symmetric monoidal closed bicategories enriched over SymMonCat and prove some nice things about it. I’m worried, however, that computer scientists won’t care about yet another concurrent term calculus unless they can see how it relates to concurrent calculi they already know, like pi calculus.

  • 2-morphisms are reductions.
Posted by: Mike Stay on May 9, 2011 10:11 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

For some reason, the “T x” restriction didn’t really register with me until just now. Playing with it just a little, it feels a lot like a call-by-value kind of restriction. I guess the point is that it ensures every occurrence of a variable in a lambda is replaced by the *same* process.

This is very reminiscent of call-by-value restrictions in effectful programming languages. Perhaps you should look at Paul Blain Levy’s work on “call-by-push-value”. He models these kinds of restrictions on substitution by having two categories – essentially a category of values and a category of computations – with an adjunction between them. Then terms are only the oblique morphisms, which captures the fact that you can’t substitute *arbitrary* terms in expressions, but only values.

Posted by: Neel Krishnaswami on May 10, 2011 8:27 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

No, that didn’t answer my question. (Sorry, Neel!) I want to know what the semantics is at an intuitive level. Something more like this for typed lambda calculus:

  • a type is a collection of things, like integers, or real numbers, or functions

  • a lambda term represents a function, written in a particular computer language, which has as inputs the free variables in its context (with their types) and computes an output value in some other type.

  • two lambda terms are equivalent if they can be shown to compute the same function by a sequence of applications of basic reductions and expansions.

Posted by: Mike Shulman on May 10, 2011 3:07 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Mike wrote:

…the computer has a set of configurations and there are various endofunctions that can be applied. It’s a nondeterministic dynamical system. In that case there’s only one object, but you can “oidify” it and get types.

The set of configurations is the set of possible states of the memory? A program is an endofunction of this set? Why is this nondeterministic?

You really can’t think of the configurations as functions, but programmers want to. They want their types to be sets and their programs to be functions. They’ve invented mutexes and locks and producer-consumer models to try to get into a situation where they can ignore the rest of the system. It’s hard to do; deadlock, unanticipated interleavings of actions, etc. are all problems that arise from trying to shoehorn concurrency into lambda calculus.

A type like Integer is represented by the set of all possible values, is that correct? I’m a little bit unsure, because in object-oriented programming you’d also associate a set of methods with a type. The second part of the statement reminds me of the difficulties I had with concurrency in Java; the basic strategy in Java is locking shared state by using the synchronized keyword.

One nice form of concurrency, called “event-loop concurrency”…

The way you explain it it seems to me that event-loop concurrency is the same as the actor-messenger model implemented in Scala.

Blue calculus suffices to represent event-loop concurrency…

Which makes it sound relevant to real life problems, since, as I said before, the locking viewpoint of concurrency seems to be much harder to handle than the actor-messenger-sending one.

BTW: did you already answer Mike Shulman’s question

a blue-calculus term is…?

…on this thread?

Posted by: Tim van Beek on May 9, 2011 8:33 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Tim wrote:

Mike wrote:

… the computer has a set of configurations and there are various endofunctions that can be applied. It’s a nondeterministic dynamical system. In that case there’s only one object, but you can “oidify” it and get types.

The set of configurations is the set of possible states of the memory?

Yes–but if you’re thinking of a modern architecture, you should include the program counter and registers, or if you’re thinking of a Turing machine, it should include the tape and the position of the read-write head and the state of the machine. In short, everything the computer needs to know in order to take the next step.

A program is an endofunction of this set? Why is this nondeterministic?

No, a program is a configuration (or if you want your program to take input, then it’s the configuration of part of the computer and the rest of the configuration is the input). The rewrite rules are endofunctions on the set of configurations, any one of which can apply at each point in the computation.
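To make this concrete, here is a minimal sketch in Python (the encoding and names are my own, not from the thread): configurations are dictionaries, rewrite rules are endofunctions on configurations, and running the machine means nondeterministically applying any rule at each step. Using the two statements left over from Milner’s example in the post (program A’s `x = x + 1` and program B’s `x = 2`, starting after A’s first statement), the set of reachable final values of x is exactly {2, 3}.

```python
def incr_x(cfg):
    """Rewrite rule for program A's second statement: x := x + 1."""
    return dict(cfg, x=cfg["x"] + 1)

def set_x_2(cfg):
    """Rewrite rule for program B: x := 2."""
    return dict(cfg, x=2)

def reachable(cfg, rules, steps):
    """All configurations reachable in exactly `steps` rule applications,
    where at each step any rule may fire (the nondeterminism)."""
    frontier = {tuple(sorted(cfg.items()))}
    for _ in range(steps):
        frontier = {
            tuple(sorted(rule(dict(c)).items()))
            for c in frontier
            for rule in rules
        }
    return frontier

# Start from x = 1, i.e. after program A's first statement has run.
start = {"x": 1}
final = {dict(c)["x"] for c in reachable(start, [incr_x, set_x_2], 2)}
print(sorted(final))  # [2, 3] -- the two interleavings give different answers
```

The point of the sketch is only that the machine is a nondeterministic dynamical system on configurations, not a function from inputs to outputs.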

A type like Integer is represented by the set of all possible values, is that correct?

Well, to each type T you can associate the set of its elements, the closed terms of type 1 → T.

I’m a little bit unsure because in object-oriented programming you’d also associate a set of methods with a type.

In a typed lambda theory, you specify a type together with function symbols and relations that the symbols satisfy, just like a datatype declaration in Haskell or ML. (In those programming languages, the relations are given either as comments or as tests, since there’s no decidable way to check that an implementation satisfies relations.)
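As a hedged illustration (the presentation and names below are mine, not from the comment): take a one-type theory presented by function symbols and relations, say a monoid. An implementation picks a set of values and actual functions, and the relations are checked as tests on sample values, since there is no decidable way to verify them in general.

```python
# Syntax: a type M presented by symbols  e : 1 -> M  and  m : M x M -> M,
# with relations  m(e,x) = x,  m(x,e) = x,  m(m(x,y),z) = m(x,m(y,z)).
# Semantics: an implementation assigns a set and functions to the symbols;
# the relations play the role of tests, as in Haskell or ML.

def check_monoid_relations(e, m, samples):
    """Test the unit and associativity relations on a finite sample."""
    for x in samples:
        assert m(e, x) == x and m(x, e) == x, "unit law fails"
    for x in samples:
        for y in samples:
            for z in samples:
                assert m(m(x, y), z) == m(x, m(y, z)), "associativity fails"
    return True

# One "implementation functor": interpret M as integers, e as 0, m as +.
print(check_monoid_relations(0, lambda x, y: x + y, range(-3, 4)))      # True

# Another: interpret M as strings, e as "", m as concatenation.
print(check_monoid_relations("", lambda x, y: x + y, ["", "a", "ab"]))  # True
```

Different choices of set and functions satisfying the same relations are different implementations of the one presentation.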

The way you explain it it seems to me that event-loop concurrency is the same as the actor-messenger model implemented in Scala.

Quoting this paper by Cutsem et al.,

The actor model of Scala is more liberal than that of AmbientTalk, in the sense that the three properties of event loop concurrency are not enforced by Scala’s model. It is the programmer’s own responsibility to guarantee that Scala actors do not perform concurrent updates on shared state but communicate only by message passing.
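For contrast, here is a rough Python sketch (class and message names invented for illustration) of the discipline the quote describes: an actor keeps its state private and interacts with the world only by handling messages from its mailbox, one at a time, so concurrent senders never perform updates on shared state.

```python
import queue
import threading

class CounterActor:
    """A toy actor: private state, mutated only by its own event loop."""
    def __init__(self):
        self.mailbox = queue.Queue()       # the only channel into the actor
        self.count = 0                     # private state, never locked
        self._thread = threading.Thread(target=self._loop)
        self._thread.start()

    def _loop(self):
        # The event loop: messages are processed strictly one at a time.
        while True:
            msg = self.mailbox.get()
            if msg == "stop":
                return
            if msg == "incr":
                self.count += 1

    def send(self, msg):
        self.mailbox.put(msg)

actor = CounterActor()
senders = [threading.Thread(target=actor.send, args=("incr",))
           for _ in range(100)]
for t in senders:
    t.start()
for t in senders:
    t.join()
actor.send("stop")          # FIFO mailbox: processed after all increments
actor._thread.join()
print(actor.count)          # 100 -- no lost updates, and no locks
```

Per the quote, Scala leaves this discipline to the programmer, whereas AmbientTalk’s event-loop model enforces it.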

Returning to Tim’s questions:

BTW: did you already answer Mike Shulman’s question

a blue-calculus term is…?

on this thread?

I’ve tried a few times. Hopefully this response to Neel will do it.

Posted by: Mike Stay on May 10, 2011 12:03 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

It sounds to me from this comment as though you’re saying that a blue-calculus term is a transformation of the configuration of the computer. But I presume one of the other comments you’re referring to is this one, where you said that a blue-calculus term is a set of “potential configurations”. How are those the same thing? It seems much more logical to me for a program that I write to be a “transformation of state” than to be a “set of potential configurations”.

I’m still trying to get you to finish the sentence “a blue-calculus term is…” in a definite way.

Posted by: Mike Shulman on May 7, 2011 7:36 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Keep at him, Mike (Shulman). It’ll help clarify these ideas a lot.

In Mike Stay’s defense, I often find it quite hard to understand the intended semantics of a lot of the ‘calculi’ and ‘logics’ that highly theoretical computer scientists talk about. Most of this is surely my own ignorance, but sometimes I think they get a bit carried away with positing nice-looking syntactical rules.

I tend to feel confirmed in my suspicions whenever people talk about ‘the problem of finding a model of…’ some logical system, as if it were a difficult thing. And that makes me think: why are you talking about it, then? I wouldn’t study group theory if I didn’t know a few groups, for example.

On the other hand, if we think of syntactical rules as ways of doing ‘rewrites’, and rewrites as abstract processes of computation, maybe it matters less whether any of these expressions refer to anything. Maybe that’s part of the cultural difference? I don’t know.

Anyway, I’m sure the pi-calculus and blue calculus are meant to signify something. I think part of the problem is that it’s a new kind of thing, one that people are just groping to understand. Some kind of thing that only became part of everyday experience after the internet was invented!

Posted by: John Baez on May 7, 2011 10:09 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

sometimes I think they get a bit carried away with positing nice-looking syntactical rules

You’re not alone there – Bob Walters has argued that instead of inventing syntax and then looking for models, what computer scientists should do is start with a suitable class of models and then look for the right syntax, that is, a nice way of presenting free models.

Often it is enough to work just with syntax, because that can often be translated fairly easily into a working implementation. The difficulties begin when you try to answer questions that are intractable in the syntactic model and you have to look for others.

Posted by: Finn Lawler on May 7, 2011 6:51 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

In partial disagreement, let me quote Paul-Andre Mellies:

As soon as the sequent calculus is properly understood by the novice reader, the specialist in proof theory will generally advise this reader to forget any guideline related to model theory, like truth-values or tautologies. Apparently, this dogma of proof theory follows from a naive application of Ockham’s razor: now that proofs can be produced mechanically by a symbolic device (the sequent calculus) independently of any notion of truth… why should we remember any of the “ancient” model-theoretic explanations?

In fact, the philosophical position generally adopted in proof theory since Gentzen (at least) is far more radical – even if this remains usually implicit in daily mathematical work. This position may be called anti-realist to stress the antagonism with the realist position. We will only sketch the debate in a few words here. For the realist, the world is constituted of a fixed set of objects, independent of the mind and of its symbolic representations. Thus, the concept of “truth” amounts to a proper correspondence between the words and symbols emanating from the mind, and the objects and external things of the world. For the anti-realist, on the contrary, the very question “what objects is the world made of ?” requires already a theory or a description. In that case, the concept of “truth” amounts rather to some kind of ideal coherence between our various beliefs and experiences. The anti-realist position in proof theory may be summarized in four technical aphorisms:

  • The sequent calculus generates formal proofs, and these formal proofs should be studied as autonomous entities, just like any other mathematical object.
  • The notion of “logical truth” in model-theory is based on the realist idea of the existence of an external world: the model. Unfortunately, this intuition of an external world is too redundant to be useful: what information is provided by the statement that “the formula A∧B is true if and only if the formula A is true and the formula B is true” ?
  • So, the “meaning” of the connectives of logic arises from their introduction rules in the sequent calculus, and not from an external and realist concept of truth-value. These introduction rules are inherently justified by the structural properties of proofs, like cut-elimination, or the subformula property.
  • Gödel’s completeness theorem may be re-understood in this purely proof-theoretic way: every model M plays the role of a potential refutator, simulated by some kind of infinite non recursive proof — this leading to a purely proof-theoretic exposition of the completeness theorem.

This position is already apparent in Gerhard Gentzen’s writings [36]. It is nicely advocated today by Jean-Yves Girard [41, 42]. This line of thought conveys the seeds of a luminous synthesis between proof theory and model theory. There is little doubt (to the author at least) that along with game semantics and linear logic, the realizability techniques developed by Jean-Louis Krivine (see [61] in this volume) will play a key part in this highly desired unification. On the other hand, much remains to be understood on the model-theoretic and proof-theoretic sides in order to recast in proof theory the vast amount of knowledge accumulated in a century of model theory.

I should add that I have no objections to RFC Walters’ work, which I find very fine and enjoyable. His approach clearly works for him, and I am happy to enjoy its fruits. However, in my own experience as a mildly categorically-inclined computer scientist, the relationship between models and syntax is dialectical. I mean this quite literally: at this very moment I am working on a new categorical semantics for a programming language I have designed, based on the deficiencies I discovered while fully implementing my old semantics, which had in turn arisen from an attempt to rationalize certain well-known implementation techniques.

Posted by: Neel Krishnaswami on May 7, 2011 7:42 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

I’m not sure Melliès is disagreeing with Walters here. It seems to me that both are advocating the viewpoint of categorical logic, which denies any qualitative difference between syntax and semantics. In particular, I think that a purely syntactic presentation of logic, if it’s done right, amounts precisely to the axiomatization of a class of models, and cut-elimination (as Lambek argues) is a coherence, or soundness-and-completeness, result for that choice of syntax. Think of the rules governing conjunction and implication, for example: if you include proof-rewrites, as you should, then what you’ve given is an axiomatization of cartesian closed (bi)categories.

Maybe the heart of Walters’s argument is simply that when you specify syntax you are really describing, or beginning to describe, free models of some sort, so you should make sure to finish describing them completely.

Posted by: Finn Lawler on May 7, 2011 9:33 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

I wouldn’t study group theory if I didn’t know a few groups, for example.

But would you study set theory even if you didn’t know a few sets?

We study axiomatic group theory because we have “naturally occurring models”; but to construct such models requires some ambient theory, which the mathematician usually takes to be set theory. When we do axiomatic set theory, we can construct models (“universes of sets”) in some ambient theory (which might itself be a bigger set theory) — or we can just take the notion of “set” as undefined, or better, defined by its behavior stipulated within the theory. I have the impression that some of these people, at least, consider this sort of calculus as more analogous to set theory: the notions in it are undefined, or rather defined by their behavior stipulated within the theory. It feels different to mathematicians because the “behavior” is “computational” or “procedural” rather than “axiomatic” — we stipulate what the objects do rather than what is true about them — but of course that is a natural outgrowth of the idea of “computer science”.

From this perspective, the question of looking for set-theoretic models, while interesting, is more like the question of comparing two foundational systems which are already known to be valid and useful in their own right, rather than justifying a questionable system by finding models of it within a standard foundational system. A mathematical model-theorist tends to think of “consistency” of a system as being proven by finding (set-theoretic) models for it; computer scientists tend instead to think of “consistency” as a more intrinsic syntactic property like “false is not derivable”, which (for their sorts of systems) can usually (I gather) be proven by an analysis of the derivation rules and a magical incantation called “cut-elimination”.

But none of this makes me any less inclined to insist on an answer to “what is a blue-calculus term,” since even a “technically undefined” word like “set” comes with an intuition for what it represents, and therefore what its behavior in the theory is designed to capture. That’s what I’m after.

Posted by: Mike Shulman on May 8, 2011 4:27 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Mike wrote:

But would you study set theory even if you didn’t know a few sets?

Me? No, if I hadn’t known examples of sets I would have been uninterested in set theory.

Already in grade school they taught me examples like ‘here is a set of five sheep’, ‘here is a set of five people’, and so on: this is how they taught arithmetic back then. Later I learned about the set of real numbers, and so on. So by the time I got to college, I was ready for axiomatic set theory.

Of course there’s a difference between formal models within mathematics, and ‘informal models’, by which I mean both real-world examples and somewhat imprecisely defined mathematical examples like the grade-school ‘number line’. But I think we agree, they’re both important:

… even a “technically undefined” word like “set” comes with an intuition for what it represents, and therefore what its behavior in the theory is designed to capture.

Posted by: John Baez on May 8, 2011 5:41 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

John wrote:

I tend to feel confirmed in my suspicions whenever people talk about ‘the problem of finding a model of…’ some logical system, as if it were a difficult thing. And that makes me think: why are you talking about it, then? I wouldn’t study group theory if I didn’t know a few groups, for example.

Synthetic differential geometry is interesting from this point of view. In my understanding of the history, the axioms for a good topos of smooth spaces came long before models for it were found. That would seem to be a strange situation: surely the pioneers would have been bothered, to say the least, by the possibility that no model existed?

My knowledge of this is very sketchy — someone please correct me if I’m wrong.

Posted by: Tom Leinster on May 8, 2011 6:00 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Synthetic differential geometry is interesting from this point of view. In my understanding of the history, the axioms for a good topos of smooth spaces came long before models for it were found. That would seem to be a strange situation: surely the pioneers would have been bothered, to say the least, by the possibility that no model existed?

I can’t be sure about what exactly people thought in the very early days of the idea, but my impression is that the axioms were directly motivated as an attempt to axiomatize the use of infinitesimals in algebraic geometry.

And indeed it is very easy to see that simply the presheaf topos over opposites of finitely presented commutative rings is a model of the SDG axioms (at least for most of them, there is some flexibility in what exactly the collection of axioms is). This is the example Anders Kock likes to spell out in his books, and I can’t imagine that this would not have been known to be a model early on, since it is so simple.

The problem that took much longer to solve was finding models that actually connect to differential geometry. These are nowadays mostly known as “well-adapted models”, which usually means at least that the category of smooth manifolds has a full and faithful embedding. As far as I am aware such models were first written down by Dubuc.

Posted by: Urs Schreiber on May 8, 2011 11:01 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Ah, that would explain it. Thanks. What had stuck in my mind was that Moerdijk and Reyes had done important work on finding models for SDG, and Moerdijk is too young to have been around in the early days of SDG.

Posted by: Tom Leinster on May 8, 2011 11:07 AM | Permalink | Reply to this

Re: Higher Categories for Concurrency

One more comment on the motivation of synthetic differential geometry from models (I am travelling and hence have to post with interruptions ;-):

you (Tom) once amplified here that “sheaves do not belong to algebraic geometry”. But in fact I think that most of algebraic geometry does not belong to algebraic geometry. Lawvere – with axiomatizing SDG and then cohesive toposes – has tried (in my understanding) to fill in some of the gaps in the general abstract formalization of geometry that Grothendieck had left open.

But much more has been left open, and a good many further axioms are waiting to be extracted from the examples considered in algebraic geometry.

For instance what Jacob Lurie does in Structured Spaces (writing down what “locally ringed space” and “scheme” means in full abstract generality) and in Deformation Theory (writing down what the canonical stack of quasicoherent sheaves is in full abstract generality) is something that, except for all the ∞-prefixes, could have – and maybe should have – been said easily for instance in Johnstone’s Elephant.

For ages people had been looking at the models and not bothered to write down the theory. SDG is just one small aspect of this, to my mind.

Posted by: Urs Schreiber on May 8, 2011 1:55 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

I’ve heard people express surprise and disappointment that there’s no good introductory book on the “functor of points” approach to schemes (or algebraic geometry), i.e. the approach in which a scheme is defined as a functor CRing → Set satisfying certain conditions. I imagine that if such a book were written, it would contain a great deal of “algebraic geometry that does not belong to algebraic geometry”.

Posted by: Tom Leinster on May 8, 2011 6:00 PM | Permalink | Reply to this

Re: Higher Categories for Concurrency

Instead of talking about how you guess we could think of types in the blue calculus, how about talking about how we should think of them? Or how you do think of them?

Hear, hear.

If possible, I’d like to continue using “type” to mean the same thing that it usually does. I presume that when programmers program concurrent algorithms, they continue to use variables which have types such as int, char, string, etc. If a term of the blue calculus doesn’t have a type in that sense, then that’s good to know! Are traditional types present in some other way? And how should we think of the domain/codomain of the 1-morphisms in the semantical bicategory, if not as traditional types?

Posted by: Mike Shulman on May 6, 2011 4:12 AM | Permalink | Reply to this

Thin indent bars

If your browser is Firefox and you install the Stylish extension then you can write a rule to make the indent bars so thin that the nesting is not obnoxious. Here is a rule I just wrote:
@namespace url(http://www.w3.org/1999/xhtml);
@-moz-document domain("golem.ph.utexas.edu") {
 .comments-nest-box {
    border-left: .2px solid #444400 !important;
    padding-left: 2px !important;
 }
}
I wrote this in about 2 minutes with little thought about what it means. I don’t know why the displayed vertical bars and spacing are so big, other than that they are some default nobody ever thought to change. Which means all the complaints and workarounds might be satisfied just by changing the site CSS.
Posted by: Rod McGuire on May 21, 2011 8:34 AM | Permalink | Reply to this

Post a New Comment