## March 1, 2008

### Computer Scientists Needed Now

#### Posted by John Baez

Thanks to advice from Andrej Bauer, Robin Houston and Todd Trimble, I’ve beefed up the logic section of this paper:

For example, I’ve included a longer ‘overview’ to give the non-logician reader a slight feel for how proof theory met category theory in the development of 20th-century logic. I hope there are no egregious errors. If you catch any, let me know.

But now I really need comments from anyone who likes categories and theoretical computer science!

In fact, the final ‘computation’ section of the paper is still very rough. It introduces combinators but doesn’t really explain them well yet… and perhaps worse, it doesn’t say anything about the lambda calculus!

I plan to fix some of these deficiencies in the next week. But, I figure it’s best to get comments and criticism now, while there’s still time to take it into account.

Posted at March 1, 2008 1:47 AM UTC

TrackBack URL for this Entry:   http://golem.ph.utexas.edu/cgi-bin/MT-3.0/dxy-tb.fcgi/1617

### Re: Computer Scientists Needed Now

A really minor comment: the definition of currying on page 57 gives the type as

$((X\otimes Y)\multimap Y)\multimap(Y\multimap(X\multimap Z))$

whilst virtually all the references on the web use the form

curry : ((a,b) -> c) -> (a -> b -> c)

(keeping it in the original notation CS people will be used to)

e.g., see http://en.wikipedia.org/wiki/Currying and http://www.cs.nott.ac.uk/~gmh/faq.html#currying

so you might want to explain why you’ve chosen the opposite definition. (In fact, you seem to use the opposite convention yourself in the calculus rules on pages 58-59.) Also, you motivate product types with “This gives a less awkward way to deal with programs that take more than one argument.” I’d say you probably need to go deeper and say that product types make it easier for “functions” to output intermediate data structures, so you can write programs piece by piece (there’s no obvious “lack of ease” in having more than one argument in the curried style in the examples you’ve presented in the paper).
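In Python notation (used here just for concreteness; `curry_std`, `curry_paper`, and `sub_pair` are made-up names, not from the paper), the two conventions look like this:

```python
# Standard currying, as in the references above:
def curry_std(f):
    return lambda x: lambda y: f((x, y))

# The paper's convention swaps the order of the curried arguments:
def curry_paper(f):
    return lambda y: lambda x: f((x, y))

# A non-commutative function makes the difference visible:
def sub_pair(p):
    x, y = p
    return x - y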

Posted by: bane on March 1, 2008 10:50 AM | Permalink | Reply to this

### Re: Computer Scientists Needed Now

Oops: on p. 57 the type is

$((X\otimes Y)\multimap Z)\multimap(Y\multimap(X\multimap Z))$

(I got confused about where I was in the LaTeX.)

Posted by: bane on March 1, 2008 10:58 AM | Permalink | Reply to this

### Re: Computer Scientists Needed Now

We explained that way back on page 26, when we first introduced currying. The reason $X$ and $Y$ appear switched is that our string diagrams either need to be left closed or to draw the hom backwards: given

(1)$\array{X \downarrow Y \downarrow \\ f \\ Z \downarrow}$

we can either pull down $X$ on the left

(2)$\array{Y \downarrow \\ \tilde{f} \\ X \uparrow Z \downarrow}$

which gives the switched order in the paper or pull down $Y$ on the right

(3)$\array{X \downarrow \\ \tilde{f} \\ Z \downarrow Y \uparrow}$

where we’d have to denote the output as $Z ⟜ Y$ instead of $Y \multimap Z$.

Posted by: Mike Stay on March 1, 2008 4:26 PM | Permalink | Reply to this

### Re: Computer Scientists Needed Now

My fault; I jumped straight to the computer science and didn’t read the earlier parts. Might still be worth a brief reference to that earlier point.

Posted by: bane on March 1, 2008 4:39 PM | Permalink | Reply to this

### Re: Computer Scientists Needed Now

hello everyone
for info, Lambda Associates
cordially yours
bruno

Posted by: bruno frandemiche on March 1, 2008 8:30 PM | Permalink | Reply to this

### Re: Computer Scientists Needed Now

Thanks, bane! Ah yes, the struggle between the usual convention

$(X \otimes Y) \multimap Z \cong X \multimap (Y \multimap Z)$

and our

$(X \otimes Y) \multimap Z \cong Y \multimap (X \multimap Z)$!

This is the kind of thing that gives me ulcers. I don’t mind explaining our convention again, because I expect most readers will read portions of this paper instead of chugging through the whole thing end-to-end… and the convention that looks really nice when working with string diagrams winds up looking really annoying 20 pages later, when we get to currying in computer science.

It’s okay to flip-flop between the two conventions in a symmetric or even braided monoidal category, where we have $X \otimes Y \cong Y \otimes X$. But in a mere monoidal category, we don’t have this, so there are really ‘left closed’ and ‘right closed’ monoidal categories — two different notions! Of course, if $C$ is left closed, $C^{co}$ is right closed. But still…

So, in the merely monoidal case, there’s a strange distinction between what works best when writing expressions in a linear way (as above), and what works best when writing them 2-dimensionally as string diagrams.

Thanks for offering a better justification of product types. I’ll use it!

Now the big question: should I acknowledge ‘bane’ at the end of this paper?

Posted by: John Baez on March 1, 2008 7:37 PM | Permalink | Reply to this

### Re: Computer Scientists Needed Now

I only read Section 1.4. In general, I think it’s well done (besides the parts that are obviously unfinished). I like the clear and simple language, although it sometimes degenerates into excessive handwaviness, as I’ll elaborate.

Page numbers are what my PDF reader tells me. (Adding page numbers to the document itself might help reviewers.)

Section 1.4.1:

p. 52:

“While a Turing machine…” – This sentence is strangely non-parallel. I suggest contrasting Turing-machine-as-hardware with lambda-calculus-as-software.

“The most famous of these are Haskell and Lisp” – uh, I say this a Haskell guy, but the ML family of languages is just as famous as Haskell if not more so.

“So, everyone majoring in computer science at MIT…” – see http://www-tech.mit.edu/V125/N65/coursevi.html – you might want to fact-check this before the book goes out.

“In imperative programming, a program…” – this sentence is vague, and so is the rest of the paragraph. In particular, “how to change state” hides the real difference between imperative languages and functional languages. Languages in both paradigms let you write programs that manipulate state. The distinction is whether the state is *explicit* or not. If I were writing this I’d explain that the imperative paradigm takes a more operational, or procedural, approach to describing algorithms, while the functional paradigm emphasizes more mathematical, or declarative, approaches.
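A tiny Python sketch of the explicit-state point (`tick` and `two_ticks` are made-up examples, not from the paper): what an imperative program would do by mutating a counter in place, a functional program does by passing the state in and returning the new state.

```python
# A counter that would be a mutable variable in imperative code; in a
# functional style the state is passed and returned explicitly.
def tick(state):
    new_state = state + 1
    return new_state, new_state   # (result, new state)

# Sequencing two ticks means threading the state through by hand:
def two_ticks(s0):
    _, s1 = tick(s0)
    result, s2 = tick(s1)
    return result, s2
```

Here `two_ticks(0)` yields `(2, 2)`: the state is visible at every step rather than hidden in a mutable cell.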

Same para: “Functional programming is seemingly more limited…” – watch out when you use weasel words like “seemingly”. Whether functional programming seems more limited depends on your perspective. It may seem more limited to C programmers. But other people aren’t necessarily coming at it with that bias. It’s also odd and misleading to describe functional languages’ lack of implicit state as a “limitation”, since as you well know, languages with or without implicit state are equally computationally powerful.

There’s some confusion and vagueness in this section as to whether by “functional languages” you mean “purely functional languages”, which you surely do. An impure functional language (and some people say “if it isn’t pure, it isn’t functional”) like Lisp doesn’t have the nice theoretical properties you want. The only widely used programming language that does is Haskell 98 (not that anyone really programs in Haskell 98, but with the language extensions that most people do use, Haskell isn’t pure either…)

“In fact, the best-behaved functional programming languages…” – “best-behaved” is quite a subjective term. Programmers may well think that a strongly normalizing language (such as the simply-typed lambda-calculus) is *not* well-behaved – most programmers like being able to write nonterminating programs! I think this entire paragraph needs a lot of work. “Input into itself”, for example, is handwavy. You’re eliding a lot of subtlety here. At least to me, when I was learning this subject, the connection between types and the ability/lack thereof to write nonterminating programs was far from obvious.

“…the typed lambda calculus is just another way of talking about cartesian closed categories”: “Just another way” is rather dismissive. I suggest saying “Lambek found a way to interpret the typed lambda-calculus in terms of cartesian closed categories,” or something like that, that doesn’t imply that cartesian closed categories are the more privileged theoretical concept. Along the same lines, I think your assertion that “theoretical computer science has been closely entwined with category theory” is controversial. Certainly there has been a lot of work on the connections between CT and theoretical CS, but not all theoretical computer scientists think that CT is central to their work. It might be good to get more opinions on this.

“Combinatory logic is closely related to the lambda calculus…” – this is vague. Either explain that the two formalisms are related as being alternative minimalist Turing-complete computational systems (I don’t see what other relationship you might mean), or don’t say it at all.

p. 53

“Strips everything down to the bare minimum” is vague. Emphasize that combinatory logic is different because it avoids naming and the complicated machinery (substitution) associated with it – that would be easier to understand. “…In some ways more awkward…” – Again, this is weaselly. Some people think lambda calculus is more awkward because you need to worry about substitution. Don’t state opinions as facts.

“…it seems more efficient…” – Be bold and don’t say what “seems” to be more efficient; say what you think *is* more efficient.

Section 1.4.2 (still on p. 53)

1st para: Rewrite to be parallel: “Mathematicians might apply… Functional programmers apply…” (Then you get the added bonus of active voice.)

“…suppose we have programs…” “…compute the obvious functions…” – I think you’re using “program” and “function” interchangeably. Use a consistent lexical set, or if you do mean something different, explain the difference between “program” and “function”.

You introduce equational reasoning here in a very vague way. Not all readers may be used to this style of writing programs.

p. 54

“To avoid problems…” – super-vague. Say *what* problems a non-confluent rewrite system causes. Or just declare that we want to talk about confluent rewrite systems, without justifying it.

Your definition of “confluence” is wrong or at least very misleading: when you say “back to the same expression”, I think you’re trying to say that when there’s a choice of rules to apply, you’ll still get the same normal form in the end, either way. But the resulting normal form is not the original expression, which “back to the same expression” implies. This whole paragraph is pretty handwavy.
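To make this concrete, here is a toy confluent rewrite system in Python (a made-up illustration, not from the paper): a term is either an integer (a normal form) or an addition node, and contracting the leftmost or the rightmost redex first yields the same normal form, which is not the original expression.

```python
# A term is an int (normal form) or a tuple ("add", left, right).
def step(expr, leftmost=True):
    """Perform one rewrite step, contracting the leftmost or rightmost redex."""
    if isinstance(expr, int):
        return expr                       # already a normal form
    _, l, r = expr
    if isinstance(l, int) and isinstance(r, int):
        return l + r                      # contract this redex
    if leftmost:
        if not isinstance(l, int):
            return ("add", step(l, leftmost), r)
        return ("add", l, step(r, leftmost))
    else:
        if not isinstance(r, int):
            return ("add", l, step(r, leftmost))
        return ("add", step(l, leftmost), r)

def normalize(expr, leftmost=True):
    """Apply rewrite steps until no rule applies."""
    while True:
        nxt = step(expr, leftmost)
        if nxt == expr:
            return expr
        expr = nxt

ex = ("add", ("add", 1, 2), ("add", 3, 4))
```

Both `normalize(ex, leftmost=True)` and `normalize(ex, leftmost=False)` reach the same normal form `10`, even though the intermediate expressions differ.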

I suggest putting in a citation for those who want further reading on rewrite systems: Baader and Nipkow’s _Term Rewriting and All That_ is a good one.

“We can handle…” Avoid noise words like “using”. “‘Currying’ is a trick that helps us handle functions…” In the next sentence, what’s “this”? Well, currying doesn’t turn a function of several arguments into anything – currying lets us *define* a function of several arguments as a function that (blah blah). So say that.

“…we can imagine a program plus that adds integers” – this paragraph seems to end a bit abruptly.

“…that branch to the left:” – I find that sentence unnecessary; you can just look at the picture.

p. 55

“…has no well-defined meaning.” – it’s nonsensical to talk about whether a program has a well-defined meaning unless you’ve defined the semantics for the language in which you’re programming. I would say this differently. It’s possible to define a very sensible language in which “(double Tuesday)” has a meaning, after all. Instead I would pick an example of an ill-typed program that’s actually *inconsistent*: for example, one that uses the same value in an integer context and a string context. Now, this could have a perfectly good meaning as well (say, in Tcl), but then you can point out that in most sensible languages it would be ill-typed because strings and integers have different machine representations.

“…instantly complain…” – it’s unclear what “instantly” means. Instead, I would emphasize the compile-time/run-time distinction.

It would be nice to give a nod to type inference in this section, so readers aren’t left with the impression that you always have to write explicit types in a functional language.

“in a variety of ways” – delete that phrase; the sentence has equivalent meaning without it.

“…there is a new type…” – the use of “new” is strange; a function type is no newer than its components. “another type”, perhaps?

“Any program is built…” – this sentence is extremely confusing. Rewrite it, preferably as two or three separate sentences.

“…a datum of type X…” – so f is like a constant function. Say that, perhaps.

p. 56

“…can be reinterpreted as…” – because you use the passive voice, I am confused by what you mean. Who does the reinterpreting? Saying it would clarify the idea.

“In functional programming, we take advantage…” – I have no idea what you’re trying to get at with this sentence.

“Computer scientists call…” – again, the “new”/”old” language is confusing. How about “building more complicated types from simpler types”?

“…a less awkward way…” – Again, don’t make unsubstantiated assertions. Whether currying is more awkward than tupling is a matter of taste.

Skipping to p. 58:

The void type (or unit type – which might be less confusing than “void type”, since the “void” type in C has a very different semantics than the unit type in Haskell or ML, and I think you really mean the latter) is useful in purely functional languages, too, not just in languages with side effects. See Pierce’s _Types and Programming Languages_. Also, “languages with side-effects” is a strange thing to say, because all programming languages allow you to write programs that have side effects. But a purely functional language provides a type system that controls and encapsulates effects. I would distinguish between pure/impure languages rather than languages with/without side effects.

Posted by: Tim Chevalier on March 1, 2008 9:34 PM | Permalink | Reply to this

### Re: Computer Scientists Needed Now

Bottom of page 55 where you explain that datatypes are objects and morphisms are equivalence classes of programs: depending on how it fits with your overall scheme, you might want to mention that the rewrite rules are 2-cells between programs, so you can view the datatypes-programs-rewrite rules as a 2-category.

Page 58: call it the “unit type”, whose only element is “unit” (write it as the empty tuple, perhaps?), and mention that this is “sort of like the void type in C/C++/Java”. Please do not propagate the horrible misnomer “void type” (after all, the type is not void, is it?)

There are two main uses for the unit type: (1) to indicate that no result is returned, and (2) for thunking.
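A minimal Python sketch of use (2), thunking: in an eager language one delays a computation by abstracting over unit; the Python analogue is wrapping it in a zero-argument function. (The names `expensive` and `log` are made up for illustration.)

```python
log = []

def expensive():
    log.append("ran")     # record that the computation actually happened
    return 2 + 3

# Thunking: wrap the computation in a function of no arguments
# (the analogue of abstracting over the unit value in ML).
t = lambda: expensive()

# At this point nothing has run yet, so log is still empty.
result = t()              # forcing the thunk runs the computation
# Now log == ["ran"] and result == 5.
```

The side-effect log makes the delay observable: `expensive` runs only when the thunk is forced, not when it is built.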

Speaking of thunking, you have the opportunity to explain the difference between eager evaluation (call-by-value) and lazy evaluation (call-by-name), and relate these two concepts to two symmetric monoidal closed structures (please verify, I might be slightly wrong here):

(a) call-by-name is like Sets with products and exponentials so that you actually get a CCC (eta-reduction is valid)

(b) call-by-value is like Sets and partial functions, where the structure is that of a smash product and the space of partial maps. Eta reduction is not valid because, while a term t might diverge, lambda x. t x does not.

I do not understand why you insist on confluent and terminating rewrite rules. That automatically prevents you from having a Turing-complete programming language, which is sort of a bummer. You could just say that in the name of simplicity you assume the rewrite rules to be deterministic (but that computer scientists also deal with non-deterministic computation), and that in principle a computation can be diverging (infinite), and that there is no way around it if we want a Turing-complete language.

Posted by: Andrej Bauer on March 1, 2008 11:24 PM | Permalink | Reply to this

### Re: Computer Scientists Needed Now

Sorry, I got the two structures wrong.

(a) call-by-value: pointed sets and strict maps (maps preserving the point, think of the point as “undefined”), the structure is: tensor is smash product and hom is the set of strict maps.

(b) call-by-name: pointed sets and _arbitrary_ maps (no need to preserve the point), with the usual ccc structure. This is equivalent to non-empty sets and maps. The point of having the points is that in a typical programming language all types are inhabited.

For a more refined analysis one would need fancier categories, such as those appearing in domain theory. You probably do not want to get into that.

Posted by: Andrej Bauer on March 2, 2008 12:31 AM | Permalink | Reply to this

### Re: Computer Scientists Needed Now

As with the others, I agree that “unit” is a much better name than “void”, and it is the actual name used in functional languages: it’s exactly the name used by ML, while Haskell writes it (), obviously relating to empty tuples, and reads it as “unit”.

In a pure, strongly normalizing language, there is almost no use for the unit type practically speaking. When non-termination is allowed, then the unit type can be used by eager languages, e.g. ML, to delay evaluation, but it is still pretty useless in lazy languages, e.g. Haskell.

Specifically in response to Andrej: the connection between logic and programming becomes much less compelling in the face of non-termination. In particular, adding the Y combinator makes the logic corresponding to the language inconsistent. Symmetric monoidal closed categories often don’t come up until we start talking about non-termination, concurrency, or explicitly about resources. However, too “strong” a side-effect breaks the nice properties, and we start to talk about premonoidal categories, usually built up into Freyd categories.

I think it would be very helpful to mention some of the most popular combinators and how selecting different subsets of them gives different families of logic. In particular, the most well-known set of combinators is the S, K, and I combinators (I is redundant, being SKK). It happens that S and K correspond to contraction and weakening respectively. (I is the identity.) In the untyped combinatory calculus, S and K are universal. Some other popular combinators are B (= compose) and C (= braid). B, C, and I are often used for linear combinatory logics.
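For concreteness, these combinators can be written as curried Python functions (a sketch in untyped style, matching the untyped combinatory calculus):

```python
# The classic combinators, curried:
def S(x):
    return lambda y: (lambda z: x(z)(y(z)))

def K(x):
    return lambda y: x

# I is redundant: it is definable as S K K.
I = S(K)(K)

# B is composition, definable as S (K S) K:
B = S(K(S))(K)               # B(f)(g)(x) == f(g(x))

# C is the argument swap ("braid"), i.e. flip:
def C(f):
    return lambda y: (lambda x: f(x)(y))
```

Unwinding the definitions: `S(K)(K)(z)` reduces to `K(z)(K(z))`, which is `z`, so `I` really is the identity; likewise `B(f)(g)(x)` reduces to `f(g(x))`.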

Another significant thing is that application corresponds to modus ponens; this makes up the very direct correspondence between combinatory logic and Hilbert-style proofs. The sequent calculus corresponds more closely to the lambda calculus.

Another important thing is that most typed functional languages are parametrically polymorphic. A property called parametricity leads to what are called free theorems. These free theorems are pretty much the (di)naturality equations. So, in short, if type constructors are functors, (parametrically) polymorphic functions are natural transformations.
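A small Python illustration of a free theorem: list reversal is “parametrically polymorphic” (it never inspects the elements), so it commutes with mapping any function over the list, which is exactly the naturality square for `reverse` as a natural transformation from the list functor to itself.

```python
# Free theorem for reversal:
#   map(f, reverse(xs)) == reverse(map(f, xs))
xs = [1, 2, 3]
double = lambda n: 2 * n

lhs = list(map(double, reversed(xs)))          # map after reverse
rhs = list(reversed(list(map(double, xs))))    # reverse after map
# Both sides are [6, 4, 2].
```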

You don’t mention any combinators corresponding to the dualization operation. They exist, but are much less common. They are related to what are called control operators in computer science which in turn are related to continuations. Usually the way they are explained is by rewriting the program in continuation passing style and then the control operators can be given explicit functional definitions. Continuations are related to logical negation and this transformation into continuation passing style happens to correspond to exactly the embedding of classical propositional logic into intuitionistic propositional logic you mentioned in the section on logic. Getting well-behaved control operators is a bit tricky; the lambda-mu calculus is the thing to look at here.
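A minimal Python sketch of the continuation-passing transformation mentioned above (`add_cps` and `example` are made-up names): instead of returning its result, a CPS function hands the result to an explicit continuation.

```python
# Direct style:
def add_direct(x, y):
    return x + y

# Continuation-passing style: the extra argument k receives the result.
def add_cps(x, y, k):
    return k(x + y)

# (1 + 2) + 3 in CPS: each step passes its result on to the next.
def example(k):
    return add_cps(1, 2, lambda s: add_cps(s, 3, k))
```

Running `example` with the identity continuation, `example(lambda r: r)`, recovers the direct-style answer `6`; control operators work by capturing and manipulating the `k` that is threaded through.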

The papers on the Categorical Abstract Machine give a good and concrete overview for “compiling” lambda terms to categorical combinators.

Finally, there is a “Curry-Howard correspondence” for logic programming languages as well.

Posted by: Derek Elkins on March 2, 2008 1:15 AM | Permalink | Reply to this

### Re: Computer Scientists Needed Now

Derek wrote:

I think it would be very helpful to mention some of the most popular combinators and how selecting different subsets of them gives different families of logic. In particular, the most well-known set of combinators is the S, K, and I combinators (I is redundant, being SKK). It happens that S and K correspond to contraction and weakening respectively. (I is the identity.) In the untyped combinatory calculus, S and K are universal. Some other popular combinators are B (= compose) and C (= braid). B, C, and I are often used for linear combinatory logics.

Thanks for all your comments! Mike had some detailed remarks about the $S, K$ and $I$ combinators. I’ve temporarily removed them, but I plan to restore them as part of an expanded ‘overview’. Seeing these combinators expressed in terms of the lambda calculus and also in terms of cartesian closed categories will be a good way to help our readers understand all three subjects… at least, it helped me.

In fact, Mike had originally set up a ‘typed linear lambda calculus’ that corresponds to symmetric monoidal closed categories in the same way that the usual typed lambda calculus corresponds to cartesian closed categories. However, it seems a bit tiresome to enforce the ‘no duplication and deletion of variables’ constraint in this sort of language — one needs to pay careful attention to free vs. bound variables, which is not what we want our readers to spend their time thinking about here. Using ‘linear combinators’ seems to work more easily. So, Mike moved in this direction.

It sounds like you’ve read a bunch of stuff about ‘linear combinatory logics’. The only really detailed reference I know is Abramsky, Haghverdi and Scott’s Geometry of Interaction and Linear Combinatory Algebras. Which others should I know about?

Posted by: John Baez on March 2, 2008 6:15 AM | Permalink | Reply to this

### Re: Computer Scientists Needed Now

Most of the work on linear logic in computer science works with variants of linear lambda calculi. Most of what I know has come from a variety of sources and interpolation.

Looking for stuff now, Gavin Bierman’s thesis “On Intuitionistic Linear Logic” seems like a decent resource and briefly covers linear combinatory logic. Various other papers on his site seem relevant as well. http://research.microsoft.com/%7Egmb/publications.aspx

Posted by: Derek Elkins on March 2, 2008 10:29 PM | Permalink | Reply to this

### Re: Computer Scientists Needed Now

I spotted a small typo: Roberto di Cosmo’s last name is incorrectly spelled “di Cosimo” throughout the paper.

Posted by: A.G. on March 2, 2008 12:51 AM | Permalink | Reply to this

### Re: Computer Scientists Needed Now

At the top of page 2 there is a minor typo.
The second sentence reads: “The idea is that a proof is process going us go from one …”. It should probably read “… process taking us from one …”

Posted by: Mark Biggar on March 2, 2008 3:55 AM | Permalink | Reply to this

### Re: Computer Scientists Needed Now

WOW! Thank you all for coming out of the woodwork and posting these wonderfully detailed, wonderfully helpful comments.

I won’t thank you all by name now: I’ll do that in the paper. I may have more questions later. But, I really must thank Tim Chevalier by name right now for his line-by-line dissection of suboptimal wordings — and even better, suggested improvements. It takes a lot of work to go through prose and write up detailed comments like this. I do it for my grad students. It’s a real pleasure and an honor to have someone do it for me.

Mike Stay does programming and computer science for a living, but I don’t… so to me, it’s a slippery and mysterious business — unlike, say, mathematical physics, where I’ve put in enough time to have a feel for it. To get happy with this paper I really had to take everything he wrote in this section and rewrite it line by line, simply to see if I understand it. (If I can’t say something in my own voice, I don’t understand it.) I’m in the middle of that process now. So far I feel quite uncomfortable, because there’s a lot I don’t understand. That’s probably why a lot of my sentences have weaselly wordings.

So, getting all this feedback helps a lot.

By the way, I’ll be glad to use the term ‘unit type’ instead of ‘void type’ — ‘void’ makes me think of the empty set, or more generally initial objects, instead of unit objects.

Posted by: John Baez on March 2, 2008 5:50 AM | Permalink | Reply to this

### Re: Computer Scientists Needed Now

I have some minor observations about the latest draft of the logic section. The first few are historical points. The last two are points about your discussion of the sequent calculus.

1. (p 35)

“One reason is that some logicians deny that certain familiar principles are actually valid. But there are also subtler, more interesting reasons.”

I would consider changing that last sentence to “But there are other reasons as well.” After all, Martin-Lof’s reasons for rejecting classical reasoning are very subtle and very interesting, both mathematically and philosophically!

2. (p 35)
“For example, around 1907 Brouwer [51] began developing ‘intuitionistic logic’. “

It is true that Brouwer developed intuitionism as a philosophy of mathematics, but I don’t think it is correct to say that he had any *direct* role in the development of intuitionistic logic. This distinction should go to his student Heyting, along with Kolmogorov in Russia. As far as I understand, Brouwer was resolutely “anti-formalist” and actually did not approve of Heyting’s formalization of intuitionism in logical terms. If I were writing this paper, I would want to avoid all of these historical fine points and just say something like the following:

“For example, in the late 1920s and early 1930s, logicians and mathematicians, under the influence of Brouwer’s ‘intuitionistic’ philosophy of mathematics, began to develop formal logics that rejected the inference from ‘it is not the case that it is not the case that X’ to ‘X’. For obvious reasons, these logics were dubbed ‘intuitionistic’.”

3. (p 36)

“In the 1930’s, this idea was made precise by Godel [47] and Gentzen [44].”

While it is true that Godel and Gentzen made this idea precise in the 1930’s, you might also want to mention that Kolmogorov, working independently of this entire tradition, made it precise in 1925! I believe that you can find this result in his paper “On the principle of the excluded middle” which is collected in Van Heijenoort’s “From Frege to Godel” anthology.
(You might also mention that this paper contains the earliest formalization of intuitionistic logic, predating Heyting’s by about five years.)

Incidentally, when these results of Godel and Gentzen are mentioned by historians of logic, they almost always also mention the closely related theorem proved by Glivenko in 1929: http://en.wikipedia.org/wiki/Glivenko%27s_theorem.

4. (p 38)

“Gentzen listed enough inference rules to describe the whole classical propositional calculus, and a subset that give the intuitionistic propositional calculus.”

I’m no expert, but I am fairly certain that one can’t “just” take a subset of the inference rules for the classical sequent calculus in order to get the intuitionistic sequent calculus. One must also restrict the form that sequents can take (at most one conclusion) along with stipulating a new rule for introducing the “or” connective on the left. Maybe you are trying to say something different though? If you are, I think this sentence needs restating.

Similarly, the sentence directly after the one I just quoted seems to say something false. First of all, you are talking about the “classical” sequent calculus, not “classical logic.” Second, the “intuitionistic” sequent calculus does not allow for multiple formulas on the right of the turnstile in a sequent, so it does not even make sense to say that such an equivalence holds in the “intuitionistic” sequent calculus. But again, perhaps I am misunderstanding what you are trying to say.

5. (p 39)

Your discussion of the sub-formula property and of cut-elimination seems excessively handwavy. That is, you say what the sub-formula property is, but when it comes time to say why it is so important, you just wave your hands and mutter some vague stuff about the set of proofs for a given sequent. Similarly, you say what cut-elimination is, but when it comes time to say why it is so important, you just mutter some vague stuff about proof-theorists being obsessed with the theorem. You should say why the subformula property makes proof search behave nicely. You should say why proof-theorists are “obsessed” with cut-elimination (for instance, the relation between cut-elimination and consistency). If you think briefly mentioning these things might take you too far afield, I suggest that you just write

“Since any proposition has just finitely many subformulas, this makes the set of possible proofs of a given sequent quite manageable. For more on this point and its application to automated deduction, we recommend [references].”

Similarly:

“From the two sequents on top, the cut rule gives us the sequent below. Note that the intermediate step A does not appear in the sequent below. It is ‘cut out’. So, the cut rule lacks the subformula property. One of Gentzen’s great achievements was to show that any proof in the classical propositional (or predicate) calculus that can be done with the cut rule can also be done without it. This is called ‘cut elimination’. To see why cut-elimination is such a deep and important result, we recommend [references].”

In the actual paper, it *seems* like you are handwaving because you don’t know how to succinctly explain (1) why the sub-formula property is important and (2) why proof-theorists are so concerned with cut-elimination. If you are going to handwave, you should make it seem like you are doing so because you don’t want to get sidetracked with details that you could otherwise easily fill in. I didn’t get this sense when I read the portions about cut-elimination and the sub-formula property.

Having said all this, let me say that I am extremely excited about this paper and can’t wait to see the finished version!

### Re: Computer Scientists Needed Now

From my reading of his philosophical writings, I second that comment on Brouwer, who was very distrustful of logic. I don’t have a quotation to hand.

The Stanford Encyclopedia has this to say:

Brouwer holds that mathematics is an essentially languageless activity, and that language can only give descriptions of mathematical activity after the fact. This leads him to deny axiomatic approaches any foundational role in mathematics. Also, he construes logic as the study of patterns in linguistic renditions of mathematical activity, and therefore logic is dependent on mathematics (as the study of patterns) and not vice versa.

I suppose you might say he was only hostile to the logic of the time (classical) to the extent that it had lent itself to the firming up of the faulty intuitions of classical mathematics, and that a codification of the correct intuitions would be useful. But I don’t think this is right. There’s no sign of his approving of Heyting’s work.

Posted by: David Corfield on March 3, 2008 1:29 PM | Permalink | Reply to this

### Re: Computer Scientists Needed Now

David Corfield wrote:

I suppose you might say he was only hostile to the logic of the time (classical) to the extent that it had lent itself to the firming up of the faulty intuitions of classical mathematics, and that a codification of the correct intuitions would be useful. But I don’t think this is right. There’s no sign of his approving of Heyting’s work.

However, I do not see the relation with the quote from the Stanford Encyclopedia:

Brouwer holds that mathematics is an essentially languageless activity, and that language can only give descriptions of mathematical activity after the fact. This leads him to deny axiomatic approaches any foundational role in mathematics. Also, he construes logic as the study of patterns in linguistic renditions of mathematical activity, and therefore logic is dependent on mathematics (as the study of patterns) and not vice versa.

This quote does not show any hostility towards logic; it only denies that logic is prior to mathematics. Heyting’s codification of intuitionistic logic is a mathematical study of the language of intuitionistic mathematics after the fact. Brouwer would deny that this “axiomatic approach” has a foundational role. However, I do not read this as hostility towards logic as the study of patterns.

Bas

Posted by: Bas Spitters on March 3, 2008 9:22 PM | Permalink | Reply to this

### Re: Computer Scientists Needed Now

I wasn’t basing my comment merely on that quotation from the Encyclopedia. I read Brouwer’s philosophical writings extensively during my MSc, longer ago than I care to think. I don’t have quotations to hand, but I remember a great suspicion of language on his part.

The Heyting entry on Wikipedia has Brouwer describe his student’s work as a “sterile exercise”. Do you have evidence otherwise?

Posted by: David Corfield on March 3, 2008 10:19 PM | Permalink | Reply to this

### Re: Computer Scientists Needed Now

The Wikipedia article on Heyting indeed contains this quote, but without a correct citation. [The page it refers to no longer contains this phrase.]

Van Atten’s article in the Stanford Encyclopedia seems to give a balanced description of Brouwer’s ideas. Is it possible that Brouwer was reacting to the use of intuitionistic logic as a foundation for intuitionistic mathematics?

It seems that Brouwer’s ‘logic is part of mathematics’ is precisely the way we use it here at the n-Café.

Bas

Posted by: Bas Spitters on March 4, 2008 8:53 AM | Permalink | Reply to this

### Re: Computer Scientists Needed Now

The ‘sterility’ comment is said to appear in van Stigt’s book. I remember reading the comment somewhere, and I have read van Stigt’s book, but it may have been somewhere else.

Anyway, whether Brouwer did or did not approve of what Heyting did is of limited importance to me. At the time I was working on my master’s I was interested in the case of intuitionistic logic for its illustration of two themes:

1) That logic is not so much about mathematics as a part of it. Where the philosopher Michael Dummett claimed that the proof-theoretic wing of intuitionistic semantics is the correct one, I delighted in the additional existence of a topological wing, and the movement towards toposes.

2) Against Lakatos’s (and it appears Brouwer’s) fears that formalisation of the informal is dangerous in that it may leave an important part out, I was interested to see how formalisation can act as a springboard for the formation of a new intuition.

There is no need to adhere to all the informal motivations of the originator of a theory, though, of course, this wouldn’t prevent the possibility of there being something further worth mining from Brouwer’s philosophical writings. Since all I can recall is the pronouncement “Woman is a tiger”, his ideas evidently didn’t make much of an impression on me.

Posted by: David Corfield on March 4, 2008 12:04 PM | Permalink | Reply to this

### Re: Computer Scientists Needed Now

First, I’d like to say that I’m also very excited about this paper! I agree with most of the comments so far, particularly Andrej’s and Derek’s. Here are a few more comments of my own…

In table 1.4, you say “programs executing in parallel”, and I believe this is the first occurrence of this operational notion of parallelism. It’s not clear to me that we need or ought to consider products of programs as being “parallel” in the familiar computational sense, although they could be considered parallel in the sense that they are semantically independent. In any case, if you want to use this characterization, it should probably be introduced/justified in the text.

On page 56, you say “In computer science this corresponds to the fact that programs are data…” I’m not sure I’d say this. Generally speaking, when programmers use the phrase “programs as data”, they’re referring to the direct manipulation or inspection of the representation or specification of a program. The classic examples of this are Lisp macros and manipulation of machine code (JIT compilation, for example). This is a somewhat different notion than currying and first-class functions, and is also often called metaprogramming.
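To make the metaprogramming sense of “programs as data” concrete, here is a minimal sketch in Python rather than Lisp (chosen only for concreteness; the names and the example expression are illustrative, not from the paper): the program is held as a syntax tree, manipulated as ordinary data, and only then compiled and run.

```python
import ast

# The program as *data*: a syntax tree we can inspect and rewrite,
# rather than a function we can merely call.
source = "x * x + 1"
tree = ast.parse(source, mode="eval")

# Manipulate the representation directly: rename every variable x to y.
for node in ast.walk(tree):
    if isinstance(node, ast.Name):
        node.id = "y"

# Only now turn the (modified) representation back into a runnable program.
code = compile(tree, "<string>", "eval")
assert eval(code, {"y": 3}) == 10  # computes y * y + 1 with y = 3
```

This direct manipulation of a program’s representation, as macros and JIT compilers do, really is a different activity from merely passing functions around as first-class values.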

It doesn’t help that the standard name in computer science for a function from a program representation to the program itself is also called “eval”. For this reason, a computer scientist might find it unfortunate that categorists use the name “eval” for the arrow $(Z^Y \times Y) \rightarrow Z$. We would mostly prefer to call this arrow “apply”. It is very, very important to understand the distinction between the programmer’s eval and apply, and this often causes problems for new programmers.
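The distinction can be shown in a few lines of Python (used here only because it has a built-in `eval`; the thread’s own examples are in Haskell/ML, and `apply` below is a hypothetical name for the categorist’s arrow): the programmer’s `eval` consumes a *representation* of a program, while `apply` consumes an already-built function together with its argument, exactly the arrow $(Z^Y \times Y) \rightarrow Z$.

```python
# The programmer's eval: from a *representation* (a string) to a value.
source = "(lambda x: x * x)(7)"
value_from_eval = eval(source)

# The categorist's "eval", better called apply: the arrow (Z^Y x Y) -> Z.
# It takes an already-existing function and an argument; no representation
# of the program is ever inspected.
def apply(f, y):
    return f(y)

square = lambda x: x * x
value_from_apply = apply(square, 7)

assert value_from_eval == value_from_apply == 49
```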

I strongly disagree with the idea that the unit type is not useful! Mostly it seems not useful because programmers are unused to thinking so abstractly. From the categorical point of view, all zero-parameter data constructors (e.g., True and False, each of the integers) are categorical elements (arrows $I \rightarrow X$), and these show up all over the place. In my opinion, this leads to useful insights, such as that the standard type of boolean values is $2 \cong 1 + 1$. You said this yourself on page 55, so I’m surprised that you’re so quick to slander the terminal object.
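The point about elements can be sketched in Python under some illustrative encodings (the unit type $1$ modelled by the empty tuple, the coproduct $1 + 1$ by tagged pairs; none of this is from the paper): the zero-parameter constructors become arrows out of the terminal object, and Bool is visibly $1 + 1$.

```python
Unit = ()  # the sole inhabitant of the unit type 1

# Zero-parameter constructors as elements, i.e. arrows 1 -> Bool:
def true(_):
    return True

def false(_):
    return False

# Bool ~ 1 + 1: model the coproduct as a tagged pair.
def to_sum(b):      # Bool -> 1 + 1
    return ("inl", Unit) if b else ("inr", Unit)

def from_sum(s):    # 1 + 1 -> Bool
    tag, _ = s
    return tag == "inl"

assert true(Unit) is True and false(Unit) is False
assert all(from_sum(to_sum(b)) == b for b in (True, False))
```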

Regarding termination, I would not be so quick to introduce inconsistency. There’s interesting recent work on total functional programming, and I can imagine a bright future where we no longer say “Turing complete” (as though that were a good thing), and just say “inconsistent”.

If you really do want to get Turing completeness, you might want to mention the standard semantic trick of using total functions between pointed objects, as Andrej mentioned, but I very strongly agree with Derek that this obscures the connections with logic.

Finally, as Derek suggests, I do urge you to at least mention polymorphic functions as natural transformations, as well as the connection between control operators and duality.

Posted by: Matt Hellige on March 3, 2008 11:27 PM | Permalink | Reply to this

### Re: Computer Scientists Needed Now

Comments about the unit type being useless are probably inspired by my first comment. While I certainly stand by that comment as I intended it (part of that being only in “practice”), I do agree that the unit type is of crucial importance conceptually. You also have a good point that the unit type is used more often than it seems, since it is encoded into the data type construction facility of ML/Haskell-style languages. One example is Bool, which as you describe could be rendered in Haskell as Either () (); another is Haskell’s Maybe type (1 + Id), which could be rendered Either () a (Either corresponds to +). So my statement was probably overly harsh.
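The Maybe-as-1+a rendering can be sketched in Python too (an illustrative encoding only: Nothing as the left injection of the unit value, Just x as the right injection), exhibiting the isomorphism Maybe a ≅ Either () a.

```python
# Maybe a as 1 + a: Nothing ~ Left (), Just x ~ Right x.
def maybe_to_sum(m):
    return ("left", ()) if m is None else ("right", m)

def sum_to_maybe(s):
    tag, payload = s
    return None if tag == "left" else payload

# The two maps are mutually inverse on this encoding.
for m in (None, 0, 42):
    assert sum_to_maybe(maybe_to_sum(m)) == m
```

One caveat of using None for Nothing: this encoding cannot distinguish Just Nothing from Nothing, whereas the tagged-pair form on the right (like Haskell’s Maybe (Maybe a)) can, which is itself a small argument for taking the unit type seriously.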

Also, while looking for something on linear combinatory logic, I was reminded of the perp operation that some linear logics support. Classical (non-linear) logic’s negation tends to lead to control operators; the linear negation of classical linear logic seems to lead to concurrency. The paper Computational Interpretations of Linear Logic (PS.GZ) has some discussion. Continuations and concurrency are closely related. In fact, this relationship between linear and non-linear negation is unintentionally paralleled in the paper “Threads Yield Continuations”, where threads are used to provide one-shot continuations…

Posted by: Derek Elkins on March 5, 2008 12:01 AM | Permalink | Reply to this

### Re: Computer Scientists Needed Now

Maybe you can take a look at the work of L. Meertens: http://www.cs.uu.nl/research/techreps/RUU-CS-89-09.html.

In that report Meertens proposes a calculus of programs using compositional structures, which seems to link up nicely with your treatment of CT.

kr

Twan

Posted by: Twan on March 4, 2008 1:00 PM | Permalink | Reply to this

### Re: Computer Scientists Needed Now

The calculus Meertens proposes actually looks very much like what is done in process algebras, except that they are usually less concerned with the computation of functions than with the interaction between possibly non-terminating, reactive systems. How would these fit into the picture, btw?

Posted by: Klaus Dräger on March 5, 2008 1:07 PM | Permalink | Reply to this

### Re: Computer Scientists Needed Now

Considering your links to physics (and stuff types), I am surprised to find no references to
J.-Y. Girard, ‘Normal functors, power series and λ-calculus’, Ann. Pure Appl. Logic 37 (1988) 129–177
or to Hasegawa’s “Two Applications of Analytic Functors”.

Where are the 2-morphisms? Disclosure: parts of my active research involve program transformation and program generation. Another view of inference rules (instead of dinatural transformations) involves 2- and 3-cells in categories; see Guiraud’s Three dimensions of Proofs.

Since products and coproducts are so nicely dual, and generalize nicely [to records and labeled unions respectively] in the indexed case, it is a bit surprising that they do not make an appearance. I know they are less important in other areas than in computation, so maybe they really belong in the “extension” of the Rosetta stone…

Minor comment: on p.48, the two natural transformations involving (W,X,Y,Z) both have an extra ).

Posted by: Jacques Carette on March 6, 2008 3:16 AM | Permalink | Reply to this

### Re: Computer Scientists Needed Now

Thanks for the references! I hadn’t known of them.

Stuff types, analytic functors and higher categories are not at all what I want to explain in this paper. I have very limited ambitions, namely to explain the concept of ‘symmetric monoidal closed category’ and show — in the simplest possible terms! — how such categories arise in physics, topology, logic and computation, and how you can reason with such categories using string diagrams. The intended audience is mathematical physicists, so I’m assuming a lot of knowledge of physics, a fair amount of knowledge of topology, and essentially zero knowledge of category theory, logic or computation.

So: no 2-categories, no stuff types, no analytic functors… nuthin’ fancy! But, I’ll enjoy reading these references myself, so thanks.

Thanks also for catching those typos!

Posted by: John Baez on March 6, 2008 9:58 PM | Permalink | Reply to this

### Re: Computer Scientists Needed Now

Another possible use of (n-)categories in computer science is to provide static analysis tools for programs. For example, in Polygraphic programs and polynomial-time functions, written with Guillaume Bonfante, we give tools to study complexity.

For that, we use an encoding of programs where computations are 3-dimensional objects. Indeed, a first-order functional program can be seen as a 3-category freely generated by: one 0-cell; elementary types as 1-cells; constructor and function symbols as 2-cells; computation rules as 3-cells.

Then we prove that well-chosen functorial interpretations of the underlying 2-category give space bounds on the results of computation, while “derivations” of the same 2-category yield information on the time complexity. The same tools were already used to prove termination of 3-dimensional rewriting systems there.

The paper is not written using n-categories explicitly. One of the reasons is that I do not yet know how to define “derivations” of n-categories. Please note that the paper is currently under revision before publication, so some minor modifications could occur in the near future.

(Just a remark: the paper Jacques Carette cites (thanks!) is a preliminary version. The final published version is there.)

Posted by: Yves Guiraud on March 6, 2008 1:43 PM | Permalink | Reply to this

### Re: Computer Scientists Needed Now

In case anyone wonders why I’m not commenting more on this thread: it’s not lack of interest, I’m just spending all my time trying to finish the paper on time! I’m taking all your wonderful suggestions into account, and I’ll try to say more in a few days…

Posted by: John Baez on March 6, 2008 10:00 PM | Permalink | Reply to this
Read the post Physics, Topology, Logic and Computation: a Rosetta Stone
Weblog: The n-Category Café
Excerpt: Toward a general theory of systems and processes.
Tracked: March 11, 2008 7:45 AM
Read the post New Structures for Physics I
Weblog: The n-Category Café
Excerpt: Please comment on two chapters of a forthcoming book edited by Bob Coecke: 'Introduction to categories and categorical logic' by Abramsky and Tzevelekos, and 'Categories for the practicing physicist' by Coecke and Paquette.
Tracked: August 29, 2008 4:26 PM
Read the post New Structures for Physics II
Weblog: The n-Category Café
Excerpt: You can help review some more papers for Bob Coecke's New Structures for Physics.
Tracked: September 10, 2008 4:28 PM
