October 3, 2006

Classical vs Quantum Computation (Week 1)

Posted by John Baez

This fall I’m teaching a seminar on classical versus quantum computation, with an emphasis on cartesian closed categories and their quantum generalizations. Derek Wise is taking notes, which you can get as PDF files here:

I’ll create a blog entry for each week’s lecture, and anyone with something to say or ask can post comments. So, it’s a bit like an online course! Here are last Thursday’s notes:

Next week’s notes are here. For a syllabus, read on…

In these lectures I hope to talk about:

  • the lambda calculus and its role in classical computation,
  • how quantum computation differs from classical computation,
  • the quantum lambda calculus and its role in quantum computation,
  • cartesian closed categories, algebraic theories, PROPs and operads,
  • how treating computation as a process makes us “categorify” all the above.

Here are some easy online references to start with:

These go into a bit more depth:

Posted at October 3, 2006 9:48 PM UTC

TrackBack URL for this Entry:   http://golem.ph.utexas.edu/cgi-bin/MT-3.0/dxy-tb.fcgi/960

10 Comments & 2 Trackbacks

Question about the homework

Mike Stay has done the homework in TeX, so I’ll post the answers here sometime after I collect the homework on Thursday October 5th.

Some people were wondering about which order to evaluate the expression in exercise 1.1. Now that I look at it, I see there’s no ambiguity: Selinger packs this expression with enough parentheses to make it unambiguous. Just do stuff inside parentheses first! So, if you see something like

(f(g))(h)

you first apply f to g, and then apply the resulting function to h. If you see

f(g(h))

you first apply g to h, and then apply f to the result.
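The two parenthesizations can be mimicked with Python lambdas (the particular functions f, g, h below are made up for illustration):

```python
# (f(g))(h): apply f to g first, then apply the resulting function to h.
f = lambda x: (lambda y: x * y)   # f(g) is itself a function
g = 3
h = 4
assert (f(g))(h) == 12            # (λy. 3*y)(4)

# f(g(h)): apply g to h first, then apply f to the result.
f2 = lambda n: n * 2
g2 = lambda n: n + 1
assert f2(g2(h)) == 10            # f2(5)
```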

As you’ll see on page 9 of Selinger’s notes, experts on the lambda calculus get sick of seeing so many parentheses. So, they bring in a convention where

fg

is short for

f(g)

Then they bring in a convention where

fgh

is short for

(fg)h

They still need parentheses for things like

f(gh)

However, at the point in the notes your homework covers, he hasn’t introduced these conventions yet! So, you don’t need to worry about them for now.
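The left-association convention is easy to sketch as code: a string of applications like fgh gets its implicit parentheses inserted from the left. (This little helper is hypothetical, not anything from Selinger’s notes.)

```python
def parenthesize(atoms):
    """Insert the implicit parentheses: ['f','g','h'] -> '((fg)h)'."""
    result = atoms[0]
    for a in atoms[1:]:
        result = f"({result}{a})"   # each new atom applies to everything so far
    return result

assert parenthesize(list("fg")) == "(fg)"
assert parenthesize(list("fgh")) == "((fg)h)"
assert parenthesize(list("fghk")) == "(((fg)h)k)"
```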

Posted by: John Baez on October 4, 2006 3:19 AM | Permalink | Reply to this

Re: Question about the homework

Already? I only just finished homework 1!

Posted by: John Armstrong on October 5, 2006 3:57 PM | Permalink | Reply to this

Re: Question about the homework

John Armstrong wrote:

Already? I only just finished homework 1!

If you want to resist the temptation to read the solutions before you’ve had time to do the homework, it should be pretty easy, since I’ll just post a link to a PDF file of the solutions.

Just don’t click on the link!  

Posted by: John Baez on October 5, 2006 4:05 PM | Permalink | Reply to this

Re: Classical vs Quantum Computation - Week 0

John wrote:

Some of you were wondering about which order to evaluate the expression in exercise 1.1. Now that I look at it, I see there’s no ambiguity: Selinger packs this expression with enough parentheses to make it unambiguous. Just do stuff inside parentheses first!

Also, something of the form λf.λx.t is unambiguous, since λf.λx makes no sense; so it’s λf.(λx.t). As for λz.z + 1, this is ambiguous if you can add a function and a number, since that is what (λz.z) + 1 does; but Selinger probably meant λz.(z + 1).
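The two readings of λz.z + 1 can be checked directly in Python terms (illustrative only):

```python
# Reading 1 (the intended one): λz.(z + 1), the successor function.
succ = lambda z: z + 1
assert succ(4) == 5

# Reading 2: (λz.z) + 1, adding a function and a number.  Python,
# like most settings without such a coercion, rejects this outright.
try:
    (lambda z: z) + 1
    raised = False
except TypeError:
    raised = True
assert raised
```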

While I’m here, I’ll make a few comments based on Derek’s notes.

While you’re contrasting classical logic with quantum logic here, there is a (different) contrast of classical with constructive logic (also called “intuitionistic” logic). I see the lambda calculus a lot in the metamathematics of constructive mathematics (which also involves a lot of Cartesian closed categories). So while the lambda calculus is classical as opposed to quantum, it’s quite happily constructive as well.

Interestingly, the “easy proof” of existence of noncomputable functions is nonconstructive, while the “better proof” is constructive. Indeed, in the Russian (Markov’s) school of constructive mathematics, it is accepted as a fact (axiomatic) that every total function from the set of natural numbers to itself is recursive/computable! (Thus, there is a set –the set of Turing machines that never halt– with an injection to a countable set –the set of Turing machines– and a surjection to an uncountable set –the set of total functions. This is consistent with, say, Aczel’s constructive set theory CZF.) However, all constructivists (well, except finitists) must agree that the partial function given on the top of page 3 is uncomputable. (Using classical logic, you could turn this into a total function, and many classical mathematicians would do so, but it’s not necessary.)
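The constructive flavor of the diagonal argument can be sketched in Python: given any purported halting decider, one exhibits a concrete program on which it answers wrongly. (The decider below is a made-up, deliberately wrong candidate; of course no real one can be written.)

```python
def refute(halts):
    """Given a claimed decider halts(program, input) -> bool,
    build a program that does the opposite of its prediction."""
    def diag(p):
        if halts(p, p):
            while True:      # predicted to halt, so loop forever
                pass
        return "halted"      # predicted to loop, so halt immediately
    return diag

# A candidate decider that always answers "loops":
naive_never = lambda p, x: False
diag = refute(naive_never)

# It predicts diag never halts on itself — yet diag(diag) halts at once,
# so the prediction fails on the diagonal program.
assert naive_never(diag, diag) is False
assert diag(diag) == "halted"
```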

Also (this is less philosophical, more along the lines of errata), I’ve never seen the term “α-reduction”, only “α-equivalence”; this is because there is no preferred direction (unlike with β-reduction, which simplifies (λx.fx)a to fa, for example, but would rather not expand fa to (λx.fx)a). Even “η-reduction” is viewed with some suspicion, because the correct direction seems to be a matter of dispute. I see that Selinger’s notes give a direction to it, but it’s not the direction that you might expect!
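In Python terms (which evaluate eagerly, so this exhibits equality of results rather than the syntactic rewriting itself):

```python
# Beta: (λx.f x) a evaluates to f a.
f = lambda n: n * n
a = 7
assert (lambda x: f(x))(a) == f(a)

# Eta: λx.f(x) agrees with f on every argument — the two sides are
# extensionally equal, and the dispute is only over which direction
# of the rewrite counts as "reduction".
eta_expanded = lambda x: f(x)
assert all(eta_expanded(n) == f(n) for n in range(10))
assert eta_expanded is not f   # distinct objects, same behavior
```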

Posted by: Toby Bartels on October 4, 2006 7:25 PM | Permalink | Reply to this

Re: Classical vs Quantum Computation - Week 0

I’ve known for some time (at a somewhat informal level) about the duality between beta reduction and eta expansion, f ↦ λx.f(x). This type of thing shows up, for example, when one normalizes proof nets in linear logic, among other things (first do a bunch of eta expansions, then a bunch of beta reductions, to put it loosely). It is also good to compare beta reduction with cut elimination in proof theory, and eta expansion with how one reduces general identities to identities on atomic types.

But the main point of this comment is to suggest a possible categorified view on this. Both beta reduction and eta reduction/expansion refer (at least implicitly) to triangular equations for the ×-hom adjunction for ccc’s. This is pretty obvious for eta: it says the obvious composite

A^B → (A^B × B)^B → A^B,   f ↦ λx.(f, x) ↦ λx.f(x)

is the identity. Whereas a fundamental instance of beta is that

A × B → (A × B)^B × B → A × B,   (a, b) ↦ (λc.(a, c), b) ↦ [λc.(a, c)](b)

is also an identity – this is the other triangular equation.

Imparting preferred directions to eta, beta could then be seen as a shadow of a categorification process where these triangular equations are promoted to triangulators. Such triangulators are often invertible, of course (as in 2-adjunctions), but this need not be so (cf. lax adjunctions). But even if so, there may be good reason, if one prefers beta reduction, to also prefer eta expansion, as that better fits the usual expression of the “swallowtail coherence condition” for triangulators (here eta expansion would be the point of entry and beta reduction the point of exit – cf. what I said earlier about normalizing proof nets).
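Both triangle identities can be checked concretely with Python closures standing in for exponentials. (The helpers `curry` and `apply_` are hypothetical names for illustration, not anyone’s library API.)

```python
def curry(f):
    """Turn f : A × B -> C into A -> (B -> C)."""
    return lambda a: lambda b: f((a, b))

def apply_(pair):
    """Evaluation map (C^B × B) -> C."""
    g, b = pair
    return g(b)

# Eta-style triangle: A^B -> (A^B × B)^B -> A^B is the identity,
# i.e. f and λx.f(x) agree.
f = lambda b: b * 10
eta_side = lambda b: apply_((f, b))
assert all(eta_side(b) == f(b) for b in range(5))

# Beta-style triangle: A × B -> (A × B)^B × B -> A × B is the identity:
# (a, b) ↦ (λc.(a, c), b) ↦ [λc.(a, c)](b) = (a, b).
pairing = curry(lambda ab: ab)
a, b = "a", 3
assert apply_((pairing(a), b)) == (a, b)
```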

This is a bit of a “coffee thought”, but it’s possibly worth considering.

Posted by: Todd Trimble on October 12, 2006 5:10 AM | Permalink | Reply to this

Re: Classical vs Quantum Computation - Week 0

Hi, Todd! Coffee thoughts are just what we want here at the n-Category Café!

Despite my use of the term “η-reduction” in those notes, I’d been tipped off to the importance of η-expansion ever since we started chatting about categories and computation here on this blog, and Robin Houston pointed out an interesting paper on the virtues of eta-expansion by C. Barry Jay and Neil Ghani.

I’ve just glanced at the paper so far, but somehow it blended in my mind with R. A. G. Seely’s work on weak adjoints in proof theory and modeling computation using 2-categories, and I’ve been meaning ever since to understand computation as a process by categorifying the notion of CCC and replacing certain equations not by 2-isomorphisms but by noninvertible 2-morphisms.

So, in my own slow grinding way, I’ve been working towards this in my seminar here at UCR. In the seminar, I plan to first introduce string diagrams for closed categories, then surface diagrams for closed 2-categories, and then use these diagrams to illustrate the “process of computation” in categorified CCCs - in a less ad hoc way than Keenan’s cute bird diagrams.

But, this will take a while! You’re way ahead of me. So, I suggest that you take a little nap and wake up when my seminar hits those more advanced topics. Then you may have fun helping me out.

Posted by: John Baez on October 12, 2006 5:39 AM | Permalink | Reply to this

Re: Classical vs Quantum Computation - Week 0

eta expansion

For some reason I’ve never come across this phrase, but it seems exactly right … as does your connection of it to lax adjunctions! Someday, some historian of categorified mathematics will point out that this connection was first made here. (Unless, of course, it wasn’t.)

Posted by: Toby Bartels on October 13, 2006 11:08 PM | Permalink | Reply to this

Re: Classical vs Quantum Computation - Week 0

Todd wrote:

eta expansion

Toby wrote:

For some reason I’ve never come across this phrase…

Well, I just mentioned C. Barry Jay and Neil Ghani’s paper entitled The virtues of eta-expansion. In their abstract they say “This use of expansions is supported by categorical models of reduction, where beta-contraction, as the local counit, and eta-expansion, as the local unit, are linked by local triangle laws.”

But, I don’t know if they were 2-categorical enough to make the link to lax adjunctions!

Posted by: John Baez on October 20, 2006 10:31 PM | Permalink | Reply to this
Read the post λ-Calculus in Paris
Weblog: The n-Category Café
Excerpt: Typed Lambda Calculi and Applications - a conference in Paris, June 26-28, 2007.
Tracked: October 4, 2006 8:08 PM

Re: Classical vs Quantum Computation (Week 0)

Answers to the homework by Mike Stay, Toby Bartels and Jeffrey Morton are now available here!

Posted by: John Baez on October 6, 2006 7:50 PM | Permalink | Reply to this
Read the post Classical vs Quantum Computation (Week 2)
Weblog: The n-Category Café
Excerpt: An intro to closed monoidal categories.
Tracked: October 20, 2006 3:21 AM

Re: Classical vs Quantum Computation (Week 0)

Should I be reading these as background?

Patterns and Lax Lambda Laws for Relational and Imperative Programming
David A. Naumann

Abstract: Point-free relation calculi have been fruitful in functional programming, but in specific applications pointwise expressions can be more convenient and comprehensible than point-free ones. To integrate pointwise with point-free, de Moor and Gibbons [AMAST 2000] give a relational semantics for lambda terms with non-injective pattern matching. Alternative semantics has been given in the category of ideal relations and also in its associated category of predicate transformers….

and from
<http://guinness.cs.stevens-tech.edu/~naumann/publications.html>:

Predicate Transformer Semantics of a Higher Order Imperative… - Naumann (1998)

Two-Categories and Program Structure - Naumann

A Categorical Model for Higher Order Imperative Programming - Naumann (1993)

Posted by: Jonathan Vos Post on November 5, 2006 8:11 PM | Permalink | Reply to this
