## October 29, 2009

### This Week’s Finds in Mathematical Physics (Week 282)

#### Posted by John Baez

In week282 of This Week’s Finds, visit Mercury:

Learn how this planet’s powerful magnetic field interacts with the solar wind to produce flux transfer events and plasmoids. Then read about the web of connections between associative, commutative, Lie and Poisson algebras, and how this relates to quantization. In the process, you’ll meet linear operads, their generating functions, and Stirling numbers of the first kind!

Some more photos of Mercury taken by the Messenger probe:

Posted at October 29, 2009 9:34 PM UTC

TrackBack URL for this Entry:   http://golem.ph.utexas.edu/cgi-bin/MT-3.0/dxy-tb.fcgi/2097

### Re: This Week’s Finds in Mathematical Physics (Week 282)

Also, you might enjoy answering these questions, most of which I haven’t tried myself.

Here’s another: if gr(Assoc) = Poisson, what is the meaning of the Rees and blowup algebras associated to this filtration?

(Given a filtration R = R_0 > R_1 > …, e.g. by powers of I, you can look at the subring of R[t] that has t^n R_n in the nth degree piece; that’s the blowup algebra. If you include t^{-n} R in the negative powers, that’s the Rees algebra. If you mod out Rees by (t - c), you get R for any nonzero c, and gr R for c=0.)
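In symbols, the conventions in the parenthetical above read as follows (note that some authors reserve “Rees algebra” for the nonnegative part alone and call this two-sided version the “extended Rees algebra”):

```latex
% For a filtration R = R_0 \supset R_1 \supset R_2 \supset \cdots (e.g. R_n = I^n):
\mathrm{Bl}(R) = \bigoplus_{n \ge 0} t^{n} R_n \subset R[t],
\qquad
\mathrm{Rees}(R) = \bigoplus_{n \ge 0} t^{n} R_n \;\oplus\; \bigoplus_{n > 0} t^{-n} R \subset R[t, t^{-1}]

% and the specializations:
\mathrm{Rees}(R)/(t - c) \cong R \quad (c \ne 0),
\qquad
\mathrm{Rees}(R)/(t) \cong \operatorname{gr} R = \bigoplus_{n \ge 0} R_n / R_{n+1}.
```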

Posted by: Allen Knutson on October 29, 2009 10:07 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

Here’s another: if gr(Assoc) = Poisson, what is the meaning of the Rees and blowup algebras associated to this filtration?

for various reasons it’s a bit tricky for me to try to answer this question, such as:

1. offhand the question seems to blur operads and rings together.

2. allen already substantially answered the question himself by describing what happens in general to the rees algebra when you specialize the variable t to a numerical value.

3. i don’t know standard terminology in this area too well.

4. i might be making a stupid mistake again.

nevertheless i’ll try to say some things here which might address the intent of the question.

let v be a finite-dimensional vector space, and let x be a commutative algebra structure on v. then a poisson algebra structure on the commutative algebra (v,x) is the same thing as “a point of the normal cone at x to the variety of commutative algebra structures on v as a sub-variety of the variety of associative algebra structures on v”.

furthermore, “it always works that way” whenever you take the ideal power filtration of an operad ideal.

(a point of the naive “zariski normal space” here is an anti-symmetric hochschild 2-cocycle; the cocycle condition corresponds to the leibniz compatibility condition between the commutative multiplication and the poisson bracket, while the anti-symmetry of the cocycle corresponds to the anti-symmetry of the poisson bracket. the jacobi condition on the poisson bracket carves out the normal cone within the zariski normal space.)

you can talk about the “rees operad” of a filtered operad. the rees operad lives in the symmetric tensor category of modules of the ring of polynomials in the variable t. for any element c of any commutative ring r, a “(t=c)-specialized algebra” of the rees operad of the ideal power filtration of the “commutator bracket” ideal of the associative operad is an r-module equipped with both an associative algebra structure and a lie algebra structure, satisfying a leibniz compatibility condition, plus the condition that c times the lie bracket equals the commutator bracket of the associative multiplication. when c is invertible this is essentially just an associative algebra; when c is zero it’s a poisson algebra.
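A small sanity check of the “(t=c)-specialized algebra” description, using 2×2 matrices as the ambient associative algebra (a sketch; the helper names are mine): for invertible c, the bracket forced by “c times the lie bracket equals the commutator bracket” automatically satisfies the leibniz and jacobi conditions.

```python
# Sketch: a "(t=c)-specialized algebra" of the Rees operad, realized inside
# 2x2 matrices.  The condition  c * [a,b] = ab - ba  determines the bracket,
# and the Leibniz and Jacobi conditions then hold automatically.
from sympy import Matrix, Rational

c = Rational(3)                      # any invertible scalar
X = Matrix([[1, 2], [0, 1]])
Y = Matrix([[0, 1], [1, 0]])
Z = Matrix([[2, 0], [1, 3]])

def br(a, b):
    # the Lie bracket forced by c * [a,b] = ab - ba
    return (a * b - b * a) / c

# c times the bracket is the commutator of the associative multiplication:
assert c * br(X, Y) == X * Y - Y * X

# Leibniz compatibility: [X, YZ] = [X,Y] Z + Y [X,Z]
assert br(X, Y * Z) == br(X, Y) * Z + Y * br(X, Z)

# Jacobi identity:
assert br(X, br(Y, Z)) + br(Y, br(Z, X)) + br(Z, br(X, Y)) == Matrix.zeros(2, 2)
```

When c is not invertible the division above is unavailable, which is exactly why the c = 0 fibre can carry genuinely different (poisson) structure.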

instead of working with varieties of algebraic structures on a fixed vector space we could also somewhat more elaborately develop the idea that “the moduli stack of poisson algebras is the normal cone bundle to the moduli stack of commutative algebras as a sub-stack of the moduli stack of associative algebras”, though i haven’t worked out all the details.

Posted by: james dolan on October 31, 2009 6:48 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

After the line “Instead, I just want to show how the concept…” the diagram is backwards to what it should be, it seems.

Posted by: Urs Schreiber on October 29, 2009 10:16 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

In the line that starts with “You might think it was a 3d subspace…” it later says

“where [a] bracket shows up at leas[t] once.”

Posted by: Urs Schreiber on October 29, 2009 10:26 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

Thanks for these corrections, Urs. Fixed!

Posted by: John Baez on October 29, 2009 10:44 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

I forgot to say: stimulating TWF!

I keep forgetting if I knew the following: what are the algebras for the cofibrant replacement of the Poisson operad, the $P_\infty$-algebras? (And worse, I keep forgetting how that answer relates to homotopy BV-algebras.)

Following Dmitry Roytenberg, Courant algebroids are supposed to be the 2-Poisson manifolds. And analogously his Courant-Dorfman algebras are supposed to be something like 2-Poisson algebras.

I am yearning to fully understand the pattern and to fit it in with that of symplectic groupoids and symplectic $\infty$-groupoids.

(Hm, no link to the $n$Lab in this post, what’s wrong with me?)

Posted by: Urs Schreiber on October 30, 2009 1:42 AM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

(Hm, no link to the nLab in this post, what’s wrong with me?)

I got over it:

My terminology (see discussion at the end). Slightly off:

0-symplectic manifold = symplectic manifold

1-symplectic manifold = Poisson manifold

2-symplectic manifold = Courant algebroid

Posted by: Urs Schreiber on October 30, 2009 2:29 AM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

Where would symplectic Deligne-Mumford stacks fit into this pattern?

Posted by: Eugene Lerman on October 30, 2009 12:46 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

Where would symplectic Deligne-Mumford stacks fit into this pattern?

I think symplectic Lie groupoids are supposed to be the result of Lie integrating Poisson structures.

Pointers to a good reference and some reviews of that reference are at symplectic groupoid.

And the way to think about it is: An $n$-symplectic manifold is really a Lie $n$-algebroid with extra structure (as indicated in the entry). A Poisson structure is a Lie 1-algebroid (namely the Poisson Lie algebroid induced by the Poisson structure in its traditional incarnation). So, indeed, it should integrate to a 1-groupoid.

(And the DM-stack incarnation of that is just what encodes the smooth and/or algebraic structure.)

Posted by: Urs Schreiber on October 30, 2009 8:48 PM | Permalink | Reply to this

### multisymplectic

The nLab entry on multisymplectic geometry has a reference to Hrabak’s multisymplectic BRST of 1999, but I don’t see any follow-up. Hrabak deals only with the most well-behaved kind of reduction: very regular, and strictly for the Lie algebra equivariant case. Has no one done the general BFV version?

Notice also that on p. 3 of the arXiv version he notes that the bracket of Hamiltonian forms satisfies the Jacobi identity only mod exact forms. Has anyone pursued this to the full $L_\infty$ structure?

Posted by: jim stasheff on February 22, 2010 1:55 PM | Permalink | Reply to this

### Re: multisymplectic

Hi Jim,

I don’t think there has been much of a follow-up to Hrabak’s work yet. However I met a graduate student of Alice Rogers at the Porto conference last summer who apparently is starting some work along these lines.

Hrabak’s bracket of Hamiltonian $(n-1)$-forms on p. 3 is the same as the bracket used in 2-plectic geometry when $n=2$.

I have worked out the full $L_{\infty}$ structure for arbitrary $n$. I’m currently writing it up in a draft which hopefully will be done soon.

Posted by: Chris Rogers on February 22, 2010 3:53 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

Urs wrote:

I am yearning to fully understand the pattern and to fit it in with that of symplectic groupoids and symplectic $\infty$-groupoids.

Me too! My student Chris Rogers seems to be becoming an expert on categorified classical mechanics, and he’ll soon be coming out with a new paper on this topic, which may please you.

The presence of Yael Fregier here at UCR seems to be accelerating our progress! I will have to write some Week’s Finds about what we’re learning.

Posted by: John Baez on October 30, 2009 2:57 AM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

My student Chris Rogers seems to be becoming an expert on categorified classical mechanics, and he’ll soon be coming out with a new paper on this topic, which may please you.

These days there is nothing I can think of that one grad student or other of yours isn’t working on, it seems.

I have added a quick section on relation to multisymplectic geometry to the entry on $n$-symplectic manifolds.

Maybe one of you likes to expand on that.

I would think that from the perspective of $\infty$-Lie theory, using the language as in my article with Jim and Hisham, the pattern is this:

an $n$-symplectic manifold is a Lie $\infty$-algebroid equipped with a binary invariant polynomial of degree $n+2$.

As any invariant polynomial on a Lie $\infty$-algebroid, this suspends to a degree $(n+1)$-cocycle on the Lie $\infty$-algebroid. This cocycle is the $(n+2)$-ary multi-Poisson bracket.

Posted by: Urs Schreiber on October 30, 2009 9:33 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

You refer to a good proof of PBW, but unless I missed something in skimming, I think that Week 212 only says such a proof exists, not what it is. Will you please say more / post a link? I can imagine at least two possibilities:

Let L be a Lie algebra, UL its universal enveloping algebra and SL its symmetric algebra (as a vector space). Somehow recognize the dual Hopf algebra to UL as the algebra S(L*) with a noncocommutative coproduct, forget the weird coproduct, and dualize back. But the details aren’t coming to me immediately. Indeed, I think that the coproduct on (UL)* is only topological — i.e. UL is filtered, and so (UL)* has a topology, and the coproduct lands in some completion of the tensor product. If my memory is correct, the coproduct on (UL)* corresponds to the “product” on L given by the BCH formula, if I try to think of (UL)* as the algebra of functions on L.

Alternately, let L be a Lie algebra, and L_h the algebra with the rescaled bracket [x,y]_h = h[x,y]. For non-zero h, L_h is isomorphic to L. Anyway, somehow build well-behaved isomorphisms UL_h = UL, and take the limit as h\to 0. Unfortunately, the only way I know to get the isomorphisms UL_h = UL to be well-behaved in the limit requires PBW.
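The rescaling in this second approach can be made concrete. Here is a sketch with 2×2 matrices, where $x \mapsto x/h$ is my choice of isomorphism $L \to L_h$; it only exists for $h \neq 0$, which is exactly where the difficulty with the limit lives:

```python
# Sketch: the rescaled bracket [x,y]_h = h[x,y] on a matrix Lie algebra,
# and the isomorphism L -> L_h given by x |-> x/h (valid only for h != 0).
from sympy import Matrix, Rational

h = Rational(1, 2)
X = Matrix([[0, 1], [0, 0]])
Y = Matrix([[0, 0], [1, 0]])

def br(a, b):            # original bracket
    return a * b - b * a

def br_h(a, b):          # rescaled bracket
    return h * br(a, b)

phi = lambda a: a / h    # candidate isomorphism L -> L_h

# phi([x,y]) = [phi(x), phi(y)]_h, so L_h is isomorphic to L whenever h is
# invertible; at h = 0 the map blows up and the bracket degenerates.
assert phi(br(X, Y)) == br_h(phi(X), phi(Y))
```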

Posted by: Theo on October 30, 2009 4:00 AM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

In week212 I said:

The kernel of the idea is this: if $L$ is the Lie algebra of a Lie group $G$, $U L$ consists of left-invariant differential operators on $G$, and there’s a map $U L \to S L$ sending any differential operator to its “symbol”. This is an isomorphism of vector spaces and even of coalgebras, but not of algebras.

It’s just a sketch of the proof, but it’s easy to fill in the details if you know about the symbol of a differential operator. $U L$ consists of the left-invariant differential operators on the Lie group $G$, while $S L$ consists of their symbols. $L$ itself consists of the first-order left-invariant differential operators on $G$!

The real point is that unlike the common textbook proof of the PBW theorem, which involves choosing an ordered basis $x_i$ of $L$ and writing any element of $U L$ as a unique linear combination of elements of the form $x_1^{k_1} \cdots x_n^{k_n}$ using a special case of the Diamond Lemma, the symbol map is manifestly basis-independent! So, we get a canonical isomorphism.

This proof works for finite-dimensional real or complex Lie algebras, though the existence of this canonical isomorphism is probably completely general.
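To make “symbol” concrete in the simplest case, here is a sketch (mine) on the group $\mathbb{R}$: the full symbol of an operator $D$ can be read off from $e^{-x\xi} D e^{x\xi}$ (the usual analysts’ convention carries a factor of $i$, which I drop). It exhibits the key point that the symbol map is linear but respects composition only in top degree:

```python
# Sketch: the full symbol of a differential operator D on R, computed as
# exp(-x*xi) * D(exp(x*xi)).  Composition of operators matches the product
# of symbols only in the top degree: the symbol map is linear, not an
# algebra map.
from sympy import symbols, exp, diff, expand, simplify, Poly

x, xi = symbols('x xi')

def symbol(D):
    """Full symbol of an operator D (given as a function on expressions)."""
    return expand(simplify(exp(-x * xi) * D(exp(x * xi))))

D1 = lambda f: diff(f, x)            # d/dx     -> symbol xi
D2 = lambda f: x * diff(f, x)        # x d/dx   -> symbol x*xi

assert symbol(D1) == xi
assert symbol(D2) == x * xi

# Symbol of the composition D1 after D2:
comp = symbol(lambda f: D1(D2(f)))
assert comp == xi + x * xi**2        # not symbol(D1)*symbol(D2) = x*xi**2

# ...but the principal (top-degree-in-xi) parts do agree:
assert Poly(comp, xi).LC() == Poly(x * xi**2, xi).LC()
```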

Posted by: John Baez on October 30, 2009 5:04 AM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

OK, so the problem is just that I don’t know what the symbol of a differential operator is (I’ve certainly heard the words, and figured I’d absorb the definition by osmosis eventually). So I did see that paragraph, but I think I was secretly hoping you had spelled it out further down. Oh, well.

Posted by: Theo on October 30, 2009 10:05 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

The Wikipedia article I just linked to gives a pretty decent explanation. It focuses on the symbols of differential operators on $\mathbb{R}^n$, but they work very much the same way on other Lie groups.

Posted by: John Baez on October 30, 2009 11:12 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

Well, so I definitely don’t believe you fully. We’ve been talking about symbols of differential operators over at MathOverflow. The consensus seems to be the following:

For a differential operator of degree at most k, one can define its kth-order symbol. This is well-defined, but not injective; the kernel consists of all operators of degree at most k-1. Since every operator has a degree, we can define for each operator a “principal symbol” which is the symbol of its highest part. But this function is not a linear map.

When you try to make this function into a linear map, by gluing together the data at different degrees, what you find out is: the algebra of differential operators on a space is filtered, and its associated graded algebra is isomorphic to the algebra of functions (smooth in the position, polynomial in momentum) on the cotangent bundle.

If your manifold comes equipped with a choice of affine structure, then you can define a linear map from differential operators to functions on the cotangent bundle (polynomial in the fibers, with coefficients that vary smoothly in the position variables). This is the “symbol” map of Wikipedia. What’s important to emphasize is that it depends on more than the smooth structure on R^n.

But it seems that for the PBW theorem all is not lost. A left-invariant differential operator is determined by its action in some open neighborhood of the identity element. And there one has a canonical choice of affine structure, given by the exponential map from the origin.

So essentially I choose these coordinates, and then I can pull any differential operator on the Lie group back to a differential operator on the Lie algebra, and define the symbol there. It’s not clear to me how the coefficients vary in space, but I guess I don’t care: I can now restrict my symbol (which is a function on the cotangent bundle to the Lie algebra) to the fiber over the identity; now I have a polynomial on (something canonically isomorphic to) the dual space to the Lie algebra, and thus an element of the symmetric algebra.

It’s probably obvious that the map I described is at least one of injective or surjective on terms of degree at most k, which is all I need, because I can then use the textbook PBW to count dimensions. Alternately, with a little thought it’s probably straightforward that the map I’ve described is an isomorphism (on terms at most k, then take a limit), and perhaps even an isomorphism of coalgebras. But I’m mostly thinking out loud (or out-keyboard), so I haven’t thought the last step through yet.

Is this the proof you were imagining? It’s certainly not quite as easy as “take the symbol of the differential operator”. Or perhaps there’s something else magical about being left-invariant that allows a canonical symbol to be defined. If so, please do tell.

Posted by: Theo on October 31, 2009 5:21 AM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

Is this the proof you were imagining? It’s certainly not quite as easy as “take the symbol of the differential operator”. Or perhaps there’s something else magical about being left-invariant that allows a canonical symbol to be defined. If so, please do tell.

as far as i know, the first time that john wrote about a proof of the pbw theorem using symbols of differential operators, he said that he heard it from me, which annoyed me because i know that i didn’t use the word or idea “symbol” in the proof that i told him.

the proof that i told him is this: the enveloping algebra of a lie algebra is the algebra of translation-invariant differential operators on the corresponding lie group. these can be thought of as (the convolution operators associated with) distributions supported at the identity element of the group, or, pulling back along the exponential map, as distributions supported at the additive identity of the lie algebra. since this is independent of the lie bracket operation, it’s equivalent (in particular as a coalgebra, not as an algebra) to the enveloping algebra that you get when the lie bracket operation is zero, which is the symmetric algebra of the underlying vector space.

i didn’t read the whole discussion here very carefully, but i still think that the above proof is pretty simple, and that “symbol” is mostly a red herring here.

Posted by: james dolan on October 31, 2009 8:11 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

Yes, that’s pretty straightforward. Actually, I think that after unpacking the argument I sketched, it becomes your proof, just my version had lots of spurious words, because I was figuring it out as I went.

Posted by: Theo on October 31, 2009 8:26 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

i suspect that at the part in the proof where i say:

since this is independent of the lie bracket operation, it’s equivalent … to the enveloping algebra that you get when the lie bracket operation is zero,

john is imagining the lie bracket operation being actually continuously re-scaled down to zero, which does (quite in line with the thread-starter here) suggest ideas about “symbol” and about how what we’re really talking about is a coalgebra isomorphism between the filtered enveloping associative algebra and its associated graded algebra, aka the enveloping poisson algebra.

Posted by: james dolan on October 31, 2009 9:12 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

john is imagining the lie bracket operation being actually continuously re-scaled down to zero, which does (quite in line with the thread-starter here) suggest ideas about “symbol” and about how what we’re really talking about is a coalgebra isomorphism between the filtered enveloping associative algebra and its associated graded algebra, aka the enveloping poisson algebra.

in fact, maybe it’s worth pointing out how the associated graded algebra of this filtered associative algebra becomes a graded poisson algebra: we can treat the associative operad and its algebra, the enveloping associative algebra, as a single filtered algebraic object, and then take its associated graded object.

Posted by: james dolan on October 31, 2009 9:41 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

I see that Jim beat me to it, but here’s what I’d been writing in response to Theo’s post.

You’re well on the way to a proof. You’ve done the interesting part, which is to get a canonical map from left-invariant differential operators on a Lie group $G$ to elements of the symmetric algebra of its Lie algebra $L$. So now you just need to prove that it’s an isomorphism.

You could do this in the way you suggested: checking that it’s one-to-one and onto. It’s easy to see it’s one-to-one: a left-invariant differential operator is determined by its symbol at any point. You could then show that it’s onto by a dimension count, as you sketched — but it would be a bit sad to use the old clunky PBW theorem to prove this new improved one… and the most satisfying way to prove a map is an isomorphism is to construct an inverse. So, why not try that instead?

Basically you just need to show that given the symbol of a differential operator at the identity of $G$, you can use left translations to build a left-invariant differential operator on the whole group. There are lots of ways to do this. Here’s my favorite, which may be overly high-tech.

We’ve seen that the universal enveloping algebra $U L$ is canonically isomorphic to the space of left-invariant differential operators on $G$. Given such an operator, say $D$, we can apply it to $\delta$, the Dirac delta at the identity of the group. The result is a distribution supported at the identity. It’s well-known that on a finite-dimensional vector space like $L$, the space of distributions supported at the origin is canonically isomorphic to $S L$. (See for example Proposition 1 here.) So, we have a map $U L \to S L$.

So far this is just another way of talking about the map you already constructed: the map that assigns to any left-invariant differential operator its symbol. But this way makes it easy to construct the inverse of this map. Namely, given a distribution $X$ supported at the identity of $G$, convolution with it:

$f \mapsto X * f$

is a left-invariant differential operator. This gives a map $S L \to U L$ which is the inverse of our other map.
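For the abelian group $G = \mathbb{R}$ all of this can be checked by hand: a distribution supported at the origin is a finite sum $\sum_k c_k \delta^{(k)}$, convolution with it is the constant-coefficient operator $f \mapsto \sum_k c_k f^{(k)}$, and composition of such operators is the Cauchy product of coefficient lists. A sketch (the helper names are mine):

```python
# Sketch for the abelian group G = R: a distribution supported at 0 is a
# finite sum  sum_k c_k * delta^(k),  convolution with it is the constant-
# coefficient operator  f |-> sum_k c_k * f^(k),  and composing two such
# operators is the Cauchy product of the coefficient lists.
from sympy import symbols, diff

x = symbols('x')

def conv_op(coeffs, f):
    """Apply (sum_k coeffs[k] * delta^(k)) * f  =  sum_k coeffs[k] * f^(k)."""
    return sum(c * diff(f, x, k) for k, c in enumerate(coeffs))

def cauchy(a, b):
    """Convolution of coefficient lists = product of the symbol polynomials."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

f = x**4
D1, D2 = [0, 1], [2, 0, 3]          # delta',  2*delta + 3*delta''

# Operator composition agrees with convolution of the distributions:
assert conv_op(D1, conv_op(D2, f)) == conv_op(cauchy(D1, D2), f)
```

On a nonabelian $G$ roughly the same dictionary holds, but the arithmetic is no longer commutative, which is where $U L$ differs from $S L$ as an algebra.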

There must be a less fancy way to say this, but I’m too lazy to think of it. Basically I’m just confirming your suspicion:

… perhaps there’s something else magical about being left-invariant that allows a canonical symbol to be defined.

Posted by: John Baez on October 31, 2009 10:26 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

Note that both UL and SL are Hopf algebras and the isomorphism is much easier to arrive at on the coalgebra side.

Posted by: jim stasheff on November 1, 2009 1:07 AM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

This thread drives me nuts. I have the impression that every proof of PBW is hard: It either goes through the Diamond lemma, or through Baker-Campbell-Hausdorff, or Ado’s theorem, or something. And then I think “but what about that post of John Baez’s?” And I think about it for hours until I get to a computer, reread the thread and remember what is going on. So I’m going to post it, in the hopes that I won’t get stuck like that again.

Here’s the point: You are only proving PBW for a Lie algebra which is the Lie algebra of a Lie group. That is substantially easier than proving it for an abstract Lie algebra. I don’t want to criticize this – it is definitely a great way of explaining how to do the proof in that case – but, unless you want to prove every Lie algebra comes from a Lie group, it is not the same thing.

The point is the following. We have maps $\mathrm{Sym}(g) \to \bigoplus g^{\otimes n} \to U(g)$. The composite is a map of coalgebras, so it must be the map you construct. One way to state PBW is that this map is an isomorphism. (We are in characteristic zero here. In characteristic $p$, this map is not an isomorphism, even when $g$ is abelian. For example, in characteristic $2$, if $x$ and $y$ are commuting elements of $g$, then $xy$ is sent to $x \otimes y + y \otimes x$, which is sent to $2xy = 0$ in $U(g)$.) Surjectivity is easy, the problem is injectivity.

If $f$ is sent to zero, then $\Delta(f)$ is sent to zero in $U(g) \otimes U(g)$, and thus $\Delta(f) - f \otimes 1 - 1 \otimes f$ is sent to $0$. Let $P_k$ be the subspace of $\mathrm{Sym}(g)$ consisting of polynomials of degree $\leq k$. If $f \in P_k$, then $\Delta(f) - f \otimes 1 - 1 \otimes f \in P_{k-1} \otimes P_{k-1}$. So the minimal degree element of the kernel must satisfy $\Delta(f)-f \otimes 1 - 1 \otimes f =0$. This forces $f$ to be linear (we are still in characteristic zero.)
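This last step can be tested concretely: identifying $\mathrm{Sym}(g)$ with polynomials, the coproduct is $\Delta(f)(x', x'') = f(x' + x'')$, so $\Delta(f) - f \otimes 1 - 1 \otimes f$ becomes $f(x_1 + x_2) - f(x_1) - f(x_2)$. A one-variable sketch, which also exhibits the characteristic-2 failure mentioned above:

```python
# Sketch: on Sym(g) = polynomials, Delta(f) - f(x)1 - 1(x)f corresponds to
# f(x1 + x2) - f(x1) - f(x2).  Over Q this vanishes only for linear f; in
# characteristic 2 it also vanishes for f = x^2 (the failure of PBW there).
from sympy import symbols, expand

x1, x2 = symbols('x1 x2')

def primitivity_defect(f):
    """f(x1 + x2) - f(x1) - f(x2), for a one-variable polynomial f."""
    return expand(f(x1 + x2) - f(x1) - f(x2))

assert primitivity_defect(lambda x: 5 * x) == 0           # linear: primitive
assert primitivity_defect(lambda x: x**2) == 2 * x1 * x2  # vanishes only mod 2
```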

So, in short, it is enough to show that $g$ injects into $U(g)$. (I first learned this argument from Noah Snyder.) In other words, we have to show that $g$ has some faithful representation – finite or infinite dimensional, we don’t care.

Now, when $g$ comes from a Lie group, this is easy: The left invariant vector fields give a faithful representation (on the space of smooth functions). When $g$ is a Lie algebra of matrices, this is easy: the matrices give your representation. If $g$ is the Lie algebra of a formal group, that can be made to work too.

But, purely given a Lie algebra which doesn’t come from anywhere, I continue to believe that PBW is hard.

Posted by: David Speyer on March 24, 2011 4:27 AM | Permalink | Reply to this

### Number of meaningful differential operations; Re: This Week’s Finds in Mathematical Physics (Week 282)

That gets to related enumeration problems.

A127935: “Number of meaningful differential operations of the n-th order on the space R^(3+n).”

    n:    1   2   3   4   5    6    7    8     9    10
    a(n): 3   6  16  26  84  126  424  610  2068  2936

a(1) = number of meaningful differential operations of the 1st order on the space R^3 = 3 = A020701(1) namely {del_1, del_2, del_3} = {div, grad, curl}, since it is well-known that the 3 first-order differential operations on the space R^3 can be introduced using the operator of the exterior differentiation of differential forms [Bott].

a(2) = number of meaningful differential operations of the 2nd order on the space R^4 = A090989(2), namely 6 nontrivial second-order compositions del_j o del_k such that k + j = 4 + 1 and 2k not equal to 4.

a(3) = number of meaningful differential operations of the 3rd order on the space R^4 = 16 = A090990(3), namely 16 nontrivial third-order compositions del_k o del_j o del_k and del_j o del_k o del_j.

http://www.research.att.com/~njas/sequences/A116183

R. Bott, L. W. Tu, Differential forms in algebraic topology, New York: Springer, 1982.

Branko Malesevic: Some combinatorial aspects of differential operation composition on the space R^n, Univ. Beograd, Publ. Elektrotehn. Fak., Ser. Mat. 9 (1998), 29-33.

Posted by: Jonathan Vos Post on October 31, 2009 10:44 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

Good to have you back explaining “serious math”! And nice timing with Mike’s post the next day.

A couple of trivial typos:

Linear operads are [a] lot like associative algebras…

Of course [he] means “operad” where he writes “algebra” or “ring” - while [the] constructions he describes are most familiar for algebras or rings, they work for operads too!

Posted by: David Corfield on October 30, 2009 9:47 AM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

Thanks, David! Yes, my pep is coming back after a rigorous regime of goofing off. I fixed the errors you pointed out and added your remarks about signed versus unsigned Stirling numbers of the first kind.

All that stuff about ‘inverse triangular matrices’ is probably related to Möbius inversion or some generalization thereof, so there’s probably more to be understood in the relation between $Assoc$, $Comm$ and both kinds of Stirling numbers.
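The “inverse triangular matrices” fact is easy to verify by machine from the standard recurrences; a sketch:

```python
# Sketch: signed Stirling numbers of the first kind and Stirling numbers of
# the second kind, viewed as lower-triangular matrices, are mutually inverse.
def stirling_matrices(N):
    # s[n][k]: signed first kind,  s(n+1,k) = s(n,k-1) - n*s(n,k)
    # S[n][k]: second kind,        S(n+1,k) = S(n,k-1) + k*S(n,k)
    s = [[0] * (N + 1) for _ in range(N + 1)]
    S = [[0] * (N + 1) for _ in range(N + 1)]
    s[0][0] = S[0][0] = 1
    for n in range(N):
        for k in range(n + 2):
            s[n + 1][k] = (s[n][k - 1] if k else 0) - n * s[n][k]
            S[n + 1][k] = (S[n][k - 1] if k else 0) + k * S[n][k]
    return s, S

s, S = stirling_matrices(6)
prod = [[sum(s[i][k] * S[k][j] for k in range(7)) for j in range(7)]
        for i in range(7)]
assert prod == [[1 if i == j else 0 for j in range(7)] for i in range(7)]
```

With the unsigned first kind the product is not the identity; the signs are exactly what make the inversion work.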

Posted by: John Baez on October 30, 2009 7:03 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

Wikipedia wants to tag your Stirling numbers as ‘unsigned’, and yet notes that “nearly all the relations and identities given on this page are valid only for unsigned Stirling numbers”. Also the link with exponential generating functions goes through the unsigned version.

So why deal with the signed version? Is it because

The Stirling numbers of the first and second kind can be understood to be inverses of one-another, when taken as triangular matrices.

Posted by: David Corfield on October 30, 2009 12:32 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

Thank you for a great TWF! The structure of the Assoc operad is indeed much richer than it might first appear and you’ve done a great job communicating the details in such a clear fashion.

But I wonder if you’re aware of the more mysterious side of the story? You’ve dealt with the anti-symmetric operations of the Assoc operad and ideals generated by them, but what about the symmetric operations?

I believe the structure of the suboperad generated by ab+ba is an open problem.

Posted by: James Griffin on October 30, 2009 4:13 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

Thanks, James — I’m glad you liked “week282”. Do you — or anyone here — know if anybody has ever written before about the role of Stirling numbers in this filtration of the operad Assoc? I haven’t seen it anywhere.

I believe the structure of the suboperad generated by ab+ba is an open problem.

Here’s what I know about it, from week192:

People also say: to understand Jordan algebras, start with an associative algebra and see what you can do with just 1 and the operation

X o Y = XY + YX

This looks very similar; the only difference is a sign! But it’s harder to find all the identities this operation must satisfy. Actually, if you don’t mind, I think I’ll switch to the more commonly used normalization

X o Y = (XY + YX)/2

Two of the identities are obvious:

1 o X = X

X o Y = Y o X

The next one is less obvious:

X o ((X o X) o Y) = (X o X) o (X o Y)
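(An aside, not part of the original week192 text: these three identities are easy to check by machine for the Jordan product on symmetric matrices, which form a special Jordan algebra.)

```python
# Quick check that X o Y = (XY + YX)/2 on symmetric 2x2 matrices satisfies
# the unit law, commutativity, and the Jordan identity, in exact arithmetic.
from sympy import Matrix, Rational, eye

def jp(a, b):                         # X o Y = (XY + YX)/2
    return (a * b + b * a) * Rational(1, 2)

X = Matrix([[1, 2], [2, 5]])
Y = Matrix([[0, 3], [3, 1]])
I = eye(2)

assert jp(I, X) == X                              # 1 o X = X
assert jp(X, Y) == jp(Y, X)                       # X o Y = Y o X
assert jp(X, jp(jp(X, X), Y)) == jp(jp(X, X), jp(X, Y))   # Jordan identity
```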

At this point, Pascual Jordan quit looking for more and made these his definition of what we now call a "Jordan algebra":

12) Pascual Jordan, Ueber eine Klasse nichtassociativer hyperkomplexer Algebren, Nachr. Ges. Wiss. Goettingen (1932), 569-575.

He wrote this paper while pondering the foundations of quantum theory, since bounded self-adjoint operators on a Hilbert space represent observables, and they’re closed under the product ab + ba.

Later, with Eugene Wigner and John von Neumann, he classified all finite-dimensional Jordan algebras that are "formally real", meaning that a sum of terms of the form X o X is zero only if each one is zero. This condition is reasonable in quantum mechanics, because observables like X o X are "positive". It also leads to a nice classification, which I described in "week162".

Interestingly, one of these formally real Jordan algebras doesn’t sit inside an associative algebra: the "exceptional Jordan algebra", which consists of all 3x3 hermitian matrices with octonion entries.

This algebra has lots of nice properties, and it plays a mysterious role in string theory and some other physics theories. This is the main reason I’m interested in Jordan algebras, but I’ve said plenty about this already; now I want to focus on something else.

Namely: did Jordan find all the identities?

More precisely: if we set X o Y = (XY + YX)/2, can all the identities satisfied by this operation in every associative algebra be derived from the above 3 and the fact that this operation is linear in each slot?

This was an open question until 1966, when Charles M. Glennie found the answer is no.

It’s a bit like Tarski’s "high school algebra problem", where Tarski asked if all the identities involving addition, multiplication and exponentiation which hold for the positive natural numbers follow from the ones we all learned in high school. Here too the answer is no - see "week172" for details. That really shocked me when I heard about it! Glennie’s result is less shocking, because Jordan algebras are less familiar… and the Jordan identity is already pretty weird, so maybe we should expect other weird identities.

It’s easiest to state Glennie’s identity with the help of the "Jordan triple product"

{X,Y,Z} = (X o Y) o Z + (Y o Z) o X - (Z o X) o Y

Here it is:

2{{Y, {X,Z,X},Y}, Z, XoY} - {Y, {X,{Z,XoY,Z},X}, Y} -2{XoY, Z, {X,{Y,Z,Y},X}} + {X, {Y,{Z,XoY,Z},Y}, X} = 0

Blecch! It makes you wonder how Glennie found this, and why.
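Since Glennie's identity holds in every special Jordan algebra, it must vanish identically for matrices under the product $X \circ Y = (XY + YX)/2$. A quick numerical sanity check (my sketch, not from the post; numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)

def o(A, B):
    # special Jordan product: X o Y = (XY + YX)/2
    return (A @ B + B @ A) / 2

def tri(A, B, C):
    # Jordan triple product {A,B,C} = (AoB)oC + (BoC)oA - (CoA)oB
    return o(o(A, B), C) + o(o(B, C), A) - o(o(C, A), B)

# random symmetric matrices
X, Y, Z = (M + M.T for M in rng.standard_normal((3, 3, 3)))
W = o(X, Y)

# Glennie's degree-8 identity, term by term as displayed above
G = (2 * tri(tri(Y, tri(X, Z, X), Y), Z, W)
     - tri(Y, tri(X, tri(Z, W, Z), X), Y)
     - 2 * tri(W, Z, tri(X, tri(Y, Z, Y), X))
     + tri(X, tri(Y, tri(Z, W, Z), Y), X))

assert np.abs(G).max() < 1e-8  # vanishes up to rounding error
```

Replacing the symmetric matrices by 3x3 hermitian octonionic matrices is exactly where this check would fail.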

I don’t know the full story, but I know Glennie was a Ph.D. student of Nathan Jacobson, a famous algebraist and expert on Jordan algebras. I’m sure that goes a long way toward explaining it. He published his result here:

13) C. M. Glennie, Some identities valid in special Jordan algebras but not in all Jordan algebras, Pacific J. Math. 16 (1966), 47-59.

Well, I’m afraid the title of the paper gives the answer away: in addition to the above identity of degree 8, Glennie also found another. In fact there turn out to be infinitely many identities that can’t be derived from the previous ones using the Jordan algebra operations.

As far as I can tell, the full story was discovered only in the 1980s. Let me quote something by Murray Bremner. It will make more sense if you know that the identities we’re after are called "s-identities", since they hold in "special" Jordan algebras: those coming from associative algebras. Here goes:

Efim Zelmanov won the Fields Medal at the International Congress of Mathematicians in Zurich in 1994 for his work on the Burnside Problem in group theory. Before that he had solved some of the most important open problems in the theory of Jordan algebras. In particular he proved that Glennie’s identity generates all s-identities in the following sense: if G is the T-ideal generated by the Glennie identity in the free Jordan algebra FJ(X) on the set X (where X has at least 3 elements), then the ideal S(X) of all s-identities is quasi-invertible modulo G (and its homogeneous components are nil modulo G) [….] Roughly speaking, this means that all other s-identities can be obtained by substituting into the Glennie identity, generating an ideal, extracting n-th roots, and summing up.

This is a bit technical, but basically it means you need to expand your arsenal of tricks a bit before Glennie’s identity gives all the rest. The details can be found in Theorem 6.7 here:

14) Kevin McCrimmon, Zelmanov’s prime theorem for quadratic Jordan algebras, Jour. Alg. 76 (1982), 297-326.

and I got the above quote from a talk by Bremner:

15) Murray Bremner, Using linear algebra to discover the defining identities for Lie and Jordan algebras, available at http://math.usask.ca/~bremner/research/colloquia/calgarynew.pdf

Now, the Jordan triple product

{X,Y,Z} = (X o Y) o Z + (Y o Z) o X - (Z o X) o Y

may at first glance seem almost as bizarre as Glennie’s identity, but it’s not! To understand this, it helps to think about "operads".
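One way to see it’s not so bizarre: for the special product $X \circ Y = (XY + YX)/2$ the triple product collapses to $\{X,Y,Z\} = (XYZ + ZYX)/2$, which you can verify by expanding the three terms. A quick numerical check of this collapse (my sketch, numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)

def o(A, B):
    # special Jordan product
    return (A @ B + B @ A) / 2

X, Y, Z = rng.standard_normal((3, 4, 4))

# {X,Y,Z} built from o, versus its associative-algebra form
lhs = o(o(X, Y), Z) + o(o(Y, Z), X) - o(o(Z, X), Y)
rhs = (X @ Y @ Z + Z @ Y @ X) / 2
assert np.allclose(lhs, rhs)
```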

Etcetera…

Posted by: John Baez on October 30, 2009 9:59 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

15) Murray Bremner, Using linear algebra to discover the defining identities for Lie and Jordan algebras, available at http://math.usask.ca/~bremner/research/colloquia/calgarynew.pdf

Sadly, that doesn't seem to be available online anymore.

Posted by: Toby Bartels on October 31, 2009 12:30 AM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

But thanks, I should update my link.

Posted by: John Baez on October 31, 2009 3:47 AM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

Thanks for the reply, I hadn’t realised there was such a history of study into this problem.

As for the Stirling numbers and their presence in the literature, I’m not aware of anywhere where the precise connection is made, but then I’m just a PhD student. However, people are very much aware of the dimensions, and there are other ways of deriving them. I plan to “do it properly” in my thesis; there’s more than just a filtration in characteristic 0. And there are other ways to present the connection between Assoc and Poisson.
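A side note on those dimensions: if, as in the TWF discussion, they are the unsigned Stirling numbers of the first kind $c(n,k)$, then for each $n$ they must sum to $\dim \mathrm{Assoc}_n = n!$. A quick sanity check (my illustration, plain Python):

```python
from math import factorial

def stirling1_unsigned(n, k):
    # c(n, k): permutations of n letters with exactly k cycles,
    # via the recurrence c(n, k) = c(n-1, k-1) + (n-1) * c(n-1, k)
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return stirling1_unsigned(n - 1, k - 1) + (n - 1) * stirling1_unsigned(n - 1, k)

# the c(n, k) for fixed n sum to n!, the dimension of Assoc_n
for n in range(1, 8):
    assert sum(stirling1_unsigned(n, k) for k in range(n + 1)) == factorial(n)
```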

Posted by: James Griffin on November 2, 2009 10:59 AM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

The spirals of the “flux transfer events” seem to be consistent with vortices of dynamic mathematics.

They may also be related to the JC Maxwell original description of “corkscrew motion” describing the mechanics of electromagnetism.

Posted by: Doug on October 30, 2009 6:44 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

The idea here is to realize that classical mechanics isn't really true: the world is quantum mechanical. So, even when we think our algebra of observables is commutative, it’s probably not; commutativity is probably just an approximation. It’s not really true that the commutator $[x,y]$ is zero. Instead, it's just tiny.

How do we formalize this? Well, in reality $[x,y]$ is often proportional to a tiny constant called Planck's constant, $h$. When this happens, we can write $[x,y] = h\{x,y\}$ where $\{x,y\}$ is some other element of our associative algebra.

It seems to me that there's no ‘if’ about this; it's always true in reality. If $x$ and $y$ are elements in the algebra of observables, then we can always define $\{x,y\}$ as the commutator $x y - y x$ divided by Planck's constant.

Unfortunately I can't derive this from my meager assumptions thus far, since I'm not allowed to divide by $h$. So let me also assume that multiplication by $h$ is one-to-one in $A$.

That's a pretty safe assumption, since it is true in reality. In fact, Planck's constant is invertible in reality!

Then let's consider the algebra $A/h A$, which we define by taking $A$ and imposing the relation $h = 0$. This amounts to neglecting quantum effects, so $A/h A$ is called the "classical limit" of our original algebra $A$.

Now here's the problem. If $h$ is invertible, then everything that you say about $A/h A$ is true, but so is something more: $A/h A$ is trivial! Then it is of no use in describing anything.

I can understand all of this at the level of an approximation, which perhaps should be called a ‘semiclassical’ approximation. In the classical approximation, $h = 0$; in the semiclassical approximation, $h \neq 0$ and multiplication by $h$ is even one-to-one, but $h$ is not invertible. And then passing from $A$ to $A/h A$ describes how the classical approximation is derived from the semiclassical approximation.

But none of this shows how these approximations are related to the fully quantum theory, because $h$ is invertible in that theory!

Posted by: Toby Bartels on October 31, 2009 1:30 AM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

I think Toby you did spot a mistake in John’s presentation, but I have a quarrel with your interpretation of the mistake.

For the argument that we want, indeed, multiplication by the element $\hbar \in A$ is not one-to-one.

We want to think of $A = A_0[[\hbar]]$ as being the ring of power series in $\hbar$ over a ground ring $A_0$.

In this ring, $\hbar$ is not invertible. Of course, when you look at ring homomorphisms into $A_0 \otimes_{\mathbb{Z}} \mathbb{R}$ that have trivial kernel, these take $\hbar$ to an invertible element of $\mathbb{R}$. But that’s really a different issue.

But if we indeed assume that $A$ is a power series ring in $\hbar$, then the argument John needs goes through after all: then it does follow from

$\hbar \{x,y\} = -\hbar \{y,x\}$

in $A = A_0[[\hbar]]$ that

$\{x,y\} = -\{y,x\}$

in $A_0$.

Posted by: Urs Schreiber on October 31, 2009 2:10 AM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

Urs wrote:

We want to think of $A=A_0[[\hbar]]$ as being the ring of power series in $\hbar$ over a ground ring $A_0$.

As a side note: while formal power series are very convenient here, I wrote “assume the algebra $A$ is actually an algebra over $\mathbb{C}[h]$, or maybe over the ring of analytic functions in $h$” because I didn’t want Toby coming back and pointing out that we can’t always evaluate formal power series at nonzero values of $h$. Basically in this game formal power series are a trick for putting off the hard work of proving that the relevant series converge.

Posted by: John Baez on October 31, 2009 10:37 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

There’s more to it than that. I think it was ?Kuranishi?, in his major deformation result, who deformed in the usual way, order by order, keeping track of the estimates as he went along. The other approach is to do _all_ the algebra first, i.e. find a formal power series solution, and _then_ study the convergence, aka the analysis.

Posted by: jim stasheff on November 1, 2009 1:05 AM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

Basically in this game formal power series are a trick for putting off the hard work of proving that the relevant series converge.

Yes, the real deformation quantization is $C^*$-algebraic deformation quantization as nicely described in Eli Hawkins’ A Groupoid Approach to Quantization.

Last time we had this conversation the other way round: I had mentioned the general case and you pointed out the formal example.

The general case here is really very beautiful: as recalled in that blog discussion of Eli Hawkins’ and Klaas Landsman’s and others’ work, full C-star algebraic quantization is really nothing but higher Lie integration of symplectic $\infty$-Lie algebroids.

That deserves to be said twice on a higher categorical blog interested in quantum physics. Which is hereby done.

Posted by: Urs Schreiber on November 2, 2009 10:57 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

Toby wrote:

Right, but I’m not talking about reality, where Planck’s constant $h$ is a fixed nonzero number. I’m talking about theory, where $h$ is a variable that shows up in our equations.

In theory, it’s convenient to pretend we have the freedom to adjust the value of $h$, and take the limit $h \to 0$ to get classical mechanics. But this is just a trick. In reality, what we can do is to make other quantities — quantities with units of action — very large.

So, for example, suppose we’re using quantum mechanics to study two particles interacting by an inverse square force law, and we want to see what happens as $h \to 0$. Then what we should do is scale up all the positions and velocities and masses in a suitable way — making sure that all quantities with units of action get multiplied by $k$, and letting $k \to \infty$.

For example, we can start by looking at a hydrogen atom, and then look at the Earth going around the Sun.

Now here’s the problem. If $h$ is invertible, then everything that you say about $A/h A$ is true, but so is something more: $A/h A$ is trivial!

Right, and that’s why I carefully avoided saying that $h$ is invertible — or even a number, once I got rolling. Instead I said we had an associative algebra $A$ containing a central element $h$ with the property that multiplication by $h$ is one-to-one, and

$x y - y x = h \{ x , y \}$

for some element $\{x, y \}$.

Typically what we do is assume the algebra $A$ is actually an algebra over $\mathbb{C}[h]$, or maybe over the ring of analytic functions in $h$, where $h$ is some variable. This ensures that multiplication by $h$ is one-to-one, but $h$ is not invertible.

This amounts to assuming that our formulas — the relations in $A$ — are polynomial or analytic functions of $h$.

And this in turn amounts to assuming that in reality, as we scale up all quantities with units of action, the answers to our physics problems behave in a very nice way.

Posted by: John Baez on October 31, 2009 2:39 AM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

Oh, I see. One-to-one means injective, not bijective.

Posted by: Urs Schreiber on October 31, 2009 2:45 AM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

In theory, it’s convenient to pretend we have the freedom to adjust the value of $h$, and take the limit $h \to 0$ to get classical mechanics. But this is just a trick. In reality, what we can do is to make other quantities — quantities with units of action — very large.

Yeah, I know that's what's really going on, but how come nobody ever says this? Instead, they take a quantity which is fixed in the real world (and which is fixed at the value of $1$ —or rather $2 \pi$, since you said $h$ instead of $\hbar$— in nice units), pretend that it's a variable, and then claim to change it.

I understand making the physical quantities with units of action arbitrarily large, at least in a vague hand-wavy way. And I understand the mathematics of modding out $A_0[\![h]\!]$ by $h$ and getting $A_0$ back, or even modding out any associative algebra $A$ by a central element $h$ and getting a quotient algebra, with nice properties (but not too nice) if multiplication by $h$ is injective (but not surjective).

What I don't understand is what the one has to do with the other. Or rather, I do understand that too, but only in the same vague hand-wavy way; the mathematics does not help make my understanding of the physics more precise, since it goes about things in (what seems to me to be) a backwards way. Instead, it just gives me one more thing to be vague and hand-wavy about.

So if you give me a Hilbert space $\mathcal{H}$ and a self-adjoint operator $\hat{H}$ on it, and ask me to come up with a symplectic manifold $X$ and a real-valued function $H$ on it (that's a lot of ‘H’s!), such that the classical Hamiltonian system $(X,H)$ is a classical approximation to the quantum Hamiltonian system $(\mathcal{H},\hat{H})$, I might be able to make my vague hand-wavy ideas precise enough to do so, at least in an ad-hoc way. But I'm sure that $A/h A$, where $A$ is an algebra of operators on $\mathcal{H}$, is not going to be a useful algebra of functions on $X$.

Posted by: Toby Bartels on October 31, 2009 8:21 AM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

Several different productive and mathematically rigorous ways to think about classical limits of quantum systems are described here, here, here, here, and here.

Posted by: Arnold Neumaier on October 31, 2009 1:52 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

Thanks, I'll read those (except maybe the first one, which is not free online but which I can get at the library) too.

Posted by: Toby Bartels on October 31, 2009 8:19 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

I think it is kind of presumptuous even for you guys to talk about “reality” and what’s “really” going on. Nobody really understands reality or what’s really going on.

For example, you’re requiring a continuum, and it’s debatable whether that has a place in describing reality. “Approximating” reality, sure.

Posted by: Eric Forgy on October 31, 2009 3:46 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

I know it’s presumptuous to use the word ‘reality’, and yet I do. The point is: I don’t take it seriously — indeed, this time I was just mimicking Toby’s use of it. And I doubt Toby was taking it very seriously either.

As for the possibility that continuum physics is an approximation to some underlying discrete theory, the hard part is coming up with theories that describe this in a nice way. I’ve spent years trying to develop such theories: spin foam models and groupoidification are two tries. So I don’t really need reminders about this. But maybe other people do.

Posted by: John Baez on October 31, 2009 6:05 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

John Baez wrote:

I know it’s presumptuous to use the word ‘reality’, and yet I do. The point is: I don’t take it seriously — indeed, this time I was just mimicking Toby’s use of it. And I doubt Toby was taking it very seriously either.

And I only used that word because you used it, in the first passage that I quoted. It felt a little funny to write it at first, so I took it to mean the relevant theories of physics that are generally agreed to provide our best understanding of reality, which are fully quantum theories. (Now I just used the word ‘reality’ again, but this time in a more serious way; to explain what I mean by that would take us into metaphysics. If you're a naïve realist, then interpret it literally.)

Posted by: Toby Bartels on October 31, 2009 6:52 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

Toby wrote:

Yeah, I know that’s what’s really going on, but how come nobody ever says this?

What do you mean, nobody ever says this? I just did!

I bet that lots of mathematicians working on deformation quantization never think hard about what it means physically. But I’m sure lots of physicists have. Indeed, I think any physicist who thinks seriously about the ‘classical limit’ realizes that there isn’t really a knob in god’s basement that lets us adjust the value of $\hbar$: instead, systems act approximately classical in certain regimes. And some physicists study this stuff in detail, because it’s important for understanding our universe.

I understand making the physical quantities with units of action arbitrarily large, at least in a vague hand-wavy way. And I understand the mathematics of modding out $A_0[\![h]\!]$ by $h$ and getting $A_0$ back, or even modding out any associative algebra $A$ by a central element $h$ and getting a quotient algebra, with nice properties (but not too nice) if multiplication by $h$ is injective (but not surjective).

What I don't understand is what the one has to do with the other. Or rather, I do understand that too, but only in the same vague hand-wavy way…

Should we try an example? Say, the particle on a line? Here we can take our algebra $A$ to be the Weyl algebra: the complex algebra generated by $q$ (position) and $p$ (momentum) with the relation

$[q,p] = i \hbar$

For any fixed nonzero value of $\hbar$, we can think of the Weyl algebra as an algebra of densely defined operators on $L^2(\mathbb{R})$ — it’s nice to think of them as operators on the Schwartz space.
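On polynomials, this representation ($q$ = multiplication by $x$, $p = -i\hbar\, d/dx$) can be sanity-checked symbolically — both the relation $[q,p] = i\hbar$ and a consequence like $[q^2,p^2] = 2 i \hbar (q p + p q)$. A sketch (my illustration, not part of the comment; sympy assumed):

```python
import sympy as sp

x, hbar = sp.symbols('x hbar')

# Schroedinger representation on polynomials in x:
# q = multiplication by x, p = -i*hbar*d/dx
def q(f): return sp.expand(x * f)
def p(f): return sp.expand(-sp.I * hbar * sp.diff(f, x))

def commutator(A, B):
    return lambda f: sp.expand(A(B(f)) - B(A(f)))

f = x**3 + 2*x  # an arbitrary test polynomial

# the canonical commutation relation [q, p] = i*hbar
assert sp.simplify(commutator(q, p)(f) - sp.I * hbar * f) == 0

# [q^2, p^2] = 2*i*hbar*(qp + pq)
q2 = lambda g: q(q(g))
p2 = lambda g: p(p(g))
assert sp.simplify(commutator(q2, p2)(f)
                   - 2 * sp.I * hbar * (q(p(f)) + p(q(f)))) == 0
```

Dividing the last relation by $i\hbar$ gives $2(qp + pq) = 4\, q \circ p$, matching the Poisson bracket $\{q^2, p^2\} = 4 q p$ in the classical limit.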

Then, we can compare the process of scaling down $\hbar$ in this algebra to the process of taking certain states in the Schwartz space — coherent states are nice — and scaling up the expectation values of position and momentum in these states.

And we can study the limit $\hbar \to 0$ from either viewpoint, and see how they’re just different ways of talking about the same thing.

It would take a bit of work, so maybe we can save time by just pretending we did it. But it’s this sort of work one needs to do, I think, to get over the feeling of ‘hand-waving’.

Anyway you’ve convinced me I should insert a sentence or two to the Addenda of “week282” warning the reader to look here for more information on the physical meaning of deformation quantization.

Posted by: John Baez on October 31, 2009 5:50 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

It would take a bit of work, so maybe we can save time by just pretending we did it. But it’s this sort of work one needs to do, I think, to get over the feeling of ‘hand-waving’.

Yeah, I've spent about a decade pretending that I've done this, and it's about time that I actually did it. So you can be the Wizard to me, and teach me something that I should have understood clearly a long time ago. (We could do it here or start a new thread.)

I'll try to write a more substantial reply in a little while, but first a question: When we pick the coherent states, is there any canonical way to select them (or something that will end up being equivalent to selecting them) from the data at hand? Probably not just from the algebra that we started with, but perhaps from the algebra together with a Hamiltonian such as $p^2/2$ or $p^2/2 + q^2/2$, or even with $p^2/2m$ or $p^2/2m + k q^2/2$, where the scaling of the states' expectation values is justified as scaling $m$ or $m$ and $k$ (or $m/k$)?

Posted by: Toby Bartels on October 31, 2009 8:14 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

Toby wrote:

So you can be the Wizard to me, and teach me something that I should have understood clearly a long time ago. (We could do it here or start a new thread.)

Here is fine.

POOF!

I’ll try to write a more substantial reply in a little while, but first a question: When we pick the coherent states, is there any canonical way to select them (or something that will end up being equivalent to selecting them) from the data at hand?

I’m actually not sure how important the coherent states will be in this game. It’s possible that the necessary ‘rescaling of states’ will be defined on the whole Hilbert space $L^2(\mathbb{R})$, and map the whole Schwartz space to itself.

But anyway, to specify a coherent state in $L^2(\mathbb{R})$ we need to specify not just a point $(q,p)$ in the position-momentum space $T^*\mathbb{R}$, but also an ellipse of unit area centered at this point. This ellipse describes the uncertainty of position-momentum in this coherent state!

Affine symplectic transformations act transitively on these position-momentum-ellipse triples: moving the points around, but also stretching and squashing the ellipses, while preserving their area.

If we put an inner product on $T^*\mathbb{R}$ we break some of this symmetry. We get a standard way to choose an ellipse centered at any point: namely, a circle.

This inner product also gives a classical Hamiltonian, namely the quadratic form

$\|(q,p)\|^2 = A q^2 + B q p + C p^2$

coming from this inner product. And it also gives a quantum Hamiltonian. And the time evolution generated by this quantum Hamiltonian maps coherent states (with the appropriate choice of ellipse) to coherent states!

Posted by: John Baez on October 31, 2009 11:03 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

This inner product also gives a classical Hamiltonian, namely the quadratic form ${\|(q,p)\|}^2 = A q^2 + B q p + C p^2$ coming from this inner product. And it also gives a quantum Hamiltonian.

And the harmonic oscillator $p^2/2 m + k q^2/2$ is an example of this.

But I don't want to talk about $T^*(\mathbb{R})$, with or without an inner product, at all —that's cheating! I want to start with a properly quantum system and derive the classical phase space from that. So I'd be willing to start with the Weyl algebra and the quantum Hamiltonian $A q^2 + B q p + B p q + C p^2$ (where my $B$ is half of your $B$), unless you think that the Hamiltonian is irrelevant.

So in the Weyl algebra $W$ that I know and love, $q p = p q + \mathrm{i} \hbar$, where $\hbar$ is a positive real number that depends on our units (and which can be set to $1$ by picking them appropriately). That is, $W$ is the $\mathbb{C}$-algebra (possibly topological in some sense) freely generated by two elements $q$ and $p$ and that relation. There's also the polynomial algebra $W[h]$, which has elements $\hbar$ and $h$ that are not the same thing, but that's no help since $W[h]/\langle{h}\rangle$ is just $W$ again.

I might also consider the $\mathbb{C}$-algebra freely generated by three elements $q$, $p$, and $h$, with the relations $q h = h q$, $p h = h p$, and $q p = p q + \mathrm{i} h$. If I mod this algebra out by $h$, then I get a commutative Poisson algebra. But how do I get this algebra (or at least the Poisson algebra) from $W$? That is what I'm trying to understand.

Posted by: Toby Bartels on November 1, 2009 12:26 AM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

Toby wrote:

But I don’t want to talk about $T^*(\mathbb{R})$, with or without an inner product, at all — that’s cheating! I want to start with a properly quantum system and derive the classical phase space from that.

Okay — good luck. I don’t particularly like it when physicists play this kind of game, since it often amounts to pretending they don’t already know what structure they’re trying to ‘derive’, and then doing a bunch of tricks to ‘derive’ it.

“Voilà! I have derived a rabbit!”

But maybe you can do better.

Personally I’d be content to do what I said we could do: show that at least for certain states, making $\hbar$ smaller can be reinterpreted as scaling up the position and momentum of these states.

I might also consider the $\mathbb{C}$-algebra freely generated by three elements $q$, $p$, and $h$, with the relations $q h=h q$, $p h=h p$, and $q p=p q+i h$. If I mod this algebra out by $h$, then I get a commutative Poisson algebra. But how do I get this algebra (or at least the Poisson algebra) from $W$?

I don’t know — I’ve never thought about this. But it’s possible that what I had been going to explain would give you a clue.

Like maybe this. Maybe you could take ‘your’ Weyl algebra $W$, where $\hbar$ is a fixed number and throw in an extra parameter $c$ which rescales $p$ and $q$:

$P = c p, \quad Q = c q$

and then define a new Planck’s constant

$H = c^2 \hbar$

and get

$Q P - P Q = i H$
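Spelled out, $Q P - P Q = c^2(q p - p q) = i c^2 \hbar = i H$. This can be checked directly in a toy Schrödinger representation ($q$ = multiplication by $x$, $p = -i\hbar\, d/dx$ on polynomials); a sketch of mine, assuming sympy:

```python
import sympy as sp

x, hbar, c = sp.symbols('x hbar c')

# toy Schroedinger representation: q = multiplication by x, p = -i*hbar*d/dx
def q(f): return sp.expand(x * f)
def p(f): return sp.expand(-sp.I * hbar * sp.diff(f, x))

# rescaled generators and the rescaled Planck's constant
def Q(f): return sp.expand(c * q(f))
def P(f): return sp.expand(c * p(f))
H = c**2 * hbar

f = x**3 + 2*x  # an arbitrary test polynomial

# Q P - P Q = i H, with H = c^2 * hbar
assert sp.simplify(Q(P(f)) - P(Q(f)) - sp.I * H * f) == 0
```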

Posted by: John Baez on November 1, 2009 1:12 AM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

Maybe it’s worthwhile to point out that nobody knows what quantization really is and how it works - which applies to coquantization as well (deriving a classical description from a quantum one). So you should be confused about this; there is nothing wrong with that (no need to be frustrated because you don’t understand something that no one does :-).

If you start with a classical system and apply any of the known quantization procedures, there is IMHO no reason to believe that the classical system has any “physical significance”. E.g. if you were able to increase the mass and size of the electron (because you are some playful god), trying to make it into a “classical object”, I think we just don’t know what would happen along the way. Does it stay a quantum object? Does it “slowly” become a “classical object”?

The classical system itself is just a tool, a stepping stone, to construct the quantum system that you are interested in.

I think it is safe to say that the deformation parameter of deformation quantization does not have any physical significance, but is just a tool as well.

I don’t think there is reason to believe that every quantum system has a corresponding classical one in the first place (no matter how broad your interpretation of what I mean by “corresponding” is).

Side note: Physicists usually have a hard time constructing quantum systems without the help of the classical picture; after all, this is what constructive and local quantum field theory is all about. One of the reasons why people get interested in this stuff is that they think quantum theory should stand on its own feet.

Posted by: Tim vB on November 1, 2009 10:02 AM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

But I don’t want to talk about $T^*(\mathbb{R})$, with or without an inner product, at all — that’s cheating! I want to start with a properly quantum system and derive the classical phase space from that.

Okay — good luck. I don’t particularly like it when physicists play this kind of game, since it often amounts to pretending they don’t already know what structure they’re trying to ‘derive’, and then doing a bunch of tricks to ‘derive’ it.

Well, maybe they're cheating too. Actually, it seemed like this is exactly what you were doing when I read TWF! On the other hand, if the method is a general one that actually derives the classical systems that we know from quantum systems that we believe, then that's a good thing. And it seems to me that we have to have something like that, if we ever want to make precise the idea that the classical theories developed by Galileo and his successors can be explained by the quantum theories that we believe describe reality.

Perhaps I have been deceived by the idea that quantisation is a mystery like categorification. Because while I've long given up the idea of a general method of always converting a classical theory to a quantum one, I've still had the idea that there is a general method of converting a quantum theory to a classical one, just as any category can be decategorified into its set of isomorphism classes. But I never quite understood it, so I was hoping to get you to explain it. Maybe it doesn't exist … but then, how do we explain these classical theories?

Personally I’d be content to do what I said we could do: show that at least for certain states, making $\hbar$ smaller can be reinterpreted as scaling up the position and momentum of these states.

I may have given the impression that I'd be willing to look at the coherent states only if they could be derived systematically from the Weyl algebra and possibly a Hamiltonian on it. That would be great, but I don't insist on it. I'm willing to look at them anyway, although I reserve the right to come back and ask what is the physical justification for having done so. So if you ‘had been going to explain’ something about that, then please go ahead!

Maybe you could take ‘your’ Weyl algebra $W$, where $\hbar$ is a fixed number and throw in an extra parameter $c$ which rescales $p$ and $q$: $P = c p$, $Q = c q$, and then define a new Planck’s constant $H = c^2 \hbar$ and get $Q P - P Q = i H$

H'm, this could be good …

I'll send this message now, and then do some calculations.

Posted by: Toby Bartels on November 1, 2009 7:27 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

Toby wrote:

I’ve still had the idea that there is a general method of converting a quantum theory to a classical one, just as any category can be decategorified into its set of isomorphism classes. But I never quite understood it, so I was hoping to get you to explain it.

I don’t know a fully general method. I only know how it works in large classes of examples — just like quantization. And I’ve never tried to fully systematize my knowledge.

I think you may have gotten the misimpression that ‘taking the classical limit’ was easier and more systematic than ‘quantization’ because when people get frustrated with quantization they like to knock it and say that taking the classical limit is better.

Why is it better? Well, I think it’s more realistic: we don’t start with a classical world and then quantize it; rather, the world is quantum yet acts classical in certain regimes. But I don’t think taking the classical limit is much easier than quantization.

I may have given the impression that I'd be willing to look at the coherent states only if they could be derived systematically from the Weyl algebra and possibly a Hamiltonian on it. That would be great, but I don’t insist on it.

For any quadratic Hamiltonian on any Weyl algebra (that is, a Weyl algebra with any finite number of $p$’s and $q$’s), I can define coherent states, explain how they reduce to classical states in the $\hbar \to 0$ limit, and explain how this limit can be reinterpreted as a limit where the $p$’s and $q$’s get large.

However, I think it will be easiest to start with the case of one $p$ and $q$, since that’ll keep the notation simple when we’re just getting started — and with the idea firmly in mind, it’ll be easy to generalize later.

I also find it psychologically essential to talk about the classical phase space when discussing coherent states, since the whole point of a coherent state is that in a certain limit it’ll reduce to a point in that classical phase space. This may be ‘cheating’, but we can always attempt to cover our tracks later!

Posted by: John Baez on November 2, 2009 3:24 AM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

JB: For any quadratic Hamiltonian on any Weyl algebra (that is, a Weyl algebra with any finite number of p’s and q’s), I can define coherent states, explain how they reduce to classical states in the ℏ→0 limit, and explain how this limit can be reinterpreted as a limit where the p’s and q’s get large.

Actually, one only needs a Lie group with a triangular decomposition (as defined, e.g., in Chapter 16 of Classical and Quantum Mechanics via Lie algebras), acting irreducibly on a Hilbert space, to get a family of coherent states in which a classical limit and a classical phase space can naturally be defined. This is the essence of coherent states.

See the book on coherent states by Perelomov, or Yaffe’s paper “Large N limits as classical mechanics”, Rev. Mod. Phys. 54, 407–435 (1982) (though they phrase it in slightly different language).

Posted by: Arnold Neumaier on November 2, 2009 1:43 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

Posted by: Urs Schreiber on November 2, 2009 10:46 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

Hi John, and thanks for a great TWF!

I saw a question on a good reference for PBW, which gives a coalgebra isomorphism between U(L) and S(L). My “bible” is Milnor-Moore, *On the structure of Hopf Algebras*, Theorem 5.16.

Here’s a question for you, now: in the operad Assoc, the space Assoc_n has an action of the symmetric group Sym(n); it’s actually the regular representation. Similarly, I^k is a Sym(n)-module. I know a good description of the decomposition of I^1 in irreps – it’s the complement of the trivial representation – and of I^{n-1}, the Lie submodule, in terms e.g. of Young tableaux (those with “major index” \equiv 1 \pmod n). Are you aware of a description of I^k, or I^k/I^{k+1}, for other values of k?

Posted by: Laurent on November 1, 2009 10:35 AM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

Laurent wrote:

I saw a question on a good reference for PBW, which gives an coalgebra isomorphism between $U(L)$ and $S(L)$. My “bible” is Milnor-Moore, On the structure of Hopf Algebras, Theorem 5.16.

I’ve looked at that classic paper, but I don’t keep it under my pillow as I should. Jim Stasheff wrote that the isomorphism between $U(L)$ and $S(L)$ is “much easier to arrive at on the coalgebra side”. Could you (or anyone) say how to get at it this way?

The way that James Dolan described is utterly easy to arrive at if one likes Lie groups, differential operators, and distributions — but there must be a purely (co)algebraic way as well, which could handle Lie algebras over other fields. Maybe it’s just a more algebraic reformulation of James’ idea? Or maybe it’s based on the idea that $U(L)$ is like a deformation quantization of $S(L)$?
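If I’m not misremembering, the standard purely algebraic candidate in characteristic zero is the symmetrization map

$$ \mathrm{sym} \colon S(L) \to U(L), \qquad x_1 \cdots x_n \mapsto \frac{1}{n!} \sum_{\sigma \in S_n} x_{\sigma(1)} \cdots x_{\sigma(n)}. $$

This visibly needs division by $n!$, and it’s a coalgebra map essentially for free, since both coproducts are determined by declaring the elements of $L$ primitive; the real content of PBW is then that this map is a bijection.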

Here’s a question for you, now: in the operad Assoc, the space $Assoc_n$ has an action of the symmetric group $S_n$; it’s actually the regular representation. Similarly, $I^k$ is a $S_n$-module. I know a good description of the decomposition of $I^1$ in irreps — it’s the complement of the trivial representation — and of $I^{n-1}$, the Lie submodule, in terms e.g. of Young tableaux (those with “major index” $\equiv 1 \pmod n$). Are you aware of a description of $I^k$, or $I^k/I^{k+1}$, for other values of $k$?

No, but this sounds like a fun question — and I’m sure the Stirling numbers are a massive hint.

I don’t know what “major index” means, but that’s probably a clue too.

What naturally occurring (reducible) representation of the symmetric group $S_n$ has dimension equal to the number of permutations with $n-k$ cycles? This representation should be $Assoc(n) \cap I^k/I^{k+1}$ — the answer to Laurent’s question.

Taking the direct sum where $k$ goes from $0$ to $n-1$, we should get the regular representation of $S_n$.

Of course the regular representation of $S_n$ is the direct sum of all irreps coming from $n$-box Young diagrams, each appearing with multiplicity equal to its dimension.

So, I’m guessing the answer is: the direct sum of all irreps coming from $n$-box Young diagrams with $n-k$ rows, each appearing with multiplicity equal to its dimension.

One reason for my guess is that Young diagrams of this sort are in 1-1 correspondence with conjugacy classes of permutations with $n-k$ cycles.

I could easily be making some mistakes, like mixing up $k$ and $n-k$, or mixing up columns and rows.
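Here’s a sanity check on the dimension count (a sketch in Python; the function names are mine): the unsigned Stirling numbers of the first kind count permutations by number of cycles, and summing over $k$ should recover $n!$, the dimension of the regular representation.

```python
from math import factorial
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling1(n, k):
    """Unsigned Stirling number of the first kind: the number of
    permutations of n elements with exactly k cycles."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return stirling1(n - 1, k - 1) + (n - 1) * stirling1(n - 1, k)

# If dim(Assoc(n) ∩ I^k/I^{k+1}) counts permutations with n-k cycles,
# the graded dimensions must add up to n!:
for n in range(1, 6):
    dims = [stirling1(n, n - k) for k in range(n)]
    assert sum(dims) == factorial(n)
    print(n, dims)
```

For instance $n = 4$ gives the graded dimensions $1, 6, 11, 6$, summing to $24 = 4!$.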

Posted by: John Baez on November 1, 2009 5:36 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

If you are looking for a proof of PBW using highly efficient abstract operad nonsense have a look at this:

I only skimmed it, and didn’t quite understand enough to formulate a synopsis. In fact, I would like to try to trick YOU into reading it and explaining it here :-)

Posted by: Tim vB on November 1, 2009 9:01 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

Ugh, how do I post a link?

Posted by: Tim vB on November 1, 2009 9:04 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

Tim: I fixed your post. Toby has explained what to do in the future. Thanks for this reference — I’ll look at it!

Posted by: John Baez on November 2, 2009 12:37 AM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

Ugh, how do I post a link?

The simplest way is to choose a Markdown filter and then put < and > around it. So <http://arxiv.org/PS_cache/math/pdf/0611/0611885v3.pdf> produces

http://arxiv.org/PS_cache/math/pdf/0611/0611885v3.pdf

Or better yet, <http://arxiv.org/abs/math/0611885v3> produces

http://arxiv.org/abs/math/0611885v3

Posted by: Toby Bartels on November 1, 2009 9:58 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

John wrote:

So, I’m guessing the answer is: the direct sum of all irreps coming from $n$-box Young diagrams with $n−k$ rows, each appearing with multiplicity equal to its dimension.

Unless I’m mixed up, this guess is wrong even for $n = 3$! For $k = 0$ we get

XXX

For $k = 1$ we get

XX
X

and

X
X
X

For $k = 2$ we get

XX
X

So there’s no incredibly simple relation between $k$ and the number of rows in the Young diagram.
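Here’s a quick computational version of this check (a sketch; all the names are mine): compare the sum of squared dimensions over $n$-box diagrams with $n-k$ rows, computed by the hook length formula, against the number of permutations with $n-k$ cycles.

```python
from math import factorial, prod

def partitions(n, max_part=None):
    """All partitions of n as weakly decreasing tuples."""
    if n == 0:
        yield ()
        return
    if max_part is None:
        max_part = n
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def irrep_dim(shape):
    """Dimension of the S_n irrep labelled by a Young diagram,
    via the hook length formula."""
    n = sum(shape)
    hooks = prod(
        shape[i] - j + sum(1 for r in shape[i + 1:] if r > j)
        for i in range(len(shape))
        for j in range(shape[i])
    )
    return factorial(n) // hooks

def stirling1(n, k):
    """Unsigned Stirling number of the first kind (permutations with k cycles)."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return stirling1(n - 1, k - 1) + (n - 1) * stirling1(n - 1, k)

n = 3
for k in range(n):
    guess = sum(irrep_dim(p) ** 2 for p in partitions(n) if len(p) == n - k)
    actual = stirling1(n, n - k)
    print(k, guess, actual)  # mismatch at k = 1 and k = 2
```

For $n = 3$, $k = 1$, the guess gives $2^2 = 4$ from the diagram $(2,1)$, while the number of permutations with $2$ cycles is $3$; so the row-counting guess already fails there.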

Posted by: John Baez on November 2, 2009 4:11 PM | Permalink | Reply to this

### Re: This Week’s Finds in Mathematical Physics (Week 282)

1) I found more and more references to a basis-free proof of PBW. There’s a short one in appendix B of Quillen’s paper “Rational Homotopy Theory”, http://www.jstor.org/stable/1970725. Note that these proofs don’t work over a completely general field; they need characteristic 0, for the good reason that the theorem isn’t true “as is” in positive characteristic.

2) About major indices etc.: the “major index” of a Young tableau is the sum of the entries $i$ such that $i+1$ lies in a lower row than $i$. It’s weird that such a messy definition is useful, but it is!
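In code, the definition reads as follows (a small sketch; the row-tuple encoding of tableaux is mine):

```python
def major_index(tableau):
    """Major index of a standard Young tableau, given as a tuple of rows:
    the sum of the entries i whose successor i+1 lies in a lower row."""
    n = sum(len(row) for row in tableau)
    row_of = {entry: r for r, row in enumerate(tableau) for entry in row}
    return sum(i for i in range(1, n) if row_of[i + 1] > row_of[i])

# e.g. the tableau 13/2 has major index 1 (since 2 sits below 1),
# and 1 ≡ 1 (mod 3), matching its appearance in the Lie piece k = n-1:
print(major_index(((1, 3), (2,))))  # → 1
```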

The computations I could do give, for n=3:

k=0: 123

k=1: 12/3 1/2/3

k=2: 13/2

for n=4:

k=0: 1234

k=1: @@/@/@ @@@/@

k=2: 1/2/3/4 12/34 13/24 @@/@/@ @@@/@

k=3: 12/3/4 134/2

for n=5, I can’t guess so easily, but I have done some partial computations.

Posted by: Laurent on November 4, 2009 10:32 AM | Permalink | Reply to this
Read the post Courant Algebroids From Categorified Symplectic Geometry
Weblog: The n-Category Café
Excerpt: Read a draft of a new paper that gets Courant algebroids from 2-plectic manifolds.
Tracked: November 10, 2009 3:58 PM
