
June 29, 2020

Getting to the Bottom of Noether’s Theorem

Posted by John Baez

Most of us have been staying holed up at home lately. I spent the last month holed up writing a paper that expands on my talk at a conference honoring the centennial of Noether’s 1918 paper on symmetries and conservation laws. This made my confinement a lot more bearable. It was good getting back to this sort of mathematical physics after a long time spent on applied category theory. It turns out I really missed it.

While everyone at the conference kept emphasizing that Noether’s 1918 paper had two big theorems in it, my paper is just about the easy one—the one physicists call Noether’s theorem:

People often summarize this theorem by saying “symmetries give conservation laws”. And that’s right, but it’s only true under some assumptions: for example, that the equations of motion come from a Lagrangian.

This leads to some interesting questions. For which types of physical theories do symmetries give conservation laws? What are we assuming about the world, if we assume it is described by theories of this type? It’s hard to get to the bottom of these questions, but it’s worth trying.

We can prove versions of Noether’s theorem relating symmetries to conserved quantities in many frameworks. While a differential geometric framework is truer to Noether’s original vision, my paper studies the theorem algebraically, without mentioning Lagrangians.

Now, Atiyah said:

…algebra is to the geometer what you might call the Faustian offer. As you know, Faust in Goethe’s story was offered whatever he wanted (in his case the love of a beautiful woman), by the devil, in return for selling his soul. Algebra is the offer made by the devil to the mathematician. The devil says: I will give you this powerful machine, it will answer any question you like. All you need to do is give me your soul: give up geometry and you will have this marvellous machine.

While this is sometimes true, algebra is more than a computational tool: it allows us to express concepts in a very clear and distilled way. Furthermore, the geometrical framework developed for classical mechanics is not sufficient for quantum mechanics. An algebraic approach emphasizes the similarity between classical and quantum mechanics, clarifying their differences.

In talking about Noether’s theorem I keep using an interlocking trio of important concepts used to describe physical systems: ‘states’, ‘observables’ and ‘generators’. A physical system has a convex set of states, where convex linear combinations let us describe probabilistic mixtures of states. An observable is a real-valued quantity whose value depends—perhaps with some randomness—on the state. More precisely: an observable maps each state to a probability measure on the real line. A generator, on the other hand, is something that gives rise to a one-parameter group of transformations of the set of states—or dually, of the set of observables.

It’s easy to mix up observables and generators, but I want to distinguish them. When we say ‘the energy of the system is 7 joules’, we are treating energy as an observable: something you can measure. When we say ‘the Hamiltonian generates time translations’, we are treating the Hamiltonian as a generator.

In both classical mechanics and ordinary complex quantum mechanics we usually say the Hamiltonian is the energy, because we have a way to identify them. But observables and generators play distinct roles—and in some theories, such as real or quaternionic quantum mechanics, they are truly different. In all the theories I consider in my paper the set of observables is a Jordan algebra, while the set of generators is a Lie algebra. (Don’t worry, I explain what those are.)

When we can identify observables with generators, we can state Noether’s theorem as the following equivalence:

The generator a generates transformations that leave the observable b fixed.

\Updownarrow

The generator b generates transformations that leave the observable a fixed.

In this beautifully symmetrical statement, we switch from thinking of a as the generator and b as the observable in the first part to thinking of b as the generator and a as the observable in the second part. Of course, this statement is true only under some conditions, and the goal of my paper is to better understand these conditions. But the most fundamental condition, I claim, is the ability to identify observables with generators.

In classical mechanics we treat observables as being the same as generators, by treating them as elements of a Poisson algebra, which is both a Jordan algebra and a Lie algebra. In quantum mechanics observables are not quite the same as generators. They are both elements of something called a ∗-algebra. Observables are self-adjoint, obeying

a^* = a

while generators are skew-adjoint, obeying

a^* = -a

The self-adjoint elements form a Jordan algebra, while the skew-adjoint elements form a Lie algebra.

In ordinary complex quantum mechanics we use a complex ∗-algebra. This lets us turn any self-adjoint element into a skew-adjoint one by multiplying it by \sqrt{-1}. Thus, the complex numbers let us identify observables with generators! In real and quaternionic quantum mechanics this identification is impossible, so the appearance of complex numbers in quantum mechanics is closely connected to Noether’s theorem.
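
A small numpy sketch (not from the paper) of this identification in the finite-dimensional complex case: a self-adjoint matrix becomes skew-adjoint after multiplication by i, and exponentiating the skew-adjoint element gives a one-parameter group of unitaries.

```python
import numpy as np
from scipy.linalg import expm

# Build a random observable: a self-adjoint (Hermitian) 3x3 matrix.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
H = (X + X.conj().T) / 2                       # observable: H* = H

A = 1j * H                                     # generator: A* = -A
assert np.allclose(A.conj().T, -A)

# Exponentiating the generator gives a unitary, i.e. an element of the
# one-parameter group it generates (here at parameter t = 1).
U = expm(A)
assert np.allclose(U.conj().T @ U, np.eye(3))
```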

In short, classical mechanics and ordinary complex quantum mechanics fit together in this sort of picture:

To dig deeper, it’s good to examine generators on their own: that is, Lie algebras. Lie algebras arise very naturally from the concept of ‘symmetry’. Any Lie group gives rise to a Lie algebra, and any element of this Lie algebra then generates a one-parameter family of transformations of that very same Lie algebra. This lets us state a version of Noether’s theorem solely in terms of generators:

The generator a generates transformations that leave the generator b fixed.

\Updownarrow

The generator b generates transformations that leave the generator a fixed.

And when we translate these statements into equations, their equivalence follows directly from this elementary property of the Lie bracket:

[a,b] = 0

\Updownarrow

[b,a] = 0

Thus, Noether’s theorem is almost automatic if we forget about observables and work solely with generators. The only questions left are: why should symmetries be described by Lie groups, and what is the meaning of this property of the Lie bracket?
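
A small numerical illustration of this equivalence (a sketch, not from the paper): for matrix Lie algebras, two generators with vanishing bracket are each left fixed by the flow the other generates.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
X = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
H = (X + X.conj().T) / 2

# Two commuting skew-adjoint generators: [A, B] = 0 since [H, H^2] = 0.
A = 1j * H
B = 1j * (H @ H)
assert np.allclose(A @ B - B @ A, 0)

t = 0.7
# The flow generated by A leaves B fixed, and vice versa.
assert np.allclose(expm(t * A) @ B @ expm(-t * A), B)
assert np.allclose(expm(t * B) @ A @ expm(-t * B), A)
```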

In my paper I tackle both these questions, and point out that the Lie algebra formulation of Noether’s theorem comes from a more primitive group formulation, which says that whenever you have two group elements g and h,

g commutes with h.

\Updownarrow

h commutes with g.

That is: whenever you’ve got two ways of transforming a physical system, the first transformation is ‘conserved’ by the second if and only if the second is conserved by the first!

However, observables are crucial in physics. Working solely with generators in order to make Noether’s theorem a tautology would be another sort of Faustian bargain. So, to really get to the bottom of Noether’s theorem, we need to understand the map from observables to generators. In ordinary quantum mechanics this comes from multiplication by i. But this just pushes the mystery back a notch: why should we be using the complex numbers in quantum mechanics?

For this it’s good to spend some time examining observables on their own: that is, Jordan algebras. Those of greatest importance in physics are the unital JB-algebras, which are unfortunately named not after me, but after Jordan and Banach. These allow a unified approach to real, complex and quaternionic quantum mechanics, along with some more exotic theories. So, they let us study how the role of complex numbers in quantum mechanics is connected to Noether’s theorem.

Any unital JB-algebra O has a partial ordering: that is, we can talk about one observable being greater than or equal to another. With the help of this we can define states on O, and prove that any observable maps each state to a probability measure on the real line.

More surprisingly, any JB-algebra also gives rise to two Lie algebras. The smaller of these, say L, has elements that generate transformations of O that preserve all the structure of this unital JB-algebra. They also act on the set of states. Thus, elements of L truly deserve to be considered ‘generators’.

In a unital JB-algebra there is not always a way to reinterpret observables as generators. However, Alfsen and Shultz have defined the notion of a ‘dynamical correspondence’ for such an algebra, which is a well-behaved map

\psi \colon O \to L

One of the two conditions they impose on this map implies a version of Noether’s theorem. They prove that any JB-algebra with a dynamical correspondence gives a complex ∗-algebra where the observables are self-adjoint elements, the generators are skew-adjoint, and we can convert observables into generators by multiplying them by i.

This result is important, because the definition of JB-algebra does not involve the complex numbers, nor does the concept of dynamical correspondence. Rather, the role of the complex numbers in quantum mechanics emerges from a map from observables to generators that obeys conditions including Noether’s theorem!

To be a bit more precise, Alfsen and Shultz’s first condition on the map ψ: O → L says that every observable a ∈ O generates transformations that leave a itself fixed. I call this the self-conservation principle. It implies Noether’s theorem.

However, in their definition of dynamical correspondence, Alfsen and Shultz also impose a second, more mysterious condition on the map ψ. I claim that this condition is best understood in terms of the larger Lie algebra associated to a unital JB-algebra. As a vector space this is the direct sum

A = O \oplus L

but it’s equipped with a Lie bracket such that

[-,-] \colon L \times L \to L \qquad [-,-] \colon L \times O \to O

[-,-] \colon O \times L \to O \qquad [-,-] \colon O \times O \to L

As I mentioned, elements of L generate transformations of O that preserve all the structure on this unital JB-algebra. Elements of O also generate transformations of O, but these only preserve its vector space structure and partial ordering.
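
For ordinary complex quantum mechanics this grading can be checked directly, with O the self-adjoint and L the skew-adjoint matrices. A small numpy sketch (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
def random_complex(n=3):
    return rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

X, Y = random_complex(), random_complex()
a, b = (X + X.conj().T) / 2, (Y + Y.conj().T) / 2   # elements of O (self-adjoint)
u, v = (X - X.conj().T) / 2, (Y - Y.conj().T) / 2   # elements of L (skew-adjoint)

def bracket(p, q):
    return p @ q - q @ p

assert np.allclose(bracket(u, v).conj().T, -bracket(u, v))   # [L, L] lands in L
assert np.allclose(bracket(u, a).conj().T,  bracket(u, a))   # [L, O] lands in O
assert np.allclose(bracket(a, b).conj().T, -bracket(a, b))   # [O, O] lands in L
```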

What’s the meaning of these other transformations? I claim they’re connected to statistical mechanics.

For example, consider ordinary quantum mechanics and let O be the unital JB-algebra of all bounded self-adjoint operators on a complex Hilbert space. Then L is the Lie algebra of all bounded skew-adjoint operators on this Hilbert space. There is a dynamical correspondence sending any observable H ∈ O to the generator ψ(H) = iH ∈ L, which then generates a one-parameter group of transformations of O like this:

a \mapsto e^{i t H/\hbar} \, a \, e^{-i t H/\hbar} \qquad \forall t \in \mathbb{R}, a \in O

where ℏ is Planck’s constant. If H is the Hamiltonian of some system, this is the usual formula for time evolution of observables in the Heisenberg picture. But H also generates a one-parameter group of transformations of O as follows:

a \mapsto e^{-\beta H/2} \, a \, e^{-\beta H/2} \qquad \forall \beta \in \mathbb{R}, a \in O

Writing β = 1/kT where T is temperature and k is Boltzmann’s constant, I claim that these are ‘thermal transformations’. Acting on a state in thermal equilibrium at some temperature, these transformations produce states in thermal equilibrium at other temperatures (up to normalization).
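
Here is a small numerical illustration of this claim (a sketch, not from the paper, in units where ℏ = k = 1): conjugating a Gibbs state by e^{-\beta H/2} gives, after renormalization, the Gibbs state at a shifted inverse temperature, while Heisenberg-picture time evolution leaves it fixed.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (X + X.conj().T) / 2                      # a Hamiltonian

def gibbs(beta):
    rho = expm(-beta * H)
    return rho / np.trace(rho)

beta0, beta = 0.4, 0.9
rho = gibbs(beta0)

# 'Thermal transformation' a -> e^{-beta H/2} a e^{-beta H/2}, then renormalize:
K = expm(-beta * H / 2)
shifted = K @ rho @ K
shifted /= np.trace(shifted)
assert np.allclose(shifted, gibbs(beta0 + beta))   # Gibbs state at beta0 + beta

# Time evolution e^{itH} rho e^{-itH} leaves the Gibbs state fixed, since it commutes with H.
U = expm(1j * 0.7 * H)
assert np.allclose(U @ rho @ U.conj().T, rho)
```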

The analogy between it/ℏ and 1/kT is often summarized by saying “inverse temperature is imaginary time”. The second condition in Alfsen and Shultz’s definition of dynamical correspondence captures this principle in a way that does not explicitly mention the complex numbers. Thus, we may very roughly say their result explains the role of complex numbers in quantum mechanics starting from three assumptions:

  • observables form a Jordan algebra of a nice sort (a unital JB-algebra)

  • the self-conservation principle (and thus Noether’s theorem)

  • the relation between time and inverse temperature.

I still want to understand all of this more deeply, but the way statistical mechanics entered the game was surprising to me, so I feel I made a little progress.

I hope the paper is half as fun to read as it was to write! There’s a lot more in it than described here.

Posted at June 29, 2020 10:21 PM UTC


30 Comments & 0 Trackbacks

Re: Getting to the Bottom of Noether’s Theorem

I look forward to reading this, but just to note here, on seeing

The self-adjoint elements form a Jordan algebra, while the skew-adjoint elements form a Lie algebra,

I was reminded of a section of the nLab page we put together on the Dwyer-Wilkerson H-space and how it might relate to 3 \times 3 skew-hermitian matrices over the octonions where the hermitian matrices form the exceptional Jordan algebra.

Posted by: David Corfield on June 30, 2020 12:01 PM | Permalink | Reply to this

Re: Getting to the Bottom of Noether’s Theorem

That’s interesting: I hadn’t heard that this mysterious H-space could be nature’s best attempt at SO(3,\mathbb{O}).

In my paper I steer clear of talking about the exceptional Jordan algebra and the spin factors, even though they’re both unital JB-algebras where Noether’s theorem does not apply, because the self-adjoint real and quaternionic matrices are less exotic and they should satisfy most people’s thirst for examples of that sort.

In 1978 Alfsen, Shultz and Størmer showed that every unital JB-algebra is a direct sum of a part that can be embedded in the bounded operators on some real Hilbert space and a part that can be embedded in C(X,\mathfrak{h}_3(\mathbb{O})): that is, the continuous functions on a compact Hausdorff space X taking values in the exceptional Jordan algebra!

So, the exceptional Jordan algebra is sitting there, waiting for us to put it to some good use in physics.

Posted by: John Baez on July 1, 2020 12:44 AM | Permalink | Reply to this

Re: Getting to the Bottom of Noether’s Theorem

the exceptional Jordan algebra and the spin factors, even though they’re both unital JB-algebras where Noether’s theorem does not apply,…

Four of the spin factors coincide with \mathfrak{h}_2(\mathbb{F}) for the normed division algebras. So presumably Noether’s theorem does apply to the spin factor in dimensions 3, 4 and 6, at least. I wonder which precisely.

Posted by: David Corfield on July 1, 2020 7:56 AM | Permalink | Reply to this

Re: Getting to the Bottom of Noether’s Theorem

David wrote:

So presumably Noether’s theorem does apply to the spin factor in dimensions 3, 4 and 6, at least. I wonder which precisely.

Only dimension 4, which by some coincidence is where we live.

Noether’s theorem does not apply to real or quaternionic quantum mechanics: one point of my paper is to ponder the known fact that it singles out complex quantum mechanics.

More precisely, the only finite-dimensional unital JB-algebras where Noether’s theorem holds are direct sums of the Jordan algebras that show up in ordinary complex quantum mechanics, \mathfrak{h}_n(\mathbb{C}). And the spin factor \mathfrak{h}_2(\mathbb{C}) is isomorphic to 4-dimensional Minkowski spacetime!
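
One standard way to see the last point, sketched here in sympy (not part of the original comment): the determinant of a general self-adjoint 2 \times 2 complex matrix is exactly the Minkowski quadratic form.

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
M = sp.Matrix([[t + z, x - sp.I * y],
               [x + sp.I * y, t - z]])          # a general element of h_2(C)
assert sp.simplify(M.det() - (t**2 - x**2 - y**2 - z**2)) == 0
```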

People have been ruminating over this for a long time. For example, if you look at the heavenly sphere above, below, and around you, you’re looking at a copy of the Riemann sphere \mathbb{C}\mathrm{P}^1, which is also the space of pure states of an ordinary complex qubit. This led Penrose to his twistors.

If you lived in 10d Minkowski spacetime your heavenly sphere would be the space of pure states of an octonionic qubit!

Posted by: John Baez on July 3, 2020 12:05 AM | Permalink | Reply to this

Re: Getting to the Bottom of Noethers Theorem

That got me thinking about your old student John Huerta and his work with Urs as reported in

This is being done in super-mathematics. Why does one not hear more of super Jordan algebras? A search finds a few hits. [Edit: so the answer is that it’s because most people call them Jordan superalgebras.]

Could your account be super-ized?

Posted by: David Corfield on July 3, 2020 8:15 AM | Permalink | Reply to this

Re: Getting to the Bottom of Noethers Theorem

Could your account be super-ized?

I don’t see why one would use Jordan superalgebras to describe observables in systems with bosonic and fermionic degrees of freedom: for example, the spin-1/2 particle is a fermion but people use the Jordan algebra \mathfrak{h}_2(\mathbb{C}) to describe its observables, at least in a nonrelativistic treatment. Perhaps I’m a bit confused. On the other hand \mathfrak{h}_2(\mathbb{C}) can also be seen as Minkowski spacetime, and this certainly deserves a super version.

Luckily, mathematicians don’t wait for mathematical physicists to figure out why something is useful! I should read these:

  • Victor Kac, Classification of simple \mathbb{Z}-graded Lie superalgebras and simple Jordan superalgebras, Communications in Algebra 5 (1977), 1375–1400.

  • Victor Kac, Consuelo Martinez, and Efim Zelmanov, Graded Simple Jordan Superalgebras of Growth One, Memoirs of the American Mathematical Society, vol. 711, 2001.

Posted by: John Baez on July 3, 2020 8:16 PM | Permalink | Reply to this

Re: Getting to the Bottom of Noethers Theorem

David wrote:

The capacity of mathematicians to work over a concept — 500 pages on Jordan algebras!

It’s a great book: I’ve been needing to read it lately. Most of it is still over my head, but it has large chunks that are written in a friendly expository style. What’s ironic is that it’s called a “taste” of Jordan algebras. One wonders what a whole meal would be like. But the expository passages summarize a lot of great work that’s not explained in detail in this book.

I’ve also benefited a lot from these:

  • E. M. Alfsen and F. W. Shultz, Geometry of State Spaces of Operator Algebras, Birkhäuser, Basel, 2003.

  • C-H. Chu, Jordan Structures in Geometry and Analysis, Cambridge University Press, Cambridge, 2011.

  • J. Faraut and A. Korányi, Analysis on Symmetric Cones, Oxford University Press, Oxford, 1994.

  • H. Hanche-Olsen and E. Størmer, Jordan Operator Algebras, Pitman, 1984.

  • N. Jacobson, Structure and Representations of Jordan Algebras, AMS, Providence, Rhode Island, 1968.

  • H. Upmeier, Jordan Algebras in Analysis, Operator Theory, and Quantum Mechanics, AMS, Providence, 1987.

So yes: while Jordan algebras are unloved compared to Lie algebras, there is still quite a lot to read about them!

You may wonder why I’m listing a book called Analysis on Symmetric Cones. The category of ‘formally real’ Jordan algebras, the kind that Jordan, von Neumann and Wigner classified in their work on quantum mechanics, turns out to be equivalent to the category of ‘symmetric cones’: cones in finite-dimensional Euclidean spaces that are self-dual and have a group of symmetries that acts transitively on their interior. I’m still struggling to grok the proof of the equivalence—but there’s a short proof in Faraut and Korányi, along with a lot of other nice stuff.

Posted by: John Baez on July 3, 2020 8:40 PM | Permalink | Reply to this

Re: Getting to the Bottom of Noethers Theorem

I came across your Week 192 while wondering whether there’s such a thing as a ‘Jordan operad’.

By the way, there’s a typo there:

The binary operation

XY + XX

is a palindrome, and it’s just the Jordan product!

It seems there has not been much uptake of the idea, but it appears, for instance, in

Posted by: David Corfield on July 6, 2020 7:45 AM | Permalink | Reply to this

Re: Getting to the Bottom of Noethers Theorem

David wrote:

I came across your Week 192 while wondering whether there’s such a thing as a ‘Jordan operad’.

It exists in Platonic heaven: there’s an operad whose algebras in (Vect, \otimes) are Jordan algebras, because the Jordan algebra axioms are purely equational, and they don’t involve any duplication or deletion of variables.

And unlike the Lie operad, this ‘Jordan operad’ is a Set operad, since none of the laws involve linear combinations or minus signs. So we can talk about Jordan algebras in any symmetric monoidal category, while Lie algebras only make sense in a symmetric monoidal AbGp-enriched category.

On the other hand, I don’t know how much of this information has filtered down from Platonic heaven to earthly mathematicians. I was trying to get my grad student Jeffrey Morton to work on the Jordan operad, perhaps around the time I wrote “week192”, but he didn’t warm to that project. (That was probably wise of him.)

So, for example, I don’t even know the number of n-ary operations in the Jordan operad. It’s a potentially interesting sequence of natural numbers. I don’t see Bagherzadeh, Bremner, and Madariaga attempting to compute it, though I might have missed it.

Thanks for catching the typo. I fixed it! The work of correcting typos is endless, yet strangely satisfying for me.

Posted by: John Baez on July 7, 2020 12:25 AM | Permalink | Reply to this

Re: Getting to the Bottom of Noethers Theorem

Did you ever consider higher Jordan algebras, along the lines of higher Lie algebras?

Posted by: David Corfield on July 7, 2020 8:09 AM | Permalink | Reply to this

Re: Getting to the Bottom of Noethers Theorem

Aha. From

More recently I have been increasingly interested in the theory of operads and, more specifically, operadic Koszulity and (co)homology. I am currently studying Koszul duality theory of the Jordan operad (in [Zap]), whose algebras are the well known Jordan algebras. For years, the problem of Koszul duality of Jordan operad has been considered not well posed, due to the fact that the Jordan operad is cubic. In this paper it is established that, with a certain suitable presentation, the Jordan operad is quadratic-linear Koszul, whose main impact lies in the fact that the cobar construction of its dual cooperad is a resolution of it. An explicit description of the notion of Homotopy Jordan Algebra is given, as a corollary of the aforementioned Koszul duality, by means of Maurer-Cartan elements in the enveloping differential-graded Lie algebra. My next goal is to proceed to study infinity morphisms, homotopy transfer theorem and the deformation complex in this context. We expect that these results might be applied to the study of Jordan super-algebras. (pp. 143-144)

[Zap] is ‘Koszulity of jordan operad’.

Posted by: David Corfield on July 7, 2020 10:08 AM | Permalink | Reply to this

Re: Getting to the Bottom of Noethers Theorem

David wrote:

Did you ever consider higher Jordan algebras, along the lines of higher Lie algebras?

Yes, these days I’ve been wondering if they could lead to new insights in the foundations of quantum mechanics. Thanks for the reference!

Posted by: John Baez on July 7, 2020 7:45 PM | Permalink | Reply to this

Re: Getting to the Bottom of Noethers Theorem

The third main branching of the Jordan River leads to Jordan superalgebras introduced by KKK (Victor Kac, Issai Kantor, Irving Kaplansky). (Kevin McCrimmon, A Taste of Jordan Algebras, p. 9)

The capacity of mathematicians to work over a concept – 500 pages on Jordan algebras!

I see (p. 111) the ‘Mother of all Classification Theorems’ divides the Jordan landscape into: Quadratic, Hermitian and Albert types, representing “genetically different strains”.

Posted by: David Corfield on July 3, 2020 11:19 AM | Permalink | Reply to this

Re: Getting to the Bottom of Noether’s Theorem

Another example of a rumination about the dimensionality of spacetime and of a qubit is

They invoke a few postulates, including one that is often called “local tomography,” to fix the dimension of the Bloch ball, and from there argue that the number of spatial dimensions should also be 3. “Local tomography” is basically a restriction on how the dimension of an operator space should scale with that of the underlying vector space. Instead of doing this by requiring that observables be generators, it does so by a condition on how to paste small systems together to make bigger ones. Its significance as an axiom is perhaps best appreciated by exploring how it might be relaxed:

  • L. Hardy and W. K. Wootters, “Limited Holism and Real-Vector-Space Quantum Theory,” Foundations of Physics 42 (2012), 454–73, arXiv:1005.4870.
Posted by: Blake Stacey on July 3, 2020 10:21 PM | Permalink | Reply to this

Re: Getting to the Bottom of Noether’s Theorem

One oddity about \mathfrak{h}_2(\mathbb{C}) is that Gleason’s theorem does not apply there. This is essentially because once you pick a point on the Bloch sphere, there’s a unique antipodal point, and that pair of points then specifies an orthonormal basis. So, you don’t get interlocking bases that share common vectors, and the constraints in Gleason’s assumptions aren’t strong enough to imply very much. This would suggest that Gleason’s theorem also fails for spin factors, though I don’t know if anyone has said so explicitly. It also raises the question of whether a version of Gleason’s theorem can be proved for \mathfrak{h}_3(\mathbb{O}).

Posted by: Blake Stacey on July 7, 2020 7:13 AM | Permalink | Reply to this

Re: Getting to the Bottom of Noether’s Theorem

So, the exceptional Jordan algebra is sitting there, waiting for us to put it to some good use in physics.

Perhaps the very recent

  • Latham Boyle, The Standard Model, The Exceptional Jordan Algebra, and Triality, (arXiv:2006.16265)

One of its references might be of interest to you:

Jordan algebras were first introduced in an effort to restructure quantum mechanics purely in terms of physical observables. In this paper we explain why, if one attempts to reformulate the internal structure of the standard model of particle physics geometrically, one arrives naturally at a discrete internal geometry that is coordinatized by a Jordan algebra.

Posted by: David Corfield on July 1, 2020 12:51 PM | Permalink | Reply to this

Re: Getting to the Bottom of Noether’s Theorem

Thanks! By some coincidence Latham Boyle had sent me his paper “The Standard Model, the exceptional Jordan algebra, and triality” and I just noticed it in my email now. In this paper he does some interesting things with the observation (due to Dubois–Violette, Todorov and maybe me) that if you take automorphisms of the exceptional Jordan algebra (\mathfrak{h}_3(\mathbb{O})) that preserve a copy of Minkowski spacetime (\mathfrak{h}_2(\mathbb{C})) inside it, these transformations form the Standard Model gauge group.

Posted by: John Baez on July 3, 2020 12:08 AM | Permalink | Reply to this

Steve

Besides all that Wick rotation stuff, the KMS/Tomita-Takesaki perspective on statistical physics and operator algebras explains a lot of that temperature/time stuff in a dimensionally obvious way.

But the way I like to view it is as follows: suppose one changes the unit of time, dilating it by some constant C. If we have a Hamiltonian \mathcal{H}, it undergoes the extended canonical transformation X \mapsto X' = X, P \mapsto P' = C P, \mathcal{H} \mapsto \mathcal{H}' = C\mathcal{H}. Since this is a change of units, it leaves the Gibbs factor e^{-\beta \mathcal{H}} invariant, i.e. \beta' = \beta/C, so β scales as time.

Meanwhile one can derive the Gibbs distribution via the following elementary argument that I can’t find anywhere else…

Ansatz: the probability of a state depends only on its energy (a la Faddeev characterization of entropy)

Meanwhile, energy is only defined up to an additive constant ε, so \exists f s.t.

\mathbb{P}(E_k) = \frac{f(E_k)}{\sum_j f(E_j)} = \frac{f(E_k + \varepsilon)}{\sum_j f(E_j + \varepsilon)}

Define

g_E(\varepsilon) := \frac{\sum_j f(E_j + \varepsilon)}{\sum_j f(E_j)}

Now g_E(0) = 1 and

\mathbb{P}(E_k) = \frac{f(E_k)}{\sum_j f(E_j + \varepsilon)} \, g_E(\varepsilon) = \frac{f(E_k + \varepsilon)}{\sum_j f(E_j + \varepsilon)}

This implies f(E_k) \cdot g_E(\varepsilon) = f(E_k + \varepsilon)

\Rightarrow f(E_k + \varepsilon) - f(E_k) = (g_E(\varepsilon) - 1) \cdot f(E_k)

\Rightarrow f'(E_k) = g'_E(0) \cdot f(E_k) since g_E(0) = 1

\Rightarrow f(E_k) = C \exp(g'_E(0) E_k)

Set (w.l.o.g.) \beta := -g'_E(0) and C \equiv 1 to recover the Gibbs distribution. This argument is self-consistent since g_E(\varepsilon) = \exp(-\beta \varepsilon) \Rightarrow g_E \equiv g, i.e., the derivation is for the canonical ensemble (fixed β).
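
A quick numerical sanity check of this derivation (a sketch, not part of the original comment, with β fixed): shifting every energy level by a constant ε leaves the probabilities unchanged, and g_E(ε) = exp(-βε) as claimed.

```python
import numpy as np

beta, eps = 1.3, 0.42
E = np.array([0.0, 0.5, 1.1, 2.7])               # some energy levels

def probs(energies):
    f = np.exp(-beta * energies)
    return f / f.sum()

# Shifting every energy by a constant leaves the probabilities unchanged:
assert np.allclose(probs(E), probs(E + eps))

# and g_E(eps) = exp(-beta * eps), independent of the energy levels:
g = np.exp(-beta * (E + eps)).sum() / np.exp(-beta * E).sum()
assert np.isclose(g, np.exp(-beta * eps))
```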

Posted by: Steve Huntsman on June 30, 2020 7:39 PM | Permalink | Reply to this

Re: Steve

Sorry about my bad formatting. I should also mention the “thermal time hypothesis” of Connes and Rovelli in re: KMS/Tomita-Takesaki

Posted by: Steve Huntsman on June 30, 2020 7:41 PM | Permalink | Reply to this

That’s a nice derivation of the Gibbs state from 1) the probability of being in any energy eigenstate depends only on the energy and 2) the energy is only defined up to a constant!

The business of KMS states is somehow connected to what I’m talking about, but the KMS condition involves

e^{\beta H/2} \, a \, e^{-\beta H/2}

while the transformations I’m talking about are

a \mapsto e^{-\beta H/2} \, a \, e^{-\beta H/2}

so there’s something funny going on here.

I tried to fix your formatting a bit, but somehow people manage to get their names into the subject header—I don’t know how—and it seems impossible to fix that.

Posted by: John Baez on July 1, 2020 1:03 AM | Permalink | Reply to this

Re: Steve

Interesting. I once tried something similar, based on the idea that if A and B are at the same temperature, a noninteracting composite system AB is also at that temperature. Suppose that E_j is an energy level of system A and E_k is an energy level of B. Then, if there is no interaction between the two systems, AB will have an energy level E_j + E_k. If we assume that for all systems prepared at temperature T,

P(E_n) = \frac{1}{Z} f(E_n),

then we have

\frac{f(E_j) f(E_k)}{Z_A Z_B} = \frac{f(E_j + E_k)}{Z_{AB}}.

But we have the freedom to adjust f by an overall multiplicative constant, since the meaningful quantities are the probabilities and any prefactor will cancel when we divide by the partition function. So, we can declare

f(0) = 1,

which yields

Z_A Z_B = Z_{AB},

and thus

f(E_j + E_k) = f(E_j) f(E_k).

And this is just Cauchy’s functional equation for the exponential. So, provided that f is continuous at even a single point, then

f(E) = e^{-\beta E},

where the “coolness” β labels the equivalence classes of thermal equilibrium. Showing that β ought to be inversely proportional to the temperature T that we define in phenomenological thermodynamics takes a little more work. For example, you can do a bit of elementary kinetic theory on an ideal gas and convince yourself that the expectation value of a particle’s kinetic energy is proportional to k_B T.
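
A small numerical check of the composition argument above (a sketch, not part of the original comment, assuming Boltzmann weights f(E) = e^{-βE}): the partition functions of noninteracting systems multiply, and the joint probabilities factor.

```python
import numpy as np

beta = 0.8
EA = np.array([0.0, 1.0, 2.5])                  # energy levels of system A
EB = np.array([0.3, 0.9])                       # energy levels of system B
f = lambda E: np.exp(-beta * E)

ZA, ZB = f(EA).sum(), f(EB).sum()
E_joint = EA[:, None] + EB[None, :]             # levels of the noninteracting composite
Z_joint = f(E_joint).sum()
assert np.isclose(ZA * ZB, Z_joint)             # partition functions multiply

# and the joint probabilities factor into the product of the marginals:
assert np.allclose(f(E_joint) / Z_joint, np.outer(f(EA) / ZA, f(EB) / ZB))
```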

It’s a standard result that partition functions for independent systems multiply; after years of seeing and using this, I suddenly wondered if the argument could run in reverse. I liked this little exercise because it seemed to play nicely with the tensor-product rule for combining states in quantum theory. If our states for systems A and B are \rho_A and \rho_B respectively, then our joint state for the pair considered together without introducing any correlation is \rho_A \otimes \rho_B. And if \rho_A and \rho_B are to be “thermal states” in the same equivalence class of thermal equilibrium, then \rho_A \otimes \rho_B should belong to that equivalence class too. And this ties back to the original subject of John’s post, because the tensor-product rule for composing systems works well when we do quantum mechanics over \mathbb{C}, but not when we try it over \mathbb{R} or \mathbb{H}. In brief,

(d^N)^2 = (d^2)^N,

but

\frac{d^N(d^N + 1)}{2} \ge \left(\frac{d(d+1)}{2}\right)^N .

That is, in regular quantum mechanics over \mathbb{C}, the dimension of the space of d^N \times d^N density matrices is equal to the Nth power of the dimension of the space of d \times d density matrices. But if we try to do quantum mechanics over \mathbb{R}, there is a mismatch. Consequently, in the \mathbb{C} version, we can span the space of N-partite joint states by taking tensor products of d \times d density matrices, but we can’t in the \mathbb{R} version.
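
A quick counting check of this mismatch (a sketch in Python, not part of the original comment), for a pair of qubits composed three at a time:

```python
d, N = 2, 3
complex_joint   = (d**N) ** 2                   # dim of d^N x d^N density matrices over C
complex_product = (d**2) ** N                   # N-th power of the single-system count
real_joint      = d**N * (d**N + 1) // 2        # symmetric d^N x d^N matrices over R
real_product    = (d * (d + 1) // 2) ** N

print(complex_joint == complex_product)         # True: 64 == 64
print(real_joint, real_product)                 # 36 27 -- the mismatch over R
```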

Of course, my favorite reason to prefer \mathbb{C} over \mathbb{R} goes back to equiangular lines. There’s a full set of them in \mathbb{C}^4 but not in \mathbb{R}^4, meaning that there’s a much nicer reference POVM for a pair of qubits than for a pair of “rebits”!

Some literature pointers: Brandenburger and Steverson also used the Cauchy functional equation to obtain the form of Gibbs states, from a different starting point. They do not invoke quantum concepts, and they do not investigate in depth the question of how the β that arises from solving the Cauchy functional equation relates to the T of thermodynamics. Yunger Halpern and Renes arrive at a special case of the Cauchy functional equation (Lemma 8), using more details about “resource theory” than I’ve developed here, and proceed from there to the Gibbs states.

Posted by: Blake Stacey on July 1, 2020 2:25 AM | Permalink | Reply to this

Cool stuff, Blake!

Since the foundations of quantum mechanics are what’s most on my mind, I’ll focus on your remark about tensor products in real versus complex quantum mechanics. Various people have tried to argue for the complex version based on its superiority for describing composite systems. Some of them may be saying what you just said, or something similar. See for example axiom 4 here:

In my recent thoughts on Jordan algebras I’ve completely focused on one quantum system at a time—not a monoidal category of quantum systems, which is a very popular approach these days, and one I wholly support. So, I should take my new ideas and see how they interact with the monoidal category approach!

I think it’s bad to talk about observables, as I did in my paper, without talking about how observables get observed. This is a physical process that involves a composite system: at the very least, the “observed” system and the “observer” system. So it deserves the monoidal category approach.

By the way, I just noticed a paper on Hardy’s work that has a nice phrase for philosophers of quantum mechanics: “Shut up and contemplate!”

Posted by: John Baez on July 3, 2020 9:25 PM | Permalink | Reply to this

Re: Steve

Cool stuff, Blake!

That puts a smile on my face, though of course I can’t strictly speaking claim that any of it is cool except in the limit of large β. :-)

Thank you for pointing to Darrigol (2015). A lot of work has gone into “reconstructing quantum theory,” very much stimulated by Hardy, but less energy has been spent making reviews that provide bigger pictures of the field. One overview that I think merits attention is this lecture by Wilce:

In Darrigol’s review of Hardy, there are some things I quite like and others that I’m less moved by. For example, despite the fairly lengthy and clear discussion of what a “correspondence argument” might mean, it still seems under-defined to me. Of course, this is a challenge that goes back to Bohr (a very easy writer to misunderstand or find obscure). Darrigol seemingly wants to apply the “correspondence principle” to large ensembles of “identically prepared” particles, i.e., to the fictitious collections that physicists introduce in order to think of probabilities as relative frequencies in something. But when we actually do an experiment on a physical thing we are treating classically, we are not experimenting upon a notional ensemble, but upon a physical aggregate. Within such an aggregate, there can be interactions and correlations and variations. An observer is sometimes able to make confident predictions about macroscopic events precisely because they forsake the attempt to predict microscopic ones. We could imagine a theory that allows precise predictions for large-scale behaviors but not for the outcomes of experiments upon individual particles. So, taking the mathematics of angular momentum directly over to the microscopic physics, as Darrigol does between Eqs. (1) and (2), strikes me as making a stronger assumption than considerations of “correspondence” really justify.

On the other hand, I like this analogy:

Empirical veracity further requires the axiom to be a generalization of commonly accepted experimental facts. We would of course be happy if all the axioms met this criterion: we would thus be able to deduce quantum mechanics from a few empirically obvious principles just as we can, for instance, derive thermodynamics from the impossibility of two forms of perpetual motion (with some background knowledge of course).

In a way, this points to a reason why I feel that there’s more yet to do in the quantum-reconstruction line. When I look at these axiom systems (Hardy; Chiribella, D’Ariano and Perinotti; etc.), they strike me as trying to make quantum physics “benignly humdrum,” in David Mermin’s phrase. Each axiom is presented in a way that makes it sound as unremarkable as possible (and when there is something passingly unusual, it turns out to hold in theories much less strange than quantum mechanics, like the Spekkens toy model). It’s all very “yes, I’ll buy that” and “sure, that sounds like it should hold,” and then, after all the technical heavy lifting, we arrive at the formalism of quantum theory. The genuinely exotic features of that theory — like the violation of Bell inequalities — aren’t brought any closer; if anything, they’re pushed farther away.

Contrast this with thermodynamics. There’s drama in those laws! Energy is conserved, but useful energy diminishes. Or, take Einstein’s postulates for special relativity. Observers can do experiments and come to agree upon the laws of physics, but they cannot agree upon a standard of rest, even if they measure the speed of light. The rules have tension with each other, almost to the brink of contradiction. Quoting “On the Electrodynamics of Moving Bodies” with emphasis added:

We will raise this conjecture (the purport of which will hereafter be called the “Principle of Relativity”) to the status of a postulate, and also introduce another postulate, which is only apparently irreconcilable with the former, namely, that light is always propagated in empty space with a definite velocity c which is independent of the state of motion of the emitting body.

I want a page with the Four Laws of Quantum Mechanics, and I want to be able to say that one of the Laws is “only apparently irreconcilable” with another!

Of course, that’s not a very detailed basis for a research project, so I do try to descend from grandiosities to specifics. A while back, I grew intrigued by the study of convex cones mentioned upthread. (I worked through the classification theorem given in Faraut and Korányi, but the details didn’t stick very well, perhaps because I never had to teach the subject to someone else.) One question on my mind is why, on physical grounds, we should want our cone to be homogeneous — that is, why should we be so richly supplied with isometries that it is possible to map any interior point to any other?

Posted by: Blake Stacey on July 5, 2020 12:50 AM | Permalink | Reply to this

Re: Steve

Blake wrote:

One question on my mind is why, on physical grounds, we should want our cone to be homogeneous — that is, why should we be so richly supplied with isometries that it is possible to map any interior point to any other?

I don’t know! I actually find it easier to convince myself that observables should form a formally real Jordan algebra, and work from that to the conclusion that nonnegative linear functionals on this Jordan algebra form a homogeneous and self-dual cone.

The argument goes roughly like this: any finite-dimensional vector space of observables equipped with a quadratic ‘squaring’ operation gets a commutative bilinear product defined by

a \circ b = \frac{1}{2} \left((a+b)^2 - a^2 - b^2 \right)

If we assume this product is power-associative (so that ‘raising to the nth power’ is well-defined) and that squaring obeys

a_1^2 + \cdots + a_n^2 = 0 \implies a_1, \dots, a_n = 0

then we’ve got a formally real Jordan algebra. These can be classified, and they correspond to Euclidean vector spaces with homogeneous self-dual cones.
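
For self-adjoint complex matrices this recipe reproduces the familiar Jordan product (ab + ba)/2, as this small numpy sketch (not part of the original comment) checks:

```python
import numpy as np

rng = np.random.default_rng(4)
def hermitian(n=3):
    X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (X + X.conj().T) / 2

a, b = hermitian(), hermitian()
circ = 0.5 * ((a + b) @ (a + b) - a @ a - b @ b)
assert np.allclose(circ, 0.5 * (a @ b + b @ a))     # a o b = (ab + ba)/2

# Formal reality: tr(a^2 + b^2) is a sum of squared eigenvalues, so a
# vanishing sum of squares forces every term to vanish.
assert np.trace(a @ a + b @ b).real >= 0
```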

The idea that observables should have nth powers is pretty reasonable: to measure a^n I just measure a and raise the answer to the nth power. The harder part to physically justify is linear combinations, and that squaring should be a quadratic operation.

Posted by: John Baez on July 5, 2020 2:46 AM | Permalink | Reply to this

Re: Steve

I actually find it easier to convince myself that observables should form a formally real Jordan algebra, and work from that to the conclusion that nonnegative linear functionals on this Jordan algebra form a homogeneous and self-dual cone.

Also good!

The idea that observables should have nth powers is pretty reasonable: to measure a^n I just measure a and raise the answer to the nth power.

I have a vague worry about this. If the operator a stands for some experiment that I imagine I can do, then there are a couple different meanings that “a^n” might have. It could mean “do the action a, obtain a numerical outcome and raise that number to the nth power”. Or, it could mean “do the action a repeatedly, n times in succession, as quickly as possible, and multiply all the resulting numbers together”. Introductory quantum mechanics leaves us with the habit of thinking these are equivalent, but there’s no deep conceptual reason for that, and often it doesn’t make sense in practice. “For instance,” write Nielsen and Chuang, “if we use a silvered screen to measure the position of a photon we destroy the photon in the process. This certainly makes it impossible to repeat the measurement of the photon’s position!”

The harder part to physically justify is linear combinations, and that squaring should be a quadratic operation.

Making assumptions too readily about how linear combinations work seems to have tripped up von Neumann, leading him to think he had a stronger no-hidden-variables theorem than he did. (Though, according to Wigner, the theorem in von Neumann’s textbook wasn’t really what convinced von Neumann that there couldn’t be hidden variables.)

Perhaps justifying that “squaring should be a quadratic operation” is related to the worry I sketched above.

Posted by: Blake Stacey on July 6, 2020 6:28 PM | Permalink | Reply to this

Re: Steve

Blake wrote:

I have a vague worry about this. If the operator a stands for some experiment that I imagine I can do, then there are a couple different meanings that “a^n” might have. It could mean “do the action a, obtain a numerical outcome and raise that number to the nth power”. Or, it could mean “do the action a repeatedly, n times in succession, as quickly as possible, and multiply all the resulting numbers together”.

To me, only the first is a reasonable interpretation of measuring a^n. If we want to measure a^n in a given state, one way might be to measure a^n ‘directly’ somehow. Another way is to measure a in that state and raise the answer to the nth power. These should agree when they’re both possible.

Repeatedly measuring a and multiplying the answers is a poor substitute. Doing these repeated measurements “as quickly as possible” sounds like an attempt to prevent the state from having much time to change—but if doing a measurement very quickly requires a very large temporary change in the Hamiltonian of the system (by coupling it strongly to some other system), the state may change a lot in the process of each measurement. You gave the example of a limiting case where the measurement has the potential to completely destroy the system being measured.

Addition of observables is tricky operationally. Raising an observable to a power seems clearer, defined as I just did. From this, if we have an ensemble of systems in the same state ω we can in theory determine a probability measure μ for a in that state, the spectral measure, by

\int_{\mathbb{R}} x^n \, d\mu(x) = \omega(a^n)

where the right side is operationally defined as the average measured value of a^n.

But as usual, we never know for sure how many times we need to measure a^n to get a good estimate of this average, and there are also practical problems when it comes to determining a measure from its moments, even though we have theorems saying we can do it, which apply when the spectrum of a is bounded.
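
In finite dimensions the moment formula can be checked directly: for a self-adjoint matrix and a vector state, the moments are those of the spectral measure concentrated on the eigenvalues. A small numpy sketch (not part of the original comment):

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
a = (X + X.conj().T) / 2                        # a self-adjoint observable
psi = rng.standard_normal(4) + 1j * rng.standard_normal(4)
psi /= np.linalg.norm(psi)                      # a vector state omega = <psi|.|psi>

lam, V = np.linalg.eigh(a)                      # eigenvalues and eigenvectors
weights = np.abs(V.conj().T @ psi) ** 2         # weights of the spectral measure mu

for n in range(5):
    moment_state   = (psi.conj() @ np.linalg.matrix_power(a, n) @ psi).real
    moment_measure = (weights * lam**n).sum()   # integral of x^n against mu
    assert np.isclose(moment_state, moment_measure)
```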

Posted by: John Baez on July 6, 2020 8:54 PM | Permalink | Reply to this

Re: Steve

John wrote:

Repeatedly measuring a and multiplying the answers is a poor substitute.

Yes, I agree. My trouble is really that I’ve been trapped in too many conversations with people who find the two notions interchangeable, or who instinctively regard “you get the same result if you apply it twice” as a defining property of an ideal measurement. To me, that seems like a rather arbitrary choice of property to elevate, for reasons already sketched above; it might even be a notion of “ideal” that we should work to get away from. (When I’ve had blackboard conversations about this, what they generally seem to want is an analogue of a classical position measurement. But would we call a position measurement an ultimate “ideal” in classical mechanics? It only gives information about half of a canonical coordinate pair!) And a condition about pairs of measurements seems like it must be less fundamental than a statement about individual measurements.

Posted by: Blake Stacey on July 7, 2020 6:56 AM | Permalink | Reply to this

Re: Getting to the Bottom of Noether’s Theorem

Hello. Thanks for sharing your paper; it has been very enlightening for me. But I have a difficulty at one point. In the proof of Theorem 2 you assert "this is the unique solution of the desired differential equation and initial conditions". But you are referring to the definition above (…'a' generates a one-parameter family…). So, do we have an existence and uniqueness theorem for ODEs in infinite-dimensional spaces? I am not aware of one.

Thanks in advance.

Posted by: Antonio J Pan on July 9, 2020 7:47 AM | Permalink | Reply to this

Re: Getting to the Bottom of Noether’s Theorem

OK, don't worry, I found the argument by myself. By taking a point in M it becomes a trivial result of analysis. Thanks anyway.

Posted by: Antonio J Pan on July 9, 2020 2:50 PM | Permalink | Reply to this

Re: Getting to the Bottom of Noether’s Theorem

I was almost done with writing the paper when I wrote that proof, and I was feeling impatient, so I didn’t explain it very well. I wanted to find a reference for it but didn’t succeed. I’ll try to make this clearer.

The desired result follows from a standard fact, namely that every smooth vector field on a compact smooth manifold generates a unique flow. But the result is phrased in terms of smooth functions on the manifold, so a bit of translation is required.

Posted by: John Baez on July 9, 2020 8:24 PM | Permalink | Reply to this
