April 17, 2010

This Week’s Finds in Mathematical Physics (Week 295)

Posted by John Baez

In week295 of This Week’s Finds, learn about the principle of least power, Poincaré duality for electrical circuits — and a curious generalization of Hamiltonian mechanics that involves entropy as well as energy. Also, check out the eruption of Eyjafjallajökull!

Posted at April 17, 2010 1:36 AM UTC


Re: This Week’s Finds in Mathematical Physics (Week 295)

You start to see why Raoul Bott’s training in electrical engineering helped his mathematics.

To see if entropy and energy are linked as you say, it might be worth exploring other varieties of entropy: Kolmogorov-Sinai entropy, topological entropy, volume entropy, etc. I wonder what, if anything, is the common essence.

I see that Roland Gunesch is now hosting the material that Chris Hillman had collected on entropy.

Posted by: David Corfield on April 17, 2010 10:52 AM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 295)

Possible typo? In the paragraph starting ‘Instead of giving you…’, I thought the word ‘parallel’ in the second sentence should be ‘series’.

Posted by: Tom Leinster on April 17, 2010 1:17 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 295)

You’re right! Thanks! Fixed.

Posted by: John Baez on April 17, 2010 8:45 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 295)

On the subject of degenerate structures…

The first thing that comes to mind (always a dangerous way to start something!) is that the boundary of a Stein manifold has a few degenerate structures on it; in particular, there is the distinguished subbundle, the maximal $J$-stable subbundle of the real tangent bundle (where $J$ is the complex structure induced by $i$). Hörmander, for one, studied the flows of smooth sections of this distinguished subbundle. In particular, for some purposes it’s worth asking whether the space of smooth sections of some sub-tangent-bundle is closed under the Lie bracket, which is sometimes called Hörmander’s condition, due to this work of his. If one starts reading into the Newlander–Nirenberg theorem, one is bound to find one’s way backwards to whatever I should have in mind, but it’s too hazy in my present memory.

On the other hand, I’m guessing this isn’t what John wants, because the boundary of a Kähler Stein manifold isn’t symplectic, but contact, for obvious reasons.

Posted by: some guy on the street on April 17, 2010 4:56 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 295)

What I’m really looking for, in fact, is the completion of this analogy:

$symplectic : Poisson :: Kaehler : ???$

Posted by: John Baez on April 17, 2010 8:44 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 295)

Do you really want Kaehler? It looks like you don’t need integrability of the complex structure, so almost Kaehler would do just as well if not better.

Overall my wild guess is that you’re looking for Courant algebroids, or something even more general than a Courant algebroid (not that I know what I am talking about here).

Posted by: Eugene Lerman on April 18, 2010 6:49 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 295)

Eugene wrote:

Do you really want Kaehler? It looks like you don’t need integrability of the complex structure, so almost Kaehler would do just as well if not better.

Good point! I’ll be happy for the answer to any question along these lines:

$symplectic : Poisson :: Kaehler : ???$

or

$symplectic : Poisson :: almost Kaehler : almost ???$

And when I finally get an answer, I’ll be eager to know the physical difference between dissipative systems described by ‘??? manifolds’ and those described by ‘almost ??? manifolds’.

My student Chris Rogers is becoming an expert on Courant algebroids, so I should ask him about your guess…

Posted by: John Baez on April 18, 2010 9:43 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 295)

I think a whole bunch of messages over the past 30 hours have been gobbled up, including some thoughts of Alan Weinstein that you posted. Anyway, to repeat what I asked, what are the best ways to put some order on the zoo of geometries being mentioned here (complex, Poisson, Kähler, symplectic, etc. and any ‘almost’ versions)?

I know there’s the Cartan classification, about which Arnold writes in Symplectic geometry and topology (J. Math. Phys. 41(6), June 2000)

Symplectic and contact geometries are of course differential geometries of manifolds with some additional structures. Some rather natural axioms led Cartan to a small list of natural geometries of this kind, associated with the simple (pseudo)-groups of diffeomorphisms.

The Cartan list of simple pseudo-groups contains real and complex differential and volume-preserving geometries, symplectic and contact geometries, and a few conformal versions of the preceding geometries. (p. 3340)

And these “should be considered as sisters of ordinary geometry and topology rather than as the parts of it”. How do other geometries fit in at the family gathering?

I wonder what he means by the second paragraph of his closing comments:

Some 19th century mathematicians objected to the tendency toward projectivization of affine and Euclidean geometries. Cayley finally settled the problem, proclaiming “projective geometry is all geometry.”

In the same sense one might now say “symplectic geometry is all geometry,” but I prefer to formulate it in a more geometrical form: contact geometry is all geometry. (p. 3342)

Hmm, you worked on a notion of $n$-symplectic manifolds, closely related to multisymplectic geometry; is there a notion of $n$-contact manifold? There does seem to be a notion of multicontact structure.

Posted by: David Corfield on April 21, 2010 8:42 AM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 295)

David wrote:

Anyway, to repeat what I asked, what are the best ways to put some order on the zoo of geometries being mentioned here (complex, Poisson, Kähler, symplectic, etc. and any ‘almost’ versions)?

First of all, I guess you know that Klein’s program for classifying ‘rigid’ geometries using group theory fits neatly into the more general ‘Cartan geometry’ idea. A vast number of different kinds of geometry can be seen as Cartan geometries.

I’m biased, but I think the best explanation of Cartan geometry is by Derek Wise. It may actually be good to start with the intuitive explanation and then work backwards.

Second of all, it’s good to read about $G$-structures. This is another group-based classification of the zoo of geometries.

However, it’s important to note that lots of geometries are $G$-structures together with ‘integrability conditions’. This, for example, is what makes the difference between an almost complex manifold and a complex manifold.

I don’t know the state of the art on general abstract classifications of ‘integrability conditions’. But I know they’re incredibly important!

Posted by: John Baez on April 21, 2010 7:02 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 295)

Thanks. And apparently you can go From G-structures to Cartan geometries and back.

Posted by: David Corfield on April 21, 2010 8:45 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 295)

David wrote:

I think a whole bunch of messages over the past 30 hours have been gobbled up, including some thoughts of Alan Weinstein that you posted.

Yuck!

Here’s what I wrote:

Alan Weinstein says my question reminds him of three things:

1. Work of Boucetta on Riemann–Poisson geometry
2. Work of Marsden and Ratiu on dissipative systems
3. Work of Huebschmann on Kähler/Poisson geometry

Does anyone know enough about these to comment on their relevance to my question about filling in the following analogy?

$symplectic : Poisson :: Kaehler : ???$

Posted by: John Baez on April 21, 2010 6:53 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 295)

JB: Has anyone thought about such things? They remind me a little of “Dirac structures” and “generalized complex geometry” - but I don’t know enough about those subjects to know if they’re relevant here.

Yes, it is known that Dirac structures are relevant. I’ll dig out the references when I have a bit more time.

JB: This […] reminds me of other strange things, like the idea of using complex-valued Hamiltonians to describe dissipative systems, or the idea of “inverse temperature as imaginary time”. I can’t tell yet if there’s a big idea lurking here, or just a mess….

Yes, this is also related. You’ll find the KMS condition relating inverse temperature and imaginary time (which is a quantum property) in the book by Breuer et al. that I had recommended. In some sense, Breuer looks at the quantum version of GENERIC, with commutators in place of Poisson brackets and something related to Jordan products in place of the dissipative bracket. But there is no prominent reference directly relating the two. (I probably saw something in work by Oettinger, though. Ask him by email – also about Dirac structures, he’ll know – to find out!)

But things are more complex than that! It is a very long story, distributed in small bits and pieces in the literature. It has taken me a long time to figure out how things are connected.

Posted by: Arnold Neumaier on April 18, 2010 8:39 AM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 295)

Arnold Neumaier wrote:

But things are more complex than that! It is a very long story, distributed in small bits and pieces in the literature. It has taken me a long time to figure out how things are connected.

Are you going to tell the world? A paper that explained this stuff well could be very important.

I recently got ahold of

• A.N. Beris and B.J. Edwards, Thermodynamics of flowing systems with internal microstructure, Oxford U. Press, Oxford, 1994.

and I found to my surprise that their ‘dissipative bracket’ formalism seems quite different from Öttinger’s formalism (explained in week295). In particular, the basic equation is not

$d F/d t = \{H,F\} + [S,F]$

where $H$ is the energy and $S$ is the entropy. Instead, it’s

$d F / d t = \{H,F\} + [H,F]$

Energy is not necessarily conserved, and thus this other formalism can easily handle the damped harmonic oscillator. But they seem to hint that this formalism is not really so different! Do you know what’s going on here? I don’t want a lot of different formalisms…

Posted by: John Baez on April 23, 2010 3:02 AM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 295)

The ‘dissipative bracket’:

$d F/d t = \{H,F\} + [S,F]$

So how would this work for a simple example, like the damped harmonic oscillator?

Gerard

Posted by: Gerard Westendorp on April 24, 2010 4:24 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 295)

Gerard wrote:

So how would this work for a simple example, like the damped harmonic oscillator?

Are you trying to ask me how Beris and Edwards’ equation

$d O/d t = \{H,O\} + [H,O]$

works for the damped harmonic oscillator? You actually asked me how Öttinger’s equation

$d O/d t = \{H,O\} + [S,O]$

works for the damped harmonic oscillator.

Öttinger’s formalism won’t work for the damped harmonic oscillator if the only degrees of freedom we use are the position and momentum of the oscillator, $q$ and $p$. Why? Because energy is conserved in Öttinger’s formalism! He demands

$[S,H] = 0$

so that

$d H/d t = \{H,H\} + [S,H] = 0$

Since energy is not conserved for the damped harmonic oscillator, we’re stuck.

In Beris and Edwards’ formalism, you just take the usual Hamiltonian for the harmonic oscillator

$H = \frac{1}{2} (p^2 + q^2)$

and the usual Poisson bracket

$\{F,G\} = \frac{\partial F}{\partial p} \frac{\partial G}{\partial q} - \frac{\partial F}{\partial q} \frac{\partial G}{\partial p}$

but then define a dissipative bracket

$[F,G] = -k \frac{\partial F}{\partial p} \frac{\partial G}{\partial p}$

where $k$ is the damping constant.
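As a concrete check, here is a small numerical sketch (my own; the damping constant and sample point are arbitrary choices) that these brackets really do reproduce the damped oscillator equations $d q/d t = p$, $d p/d t = -q - k p$:

```python
# Beris-Edwards evolution dO/dt = {H,O} + [H,O] for the damped harmonic
# oscillator, with H, the Poisson bracket, and the dissipative bracket
# defined exactly as above.  Partial derivatives are taken numerically.

k = 0.5  # damping constant (arbitrary illustrative value)

def H(q, p):
    return 0.5 * (p**2 + q**2)

def d_dq(f, q, p, h=1e-6):  # numerical partial derivative in q
    return (f(q + h, p) - f(q - h, p)) / (2 * h)

def d_dp(f, q, p, h=1e-6):  # numerical partial derivative in p
    return (f(q, p + h) - f(q, p - h)) / (2 * h)

def poisson(F, G, q, p):    # {F,G} = F_p G_q - F_q G_p
    return d_dp(F, q, p) * d_dq(G, q, p) - d_dq(F, q, p) * d_dp(G, q, p)

def dissip(F, G, q, p):     # [F,G] = -k F_p G_p
    return -k * d_dp(F, q, p) * d_dp(G, q, p)

def rate(O, q, p):          # dO/dt = {H,O} + [H,O]
    return poisson(H, O, q, p) + dissip(H, O, q, p)

q, p = 1.0, 0.3  # an arbitrary point in phase space
dq = rate(lambda q_, p_: q_, q, p)  # should equal p
dp = rate(lambda q_, p_: p_, q, p)  # should equal -q - k*p
dH = rate(H, q, p)                  # should equal -k*p**2
```

Since $\{H,H\} = 0$ by antisymmetry, all the energy loss comes from the dissipative bracket: $d H/d t = [H,H] = -k p^2 \le 0$.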

I’m annoyed that Öttinger’s formalism doesn’t seem to apply to the damped harmonic oscillator. But I can imagine three ways around this.

First of all, switch to Beris and Edwards’ formalism.

Second of all, explicitly couple the harmonic oscillator to a ‘heat bath’, which has extra degrees of freedom, and make friction transfer energy from the harmonic oscillator to the heat bath. I don’t actually know if this works, but I know an analogous trick works for the quantum version of the damped harmonic oscillator — it’s called the Caldeira–Leggett model. In this model, the heat bath is treated as an infinite collection of harmonic oscillators. If you follow the link, you’ll see people have considered the classical limit of the Caldeira–Leggett model. But it would also be nice to formulate a purely classical version of this model, if possible.

Third, drop the assumption that

$[S,H] = 0$

since this seems to be a removable portion of Öttinger’s formalism. I recently noticed that if we do this, Öttinger’s formalism actually includes Beris and Edwards’ formalism as a special case! We can take Öttinger’s formalism and choose $S = H$. Then Öttinger’s evolution equation

$d O/d t = \{H,O\} + [S,O]$

reduces to

$d O/d t = \{H,O\} + [H,O]$

which is Beris and Edwards’ equation. And then with a suitable choice of the dissipative bracket $[\cdot, \cdot]$ we can treat the damped harmonic oscillator.

However, this raises other questions. It seems physically absurd to take $S = H$, since entropy and energy have different units. I would like to get the free energy

$F = H - T S$

into the game, where $T$ is an adjustable constant (not an observable) called the temperature. So, I would like to replace Öttinger’s evolution equation

$d O/d t = \{H,O\} + [S,O]$

(where $O$ is any observable) by something like this:

$d O/d t = \{H,O\} - T [S,O]$

This is easy to do by rescaling the dissipative bracket by a factor of $-1/T$. If I use Öttinger’s strong assumption

$\{S, O \} = [H, O] = 0$

for all observables $O$, then I can rewrite Öttinger’s evolution equation as follows:

$d O / d t = \{F, O\} + [F , O]$

Now time evolution is generated by a single observable — not the energy, but the free energy! I like this idea.
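Spelled out, the check is one line of bilinearity, using only the two assumptions above:

$\{F, O\} + [F, O] = \{H, O\} - T \{S, O\} + [H, O] - T [S, O] = \{H, O\} - T [S, O]$

which is exactly the rescaled evolution equation.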

Posted by: John Baez on April 24, 2010 5:12 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 295)

Oh! I haven’t been following this and haven’t fully absorbed the story yet either, but while reading this latest comment, some thoughts popped into my head so I thought I’d share them.

These brackets remind me of some stuff I did with noncommutative differential graded algebras. Instead of writing those relations as evolution equations $dO/dt$, I wonder if you could instead express them as noncommutative 1-forms

$d O = \left(\stackrel{\leftarrow}{\partial_t} O\right) d t + \left(\stackrel{\leftarrow}{\partial_\mu} O\right) d x^\mu$

where $x^\mu$ includes all coordinates besides time. The derivation $d$ satisfies the relations

$d^2 = 0\quad\text{and}\quad d(a b) = (d a)b + (-1)^{|a||b|} a (d b).$

The thing that makes this “interesting” is that functions and differentials do not commute, i.e.

$H (d O) \ne (d O) H.$

The applications I had in mind were stochastic evolution equations, but the authors I learned this from applied it to a whole host of nonlinear evolution equations, e.g. see this list. I’ve written up some of the basics here.

The dissipative bracket, in particular, looks like the kind of thing that naturally pops out when considering the coordinate expressions resulting from the commutation relations between functions and differentials.

Posted by: Eric on April 25, 2010 4:32 AM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 295)

John Baez wrote:

Now time evolution is generated by a single observable — not the energy, but the free energy! I like this idea.

You might try exergy instead of free energy. It looks almost the same:

Free energy = H - T S
Exergy = H - T0 S

With T0 a fixed reference temperature, rather than the temperature of the system considered.

Actually, in chemical thermodynamics, the usual notation is:

U = internal energy
H = U+pV = enthalpy
F = U-TS = (Helmholtz) free energy
G = U+pV-TS = (Gibbs) free enthalpy

We are using H for ‘Hamiltonian’, which is more or less equivalent to U.

Anyway, exergy is the theoretical work that you can do with a certain amount of energy, given a lowest heat bath of temperature T0. So heat at T0 has zero exergy. Exergy is more or less the stuff you pay for. You might try arguing with your power company about why you should pay for energy, which is conserved. Paying for exergy, which always decreases, actually makes more sense.
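As a toy numerical sketch of this (my own numbers; it uses the standard Carnot factor for the exergy of heat, which isn’t spelled out above):

```python
# Exergy of heat relative to a reference bath at temperature T0.
# For heat Q available at temperature T >= T0, the maximum extractable
# work is given by the Carnot factor: Q * (1 - T0/T).

T0 = 293.15  # reference (ambient) temperature, in kelvin

def heat_exergy(Q, T):
    """Maximum work obtainable from heat Q at temperature T."""
    return Q * (1.0 - T0 / T)

zero = heat_exergy(1000.0, T0)     # heat already at T0: zero exergy
work = heat_exergy(1000.0, 600.0)  # 1000 J at 600 K: about 511 J of work
```

So heat already sitting at the reference temperature is worthless for doing work, which is the point about what you actually pay for.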

I’ll be on vacation for a week; I’ll think about how to construct a dissipative bracket while relaxing. I also plan to build a Stirling engine.

Gerard

Posted by: Gerard Westendorp on April 30, 2010 10:57 AM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 295)

AN: But things are more complex than that! It is a very long story, distributed in small bits and pieces in the literature. It has taken me a long time to figure out how things are connected.

JB: Are you going to tell the world? A paper that explained this stuff well could be very important.

I have several partial papers on this but nothing finished. It is really a complex story, and I haven’t so far found the time to give a publishable account of it. It is related to everything dissipative discussed in physics, from the anharmonic oscillator to damped mechanical structures and nonlinear elasticity to electric circuits to the Navier-Stokes equations to the Boltzmann equation to resonances in quantum systems to retarded solutions in QED, and to deeper levels, probably even to quantum gravity. On the other hand, it is related to existence problems in symmetric hyperbolic systems, to the ergodic hypothesis, to photons on demand, and lots of other stuff. The stuff is everywhere, and a good presentation is not much short of a theory of everything – not the pale shadow string theory is after, which has difficulties being a theory of anything at all, but a theory that connects lots of different applications under a single umbrella.

JB: Do you know what’s going on here? I don’t want a lot of different formalisms…

The multiple and slightly incompatible versions are proof that the field still has some research potential for theoreticians wanting to bring clarity into the picture. There are many schools of nonequilibrium thermodynamics, and they all have their own version of the theory. I just pointed you to some of the best stuff. There is more: For example, you should also read

Mueller and Ruggeri, Rational extended thermodynamics, Springer 1998

who proceed without any Poisson bracket, but use variational principles - still another very useful side of the same coin. Neither theory is the final word, but each contains some aspect of the truth. As I said, the real thing is a complex story, and it takes time and effort to see the connections and implications.

Getting an account that fits everything in a uniform way and makes the connections transparent is a big job that will fill a book (maybe a second volume of my book “Classical and Quantum Mechanics via Lie algebras”, which apart from a few general remarks only treats equilibrium and the conservative case). Worse, in trying to do it well I stumble over lots of unsolved or poorly solved side issues. Neither Oettinger nor Beris and Edwards have the best formulation of the theory – many things there are too ad hoc. The correct formulation, which provides the full structure, is via dynamics in Lie algebras, but Oettinger is almost silent about this aspect, while Beris and Edwards don’t see the trees in their forest. (Almost all their Poisson brackets are in fact Lie-Poisson algebras. I treat these from an abstract point of view in my book, but had no space for the dynamical implications.)

At the moment I am trying to make sense, from this perspective, of the Kadanoff-Baym equations (kinetic equations generalizing the Boltzmann equation to particles with a finite lifetime). Getting a match in each application takes some time, but then it shows the application from a new, useful perspective.

Maybe I can write about it in a more informal manner; perhaps I’ll try in July, after our summer term is over. But should you have time for a visit in Vienna, I could show you many things – much quicker than when having to type it into a keyboard!

Posted by: Arnold Neumaier on April 26, 2010 3:26 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 295)

Arnold wrote:

The multiple and slightly incompatible versions are proof that the field still has some research potential for theoreticians wanting to bring clarity into the picture.

That’s me! Don’t worry, I regard a mess like this as an opportunity for research, not a problem. I just don’t want to spend a lot of time comparing and reconciling different formalisms for nonequilibrium thermodynamics if someone has already done it. If someone — for example, you — has already done it, I’d rather just know what they did.

The correct formulation, which provides the full structure, is via dynamics in Lie algebras, but Oettinger is almost silent about this aspects, while Beris and Edwards don’t see the trees in their forest. (Almost all their Poisson brackets are in fact Lie–Poisson algebras. I treat these from an abstract point of view in my book, but had no space for the dynamical implications.)

What do you mean by ‘Lie–Poisson algebra’? Some people use that to mean the Poisson algebra of functions on the dual of a Lie algebra. That’s a very nice Lie algebra, but is there a more or less canonical way of equipping it with a symmetric ‘dissipative bracket’ as well?

Am I on the right track here, or hopelessly lost?

Something about your phrase ‘dynamics in Lie algebras’ reminds me of this paper by Bloch, Krishnaprasad and Marsden.

But should you have time for a visit in Vienna, I could show you many things – much quicker than when having to type it into a keyboard!

That’s very kind of you, and indeed it would be more efficient, as well as pleasant. Alas, I’m heading to Singapore in July and I don’t plan to visit Europe before then. But my student Chris Rogers will be visiting the Erwin Schrödinger Institut during September and October, for the workshop on Higher Structures in Mathematics and Physics. I wonder if I can get him to absorb and then emit the desired information? He knows a lot about symplectic and Poisson geometry and their higher analogues… but he’s less passionate about dissipative systems than I am.

Posted by: John Baez on April 27, 2010 2:18 AM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 295)

JB: If someone — for example, you —- has already done it, I’d rather just know what they did.

Well, I have a full time job as a math professor, and have to do all the physics in my spare time; so I can’t give long explanations unless I have them ready in my head and a few formulas are sufficient to tell the story. This is not the case here. So until I have more time (which means not before July), I can only make superficial comments.

Your trick with the free energy goes in the right direction. It is not quite right since people want to study temperature dependence in a thermodynamically consistent way, which is lost the way you proceeded. What is “right” is determined by lots of “boundary conditions” that I collected over the years – one must recover all the major forms of dissipative systems that have been studied by physicists. If one can, then one knows one is right.

JB: What do you mean by ‘Lie–Poisson algebra’? Some people use that to mean the Poisson algebra of functions on the dual of a Lie algebra.

Yes, almost. The notion is used in the literature (e.g., the mechanics book by Marsden and Ratiu) in the sense you describe. But in Section 9.5 of my book

A. Neumaier and D. Westra, Classical and Quantum Mechanics via Lie algebras. arXiv:0810.1019

I consider a slight generalization, which identifies a distinguished central element 1 of the Lie algebra with the constant function 1, which is needed in many physical applications. This identification is not made in the standard construction, but has the advantage of adding flexibility. For example, in this way, the standard Poisson bracket on phase space functions f(p,q) becomes the Lie-Poisson algebra over the Heisenberg Lie algebra. This is part of my unification program. One gets all 2-cocycle stuff for free (well, at least hidden under the carpet).

JB: is there a more or less canonical way of equipping it with a symmetric ‘dissipative bracket’ as well?

No. And there shouldn’t be. The dissipative bracket is far from unique. The reason is that dissipation is due to interaction with an unmodelled environment (which may be an external environment or neglected internal degrees of freedom, or both). Therefore the details depend on how the system is embedded into the unmodelled part.

In the examples, the dissipative part depends on phenomenological “constants” (actually thermodynamic functions) such as transport coefficients, diffusion constants, and the like. The dissipative bracket can become extremely complicated (e.g., for combustion processes), while the conservative Poisson bracket nearly always comes from a fairly nice Lie-Poisson algebra (in my generalized sense) – although the underlying Lie algebra can also become a bit complex if a really complex physical system is studied. I have met only a single example of a dissipative system with a Poisson bracket that was not Lie-Poisson, and I suspect this arises due to poor modeling rather than something of importance.

JB: Something about your phrase ‘dynamics in Lie algebras’ reminds me of this paper by Bloch, Krishnaprasad and Marsden.

Yes, there is a connection. The double bracket dynamics is one of the standard ways dissipation arises in systems with a Lie structure, actually the simplest instance. There is a significant literature on double bracket flow that you can easily retrieve using http://scholar.google.com

Variants of the double bracket flow arise in similarity renormalization (perhaps the most elegant way to renormalize quantum fields numerically); see, e.g.,

T. S. Walhout, Similarity renormalization, Hamiltonian flow equations, and Dyson’s intermediate representation, Phys. Rev. D 59, 065009 (1999)

Double bracket terms are also characteristic of dissipative Lindblad terms in quantum optics, where they govern the losses of virtually all quantum optical dynamical systems. A glimpse of how this relates to my unification vision is given in Section 7.2 of my book. Actually, in the quantum case, (7.4) is the most general linear dissipative dynamics of interest! The classical case allows a bit more variety, essentially capturing all Markov processes and their deterministic approximations. In both cases, the interesting dynamics often has the double bracket form given in (3.9) on p. 63, and for good reasons – not spelled out there, though. However, the two references given there include the book by Breuer and Petruccione that I had recommended before!
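As a minimal illustration of double bracket flow (a toy example of my own, with arbitrary matrices and step size, not taken from any of the references): Brockett’s flow $\dot H = [H,[H,N]]$ is isospectral and dissipative, driving a symmetric matrix $H$ toward a diagonal matrix whose entries are the eigenvalues of the initial $H$, ordered to match the diagonal entries of $N$.

```python
# Brockett's double bracket flow dH/dt = [H, [H, N]] in plain Python,
# integrated with forward Euler.  It preserves the spectrum of H (up to
# integration error) while killing the off-diagonal entries: a simple
# dissipative dynamics built entirely out of Lie brackets.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def commutator(A, B):  # [A, B] = AB - BA
    AB, BA = matmul(A, B), matmul(B, A)
    n = len(A)
    return [[AB[i][j] - BA[i][j] for j in range(n)] for i in range(n)]

N = [[1.0, 0.0],
     [0.0, 2.0]]   # fixed diagonal "target" matrix
H = [[1.5, 0.7],
     [0.7, 2.5]]   # symmetric start; eigenvalues are about 1.14 and 2.86

dt, steps = 0.005, 8000  # arbitrary step size and horizon
for _ in range(steps):
    F = commutator(H, commutator(H, N))
    H = [[H[i][j] + dt * F[i][j] for j in range(2)] for i in range(2)]

# H is now (nearly) diagonal: the trace is conserved, the off-diagonal
# entry has decayed, and the diagonal entries approximate the initial
# eigenvalues, sorted to match the order of the entries of N.
```

The flow increases $tr(H N)$ monotonically, so it is a gradient-like, dissipative dynamics even though it is written entirely in terms of commutators; the Lindblad and similarity-renormalization settings mentioned above use the same algebraic pattern in much bigger spaces.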

Let me also remind you again that Section 7.1 of my book contains a table that is similar to your earlier tables of analogies between mechanics and thermodynamics! Moreover, these are not just analogies, but special cases of a unified picture.

Thus there are good reasons to emphasize the Lie algebra aspect of everything. Unfortunately (but for you perhaps fortunately), many of the Lie algebras of interest are infinite-dimensional, and the mathematical structure of the corresponding Lie groups is highly nontrivial and full of topological pitfalls.

JB: my student Chris Rogers will be visiting the Erwin Schrödinger Institut during September and October, for the workshop on Higher Structures in Mathematics and Physics. He knows a lot about symplectic and Poisson geometry and their higher analogues… but he’s less passionate about dissipative systems than I am.

He is welcome, too. He will be well prepared if he has read my book. And perhaps he’ll catch the virus while reading it!

JB: I wonder if I can get him to absorb and then emit the desired information?

Maybe you can talk him into writing a set of LaTeX lecture notes about what I can tell him. This would motivate me to present things coherently and in some detail – in September I’d have a fair amount of time to prepare some lectures on the topic. Then you and others could read them afterwards.

Posted by: Arnold Neumaier on April 27, 2010 1:27 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 295)

Arnold wrote:

Maybe you can talk him into writing a set of LaTeX lecture notes about what I can tell him.

Alas, he’ll be too busy writing his thesis to tackle a job like that. But I bet he’d be glad to talk to you.

Thanks for the references! I’ve been a bit distracted — I’m organizing my thoughts on electrical circuits made of linear resistors, getting ready for a talk on that next week. But I will keep thinking about nonequilibrium thermodynamics. And indeed, circuits made of linear resistors are a very simple special case of nonequilibrium thermodynamics! So all the stuff I discussed in week296 should someday fit into a larger picture.

Posted by: John Baez on May 11, 2010 1:43 AM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 295)

JB: Alas, he’ll be too busy writing his thesis to tackle a job like that.

I understand. On the other hand, I always recommend that my Ph.D. students always leave some time to do new things not directly related to their main work – at least half a day each week. If this habit is acquired early, it prevents them from getting stale later.

Posted by: Arnold Neumaier on May 11, 2010 12:34 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 295)

JB: Has anyone thought about such things? They remind me a little of “Dirac structures” and “generalized complex geometry” - but I don’t know enough about those subjects to know if they’re relevant here.

AN: Yes, it is known that Dirac structures are relevant. I’ll dig out the references when I have a bit more time.

Here is a key reference: R. Jongschaap and H.C. Oettinger, J. Non-Newtonian Fluid Mech. 120 (2004), 3-9.

The same journal, 96 (2001), 119-136 relates the Beris–Edwards formulation and the GENERIC formulation. A second paper comparing the two (which gives GENERIC a slight edge) is in J. Non-Equilib. Thermodyn. 23 (1998), 334-350.

Posted by: Arnold Neumaier on May 9, 2010 4:42 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 295)

I find it MUCH more useful if references include titles! Why did physicists ever adopt the ‘no title needed’ choice?

Posted by: jim stasheff on May 10, 2010 12:45 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 295)

JS: I find it MUCH more useful if references included titles!

I had been in a hurry, writing in between other things. Here are the full references:

R. Jongschaap and H.C. Oettinger, The mathematical representation of driven thermodynamic systems, J. Non-Newtonian Fluid Mech. 120 (2004), 3-9.

A.N. Beris, Bracket formulation as a source for the development of dynamic equations in continuum mechanics, J. Non-Newtonian Fluid Mech. 96 (2001), 119-136

B.J. Edwards, A.N. Beris, H.C. Oettinger, An analysis of single and double generator thermodynamic formalism for complex fluids. II. The microscopic description, J. Non-Equilib. Thermodyn. 23 (1998), 334-350.

Posted by: Arnold Neumaier on May 11, 2010 12:29 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 295)

Thanks to Arnold and Simon

Posted by: jim stasheff on May 11, 2010 12:57 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 295)

Full reference for you Jim:

Posted by: Simon Willerton on May 10, 2010 1:05 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 295)

The following information may improve your “This Week’s Finds”:

The principle of minimum entropy production was established by Ilya Prigogine. He made it very clear that it is only valid in the linear regime of non-equilibrium thermodynamics.

As is well known, the principle does not apply far from equilibrium, where the equations of motion cannot be obtained by minimizing some given action.

Entropic extensions of mechanics have been known for about 30 years now, well before Oettinger’s work!

The GENERIC equation of motion is a special case of the more general canonical theory equation first obtained by Keizer for a random variable n:

dn/dt = (dn/dt)_mech + (dn/dt)_diss + (dn/dt)_rand

The first term is the usual reversible term, the second term is a dissipative term obtained from entropy S

(dn/dt)_diss = Omega [ exp(-n^+ @S/@n) - exp(-n^- @S/@n)]

The third term, (dn/dt)_rand, accounts for fluctuations.

GENERIC ignores the fluctuation term (see page 27 of the book that you cited), and it only covers dissipation in the Markovian approximation.

Their derivation of dS/dt >= 0 is another sign that this approach is approximate. In more general non-Markovian evolutions, entropy increases according to the second law, S(t) >= S(0), but the increase is not locally monotonic; i.e., the local law dS/dt >= 0 is not valid at *all* times.

It is only in the Markovian approximation that dS/dt >= 0 is valid.

Oettinger applies this approximation, for instance, in his (6.72). Thus the resulting M matrix associated with his dissipative bracket [S,n] is only asymptotically valid in the ‘kinetic’ regime t >> tau.
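[For readers following the exchange, the deterministic GENERIC structure being debated is dx/dt = L·∂E/∂x + M·∂S/∂x, with L antisymmetric, M symmetric positive semidefinite, and the degeneracy conditions L·∂S = 0 and M·∂E = 0. The sketch below is a toy illustration, not Oettinger’s actual model: a damped harmonic oscillator with an entropy variable, at unit temperature, with an M matrix of the rank-one form γwwᵀ chosen by hand so that the degeneracy conditions hold. It checks numerically that energy is conserved and that the entropy production dS/dt = ∂S·M·∂S is nonnegative.]

```python
import numpy as np

# Toy GENERIC system: damped harmonic oscillator plus an entropy variable.
# State x = (q, p, s); E = p^2/2 + q^2/2 + s (unit temperature), S = s.
# dx/dt = L @ grad_E + M @ grad_S, with L antisymmetric, M symmetric PSD,
# and the degeneracy conditions L @ grad_S = 0, M @ grad_E = 0.

gamma = 0.5  # friction coefficient (illustrative value)

def grad_E(x):
    q, p, s = x
    return np.array([q, p, 1.0])

def grad_S(x):
    return np.array([0.0, 0.0, 1.0])

L = np.array([[0.0, 1.0, 0.0],
              [-1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])   # canonical symplectic block on (q, p)

def M(x):
    q, p, s = x
    w = np.array([0.0, 1.0, -p])  # chosen so that w . grad_E(x) = 0
    return gamma * np.outer(w, w)  # symmetric, positive semidefinite

def xdot(x):
    return L @ grad_E(x) + M(x) @ grad_S(x)

x = np.array([1.0, -0.8, 0.2])
dE_dt = grad_E(x) @ xdot(x)   # exactly 0: energy is conserved
dS_dt = grad_S(x) @ xdot(x)   # equals gamma * p^2 >= 0: entropy is produced
print(dE_dt, dS_dt)
```

As expected for this structure, dE/dt vanishes identically (antisymmetry of L plus M·∂E = 0), and dS/dt = γp² is manifestly nonnegative, which is the Markovian monotonicity being discussed.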

There are further limitations and oversimplifications of the GENERIC approach which I will not detail here. Also, their ‘understanding’ of irreversibility as CC is without mathematical basis. It is just another instance of what van Kampen called “mathematical funambulism”.

http://www.canonicalscience.org/research/time.html

http://www.canonicalscience.org/research/canonical.html

http://www.canonicalscience.org/research/nanothermodynamics.html

NOTE: When I use my correct Spanish name, your blog gives an encoding error and stops me from posting! I am forced to post with an incorrect name using American encoding!

Posted by: Juan R. Gonzalez-Alvarez on April 27, 2010 9:07 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 295)

Of course, all nonequilibrium concepts have a history reaching far back, beginning perhaps with Navier and Stokes.

Also of course, there are different levels of modeling. Oettinger, or Beris and Edwards, only treat the deterministic case without memory, since this is what is most often used in practice. There are also stochastic versions and versions with memory of all of this, and these have their applications too. But the complexity increases significantly, and these are much less used. Moreover, if phrased appropriately, these generalizations can be made deterministic (if written in terms of the probability density) and Markovian (if additional variables are introduced to encode the memory), and then they also fit a deterministic framework.

Also, a dynamics where $S(t)\ge S(0)$ but $S(t)$ does not increase monotonically cannot be fundamental in any sense, since it assigns a special role to the initial time. But a typical physical system has no well-defined initial time – and once the time origin can be chosen arbitrarily without leaving the class of problems described by a theory, any theory where $S(t)\ge S(0)$ implies that $S(t)$ increases monotonically.
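[Spelled out, the argument in the previous paragraph is a one-line calculation, under the stated assumption that the class of systems is closed under shifting the time origin:]

```latex
% If every time translate of a solution is again a solution, then for any
% t_1 \le t_2 we may regard t_1 as the initial time of the shifted solution
% \tilde S(t) := S(t + t_1).  The hypothesis \tilde S(t) \ge \tilde S(0) gives
S(t_2) \;=\; \tilde S(t_2 - t_1) \;\ge\; \tilde S(0) \;=\; S(t_1),
% i.e. S(t_1) \le S(t_2) for all t_1 \le t_2, so S increases monotonically.
```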

Finally, as one can see, there is sometimes a bit of a war between competing groups in nonequilibrium thermodynamics. Some people in each group think their version is the best and only fundamental one, and freely apply negative labels such as “mathematical funambulism” to the work of others. This should make one a bit cautious about their contributions. Good science shines by its virtues, not by calling the competition names.

Posted by: Arnold Neumaier on April 28, 2010 10:07 AM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 295)

What you say about stochastic non-Markovian equations is a rather widespread opinion, but it is not right. In any case, the main point here was that the GENERIC equation is built on several approximations, whereas the other formalisms cited are more general.

Dynamics where S(t) does not increase monotonically are fundamental. It is only for times large compared to the scale tau_C that the molecular details of the real initial state are ‘erased’ by the destruction fragment of the evolutor, and one recovers the usual exponential relaxation which gives the law dS/dt >= 0.

There are some well-known effects that arise when people ignore all this. Balescu recounts the early days of kinetic theory, when people took the Markovian equation obtained by Boltzmann as fundamental and sought generalized Boltzmann equations for arbitrary densities, on the supposition that relaxation was exponential and that dS/dt >= 0 always holds. How wrong they were!

Some people claim to derive irreversible equations from reversible ones. As van Kampen said, “Obviously it is a logical impossibility to deduce from reversible equations an irreversible consequence”, and he emphasized that “One cannot escape from this fact by any amount of mathematical funambulism.”

I can understand that you dislike the label ‘funambulism’ [1], but you could just as well have quoted another of his statements, such as “is a logical impossibility” or “your derivation cannot be right”, instead of resorting to an ad hominem remark like your last:

“This should make one a bit cautious about their contributions.”

Sorry to point out the obvious here, but Nico van Kampen’s contributions to science and education are well known. As D. ter Haar has said, he is one “of the most outstanding theoretical physicists of the second half of the 20th century” [2].

NOTES:

[1] However, it correctly labels the so-called ‘derivations’.

[2] “Nico van Kampen: charlatans beware!”, Physics World, January 2001, p. 46.

Posted by: Juan R. Gonzalez-Alvarez on April 30, 2010 6:52 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 295)

Juan, sorry you couldn’t use your correct name. To help us fix this, can you tell us what your correct name is? And do you know what aspect of your name the software objected to?

Posted by: Tom Leinster on April 28, 2010 10:25 AM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 295)

My correct Spanish name is Juan R. González-Álvarez, but posting it in the Name field gives an XML validation error: “Line 10, column 131: non SGML character number 129”.

In the raw code, the error is associated with my name inside the class “comments-post”. I can see my name there with strange invalid characters.

Note that the same characters give no error in the message body, only in the above class output from parsing the name field!
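[For what it’s worth, the reported “non SGML character number 129” is consistent with a UTF-8/Latin-1 mix-up – a guess about the blog software, not a confirmed diagnosis. The letter ‘Á’ encodes in UTF-8 as the two bytes 0xC3 0x81; if those bytes are then reinterpreted as Latin-1, the second one becomes character 129, a control character that is invalid in SGML:]

```python
# Sketch of the suspected mojibake: 'Á' as UTF-8 bytes, misread as Latin-1.
raw = "Á".encode("utf-8")          # b'\xc3\x81'
misread = raw.decode("latin-1")    # two characters: 'Ã' and chr(129)
print([ord(c) for c in misread])   # [195, 129] -- 129 is the offending character
```

This would also explain why the body text is unaffected: only the Name field’s output path would be doing the double decode.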

Posted by: Juan R. Gonzalez-Alvarez on April 30, 2010 6:00 PM | Permalink | Reply to this

Post a New Comment