

January 1, 2010

This Week’s Finds in Mathematical Physics (Week 288)

Posted by John Baez

Happy New Year!

In week288 of This Week’s Finds, start learning about an enormous set of analogies linking electrical circuits, classical mechanics, hydraulics, thermodynamics and chemistry. And continue learning about rational homotopy theory! This time we’ll dig deeper into the ‘commutative cochain problem’, and see why the ordinary cup product on singular cochains is almost but not quite commutative.

Also, guess what this is a picture of:

Posted at January 1, 2010 4:10 AM UTC


28 Comments & 0 Trackbacks

Re: This Week’s Finds in Mathematical Physics (Week 288)

John wrote:

homotopy theorists feel perfectly fine about studying simplicial sets rather than topological spaces. The reason…

for us oldsters is the adjoint pair: realization, denoted | |, and Sing.

This preceded Quillen, but I would guess that it inspired him.

Posted by: jim stasheff on January 1, 2010 2:36 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 288)

It certainly inspired him, because the first big example of a ‘Quillen equivalence of model categories’, constructed by him in his book Homotopical Algebra, is this pair of adjoint functors.

The trick was formalizing the sense in which this pair, while not an equivalence of categories, is still an ‘equivalence’ in some homotopy-theoretic sense. Everyone felt it was true (or so I gather), but I’m not sure if anyone tried to formalize it before Quillen.

Posted by: John Baez on January 1, 2010 6:31 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 288)

I think people at least knew, before Quillen, that it induced an equivalence of homotopy categories. That’s one precise statement, which of course turns out to be equivalent to being a Quillen equivalence. (I still haven’t decided whether or not I think it’s surprising that you get a full-blown equivalence of homotopy theories for free once you have an adjunction of 1-categories that induces an equivalence of homotopy categories.)

Posted by: Mike Shulman on January 1, 2010 7:14 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 288)

This time we’ll meet differential graded Lie algebras, and see how they yield a massive generalization of Lie theory.

Really? (-:O

Posted by: Mike Shulman on January 1, 2010 7:16 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 288)

Wow. It took me quite a while to figure out what your remark meant! You see, I accidentally described the section on rational homotopy theory in the This Week’s Finds that I’d just finished writing — not ‘week288’, but the next one! I’ve been taking advantage of the vacation to build up a kind of reserve of future issues.

Posted by: John Baez on January 1, 2010 11:07 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 288)

Heh, sorry for being too cryptic.

Posted by: Mike Shulman on January 2, 2010 5:07 AM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 288)

Does that also apply to the “thermodynamics and chemistry” part of the intro:

start learning about an enormous set of analogies linking electrical circuits, classical mechanics, hydraulics, thermodynamics and chemistry.

?

(Just proving that I “read these things”; I don’t think that more examples are needed :-)

Posted by: Tim vB on January 2, 2010 11:57 AM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 288)

Tim vB wrote:

Does that also apply to the “thermodynamics and chemistry” part of the intro:

start learning about an enormous set of analogies linking electrical circuits, classical mechanics, hydraulics, thermodynamics and chemistry.

?

No — my error-ridden mental processes are inexplicable. In this case I wasn’t screwing up: I deliberately said we’d start learning about this enormous set of analogies, quite conscious of the fact that only later would I bring in the examples of thermodynamics and chemistry.

… I don’t think that more examples are needed :-)

I do!

I want to understand how engineers describe everything using ‘bond graphs’, and then formalize what they’re doing, with the help of category theory. Thermodynamics is too interesting to leave out.

Posted by: John Baez on January 2, 2010 5:04 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 288)

“The basic inspiration is that electical circuit diagrams look sort of like Feynman diagrams” should be “The basic inspiration is that electrical circuit diagrams look sort of like Feynman diagrams”

Otherwise, wonderful on first reading!

Posted by: Jonathan Vos Post on January 1, 2010 8:22 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 288)

Thanks. I actually made that typo twice!

Posted by: John Baez on January 1, 2010 10:18 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 288)

Maxwell was also heavily influenced by fluid dynamics. One of the more difficult concepts to understand is electric displacement. Electric displacement fields are used to attribute charge to an object. The conceptually difficult part, and one that caused some consternation for physicists in the 19th century, was the acceptance of an internal degree of freedom (i.e. what is meant by ‘displacement’ when there is no actual physical displacement).

Max Born makes a good go at explaining this concept in his book on Einstein’s Theory of Relativity (start around p. 171; the book can be found on Google Books).

In any case, displacement implies a measurement of degrees of freedom. So if you have voltage, which is analogous to temperature, given as Energy/unit charge (e.g. energy per degree of freedom), then entropy is Energy/Volt. Your household electric system is designed to maintain a constant voltage, so as you draw power out of your socket, you are actually being provided with the ability to reduce entropy in your home (remember Energy = Power * time). In other words, as you draw power, you are working to maintain a constant entropy in your home.

Another way to think about it is in terms of current. Current is a measurement of charges per unit time; so you can imagine that as current flows into your house, the number of degrees of freedom is increasing, and the new degrees of freedom have more energy associated with them. This causes the energy distribution across all the degrees of freedom in your house to be less uniform, meaning that the entropy has decreased relative to what it was before the current flowed.

All very interesting stuff.

Posted by: H Matthew on January 1, 2010 8:48 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 288)

Quoth H Matthew: … so as you draw power out of your socket, you are actually being provided with the ability to reduce entropy in your home (remember Energy = Power * time). In other words, as you draw power, you are working to maintain a constant entropy in your home.

Air conditioning is indeed a nifty thing! Although in my neck of the woods, we prefer to localize entropy within the home at this time of year. Some clever folks nowadays do this in part by drawing entropy out of the environment, and they get a small bonus, too… though who really reaps that reward is perhaps a contentious question.

Posted by: Jesse McKeown on January 1, 2010 10:46 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 288)

I don’t get it. I understand that some people like to heat their houses, but I’m not sure which technique the ‘clever folks’ are using, and what ‘small bonus’ they get. Tax incentives for heat pumps?

Maybe we should let other folks at the n-Café guess, before you give away the answer!

Posted by: John Baez on January 2, 2010 5:09 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 288)

John asked about Martian longitude; and high up in a Google search for “martian longitude” we get this handy site. (In my Firefox it doesn’t come out very tidy; the “caption” is squished into a column about 18ex wide on the left edge.)

Posted by: Jesse McKeown on January 1, 2010 10:32 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 288)

Nice! I’ve added this information to the ‘Addenda’ of week288.

Posted by: John Baez on January 2, 2010 12:05 AM | Permalink | Reply to this

Power and Energy Conservation

I have a question about ‘power’ and ‘conservation of energy’

The power drawn by an electrical circuit is ‘voltage times current’, or in the more general notation I introduced here, ‘effort times flow’, or $\dot{p}\,\dot{q}$. I’ll use that more general notation.

For a capacitor we have

$$q = C \dot{p}$$

so we can define an ‘internal energy’ for the capacitor

$$H(p,q) = \frac{q^2}{2C}$$

such that

$$\frac{d H}{d t} = \dot{p}\,\dot{q}$$

According to some electrical engineering books I’m reading, this equation is the reason we say a capacitor conserves energy: the power we pour into it equals the rate of increase of its internal energy.
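Spelling out that check for the capacitor (nothing beyond the chain rule and the relation $q = C\dot{p}$ above):

$$\frac{d H}{d t} = \frac{q}{C}\,\dot{q} = \dot{p}\,\dot{q}.$$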

We can also define an internal energy for an inductor, which again satisfies

$$\frac{d H}{d t} = \dot{p}\,\dot{q}$$

Capacitors and inductors are analogous to springs and masses in mechanics, so everything I’ve said has an analogue for those too. A spring stores internal energy that’s a function of $q$ (potential energy); a mass stores internal energy that’s a function of $p$ (kinetic energy).
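For concreteness, here is a quick sketch in the usual mechanical notation (the spring constant $k$ and mass $m$ are my labels, not from the post): a spring stores $H(q) = \tfrac{1}{2} k q^2$, and since the effort is the force $\dot{p} = k q$, we get $\frac{d H}{d t} = k q\,\dot{q} = \dot{p}\,\dot{q}$; a mass stores $H(p) = \frac{p^2}{2m}$, and since $p = m\dot{q}$, we get $\frac{d H}{d t} = \frac{p}{m}\,\dot{p} = \dot{q}\,\dot{p}$.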

But I’m quite puzzled by the general status of this equation:

$$\frac{d H}{d t} = \dot{p}\,\dot{q}$$

It seems to be hinting at a generalization of Hamiltonian mechanics for ‘open systems’, where we no longer have

$$\frac{d H}{d t} = 0$$

as we do in a closed system, but instead our system can gain or lose energy via interaction with its environment.

In ordinary Hamiltonian mechanics, Hamilton’s equations

$$\dot{p} = \frac{\partial H}{\partial q}$$

$$\dot{q} = -\frac{\partial H}{\partial p}$$

automatically give

$$\frac{d H}{d t} = 0$$
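(Spelled out, the chain-rule computation with the sign convention above is $\frac{d H}{d t} = \frac{\partial H}{\partial q}\,\dot{q} + \frac{\partial H}{\partial p}\,\dot{p} = \dot{p}\,\dot{q} - \dot{q}\,\dot{p} = 0$.)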

I think I’ve seen a generalization of Hamiltonian mechanics to open systems where we can also have an ‘external force’. And I’ve seen Hamiltonian mechanics for systems that have a time-dependent Hamiltonian. But I don’t see the equation

$$\frac{d H}{d t} = \dot{p}\,\dot{q}$$

arising naturally from these formalisms! So what’s the realm of validity of this equation? Or is it just a special case of something better?

Posted by: John Baez on January 1, 2010 11:28 PM | Permalink | Reply to this

Re: Power and Energy Conservation

I think the answer lies in the product rule, which can be generalized from the chain rule (also remembering that dH/dt is a total derivative). If we look at Leibniz’s argument for the product rule (see the Wikipedia entry), he discards the infinitesimally small product of differential elements. It would seem that this discarding is analogous to the assumption of a closed, conservative system (although I don’t know if I could argue that rigorously).
The only problem I have in using that as a starting point is that dp/dt * dq/dt gives me dp dq/dt^2, which is one too many dt’s.

Posted by: H Matthew on January 2, 2010 3:34 AM | Permalink | Reply to this

Re: Power and Energy Conservation

Just another thought, though: if dH/dt = IV as suggested, only I has an explicit unit of time in its definition; so viewing the product IV as being analogous to the discarded differentials in the product rule might work (don’t even get me started on the analogies to geometry).

Posted by: H Matthew on January 2, 2010 4:05 AM | Permalink | Reply to this

Re: Power and Energy Conservation

This equation is called the “power balance equation” and is a consequence of the definition of so-called “input-output Hamiltonian systems”.

I’m referring to the book

- Brogliato et al., “Dissipative Systems, Analysis and Control”, Springer, 2nd edition,

and specifically to Definition 6.21 and Lemma 6.23.

(On Google Books you will only find excerpts of it, unfortunately not including the page we need here.)

That’s probably not the kind of generality you are looking for, and you probably knew that already, but, well… at least we establish a bit of terminology :-)

Posted by: Tim vB on January 2, 2010 11:17 AM | Permalink | Reply to this

Re: Power and Energy Conservation

The Control Handbook by W. S. Levine (p. 102) is available on Google Books and has some discussion of the power balance equation. It also introduces Henry Paynter and bond graphs.

Posted by: H Matthew on January 2, 2010 12:36 PM | Permalink | Reply to this

Re: Power and Energy Conservation

Thanks, Tim vB, for the reference. Deriving this equation for $d H / d t$ from some definition of ‘input-output Hamiltonian systems’ is exactly what I want to see. Mainly, I want to see the definition. I’ll get ahold of that book.

Thanks, H Matthew, for the link to The Control Handbook. I see that discussion of the power balance equation; it’s part of a delightfully prickly and opinionated discussion of “the most common misconceptions” about the modeling of physical systems. Unfortunately it doesn’t seem to derive that equation — it seems to treat it as an axiom. Which is interesting in itself, but not quite what I want.

The Control Handbook is 1548 pages long! The reviews are all good… I’ll have to get ahold of this, too! I checked out Forbes T. Brown’s Engineering System Dynamics: A Unified Graph-Centered Approach, but that’s only 1058 pages long — clearly missing a lot of important stuff.

Posted by: John Baez on January 2, 2010 4:10 PM | Permalink | Reply to this

Re: Power and Energy Conservation

The section Modeling from Physical Principles from the above reference book is available in its entirety by clicking the link (one of the authors keeps a website on his published material on bond graphs here).

Here is a more rigorous mathematical treatment of Hamiltonian input-output systems from the book Dynamics and control of multibody systems.

This is for the people who might be interested in the material but aren’t interested in buying the books.

Posted by: H Matthew on January 2, 2010 5:59 PM | Permalink | Reply to this

Re: Power and Energy Conservation

Okay, I was able to download the 2007 edition of Dissipative Systems, Analysis and Control using my U.C. Riverside superpowers. Now the relevant stuff is in chapter 6 — ‘Lagrangian control systems’ and ‘input-output Hamiltonian systems’. Lemma 6.23 states the ‘power balance equation’ for input-output Hamiltonian systems. This is what I want to learn about!

Thanks again!

Posted by: John Baez on January 2, 2010 5:30 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 288)

Thanks for this! A great way to start the year :)

You discuss two things close to my heart: algebraic topology as a driver of improvements in computational physics, and the “cochain problem” involving the failure of the cup product to be graded commutative “on the nose”. Of course, my interest in both stems from the fact that I think the two are related.

I didn’t quite understand the bit about the Eilenberg–Zilber map (EZ) and the Alexander–Whitney map (AW). Are these related to the de Rham map (R) and the Whitney map (W)?

I wish I knew the correct “maths” way to express this, but given a manifold $M$ and differential forms $\Omega(M)$ on $M$, we can construct a simplicial complex $S$ with cochains $C^*(S)$. The de Rham map $R$ takes forms on $M$ and turns them into cochains on $S$

$$R:\Omega(M)\to C^*(S).$$

The Whitney map $W$ takes cochains on $S$ and turns them into (Whitney) forms on $M$

$$W:C^*(S)\to\Omega(M).$$

We have

$$R\circ W = \mathrm{Id}_{C^*(S)}$$

and

$$W\circ R \sim \mathrm{Id}_{\Omega(M)},$$

which sounds similar to EZ and AW.

As you pointed out, the wedge product in $\Omega(M)$ is graded commutative and the cup product in $C^*(S)$ is not graded commutative “on the nose”, so all these maps do not quite fit together perfectly, e.g.

$$W(a\smile b) \ne W(a)\wedge W(b)$$

and

$$R(\alpha\wedge\beta) \ne R(\alpha)\smile R(\beta).$$

Some people have proposed a modified cup product for computational physics via

$$a\,\tilde\smile\, b := R(W(a)\wedge W(b)).$$

I don’t think this helps much with the “cochain problem”. In particular, this modified cup product is not even associative!
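(The obstruction to associativity can be traced to $W\circ R$ being only homotopic, not equal, to the identity: expanding $(a\,\tilde\smile\,b)\,\tilde\smile\,c = R\big(W R(W(a)\wedge W(b))\wedge W(c)\big)$, the inner $W R$ does not simply cancel.)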

I have proposed an alternative, which at first will seem radical but I actually think might help with things like the “cochain problem”. That is to introduce a modified wedge product

$$\alpha\,\tilde\wedge\,\beta := W(R(\alpha)\smile R(\beta)).$$

How dare we mess with the continuum :)

This modified wedge product has the (uber) nice property that it is an algebra homomorphism “on the nose”, i.e.

$$R(\alpha\,\tilde\wedge\,\beta) = R(\alpha)\smile R(\beta).$$
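(This follows immediately from $R\circ W = \mathrm{Id}_{C^*(S)}$: $R(\alpha\,\tilde\wedge\,\beta) = R W(R(\alpha)\smile R(\beta)) = R(\alpha)\smile R(\beta)$.)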

The undesirable property of this modified wedge product is that it will depend on $S$, but that dependence disappears when you pass to cohomology, homotopy, etc., which is what the mathematicians really care about.

My gut tells me that having a true algebra morphism like this will give you true functors and the category theoretic analysis will be much cleaner. But that is just a hunch.

Also note that in a suitable limit of refinements of SS, i.e. a kind of “continuum limit” we have

$$\alpha\wedge\beta = \lim_{\mathrm{continuum}} \alpha\,\tilde\wedge\,\beta.$$

Posted by: Eric Forgy on January 2, 2010 2:08 AM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 288)

Eric wrote:

As you pointed out, the wedge product is graded commutative and the cup product in $C^*(S)$ is not graded commutative, so all these maps do not quite fit together perfectly, e.g. they are not multiplicative.

But $R$ is strongly homotopy multiplicative:

MR0418083 (54 #6127) Gugenheim, V. K. A. M. On the multiplicative structure of the de Rham theory. J. Differential Geometry 11 (1976), no. 2, 309–314.

Some people have proposed a modified cup product for computational physics via


R(W W)

I dont think this helps much with the cochain problem In particular, this modified cup product is not even associative!

Ditto for the symmetrization of the ordinary cup product.

Posted by: jim stasheff on January 2, 2010 1:53 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 288)

Thanks for your comments Jim. I prepared a longish reply, but decided to move it to my personal nLab page, where it grew a life of its own. I’m very happy with the result. Please have a look:

Modified Wedge Product

I also gave a notice on the nForum:

Differential Graded (Noncommutative) Algebra of Whitney Forms

Posted by: Eric Forgy on January 3, 2010 4:46 PM | Permalink | Reply to this

Operational Amplifiers

In your 10-watt circuit diagram, IC1 is an operational amplifier, a cool electronic abstraction that was developed for cutting-edge applications (e.g. military weapons targeting), though these days most such exotic uses have been replaced by digital systems and most op-amps are found as simple pre-amplifiers in consumer electronics.

An ideal op-amp is completely linear and has infinite gain. Through feedback it can be configured to perform analog computations such as integrating or differentiating its input signal. Patch a bunch together as an analog computer and they can model physical systems. For example, they were used to model helicopter rotors or in flight control systems.
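To make the ‘analog computation’ point concrete, here is the textbook ideal-integrator configuration (generic R and C, not anything specific to the circuit in the post): feed the signal through an input resistor R into the inverting input, put a capacitor C in the feedback path, and an ideal op-amp with an initially uncharged capacitor gives

$$V_{\text{out}}(t) = -\frac{1}{R C}\int_0^t V_{\text{in}}(\tau)\,d\tau,$$

so cascading such stages lets an analog computer integrate a system’s differential equations directly.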

In high school I devoured a book I found in the library on the theory and programming of analog computers. These days they don’t exist except maybe in a few courses that teach control theory, or renamed as analog music synthesizers.

The point is that operational amplifiers were developed so that there could be an easy isomorphism between electrical circuits and physical systems even though they are now often re-purposed in bass-boost circuits.

Posted by: RodMcGuire on January 2, 2010 7:29 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 288)

Phew, back online again after a massive failure by my internet provider.

Preece was another electrical engineer, who liked the hydraulic analogy and disliked Heaviside’s fancy math.

Which goes to show how relative these feelings are. Heaviside was a critic of Hamilton’s quaternions and their use in physics, and so came into conflict with defenders such as Tait.

Regarding nineteenth-century physics analogies, Smith and Norton Wise’s Energy and Empire: A Biographical Study of Lord Kelvin is a must.

Posted by: David Corfield on January 4, 2010 1:34 PM | Permalink | Reply to this
