The Space of Physical Frameworks (Part 5)
Posted by John Baez
In Part 4, I presented a nifty result supporting my claim that classical statistical mechanics reduces to thermodynamics when Boltzmann’s constant approaches zero. I used a lot of physics jargon to explain why I care about this result. I also used some math jargon to carry out my argument.
This may have been off-putting. But to understand the result, you only need to know calculus! So this time I’ll state it without all the surrounding rhetoric, and then illustrate it with an example.
At the end, I’ll talk about the physical meaning of it all.
The main result
Here it is:
Main Result. Suppose $S \colon (0,\infty) \to \mathbb{R}$ is a concave function with continuous second derivative. Suppose that for some $C > 0$ the quantity $C E - S(E)$ has a unique minimum as a function of $E \in (0,\infty)$, and $S'' < 0$ at that minimum. Then for this value of $C$ we have

$$ -k \ln \int_0^\infty e^{(S(E) - C E)/k} \, d E \; \longrightarrow \; \min_{E \in (0,\infty)} \left( C E - S(E) \right) $$

when $k$ approaches $0$ from above.
Let’s make up names for the things on both sides of the equation:

$$ \Psi_k \; = \; -k \ln \int_0^\infty e^{(S(E) - C E)/k} \, d E $$

and

$$ \Psi \; = \; \min_{E \in (0,\infty)} \left( C E - S(E) \right) $$

So, the result says

$$ \lim_{k \to 0^+} \Psi_k \; = \; \Psi $$

when the hypotheses hold.
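Before working through the example below, it may help to see the main result checked numerically. Here is a quick sketch (my own sanity check, not from the post), using the illustrative concave function $S(E) = \sqrt{E}$ and coldness $C = 1$, for which $\min_E (C E - S(E)) = -1/4$:

```python
# Numerical sanity check of the main result with the illustrative choice S(E) = sqrt(E):
#   -k ln ∫₀^∞ exp((S(E) - C E)/k) dE  should approach  min_E (C E - S(E))  as k → 0⁺.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def S(E):
    return np.sqrt(E)          # a concave entropy function (my choice, not from the post)

C = 1.0                        # a fixed coldness

def Psi_k(k):
    # Left hand side at finite k; the integrand is negligible past E = 10 here.
    integral, _ = quad(lambda E: np.exp((S(E) - C * E) / k), 0.0, 10.0)
    return -k * np.log(integral)

# Right hand side: min over E of C E - S(E).  For S = sqrt this is -1/(4C) = -0.25.
Psi = minimize_scalar(lambda E: C * E - S(E), bounds=(1e-9, 10.0), method='bounded').fun

print(Psi_k(0.1), Psi_k(0.01), Psi)
```

As $k$ shrinks, $\Psi_k$ creeps down toward $-0.25$.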
An example
Let’s do an example! Let’s try

$$ S(E) \; = \; \alpha \ln E $$

where $\alpha$ is any positive number. This meets the hypotheses of the result for all $C > 0$.

So, let’s show

$$ \lim_{k \to 0^+} \Psi_k \; = \; \Psi $$
in this case. Of course, this follows from what I’m calling the ‘main result’ — but working through this example really helped me. While struggling with it, I discovered some confusion in my thinking! Only after straightening this out and fixing Part 4 of my series was I able to get this example to work.
I’ll say more about the physics of this example at the end of this post. For now, let’s just do the math.
First let’s evaluate the left hand side, $\Psi_k$. By definition we have

$$ \Psi_k \; = \; -k \ln \int_0^\infty e^{(S(E) - C E)/k} \, d E $$

Plugging in $S(E) = \alpha \ln E$ we get this:

$$ \Psi_k \; = \; -k \ln \int_0^\infty E^{\alpha/k} \, e^{-C E/k} \, d E $$
To evaluate this integral we can use the formula

$$ \int_0^\infty x^n e^{-x} \, d x \; = \; n! $$

which makes sense for all positive numbers $n$, even ones that aren’t integers, if we interpret $n!$ as the gamma function $\Gamma(n+1)$. So, let’s use this change of variables:

$$ x \; = \; \frac{C E}{k}, \qquad d x \; = \; \frac{C}{k} \, d E $$

We get

$$ \int_0^\infty E^{\alpha/k} \, e^{-C E/k} \, d E \; = \; \left( \frac{k}{C} \right)^{\frac{\alpha}{k} + 1} \left( \frac{\alpha}{k} \right)! $$

and thus

$$ \Psi_k \; = \; -k \ln \left[ \left( \frac{k}{C} \right)^{\frac{\alpha}{k} + 1} \left( \frac{\alpha}{k} \right)! \right] $$
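As a quick cross-check on this evaluation (my own numerics, with the illustrative values $\alpha = 3/2$, $C = 2$, $k = 1/2$, so that $\alpha/k = 3$), we can compare the integral against the closed form, computing the factorial with the gamma function:

```python
# Cross-check:  ∫₀^∞ E^(α/k) e^(-C E/k) dE  =  (k/C)^(α/k + 1) · (α/k)!,  with n! = Γ(n+1).
import math
from scipy.integrate import quad

alpha, C, k = 1.5, 2.0, 0.5    # illustrative values (my choice), giving α/k = 3

lhs, _ = quad(lambda E: E**(alpha / k) * math.exp(-C * E / k), 0.0, 50.0)
rhs = (k / C)**(alpha / k + 1) * math.gamma(alpha / k + 1)

print(lhs, rhs)    # both equal 3!/4⁴ = 6/256 = 0.0234375
```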
Now let’s take the limit as $k \to 0^+$. For this we need Stirling’s formula

$$ n! \; \sim \; \sqrt{2 \pi n} \, \left( \frac{n}{e} \right)^n $$

or more precisely this version:

$$ \ln n! \; = \; n \ln n \, - \, n \, + \, o(n) \qquad \text{as } n \to \infty $$

Let’s compute:

$$ \begin{array}{ccl} \Psi_k &=& -k \left[ \left( \dfrac{\alpha}{k} + 1 \right) \ln \dfrac{k}{C} \, + \, \ln \left( \dfrac{\alpha}{k} \right)! \right] \\ \\ &=& -k \left[ \left( \dfrac{\alpha}{k} + 1 \right) \ln \dfrac{k}{C} \, + \, \dfrac{\alpha}{k} \ln \dfrac{\alpha}{k} \, - \, \dfrac{\alpha}{k} \, + \, o\!\left(\dfrac{1}{k}\right) \right] \\ \\ &=& -\alpha \ln \dfrac{k}{C} \, - \, k \ln \dfrac{k}{C} \, - \, \alpha \ln \dfrac{\alpha}{k} \, + \, \alpha \, + \, o(1) \\ \\ &=& -\alpha \ln \dfrac{\alpha}{C} \, - \, k \ln \dfrac{k}{C} \, + \, \alpha \, + \, o(1) \end{array} $$

Since $k \ln \frac{k}{C} \to 0$ as $k \to 0^+$, we get

$$ \lim_{k \to 0^+} \Psi_k \; = \; \alpha \, - \, \alpha \ln \frac{\alpha}{C} \; = \; \alpha \ln \left( \frac{e C}{\alpha} \right) $$

Here $e$ is the base of natural logarithms, our friend $2.718\dots$.
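We can also watch this limit happen numerically (again my own check, with the illustrative values $\alpha = 2$, $C = 3$). The closed form $\Psi_k = -k\left[\left(\frac{\alpha}{k}+1\right)\ln\frac{k}{C} + \ln\left(\frac{\alpha}{k}\right)!\right]$ can be evaluated stably with the log-gamma function, and it should creep toward $\alpha \ln(e C/\alpha)$:

```python
# Watch Ψ_k = -k[(α/k + 1) ln(k/C) + ln((α/k)!)] approach α ln(eC/α) as k → 0⁺.
import math

alpha, C = 2.0, 3.0            # illustrative values, my own choice

def Psi_k(k):
    n = alpha / k
    # math.lgamma(n + 1) = ln(n!), which avoids overflow for large n
    return -k * ((n + 1) * math.log(k / C) + math.lgamma(n + 1))

limit = alpha * math.log(math.e * C / alpha)   # = α - α ln(α/C)

for k in (1.0, 0.1, 0.01, 0.001):
    print(k, Psi_k(k), limit)
```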
Now let’s compute the right hand side of the main result. By definition this is

$$ \Psi \; = \; \min_{E \in (0,\infty)} \left( C E - S(E) \right) $$

Plugging in $S(E) = \alpha \ln E$ we get this:

$$ \Psi \; = \; \min_{E \in (0,\infty)} \left( C E - \alpha \ln E \right) $$
Since $\alpha > 0$, the function $C E - \alpha \ln E$ has strictly positive second derivative as a function of $E$ wherever it’s defined, namely for $E \in (0,\infty)$. It also has a unique minimum. To find this minimum we solve

$$ \frac{d}{d E} \left( C E - \alpha \ln E \right) \; = \; 0 $$

which gives

$$ C - \frac{\alpha}{E} \; = \; 0 $$

or

$$ E \; = \; \frac{\alpha}{C} $$

We thus have

$$ \Psi \; = \; C \cdot \frac{\alpha}{C} \, - \, \alpha \ln \frac{\alpha}{C} \; = \; \alpha \, - \, \alpha \ln \frac{\alpha}{C} $$

or

$$ \Psi \; = \; \alpha \ln \left( \frac{e C}{\alpha} \right) $$
Yes, we’ve seen this before. So we’ve verified that

$$ \lim_{k \to 0^+} \Psi_k \; = \; \Psi $$

in this example.
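As one last check on the example (my own numerics, with the illustrative values $\alpha = 2$, $C = 3$): minimizing $C E - \alpha \ln E$ numerically should land at $E = \alpha/C$, with minimum value $\alpha \ln(e C/\alpha)$.

```python
# Check: min over E of (C E - α ln E) occurs at E = α/C, with value α ln(eC/α).
import math
from scipy.optimize import minimize_scalar

alpha, C = 2.0, 3.0            # illustrative values, my own choice

res = minimize_scalar(lambda E: C * E - alpha * math.log(E),
                      bounds=(1e-9, 100.0), method='bounded')
E_star, Psi = res.x, res.fun

print(E_star, alpha / C)                          # argmin ≈ α/C = 2/3
print(Psi, alpha * math.log(math.e * C / alpha))  # minimum ≈ α ln(eC/α)
```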
The physical meaning
We can interpret the function $S$ as the entropy of a system when its energy is exactly $E$. There are many systems whose entropy is given by the formula $S(E) = \alpha \ln E$ for various values of $\alpha$. Examples include:

1) an $n$-dimensional harmonic oscillator,
2) a collection of noninteracting classical particles in a box (a so-called ‘ideal gas’),
3) a collection of so-called ‘energy particles’, which are imaginary entities that have no qualities except energy, which can be any nonnegative number.
Examples 1 and 2 are worked out in my book, while I used Example 3 to prove Stirling’s formula here:
So, it’s no coincidence that I used Stirling’s formula today: I’m just walking across the same bridge in the opposite direction!
For any system of this sort, its temperature is proportional to its energy. We saw this fact today when I showed the infimum in this formula:

$$ \Psi \; = \; \inf_{E \in (0,\infty)} \left( C E - \alpha \ln E \right) $$

occurs at $E = \alpha/C$. The quantity $C$ is the ‘coldness’ or inverse temperature $1/T$, so we’re seeing $E = \alpha T$. This type of result fools some students into thinking temperature is always proportional to energy! That’s obviously false: think about how much more energy it takes to melt an ice cube than it takes to raise the temperature of the melted ice cube by a small amount. So we’re dealing with a very special, simple sort of system here.
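The proportionality between energy and temperature can be seen numerically as well (a sketch of my own, with illustrative $\alpha = 2$): minimizing $C E - \alpha \ln E$ for several coldness values $C$, the minimizing energy comes out as $\alpha/C$, proportional to the temperature $T = 1/C$.

```python
# The energy minimizing C E - α ln E is E = α/C: proportional to temperature T = 1/C.
import math
from scipy.optimize import minimize_scalar

alpha = 2.0                    # illustrative value, my own choice

def E_star(C):
    # energy chosen by the system at coldness C
    return minimize_scalar(lambda E: C * E - alpha * math.log(E),
                           bounds=(1e-9, 1000.0), method='bounded').x

for C in (0.5, 1.0, 2.0, 4.0):
    print(C, E_star(C), alpha / C)   # E_star halves each time C doubles
```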
In equilibrium, a system at fixed coldness $C$ will choose the energy $E$ that maximizes $S(E) - C E$. The maximum value of this quantity is called the system’s ‘free entropy’. In other words, the system’s negative free entropy is

$$ \Psi(C) \; = \; \inf_{E \in (0,\infty)} \left( C E - S(E) \right) $$
All of my physics remarks so far concern ‘thermostatics’ as defined in Part 2. In this approach, we start with entropy $S$ as a function of energy $E$, and define $\Psi$ using a Legendre transform. Boltzmann’s constant never appears in our calculations.
But we can also follow another approach: ‘classical statistical mechanics’, as defined in Part 4. In this framework we define negative free entropy in a different way. We call $e^{S(E)/k}$ the ‘density of states’, and we define negative free entropy using the Laplace transform of the density of states. It looks like this:

$$ \Psi_k(C) \; = \; -k \ln \int_0^\infty e^{-C E/k} \, e^{S(E)/k} \, d E $$

In this approach, Boltzmann’s constant $k$ is important. However, the punchline is that

$$ \lim_{k \to 0^+} \Psi_k(C) \; = \; \Psi(C) $$
Thus classical statistical mechanics reduces to thermostatics as $k \to 0$, at least in this one respect.
In fact various other quantities should work the same way. We can define them in classical statistical mechanics, where they depend on Boltzmann’s constant, but as $k \to 0$ they should approach values that you could compute in a different way, using thermostatics. So, what I’m calling the ‘main result’ should be just one case of a more general result. But this one case clearly illustrates the sense in which the Laplace transform reduces to the Legendre transform as $k \to 0$. This should be a key aspect of the overall story.
Warning
By the way, I simplified the calculations today by choosing units of energy where the so-called ‘energy width’ $w$ from last time equals 1. Thus, energy is treated as dimensionless. However, I am not treating entropy as dimensionless.