Chasing the Tail of the Gaussian (Part 2)
Posted by John Baez
Last time we began working on a puzzle by Ramanujan. This time we’ll solve it — with some help from a paper Jacobi wrote in Latin, and also from my friend Leo Stein on Twitter!
In 1914 Ramanujan posed this problem to the readers of The Journal of the Indian Mathematical Society:
Prove that

$$1 + \frac{1}{1\cdot 3} + \frac{1}{1\cdot 3\cdot 5} + \frac{1}{1\cdot 3\cdot 5\cdot 7} + \cdots \;+\; \cfrac{1}{1+\cfrac{1}{1+\cfrac{2}{1+\cfrac{3}{1+\cfrac{4}{1+\cdots}}}}} \;=\; \sqrt{\frac{\pi e}{2}}$$
We can reduce this to two separate sub-puzzles. First, prove this:

$$x + \frac{x^3}{1\cdot 3} + \frac{x^5}{1\cdot 3\cdot 5} + \frac{x^7}{1\cdot 3\cdot 5\cdot 7} + \cdots \;=\; e^{x^2/2} \int_0^x e^{-t^2/2}\, dt$$
Second, prove this:

$$\cfrac{1}{x+\cfrac{1}{x+\cfrac{2}{x+\cfrac{3}{x+\cfrac{4}{x+\cdots}}}}} \;=\; e^{x^2/2} \int_x^\infty e^{-t^2/2}\, dt$$
You could say these concern the ‘body’ and the ‘tail’ of the Gaussian, respectively. Setting $x = 1$ and adding these two equations we get the answer to Ramanujan’s problem, since

$$e^{1/2}\int_0^\infty e^{-t^2/2}\, dt \;=\; \sqrt{e}\,\sqrt{\frac{\pi}{2}} \;=\; \sqrt{\frac{\pi e}{2}}$$
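Before diving in, it’s easy to check numerically that the identity is at least plausible. Here’s a little Python sketch, just a sanity check with truncation depths picked arbitrarily: it adds a truncated version of the series to a truncated version of the continued fraction and compares the result with $\sqrt{\pi e/2}$.

```python
from math import e, pi, sqrt

# Body of the Gaussian: 1 + 1/(1*3) + 1/(1*3*5) + ..., truncated after 30 terms.
body, term = 0.0, 1.0
for n in range(30):
    body += term
    term /= 2 * n + 3          # each term divides the previous one by the next odd number

# Tail of the Gaussian: 1/(1 + 1/(1 + 2/(1 + 3/(1 + ...)))),
# truncated at depth 500 and evaluated from the inside out.
tail = 0.0
for k in range(500, 0, -1):
    tail = k / (1.0 + tail)
tail = 1.0 / (1.0 + tail)

print(body + tail)       # these two numbers should agree
print(sqrt(pi * e / 2))  # to many decimal places: 2.0663...
```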
I did the first sub-puzzle last time. Now we’ll do the second one! But it’s good to compare them first.
If we were handed the two sub-puzzles separately, a quick way to do the first would be to notice that

$$f(x) \;=\; e^{x^2/2} \int_0^x e^{-t^2/2}\, dt$$

obeys this differential equation:

$$f'(x) \;=\; x f(x) + 1, \qquad f(0) = 0$$

We can solve this equation using power series and get

$$f(x) \;=\; x + \frac{x^3}{1\cdot 3} + \frac{x^5}{1\cdot 3\cdot 5} + \frac{x^7}{1\cdot 3\cdot 5\cdot 7} + \cdots$$
I did this last time, except working in the other direction, starting from the power series.
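If you like, you can check this numerically. Here’s a quick Python sketch (the names body_series and body_integral are just mine); it uses the standard library’s erf together with the fact that $\int_0^x e^{-t^2/2}\, dt = \sqrt{\pi/2}\,\mathrm{erf}(x/\sqrt{2})$:

```python
from math import erf, exp, pi, sqrt

def body_series(x, terms=40):
    """Partial sum of x + x^3/(1*3) + x^5/(1*3*5) + ..."""
    total, term = 0.0, x
    for n in range(terms):
        total += term
        term *= x * x / (2 * n + 3)   # next term: multiply by x^2, divide by the next odd number
    return total

def body_integral(x):
    """exp(x^2/2) * integral_0^x exp(-t^2/2) dt, written via the error function."""
    return exp(x * x / 2) * sqrt(pi / 2) * erf(x / sqrt(2))

for x in (0.5, 1.0, 2.0):
    print(x, body_series(x), body_integral(x))   # the last two columns should agree
```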
To tackle the second sub-puzzle, we could note that

$$g(x) \;=\; e^{x^2/2} \int_x^\infty e^{-t^2/2}\, dt$$

obeys a very similar differential equation:

$$g'(x) \;=\; x g(x) - 1$$
If we could solve this using continued fractions, maybe we’d be done.
Unfortunately I never learned how to solve differential equations using continued fractions! So I soon gave up and looked around.
Hardy got a version of the second sub-puzzle in his first letter from Ramanujan. He wasn’t too impressed by this one: he thought it seemed vaguely familiar, and he later wrote:
…. it is a formula of Laplace first proved properly by Jacobi.
David Roberts found the original references. It turns out Laplace solved this problem in Equation 8359 of Book 10 of his Traité de Mécanique Céleste. In fact he was in the midst of estimating the refraction caused by the Earth’s atmosphere. It’s nice to see this stuff showing up in a practical context! Unfortunately Laplace does the calculation instantaneously, with scarcely a word of explanation. So I turned to Jacobi:
- C. G. J. Jacobi, De fractione continua, in quam integrale $\int_x^\infty e^{-xx}\, dx$ evolvere licet, Journal für die reine und angewandte Mathematik 12 (1834) 346–347.
Despite being in Latin and full of algebra mistakes, I thought this was one of the most exciting papers I’ve read in quite a while. For one thing, it’s just 2 pages long.
I explained how this method works… but then Leo Stein figured out how to simplify it a lot. So let me give his simplified method.
We want to write

$$g(x) \;=\; e^{x^2/2} \int_x^\infty e^{-t^2/2}\, dt$$

as a continued fraction. We differentiate and easily note that

$$g'(x) \;=\; x g(x) - 1$$

But next we do something sneaky: we keep on differentiating $g$, as if we were trying to work out its Taylor series. Let’s see what happens:

$$g'' \;=\; g + x g'$$

$$g''' \;=\; 2 g' + x g''$$

$$g'''' \;=\; 3 g'' + x g'''$$

We notice a recurrence relation, which is easy to prove inductively:

$$g^{(n+1)} \;=\; n\, g^{(n-1)} + x\, g^{(n)}$$

Basically, the plan is now to write the original function $g$ in terms of $g$ and $g'$, and then use this equation to rewrite those in terms of $g'$ and $g''$, and so on ad infinitum, getting a continued fraction for $g$.
It’s like solution by indefinite postponement!
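If you’d like a machine to do the inductive step’s legwork, here’s a small SymPy sketch, my own check with made-up variable names. It uses the closed form $g(x) = \sqrt{\pi/2}\, e^{x^2/2}\, \mathrm{erfc}(x/\sqrt{2})$, which is just the definition of $g$ rewritten via the complementary error function, and verifies the starting equation and the recurrence for the first few values of $n$:

```python
import sympy as sp

x = sp.symbols('x')

# g(x) = exp(x^2/2) * integral_x^oo exp(-t^2/2) dt, rewritten via erfc
g = sp.sqrt(sp.pi / 2) * sp.exp(x**2 / 2) * sp.erfc(x / sp.sqrt(2))

# Successive derivatives g, g', g'', ..., g^(6)
derivs = [g]
for _ in range(6):
    derivs.append(sp.diff(derivs[-1], x))

# The starting equation: g' = x*g - 1
print(sp.simplify(derivs[1] - (x * derivs[0] - 1)))                          # should print 0

# The recurrence: g^(n+1) = n*g^(n-1) + x*g^(n), for n = 1,...,5
for n in range(1, 6):
    print(sp.simplify(derivs[n + 1] - (n * derivs[n - 1] + x * derivs[n])))  # should print 0
```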
But to get a continued fraction, let’s take our recurrence relation and divide it by $g^{(n)}$. We get:

$$\frac{g^{(n+1)}}{g^{(n)}} \;=\; n\,\frac{g^{(n-1)}}{g^{(n)}} + x$$

This looks simpler in terms of the ratios

$$r_n \;=\; -\,\frac{g^{(n)}}{g^{(n-1)}}$$

Indeed, it becomes this:

$$r_{n+1} \;=\; \frac{n}{r_n} - x$$

or, solving for $r_n$, this:

$$r_n \;=\; \frac{n}{x + r_{n+1}}$$

In this notation, our original equation $g' = x g - 1$ gets a similar look:

$$g \;=\; \frac{1}{x + r_1}$$

Starting from here, and repeatedly using $r_n = \frac{n}{x + r_{n+1}}$, we get

$$g \;=\; \cfrac{1}{x + r_1} \;=\; \cfrac{1}{x + \cfrac{1}{x + r_2}} \;=\; \cfrac{1}{x + \cfrac{1}{x + \cfrac{2}{x + r_3}}}$$

and so on. If we’re allowed to go on forever, we get

$$g(x) \;=\; \cfrac{1}{x+\cfrac{1}{x+\cfrac{2}{x+\cfrac{3}{x+\cfrac{4}{x+\cdots}}}}}$$
A bit of algebra gives

$$\int_x^\infty e^{-t^2/2}\, dt \;=\; \cfrac{e^{-x^2/2}}{x+\cfrac{1}{x+\cfrac{2}{x+\cfrac{3}{x+\cdots}}}}$$

or in other words:

$$e^{x^2/2}\int_x^\infty e^{-t^2/2}\, dt \;=\; \cfrac{1}{x+\cfrac{1}{x+\cfrac{2}{x+\cfrac{3}{x+\cdots}}}}$$
And so we’re done!
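Just to reassure ourselves, here’s one more little Python sketch, again my own with made-up names like g_cf: it compares truncations of the continued fraction at various depths with $g(x)$ computed from the standard library’s erfc, using $\int_x^\infty e^{-t^2/2}\, dt = \sqrt{\pi/2}\,\mathrm{erfc}(x/\sqrt{2})$.

```python
from math import erfc, exp, pi, sqrt

def g_exact(x):
    """g(x) = exp(x^2/2) * integral_x^oo exp(-t^2/2) dt, via the complementary error function."""
    return exp(x * x / 2) * sqrt(pi / 2) * erfc(x / sqrt(2))

def g_cf(x, depth):
    """Truncation of 1/(x + 1/(x + 2/(x + 3/(x + ...)))), built from the inside out."""
    t = 0.0
    for k in range(depth, 0, -1):
        t = k / (x + t)
    return 1.0 / (x + t)

x = 1.0
for depth in (5, 20, 100, 400):
    print(depth, g_cf(x, depth), g_exact(x))   # the truncations close in on the exact value
```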
There seems to be something ‘coalgebraic’ or ‘coinductive’ going on here: underneath all the trickery, we are solving for the function at left by writing it in terms of its derivative, and writing that in terms of its derivative, and going on forever… until we get the answer! There’s a deep connection between continued fractions and coalgebra, so this would be nice to explore further.
Of course we need to check that the limiting procedure is valid. But since we obtained the infinite continued fraction from a sequence of fractions that are all equal, that shouldn’t be too hard.
I’m left with a sense of fascination and a deep unease. I need to learn more about solving differential equations using continued fractions, to feel I better understand these tricks. This book seems really good to start with: down-to-earth, with a sense of humor too:
- Lisa Lorentzen and Haakon Waadeland, Continued Fractions with Applications, North Holland, 1992.
By the way, I seem to have found some errors in Jacobi’s paper De fractione continua, in quam integrale $\int_x^\infty e^{-xx}\, dx$ evolvere licet. It’s a bit late to point them out, but here they are:
1) In Equation (7), should be .
2) Also in Equation (7), should be . In fact he never uses a superscript 2 to mean squaring — you can see this from his title.
3) At the top of the second page, right after Equation (7)
should be
This mistake continues in all the equations on that line.
4) As a result of the previous mistake, his Equation (8):
should be
which is more beautiful as well as being correct. But when he gets to the final result, which he states at the start of the paper, this mistake mysteriously evaporates!
Re: Chasing the Tail of the Gaussian (Part 2)
Ah. That’s why there’s an “Extra” . Or a missing . You’re indexing by derivatives of , while Jacobi is indexing by powers of .
Not that thus effervesces the mystery, for it doesn’t…
Let me channel my inner Pooh. The primary reason for being multiplied by that I know of is for Summing a Taylor Series.
And I don’t know that that helps you much either, but it’s the naturalest thing I could think of.