### Chasing the Tail of the Gaussian (Part 2)

#### Posted by John Baez

Last time we began working on a puzzle by Ramanujan. This time we’ll solve it — with some help from a paper Jacobi wrote in Latin, and also from my friend Leo Stein on Twitter!

In 1914 Ramanujan posed this problem to the readers of *The Journal of the Indian Mathematical Society*:

Prove that

$\left(\frac{1}{1} + \frac{1}{1 \cdot 3} + \frac{1}{1 \cdot 3 \cdot 5} + \cdots\right) \; + \; \frac{1}{1 + \frac{1}{1 + \frac{2}{1 + \frac{3}{1 + \frac{4}{1 + \frac{5}{1 + \ddots}}}}}} = \sqrt{\frac{\pi e}{2}}$

We can reduce this to two separate sub-puzzles. First, prove this:

$x + \frac{x^3}{1 \cdot 3} + \frac{x^5}{1 \cdot 3 \cdot 5} + \cdots = e^{x^2/2} \int_0^x e^{-t^2/2} \, d t$

Second, prove this:

$\frac{1}{x + \frac{1}{x + \frac{2}{x + \frac{3}{x + \frac{4}{x + \frac{5}{x + \ddots}}}}}} = e^{x^2/2} \int_x^\infty e^{-t^2/2} \, d t$

You could say these concern the ‘body’ and the ‘tail’ of the Gaussian, respectively. Setting $x = 1$ and adding these two equations, we get the answer to Ramanujan’s problem.
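Before diving into the proof, it’s reassuring to check the claim numerically. Here’s a quick sketch (my own, not part of the proof): the series is summed term by term, and the continued fraction is truncated at a large depth and evaluated from the bottom up. The integral hiding in the identity is handled via Python’s built-in error function.

```python
import math

# Series: 1/1 + 1/(1*3) + 1/(1*3*5) + ...
series = 0.0
term = 1.0
d = 1
while term > 1e-17:
    series += term
    d += 2
    term /= d

# Continued fraction 1/(1 + 1/(1 + 2/(1 + 3/(1 + ...)))),
# truncated at depth 400 and evaluated from the bottom up.
t = 0.0
for k in range(400, 0, -1):
    t = k / (1.0 + t)
cf = 1.0 / (1.0 + t)

print(series + cf)                      # both ≈ 2.06637
print(math.sqrt(math.pi * math.e / 2))
```

The agreement to many decimal places is, of course, no proof — but it’s a good sign we haven’t misread the puzzle.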

I did the first sub-puzzle last time. Now we’ll do the second one! But it’s good to compare them first.

If we were handed the two sub-puzzles separately, a quick way to do the first would be to notice that

$f(x) = e^{x^2/2} \int_0^x e^{-t^2/2} \, d t$

obeys this differential equation:

$f'(x) = x f(x) + 1$

We can solve this equation using power series and get

$f(x) = x + \frac{x^3}{1 \cdot 3} + \frac{x^5}{1 \cdot 3 \cdot 5} + \cdots$
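Numerically, the two sides of this are easy to compare, since $\int_0^x e^{-t^2/2}\,dt = \sqrt{\pi/2}\,\operatorname{erf}(x/\sqrt{2})$. A small sketch (the function names here are mine):

```python
import math

def f_closed(x):
    # f(x) = e^{x^2/2} * integral_0^x e^{-t^2/2} dt
    #      = e^{x^2/2} * sqrt(pi/2) * erf(x / sqrt(2))
    return math.exp(x * x / 2) * math.sqrt(math.pi / 2) * math.erf(x / math.sqrt(2))

def f_series(x, terms=200):
    # x + x^3/(1*3) + x^5/(1*3*5) + ...
    total, term, d = 0.0, x, 1
    for _ in range(terms):
        total += term
        d += 2
        term *= x * x / d
    return total

for x in (0.5, 1.0, 2.0):
    assert abs(f_series(x) - f_closed(x)) < 1e-12
```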

I did this last time, except working in the other direction, starting from the power series.

To tackle the second sub-puzzle, we could note that

$g(x) = e^{x^2/2} \int_x^\infty e^{-t^2/2} \, d t$

obeys a very similar differential equation:

$g'(x) = x g(x) - 1$
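As a sanity check on this differential equation, note that $g$ can be written with the complementary error function, $g(x) = e^{x^2/2}\sqrt{\pi/2}\,\operatorname{erfc}(x/\sqrt{2})$, and the equation tested against a central-difference derivative (a rough numerical check, not a proof):

```python
import math

def g(x):
    # g(x) = e^{x^2/2} * integral_x^infinity e^{-t^2/2} dt
    #      = e^{x^2/2} * sqrt(pi/2) * erfc(x / sqrt(2))
    return math.exp(x * x / 2) * math.sqrt(math.pi / 2) * math.erfc(x / math.sqrt(2))

h = 1e-5
for x in (0.5, 1.0, 2.0):
    g_prime = (g(x + h) - g(x - h)) / (2 * h)   # numerical derivative
    assert abs(g_prime - (x * g(x) - 1)) < 1e-8  # g' = x g - 1
```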

If we could solve this using continued fractions, maybe we’d be done.

Unfortunately I never learned how to solve differential equations using continued fractions! So I soon gave up and looked around.

Hardy got a version of the second sub-puzzle in his first letter from Ramanujan. He wasn’t too impressed by this one: he thought it seemed vaguely familiar, and he later wrote:

… it is a formula of Laplace first proved properly by Jacobi.

David Roberts found the original references. It turns out Laplace solved this problem in Equation 8359 of Book 10 of his *Traité de Mécanique Céleste*. In fact he was in the midst of estimating the refraction caused by the Earth’s atmosphere. It’s nice to see this stuff showing up in a practical context! Unfortunately Laplace does the calculation instantaneously, with scarcely a word of explanation. So I turned to Jacobi:

- C. G. J. Jacobi, De fractione continua, in quam integrale $\int_x^\infty e^{-x x} d x$ evolvere licet,
*Journal für die reine und angewandte Mathematik* **12** (1834), 346–347.

Despite being in Latin and full of algebra mistakes, I thought this was one of the most exciting papers I’ve read in quite a while. For one thing, it’s just 2 pages long.

I explained how this method works… but then Leo Stein figured out how to simplify it *a lot*. So let me give his simplified method.

We want to write

$g(x) = e^{x^2/2} \int_x^\infty e^{-t^2/2} \, d t$

as a continued fraction. We differentiate $g$ and easily note that

$g'(x) = x g(x) - 1$

But next we do something sneaky: we *keep on* differentiating $g$, as if we were trying to work out its Taylor series. Let’s see what happens:

$\begin{array}{ccl} g''(x) &=& x g'(x) + g(x) \\ \\ g\!'''(x) &=& x g''(x) + 2 g'(x) \\ \\ g^{(4)}(x) &=& x g\!'''(x) + 3 g''(x) \end{array}$

We notice a recurrence relation, which is easy to prove inductively:

$g^{(n+2)} = x g^{(n+1)} + (n+1) g^{(n)} \qquad \text{for} \; n \ge 0$
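The first couple of instances of this recurrence can be spot-checked numerically, using finite differences for $g''$ and $g'''$ while taking $g' = xg - 1$ as exact (the helper names below are mine, and the tolerances are loose because higher finite differences are noisy):

```python
import math

def g(x):
    # g(x) = e^{x^2/2} * sqrt(pi/2) * erfc(x / sqrt(2))
    return math.exp(x * x / 2) * math.sqrt(math.pi / 2) * math.erfc(x / math.sqrt(2))

def gp(x):
    return x * g(x) - 1          # g'  = x g - 1

def gpp(x):
    return x * gp(x) + g(x)      # g'' = x g' + g   (the n = 0 case)

x, h = 1.0, 1e-3

# check g'' = x g' + g against a second central difference
g2_numeric = (g(x + h) - 2 * g(x) + g(x - h)) / (h * h)
assert abs(g2_numeric - gpp(x)) < 1e-5

# check g''' = x g'' + 2 g' against a third central difference
g3_numeric = (g(x + 2*h) - 2*g(x + h) + 2*g(x - h) - g(x - 2*h)) / (2 * h**3)
assert abs(g3_numeric - (x * gpp(x) + 2 * gp(x))) < 1e-4
```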

Basically, the plan is now to write the original function $g$ in terms of $g'$ and $g''$, and then use this equation to rewrite those in terms of $g''$ and $g'''$, and so on *ad infinitum*, getting a continued fraction for $g$.

It’s like solution by indefinite postponement!

But to get a continued fraction, let’s take our recurrence relation and divide it by $g^{(n+1)}$. We get:

$\frac{g^{(n+2)}}{g^{(n+1)}} = x + (n+1) \frac{g^{(n)}}{g^{(n+1)}} \qquad \text{for} \; n \ge 0$

This looks simpler in terms of the ratios

$r_n = \frac{g^{(n+1)}}{g^{(n)}}$

Indeed, it becomes this:

$r_{n+1} = x + \frac{n+1}{r_n}$

or solving for $r_n$, this:

$r_n = \frac{n+1}{-x + r_{n+1}} \qquad \text{for} \; n \ge 0 \qquad \qquad \spadesuit$

In this notation, our original equation $g' = x g - 1$ gets a similar look:

$g = \frac{1}{x - r_0}$

Starting from here, and repeatedly using $\spadesuit$, we get

$\begin{array}{ccl} g &=& \frac{1}{x - r_0} \\ \\ &=& \frac{1}{x - \frac{1}{-x + r_1}} \\ \\ &=& \frac{1}{x - \frac{1}{-x + \frac{2}{-x + r_2}}} \\ \\ &=& \frac{1}{x - \frac{1}{-x + \frac{2}{-x + \frac{3}{-x + r_3}}}} \end{array}$

and so on. If we’re allowed to go on forever, we get

$g = \frac{1}{x - \frac{1}{-x + \frac{2}{-x + \frac{3}{-x + \frac{4}{-x + \ddots}}}}}$

A bit of algebra — repeatedly using $\frac{1}{-x + u} = -\frac{1}{x - u}$ to push the sign changes further down the fraction, where they cancel in pairs — gives

$g = \frac{1}{x + \frac{1}{x + \frac{2}{x + \frac{3}{x + \frac{4}{x + \ddots}}}}}$

or in other words:

$e^{x^2/2} \int_x^\infty e^{-t^2/2} \, d t = \frac{1}{x + \frac{1}{x + \frac{2}{x + \frac{3}{x + \frac{4}{x + \ddots}}}}}$
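Nothing here replaces a proper convergence proof, but we can sketch a numerical check of this final identity, truncating the continued fraction at a finite depth and comparing against $g$ computed via $\operatorname{erfc}$:

```python
import math

def g(x):
    # e^{x^2/2} * integral_x^infinity e^{-t^2/2} dt, via erfc
    return math.exp(x * x / 2) * math.sqrt(math.pi / 2) * math.erfc(x / math.sqrt(2))

def g_cf(x, depth=400):
    # 1/(x + 1/(x + 2/(x + 3/(x + ...)))), evaluated bottom-up
    t = 0.0
    for k in range(depth, 0, -1):
        t = k / (x + t)
    return 1.0 / (x + t)

for x in (1.0, 1.5, 2.0):
    assert abs(g_cf(x) - g(x)) < 1e-8
```

Empirically the truncations converge faster for larger $x$, which fits the picture of this fraction as an asymptotic-style expansion of the Gaussian tail.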

And so we’re done!

There seems to be something ‘coalgebraic’ or ‘coinductive’ going on here: underneath all the trickery, we are solving for the function at left by writing it in terms of its derivative, and writing that in terms of *its* derivative, and going on forever… until we get the answer! There’s a deep connection between continued fractions and coalgebra, so this would be nice to explore further.

Of course we need to check that the limiting procedure is valid. But since we obtained the infinite continued fraction from a sequence of fractions that are all *equal*, that shouldn’t be too hard.

I’m left with a sense of fascination and a deep unease. I need to learn more about solving differential equations using continued fractions to feel that I really understand these tricks. This book, down-to-earth and with a sense of humor too, seems like a good place to start:

- Lisa Lorentzen and Haakon Waadeland,
*Continued Fractions with Applications*, North Holland, 1992.

By the way, I seem to have found some errors in Jacobi’s paper *De fractione continua, in quam integrale $\int_x^\infty e^{-x x} d x$ evolvere licet*. It’s a bit late to point them out, but here they are:

1) In Equation (7), $y_n = y_{n+1} + (n+1)y_{n+2}$ should be $y_n = y_{n+1} + q(n+1)y_{n+2}$.

2) Also in Equation (7), $1 = y_1 + q y^2$ should be $1 = y_1 + q y_2$. In fact he never uses a superscript 2 to mean squaring — you can see this from his title.

3) At the top of the second page, right after Equation (7)

$\frac{1}{y_1} = q + \frac{y_2}{y_1}$

should be

$\frac{1}{y_1} = 1 + \frac{q y_2}{y_1}$

This mistake continues in all the equations on that line.

4) As a result of the previous mistake, his Equation (8):

$\frac{1}{y_1} = q + \frac{1}{1 + \frac{2q}{1 + \frac{3q}{\ddots}}}$

should be

$\frac{1}{y_1} = 1 + \frac{1}{1 + \frac{q}{1 + \frac{2q}{1 + \frac{3q}{\ddots}}}}$

which is more beautiful as well as being correct. But when he gets to the final result, which he states at the start of the paper, this mistake mysteriously evaporates!

## Re: Chasing the Tail of the Gaussian (Part 2)

Ah. That’s why there’s an “Extra” $x$. Or a missing $\frac{-1}{n+1}$. You’re indexing by derivatives of $g$, while Jacobi is indexing by powers of $x$.

Not that thus effervesces the mystery, for it doesn’t…

Let me channel my inner Pooh. The primary reason for being multiplied by $\frac{a^n}{n!}$ that I know of is for Summing a Taylor Series.

$\sum t^n y_n = x \sum \frac{(-t x)^n}{n!} g^{(n)}(x) = x g((1-t)x)$

And I don’t know that that helps you much either, but it’s the naturalest thing I could think of.