

September 3, 2020

Chasing the Tail of the Gaussian (Part 2)

Posted by John Baez

Last time we began working on a puzzle by Ramanujan. This time we’ll solve it — with some help from a paper Jacobi wrote in Latin, and also from my friend Leo Stein on Twitter!

In 1914 Ramanujan posed this problem to the readers of The Journal of the Indian Mathematical Society:

Prove that

$$\left(\frac{1}{1} + \frac{1}{1 \cdot 3} + \frac{1}{1 \cdot 3 \cdot 5} + \cdots\right) \; + \; \frac{1}{1 + \frac{1}{1 + \frac{2}{1 + \frac{3}{1 + \frac{4}{1 + \frac{5}{1 + \ddots}}}}}} = \sqrt{\frac{\pi e}{2}}$$

We can reduce this to two separate sub-puzzles. First, prove this:

$$x + \frac{x^3}{1 \cdot 3} + \frac{x^5}{1 \cdot 3 \cdot 5} + \cdots = e^{x^2/2} \int_0^x e^{-t^2/2} \, d t$$

Second, prove this:

$$\frac{1}{x + \frac{1}{x + \frac{2}{x + \frac{3}{x + \frac{4}{x + \frac{5}{x + \ddots}}}}}} = e^{x^2/2} \int_x^\infty e^{-t^2/2} \, d t$$

You could say these concern the ‘body’ and the ‘tail’ of the Gaussian, respectively. Setting $x = 1$ and adding these two equations, we get the answer to Ramanujan’s problem.
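Before diving in, it is reassuring to check Ramanujan's identity numerically. Here is a quick sketch in Python; the truncation depths are arbitrary choices of mine, not part of the problem:

```python
import math

def body_series(terms=50):
    """Sum of 1/1 + 1/(1*3) + 1/(1*3*5) + ..., truncated after `terms` terms."""
    total, denom = 0.0, 1.0
    for n in range(terms):
        denom *= 2 * n + 1          # builds 1, 1*3, 1*3*5, ...
        total += 1.0 / denom
    return total

def tail_cf(depth=300):
    """The continued fraction 1/(1 + 1/(1 + 2/(1 + 3/(...)))), evaluated bottom-up."""
    d = 1.0                          # cut the fraction off at this depth
    for k in range(depth, 0, -1):
        d = 1.0 + k / d
    return 1.0 / d

total = body_series() + tail_cf()
# total should be close to sqrt(pi*e/2) ≈ 2.06637
```

Both pieces converge comfortably at these depths, and the sum matches $\sqrt{\pi e/2}$ to many digits.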

I did the first sub-puzzle last time. Now we’ll do the second one! But it’s good to compare them first.

If we were handed the two sub-puzzles separately, a quick way to do the first would be to notice that

$$f(x) = e^{x^2/2} \int_0^x e^{-t^2/2} \, d t$$

obeys this differential equation:

$$f'(x) = x f(x) + 1$$

We can solve this equation using power series and get

$$f(x) = x + \frac{x^3}{1 \cdot 3} + \frac{x^5}{1 \cdot 3 \cdot 5} + \cdots$$

I did this last time, except working in the other direction, starting from the power series.
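This identity is also easy to confirm numerically, since $\int_0^x e^{-t^2/2}\,dt = \sqrt{\pi/2}\,\operatorname{erf}(x/\sqrt{2})$. A minimal sketch using the standard-library error function (the truncation length is an arbitrary choice):

```python
import math

def f_series(x, terms=60):
    """x + x^3/(1*3) + x^5/(1*3*5) + ..., truncated after `terms` terms."""
    total, denom, xpow = 0.0, 1.0, x
    for n in range(terms):
        denom *= 2 * n + 1           # builds 1, 1*3, 1*3*5, ...
        total += xpow / denom
        xpow *= x * x
    return total

def f_closed(x):
    """e^{x^2/2} * integral_0^x e^{-t^2/2} dt, via the error function."""
    return math.exp(x * x / 2) * math.sqrt(math.pi / 2) * math.erf(x / math.sqrt(2))
```

The two sides agree to near machine precision for moderate $x$, since the series converges factorially fast.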

To tackle the second sub-puzzle, we could note that

$$g(x) = e^{x^2/2} \int_x^\infty e^{-t^2/2} \, d t$$

obeys a very similar differential equation:

$$g'(x) = x g(x) - 1$$

If we could solve this using continued fractions, maybe we’d be done.

Unfortunately I never learned how to solve differential equations using continued fractions! So I soon gave up and looked around.

Hardy got a version of the second sub-puzzle in his first letter from Ramanujan. He wasn’t too impressed by this one: he thought it seemed vaguely familiar, and he later wrote:

… it is a formula of Laplace first proved properly by Jacobi.

David Roberts found the original references. It turns out Laplace solved this problem in Equation 8359 of Book 10 of his Traité de Mécanique Céleste. In fact he was in the midst of estimating the refraction caused by the Earth’s atmosphere. It’s nice to see this stuff showing up in a practical context! Unfortunately Laplace does the calculation instantaneously, with scarcely a word of explanation. So I turned to Jacobi:

Despite being in Latin and full of algebra mistakes, I thought this was one of the most exciting papers I’ve read in quite a while. For one thing, it’s just 2 pages long.

I explained how this method works… but then Leo Stein figured out how to simplify it a lot. So let me give his simplified method.

We want to write

$$g(x) = e^{x^2/2} \int_x^\infty e^{-t^2/2} \, d t$$

as a continued fraction. We differentiate $g$ and easily note that

$$g'(x) = x g(x) - 1$$

But next we do something sneaky: we keep on differentiating $g$, as if we were trying to work out its Taylor series. Let’s see what happens:

$$\begin{array}{ccl} g''(x) &=& x g'(x) + g(x) \\ \\ g\!'''(x) &=& x g''(x) + 2 g'(x) \\ \\ g^{(4)}(x) &=& x g\!'''(x) + 3 g''(x) \end{array}$$

We notice a recurrence relation, which is easy to prove inductively:

$$g^{(n+2)} = x g^{(n+1)} + (n+1) g^{(n)} \qquad \text{for} \; n \ge 0$$
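The recurrence can be verified exactly with a tiny bit of computer algebra. Since $g' = x g - 1$, every derivative has the form $g^{(n)} = A_n(x)\, g + B_n(x)$ for polynomials $A_n, B_n$, so the recurrence becomes a pair of polynomial identities. A sketch (the coefficient-list encoding is my own, not anything from the post):

```python
# A polynomial in x is a list of coefficients: p[i] is the x^i coefficient.
def add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0) for i in range(n)]

def mulx(p):                       # multiply by x
    return [0] + p

def deriv(p):                      # d/dx
    return [i * p[i] for i in range(1, len(p))] or [0]

def scale(p, c):
    return [c * a for a in p]

def trim(p):                       # drop trailing zeros before comparing
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

# g^{(n)} = A[n](x) g + B[n](x); differentiating and using g' = x g - 1 gives
# A[n+1] = A[n]' + x A[n]   and   B[n+1] = B[n]' - A[n].
A, B = [[1]], [[0]]
for n in range(10):
    A.append(add(deriv(A[n]), mulx(A[n])))
    B.append(add(deriv(B[n]), scale(A[n], -1)))

# Check g^{(n+2)} = x g^{(n+1)} + (n+1) g^{(n)} as identities on the (A, B) pairs.
ok = all(
    trim(A[n + 2]) == trim(add(mulx(A[n + 1]), scale(A[n], n + 1)))
    and trim(B[n + 2]) == trim(add(mulx(B[n + 1]), scale(B[n], n + 1)))
    for n in range(8)
)
```

Here `ok` comes out `True`, confirming the recurrence for the first several derivatives.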

Basically, the plan is now to write the original function $g$ in terms of $g'$ and $g''$, and then use this equation to rewrite those in terms of $g''$ and $g'''$, and so on ad infinitum, getting a continued fraction for $g$.

It’s like solution by indefinite postponement!

But to get a continued fraction, let’s take our recurrence relation and divide it by $g^{(n+1)}$. We get:

$$\frac{g^{(n+2)}}{g^{(n+1)}} = x + (n+1) \frac{g^{(n)}}{g^{(n+1)}} \qquad \text{for} \; n \ge 0$$

This looks simpler in terms of the ratios

$$r_n = \frac{g^{(n+1)}}{g^{(n)}}$$

Indeed, it becomes this:

$$r_{n+1} = x + \frac{(n+1)}{r_n}$$

or solving for $r_n$, this:

$$r_n = \frac{n+1}{-x + r_{n+1}} \qquad \text{for} \; n \ge 0 \qquad \qquad \spadesuit$$

In this notation, our original equation $g' = x g - 1$ gets a similar look:

$$g = \frac{1}{x - r_0}$$

Starting from here, and repeatedly using $\spadesuit$, we get

$$\begin{array}{ccl} g &=& \frac{1}{x - r_0} \\ \\ &=& \frac{1}{x - \frac{1}{-x + r_1}} \\ \\ &=& \frac{1}{x - \frac{1}{-x + \frac{2}{-x + r_2}}} \\ \\ &=& \frac{1}{x - \frac{1}{-x + \frac{2}{-x + \frac{3}{-x + r_3}}}} \end{array}$$

and so on. If we’re allowed to go on forever, we get

$$g = \frac{1}{x - \frac{1}{-x + \frac{2}{-x + \frac{3}{-x + \frac{4}{-x + \ddots}}}}}$$

A bit of algebra gives

$$g = \frac{1}{x + \frac{1}{x + \frac{2}{x + \frac{3}{x + \frac{4}{x + \ddots}}}}}$$

or in other words:

$$e^{x^2/2} \int_x^\infty e^{-t^2/2} \, d t = \frac{1}{x + \frac{1}{x + \frac{2}{x + \frac{3}{x + \frac{4}{x + \ddots}}}}}$$

And so we’re done!
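To see the result in action, here is a quick numerical check, using $\int_x^\infty e^{-t^2/2}\, dt = \sqrt{\pi/2}\,\operatorname{erfc}(x/\sqrt{2})$ from the standard library. The truncation depth is an arbitrary choice:

```python
import math

def g_cf(x, depth=300):
    """1/(x + 1/(x + 2/(x + 3/(...)))), truncated and evaluated bottom-up."""
    d = x                           # innermost truncation: drop the tail
    for k in range(depth, 0, -1):
        d = x + k / d
    return 1.0 / d

def g_closed(x):
    """e^{x^2/2} * integral_x^infty e^{-t^2/2} dt, via the complementary error function."""
    return math.exp(x * x / 2) * math.sqrt(math.pi / 2) * math.erfc(x / math.sqrt(2))
```

At $x = 1$ the two sides agree to better than $10^{-6}$ at this depth, and the agreement improves rapidly as $x$ grows.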

There seems to be something ‘coalgebraic’ or ‘coinductive’ going on here: underneath all the trickery, we are solving for the function at left by writing it in terms of its derivative, and writing that in terms of its derivative, and going on forever… until we get the answer! There’s a deep connection between continued fractions and coalgebra, so this would be nice to explore further.

Of course we need to check that the limiting procedure is valid. But since we obtained the infinite continued fraction from a sequence of fractions that are all equal, that shouldn’t be too hard.

I’m left with a sense of fascination and a deep unease. I need to learn more about solving differential equations using continued fractions, to feel I better understand these tricks. This book seems really good to start with: down-to-earth, with a sense of humor too:

  • Lisa Lorentzen and Haakon Waadeland, Continued Fractions with Applications, North Holland, 1992.

By the way, I seem to have found some errors in Jacobi’s paper De fractione continua, in quam integrale $\int_x^\infty e^{-x x} \, d x$ evolvere licet. It’s a bit late to point them out, but here they are:

1) In Equation (7), $y_n = y_{n+1} + (n+1)y_{n+2}$ should be $y_n = y_{n+1} + q(n+1)y_{n+2}$.

2) Also in Equation (7), $1 = y_1 + q y^2$ should be $1 = y_1 + q y_2$. In fact he never uses a superscript 2 to mean squaring — you can see this from his title.

3) At the top of the second page, right after Equation (7)

$$\frac{1}{y_1} = q + \frac{y_2}{y_1}$$

should be

$$\frac{1}{y_1} = 1 + \frac{q y_2}{y_1}$$

This mistake continues in all the equations on that line.

4) As a result of the previous mistake, his Equation (8):

$$\frac{1}{y_1} = q + \frac{1}{1 + \frac{2q}{1 + \frac{3q}{\ddots}}}$$

should be

$$\frac{1}{y_1} = 1 + \frac{1}{1 + \frac{q}{1 + \frac{2q}{1 + \frac{3q}{\ddots}}}}$$

which is more beautiful as well as being correct. But when he gets to the final result, which he states at the start of the paper, this mistake mysteriously evaporates!

Posted at September 3, 2020 6:57 PM UTC


Re: Chasing the Tail of the Gaussian (Part 2)

his $y_{n+1}$ is my $y_n$

Ah. That’s why there’s an “Extra” $x$. Or a missing $\frac{-1}{n+1}$. You’re indexing by derivatives of $g$, while Jacobi is indexing by powers of $x$.

Not that thus effervesces the mystery, for it doesn’t…


Let me channel my inner Pooh. The primary reason for being multiplied by $\frac{a^n}{n!}$ that I know of is for Summing a Taylor Series.

$$\sum t^n y_n = x \sum \frac{(-t x)^n}{n!} g^{(n)}(x) = x g((1-t)x)$$

And I don’t know that that helps you much either, but it’s the naturalest thing I could think of.

Posted by: Jesse McKeown on September 3, 2020 11:06 PM | Permalink | Reply to this

Re: Chasing the Tail of the Gaussian (Part 2)

The Taylor series idea is very plausible, and there must be something to it. But note that Jacobi is not dealing with

$$\frac{t^n}{n!} g^{(n)}(x)$$

as would be reasonable for computing

$$g(x+t) = \sum_{n = 0}^\infty \frac{t^n}{n!} g^{(n)}(x)$$

He’s dealing with something more like

$$\frac{x^n}{n!} g^{(n)}(x)$$

This would show up if you were computing

$$g(2x) = \sum_{n = 0}^\infty \frac{x^n}{n!} g^{(n)}(x)$$

In fact to be honest he’s actually dealing with something more like

$$\frac{(-x)^n}{n!} g^{(n)}(x)$$

This would show up if you were computing

$$g(0) = \sum_{n = 0}^\infty \frac{(-x)^n}{n!} g^{(n)}(x)$$

In fact to be really honest he’s dealing with

$$(-1)^n 2 x^{n+1} g^{(n)}(x)$$

The extra power of $x$ is what really bugs me. This would show up if you were computing

$$2x \, g(0) = \sum_{n = 0}^\infty \frac{(-1)^n 2 x^{n+1}}{n!} g^{(n)}(x)$$

Posted by: John Baez on September 3, 2020 11:58 PM | Permalink | Reply to this

Re: Chasing the Tail of the Gaussian (Part 2)

“Let me channel my inner Pooh.”

Referencing perhaps Benjamin Hoff’s The Tao of Pooh. (He wrote a sequel, The Te of Piglet, which for deep and deeply personal reasons is a more attractive premise for me, but which is perhaps not as fine a book as its predecessor.)

On a slightly different line, it won’t have escaped John’s notice that $e^{x^2/2}$ is the exponential generating function for the sequence $a_n$ where $a_n$ counts the number of ways of partitioning a set of $n$ elements into $n/2$ 2-element sets, or the number of fixed-point free involutions on an $n$-element set, or the number of different simultaneous handshake configurations at a party of $n$ people. I say this particularly in view of the Baez-Dolan paper From Finite Sets to Feynman Diagrams. I don’t know the quantum physics where I can fake being fluent, but [mutter mutter] something about free non-interacting particles.

Posted by: Todd Trimble on September 4, 2020 1:23 AM | Permalink | Reply to this

Re: Chasing the Tail of the Gaussian (Part 2)

All the stuff about species and Feynman diagrams in quantum field theory is deeply connected to integrals of polynomials multiplied by a Gaussian, so I was delighted when proving

$$\left(\frac{1}{1} + \frac{1}{1 \cdot 3} + \frac{1}{1 \cdot 3 \cdot 5} + \cdots\right) \; + \; \frac{1}{1 + \frac{1}{1 + \frac{2}{1 + \frac{3}{1 + \frac{4}{1 + \frac{5}{1 + \ddots}}}}}} = \sqrt{\frac{\pi e}{2}}$$

turned out to be all about doing integrals of characteristic functions multiplied by a Gaussian!

The infinite sum really should have a species interpretation, since it comes from this formula:

$$x + \frac{x^3}{1 \cdot 3} + \frac{x^5}{1 \cdot 3 \cdot 5} + \cdots = e^{x^2/2} \int_0^x e^{-t^2/2} \, d t$$

The continued fraction is not something I associate with quantum field theory or species, but there are systematic ways to turn power series into continued fractions and vice versa, so there could be something interesting to discover here.

Posted by: John Baez on September 4, 2020 6:56 AM | Permalink | Reply to this

Re: Chasing the Tail of the Gaussian (Part 2)

John, you say

$$\frac{1}{y_1} = 1 + \frac{1}{1 + \frac{q}{1 + \frac{2q}{1 + \frac{3q}{\ddots}}}} \, .$$

Given David’s record of connecting bits of things that have appeared at the Café, I’m a little surprised that he hasn’t already pointed out that a similar looking continued fraction was in a post of mine from three years ago: Lattice Paths and Continued Fractions II.

I derived a continued fraction expansion of the generating function of the odd double factorials, where the $m$th double factorial $m!!$ is $m (m-2) (m-4) \dots$.

$$\sum_{n=0}^\infty (2n -1)!!\, t^{2n} = \frac{1}{1- \frac{t^2}{1- \frac{2 t^2}{1- \frac{3 t^2}{1-\dots}}}}$$

I said that I thought this was originally due to Gauss.
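This expansion can be checked as an identity of formal power series (a numerical check won't do, since the series has zero radius of convergence). Here is a sketch using exact rational arithmetic, writing $u = t^2$ and truncating mod $u^8$; the truncation order is an arbitrary choice of mine:

```python
from fractions import Fraction

M = 8  # work with power series in u = t^2, truncated mod u^M

def inv(a):
    """Multiplicative inverse of a power series with a[0] != 0, mod u^M."""
    b = [Fraction(0)] * M
    b[0] = 1 / a[0]
    for n in range(1, M):
        b[n] = -sum(a[k] * b[n - k] for k in range(1, n + 1)) / a[0]
    return b

# Evaluate 1/(1 - u/(1 - 2u/(1 - 3u/(...)))) bottom-up; depth M suffices mod u^M,
# since the k-th level only affects coefficients from u^k on.
d = [Fraction(1)] + [Fraction(0)] * (M - 1)
for k in range(M, 0, -1):
    r = inv(d)                                    # 1/d as a truncated series
    d = [Fraction(1)] + [-k * r[i] for i in range(M - 1)]   # d := 1 - k*u/d
series = inv(d)

# (2n-1)!! with (-1)!! = 1: the expected coefficients 1, 1, 3, 15, 105, ...
dfact = [1]
for n in range(1, M):
    dfact.append(dfact[-1] * (2 * n - 1))
```

The computed `series` matches the double factorials coefficient by coefficient.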

Posted by: Simon Willerton on September 4, 2020 1:49 PM | Permalink | Reply to this

Re: Chasing the Tail of the Gaussian (Part 2)

In contrast to your

$$\left(\frac{1}{1} + \frac{1}{1 \cdot 3} + \frac{1}{1 \cdot 3 \cdot 5} + \cdots\right) \; + \; \frac{1}{1 + \frac{1}{1 + \frac{2}{1 + \frac{3}{1 + \frac{4}{1 + \frac{5}{1 + \ddots}}}}}} = \sqrt{\frac{\pi e}{2}}$$

in this case, taking liberties, by taking t=1t=1, we get

$$\left({1} + {1 \cdot 3} + {1 \cdot 3 \cdot 5} + \cdots\right) \; - \; \frac{1}{1 - \frac{1}{1 - \frac{2}{1 - \frac{3}{1 - \frac{4}{1 - \frac{5}{1 - \ddots}}}}}} = -1,$$

where the right hand side comes from the $n=0$ term in the sum, with $(-1)!! = 1$.

Posted by: Simon Willerton on September 4, 2020 3:11 PM | Permalink | Reply to this

Re: Chasing the Tail of the Gaussian (Part 2)

Must have missed that one. Bad timing, a late September posting, with the academic year beginning.

Seeing you write about Motzkin paths there reminds me of some much earlier conversations about species, alluded to here. Now, this feels like a very long time ago.

Posted by: David Corfield on September 4, 2020 3:25 PM | Permalink | Reply to this

Re: Chasing the Tail of the Gaussian (Part 2)

Hi, Simon!

Given David’s record of connecting bits of things that have appeared at the Café, I’m a little surprised that he hasn’t already pointed out that a similar looking continued fraction was in a post of mine from three years ago.

So you’re surprised that someone else didn’t point out a three-year-old post of yours before you did? I think you’re getting spoiled!

It would be interesting if Gauss knew

$$x + \frac{x^3}{1 \cdot 3} + \frac{x^5}{1 \cdot 3 \cdot 5} + \cdots = \frac{1}{1- \frac{t^2}{1- \frac{2 t^2}{1- \frac{3 t^2}{1-\ddots}}}}$$

It’s really easy to show

$$x + \frac{x^3}{1 \cdot 3} + \frac{x^5}{1 \cdot 3 \cdot 5} + \cdots = e^{x^2/2} \int_0^x e^{-t^2/2} \, d t$$

but I’m pretty sure the same style of argument that Jacobi used to show

$$\frac{1}{x + \frac{1}{x + \frac{2}{x + \frac{3}{x + \frac{4}{x + \frac{5}{x + \ddots}}}}}} = e^{x^2/2} \int_x^\infty e^{-t^2/2} \, d t$$

can be used to give a continued fraction expansion of

$$e^{x^2/2} \int_0^x e^{-t^2/2} \, d t$$

because they obey almost the same differential equation (as noted in my post here), and the differential equation is what leads to the continued fraction expansion. So I bet the argument in my post can be adapted to show

$$x + \frac{x^3}{1 \cdot 3} + \frac{x^5}{1 \cdot 3 \cdot 5} + \cdots = \frac{1}{1- \frac{t^2}{1- \frac{2 t^2}{1- \frac{3 t^2}{1-\ddots}}}}$$

Conversely, Gauss should have easily been able to show

$$\frac{1}{x + \frac{1}{x + \frac{2}{x + \frac{3}{x + \frac{4}{x + \frac{5}{x + \ddots}}}}}} = e^{x^2/2} \int_x^\infty e^{-t^2/2} \, d t$$

before Jacobi did it in 1834. Maybe he just didn’t get around to it, or didn’t bother publishing it. Gauss’ main work on continued fractions seems to date to 1813.

I will have to reread your post about the connection to lattice paths! I was probably distracted by ‘applied category theory’ when it came out. My interest is heightened now that I’m trying to understand continued fractions a bit better.

I’d love to find a general theory relating continued fractions to combinatorics, the way species relate power series to combinatorics. It looks like your post on Flajolet’s Fundamental Lemma points the way! But at the very least I’d like to overcome my massive ignorance of them.

The first step was to learn the \ddots command in LaTeX. It makes the difference between this:

$$\frac{1}{1- \frac{t^2}{1- \frac{2 t^2}{1- \frac{3 t^2}{1-\cdots}}}}$$

and something much cooler:

$$\frac{1}{1- \frac{t^2}{1- \frac{2 t^2}{1- \frac{3 t^2}{1-\ddots}}}}$$

Posted by: John Baez on September 4, 2020 7:06 PM | Permalink | Reply to this

Re: Chasing the Tail of the Gaussian (Part 2)

Did you really mean to write

$$x + \frac{x^3}{1 \cdot 3} + \frac{x^5}{1 \cdot 3 \cdot 5} + \cdots = \frac{1}{1- \frac{t^2}{1- \frac{2 t^2}{1- \frac{3 t^2}{1-\ddots}}}} \,?$$

The left hand side should be

$$1 + (1)t^2 + (1\cdot 3) t^4 + (1\cdot 3\cdot 5) t^6 + \dots$$

unless I’m missing something (which is quite possible). So it ought to be even powers of $t$ rather than odd powers of $x$, with the double factorials in the numerator, not the denominator.

Posted by: Simon Willerton on September 4, 2020 9:16 PM | Permalink | Reply to this

Re: Chasing the Tail of the Gaussian (Part 2)

Oh, I probably screwed it up. I just meant that you can probably use Jacobi’s trickery to get a continued fraction expansion for

$$x + \frac{x^3}{1 \cdot 3} + \frac{x^5}{1 \cdot 3 \cdot 5} + \cdots = e^{x^2/2} \int_0^x e^{-t^2/2} \, d t$$

just as easily as Jacobi got one for

$$e^{x^2/2} \int_x^\infty e^{-t^2/2} \, d t = \frac{1}{x + \frac{1}{x + \frac{2}{x + \frac{3}{x + \frac{4}{x + \frac{5}{x + \ddots}}}}}}$$

I should not have tried to cut-and-paste a guess of the continued fraction expansion you actually get!

Posted by: John Baez on September 5, 2020 2:28 AM | Permalink | Reply to this

Re: Chasing the Tail of the Gaussian (Part 2)

Over on Twitter Leo Stein found a vastly simplified way to show

$$\frac{1}{x + \frac{1}{x + \frac{2}{x + \frac{3}{x + \frac{4}{x + \frac{5}{x + \ddots}}}}}} = e^{x^2/2} \int_x^\infty e^{-t^2/2} \, d t$$

so I’ve rewritten the whole blog article to replace Jacobi’s more clunky calculation with this one.

This answers my questions about why Jacobi did exactly what he did. The answer seems to be: because he didn’t find the really nice argument.

Posted by: John Baez on September 5, 2020 2:31 AM | Permalink | Reply to this

Re: Chasing the Tail of the Gaussian (Part 2)

Take a look at Proposition 14 on p. 44 of “Combinatorial models of creation-annihilation” by Blasiak and Flajolet, giving Simon Willerton’s identity and relating $\sum_{n \ge 0} (2n-1)!! \; z^{2n} = \frac{1}{1 - z \; (x+D)} \; 1 \; |_{x=0}$ to your continued fraction. Note that $R = (x + D)$ is the raising/creation op for a family of Hermite polynomials $H_n(x)$ with $e^{h.t} = e^{t^2/2}$, the exponential generating function of the moments; that is, $H_n(x) = R^n \; 1 = (x + D)^n \; 1 = (x + h.)^n$. For more on the double factorials, see OEIS A001147 and A094638. They are simply related to the Catalan numbers, binary trees, random walks on graphs, and more. Viennot gave a talk on Hermite and other orthogonal polynomial ‘histories’ (lattice paths) in, aptly, Chennai in 2019: “Combinatorial theory of continued fractions and orthogonal polynomials” (slides here).
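The operator identity quoted here is easy to check by machine: apply $R = x + D$ repeatedly to the constant polynomial 1 and read off the value at $x = 0$; the resulting moments should be $(n-1)!!$ for even $n$ and 0 for odd $n$, matching the EGF $e^{t^2/2}$. A small sketch (the coefficient-list encoding is my own):

```python
def apply_R(p):
    """Apply x + d/dx to a polynomial given as a coefficient list (p[i] = x^i coeff)."""
    shifted = [0] + p                                   # multiply by x
    dp = [i * p[i] for i in range(1, len(p))] or [0]    # differentiate
    n = max(len(shifted), len(dp))
    return [(shifted[i] if i < len(shifted) else 0) +
            (dp[i] if i < len(dp) else 0) for i in range(n)]

# moments[n] = ((x+D)^n 1) evaluated at x = 0
p, moments = [1], []
for n in range(9):
    moments.append(p[0])
    p = apply_R(p)
# moments reads 1, 0, 1, 0, 3, 0, 15, 0, 105 — the perfect-matching counts
```

These are exactly the coefficients of $\sum (2n-1)!!\, z^{2n}$, as the proposition says.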

Posted by: Tom Copeland on March 19, 2021 1:23 AM | Permalink | Reply to this

Re: Chasing the Tail of the Gaussian (Part 2)

In the article I reference by Flajolet and an earlier one of his on varieties of trees, Cayley is mentioned only once without discussion of Cayley’s general tree rep of diff ops (nor the Kirkman-Cayley numbers, so important in combinatorics and analysis). I regard this as a serious omission of well-deserved credit.

Posted by: Tom Copeland on March 19, 2021 5:46 AM | Permalink | Reply to this

Re: Chasing the Tail of the Gaussian (Part 2)

“we need to check that the limiting procedure is valid”

It seems to me that there is quite a bit of subtlety hidden here, because the method as explained could just as easily be applied to the differential equation for f(x) and would then yield a continued fraction expansion for f(x); in fact it would yield the negative of the continued fraction of g(x), and that can’t be true. Somehow the value g(0) must enter the story.

Posted by: Axel Boldt on November 12, 2022 9:41 PM | Permalink | Reply to this

Re: Chasing the Tail of the Gaussian (Part 2)

Also note that the given continued fraction for g(x) doesn’t work when x=0 since 1/(1/(2/(3/(4/…)))) diverges.

So g(0) is probably not a good initial value to use. But my general point stands: the solution to the differential equation for g(x) depends on an initial value, say g(1), but the given derivation of the continued fraction does not seem to depend on g(1).
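This is easy to see numerically: successive truncations of the continued fraction settle down at $x = 1$ but keep oscillating at $x = 0$. A quick sketch (the terminal value 1.0 standing in for the discarded tail is an arbitrary choice):

```python
def cf_trunc(x, depth):
    """Truncation of 1/(x + 1/(x + 2/(x + 3/(...)))) at the given depth."""
    d = 1.0                        # arbitrary stand-in for the discarded tail
    for k in range(depth, 0, -1):
        d = x + k / d
    return 1.0 / d

# At x = 1 successive truncations agree to many digits...
stable = abs(cf_trunc(1.0, 300) - cf_trunc(1.0, 301))
# ...but at x = 0 they keep jumping around (e.g. depths 4 and 5 differ by about 2).
jumpy = abs(cf_trunc(0.0, 4) - cf_trunc(0.0, 5))
```

So the continued fraction converges for positive $x$ but genuinely fails at $x = 0$, consistent with the point made above.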

Posted by: Axel Boldt on November 12, 2022 11:12 PM | Permalink | Reply to this
