## January 17, 2013

### Carleson’s Theorem

#### Posted by Tom Leinster

I’ve just started teaching an advanced undergraduate course on Fourier analysis — my first lecturing duty in my new job at Edinburgh.

What I hadn’t realized until I started preparing was the extraordinary history of false beliefs about the pointwise convergence of Fourier series. It started with Fourier himself around 1800, and was only fully resolved by Carleson in 1964.

The endlessly diverting index of Tom Körner’s book Fourier Analysis alludes to this:

Here’s the basic set-up. Let $\mathbf{T} = \mathbf{R}/\mathbf{Z}$ be the circle, and let $f\colon \mathbf{T} \to \mathbf{C}$ be an integrable function. The Fourier coefficients of $f$ are

$\hat{f}(k) = \int_\mathbf{T} f(x) e^{-2\pi i k x} \,dx$

($k \in \mathbf{Z}$), and for $n \geq 0$, the $n$th Fourier partial sum of $f$ is the function $S_n f\colon \mathbf{T} \to \mathbf{C}$ given by

$(S_n f)(x) = \sum_{k = -n}^n \hat{f}(k) e^{2\pi i k x}.$
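
Since these are just finite trigonometric sums, it’s easy to experiment numerically. Here’s a minimal sketch in Python with NumPy (the sawtooth example and the cutoffs are arbitrary illustrative choices), using the sawtooth $f(x) = x$ on $[0,1)$, whose coefficients $\hat{f}(0) = 1/2$ and $\hat{f}(k) = i/2\pi k$ ($k \neq 0$) come from integration by parts:

```python
import numpy as np

# Sawtooth f(x) = x on [0, 1).  Integration by parts gives the Fourier
# coefficients fhat(0) = 1/2 and fhat(k) = i / (2 pi k) for k != 0.
def fhat(k):
    return 0.5 if k == 0 else 1j / (2 * np.pi * k)

def partial_sum(n, x):
    """The n-th Fourier partial sum (S_n f)(x) = sum_{|k| <= n} fhat(k) e^{2 pi i k x}."""
    return sum(fhat(k) * np.exp(2j * np.pi * k * x) for k in range(-n, n + 1)).real

# At the interior point x = 1/4 the partial sums converge to f(1/4) = 1/4;
# the errors shrink roughly like 1/n.
errors = [abs(partial_sum(n, 0.25) - 0.25) for n in (10, 100, 1000)]
print(errors)
```

(At the jump $x = 0$ the partial sums instead sit at the average value $1/2$, since the sine terms all vanish there.)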

The question of pointwise convergence is:

For ‘nice’ functions $f$, does $(S_n f)(x)$ converge to $f(x)$ as $n \to \infty$ for all $x \in \mathbf{T}$?

And if the answer is no, does it at least work for most $x$? Or if not for most $x$, at least for some $x$?

Fourier apparently thought that $(S_n f)(x) \to f(x)$ held for every function $f$, although it is not so clear what Fourier himself would have understood by a ‘function’.

Cauchy claimed a proof of pointwise convergence for continuous functions. It was wrong. Dirichlet didn’t claim to have proved it, but he said he would. He didn’t. However, he did show:

Theorem (Dirichlet, 1829)  Let $f\colon \mathbf{T} \to \mathbf{C}$ be a continuously differentiable function. Then $(S_n f)(x) \to f(x)$ as $n \to \infty$ for all $x \in \mathbf{T}$.

In other words, pointwise convergence holds for continuously differentiable functions.

It was surely just a matter of time until someone managed to extend the proof to all continuous functions. Riemann believed this could be done, Weierstrass believed it, Dedekind believed it, Poisson believed it. So, in Körner’s words, it ‘came as a considerable surprise’ when du Bois–Reymond proved:

Theorem (du Bois–Reymond, 1876)  There is a continuous function $f\colon \mathbf{T} \to \mathbf{C}$ such that for some $x \in \mathbf{T}$, the sequence $((S_n f)(x))$ fails to converge.

Even worse (though I actually don’t know whether this was proved at the time):

Theorem  Let $E$ be a countable subset of $\mathbf{T}$. Then there is a continuous function $f\colon \mathbf{T} \to \mathbf{C}$ such that for all $x \in E$, the sequence $((S_n f)(x))$ fails to converge.

The pendulum began to swing. Maybe there’s some continuous $f$ such that $((S_n f)(x))$ doesn’t converge for any $x \in \mathbf{T}$. This, apparently, became the general belief, solidified by a discovery of Kolmogorov:

Theorem (Kolmogorov, 1926)  There is a Lebesgue-integrable function $f\colon \mathbf{T} \to \mathbf{C}$ such that for all $x \in \mathbf{T}$, the sequence $((S_n f)(x))$ fails to converge.

It was surely just a matter of time until someone managed to adapt the counterexample to give a continuous $f$ whose Fourier series converged nowhere.

At best, the situation was unclear, and this persisted until relatively recently. I have on my shelf a 1957 undergraduate textbook called Mathematical Analysis by Tom Apostol. In the part on Fourier series, he states that it’s still unknown whether the Fourier series of a continuous function has to converge at even one point. This isn’t ancient history; Apostol’s book was even on my own undergraduate recommended reading list (though I can’t say I ever read it).

The turning point was Carleson’s theorem of 1964. His result implies:

If $f\colon \mathbf{T} \to \mathbf{C}$ is continuous then $(S_n f)(x) \to f(x)$ for at least one $x \in \mathbf{T}$.

In fact, it implies something stronger:

If $f\colon \mathbf{T} \to \mathbf{C}$ is continuous then $(S_n f)(x) \to f(x)$ for almost all $x \in \mathbf{T}$.

In fact, it implies something stronger still:

If $f\colon \mathbf{T} \to \mathbf{C}$ is Riemann integrable then $(S_n f)(x) \to f(x)$ for almost all $x \in \mathbf{T}$.

The full statement is:

Theorem (Carleson, 1964)  If $f \in L^2(\mathbf{T})$ then $(S_n f)(x) \to f(x)$ for almost all $x \in \mathbf{T}$.

This was soon strengthened even further by Hunt (in a way that apparently Carleson had anticipated). ‘Recall’ that the spaces $L^p(\mathbf{T})$ get bigger as $p$ gets smaller; that is, if $1 \leq q \leq p \leq \infty$ then $L^q(\mathbf{T}) \supseteq L^p(\mathbf{T})$. So, if we could change the ‘2’ in Carleson’s theorem to something smaller, we’d have strengthened it. We can’t take it all the way down to 1, because of Kolmogorov’s counterexample. But Hunt showed that we can take it arbitrarily close to 1:

Theorem (Hunt, 1968)  If $f \in \bigcup_{p > 1} L^p(\mathbf{T})$ then $(S_n f)(x) \to f(x)$ for almost all $x \in \mathbf{T}$.

There’s an obvious sense in which Carleson’s and Hunt’s theorems can’t be improved: we can’t change ‘almost all’ to ‘all’, simply because changing a function on a set of measure zero doesn’t change its Fourier coefficients.

But there’s another sense in which they’re optimal: given any set of measure zero, there’s some $L^2$ function whose Fourier series fails to converge there. Indeed, there’s a continuous such $f$:

Theorem (Kahane and Katznelson, 196?)  Let $E$ be a measure zero subset of $\mathbf{T}$. Then there is a continuous function $f\colon \mathbf{T} \to \mathbf{C}$ such that for all $x \in E$, the sequence $((S_n f)(x))$ fails to converge.

I’ll finish with a question for experts. Despite Carleson’s own proof having been subsequently simplified, the Fourier analysis books I’ve seen say that all proofs are far too hard for an undergraduate course. But what about the corollary that if $f$ is continuous then $(S_n f)(x)$ must converge to $f(x)$ for at least one $x$? Is there now a proof of this that might be simple enough for a final-year undergraduate course?

Posted at January 17, 2013 2:15 PM UTC


### Re: Carleson’s Theorem

I don’t think there is much simplification to the proof of Carleson’s theorem in restricting attention to continuous functions, even if one only wants convergence somewhere rather than almost everywhere.

It’s now realised that pointwise convergence questions for $S_n f(x)$ are closely related to the boundedness properties of the Carleson maximal operator $S^* f(x) := \sup_n |S_n f(x)|.$ (This is only a sublinear operator rather than a linear operator, but it’s still an operator nonetheless.) Roughly speaking, if one can prove a non-trivial bound on $S^* f$ (and in particular keep it finite almost everywhere) for all $f$ in a function space (e.g. $L^p$), then it is a relatively routine matter to demonstrate almost everywhere convergence in $L^p$; and conversely, if no such bound exists, then it is likely that (with perhaps a bit of nontrivial trickery) one can eventually cook up a counterexample in this function space for which one has pointwise convergence nowhere.

If one has a bit of regularity, e.g. $C^1$, then bounding $S^* f$ is relatively straightforward, but in spaces with zero regularity (e.g. $L^p$ or $C^0$) it’s much more difficult. The key problem here is the modulation invariance of $S^*$: if one multiplies $f$ by a character $e^{2\pi i kx}$, this essentially does not change $S^*$. (This invariance is more apparent if one replaces $S_n$ with the closely related double sum $S_{m,n} f(x) = \sum_{k=-m}^n \hat f(k) e^{2\pi i kx}$ in the definition of $S^*$.) The spaces $L^p$ and $C^0$ are also modulation invariant, and this basically forces any proof of almost everywhere (or even somewhere) pointwise convergence in these spaces to also be modulation invariant, which rules out a lot of standard techniques and requires instead tools such as time-frequency analysis.
Posted by: Terence Tao on January 17, 2013 4:02 PM | Permalink | Reply to this

### Re: Carleson’s Theorem

Thanks very much. I finished my lecture just now by telling them that I’d asked this question here, and that I’d consider including the proof if it turned out there was a suitably simple one. Now I have an answer — one they might be relieved about.

Incidentally, I’m glad you’ve figured out how to typeset math here. Jacques Distler, who runs the blog, made a small change to the interface. There may be further changes soon.

Posted by: Tom Leinster on January 17, 2013 4:45 PM | Permalink | Reply to this

### Re: Carleson’s Theorem

It is at least within reach of an undergraduate class to prove everywhere-convergence results more general than Dirichlet’s theorem, for example for Hölder continuous functions.

Posted by: Mark Meckes on January 17, 2013 4:43 PM | Permalink | Reply to this

### Re: Carleson’s Theorem

Aha. Thanks. The previous lecturer did it for Lipschitz, and I was going to follow suit. I should look at the notes and see if his argument extends easily to Hölder.

Posted by: Tom Leinster on January 17, 2013 4:48 PM | Permalink | Reply to this

### Re: Carleson’s Theorem

Regarding pointwise convergence for Hölder continuous functions, I like the two-page note by Chernoff, Pointwise convergence of Fourier series.

On another note, as you write, the Carleson–Hunt theorem is optimal for the $L^p$ exponent because of Kolmogorov’s $L^1$ example, but there are still open questions for intermediate spaces like $L \log L$ (see Lacey, Carleson’s theorem: proof, complements, variations).

Posted by: Benoit Jubin on January 18, 2013 2:11 PM | Permalink | Reply to this

### Re: Carleson’s Theorem

So will the café commenting interface also remember names, emails and URLs, like it never did before? (Not to get too off topic; we could discuss this at the n-forum instead.)

Posted by: David Roberts on January 18, 2013 10:38 AM | Permalink | Reply to this

### Re: Carleson’s Theorem

Such questions are on topic on this page. (In part of the discussion there you can see that the café actually used to remember names, etc., but mysteriously stopped at some point.)

Posted by: Mark Meckes on January 18, 2013 2:21 PM | Permalink | Reply to this

### Re: Carleson’s Theorem

I think the lecture note proof of the decay rate for the Fourier coefficients of Lipschitz functions is needlessly complicated. If we consider, more generally, a Hölder continuous function $f$ of order $\alpha$ on $\mathbb{T}$, then the decay rate $|\hat{f}(k)| \lesssim (1+|k|)^{-\alpha}$ is an obvious consequence of the identity $\hat{f}(k) = -\int_{\mathbb{T}} f(x - \tfrac{1}{2k})e^{-2\pi i kx} \,dx.$ However, I don’t think the summation method argument adapts if you assume only that the coefficients are $O(k^{-\alpha})$.
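
Spelled out, averaging that identity with the defining formula for $\hat{f}(k)$ gives

$\hat{f}(k) = \frac12 \int_{\mathbb{T}} \left( f(x) - f(x - \tfrac{1}{2k}) \right) e^{-2\pi i k x}\, d x, \qquad \text{so} \qquad |\hat{f}(k)| \leq \tfrac12 \sup_x \left| f(x) - f(x - \tfrac{1}{2k}) \right| \lesssim |2k|^{-\alpha},$

using the Hölder condition in the last step.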

Posted by: Jonathan on January 18, 2013 11:38 AM | Permalink | Reply to this

### Re: Carleson’s Theorem

Are you talking about last year’s notes? If you’re the Jonathan I think you are, let’s meet and have a chat about this. I’ll mail you.

Posted by: Tom Leinster on January 18, 2013 5:15 PM | Permalink | Reply to this

### Re: Carleson’s Theorem

With regard to Carleson’s theorem, a MUCH simpler result along similar lines is the almost everywhere convergence of the Fejér means of an $L^1(\mathbb{T})$ function. The reason I mention this is that the proof has the same general outline as that of Carleson’s theorem: here we consider the Fejér maximal function, defined in an analogous way to the Carleson maximal function, and try to show it satisfies some interesting bound. I mention this as a point of interest rather than something suitable for inclusion in the course.

Posted by: Jonathan on January 18, 2013 12:37 PM | Permalink | Reply to this

### Re: Carleson’s Theorem

That’s interesting. Thanks.

Posted by: Tom Leinster on January 18, 2013 3:13 PM | Permalink | Reply to this

### Re: Carleson’s Theorem

Even if the usual Fourier series of a continuous function $f: \mathbf{T} \to \mathbb{C}$ does not always converge pointwise to $f$ (much less uniformly to $f$), it might be nice to point out to your students that the averages of the finite Fourier approximations

$A_N(f) = \frac1{N+1} \sum_{n=0}^N S_n(f)$

do converge to $f$ uniformly! These are the Fejer means that Jonathan referred to.

The way I think of this goes as follows. (All of this is really well known. Certainly it’s much better known to the analysts reading this thread than it is to a semi-ignorant category theorist like myself. And for all I know it’s covered in Körner’s book – I’ve never looked at the book, but have heard very nice things about it.)

First, there’s a Banach algebra $L^1(\mathbf{T})$, where on the unit circle $\mathbf{T}$ we use the normalized Haar measure, and the multiplication on $L^1$ is given by convolution. (This algebra doesn’t have an identity, but we can adjoin one; it is useful to think of the adjoined identity as the Dirac distribution supported at the identity $1 \in \mathbf{T}$.) The spaces $L^p(\mathbf{T})$, for $1 \leq p \leq \infty$, are Banach modules under convolution: the convolution product

$\ast: L^1(\mathbf{T}) \times L^p(\mathbf{T}) \to L^p(\mathbf{T})$

is bilinear and continuous because we have ${\|f \ast g\|}_p \leq {\|f\|}_1 {\|g\|}_p$.

Here are some useful examples of convolution:

• Let $e_n: \mathbf{T} \to \mathbb{C}$ denote the character $z \mapsto z^n$, or if you like, the character $x \mapsto e^{i n x}$ where we identify $\mathbf{T}$ with $\mathbb{R}/2\pi \mathbb{Z}$. Let $f: \mathbf{T} \to \mathbb{C}$ be continuous, say. Then

$(e_n \ast f)(y) = \frac1{2\pi} \int_{-\pi}^{\pi} e^{i n (y-x)} f(x) d x = (\frac1{2\pi} \int_{-\pi}^{\pi} e^{-i n x} f(x) d x)\; e^{i n y}$

where the integral in the parentheses gives the $n^{th}$ Fourier coefficient. Thinking of $e_n$ and $f$ as living in $L^2(\mathbf{T})$, this Fourier coefficient is the Hilbert space pairing $\langle e_n, f \rangle$, and so we get the neat little formula

$e_n \ast f = \langle e_n, f \rangle e_n.$

• The operator $S_n: f \mapsto S_n(f)$ is defined by $f \mapsto \sum_{k = -n}^n \langle e_k, f \rangle e_k$ where we orthogonally project $f$ onto the subspace of $L^2(\mathbf{T})$ spanned by the orthonormal elements $e_{-n}, e_{-n+1}, \ldots, e_n$. Using the example above, we see that the operator $S_n$ can be described as the result of convolving with the function

$D_n = e_{-n} + e_{-n+1} + \ldots + e_n,$

i.e.,

$S_n(f) = D_n \ast f.$

This $D_n$ is called the $n^{th}$ Dirichlet kernel. It’s manifestly a finite geometric series, and thus we easily compute

$D_n(x) = e^{-i n x} \frac{e^{(2 n + 1)i x} - 1}{e^{i x} - 1} = \frac{e^{(n + 1/2) i x} - e^{-(n + 1/2)i x}}{e^{(1/2)i x} - e^{-(1/2)i x}} = \frac{\sin((n + 1/2)x)}{\sin((1/2)x)}.$

This (a) spikes to the value $2n + 1$ at $x = 0$, and (b) has “mass”

$\frac1{2\pi}\int_{-\pi}^{\pi} e_{-n}(x) + \ldots + e_0(x) + \ldots + e_n(x) d x = \frac1{2\pi} \int_{-\pi}^{\pi} e_0(x) d x = 1,$

and these two facts (a), (b) mean $D_n$ behaves something like an approximate Dirac distribution supported at $0$ – except it doesn’t taper off to zero a small distance away from $0$.
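
This failure to taper is quantitative: the $L^1$ norms $\frac1{2\pi}\int_{-\pi}^{\pi}|D_n|$ (the Lebesgue constants) grow like $\log n$, which is precisely why the $D_n$ are not an approximate identity. Here is a quick numerical sanity check, sketched in Python with NumPy (the grid size is an arbitrary choice):

```python
import numpy as np

# Dirichlet kernel D_n(x) = sin((n + 1/2) x) / sin(x / 2) on [-pi, pi].
def dirichlet(n, x):
    return np.sin((n + 0.5) * x) / np.sin(x / 2)

# Midpoint grid on [-pi, pi]; the midpoints never hit the singularity at 0,
# and the mean over the grid approximates (1/2 pi) * integral.
M = 200_000
x = -np.pi + (np.arange(M) + 0.5) * (2 * np.pi / M)

masses = [dirichlet(n, x).mean() for n in (10, 100, 1000)]
lebesgue = [np.abs(dirichlet(n, x)).mean() for n in (10, 100, 1000)]
print(masses)    # each close to 1: fact (b)
print(lebesgue)  # growing, roughly like (4 / pi^2) log n
```

The first list reflects fact (b); the unbounded growth in the second is what rules the $D_n$ out as an approximate identity.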

Now for a brief interlude on this business of “Dirac distribution” as identity. To repeat: the Banach algebra $L^1(\mathbf{T})$ has no identity for the convolution product (which is what the Dirac distribution $\delta$ would be, except that this “mystical” $\delta(x)$, which is infinite at $x=0$, and zero away from $x=0$, and has integral $1$, is not represented by an $L^1$ function). But, as a workaround, we can speak of approximate identities. A sequence of $L^1$ functions $K_n$ is an approximate identity if

1. For all $n$ we have $\frac1{2\pi} \int_{-\pi}^{\pi} K_n(x) d x = 1;$
2. For all $\epsilon, \delta \gt 0$ there exists $N$ such that $\frac1{2\pi} \int_{[-\pi, \pi]\backslash [-\epsilon, \epsilon]} {|K_n(x)|}\;\; d x \lt \delta,$ whenever $n \geq N$.

The second condition says that away from the identity element $0$, an approximate identity is close to zero (in the $L^1$ sense), while still having total mass $1$ by the first condition.

Lemma: Let $\{K_n\}$ be an approximate identity. For each of the Banach modules $L^p(\mathbf{T})$ over $L^1(\mathbf{T})$, $1 \leq p \lt \infty$, we have

$\underset{n \to \infty}{\lim} K_n \ast f = f$

for all $f \in L^p(\mathbf{T})$. (This justifies the term “approximate identity”.)

It should also be said that convolutions $K \ast f$ inherit the regularity behavior of $K$ or $f$. For example, if $K$ or $f$ is of class $C^n$, then so is $K \ast f$ (even if $f$ or $K$, respectively, isn’t). Consequently, if $f$ is continuous, then $K_n \ast f \to f$ in the $L^\infty$ norm, i.e., uniformly. (For a general $f \in L^\infty$ this can fail: each $K_n \ast f$ is continuous, so a uniform limit would itself be continuous.)

Now we head to our third example.

• Define an operator (the $N^{th}$ Cesaro or Fejer mean) on continuous functions $f: \mathbf{T} \to \mathbb{C}$ by averaging the $S_n(f)$:

$K_N(f) = \frac1{N+1} \sum_{n=0}^N S_n(f).$

Thus $K_N$ is the result of convolving with an average of Dirichlet kernels: $K_N(f) = F_N \ast f$ where

$F_N = \frac1{N+1} \sum_{n=0}^N D_n.$

This $F_N$ is called the $N$th Fejér kernel. Now here is a pretty neat fact:

$D_N^2 = \sum_{n=0}^{2N} D_n, \qquad \text{that is,} \qquad F_{2N} = \frac1{2N+1} D_N^2.$

Indeed, $D_N^2 = \sum_{n=0}^{2N} D_n$ for essentially the same reason that $11111^2 = 123454321$ – the proof of either can be left as an exercise. (The point is that $D_N$ has $2N+1$ Fourier coefficients equal to $1$, so squaring it produces the triangular coefficient pattern $1, 2, \ldots, 2N+1, \ldots, 2, 1$, which is exactly what summing $D_0, \ldots, D_{2N}$ gives.) The same kind of computation gives $\sum_{n=0}^N D_n(x) = \sin^2(\tfrac{(N+1)x}{2})/\sin^2(\tfrac{x}{2})$ for every $N$, and hence

$F_N = \frac1{N+1} \frac{\sin^2(\frac{(N+1)x}{2})}{\sin^2(\frac{x}{2})}$

and the positive functions $\{F_n\}$ form an approximate identity. For, each of the $F_n$ has mass $1$ since each is an average of Dirichlet kernels which have mass $1$, and given any $\delta, \epsilon \gt 0$ there exists $N$ such that

${|F_n(x)|} \leq \frac1{(n+1)\sin^2(\epsilon/2)} \leq \delta$

for all $x \in [-\pi, \pi] \backslash [-\epsilon, \epsilon]$ and $n \geq N$.

Therefore, by the lemma above, for $f: \mathbf{T} \to \mathbb{C}$ continuous, the Cesaro means $K_n(f)$ converge uniformly to $f$.
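
To see this uniform convergence concretely, here is a short numerical sketch in Python with NumPy; the triangle-wave test function, the FFT-based coefficients, and the grid sizes are all arbitrary illustrative choices:

```python
import numpy as np

M = 2048
x = 2 * np.pi * np.arange(M) / M
f = np.abs(np.pi - x)          # continuous triangle wave on the circle

c = np.fft.fft(f) / M          # discrete approximations to the Fourier coefficients
k = np.fft.fftfreq(M, d=1/M)   # integer frequencies 0..M/2-1, -M/2..-1

def partial_sum(n):
    """S_n f on the grid: keep only the frequencies |k| <= n."""
    return np.fft.ifft(np.where(np.abs(k) <= n, c, 0) * M).real

def fejer_mean(N):
    """Cesaro average (1/(N+1)) sum_{n=0}^{N} S_n f on the grid."""
    return sum(partial_sum(n) for n in range(N + 1)) / (N + 1)

# Sup-norm distance between the Cesaro means and f shrinks as N grows.
errs = [np.max(np.abs(fejer_mean(N) - f)) for N in (8, 32, 128)]
print(errs)
```

(Using the FFT for the coefficients introduces only a tiny aliasing error, which is harmless for this purpose.)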

Posted by: Todd Trimble on January 21, 2013 3:06 AM | Permalink | Reply to this

### Re: Carleson’s Theorem

Wow, Todd, that’s very comprehensive!

We’ll definitely do the standard results about Fejér kernels and approximations to the identity. The students haven’t seen Cesàro summation before, so it’ll be nice to introduce it: summing unsummable series is an appealing story to tell. And introducing the results in terms of the “mystical” delta function was exactly what I’d planned to do!

(To any of my students reading this: Todd’s comment gives a high-level, abstract, compressed account of a decent-sized chunk of the course. We’ll take it at a much more relaxed pace, spending several lectures on this material.)

On the other hand, I’ll enjoy reading your comment slowly when I get the chance, as at a quick glance there are some points of view there that I haven’t yet learned to appreciate (e.g. looking at things in terms of Banach algebras). Also, I hadn’t noticed your “pretty neat fact” before.

One thing that constrains this course is that the students haven’t seen Lebesgue integration (and I’m not going to teach it). At first I thought it would be impossible to do this stuff using only Riemann, but it turns out to be entirely possible. Indeed, Körner’s book and the identically-titled book by Stein and Shakarchi do exactly this.

Posted by: Tom Leinster on January 21, 2013 11:20 AM | Permalink | Reply to this

### Re: Carleson’s Theorem

Todd, one little thing. In the notes I inherited, there was a further condition on “approximate identities”. (In fact, the previous lecturer used the term “good approximation to the identity”, perhaps for that reason.) It’s that $\sup_n \| K_n \|_1 \lt \infty.$ This follows immediately from your condition 1 if each $K_n$ is nonnegative, and is therefore true for the Fejér kernels.

Do you need that condition somewhere? At the moment I haven’t thought through where (if anywhere) you might need it, but I know it’s used in those notes I have.

Posted by: Tom Leinster on January 21, 2013 12:40 PM | Permalink | Reply to this

### Re: Carleson’s Theorem

It could be that I overlooked something; I was going on memory for all of this. I guess a lot of people would impose the positivity condition (so that $\int K \; d x = \int {|K|} \; d x = 1$) on approximate identities $K$ (or what are also called mollifiers, especially when smoothness assumptions are added to the mix).

At one point I had gone through all this reasonably carefully, when I was teaching real analysis or functional analysis, and for some reason I don’t recall mentioning uniform boundedness of the $L^1$ norms. But I’ll try to get back to you on this, after I find a certain book in my library.

My main memory though is that I fell in love with approximate identities and the Dirac distribution and what a powerful technique they bring to the table. I was lucky enough in my graduate school days to learn functional analysis from François Trèves, who taught a very abstract distributions-oriented approach – it was definitely not your ordinary meat-and-potatoes functional analysis course. It would be a nice thing for me to return to some time.

Posted by: Todd Trimble on January 21, 2013 1:45 PM | Permalink | Reply to this

### Re: Carleson’s Theorem

Well, I found the book I was looking for in my library (Measure and Integral by Wheeden and Zygmund), and their set-up for approximate identities is a little different. Anyway, I accept that the extra boundedness assumption you mentioned is probably there for a good reason, to make the estimates needed to prove the lemma I mentioned come out right. I’ll bet when I was teaching this, I included a positivity assumption which wasn’t mentioned in my comment above.

Posted by: Todd Trimble on January 21, 2013 6:21 PM | Permalink | Reply to this

### Re: Carleson’s Theorem

Tom, are you using any of Katznelson’s Introduction to Harmonic Analysis? It might be too compressed for your purposes, and in places it presupposes knowledge of the Lebesgue integral; but Chapter I (of the 2nd ed, Dover) contains many of the key results in a concise presentation.

In particular, Ch. I Section 2 talks about “summability kernels” which give an abstract characterization of the key features that make convolution with Fejer kernels behave nicely.

Posted by: Yemon Choi on January 22, 2013 8:59 PM | Permalink | Reply to this

### Re: Carleson’s Theorem

Thanks. I hadn’t looked at that bit of Katznelson before. At a quick read, it seems to be basically the same as part of what was covered by my predecessor Jim Wright, but done more tersely and more abstractly (and, as you point out, with the Lebesgue theory). The treatment looks pleasingly streamlined; I should read through it properly when I get the chance.

Posted by: Tom Leinster on January 23, 2013 12:46 AM | Permalink | Reply to this

### Re: Carleson’s Theorem

I once decided to try to visualize the Dirichlet kernels, and you’ve inspired me to find them again. You can see an animation of the kernels $D_N$ for $N$ from $0$ to $200$ (thought of as functions on the circle) at YouTube:

Dirichlet kernel from N=0 to N=200

The black curve is the kernel and the red curves give envelopes between which the kernel always lies.

Posted by: Simon Willerton on January 21, 2013 4:23 PM | Permalink | Reply to this

### Re: Carleson’s Theorem

Thanks! I’ll link to that from the course web page once we get to that part of the course.

Posted by: Tom Leinster on January 21, 2013 5:02 PM | Permalink | Reply to this

### Re: Carleson’s Theorem

Cool! But why does YouTube think that related videos that I may also want to view are on topics for women involving makeup, dieting, and exercise? :)

Posted by: RodMcGuire on January 21, 2013 5:11 PM | Permalink | Reply to this

### Re: Carleson’s Theorem

That’s probably nothing to do with the video itself. It’s just because it’s Simon’s channel.

Posted by: Tom Leinster on January 21, 2013 5:16 PM | Permalink | Reply to this

### Re: Carleson’s Theorem

For those of you who sometimes need help in identifying dry English humour (John B?), I am happy to assist and point out that Tom’s comment was dry English humour.

Anyway, I’ve put up an analogous animation of the Fejer kernels.

Fejer kernels for N=0 to N=200

Posted by: Simon Willerton on January 21, 2013 5:43 PM | Permalink | Reply to this

### Re: Carleson’s Theorem

Tom’s comment was dry English humour.

I saw the Fejér one on your channel earlier and watched it too. I was a bit puzzled, because it looks a bit like Todd’s condition 2 for an approximate identity isn’t satisfied. I suppose it’s just slow convergence, or the area under the crinkly bit of the curve is smaller than it looks.

Posted by: Tom Leinster on January 21, 2013 5:48 PM | Permalink | Reply to this

### Re: Carleson’s Theorem

I think it’s probably okay: those little spikes where local maxima occur are being pushed very gradually inwards (even as they seem to grow in height a bit) as $N$ increases. It might be hard to verify condition 2 just from going up to $N = 200$.

Those are cool animations! Great to show the students; when I was teaching such things, I think I had them whip out their TI-85’s to graph a few kernels, but mostly just to get a general feel for their shapes. (It was also before YouTube was around.)

Posted by: Todd Trimble on January 21, 2013 6:36 PM | Permalink | Reply to this

### Re: Carleson’s Theorem

Todd said

Those are cool animations!

Thanks. I ought to say how I did them. I used Maple and Blender. First I used Maple to generate a gif file for each kernel $D_N$ with $0 \leq N \leq 200$, using the following code.

```
# spacecurve and display come from the plots package
with(plots):

# Define the two red curves that the kernels lie between
envelope1:=spacecurve([cos(theta),sin(theta),1/sin(theta/2)],
  theta=0.01..2*Pi-0.01,numpoints=1000,view=-20..70,
  orientation=[-40,75],thickness=3,color=red):
envelope2:=spacecurve([cos(theta),sin(theta),-1/sin(theta/2)],
  theta=0.01..2*Pi-0.01,numpoints=1000,view=-20..70,
  orientation=[-40,75],thickness=3,color=red):

# Define the plots of the Dirichlet kernels
for N from 0 to 200 do
  DirichletPlot[N]:=
    spacecurve([cos(theta),sin(theta),sin((N+1/2)*theta)/sin(theta/2)],
      theta=0..2*Pi,numpoints=1000*(1+floor(N/40)),
      view=-20..70,orientation=[-40,75],thickness=3,color=black):
od:

# Now export the pictures as gif files
for N from 0 to 200 do
  PlotFileName:=sprintf("D:\\dirichlet\\dirichlet%03d.gif",N);
  plotsetup(gif,plotoutput=PlotFileName,plotoptions="height=1600");
  display(envelope1,envelope2,DirichletPlot[N]);
od;
```

I then loaded the gif files into Blender and converted them to a single video file, which was finally uploaded to YouTube.

Posted by: Simon Willerton on January 21, 2013 7:19 PM | Permalink | Reply to this

### Re: Carleson’s Theorem

Simon, I just used your videos in a lecture. They seemed to go down well. Thanks!

Posted by: Tom Leinster on March 4, 2013 5:45 PM | Permalink | Reply to this

### Re: Carleson’s Theorem

Tom, one other thing. Did you ever read the review of Körner’s book by Gavin Brown? It’s absolutely delightful.

Posted by: Todd Trimble on January 21, 2013 2:22 PM | Permalink | Reply to this