### Carleson’s Theorem

#### Posted by Tom Leinster

I’ve just started teaching an advanced undergraduate course on Fourier analysis — my first lecturing duty in my new job at Edinburgh.

What I hadn’t realized until I started preparing was the extraordinary history of false beliefs about the pointwise convergence of Fourier series. This started with Fourier himself about 1800, and was only fully resolved by Carleson in 1964.

The endlessly diverting index of Tom Körner's book *Fourier Analysis* alludes to this.

Here’s the basic set-up. Let $\mathbf{T} = \mathbf{R}/\mathbf{Z}$ be the
circle, and let $f\colon \mathbf{T} \to \mathbf{C}$ be an integrable
function. The **Fourier coefficients** of $f$ are

$\hat{f}(k) = \int_\mathbf{T} f(x) e^{-2\pi i k x} \,dx$

($k \in \mathbf{Z}$), and for $n \geq 0$, the **$n$th Fourier partial
sum** of $f$ is the function $S_n f\colon \mathbf{T} \to \mathbf{C}$
given by

$(S_n f)(x) = \sum_{k = -n}^n \hat{f}(k) e^{2\pi i k x}.$
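As a quick numerical sanity check (my own illustration, not from the post; the test function, evaluation point and grid size are arbitrary choices), one can approximate the coefficients by a Riemann sum on a uniform grid and watch the partial sums converge for a smooth function:

```python
import numpy as np

def fourier_coefficient(f, k, m=4096):
    """Approximate f_hat(k) = int_T f(x) e^{-2 pi i k x} dx for 1-periodic f
    by an m-point Riemann sum (spectrally accurate for smooth periodic f)."""
    x = np.arange(m) / m
    return np.mean(f(x) * np.exp(-2j * np.pi * k * x))

def partial_sum(f, n, x):
    """(S_n f)(x) = sum_{k=-n}^{n} f_hat(k) e^{2 pi i k x}."""
    return sum(fourier_coefficient(f, k) * np.exp(2j * np.pi * k * x)
               for k in range(-n, n + 1))

# A smooth (in particular continuously differentiable) 1-periodic function,
# so its Fourier partial sums converge at every point.
f = lambda x: np.exp(np.cos(2 * np.pi * x))
for n in (2, 8, 32):
    print(n, abs(partial_sum(f, n, 0.3) - f(0.3)))
```

For a function this smooth the printed errors shrink extremely fast with $n$; for merely continuous $f$, as the rest of the post explains, no such pointwise guarantee was available for a very long time.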

The question of pointwise convergence is:

For ‘nice’ functions $f$, does $(S_n f)(x)$ converge to $f(x)$ as $n \to \infty$ for all $x \in \mathbf{T}$?

And if the answer is no, does it at least work for most $x$? Or if not for
most $x$, at least for *some* $x$?

Fourier apparently thought that $(S_n f)(x) \to f(x)$ was *always* true,
for all functions $f$, although it isn't so clear what someone of
Fourier's era would have understood by a 'function'.

Cauchy claimed a proof of pointwise convergence for continuous functions. It was wrong. Dirichlet didn’t claim to have proved it, but he said he would. He didn’t. However, he did show:

**Theorem** (Dirichlet, 1829) Let $f\colon \mathbf{T} \to \mathbf{C}$ be a continuously differentiable function. Then $(S_n f)(x) \to f(x)$ as $n \to \infty$ for all $x \in \mathbf{T}$.

In other words, pointwise convergence holds for continuously differentiable functions.

It was surely just a matter of time until someone managed to extend the proof to all continuous functions. Riemann believed this could be done, Weierstrass believed it, Dedekind believed it, Poisson believed it. So, in Körner’s words, it ‘came as a considerable surprise’ when du Bois–Reymond proved:

**Theorem** (du Bois–Reymond, 1876) There is a continuous function $f\colon \mathbf{T} \to \mathbf{C}$ such that for some $x \in \mathbf{T}$, the sequence $((S_n f)(x))$ fails to converge.

Even worse (though I actually don’t know whether this was proved at the time):

**Theorem** Let $E$ be a countable subset of $\mathbf{T}$. Then there is a continuous function $f\colon \mathbf{T} \to \mathbf{C}$ such that for all $x \in E$, the sequence $((S_n f)(x))$ fails to converge.

The pendulum began to swing. Maybe there’s some continuous $f$ such that
$((S_n f)(x))$ doesn’t converge for *any* $x \in \mathbf{T}$. This,
apparently, became the general belief, solidified by a discovery of
Kolmogorov:

**Theorem** (Kolmogorov, 1926) There is a Lebesgue-integrable function $f\colon \mathbf{T} \to \mathbf{C}$ such that for all $x \in \mathbf{T}$, the sequence $((S_n f)(x))$ fails to converge.

It was surely just a matter of time until someone managed to adapt the counterexample to give a continuous $f$ whose Fourier series converged nowhere.

At best, the situation was unclear, and this persisted until relatively
recently. I have on my shelf a 1957 undergraduate textbook
called *Mathematical Analysis* by Tom Apostol. In the part on Fourier
series, he states that it’s still unknown whether the Fourier series of a
continuous function has to converge at even *one* point. This isn’t
ancient history; Apostol’s book was even on my own undergraduate
recommended reading list (though I can’t say I ever read it).

The turning point was Carleson’s theorem of 1964. His result implies:

If $f\colon \mathbf{T} \to \mathbf{C}$ is continuous then $(S_n f)(x) \to f(x)$ for at least one $x \in \mathbf{T}$.

In fact, it implies something stronger:

If $f\colon \mathbf{T} \to \mathbf{C}$ is continuous then $(S_n f)(x) \to f(x)$ for almost all $x \in \mathbf{T}$.

In fact, it implies something stronger still:

If $f\colon \mathbf{T} \to \mathbf{C}$ is Riemann integrable then $(S_n f)(x) \to f(x)$ for almost all $x \in \mathbf{T}$.

The full statement is:

**Theorem** (Carleson, 1964) If $f \in L^2(\mathbf{T})$ then $(S_n f)(x) \to f(x)$ for almost all $x \in \mathbf{T}$.

This was soon strengthened even further by Hunt (in a way that apparently Carleson had anticipated). ‘Recall’ that the spaces $L^p(\mathbf{T})$ get bigger as $p$ gets smaller; that is, if $1 \leq q \leq p \leq \infty$ then $L^q(\mathbf{T}) \supseteq L^p(\mathbf{T})$. So, if we could change the ‘2’ in Carleson’s theorem to something smaller, we’d have strengthened it. We can’t take it all the way down to 1, because of Kolmogorov’s counterexample. But Hunt showed that we can take it arbitrarily close to 1:

**Theorem** (Hunt, 1968) If $f \in \bigcup_{p > 1} L^p(\mathbf{T})$ then $(S_n f)(x) \to f(x)$ for almost all $x \in \mathbf{T}$.
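Incidentally, the nesting of the $L^p$ spaces used above is a quick consequence of Hölder's inequality (a standard fact, spelled out here for completeness): since $\mathbf{T}$ has total measure $1$, applying Hölder with exponents $p/q$ and $(1 - q/p)^{-1}$ to $|f|^q \cdot 1$ gives, for $1 \leq q \leq p < \infty$,

$\int_\mathbf{T} |f|^q \,dx \leq \left( \int_\mathbf{T} |f|^p \,dx \right)^{q/p},$

that is, $\|f\|_q \leq \|f\|_p$, so every $f \in L^p(\mathbf{T})$ lies in $L^q(\mathbf{T})$.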

There’s an obvious sense in which Carleson’s and Hunt’s theorems can’t be improved: we can’t change ‘almost all’ to ‘all’, simply because changing a function on a set of measure zero doesn’t change its Fourier coefficients.

But there’s another sense in which they’re optimal: given any set of measure zero,
there’s some $L^2$ function whose Fourier series fails to
converge there. Indeed, there’s a *continuous* such $f$:

**Theorem** (Kahane and Katznelson, 196?) Let $E$ be a measure zero subset of $\mathbf{T}$. Then there is a continuous function $f\colon \mathbf{T} \to \mathbf{C}$ such that for all $x \in E$, the sequence $((S_n f)(x))$ fails to converge.

I’ll finish with a question for experts. Despite Carleson’s own proof having been subsequently simplified, the Fourier analysis books I’ve seen say that all proofs are far too hard for an undergraduate course. But what about the corollary that if $f$ is continuous then $(S_n f)(x)$ must converge to $f(x)$ for at least one $x$? Is there now a proof of this that might be simple enough for a final-year undergraduate course?
