### Guest post by Batoul Diab

* My hometown Jwaya, Lebanon.*

I was wondering about what the last major pandemic, the Spanish flu of 1918, looked like in real time, so I looked at the 25th anniversary report of the Harvard Class of 1895, published in June 1920 and written in 1919. To my surprise, the flu is barely mentioned. Henry Adsit Bull lost his oldest daughter to it. A couple of classmates worked in influenza hospitals. Morton Aldrich used it as an excuse for being late with his report. Paul Washburn reported being quite ill with it, and emphasizing that it might be his last report, demanded that the editors print his curriculum vitae with no editorial changes. (Nope — he was still alive and well and banking in the 1935 report.) I thought 1894, whose report was written more in the thick of the epidemic, might have more to say, but not really. Two men died of it, including one who made it through hideous battles of the Great War only to succumb to flu in November 1918. Another lost a daughter.

But no one *weighs in* on it; I have read a lot of old Harvard class reports, and if there’s one thing I can tell you about an early 20th century Harvard man, it’s that he likes to weigh in. Not sure what to make of this. Maybe the pandemic didn’t much touch the lives of the elite. Or maybe people just died of stuff more and the Spanish flu didn’t make much of an impression. Or maybe it was just too rough to talk about (but I don’t think so — people recount pretty grisly material about the war.)

Back to the present. The Wisconsin Supreme Court ordered all jury trials halted for two months for the safety of jurors, witnesses, and officers of the court; an extremely overwrought dissent from Justice Rebecca Bradley insists that if a right is in the constitution it can’t be put on pause, even for a couple of months, even in a pandemic, which will be news to the people in every state whose governors have suspended their right to assemble.

CJ made a blueberry bundt cake, the best thing he’s made so far, aided by the fact that at the Regent Market Co-op I found a box of pectin, an ingredient I didn’t even know existed. Powdered sugar there was not, but it turns out that powdered sugar is literally nothing but regular sugar ground fine and mixed with a little cornstarch! You can make it yourself if you have a good blender. And we do have a good blender. We love to blend.

Walked around the neighborhood a bit. Ran into the owner of a popular local restaurant and talked to him from across the street. He’s been spending days and days working to renegotiate his loan with the bank. He thinks we ought to be on the “Denmark plan,” where the government straight up pays workers’ salaries rather than make businesses apply for loans so they can eventually get reimbursed for the money they’re losing right now. (I did not check whether this is actually the Denmark plan.) Also saw my kids’ pediatrician, who told me that regular pediatrics has been suspended except for babies and they’ve closed the regular clinic; everything is consolidated at 20 S. Park.

I’ve been spending a lot of time thinking about different groups’ COVID projections, claims and counterclaims. I’ll write about it a little in the next entry to show how little I know. But I think nobody knows anything.

Tomorrow it’ll be two weeks since the last time I was more than a quarter-mile from my house. We are told to be ready for another month. It won’t be that hard for us, but it’ll be hard for a lot of people.

This set of notes focuses on the *restriction problem* in Fourier analysis. Introduced by Elias Stein in the 1970s, the restriction problem is a key model problem for understanding more general oscillatory integral operators, and it has turned out to be connected to many questions in geometric measure theory, harmonic analysis, combinatorics, number theory, and PDE. Only partial results on the problem are known, but these partial results have already proven to be very useful and influential in many applications.

We work in a Euclidean space ${\bf R}^d$. Recall that $L^p({\bf R}^d)$ is the space of $p$-power integrable functions $f: {\bf R}^d \rightarrow {\bf C}$, quotiented out by almost everywhere equivalence, with the usual modifications when $p = \infty$. If $f \in L^1({\bf R}^d)$ then the Fourier transform $\hat f: {\bf R}^d \rightarrow {\bf C}$ will be defined in this course by the formula

$$ \hat f(\xi) := \int_{{\bf R}^d} f(x) e^{-2\pi i x \cdot \xi}\ dx. \qquad (1)$$
From the dominated convergence theorem we see that $\hat f$ is a continuous function; from the Riemann-Lebesgue lemma we see that it goes to zero at infinity. Thus $\hat f$ lies in the space $C_0({\bf R}^d)$ of continuous functions that go to zero at infinity, which is a subspace of $L^\infty({\bf R}^d)$. Indeed, from the triangle inequality it is obvious that

$$ \| \hat f \|_{L^\infty({\bf R}^d)} \leq \| f \|_{L^1({\bf R}^d)}. \qquad (2)$$
If $f \in L^1({\bf R}^d) \cap L^2({\bf R}^d)$, then Plancherel’s theorem tells us that we have the identity

$$ \| \hat f \|_{L^2({\bf R}^d)} = \| f \|_{L^2({\bf R}^d)}. \qquad (3)$$
Because of this, there is a unique way to extend the Fourier transform from $L^1({\bf R}^d) \cap L^2({\bf R}^d)$ to $L^2({\bf R}^d)$, in such a way that it becomes a unitary map from $L^2({\bf R}^d)$ to itself. By abuse of notation we continue to denote this extension of the Fourier transform by $f \mapsto \hat f$. Strictly speaking, this extension is no longer defined in a pointwise sense by the formula (1) (indeed, the integral on the RHS ceases to be absolutely integrable once $f$ leaves $L^1({\bf R}^d)$; we will return to the (surprisingly difficult) question of whether pointwise convergence continues to hold, at least in an almost everywhere sense, later in this course, when we discuss Carleson’s theorem). On the other hand, the formula (1) remains valid in the sense of distributions, and in practice most of the identities and inequalities one can show about the Fourier transform of “nice” functions (e.g., functions in $L^1({\bf R}^d) \cap L^2({\bf R}^d)$, in the Schwartz class ${\mathcal S}({\bf R}^d)$, or in the test function class $C^\infty_c({\bf R}^d)$) can be extended to functions in “rough” function spaces such as $L^2({\bf R}^d)$ by standard limiting arguments.
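Plancherel’s identity (3) has an exact discrete analogue, which makes for a quick numerical sanity check: with the unitary normalisation, the discrete Fourier transform preserves the $\ell^2$ norm. A sketch using NumPy (this of course illustrates, rather than proves, the continuous identity):

```python
import numpy as np

# Discrete analogue of Plancherel's identity (3): with the unitary
# normalisation norm="ortho", the DFT preserves the l^2 norm exactly
# (up to floating-point error).
rng = np.random.default_rng(0)
f = rng.standard_normal(1024) + 1j * rng.standard_normal(1024)
f_hat = np.fft.fft(f, norm="ortho")
print(np.allclose(np.linalg.norm(f), np.linalg.norm(f_hat)))  # True
```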

By (2), (3), and the Riesz-Thorin interpolation theorem, we also obtain the Hausdorff-Young inequality

$$ \| \hat f \|_{L^{p'}({\bf R}^d)} \leq \| f \|_{L^p({\bf R}^d)} \qquad (4)$$
for all $1 \leq p \leq 2$ and $f \in L^1({\bf R}^d) \cap L^p({\bf R}^d)$, where $p'$ is the dual exponent to $p$, defined by the usual formula $\frac{1}{p} + \frac{1}{p'} = 1$. (One can improve this inequality by a constant factor, with the optimal constant worked out by Beckner, but the focus in these notes will not be on optimal constants.) As a consequence, the Fourier transform can also be uniquely extended as a continuous linear map from $L^p({\bf R}^d)$ to $L^{p'}({\bf R}^d)$. (The situation for $p > 2$ is much worse; see below the fold.)

The *restriction problem* asks, for a given exponent $1 \leq p \leq 2$ and a subset $S$ of ${\bf R}^d$, whether it is possible to meaningfully restrict the Fourier transform $\hat f$ of a function $f \in L^p({\bf R}^d)$ to the set $S$. If the set $S$ has positive Lebesgue measure, then the answer is yes, since $\hat f$ lies in $L^{p'}({\bf R}^d)$ and therefore has a meaningful restriction to $S$ even though functions in $L^{p'}({\bf R}^d)$ are only defined up to sets of measure zero. But what if $S$ has measure zero? If $p = 1$, then $\hat f$ is continuous and therefore can be meaningfully restricted to any set $S$. At the other extreme, if $p = 2$ and $f$ is an arbitrary function in $L^2({\bf R}^d)$, then by Plancherel’s theorem, $\hat f$ is also an arbitrary function in $L^2({\bf R}^d)$, and thus has no well-defined restriction to any set of measure zero.

It was observed by Stein (as reported in the Ph.D. thesis of Charlie Fefferman) that for certain measure zero subsets $S$ of ${\bf R}^d$, such as the sphere $S^{d-1} := \{ x \in {\bf R}^d: |x| = 1 \}$, one can obtain meaningful restrictions of the Fourier transforms of functions $f \in L^p({\bf R}^d)$ for certain $p$ between $1$ and $2$, thus demonstrating that the Fourier transform of such functions retains more structure than a typical element of $L^{p'}({\bf R}^d)$:

Theorem 1 (Preliminary restriction theorem) If $d \geq 2$ and $1 \leq p < \frac{4d}{3d+1}$, then one has the estimate

$$ \| \hat f \|_{L^2(S^{d-1}, d\sigma)} \lesssim_{d,p} \| f \|_{L^p({\bf R}^d)}$$

for all Schwartz functions $f$, where $d\sigma$ denotes surface measure on the sphere $S^{d-1}$. In particular, the restriction $\hat f|_{S^{d-1}}$ can be meaningfully defined by continuous linear extension to an element of $L^2(S^{d-1}, d\sigma)$.

*Proof:* Fix such a $p$. We expand out

$$ \| \hat f \|_{L^2(S^{d-1}, d\sigma)}^2 = \int_{S^{d-1}} \hat f(\xi) \overline{\hat f(\xi)}\ d\sigma(\xi).$$

From (1) and Fubini’s theorem, the right-hand side may be expanded as

$$ \int_{{\bf R}^d} \int_{{\bf R}^d} f(x) \overline{f(y)}\, (d\sigma)^\vee(y - x)\ dx\, dy,$$

where the inverse Fourier transform $(d\sigma)^\vee$ of the measure $d\sigma$ is defined by the formula

$$ (d\sigma)^\vee(x) := \int_{S^{d-1}} e^{2\pi i x \cdot \xi}\ d\sigma(\xi).$$

In other words, we have the identity

$$ \| \hat f \|_{L^2(S^{d-1}, d\sigma)}^2 = \langle f, f * (d\sigma)^\vee \rangle, \qquad (5)$$

using the Hermitian inner product $\langle f, g \rangle := \int_{{\bf R}^d} f(x) \overline{g(x)}\ dx$. Since the sphere $S^{d-1}$ has bounded measure, we have from the triangle inequality that

$$ \| (d\sigma)^\vee \|_{L^\infty({\bf R}^d)} \lesssim_d 1. \qquad (6)$$

Also, from the method of stationary phase (as covered in the previous class 247A), or Bessel function asymptotics, we have the decay

$$ |(d\sigma)^\vee(x)| \lesssim_d |x|^{-(d-1)/2} \qquad (7)$$

for any non-zero $x$ (note that the bound already follows from (6) unless $|x| \geq 1$). We remark that the exponent $-\frac{d-1}{2}$ here can be seen geometrically from the following considerations. For $|x| \geq 1$, the phase $\xi \mapsto x \cdot \xi$ on the sphere is stationary at the two antipodal points $\pm \frac{x}{|x|}$ of the sphere, and constant on the tangent hyperplanes to the sphere at these points. The wavelength of this phase is proportional to $\frac{1}{|x|}$, so the phase would be approximately stationary on a cap formed by intersecting the sphere with a $O(\frac{1}{|x|})$ neighbourhood of the tangent hyperplane to one of the stationary points. As the sphere is tangent to second order at these points, this cap will have diameter $O(|x|^{-1/2})$ in the directions of the $(d-1)$-dimensional tangent space, so the cap will have surface measure $O(|x|^{-(d-1)/2})$, which leads to the prediction (7). We combine (6), (7) into the unified estimate

$$ |(d\sigma)^\vee(x)| \lesssim_d \langle x \rangle^{-(d-1)/2}, \qquad (8)$$

where the “Japanese bracket” $\langle x \rangle$ is defined as $\langle x \rangle := (1 + |x|^2)^{1/2}$. Since $\langle x \rangle^{-(d-1)/2}$ lies in $L^q({\bf R}^d)$ precisely when $q > \frac{2d}{d-1}$, we conclude that

$$ \| (d\sigma)^\vee \|_{L^q({\bf R}^d)} \lesssim_{d,q} 1 \hbox{ whenever } q > \frac{2d}{d-1}.$$

Applying Young’s convolution inequality, we conclude (after some arithmetic) that

$$ \| f * (d\sigma)^\vee \|_{L^{p'}({\bf R}^d)} \lesssim_{d,p} \| f \|_{L^p({\bf R}^d)}$$

whenever $1 \leq p < \frac{4d}{3d+1}$, and the claim now follows from (5) and Hölder’s inequality. $\Box$

Remark 2 By using the Hardy-Littlewood-Sobolev inequality in place of Young’s convolution inequality, one can also establish this result at the endpoint $p = \frac{4d}{3d+1}$.
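The decay (7) can be sanity-checked numerically. In the case $d = 2$, where $(d\sigma)^\vee$ for arclength measure on the unit circle equals $2\pi J_0(2\pi |x|)$, direct quadrature recovers the predicted decay exponent $\frac{d-1}{2} = \frac{1}{2}$. A sketch (the radii $100$ and $400$ are integers, hence sit at the same phase of the Bessel oscillation, so the power-law envelope can be read off from just two samples):

```python
import numpy as np

# Decay of (d sigma)^vee for arclength measure on the unit circle S^1 in R^2:
# stationary phase predicts |(d sigma)^vee(x)| ~ |x|^{-1/2}, i.e. (7) with d = 2.
theta = np.linspace(0.0, 2 * np.pi, 2**18, endpoint=False)
dtheta = theta[1] - theta[0]

def dsigma_vee(r):
    # (d sigma)^vee at any point of radius r, by direct quadrature over the circle
    return abs(np.sum(np.exp(2j * np.pi * r * np.cos(theta))) * dtheta)

# integer radii keep the oscillatory factor cos(2*pi*r - pi/4) identical at both
# sample points, so the ratio isolates the power-law envelope
r1, r2 = 100.0, 400.0
alpha = np.log(dsigma_vee(r1) / dsigma_vee(r2)) / np.log(r2 / r1)
print(abs(alpha - 0.5) < 0.05)  # True: the measured decay exponent is ~1/2
```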

Motivated by this result, given any Radon measure $\mu$ on ${\bf R}^d$ and any exponents $1 \leq p, q \leq \infty$, we use $R_\mu(p \rightarrow q)$ to denote the claim that the *restriction estimate*

$$ \| \hat f \|_{L^q({\bf R}^d, \mu)} \lesssim_{d,p,q,\mu} \| f \|_{L^p({\bf R}^d)} \qquad (9)$$

holds for all Schwartz functions $f$; if $S$ is a $k$-dimensional submanifold of ${\bf R}^d$ (possibly with boundary), we write $R_S(p \rightarrow q)$ for $R_\mu(p \rightarrow q)$, where $\mu$ is the $k$-dimensional surface measure on $S$. Thus, for instance, we trivially always have $R_S(1 \rightarrow \infty)$, while Theorem 1 asserts that $R_{S^{d-1}}(p \rightarrow 2)$ holds whenever $1 \leq p < \frac{4d}{3d+1}$. We will not give a comprehensive survey of restriction theory in these notes, but instead focus on some model results that showcase some of the basic techniques in the field. (I have a more detailed survey on this topic from 2003, but it is somewhat out of date.)

** — 1. Necessary conditions — **

It is relatively easy to find necessary conditions for a restriction estimate $R_\mu(p \rightarrow q)$ to hold, as one simply needs to test the estimate (9) against a suitable family of examples. We begin with the simplest case $S = {\bf R}^d$ (with $\mu$ equal to Lebesgue measure). The Hausdorff-Young inequality (4) tells us that we have the restriction estimate $R_{{\bf R}^d}(p \rightarrow p')$ whenever $1 \leq p \leq 2$. These are the only restriction estimates available:

Proposition 3 (Restriction to ${\bf R}^d$) Suppose that $1 \leq p, q \leq \infty$ are such that $R_{{\bf R}^d}(p \rightarrow q)$ holds. Then $q = p'$ and $1 \leq p \leq 2$.

We first establish the necessity of the duality condition $q = p'$. This is easily shown, but we will demonstrate it in three slightly different ways in order to illustrate different perspectives. The first perspective is from scale invariance. Suppose that the estimate $R_{{\bf R}^d}(p \rightarrow q)$ holds, thus one has

$$ \| \hat f \|_{L^q({\bf R}^d)} \lesssim_{d,p,q} \| f \|_{L^p({\bf R}^d)} \qquad (10)$$

for all Schwartz functions $f$. For any scaling factor $\lambda > 0$, we define the scaled version $f_\lambda$ of $f$ by the formula

$$ f_\lambda(x) := f(x/\lambda).$$

Applying (10) with $f$ replaced by $f_\lambda$, we then have

$$ \| \hat{f_\lambda} \|_{L^q({\bf R}^d)} \lesssim_{d,p,q} \| f_\lambda \|_{L^p({\bf R}^d)}.$$

From change of variables, we have

$$ \| f_\lambda \|_{L^p({\bf R}^d)} = \lambda^{d/p} \| f \|_{L^p({\bf R}^d)},$$

and from the definition of Fourier transform and further change of variables we have

$$ \hat{f_\lambda}(\xi) = \lambda^d \hat f(\lambda \xi),$$

so that

$$ \| \hat{f_\lambda} \|_{L^q({\bf R}^d)} = \lambda^{d - d/q} \| \hat f \|_{L^q({\bf R}^d)};$$

combining all these estimates and rearranging, we conclude that

$$ \| \hat f \|_{L^q({\bf R}^d)} \lesssim_{d,p,q} \lambda^{\frac{d}{p} - d + \frac{d}{q}} \| f \|_{L^p({\bf R}^d)}.$$

If the exponent $\frac{d}{p} - d + \frac{d}{q}$ is non-zero, then by sending $\lambda$ either to zero or infinity we conclude that $\hat f = 0$ for all Schwartz functions $f$, which is absurd. Thus we must have the necessary condition $\frac{d}{p} + \frac{d}{q} = d$, or equivalently that $q = p'$.
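The scaling identity $\|f(\cdot/\lambda)\|_{L^p} = \lambda^{d/p} \|f\|_{L^p}$ that drives this argument is easy to confirm numerically; here is a sketch in dimension $d = 1$ with a Gaussian, using a plain Riemann sum (the window $[-50, 50]$ and tolerance are our own choices):

```python
import numpy as np

# Check ||f(./lam)||_p = lam^{d/p} ||f||_p in dimension d = 1 for a Gaussian.
x = np.linspace(-50.0, 50.0, 200001)
dx = x[1] - x[0]
p, lam = 3.0, 2.5

def lp_norm(g):
    # Riemann-sum approximation of the L^p norm on the real line
    return (np.sum(np.abs(g) ** p) * dx) ** (1.0 / p)

f = np.exp(-x**2)
f_lam = np.exp(-(x / lam) ** 2)          # f_lambda(x) = f(x/lambda)
print(abs(lp_norm(f_lam) - lam ** (1.0 / p) * lp_norm(f)) < 1e-6)  # True
```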

We now establish the same necessary condition from the perspective of dimensional analysis, which one can view as an abstraction of scale invariance arguments. We give the spatial variable $x$ a unit of $\hbox{length}$. It is not so important what units we assign to the range of the function $f$ (it will cancel out of both sides), but let us make it dimensionless for sake of discussion. Then the norm

$$ \| f \|_{L^p({\bf R}^d)} = \left( \int_{{\bf R}^d} |f(x)|^p\ dx \right)^{1/p}$$

will have the units of $\hbox{length}^{d/p}$, because integration against $d$-dimensional Lebesgue measure $dx$ will have the units of $\hbox{length}^d$ (note this conclusion can also be justified in the limiting case $p = \infty$). For similar reasons, the Fourier transform

$$ \hat f(\xi) = \int_{{\bf R}^d} f(x) e^{-2\pi i x \cdot \xi}\ dx$$

will have the units of $\hbox{length}^d$; also, the frequency variable $\xi$ must have the units of $\hbox{length}^{-1}$ in order to make the exponent $-2\pi i x \cdot \xi$ appearing in the exponential dimensionless. As such, the norm

$$ \| \hat f \|_{L^q({\bf R}^d)}$$

has units $\hbox{length}^{d - d/q}$. In order for the estimate (10) to be dimensionally consistent, we must therefore have $d - \frac{d}{q} = \frac{d}{p}$, or equivalently that $q = p'$.

Finally, we establish the necessary condition $q = p'$ once again using the example of a rescaled bump function, which is basically the same as the first approach but with $f$ replaced by a bump function. We will argue at a slightly heuristic level, but it is not difficult to make the arguments below rigorous and we leave this as an exercise to the reader. Given a length scale $\lambda > 0$, let $f$ be a bump function adapted to the ball $B(0, \lambda)$ of radius $\lambda$ around the origin, thus $f(x) = \phi(x/\lambda)$ where $\phi$ is some fixed test function supported on $B(0,1)$. We refer to this as a bump function *adapted* to $B(0,\lambda)$; more generally, given an ellipsoid (or other convex region, such as a cube, tube, or disk) $E$, we define a bump function adapted to $E$ to be a function of the form $\phi \circ L^{-1}$, where $L$ is an affine map from $B(0,1)$ (or other fixed convex region) to $E$ and $\phi$ is a bump function with all derivatives uniformly bounded. As long as $\phi$ is non-zero, the norm $\| f \|_{L^p({\bf R}^d)}$ is comparable to $\lambda^{d/p}$ (up to constant factors that can depend on $\phi, p, d$ but are independent of $\lambda$). The uncertainty principle then predicts that the Fourier transform $\hat f$ will be concentrated in the dual ball $B(0, 1/\lambda)$, and within this ball (or perhaps a slightly smaller version of this ball) $\hat f$ would be expected to be of size comparable to $\lambda^d$ (the phase $e^{-2\pi i x \cdot \xi}$ does not vary enough to cause significant cancellation). From this we expect $\| \hat f \|_{L^q({\bf R}^d)}$ to be comparable in size to $\lambda^d \lambda^{-d/q} = \lambda^{d - d/q}$. If (10) held, we would then have

$$ \lambda^{d - d/q} \lesssim \lambda^{d/p}$$

for all $\lambda > 0$, which is only possible if $d - \frac{d}{q} = \frac{d}{p}$, or equivalently $q = p'$.

Now we turn to the other necessary condition $p \leq 2$. Here one does not use scaling considerations; instead, it is more convenient to work with randomised examples. A useful tool in this regard is Khintchine’s inequality, which encodes the *square root cancellation* heuristic that a sum of numbers or functions $a_1, \dots, a_n$ with randomised signs (or phases) should have magnitude roughly comparable to the *square function* $(\sum_{k=1}^n |a_k|^2)^{1/2}$.

Lemma 4 (Khintchine’s inequality) Let $0 < p < \infty$, and let $\epsilon_1, \dots, \epsilon_n$ be independent random variables that each take the values $+1$ and $-1$ with an equal probability of $1/2$.

- (i) For any complex numbers $a_1, \dots, a_n$, one has
$$ \left( {\bf E} \Big| \sum_{k=1}^n \epsilon_k a_k \Big|^p \right)^{1/p} \sim_p \left( \sum_{k=1}^n |a_k|^2 \right)^{1/2}.$$
- (ii) For any functions $f_1, \dots, f_n$ on a measure space $X$, one has
$$ \left( {\bf E} \Big\| \sum_{k=1}^n \epsilon_k f_k \Big\|_{L^p(X)}^p \right)^{1/p} \sim_p \left\| \left( \sum_{k=1}^n |f_k|^2 \right)^{1/2} \right\|_{L^p(X)}.$$

*Proof:* We begin with (i). By taking real and imaginary parts we may assume without loss of generality that the $a_k$ are all real, then by normalisation it suffices to show the upper bound

$$ \left( {\bf E} \Big| \sum_{k=1}^n \epsilon_k a_k \Big|^p \right)^{1/p} \lesssim_p 1 \qquad (11)$$

and the matching lower bound

$$ \left( {\bf E} \Big| \sum_{k=1}^n \epsilon_k a_k \Big|^p \right)^{1/p} \gtrsim_p 1 \qquad (12)$$

for all $0 < p < \infty$, whenever $a_1, \dots, a_n$ are real numbers with $\sum_{k=1}^n a_k^2 = 1$.

When $p = 2$ the upper and lower bounds follow by direct calculation (in fact we have equality in this case). By Hölder’s inequality, this yields the upper bound (11) for $p \leq 2$ and the lower bound (12) for $p \geq 2$. To handle the remaining cases of (11) it is convenient to use the exponential moment method. Let $\lambda > 0$ be an arbitrary threshold, and consider the upper tail probability

$$ {\bf P} \left( \sum_{k=1}^n \epsilon_k a_k \geq \lambda \right).$$

For any $t > 0$, we see from Markov’s inequality that this quantity is less than or equal to

$$ e^{-t\lambda}\, {\bf E} \exp\left( t \sum_{k=1}^n \epsilon_k a_k \right).$$

The expectation here can be computed to equal

$$ \prod_{k=1}^n \cosh( t a_k ).$$

By comparing power series we see that $\cosh(x) \leq e^{x^2/2}$ for any real $x$, hence by the normalisation $\sum_{k=1}^n a_k^2 = 1$ we see that

$$ {\bf P} \left( \sum_{k=1}^n \epsilon_k a_k \geq \lambda \right) \leq e^{-t\lambda} e^{t^2/2}.$$

If we set $t := \lambda$ we conclude that

$$ {\bf P} \left( \sum_{k=1}^n \epsilon_k a_k \geq \lambda \right) \leq e^{-\lambda^2/2};$$

since the random variable $\sum_{k=1}^n \epsilon_k a_k$ is symmetric around the origin, we conclude that

$$ {\bf P} \left( \Big| \sum_{k=1}^n \epsilon_k a_k \Big| \geq \lambda \right) \leq 2 e^{-\lambda^2/2}.$$

From the Fubini-Tonelli theorem we have

$$ {\bf E} \Big| \sum_{k=1}^n \epsilon_k a_k \Big|^p = p \int_0^\infty \lambda^{p-1}\, {\bf P} \left( \Big| \sum_{k=1}^n \epsilon_k a_k \Big| \geq \lambda \right)\ d\lambda,$$

and this then gives the upper bound (11) for any $2 < p < \infty$. The claim (12) for $0 < p < 2$ then follows from this, Hölder’s inequality (applied in reverse), and the fact that (12) was already established for $p = 2$.

To prove (ii), observe from (i) that for every $x \in X$ one has

$$ {\bf E} \Big| \sum_{k=1}^n \epsilon_k f_k(x) \Big|^p \sim_p \left( \sum_{k=1}^n |f_k(x)|^2 \right)^{p/2};$$

integrating in $x$ and applying the Fubini-Tonelli theorem, we obtain the claim. $\Box$
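Khintchine’s inequality is easy to see in action by Monte Carlo. For $p = 4$ one can even predict the comparability constant, since direct expansion gives ${\bf E} |\sum_k \epsilon_k a_k|^4 = 3 (\sum_k a_k^2)^2 - 2 \sum_k a_k^4$. A sketch (sample sizes are arbitrary choices):

```python
import numpy as np

# Monte Carlo check of Khintchine's inequality: for random signs eps_k,
# (E |sum_k eps_k a_k|^p)^{1/p} is comparable to (sum_k a_k^2)^{1/2}.
rng = np.random.default_rng(1)
a = rng.standard_normal(50)
eps = rng.choice([-1.0, 1.0], size=(100000, 50))
sums = eps @ a                      # 100000 samples of sum_k eps_k a_k
p = 4.0
lhs = (np.abs(sums) ** p).mean() ** (1.0 / p)
sq = np.sqrt(np.sum(a**2))
# for p = 4 the exact ratio is (3 - 2*sum(a^4)/(sum(a^2))^2)^{1/4}, close to 3^{1/4}
print(0.5 < lhs / sq < 2.0)  # True: comparable up to constants
```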

Exercise 5

- (i) How does the implied constant in (11) depend on $p$ in the limit $p \rightarrow \infty$ if one analyses the above argument more carefully?
- (ii) Establish (11) in the case when $p$ is an even integer by direct expansion of the left-hand side and some combinatorial calculation. How does the dependence of the implied constant in (11) on $p$ compare with (i) if one does this?
- (iii) Establish a matching lower bound (up to absolute constants) for the implied constant in (11).

Now we show that the estimate (10) fails in the remaining regime $p > 2$, even when $q = p'$. Here, the idea is to have $f$ “spread out” in physical space (in order to keep the $L^p$ norm low), while also having $\hat f$ somewhat spread out in frequency space (in order to prevent the $L^q$ norm from dropping too much). We use the probabilistic method (constructing a random counterexample rather than a deterministic one) in order to exploit Khintchine’s inequality. Let $\phi$ be a non-zero bump function supported on (say) the unit ball $B(0,1)$, and consider a (random) function $f$ of the form

$$ f = \sum_{k=1}^n \epsilon_k \phi(\cdot - x_k),$$

where $\epsilon_1, \dots, \epsilon_n$ are the random signs from Lemma 4, and $x_1, \dots, x_n$ are sufficiently separated points in ${\bf R}^d$ (all we need for this construction is that $|x_k - x_l| \geq 2$ for all $k \neq l$); thus $f$ is the random sum of bump functions adapted to the disjoint balls $B(x_k, 1)$. In particular, the summands here have disjoint supports and

$$ \| f \|_{L^p({\bf R}^d)} \sim_{p,\phi} n^{1/p}$$

(note that the signs $\epsilon_k$ have no effect on the magnitude of $\|f\|_{L^p({\bf R}^d)}$). If (10) were true, this would give the (deterministic) bound

$$ \| \hat f \|_{L^{p'}({\bf R}^d)} \lesssim_{d,p,\phi} n^{1/p}. \qquad (13)$$

On the other hand, the Fourier transform of $f$ is

$$ \hat f(\xi) = \sum_{k=1}^n \epsilon_k e^{-2\pi i x_k \cdot \xi} \hat \phi(\xi),$$

so by Khintchine’s inequality

$$ \left( {\bf E} \| \hat f \|_{L^{p'}({\bf R}^d)}^{p'} \right)^{1/p'} \sim_{p} \left\| \left( \sum_{k=1}^n |e^{-2\pi i x_k \cdot \xi} \hat \phi(\xi)|^2 \right)^{1/2} \right\|_{L^{p'}({\bf R}^d)}.$$

The phases $e^{-2\pi i x_k \cdot \xi}$ can be deleted, and $\hat \phi$ is not identically zero, so one arrives at

$$ \left( {\bf E} \| \hat f \|_{L^{p'}({\bf R}^d)}^{p'} \right)^{1/p'} \gtrsim_{p,\phi} n^{1/2}.$$

Comparing this with (13) and sending $n \rightarrow \infty$, we obtain a contradiction if $p > 2$. This completes the proof of Proposition 3. $\Box$

Exercise 6 Find a deterministic construction that explains why the estimate (10) fails when $p > 2$ and $q = p'$.

Exercise 7 (Marcinkiewicz-Zygmund theorem) Let $X, Y$ be measure spaces, let $0 < p, q < \infty$, and suppose $T: L^p(X) \rightarrow L^q(Y)$ is a bounded linear operator with operator norm $A$. Show that

$$ \left\| \left( \sum_{k \in K} |T f_k|^2 \right)^{1/2} \right\|_{L^q(Y)} \lesssim_{p,q} A \left\| \left( \sum_{k \in K} |f_k|^2 \right)^{1/2} \right\|_{L^p(X)}$$

for any at most countable index set $K$ and any functions $f_k \in L^p(X)$. Informally, this result asserts that if a linear operator is bounded from scalar-valued functions to scalar-valued functions, then it is automatically bounded from vector-valued (more precisely, $\ell^2$-valued) functions to vector-valued functions.

Exercise 8 Let $U$ be a bounded open subset of ${\bf R}^d$, and let $1 \leq p, q \leq \infty$. Show that $R_U(p \rightarrow q)$ holds if and only if $1 \leq p \leq 2$ and $q \leq p'$. (Note: in order to use either the scale invariance argument or the dimensional analysis argument to get the condition $q \leq p'$, one should replace $U$ with something like a ball of some radius $\lambda$, and allow the estimates to depend on $\lambda$.)

Now we study the restriction problem for two model hypersurfaces:

- (i) The *paraboloid* $H$, equipped with the measure $\mu$ induced from Lebesgue measure in the horizontal variables $\xi' \in {\bf R}^{d-1}$ (note this is *not* the same as surface measure on $H$, although it is mutually absolutely continuous with that measure).
- (ii) The sphere $S^{d-1}$.

These two hypersurfaces differ from each other in one important respect: the paraboloid is non-compact, while the sphere is compact. Aside from that, though, they behave very similarly; they are both smooth hypersurfaces with everywhere positive curvature. Furthermore, they are also very highly symmetric surfaces. The sphere of course enjoys the rotation symmetry under the orthogonal group $O(d)$. At first glance the paraboloid only enjoys symmetry under the smaller orthogonal group $O(d-1)$ that rotates the horizontal variable $\xi'$ (leaving the final coordinate unchanged), but it also has a family of Galilean symmetries

for any , which preserves (and also can be seen to preserve the measure , since the horizontal variable is simply translated by ). Furthermore, the paraboloid also enjoys a *parabolic scaling symmetry*

for any $\lambda > 0$, for which the sphere does not have an exact analogue (though morally Taylor expansion suggests that the sphere “behaves like” the paraboloid at small scales). The following exercise exploits these symmetries:

Exercise 9

- (i) Let be a bounded non-empty open subset of , and let . Show that holds if and only if holds.
- (ii) Let be bounded non-empty open subsets of (endowed with the restriction of to ), and let . Show that holds if and only if holds.
- (iii) Suppose that are such that holds. Show that . (*Hint:* Any of the three methods of scale invariance, dimensional analysis, or rescaled bump functions will work here.)
- (iv) Suppose that are such that holds. Show that . (*Hint:* The same three methods still work, but some will be easier to pull off than others.)
- (v) Suppose that are such that holds for some bounded non-empty open subset of , and that . Conclude that holds.
- (vi) Suppose that are such that holds, and that . Conclude that holds.

Exercise 10 (No non-trivial restriction estimates for flat hypersurfaces) Let $U$ be an open non-empty subset of a hyperplane in ${\bf R}^d$, and let $1 \leq p, q \leq \infty$. Show that $R_U(p \rightarrow q)$ can only hold when $p = 1$.

To obtain a further necessary condition on the restriction estimates $R_{S^{d-1}}(p \rightarrow q)$ or $R_H(p \rightarrow q)$ holding, it is convenient to dualise the restriction estimate to an *extension estimate*.

Exercise 11 (Duality) Let $\mu$ be a Radon measure on ${\bf R}^d$, let $1 \leq p, q \leq \infty$, and let $A > 0$. Show that the following claims are equivalent:

- (i) (Restriction estimate) One has
$$ \| \hat f \|_{L^q({\bf R}^d, \mu)} \leq A \| f \|_{L^p({\bf R}^d)} \qquad (14)$$
for all Schwartz functions $f$.

- (ii) (Extension estimate) One has
$$ \| (g\, d\mu)^\vee \|_{L^{p'}({\bf R}^d)} \leq A \| g \|_{L^{q'}({\bf R}^d, \mu)} \qquad (15)$$
for all simple functions $g$, where the inverse Fourier transform $(g\, d\mu)^\vee$ of the finite measure $g\, d\mu$ is defined by the formula
$$ (g\, d\mu)^\vee(x) := \int_{{\bf R}^d} e^{2\pi i x \cdot \xi} g(\xi)\ d\mu(\xi).$$

This gives a further necessary condition as follows. Suppose for instance that $R_{S^{d-1}}(p \rightarrow q)$ holds; then by the above exercise, one has

$$ \| (g\, d\sigma)^\vee \|_{L^{p'}({\bf R}^d)} \lesssim \| g \|_{L^{q'}(S^{d-1}, d\sigma)}$$

for all $g$. In particular, $(d\sigma)^\vee \in L^{p'}({\bf R}^d)$. However, we have the following stationary phase computation:

Exercise 12 Show that

$$ (d\sigma)^\vee(x) = \sum_{\pm} C_\pm \frac{e^{\pm 2\pi i |x|}}{|x|^{(d-1)/2}} + O( |x|^{-(d+1)/2} )$$

for all $|x| \geq 1$ and some non-zero constants $C_\pm$ depending only on $d$. Conclude that the estimate $R_{S^{d-1}}(p \rightarrow q)$ can only hold if $p' > \frac{2d}{d-1}$.

Exercise 13 Show that the estimate $R_H(p \rightarrow q)$ for the paraboloid $H$ can only hold if $p' = \frac{d+1}{d-1} q$. (*Hint:* one can explicitly test (15) when $g$ is a gaussian; the fact that gaussians are not, strictly speaking, compactly supported can be dealt with by a limiting argument.)

It is conjectured that the necessary conditions claimed above are sufficient. Namely, we have

Conjecture 14 (Restriction conjecture for the sphere) Let $d \geq 2$. Then $R_{S^{d-1}}(p \rightarrow q)$ holds whenever $p' > \frac{2d}{d-1}$ and $p' \geq \frac{d+1}{d-1} q$.

Conjecture 15 (Restriction conjecture for the paraboloid) Let $d \geq 2$. Then $R_H(p \rightarrow q)$ holds for the paraboloid $H$ whenever $p' > \frac{2d}{d-1}$ and $p' = \frac{d+1}{d-1} q$.

It is also conjectured that Conjecture 14 holds if one replaces the sphere by any bounded open non-empty subset of the paraboloid .

The current status of these conjectures is that they are fully resolved in the two-dimensional case $d = 2$ (as we will see later in these notes) and only partially resolved in higher dimensions. For instance, in three dimensions one of the strongest results currently known is due to Hong Wang, who established the restriction estimate for a bounded open non-empty subset of the paraboloid when $p' > 3 + \frac{3}{13}$ (conjecturally this should hold for all $p' > 3$); for higher dimensions see this paper of Hickman and Rogers for the most recent results.

We close this section with an important connection between the restriction conjecture and another conjecture known as the *Kakeya maximal function conjecture*. To describe this connection, we first give an alternate derivation of the necessary condition $p' \geq \frac{d+1}{d-1} q$ in Conjecture 14, using a basic example known as the *Knapp example* (as described for instance in this article of Strichartz).

Let $\kappa$ be a spherical cap in $S^{d-1}$ of some small radius $0 < \delta < 1$, thus $\kappa = \{ \omega \in S^{d-1}: |\omega - \omega_0| \leq \delta \}$ for some $\omega_0 \in S^{d-1}$. Let $g$ be a bump function adapted to this cap, say $g(\omega) = \phi( \frac{\omega - \omega_0}{\delta} )$ where $\phi$ is a fixed non-zero bump function supported on $B(0,1)$. We refer to this as a *Knapp example* at frequency $\omega_0$ (and spatially centred at the origin). The cap $\kappa$ (or any slightly smaller version of $\kappa$) has surface measure $\sim \delta^{d-1}$, thus

$$ \| g \|_{L^{q'}(S^{d-1}, d\sigma)} \sim_{q,\phi} \delta^{(d-1)/q'}$$

for any $0 < q' \leq \infty$. We then apply the extension operator to $g$:

$$ (g\, d\sigma)^\vee(x) = \int_{S^{d-1}} e^{2\pi i x \cdot \omega} g(\omega)\ d\sigma(\omega).$$

The integrand is only non-vanishing if $|\omega - \omega_0| \leq \delta$; since also from the cosine rule we have

$$ \omega_0 \cdot (\omega - \omega_0) = -\frac{1}{2} |\omega - \omega_0|^2,$$

we also have $|\omega_0 \cdot (\omega - \omega_0)| \leq \delta^2/2$. Thus, if $x$ lies in the tube

$$ T := \{ x \in {\bf R}^d: |x \cdot \omega_0| \leq c \delta^{-2}, |x - (x \cdot \omega_0) \omega_0| \leq c \delta^{-1} \}$$

for a sufficiently small absolute constant $c > 0$, then the phase $e^{2\pi i x \cdot (\omega - \omega_0)}$ has real part $\geq \frac{1}{2}$ on the support of the integrand. If we set $\phi$ to be non-negative and not identically zero, and note that $|(g\, d\sigma)^\vee(x)| = |\int_{S^{d-1}} e^{2\pi i x \cdot (\omega - \omega_0)} g(\omega)\ d\sigma(\omega)|$, we conclude that

$$ |(g\, d\sigma)^\vee(x)| \gtrsim_\phi \delta^{d-1}$$

for $x \in T$. Since the tube $T$ has dimensions $\sim \delta^{-1} \times \dots \times \delta^{-1} \times \delta^{-2}$, its volume is

$$ |T| \sim_c \delta^{-(d+1)}$$

and thus

$$ \| (g\, d\sigma)^\vee \|_{L^{p'}({\bf R}^d)} \gtrsim_{c,\phi,p} \delta^{d-1} \delta^{-(d+1)/p'}$$

for any $0 < p' \leq \infty$. By Exercise 11, we thus see that if the estimate $R_{S^{d-1}}(p \rightarrow q)$ holds, then

$$ \delta^{d-1} \delta^{-(d+1)/p'} \lesssim \delta^{(d-1)/q'}$$

for all small $\delta$; sending $\delta$ to zero, we conclude that

$$ d - 1 - \frac{d+1}{p'} \geq \frac{d-1}{q'},$$

or equivalently that $p' \geq \frac{d+1}{d-1} q$, recovering the second necessary condition in Conjecture 14.

Exercise 16

- (i) By considering a random superposition of Knapp examples located at different frequencies $\omega_0$, and using Khintchine’s inequality, recover the first necessary condition $p' > \frac{2d}{d-1}$ of Conjecture 14.
- (ii) Suppose that holds for some . Establish the estimate
whenever is a collection of tubes – that is to say, sets of the form

whose directions are -separated (thus for any two distinct ), and the are arbitrary real numbers.

- (iii) Establish claims (i) and (ii) with the sphere replaced by a bounded non-empty open subset of the paraboloid .

Using this exercise, we can show that restriction estimates imply assertions about the dimension of Kakeya sets (also known as *Besicovitch sets*).

Exercise 17 (Restriction implies Kakeya) Assume that either Conjecture 14 or Conjecture 15 holds. Define a *Kakeya set* to be a compact subset $E$ of ${\bf R}^d$ that contains a unit line segment in every direction (thus for every $\omega \in S^{d-1}$, there exists a line segment $\{ x_0 + t \omega: 0 \leq t \leq 1 \}$ for some $x_0 \in {\bf R}^d$ that is contained in $E$). Show that for any $0 < \delta < 1$, the $\delta$-neighbourhood of $E$ has Lebesgue measure $\gtrsim_\epsilon \delta^\epsilon$ for any $\epsilon > 0$. (This is equivalent to the assertion that $E$ has Minkowski dimension $d$.) It is also possible to show that the restriction conjecture implies that all Kakeya sets have Hausdorff dimension $d$, but this is trickier; see this paper of Bourgain. (This can be viewed as a challenge problem for those students who are familiar with the concept of Hausdorff dimension.)

The *Kakeya conjecture* asserts that all Kakeya sets in ${\bf R}^d$ have Minkowski and Hausdorff dimension equal to $d$. As with the restriction conjecture, this is known to be true in two dimensions (as was first proven by Davies), but only partial results are known in higher dimensions. For instance, in three dimensions, Kakeya sets are known to have (upper) Minkowski dimension at least $\frac{5}{2} + \epsilon_0$ for some absolute constant $\epsilon_0 > 0$ (a result of Katz, Laba, and myself), and more recently the same type of lower bound has been established for Hausdorff dimension (a result of Katz and Zahl). For the latest results in higher dimensions, see these papers of Hickman-Rogers-Zhang and Zahl.

Much of the modern progress on the restriction conjecture has come from trying to reverse the implication in Exercise 17, and use known partial results towards the Kakeya conjecture (or its relatives) to obtain restriction estimates. We will not give the latest arguments in this direction here, but give an illustrative example (in the multilinear setting) at the end of this set of notes.

** — 2. $L^2$ theory — **

One of the best understood cases of the restriction conjecture is the $q = 2$ case. Note that Conjecture 14 asserts that $R_{S^{d-1}}(p \rightarrow 2)$ holds whenever $p' \geq \frac{2(d+1)}{d-1}$, and Conjecture 15 asserts that $R_H(p \rightarrow 2)$ holds for the paraboloid $H$ when $p' = \frac{2(d+1)}{d-1}$. Theorem 1 already gave a partial result in this direction. Now we establish the full range of the $q = 2$ restriction conjecture, due to Tomas and Stein:

Theorem 18 (Tomas-Stein restriction theorem) Let $d \geq 2$. Then $R_{S^{d-1}}(p \rightarrow 2)$ holds for all $p' \geq \frac{2(d+1)}{d-1}$, and $R_H(p \rightarrow 2)$ holds for the paraboloid $H$ when $p' = \frac{2(d+1)}{d-1}$.

The exponent $\frac{2(d+1)}{d-1}$ is sometimes referred to in the literature as the *Tomas-Stein exponent*, though the dual exponent $\frac{2(d+1)}{d+3}$ is also referred to by this name.

We first establish the restriction estimate $R_{S^{d-1}}(p \rightarrow 2)$ in the non-endpoint case $p' > \frac{2(d+1)}{d-1}$ by an interpolation method. Fix $p$ in this range. By the identity (5) and Hölder’s inequality, it suffices to establish the inequality

$$ \| f * (d\sigma)^\vee \|_{L^{p'}({\bf R}^d)} \lesssim_{d,p} \| f \|_{L^p({\bf R}^d)}.$$
We use the standard technique of *dyadic decomposition*. Let $\phi$ be a bump function supported on $B(0,2)$ that equals $1$ on $B(0,1)$. Then one has the telescoping series

$$ 1 = \phi(x) + \sum_{j=1}^\infty \psi_j(x)$$

for any $x \in {\bf R}^d$, where $\psi_j(x) := \phi(x/2^j) - \phi(x/2^{j-1})$ is a bump function supported on the annulus $\{ x \in {\bf R}^d: 2^{j-1} \leq |x| \leq 2^{j+1} \}$. We can then decompose the convolution kernel as

$$ (d\sigma)^\vee = \phi\, (d\sigma)^\vee + \sum_{j=1}^\infty \psi_j\, (d\sigma)^\vee,$$

so by the triangle inequality it will suffice to establish the bounds

$$ \| f * (\phi\, (d\sigma)^\vee) \|_{L^{p'}({\bf R}^d)} \lesssim_{d,p} \| f \|_{L^p({\bf R}^d)} \qquad (18)$$

and

$$ \| f * (\psi_j\, (d\sigma)^\vee) \|_{L^{p'}({\bf R}^d)} \lesssim_{d,p} 2^{-\epsilon j} \| f \|_{L^p({\bf R}^d)} \qquad (19)$$

for all $j \geq 1$ and some constant $\epsilon > 0$ depending only on $d, p$.
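The telescoping structure of this dyadic decomposition can be seen concretely. The following sketch uses a piecewise-linear stand-in for the smooth cutoff (equal to $1$ on $B(0,1)$ and supported in $B(0,2)$, which is all the telescoping needs):

```python
import numpy as np

# Telescoping dyadic decomposition: with phi = 1 on |x| <= 1 and supported in
# |x| <= 2, the pieces psi_j(x) = phi(x / 2^j) - phi(x / 2^(j-1)) satisfy
#   phi(x) + sum_{j=1}^J psi_j(x) = phi(x / 2^J),
# which converges to 1 everywhere as J -> infinity.
def phi(x):
    # piecewise-linear stand-in for a bump function (not smooth, but enough
    # to exhibit the telescoping identity)
    return np.clip(2.0 - np.abs(x), 0.0, 1.0)

x = np.linspace(-100.0, 100.0, 5001)
J = 10
partial = phi(x) + sum(phi(x / 2**j) - phi(x / 2**(j - 1)) for j in range(1, J + 1))
print(np.allclose(partial, phi(x / 2**J)))  # True: the sum telescopes exactly
print(np.allclose(partial, 1.0))            # True, since here |x| <= 2^J
```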

The function $\phi\, (d\sigma)^\vee$ is smooth and compactly supported, so (18) is immediate from Young’s inequality (note that $p \leq p'$ when $1 \leq p \leq 2$). So it remains to prove (19). Firstly, we recall from (7) (or (8)) that the kernel $\psi_j\, (d\sigma)^\vee$ is of magnitude $O(2^{-j(d-1)/2})$ on its support. Thus by Young’s inequality we have

$$ \| f * (\psi_j\, (d\sigma)^\vee) \|_{L^\infty({\bf R}^d)} \lesssim_d 2^{-j(d-1)/2} \| f \|_{L^1({\bf R}^d)}. \qquad (20)$$
We now complement this with an $L^2$ estimate. The Fourier transform of $\psi_j\, (d\sigma)^\vee$ can be computed as

$$ \widehat{\psi_j\, (d\sigma)^\vee}(\xi) = \int_{S^{d-1}} \hat{\psi_j}(\xi - \eta)\ d\sigma(\eta)$$

for any $\xi \in {\bf R}^d$, and hence by the triangle inequality and the rapid decay of the Schwartz function $\hat{\psi_j}$ we have

$$ |\widehat{\psi_j\, (d\sigma)^\vee}(\xi)| \lesssim_{d,N} \int_{S^{d-1}} 2^{jd} \langle 2^j (\xi - \eta) \rangle^{-N}\ d\sigma(\eta)$$

for any $N > 0$. By dyadic decomposition we then have

$$ |\widehat{\psi_j\, (d\sigma)^\vee}(\xi)| \lesssim_{d,N} \sum_{k=0}^\infty 2^{jd} 2^{-Nk}\, \sigma( \{ \eta \in S^{d-1}: |\xi - \eta| \leq 2^{k-j} \} ).$$

From elementary geometry we have

$$ \sigma( \{ \eta \in S^{d-1}: |\xi - \eta| \leq r \} ) \lesssim_d r^{d-1}$$

for any $r > 0$ (basically because the sphere is $(d-1)$-dimensional), and then on summing the geometric series (taking, say, $N := d$) we conclude that

$$ |\widehat{\psi_j\, (d\sigma)^\vee}(\xi)| \lesssim_d 2^{jd} 2^{-j(d-1)} = 2^j.$$

From Plancherel’s theorem we conclude that

$$ \| f * (\psi_j\, (d\sigma)^\vee) \|_{L^2({\bf R}^d)} \lesssim_d 2^j \| f \|_{L^2({\bf R}^d)}. \qquad (21)$$
Applying either the Marcinkiewicz interpolation theorem or the Riesz-Thorin interpolation theorem to (20) and (21), we conclude (after some arithmetic) the required estimate (19) with

$$ \epsilon = \frac{d+1}{p} - \frac{d+3}{2},$$

which is indeed positive when $p < \frac{2(d+1)}{d+3}$, i.e. when $p' > \frac{2(d+1)}{d-1}$.
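For the record, here is a sketch of the arithmetic behind this exponent (writing, in our own shorthand, $K_j$ for the $j$-th dyadic piece of the kernel):

```latex
% Interpolate the L^1 -> L^infty bound (20) (gain 2^{-j(d-1)/2}) against the
% L^2 -> L^2 bound (21) (loss 2^j), taking 1/p = theta/1 + (1-theta)/2,
% i.e. theta = 2/p - 1:
\| f * K_j \|_{L^{p'}}
  \lesssim \left( 2^{-j(d-1)/2} \right)^{\theta} \left( 2^{j} \right)^{1-\theta} \| f \|_{L^p}
  = 2^{-j\epsilon} \| f \|_{L^p},
\qquad
\epsilon := \frac{d+1}{2}\theta - 1 = \frac{d+1}{p} - \frac{d+3}{2}.
```

Thus $\epsilon > 0$ exactly when $p < \frac{2(d+1)}{d+3}$, i.e. when $p'$ lies strictly above the Tomas-Stein exponent.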

At the endpoint $p' = \frac{2(d+1)}{d-1}$ the above argument does not quite work; we obtain a decent bound for each dyadic component of the kernel, but then we have trouble getting a good bound for the sum. The original argument of Stein got around this problem by using complex interpolation instead of dyadic decomposition, embedding the kernel in an analytic family of functions. We present here another approach, which is now popular in PDE applications; the basic inputs (namely, an $L^1 \rightarrow L^\infty$ estimate similar to (20), an $L^2 \rightarrow L^2$ estimate similar to (21), and an interpolation) are the same, but we employ the additional tool of Hardy-Littlewood-Sobolev fractional integration to recover the endpoint.

We turn to the details. Set $p := \frac{2(d+1)}{d+3}$, so that $p' = \frac{2(d+1)}{d-1}$. We write ${\bf R}^d$ as ${\bf R}^{d-1} \times {\bf R}$ and parameterise the frequency variable by $\xi = (\xi', \xi_d)$ with $\xi' \in {\bf R}^{d-1}$ and $\xi_d \in {\bf R}$; thus for instance $\hat f(\xi) = \hat f(\xi', \xi_d)$. (One can think of $\xi_d$ as dual to a “time” variable that we will give a privileged role in the physical domain ${\bf R}^d$.) We split the spatial variable $x = (x', x_d)$ similarly. Let $\chi$ be a non-negative bump function localised to a small neighbourhood of the north pole of $S^{d-1}$. By Exercise 9 it will suffice to show that

$$ \| \hat f \|_{L^2(S^{d-1}, \chi\, d\sigma)} \lesssim_d \| f \|_{L^p({\bf R}^d)}$$

for all Schwartz $f$. Squaring as before, it suffices to show that

$$ |\langle f, f * (\chi\, d\sigma)^\vee \rangle| \lesssim_d \| f \|_{L^p({\bf R}^d)}^2.$$
For each $t \in {\bf R}$, let $f_t: {\bf R}^{d-1} \rightarrow {\bf C}$ denote the function $f_t(x') := f(x', t)$, and let $K_t: {\bf R}^{d-1} \rightarrow {\bf C}$ denote the function

$$ K_t(x') := (\chi\, d\sigma)^\vee(x', t).$$

Then we have

$$ f * (\chi\, d\sigma)^\vee(x', t) = \int_{\bf R} (f_s * K_{t-s})(x')\ ds,$$

where on the right-hand side the convolution is now over ${\bf R}^{d-1}$ rather than ${\bf R}^d$. By the Fubini-Tonelli theorem and Minkowski’s inequality, we thus have

$$ |\langle f, f * (\chi\, d\sigma)^\vee \rangle| \leq \int_{\bf R} \int_{\bf R} |\langle f_t, f_s * K_{t-s} \rangle|\ ds\, dt.$$
From Exercise 12 we have the bounds

$$ |K_t(x')| \lesssim_d (1 + |t|)^{-(d-1)/2},$$

leading to the *dispersive estimate*

$$ \| f_s * K_{t-s} \|_{L^\infty({\bf R}^{d-1})} \lesssim_d (1 + |t-s|)^{-(d-1)/2} \| f_s \|_{L^1({\bf R}^{d-1})}$$

for any $s, t \in {\bf R}$ (the claim is vacuous when $f_s$ vanishes). On the other hand, the $(d-1)$-dimensional Fourier transform of $K_t$ can be computed as

$$ \hat{K_t}(\xi') = e^{2\pi i t \sqrt{1 - |\xi'|^2}}\, \frac{\chi(\xi', \sqrt{1 - |\xi'|^2})}{\sqrt{1 - |\xi'|^2}},$$

which is bounded in magnitude by $O_d(1)$ uniformly in $t$ (as $\chi$ is localised near the north pole), hence by Plancherel we have the *energy estimate*

$$ \| f_s * K_{t-s} \|_{L^2({\bf R}^{d-1})} \lesssim_d \| f_s \|_{L^2({\bf R}^{d-1})}.$$

Interpolating, we conclude after some arithmetic that

$$ \| f_s * K_{t-s} \|_{L^{p'}({\bf R}^{d-1})} \lesssim_d (1 + |t-s|)^{-\frac{d-1}{d+1}} \| f_s \|_{L^p({\bf R}^{d-1})}.$$

Applying the one-dimensional Hardy-Littlewood-Sobolev inequality we conclude (after some more arithmetic) that

$$ |\langle f, f * (\chi\, d\sigma)^\vee \rangle| \lesssim_d \left( \int_{\bf R} \| f_t \|_{L^p({\bf R}^{d-1})}^p\ dt \right)^{2/p} = \| f \|_{L^p({\bf R}^d)}^2,$$

and the claim follows. $\Box$

This latter argument can be adapted for the paraboloid, which in turn leads to some very useful estimates for the Schrödinger equation:

Exercise 19 (Strichartz estimates for the Schrödinger equation) Let .

- (i) By modifying the above arguments, establish the restriction estimate .
- (ii) Let , and let denote the function
(This is the solution to the Schrödinger equation with initial data .) Establish the *Strichartz estimate*
- (iii) More generally, with the hypotheses as in (ii), establish the bound
whenever are exponents obeying the scaling condition . (The endpoint case of this estimate is also available when , using a more sophisticated interpolation argument; see this paper of Keel and myself.)

The Strichartz estimates in the above exercise were for the linear Schrödinger equation, but Strichartz estimates can also be established by the same method (namely, interpolating between energy and dispersive estimates) for other linear dispersive equations, such as the linear wave equation $-\partial_{tt} u + \Delta u = 0$. Such Strichartz estimates are a fundamental tool in the modern analysis of *nonlinear* dispersive equations, as they often allow one to view such nonlinear equations as perturbations of linear ones. This topic is too vast to survey in these notes, but see for instance my monograph on this topic.

** — 3. Bilinear estimates — **

A restriction estimate such as

In this section we will show how the consideration of bilinear extension estimates can be used to resolve the restriction conjecture for the circle (i.e., the case of Conjecture 14):

Theorem 20 (Restriction conjecture for $S^1$) One has $R_{S^1}(p \rightarrow q)$ whenever $p' > 4$ and $p' \geq 3q$.

Note from Exercise 9(vi) that this theorem also implies the $d = 2$ case of Conjecture 15. This case of the restriction conjecture was first established by Zygmund; Zygmund’s proof is shorter than the one given here (relying on the Hausdorff-Young inequality (4)), but the arguments given here have broader applicability; in particular, they are also useful in higher-dimensional settings.

To prove this theorem, it suffices to verify it at the endpoint $p' = 3q$, since from Hölder’s inequality the norm on the left-hand side of the restriction estimate is essentially non-decreasing in $q$ (here the underlying measure is arclength measure on $S^1$). By Exercise 9, we may replace the circle here by (say) the first quadrant of the circle; we let $d\sigma$ now denote the arc length measure on that quadrant. (This reduction is technically convenient to avoid having to deal with antipodal points with parallel tangents a little later in the argument.)

By (22) and relabeling, it suffices to show that

We now bilinearise this estimate. It is clear that the estimate (23) is equivalent to

for any $f, g$, since (24) follows from (23) and Hölder’s inequality, and (23) follows from (24) by setting $g := f$.

Right now, the two functions and are both allowed to occupy the entirety of the arc . However, one can get better estimates if one separates the functions to lie in *transverse* sub-arcs (where by “transverse” we mean that there is some non-zero separation between the normal vectors of one sub-arc and the normal vectors of the other). The key estimate is

Proposition 21 (Bilinear estimate) Let be subintervals of such that . Then we have for any , where denotes the arclength measure on .

*Proof:* To avoid some very minor technicalities involving convolutions of measures, let us approximate the arclength measures . Observe that we have

in the sense of distributions, where is the annular region

Thus we have the pointwise bound

and

and similarly for . Hence by monotone convergence it suffices to show that

for sufficiently small . By Plancherel’s theorem, it thus suffices to show that

for , if is sufficiently small. From Young’s inequality one has

so by interpolation it suffices to show that

But this follows from the pointwise bound

for sufficiently small , whose proof we leave as an exercise.
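The geometry behind the omitted pointwise bound, in notation of my own (writing $A^{i}_{\varepsilon}$ for the $\varepsilon$-neighbourhood piece of the annulus sitting over the arc $I_i$), is:

```latex
\bigl( 1_{A^{1}_{\varepsilon}} * 1_{A^{2}_{\varepsilon}} \bigr)(x)
  = \bigl| A^{1}_{\varepsilon} \cap \bigl( x - A^{2}_{\varepsilon} \bigr) \bigr|
  \lesssim \varepsilon^{2} .
```

Each factor is an $\varepsilon$-thick curved slab, and the separation of normals forces the two slabs to cross at an angle bounded away from zero, so their intersection is contained in a box of side $O(\varepsilon)$.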

Remark 23 Higher-dimensional bilinear estimates, involving more complicated manifolds than arcs, play an important role in the modern theory of nonlinear dispersive equations, especially when combined with the formalism of dispersive variants of Sobolev spaces known as $X^{s,b}$ spaces, introduced by Bourgain (and independently by Klainerman-Machedon). See for instance this book of mine for further discussion.

From the triangle inequality we have

so by complex interpolation (which works perfectly well for bilinear operators) we have

for any . The estimate (26) begins to look rather similar to (24), and we can deduce (24) from (26) as follows. Firstly, it is convenient to use Marcinkiewicz interpolation (using the fact that we have an open range of ) to reduce (23) to proving a restricted estimate

for any measurable subset of the circle, so to prove (24) it suffices to show that

We can view the expression as a two-dimensional integral

- It is a known theorem (first conjectured by Klainerman and Machedon) that one has the bilinear restriction theorem

(The range is known to be sharp for (29) except possibly for the endpoint , which remains open currently.) Assuming this result, show that Conjecture 15 holds for all . (*Hint:* one repeats the above arguments, but at one point one will be faced with estimating a bilinear expression involving two “close” regions , which could be very large or very small. The hypothesis (29) does not specify how the implied constants depend on the size or location of , but one can obtain such a dependence by exploiting the translation and Galilean symmetries of the paraboloid.)

** — 4. Multilinear estimates — **

We now turn to multilinear (or more precisely, -linear) Kakeya and restriction estimates, where we happen to have nearly optimal estimates. For instance, we have the following estimate (cf. (17)), first established by Bennett, Carbery, and myself:

Theorem 24 (Multilinear Kakeya estimate) Let , let be sufficiently small, and let . Suppose that are collections of tubes such that each tube in is oriented within of the basis vector . Then we have

Exercise 25 Assuming Theorem 24, obtain an estimate for for any in terms of and , and use examples to show that this estimate is optimal in the sense that the exponents for and can only be improved by epsilon factors at best.

In the two-dimensional case the estimate is easily established with no epsilon loss. Indeed, in this case we can expand the left-hand side of (30) as

But if is a rectangle oriented near , and is a -rectangle oriented near , then is comparable with , and the claim follows.
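As a sanity check on this geometric fact, here is a toy Monte Carlo computation (my own illustration, not from the notes; axis-parallel rectangles of a common width `delta` stand in for the two transversally oriented tubes):

```python
import numpy as np

# Two transverse rectangles: R1 is 1 x delta (long axis along e1),
# R2 is delta x 1 (long axis along e2).  Their intersection is an
# O(delta) x O(delta) cell, so |R1 ∩ R2| should be comparable to delta^2.
delta = 0.05
rng = np.random.default_rng(0)
pts = rng.random((200_000, 2))       # uniform samples in the unit square

in_R1 = pts[:, 1] < delta            # inside [0,1] x [0,delta]
in_R2 = pts[:, 0] < delta            # inside [0,delta] x [0,1]
area = np.mean(in_R1 & in_R2)        # Monte Carlo estimate of |R1 ∩ R2|
ratio = area / delta**2
print(ratio)                         # O(1), consistent with |R1 ∩ R2| ~ delta^2
```

Tilting the rectangles by a bounded angle (as in the actual setup, where the orientations are only close to $e_1$ and $e_2$) changes the intersection measure only by a constant factor.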

The epsilon loss was removed in general dimension by Guth, using the polynomial method. We will not give that argument here, but instead give a simpler proof of Theorem 24, also due to Guth, and based primarily on the method of *induction on scales*. We first treat the case when , that is when all the tubes in each family are completely parallel:

Exercise 26

*Proof:* For each , let be a collection of tubes oriented within of . Our objective is to show that

Let be a small constant depending only on . We partition into cubes of sidelength , then the left-hand side of (33) can be decomposed as

Clearly we can restrict the inner sum to those tubes that actually intersect . For small enough, the intersection of with is contained in a tube oriented within of ; such a tube can be viewed as a rescaling by of a tube, also oriented within of . From (31) and rescaling we conclude that

Now let be the tube with the same central axis and center of mass as . For small enough, if then equals on all of , and hence

Combining all these estimates, we can bound the left-hand side of (33) by

But by (31) and rescaling we have

and the claim follows.

Now let , and let be sufficiently large depending on . If is sufficiently small depending on , then from (32) we have the claim

whenever . On the other hand, from Proposition 26 we see (for large enough) that if (34) holds in some range with then it also holds in the larger range . By induction we then have (34) for all . Combining this with (32), we have shown that

for all , whenever is sufficiently small depending on . This is *almost* what we need to prove Theorem 24, except that we are requiring to be small depending on as well as , whereas Theorem 24 only requires to be sufficiently small depending on and not . We can overcome this (at the cost of worsening the implied constants by an -dependent factor) by the triangle inequality and exploiting affine invariance (somewhat in the spirit of Exercise 9). Namely, suppose that and is only assumed to be small depending on but not on . By what we have previously established, we have

whenever the tubes lie within of , where is a quantity that is sufficiently small depending on . Now we apply a linear transformation to both sides, and also modify slightly, and conclude that for any within of , we still have the bound (35) if the are assumed to lie within (say) of instead of . On the other hand, by compactness (or more precisely, total boundedness), we can find directions that lie within of , such that any other direction that lies within of lies within of one of the . Applying the (quasi-)triangle inequality for , we conclude that

whenever the directions of are merely assumed to lie within of . This concludes the proof of Theorem 24.

Exercise 27 By optimising the parameters in the above argument, refine the estimate in Theorem 24 slightly to for any .

We can use the multilinear Kakeya estimate to prove a multilinear restriction (or more precisely, multilinear extension) estimate:

Theorem 28 (Multilinear restriction estimate) Let , let be sufficiently small, and let . Suppose that are open subsets of that lie within of the basis vector . Then we have

Exercise 29 By modifying the arguments used to prove Exercise 16(ii), show that Theorem 28 implies Theorem 24.

Exercise 30 Assuming Theorem 28, obtain for each and an estimate of the form whenever , , and , and some exponent , and use examples to show that the exponent you obtain is best possible.

Remark 31 In the case, this result with no epsilon loss follows from Proposition 21. It is an open question whether the epsilon can be removed in higher dimensions; see this recent paper of mine for some progress in this direction.

To prove Theorem 28, we again turn to induction on scales; the argument here is a corrected version of one from this paper of Bennett, Carbery and myself, which first appeared in this paper of Bennett. Fix , and let be sufficiently small. For technical reasons it is convenient to replace the subsets of the sphere by annuli. More precisely, for each , let denote the best constant in the inequality

whenever , where is the annular cap

Because we have restricted both the Fourier and spatial domains to be compactly supported, it is clear that is finite for each , thus

Exercise 32 Show that (40) implies Theorem 28. (Hint: starting with , multiply by a suitable weight function that is large on and has Fourier transform supported on , and write this as for a suitable . Then obtain estimates on .)

To establish (40), the key estimate is

Proposition 33 (Induction on scales) For any , one has

Suppose we can establish this claim. Applying Theorem 24, we conclude that if for a sufficiently large depending on , one has

From (39) one has

for all and some , and then an easy induction shows that (41) holds for all , giving the claim.

It remains to prove Proposition 33. We will rely on the *wave packet decomposition*. Informally, this decomposes into a sum of “wave packets” that is approximately of the form

where ranges over -tubes in , oriented in various directions near , and the coefficients obey an type estimate

(This decomposition is inaccurate in a number of technical ways, for instance the sharp cutoff should be replaced by something smoother, but we ignore these issues for the sake of this informal discussion.) Heuristically speaking, (42) is asserting that behaves like a superposition of various (translated) Knapp examples (16) with .

Let us informally indicate why we would expect the wave packet decomposition to hold, and then why it should imply something like Proposition 33. Geometrically, the annular cap behaves like the union of essentially disjoint -disks , each centred at some point on the unit sphere that is close to , and oriented normal to the direction . Thus should behave like the sum of the components . By the uncertainty principle, each such component should behave like a constant multiple of the plane wave on each translate of the dual region to , which is a tube oriented in the direction . By Plancherel’s theorem, the total norm of should equal . Thus we expect to have a decomposition roughly of the form

where is a collection of parallel and boundedly overlapping tubes oriented in the direction , and the are coefficients with

Summing over and collecting powers of , we (heuristically) obtain the wave packet decomposition (42) with bound (43).
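The cap and dual-tube dimensions in this heuristic are the standard uncertainty-principle scales; in my own notation (with $\delta$ the thickness of the annular neighbourhood, tangential directions listed first):

```latex
\theta \sim \underbrace{\delta^{1/2} \times \cdots \times \delta^{1/2}}_{\text{tangential}} \times \delta,
\qquad
T_{\theta} \sim \underbrace{\delta^{-1/2} \times \cdots \times \delta^{-1/2}}_{\text{tangential}} \times \delta^{-1},
```

so each wave packet is essentially a plane wave on a tube of radius $\delta^{-1/2}$ and length $\delta^{-1}$ pointing along the cap's normal direction.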

Now we informally explain why the decomposition (42) (and attendant bound (43)) should yield Proposition 33. Our task is to show that

and the are essentially distinct tubes oriented within of . We cover by balls of radius . On each such ball, the cutoffs are morally constant, and so

From the uncertainty principle, the trigonometric polynomial behaves on like the inverse Fourier transform of a function supported on with

and hence by (38) we expect the expression (46) to be bounded by

which is also morally

Averaging in , we thus expect the left-hand side of (44) to be

Applying a rescaled (and weighted) version of Theorem 24, this is bounded by

and the claim now follows from (45).

Now we begin the rigorous argument. We need to prove (44), and we normalise . By Fubini’s theorem we have

Let be a fixed Schwartz function that is bounded away from zero on and has Fourier transform supported on , thus the function is bounded away from zero on and has Fourier transform supported on . In particular we have

We can write

and observe that has Fourier transform supported in . Thus

Thus it remains to establish the bound

We cover by a collection of disks , each one centered at an element that lies within of , and is oriented with normal , with the separated from each other by . A partition of unity then lets us write where each with

By Plancherel’s theorem the right-hand side is at most

This is morally bounded by

so one has morally bounded the left-hand side of (47) by

In practice, due to the rapid decay of , one has to add some additional terms involving some translates of the balls , but these can be handled by the same method as the one given below and we omit this technicality for brevity. We can write , where is a Schwartz function adapted to a slight dilate of whose inverse Fourier transform is a bump function adapted to a tube oriented along through the origin, and with

This gives a reproducing-type formula

which by Cauchy-Schwarz (or Jensen’s inequality) gives the pointwise bound

By enlarging slightly, we then have

for all , hence

We have thus bounded the left-hand side of (47) by

which we can rearrange as

Using a rescaled version of (31) (and viewing the convolution here as a limit of Riemann sums) we can bound this by

which by (49), (48) is bounded by

giving (47) as desired.

Coronavirus is forcing massive changes on the academic ecosystem, and here’s another:

We’re having a seminar on applied category theory at U. C. Riverside, organized by Joe Moeller and Christian Williams.

It will take place on Wednesdays at 5 pm UTC, which is 10 am in California or 1 pm on the east coast of the United States, or 6 pm in England. It will be held online via Zoom, here:

https://ucr.zoom.us/j/607160601

We will have discussions online here:

https://categorytheory.zulipchat.com/

The first 6 talks will be:

• April 1st, John Baez: Structured cospans and double categories.

Abstract. One goal of applied category theory is to better understand networks appearing throughout science and engineering. Here we introduce “structured cospans” as a way to study networks with inputs and outputs. Given a functor L: A → X, a structured cospan is a diagram in X of the form

L(a) → x ← L(b).

If A and X have finite colimits and L is a left adjoint, we obtain a symmetric monoidal category whose objects are those of A and whose morphisms are certain equivalence classes of structured cospans. However, this arises from a more fundamental structure: a symmetric monoidal double category where the horizontal 1-cells are structured cospans, not equivalence classes thereof. We explain the mathematics and illustrate it with an example from epidemiology.

• April 8th, Prakash Panangaden: A categorical view of conditional expectation.

Abstract. This talk is a fragment from a larger work on approximating Markov processes. I will focus on a functorial definition of conditional expectation without talking about how it was used. We define categories of cones—which are abstract versions of the familiar cones in vector spaces—of measures, and related categories of cones of L_{p} functions. We will state a number of dualities and isomorphisms between these categories. Then we will define conditional expectation by exploiting these dualities: it will turn out that we can define conditional expectation with respect to certain morphisms. These generalize the standard notion of conditioning with respect to a sub-sigma algebra. Why did I use the plural? Because it turns out that there are two kinds of conditional expectation, one of which looks like a left adjoint (in the matrix sense not the categorical sense) and the other looks like a right adjoint. I will review concepts like image measure, Radon-Nikodym derivatives and the traditional definition of conditional expectation. This is joint work with Philippe Chaput, Vincent Danos and Gordon Plotkin.

• April 15, Jules Hedges: Open games: the long road to practical applications

Abstract. I will talk about open games, and the closely related concepts of lenses/optics and open learners. My goal is to report on the successes and failures of an ongoing effort to try to realise the often-claimed benefits of categories and compositionality in actual practice. I will introduce what little theory is needed along the way. Here are some things I plan to talk about:

— Lenses as an abstraction of the chain rule

— Comb diagrams

— Surprising applications of open games: Bayesian inference, value function iteration

— The state of tool support

— Open games in their natural habitat: microeconomics

— Sociological aspects of working with economics

• April 22, Michael Shulman.

• April 29, Gershom Bazerman, A localic approach to the semantics of dependency, conflict, and concurrency.

Abstract. Petri nets have been of interest to applied category theory for some time. Back in the 1980s, one approach to their semantics was given by algebraic gadgets called “event structures.” We use classical techniques from order theory to study event structures without conflict restrictions (which we term “dependency structures with choice”) by their associated “traces”, which let us establish a one-to-one correspondence between DSCs and a certain class of locales. These locales have an internal logic of reachability, which can be equipped with “versioning” modalities that let us abstract away certain unnecessary detail from an underlying DSC. With this in hand we can give a general notion of what it means to “solve a dependency problem” and combinatorial results bounding the complexity of this. Time permitting, I will sketch work-in-progress which hopes to equip these locales with a notion of conflict, letting us capture the full semantics of general event structures in the form of homological data, thus providing one avenue to the topological semantics of concurrent systems. This is joint work with Raymond Puzio.

• May 6, Sarah Rovner-Frydman: Separation logic through a new lens.

• May 13, Tai-Danae Bradley.

• May 20, Gordon Plotkin.

• May 27, To be determined.

• June 3, Nina Otter.

Here is some information provided by Zoom. There are lots of ways to attend, though the simplest for most of you will be going to

https://ucr.zoom.us/j/607160601

at the scheduled times.

Topic: ACT@UCR seminar

Time: Apr 1, 2020 10:00 AM Pacific Time (US and Canada)

Every 7 days, 10 occurrence(s)

Apr 1, 2020 10:00 AM

Apr 8, 2020 10:00 AM

Apr 15, 2020 10:00 AM

Apr 22, 2020 10:00 AM

Apr 29, 2020 10:00 AM

May 6, 2020 10:00 AM

May 13, 2020 10:00 AM

May 20, 2020 10:00 AM

May 27, 2020 10:00 AM

Jun 3, 2020 10:00 AM


Join Zoom Meeting

https://ucr.zoom.us/j/607160601

Meeting ID: 607 160 601

One tap mobile

+16699006833,,607160601# US (San Jose)

+13462487799,,607160601# US (Houston)

Dial by your location

+1 669 900 6833 US (San Jose)

+1 346 248 7799 US (Houston)

+1 646 876 9923 US (New York)

+1 253 215 8782 US

+1 301 715 8592 US

+1 312 626 6799 US (Chicago)

Meeting ID: 607 160 601

Find your local number: https://ucr.zoom.us/u/adkKXYyiHq

Join by SIP

607160601@zoomcrc.com

Join by H.323

162.255.37.11 (US West)

162.255.36.11 (US East)

221.122.88.195 (China)

115.114.131.7 (India Mumbai)

115.114.115.7 (India Hyderabad)

213.19.144.110 (EMEA)

103.122.166.55 (Australia)

209.9.211.110 (Hong Kong)

64.211.144.160 (Brazil)

69.174.57.160 (Canada)

207.226.132.110 (Japan)

Meeting ID: 607 160 601

A friend listed some calls for help:

• The UK urgently needs help from modellers. You must be willing to work on specified tasks and meet deadlines. Previous experience in epidemic modelling is not required.

https://epcced.github.io/ramp/

• MIT is having a “Beat the Pandemic” hackathon online April 3-5. You can help them develop solutions that address the most pressing technical, social, and financial issues caused by the COVID-19 outbreak:

https://covid19challenge.mit.edu/

• The COVID-19 National Scientist Volunteer Database Coordination Team is looking for scientists to help local COVID-19 efforts in the US:

https://docs.google.com/forms/d/e/1FAIpQLScXC56q2tPgz0WbPrhP7WareiclfxfaKQFI0ZbXg4FkKan5iQ/viewform

• The Real Time Epidemic datathon, which started March 30, is a collective open source project for developing real-time and large-scale epidemic forecasting models:

https://www.epidemicdatathon.com/

• Crowdfight COVID-19 is a mailing list that sends lists of tasks for which help is needed:

https://crowdfightcovid19.org/volunteers (address not working when I last checked—maybe overloaded?)

We kick off our informal video series on The Biggest Ideas in the Universe with one of my favorites — *Conservation*, as in “conservation of momentum,” “conservation of energy,” things like that. Readers of The Big Picture will recognize this as one of my favorite themes, and smile with familiarity as we mention some important names — Aristotle, Ibn Sina, Galileo, Laplace, and others.

Remember that for the next couple of days you are encouraged to leave questions about what was discussed in the video, either here or directly at the YouTube page. I will pick some of my favorites and make another video giving some answers.

Update: here’s the Q&A.

Welcome to the second installment in our video series, The Biggest Ideas in the Universe. Today’s idea is Change. This sounds like a big and rather vague topic, and it is, but I’m actually using the label to sneak in a discussion of continuous change through time — what mathematicians call *calculus*. (I know some of you would jump on a video about calculus, but for others we need to ease them in softly.)

Don’t worry, no real equations here — I do write down some symbols, but just so you might recognize them if you come across them elsewhere.

Also, I’m still fiddling with the green-screen technology, and this was not my finest moment. But I think I have settled on a good mixture of settings, so video quality should improve going forward.

**Update (March 31):** Since commenter after commenter seems to have missed my point—or rather, rounded the point to something different that I didn’t say—let me try one more time. My faith in official pronouncements from health authorities, and in institutions like the CDC and the FDA, was clearly *catastrophically* misplaced—and if that doesn’t force significant revisions to my worldview, then I’m beyond hope. Maybe the failures are because these organizations are at the mercy of political incompetents—meaning ultimately Trump and the people who put him in office. Or maybe the rot started long before Trump. Maybe it’s specific to the US, or maybe it’s everywhere. I still don’t know the answers to those questions.

On the other hand, my faith in my ability to listen to individual people, whether they’re expert epidemiologists or virologists or just technologists or rationalists or anyone else (who in turn listened to the experts), and to say “yes, this person clearly has good judgment and has thought about it carefully, and if they’re worried then I should be too”—my faith in *that* has only gone up. The problem is simply that I didn’t do enough of that back in January and February, and when I did, I didn’t sufficiently act on it.

**End of Update**

On Feb. 4, a friend sent me an email that read, in part:

Dr. A,

What do you make of this coronavirus risk? … I don’t know what level of precaution is necessary! Please share your view.

This was the first time that I’d been prompted to give this subject any thought whatsoever. I sent a quick reply two minutes later:

For now, I think the risk from the ordinary flu is much much greater! But worth watching to see if it becomes a real pandemic.

Strictly speaking, this reply was “correct”—even “reasonable” and “balanced,” admitting the possibility of changing circumstances. Yet if I could go back in time, I’d probably send a slightly different message—one that would fare better in the judgment of history. Something like this, maybe:

HOLY SHIT!!!!!—GET YOUR PARENTS SOMEWHERE SAFE—CANCEL ALL TRAVEL PLANS—STOCK UP ON FOOD AND MASKS AND HAND SANITIZERS. SELL ALL STOCK YOU OWN!!! SHORT THE MARKET IF YOU KNOW HOW, OTHERWISE GET CASH AND BONDS. HAVE AN ISOLATED PLACE TO ESCAPE TO. IF YOU’RE FEELING ALTRUISTIC, JOIN GROUPS MAKING THEIR OWN MASKS AND VENTILATORS.

DO NOT RELY ON OFFICIAL PRONOUNCEMENTS, OR REASSURING ARTICLES FROM MAINSTREAM SOURCES LIKE VOX OR THE WASHINGTON POST. THEY’RE FULL OF IT. THE CDC AND OTHER FEDERAL AGENCIES ARE ASLEEP AT THE WHEEL, HOLLOWED-OUT SHELLS OF WHAT YOU IMAGINE THEM TO BE. FOR ALL IT WILL DO IN ITS MOMENT OF ULTIMATE NEED, IT WOULD BE BETTER IF THE CDC NEVER EXISTED.

WHO THEN SHOULD YOU LISTEN TO? CONTRARIAN, RATIONALIST NERDS AND TECH TYCOONS ON SOCIAL MEDIA. BILL GATES, BALAJI SRINIVASAN, PAUL GRAHAM, GREG COCHRAN, ROBIN HANSON, SARAH CONSTANTIN, ELIEZER YUDKOWSKY, NICHOLAS CHRISTAKIS, ERIC WEINSTEIN. NO, NOT ALL SUCH PEOPLE—NOT ELON MUSK, FOR EXAMPLE—BUT YOU’LL DO RIDICULOUSLY BETTER THAN AVERAGE THIS WAY.

BASICALLY, THE MORE SNEERCLUB WOULD SNEER AT A GIVEN PERSON, THE MORE THEY’D CALL THEM AN AUTODIDACT STEMLORD DUNNING-KRUGER ASSHOLE WHO’S THE EMBODIMENT OF EVERYTHING WRONG WITH NEOLIBERAL CAPITALISM, THE MORE YOU SHOULD LISTEN TO THAT PERSON RIGHT NOW FOR THE SAKE OF YOUR AND YOUR LOVED ONES’ FUCKING LIVES.

DON’T WORRY: WITHIN 6-8 WEEKS, WHAT THE CONTRARIANS ARE SAYING TODAY WILL BE CONVENTIONAL WISDOM. THE PUBLICATIONS THAT NOW SNEER AT PANDEMIC PREPPERS WILL TURN AROUND AND SNEER AT THE IRRESPONSIBLE NON-PREPPERS, WITHOUT EVER ADMITTING ERROR. WE’LL ALWAYS HAVE BEEN AT WAR WITH OCEANIA—OR RATHER CORONIA. TRUTH, OFFICIAL RECOMMENDATIONS, AND PROGRESSIVE POLITICS WILL GET BACK INTO ALIGNMENT JUST LIKE THEY NORMALLY ARE, AND WE’LL ALL BE SHARING MEMES JUSTLY DENOUNCING TRUMP AND THE CRAVEN REPUBLICAN SENATORS AND EVANGELICAL PASTORS AND NUTTY CONSPIRACY THEORISTS WHO DON’T CARE HOW MANY LIVES THEY SACRIFICE WITH THEIR DENIALS.

BUT EVEN THOUGH THE ENLIGHTENED MAINSTREAM WILL FIGURE OUT THE TRUTH IN A MONTH OR SO—AND EVEN THOUGH THAT’S FAR BETTER THAN OUR IDIOT PRESIDENT AND MILLIONS OF HIS FOLLOWERS, WHO WILL UNDERSTAND ONLY AFTER THE TRENCHES OVERFLOW WITH BODIES, IF THEN—EVEN SO, WE DON’T HAVE A MONTH. IF YOU WANT TO BE AHEAD OF THE SENSIBLE MAINSTREAM, THEN ALMOST BY DEFINITION, THAT MEANS YOU NEED TO LISTEN TO THE POLITICALLY INCORRECT, CRAZY-SOUNDING ICONOCLASTS: TO THOSE WHO, UNLIKE YOU AND ALSO UNLIKE ME, HAVE DEMONSTRATED THAT THEY DON’T CARE IF PEOPLE SNEER AT THEM.

Of course, I would never have sent such an email, and not only because of the bold and all-caps. My whole personality stands against every sentence. I’ve always taken my cues from “mainstream, reasonable, balanced” authorities, in any subject where I’m not personally expert. That heuristic has generally been an excellent way to maximize expected rightness. But when it fails … holy crap!

Now, and for the rest of my life, I’ll face the question: what was wrong with me, such that I would never have sent a “nutty” email like the one above? Can I fix it?

More specifically, was my problem intellectual or emotional? I lean toward the latter. By mid-to-late February, as more and more of my smartest friends started panicking and telling me why I should too, I got intellectually fully on board with the idea that millions of people might die as the new virus spread around the world, and I affirmed as much on Facebook and elsewhere. And yet it still took me a few more weeks to get from “millions could die” to “**HOLY SHIT MILLIONS COULD DIE—PANIC—DROP EVERYTHING ELSE—BUILD MORE VENTILATORS!!!!**”

A viral article implores us to “flatten the curve of armchair epidemiology”—that is, to listen only to authoritative sources like the CDC, not random people spouting on social media. This was notable to me for being the diametric opposite of the *actual* lesson of the past two months. It would be like taking the lesson from the 2008 financial crisis that from now on, you would only trust serious rating agencies, like Moody’s or Standard & Poor’s.

Oh, but I forgot to tell you the punchline. A couple days ago, the same friend who emailed me on February 4, emailed again to tell me that both of her parents (who live outside the US) now have covid-19. Her father had to go to the emergency room and tested positive. Her mother stayed home with somewhat milder symptoms. Given the overloaded medical system in their country, neither can expect a high standard of care. My friend has spent the past few days desperately trying to get anyone from the hospital on the phone.

This post represents my apology to her. Like, it’s one thing to be so afraid of the jeers of the enlightened that you feign asexuality and live as an ascetic for a decade. It’s worse to be so afraid that you fail adequately to warn your friends when you see an exponential function coming to kill their loved ones.

For the first time in the quarantine, I did some actual direct research myself. I made a Jupyter (tm) notebook in which I simulate and analyze a time-domain signal containing an asteroseismic-like forest of coherent oscillators. I then use likelihood methods to see if I can extract or infer the frequency spacing of the coherent modes. The answers are a bit messy, but I think it *is* possible to measure frequency differences below the “uncertainty-principle” naive limit. That is, I think we (Bonaca and me in this case, but I have also worked on this problem with Feeney, Foreman-Mackey, and others) can resolve differences well below 1/*T*, where *T* is the duration of the full set of observations. That is, I think we can do better than the usual method of taking a periodogram and looking at the distances between peaks.
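Here is a minimal synthetic sketch of the kind of experiment described (my own toy version, not the actual notebook; the frequencies, noise level, grid, and two-mode model are all invented for illustration):

```python
import numpy as np

# Simulate T = 100 time units of data containing two coherent modes whose
# frequency spacing df_true = 0.004 is well below the naive resolution
# limit 1/T = 0.01, then recover the spacing by least-squares model fitting.
rng = np.random.default_rng(42)
T, n = 100.0, 4096
t = np.linspace(0.0, T, n)
f0, df_true = 1.0, 0.004
y = (np.sin(2 * np.pi * f0 * t)
     + np.sin(2 * np.pi * (f0 + df_true) * t)
     + 0.1 * rng.standard_normal(n))

def chi2(df):
    """Residual sum of squares after fitting amplitudes of a two-mode model."""
    X = np.column_stack([
        np.sin(2 * np.pi * f0 * t), np.cos(2 * np.pi * f0 * t),
        np.sin(2 * np.pi * (f0 + df) * t), np.cos(2 * np.pi * (f0 + df) * t),
    ])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

grid = np.linspace(0.001, 0.009, 81)       # candidate spacings, step 0.0001
df_hat = grid[np.argmin([chi2(df) for df in grid])]
print(df_hat)                              # lands near df_true, beating 1/T
```

The point is the same as in the post: with a coherent-mode model and a likelihood (here plain least squares), the spacing is constrained far better than by eyeballing the separations between periodogram peaks.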

*Sharing my live virtual chalkboard while online teaching using Zoom (the cable for the iPad is for power only).*

It is an interesting time for all of us right now, whatever our walk of life. For those of us who make our living by standing up in front of people and talking and/or leading discussion (as is the case for teachers, lecturers, and professors of various sorts), there has been a lot of rapid learning of new techniques and workflows as we scramble to keep doing that while also not gathering in groups in classrooms and seminar rooms. I started thinking about this last week (the week of 2nd March), prompted by colleagues in the physics department here at USC, and then tested it out last Friday (6th) live with students from my general relativity class (22 students). But they were in the room so that we could iron out any issues, and get a feel for what worked best. Since then, I gave an online research seminar to the combined Harvard/MIT/USC theoretical physics groups on Wednesday (cancelling my original trip to fly to the East Coast to give it in person), and that worked pretty well.

But the big test was this morning: giving a two hour lecture to my General Relativity class where we were really not all in the same room, but scattered over the campus and city (and maybe beyond), while being able to maintain a live play-by-play working environment on the board, as opposed to just showing slides. Showing slides (by doing screen-sharing) is great, but for the kind of physics techniques I’m teaching, you need to be able to show how to calculate, and bring the material to life - the old “chalk and talk” that people in other fields tend to frown upon, but which is so essential to learning how to actually *think* and navigate the language of physics, which is in large part the diagrams and equations. This is the big challenge lots of people are worried about with regard to going online - how do I do that? (Besides, making a full set of slides for every single lecture you might want to do for the next month or more seems to me like a mammoth task - I’d not want to do that.)

So I’ve arrived at a system that works for me, and I thought I’d share it with those of you who might not yet have found your own solution. Many of the things I will say may well be specific to me and my institution (USC) at some level of detail, but aspects of it will generalize to other situations. Adapt as applies to you.

Do share the link to this page with others if you wish to - I may well update it from time to time with more information.

Here goes:

[...] Click to continue reading this post

The post Online Teaching Methods appeared first on Asymptotia.

I’m giving the first talk at the ACT@UCR seminar. It’ll happen on Wednesday April 1st—I’m not kidding!—at 5 pm UTC, which is 10 am in California, 1 pm on the east coast of the United States, or 6 pm in England. It will be held online via Zoom, here:

https://ucr.zoom.us/j/607160601

We will have discussions online here—I suggest going here 20 minutes before the talk, so you can meet people and chat:

https://categorytheory.zulipchat.com/

I’ll also chat with people afterwards at that location. With luck I’ll also be able to put a video of my talk on YouTube… but you can look at the slides now:

• John Baez, Structured cospans and double categories.

Abstract. One goal of applied category theory is to better understand networks appearing throughout science and engineering. Here we introduce “structured cospans” as a way to study networks with inputs and outputs. Given a functor L: A → X, a structured cospan is a diagram in X of the form L(a) → x ← L(b). If A and X have finite colimits and L is a left adjoint, we obtain a symmetric monoidal category whose objects are those of A and whose morphisms are certain equivalence classes of structured cospans. However, this arises from a more fundamental structure: a symmetric monoidal double category where the horizontal 1-cells are structured cospans, not equivalence classes thereof. We explain the mathematics and illustrate it with an example from epidemiology.
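As a sketch of the construction in the abstract: a structured cospan from $a$ to $b$, and composition with another from $b$ to $c$ by pushout over the shared foot (this uses the finite colimits assumed above):

```latex
% a structured cospan from a to b, and another from b to c
\[
  L(a) \longrightarrow x \longleftarrow L(b),
  \qquad
  L(b) \longrightarrow y \longleftarrow L(c)
\]
% composing by pushout over the shared foot L(b)
\[
  L(a) \longrightarrow x +_{L(b)} y \longleftarrow L(c)
\]
```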

This talk is based on work with Kenny Courser and Christina Vasilakopoulou, some of which appears here:

• John Baez and Kenny Courser, Structured cospans.

• Kenny Courser, *Open Systems: a Double Categorical Perspective*.

Yesterday Rongmin Lu told me something amazing: structured cospans were already invented in 2007 by José Luiz Fiadeiro and Vincent Schmitt. It’s pretty common for simple ideas to be discovered several times. The amazing thing is that these other authors also called them ‘structured cospans’!

• José Luiz Fiadeiro and Vincent Schmitt, Structured co-spans: an algebra of interaction protocols, in *International Conference on Algebra and Coalgebra in Computer Science*, Springer, Berlin, 2007.

These earlier authors did not do everything we’ve done, so I’m not upset. Their work proves I chose the right name.


Someone should make a grand calendar, readable by everyone, of all the new math seminars that are springing into existence. Here’s another! It’s a bit outside the core concerns of *Azimuth*, but it’ll have a lot of category theory, and it features some good practices that I hope more seminars adopt, or tweak.

• Online Worldwide Seminar on Logic and Semantics, organized by Alexandra Silva, Pawel Sobocinski and Jamie Vicary.

There will be talks fortnightly at 1 pm UTC, which is currently 2 pm British Time, thanks to daylight saving time. Here are the first few:

• Wednesday, April 1 — Kevin Buzzard, Imperial College London: “Is HoTT the way to do mathematics?”

• Wednesday, April 15 — Joost-Pieter Katoen, Aachen University: “Termination of probabilistic programs”.

• Wednesday, April 29 — Daniela Petrisan, University of Paris: “Combining probabilistic and non-deterministic choice via weak distributive laws”.

• Wednesday, May 13 — Bartek Klin, Warsaw University: “Monadic monadic second order logic”.

• Wednesday, May 27 — Dexter Kozen, Cornell University: “Brzozowski derivatives as distributive laws”.

*Joining the seminar.* To join any OWLS seminar, visit the following link, up to 15 minutes before the posted start time:

You will be given the option to join through your web browser, or to launch the *Zoom* client if it is installed on your device. For the best experience, we recommend using the client.

*Audio and video.* We encourage all participants to enable their audio and video at all times (click “Use Device Audio” in the *Zoom* interface.) Don’t worry about making noise and disrupting the proceedings accidentally; the Chairperson will ensure your audio is muted by default during the seminar. Having your audio and video enabled will allow other participants to see your face in the “Gallery” view, letting them know that you’re taking part. It also gives you the option of asking a question, and of making best use of the “coffee break” sessions. For most users with good network access (such as a fast home broadband connection), there is no need to worry that having your audio and video enabled will degrade the experience; the technology platform ensures that the speaker’s audio/video stream is prioritised at all times. However, those on slow connections may find it better to disable their audio and video.

*Coffee breaks.* Every OWLS seminar has two “coffee breaks”, one starting 15 minutes before the posted start time of the seminar, and the second starting after the seminar is finished. To participate in these, feel free to join the meeting early, or to keep the meeting window open after the end of the talk. During these coffee break periods, participants will be automatically gathered into small groups, assigned at random; please introduce yourself to the other members of your group, and chat just like you would at a real conference. Remember to bring your own coffee!

*During the seminar.* If you’d like to ask a question, either during the seminar or in the question period at the end, click the “Participants” menu and select “Raise hand”. The Chairperson may choose to interrupt the speaker and give your audio/video feed the focus, giving you the opportunity to ask your question verbally, or may instead decide to let the seminar continue. You may click “Lower hand” at any time to show you no longer wish to ask a question. To preserve the experience of a real face-to-face conference, there is no possibility of giving a written question, and the chat room is disabled at all times. You also have the opportunity to give nonverbal feedback to the speaker by clicking the “speed up” or “slow down” buttons, also in the “Participants” menu.

*Recordings.* All OWLS seminars are recorded and uploaded to YouTube after the event. Only the audio/video of the chairperson, speaker, and questioners will be captured. If you prefer not to be recorded, do not ask a question. Of course, the organizers do not make any recordings of the coffee break sessions.

The MIT Categories Seminar is an informal teaching seminar in category theory and its applications, with the occasional research talk. This spring they are meeting online each Thursday, 12 noon to 1pm Eastern Time.

The talks are broadcast over YouTube here, with simultaneous discussion on the Category Theory Community Server. (To join the channel, click here.) Talks are recorded and remain available on the YouTube channel.

Here are some forthcoming talks:

March 26: David Jaz Myers (Johns Hopkins University) — Homotopy type theory for doing category theory.

April 2: Todd Trimble (Western Connecticut State University) — Geometry of regular relational calculus.

April 9: John Baez (UC Riverside) — Structured cospans and Petri nets.

April 16: Joachim Kock (Universitat Autònoma de Barcelona) — to be announced.

April 23: Joe Moeller (UC Riverside) — to be announced.

Videos of many older talks are available here.


Social media spread the word yesterday evening that Phil Anderson, intellectual giant of condensed matter physics, had passed away at the age of 96.

It is hard to overstate the impact that Anderson had on the field. In terms of pure scientific results, there are others far more skilled than I who can describe his contributions, but I will mention a few that are well known:

- He developed what is now known as the Anderson model, a theoretical treatment originally intended to capture the essential physics in some transition metal-based magnets. The model considers comparatively localized *d* orbitals and includes both hopping to neighboring sites in a lattice as well as the "on-site repulsion" *U* that makes it energetically expensive to have two electrons (in a spin singlet) on the same site. This leads to "superexchange" processes, where energetically costly double-occupancy is a virtual intermediate state. The Anderson model became the basis for many developments - allow coupling between the local sites and delocalized *s* or *p* bands, and you get the Kondo model. Put in coupling to lattice vibrations and you get the Anderson-Holstein model. Have a lattice and make the on-site repulsion really strong, and you get the Hubbard model famed in correlated electron circles and as the favored treatment of the copper oxide superconductors.
- Anderson also made defining contributions to the theory of localization. Electrons in solids are wavelike, and in perfect crystal lattices the ones in the conduction and valence bands propagate right past the ions because the waves themselves account for the periodicity of the lattice. Anderson showed that even in the absence of interactions (the electron-electron repulsion), disorder can scatter those waves, and interference effects can lead to situations where the final result is waves that are exponentially damped with distance. This is called Anderson localization, and it applies to light and sound as well as electrons. With strict conditions, this result implies that (ignoring interactions) infinitesimal amounts of disorder can make a 2D electronic system an insulator.
- Here is his Nobel Lecture, by the way, which really focuses on these two topics.
- In considering superconductivity, Anderson also discovered what is now known as the Higgs mechanism, showing that while the bare excitations of some quantum field theory could be massless, coupling those excitations to some scalar field whose particular value broke an underlying symmetry could lead to an effective mass term (in the sense of how momentum and energy relate to each other) for the originally massless degrees of freedom. Since Anderson himself wrote about this within the last five years, I have nothing to add.
- Anderson also worked on superfluidity in ³He, advancing understanding of this first-discovered non-electronic paired superfluid and its funky properties due to *p*-wave pairing.
- With the discovery of the copper oxide superconductors, Anderson introduced the resonating valence bond (RVB) model that still shapes discussions of these and exotic spin-liquid systems.

Anderson was unquestionably a brilliant person who in many ways defined the modern field of condensed matter physics. He was intellectually active right up to the end, and he will be missed. (For one of my own interactions with him, see here.)

Back to modal HoTT. If what was considered last time were all, one would wonder what the fuss was about. Now, there’s much that needs to be said about type dependency, types as propositions, sets, groupoids, and so on, but let me skip to the end of my book to mention modal types, and in particular the intriguing use of modalities to present spatial notions of cohesiveness. Cohesion is an idea, originally due to Lawvere, which sets out from an adjoint triple of modalities arising in turn from an adjoint *quadruple* between toposes of spaces and sets of the kind:

components $\dashv$ discrete $\dashv$ points $\dashv$ codiscrete.

This has been generalised to the $(\infty, 1)$-categorical world by Urs and Mike. On top of the original triple of modalities, one can construct further triples first for *differential* cohesion and then also for supergeometry. With superspaces available in this synthetic fashion it is possible to think about Modern Physics formalized in Modal Homotopy Type Theory. This isn’t just an ‘in principle’ means of expression, but has been instrumental in guiding Urs’s construction with Hisham Sati of a formulation of M-theory – Hypothesis H. Surely it’s quite something that a foundational system could have provided guidance in this way, however the hypothesis turns out. Imagine other notable foundational systems being able to do any such thing.

Mathematics rather than physics is the subject of chapter 5 of my book, where I’m presenting cohesive HoTT as a means to gain some kind of conceptual traction over the vast terrain that is modern geometry. However I’m aware that there are some apparent limitations, problems with ‘$p$-adic’ forms of cohesion, cohesion in algebraic geometry, and so on. In the briefest note (p. 158) I mention the closely related pyknotic and condensed approaches of, respectively, (Barwick and Haine) and (Clausen and Scholze). Since they provide a different category-theoretic perspective on space, I’d like to know more about what’s going on with these.

[Edited to correct the authors and spelling of name. Other edits in response to comments, as noted there.]

I’ll keep to the former approach since the authors are explicit in pointing out where their construction differs from the cohesive one. In cohesive situations, that functor which takes an object in a base category and equips it with the discrete topology has a left adjoint, and so preserves limits. This does not hold for pyknotic sets (BarHai 19, 2.2.4).

We hear

one of the main peculiarities of the theory of pyknotic structures … is also one of its advantages: the forgetful functor is not faithful. (BarHai 19, 0.2.4)

Where there is only one possible topology on a singleton set, in the category of pyknotic sets the point possesses many pyknotic structures.

The $Pyk$ construction applies to all finite-product categories, $D$. There are several equivalent formulations of the concept, one being that $Pyk(D)$ is composed of the finite-product-preserving functors from the category of complete Boolean algebras to $D$. (See other sites at pyknotic set.) We thus have $Pyk(Ab)$, the category of pyknotic abelian groups.

One reason, we are told, for the whole pyknotic approach is that $Pyk(Ab)$ rectifies a perceived problem with the category of topological abelian groups, $AbTop$, in that where the former is itself an abelian category, this is not the case with the latter:

This can be seen by taking an abelian group and imposing two topologies, one finer than the other. Both the kernel and cokernel of the continuous map which is the identity on elements are 0. This is an indication that $AbTop$ does not have enough objects. To rectify this, we can modify the category to allow ‘pyknotic’ structures on 0, which can act as a cokernel here.

[Condensed abelian groups perform this role for Clausen and Scholze.]

$Pyk$ also preserves topos structure: If $X$ is a topos, then so is $Pyk(X)$.

I’m sure all the smart category theorists around here have useful things to say, but just to raise some small observations from cursory engagement.

Is there something importantly non-constructive about this construction? Complete Boolean algebras form the opposite category to Stonean locales, and

In the presence of the axiom of choice, the category of Stonean locales is equivalent to the category of Stonean spaces. (nLab: complete Boolean space).

Are there any category-theoretic features of $CompBoolAlg$ being exploited, such as that it’s not cocomplete?

Concepts I’ve seen mentioned by Barwick include: ultraproduct, ultrafilter, codensity monad, proétaleness. There’s some connection between what they’re doing in BarHai19, sec 4.3 and Lurie’s work on Makkai’s conceptual completeness (see here), which Lurie is looking to extend to higher categories.

Barwick and Haine tell us of their Theorem 4.3.6 that

The main motivation of the study of 1-ultracategories is the following result, which implies both the Deligne Completeness Theorem and Makkai’s Strong Conceptual Completeness Theorem. (p. 35)

and refer to

- Jacob Lurie, 2018, Ultracategories, (pdf).

This refers in turn to work by Scholze and Bhatt, presumably relating to why Scholze along with Clausen have devised a close relative of pyknoticity in condensed mathematics. A condensed set is a sheaf of sets on the pro-étale site of a point.

Awodey and students (Forssell and Breiner) were looking for an alternative route to this model-theoretic area (avoiding ultra-structures in favour of topological ones, see pp. 6-7 of Forssell, and even of scheme-theoretic ones in Breiner):

we reframe Makkai & Reyes’ conceptual completeness theorem as a theorem about schemes. (Breiner, p. 9)

Seems an interesting tangle of ideas.

In another not-much-research-day, I did get in an interesting call with Megan Bedell (Flatiron) and Lily Zhao (Yale) about our project to precisely calibrate the *EXPRES* spectrograph. Zhao is using our dimensionality reduction to look at instrument changes. She can use it to split the months of instrument use into sensible (what we call) *epochs*. Each of these epochs has a wavelength calibration with a sensible, low-dimensional representation. So the value of the dimensionality reduction is not just to make the calibration hierarchical, but also to find change points and—more generally—put eyes on the data.

On page 12 of a document put out by Imperial College London, which has been very widely read and commented on, and which has had a significant influence on UK policy concerning the coronavirus, there is a diagram that shows the possible impact of a strategy of alternating between measures that are serious enough to cause the number of cases to decline, and more relaxed measures that allow it to grow again. They call this *adaptive triggering*: when the number of cases needing intensive care reaches a certain level per week, the stronger measures are triggered, and when it declines to some other level (the numbers they give are 100 and 50, respectively), they are lifted.

If such a policy were ever to be enacted, a very important question would be how to optimize the choice of the two triggers. I’ve tried to work this out, subject to certain simplifying assumptions (and it’s important to stress right at the outset that these assumptions are questionable, and therefore that any conclusion I come to should be treated with great caution). This post is to show the calculation I did. It leads to slightly counterintuitive results, so part of my reason for posting it publicly is as a sanity check: I know that if I post it here, then any flaws in my reasoning will be quickly picked up. And the contrapositive of that statement is that if the reasoning survives the harsh scrutiny of a typical reader of this blog, then I can feel fairly confident about it. Of course, it may also be that I have failed to model some aspect of the situation that would make a material difference to the conclusions I draw. I would be very interested in criticisms of that kind too. (Indeed, I make some myself in the post.)

Before I get on to what the model is, I would like to make clear that I am not *advocating* this adaptive-triggering policy. Personally, what I would like to see is something more like what Tomas Pueyo calls The Hammer and the Dance: roughly speaking, you get the cases down to a trickle, and then you stop that trickle turning back into a flood by stamping down hard on local outbreaks using a lot of testing, contact tracing, isolation of potential infected people, etc. (This would need to be combined with other measures such as quarantine for people arriving from more affected countries etc.) But it still seems worth thinking about the adaptive-triggering policy, in case the hammer-and-dance policy doesn’t work (which could be for the simple reason that a government decides not to implement it).

Here was my first attempt at modelling the situation. I make the following assumptions. The numbers $u, v, \lambda, \mu, c, d$ are positive constants.

- Relaxation is triggered when the rate of infection is $u$.
- Lockdown (or similar) is triggered when the rate of infection is $v$.
- The rate of infection is of the form $Ce^{\lambda t}$ during a relaxation phase.
- The rate of infection is of the form $Ce^{-\mu t}$ during a lockdown phase.
- The rate of “damage” due to infection is $c$ times the infection rate.
- The rate of damage due to lockdown measures is $d$ while those measures are in force.

For the moment I am not concerned with how realistic these assumptions are, but just with what their consequences are. What I would like to do is minimize the average damage by choosing $u$ and $v$ appropriately.

I may as well give away one of the punchlines straight away, since no calculation is needed to explain it. The time it takes for the infection rate to increase from $u$ to $v$ or to decrease from $v$ to $u$ depends only on the ratio $v/u$. Therefore, if we divide both $u$ and $v$ by 2, we decrease the damage due to the infection and have no effect on the damage due to the lockdown measures. Thus, for any fixed ratio $v/u$, it is best to make both $u$ and $v$ as small as possible.

This has the counterintuitive consequence that during one of the cycles one would be imposing lockdown measures that were doing far more damage than the damage done by the virus itself. However, I think something like that may actually be correct: unless the triggers are so low that the assumptions of the model completely break down (for example because local containment is, at least for a while, a realistic policy, so national lockdown is pointlessly damaging), there is nothing to be lost, and lives to be gained, by keeping them in the same proportion but decreasing them.

Now let me do the calculation, so that we can think about how to optimize the ratio $v/u$ for a fixed $u$.

The time taken for the infection rate to increase from $u$ to $v$ is $\lambda^{-1}\log(v/u)$, and during that time the number of infections is

$\int_0^{\lambda^{-1}\log(v/u)} u e^{\lambda t}\,dt = \frac{v-u}{\lambda}$.

By symmetry the number of infections during the lockdown phase is $(v-u)/\mu$ (just run time backwards). So during a time $(\lambda^{-1}+\mu^{-1})\log(v/u)$ the damage done by infections is $c(v-u)(\lambda^{-1}+\mu^{-1})$, making the average damage $c(v-u)/\log(v/u)$. Meanwhile, the average damage done by lockdown measures over the whole cycle is $d\lambda/(\lambda+\mu)$.
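As a sanity check on these formulas, here is a short numerical experiment; the variable names ($u$, $v$, $\lambda$, $\mu$, $c$ for the triggers, growth and decay rates, and damage constant) are mine, and the numbers are made up:

```python
import math

def integrate(f, a, b, n=100_000):
    # midpoint-rule numerical integration
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

u, v = 50.0, 100.0          # lower and upper triggers
lam, mu, c = 0.2, 0.1, 1.0  # growth rate, decay rate, damage per infection

# Relaxation: rate u*exp(lam*t) rises from u to v in time log(v/u)/lam.
T_up = math.log(v / u) / lam
infections_up = integrate(lambda t: u * math.exp(lam * t), 0.0, T_up)
assert abs(infections_up - (v - u) / lam) < 1e-6

# Lockdown: rate v*exp(-mu*t) falls from v to u in time log(v/u)/mu.
T_down = math.log(v / u) / mu
infections_down = integrate(lambda t: v * math.exp(-mu * t), 0.0, T_down)
assert abs(infections_down - (v - u) / mu) < 1e-6

# Average infection damage over a full cycle equals c(v - u)/log(v/u).
avg = c * (infections_up + infections_down) / (T_up + T_down)
assert abs(avg - c * (v - u) / math.log(v / u)) < 1e-6
```

Halving both triggers halves the average infection damage while leaving the phase lengths unchanged, which is the punchline above.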

Note that the lockdown damage doesn’t depend on $u$ and $v$: it just depends on the proportion of time spent in lockdown, which depends only on the ratio between $\lambda$ and $\mu$. So from the point of view of optimizing $u$ and $v$, we can simply forget about the damage caused by the lockdown measures.

Returning, therefore, to the term $c(v-u)/\log(v/u)$, let us say that $v = (1+\alpha)u$. Then the term simplifies to $cu\alpha/\log(1+\alpha)$. This increases with $\alpha$, which leads to a second counterintuitive conclusion, which is that for fixed $u$, $\alpha$ should be as close as possible to 0. So if, for example, $\mu = \lambda/2$, which tells us that the lockdown phases have to be twice as long as the relaxation phases, then it would be better to have cycles of two days of lockdown and one of relaxation than cycles of six weeks of lockdown and three weeks of relaxation.
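The claim that the per-cycle infection-damage term increases with the gap between the triggers is easy to check numerically; again the variable names are mine and the numbers invented:

```python
import math

c, u = 1.0, 50.0
alphas = [0.01, 0.1, 0.5, 1.0, 2.0]
# Per-cycle infection damage c*u*alpha/log(1+alpha), for v = (1+alpha)*u.
terms = [c * u * a / math.log(1.0 + a) for a in alphas]

# The term is increasing in alpha, so the triggers should sit close together...
assert terms == sorted(terms)
# ...and as alpha -> 0 the term approaches its infimum c*u.
assert abs(terms[0] - c * u) < 0.5
```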

Can this be correct? It seems as though with very short cycles the lockdowns wouldn’t work, because for one day in three people would be out there infecting others. I haven’t yet got my head round this, but I think what has gone wrong is that the model of exponential growth followed instantly by exponential decay is too great a simplification of what actually happens. Indeed, data seem to suggest a curve that rounds off at the top rather than switching suddenly from one exponential to another — see for example Chart 9 from the Tomas Pueyo article linked to above. But I think it is correct to conclude that the length of a cycle should be at most of a similar order of magnitude to the “turnaround time” from exponential growth to exponential decay. That is, one should make the cycles as short as possible provided that they are on a timescale that is long enough for the assumption of exponential growth followed by exponential decay to be reasonably accurate.

So far I have treated $\lambda$, $\mu$ and $d$ as parameters that we have no control over at all. But in practice that is not the case. At any one time there is a suite of measures one can take — encouraging frequent handwashing, banning large gatherings, closing schools, encouraging working from home wherever possible, closing pubs, restaurants, theatres and cinemas, enforcing full lockdown — that have different effects on the rate of growth or decline in infection and cause different levels of damage.

It seems worth taking this into account too, especially as there has been a common pattern of introducing more and more measures as the number of cases goes up. That feels like a sensible response — intuitively one would think that the cure should be kept proportionate — but is it?

Let’s suppose we have a collection of possible sets of measures $M_1,\dots,M_k$. For ease of writing I shall call them measures rather than sets of measures, but in practice each $M_i$ is not just a single measure but a combination of measures such as the ones listed above. Associated with each measure $M_i$ is a growth rate $\lambda_i$ (which is positive if the measures are not strong enough to stop the disease growing and negative if they are strong enough to cause it to decay) and a damage rate $d_i$. Suppose we apply $M_i$ for time $t_i$. Then during that time the rate of infection will multiply by $e^{\lambda_i t_i}$. So if we do this for each measure, then we will get back to the starting infection rate provided that $\sum_i \lambda_i t_i = 0$. (This is possible because some of the $\lambda_i$ are negative and some are positive.)

There isn’t a particularly nice expression for the damage resulting from the disease during one of these cycles, but that does not mean that there is nothing to say. Suppose that the starting rate of infection is $r_0$ and that the rate after the first $i$ stages of the cycle is $r_i$. Then $r_i = r_0 e^{\lambda_1 t_1 + \dots + \lambda_i t_i}$. Also, by the calculation above, the damage done during the $i$th stage is $c(r_i - r_{i-1})/\lambda_i$.

This has an immediate consequence for the order in which the $M_i$ should be applied. Let me consider just the first two stages. The total damage caused by the disease during these two stages is

$c\lambda_1^{-1}(r_1 - r_0) + c\lambda_2^{-1}(r_2 - r_1) = cr_0\bigl(\lambda_1^{-1}(e^{\lambda_1 t_1}-1) + \lambda_2^{-1}e^{\lambda_1 t_1}(e^{\lambda_2 t_2}-1)\bigr)$.

To make that easier to read, let’s forget the factor $cr_0$ (which we’re holding constant) and concentrate on the expression

$\lambda_1^{-1}(e^{\lambda_1 t_1}-1) + \lambda_2^{-1}e^{\lambda_1 t_1}(e^{\lambda_2 t_2}-1)$.

If we reorder stages 1 and 2, we can replace this damage by

$\lambda_2^{-1}(e^{\lambda_2 t_2}-1) + \lambda_1^{-1}e^{\lambda_2 t_2}(e^{\lambda_1 t_1}-1)$.

This is an improvement if the second number is smaller than the first. But the first minus the second is equal to

$(e^{\lambda_1 t_1}-1)(e^{\lambda_2 t_2}-1)(\lambda_2^{-1} - \lambda_1^{-1})$,

so the reordering is a good idea if $\lambda_1 > \lambda_2$ (at least when $\lambda_1$ and $\lambda_2$ have the same sign, so that $(e^{\lambda_1 t_1}-1)(e^{\lambda_2 t_2}-1) > 0$). This tells us that we should start with smaller $\lambda_i$ and work up to bigger ones. Of course, since we are applying the measures in a cycle, we cannot ensure that the $\lambda_i$ form an increasing sequence, but we can say, for example, that if we first apply the measures that allow the disease to spread, and then the ones that get it to decay, then during the relaxation phase we should work from the least relaxed measures to the most relaxed ones (so the growth rate will keep increasing), and during the suppression phase we should start with the strictest measures and work down to the most relaxed ones.

It might seem strange that during the relaxation phase the measures should get gradually more relaxed as the spread worsens. In fact, I think it *is* strange, but I think what that strangeness is telling us is that using several different measures during the relaxation phase is not a sensible thing to do.

The optimization problem I get if I try to balance the damage from the disease with the damage caused by the various control measures is fairly horrible, so I am going to simplify it a lot in the following way. The basic principle that there is nothing to be lost by dividing everything by 2 still applies when there are lots of measures, so I shall assume that a sensible government has taken that point on board to the point where the direct damage from the disease is insignificant compared with the damage caused by the measures. (Just to be clear, I certainly don’t mean that lives lost are insignificant, but I mean that the number of lives lost to the disease is significantly smaller than the number lost as an indirect result of the measures taken to control its spread.) Given this assumption, I am free to concentrate just on the damage due to the measures $M_i$, so this is what I will try to minimize.

The total damage across a full cycle is $\sum_i d_i t_i$, so the average damage, which is what matters here, is

$\frac{\sum_i d_i t_i}{\sum_i t_i}$.

We don’t have complete freedom to choose, or else we’d obviously just choose the smallest $d_i$ and go with that. The constraint is that the growth rate of the virus has to end up where it began: this is the constraint that $\sum_i \lambda_i t_i = 0$, which we saw earlier.

Suppose we can find $s_1, \dots, s_k$ such that $\sum_i \lambda_i s_i = \sum_i s_i = 0$, but $\sum_i d_i s_i \ne 0$. Then in particular we can find such $s_i$ with $\sum_i d_i s_i < 0$. If all the $t_i$ are strictly positive, then we can also choose them in such a way that all the $t_i + s_i$ are still strictly positive. So if we replace each $t_i$ by $t_i + s_i$, then the numerator of the fraction decreases, the denominator stays the same, and the constraint is still satisfied. It follows that we had not optimized.

Therefore, if the choice of $t_i$ is optimal and all the $t_i$ are non-zero (and therefore strictly positive — we can’t run some measures for a negative amount of time) it is not possible to find $s_i$ such that $\sum_i \lambda_i s_i = \sum_i s_i = 0$, but $\sum_i d_i s_i \ne 0$. This is equivalent to the statement that the vector $(d_1, \dots, d_k)$ is a linear combination of the vectors $(\lambda_1, \dots, \lambda_k)$ and $(1, 1, \dots, 1)$. In other words, we can find $c$ and $\mu$ such that $d_i = c - \mu\lambda_i$ for each $i$. I wrote it like that because the smaller $\lambda_i$ is, the larger the damage one expects the measures to cause. Thus, the points $(\lambda_i, d_i)$ form a descending sequence. (We can assume this, since if one measure causes both more damage and a higher growth rate than another, then there can be no reason to choose it.) Thus, $\mu$ will be positive, and since at least some $\lambda_i$ are positive, and no measures will cause a *negative* amount of damage, $c$ is positive as well.

The converse of this statement is true as well. If $d_i = c - \mu\lambda_i$ for every $i$, then $\sum_i d_i t_i = c\sum_i t_i - \mu\sum_i \lambda_i t_i = c\sum_i t_i$, from which it follows that the average damage across the cycle is $c$, regardless of which measures are taken for which lengths of time.
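
A quick numerical check of this: assuming each measure’s damage rate is an affine function of its growth rate, $d_i = c - \mu\lambda_i$, the average damage over any constraint-satisfying cycle comes out as $c$. All numbers below are hypothetical.

```python
# If d_i = c - mu * lam_i for every measure, the average damage over a cycle
# is c however the times are chosen, provided the cycle constraint
# sum(lam_i * t_i) == 0 holds. Numbers are made up for illustration.
c, mu = 5.0, 2.0
lam = [0.3, 0.1, -0.2, -0.4]
d = [c - mu * l for l in lam]

def average_damage(t):
    assert abs(sum(l * s for l, s in zip(lam, t))) < 1e-9  # cycle constraint
    return sum(di * s for di, s in zip(d, t)) / sum(t)

print(average_damage([2.0, 2.0, 1.0, 1.5]))  # 5.0, up to rounding
print(average_damage([1.0, 5.0, 4.0, 0.0]))  # 5.0, up to rounding
```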

This already shows that there is nothing to be gained from having more than one measure for the relaxation phase and one for the lockdown phase. There remains the question of how to choose the best pair of measures.

To answer it, we can plot the points $(\lambda_i, d_i)$. The relaxation points will appear to the right of the y-axis and the suppression points will appear to the left. If we choose one point from each side, then they lie on some line $d = c - \mu\lambda$, of which $c$ is the intercept. Since $c$ is the average damage, which we are trying to minimize, we see that our aim is to find a line segment joining a point on the left-hand side to a point on the right-hand side, and we want it to cross the y-axis as low as possible.

It is not hard to check that the intercept of the line joining $(\lambda_1, d_1)$ to $(\lambda_2, d_2)$ is at $\frac{\lambda_1 d_2 - \lambda_2 d_1}{\lambda_1 - \lambda_2}$. So if we rename the points to the left of the y-axis $(-\mu_j, e_j)$ and the points to the right $(\lambda_i, d_i)$, then we want to minimize $\frac{\lambda_i e_j + \mu_j d_i}{\lambda_i + \mu_j}$ over all pairs $(i, j)$.
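
The intercept expression is easy to cross-check against the generic two-point line formula, using hypothetical values for one relaxation point and one suppression point:

```python
def intercept(lam, d, mu, e):
    # y-intercept of the segment joining the relaxation point (lam, d),
    # lam > 0, to the suppression point (-mu, e), mu > 0.
    return (lam * e + mu * d) / (lam + mu)

# Hypothetical numbers for one point on each side of the y-axis.
lam, d = 0.25, 1.0
mu, e = 0.5, 4.0
slope = (d - e) / (lam - (-mu))
generic = e + slope * (0 - (-mu))   # intercept from point-slope form
print(intercept(lam, d, mu, e), generic)  # both 2.0
```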

It isn’t completely easy to convert this criterion into a rule of thumb for how best to choose two measures, one for the relaxation phase and one for the suppression phase, but we can draw a couple of conclusions from it.

For example, suppose that for the suppression measures there is a choice between two measures, one of which works twice as quickly as the other but causes twice as much damage per unit time. Then the corresponding two points lie on a line through the origin with negative gradient, which therefore lies below every point in the positive quadrant. From this it follows that the slower but less damaging measure is better. Another way of seeing that is that with the more severe measure the total damage during the lockdown phase stays the same, as does the total damage during the relaxation phase, but the length of the cycle is decreased, so the *average* damage is increased.
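
Numerically, doubling both the suppression speed and the damage rate moves the suppression point outward along the same ray through the origin, and the y-intercept (i.e. the average damage) goes up. Hypothetical numbers again:

```python
def intercept(lam, d, mu, e):
    # y-intercept of the segment from (lam, d) on the right to (-mu, e) on the left.
    return (lam * e + mu * d) / (lam + mu)

lam, d = 0.2, 1.0   # hypothetical relaxation measure
mu, e = 0.4, 3.0    # slower, less damaging suppression measure
# "Twice as fast, twice the damage" puts the severe measure on the same
# ray through the origin, at (-2*mu, 2*e).
slower = intercept(lam, d, mu, e)
faster = intercept(lam, d, 2 * mu, 2 * e)
print(slower < faster)  # True: the slower measure gives the lower average damage
```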

Note that I am not saying that one should always go for less severe measures — I made the strong assumption there that the two points lay on a line through the origin. If we can choose a measure that causes damage at double the rate but acts three times as quickly as another measure, then it may turn out to be better than the less damaging but slower measure.

However, it seems plausible that the set of points will exhibit a certain amount of convexity. That is because if you want to reduce the growth rate of infections, then at first there will be some low-hanging fruit — for example, it is not costly at all to run a public-information campaign to persuade people to wash their hands more frequently, and that can make quite a big difference — but the more you continue, the more difficult making a significant difference becomes, and you have to wheel out much more damaging measures such as school closures.

*If* the points were to lie on a convex curve (and I’m definitely not claiming this, but just saying that something like it could perhaps be true), then the best pair of points would be the ones that are nearest to the y-axis on either side. This would say that the best strategy is to alternate between a set of measures that allows the disease to grow rather slowly and a set of measures that causes it to decay slowly again.

This last conclusion points up another defect in the model, which is the assumption that a given set of measures causes damage at a constant rate. For some measures, this is not very realistic: for example, even in normal times schools alternate between periods of being closed and periods of being open (though not necessarily to a coronavirus-dictated timetable of course), so one might expect the damage from schools being 100% closed to be more than twice the damage from schools being closed half the time. More generally, it might well be better to rotate between two or three measures that all cause roughly the same rate of damage, but in different ways, so as to spread out the damage and try to avoid reaching the point where the rate of one kind of damage goes up.

Again I want to stress that these conclusions are all quite tentative, and should certainly not be taken as a guide to policy without more thought and more sophisticated modelling. However, they do at least *suggest* that certain policies ought not to be ruled out without a good reason.

If adaptive triggering is going to be applied, then the following are the policies that the above analysis suggests. First, here is a quick reminder that I use the word “measure” as shorthand for “set of measures”. So for example “Encourage social distancing and close all schools, pubs, restaurants, theatres, and cinemas” would be a possible measure.

- There is nothing to lose and plenty to gain by making the triggers (that is, the infection rates that cause one to switch from relaxation to suppression and back again) low. This has the consequence that the triggers should be set in such a way that the damage from the measures is significantly higher than the damage caused by the disease. This sounds paradoxical, but the alternative is to make the disease worse without making the cure any less bad, and there is no point in doing that.
- Within reason, the cycles should be kept short.
- There is no point in having more than one measure for the relaxation phase and one for the suppression phase.
- If you must have more than one measure for each phase, then during the relaxation phase the measures should get more relaxed each time they change, and during the suppression phase they should get less strict each time they change.
- Given enough information about their consequences, the optimal measures can be determined quite easily, but doing the calculation in practice, especially in the presence of significant uncertainties, could be quite delicate.

Point number 1 above seems to me to be quite a strong argument in favour of the hammer-and-dance approach. That is because the conclusion, which looks to me quite robust to changes in the model, is that the triggers should be set very low. But if they are set very low, then it is highly unlikely that the enormous damage caused by school closures, lockdowns etc. is the best approach for dealing with the cases that arise, since widespread testing and quarantining of people who test positive, contacts of those people, people who arrive from certain other countries, and so on, will probably be far less damaging, even if they are costly to do well. So I regard point number 1 as a sort of reductio ad absurdum of the adaptive-triggering approach.

Point number 2 seems quite robust as well, but I think the model breaks down on small timescales (for reasons I haven’t properly understood), so one shouldn’t conclude from it that the cycles should be short on a timescale of days. That is what is meant by “within reason”. But they should be as short as possible provided that they are long enough for the dominant behaviour of the infection rate to be exponential growth and decay. (That does not imply that they should not be shorter than this — just that one cannot reach that conclusion without a more sophisticated model. But it seems highly likely that there is a minimum “reasonable” length for a cycle: this is something I’d be very interested to understand better.)

Point number 3 was a clear consequence of the simple model (though it depended on taking 1 seriously enough that the damage from the disease could be ignored), but may well not be a sensible conclusion in reality, since the assumption that each measure causes damage at a rate that does not change over time is highly questionable, and dropping that assumption could make quite a big difference. Nevertheless, it is interesting to see what the consequences of that assumption are.

Point number 4 seems to be another fairly robust conclusion. However, in the light of 3 one might hope that it would not need to be applied, except perhaps as part of a policy of “rotating” between various measures to spread the damage about more evenly.

It seems at least possible that the optimal adaptive-triggering policy, if one had a number of choices of measures, would be to choose one set that causes the infections to grow slowly and another that causes them to shrink slowly — in other words to fine tune the measures so as to keep the infection rate roughly constant (and small). Such fine tuning would be very dangerous to attempt now, given how much uncertainty we are facing, but could become more realistic after a few cycles, when we would start to have more information about the effects of various measures.

One final point is that throughout this discussion I have been assuming that the triggers would be based on the current rate of infection. In practice of course, this is hard to measure, which is presumably why the Imperial College paper used demand for intensive care beds instead. However, with enough data about the effects of various measures on the rate of spread of the virus, one would be less reliant on direct measurements, and could instead make inferences about the likely rate of infection given data collected over the previous few weeks. This seems better than using demand for ICU beds as a trigger, since that demand reflects the infection rate from some time earlier.

I met early (by videocon, of course) with Teresa Huang (NYU) and Soledad Villar (NYU) to talk about our projects to develop adversarial attacks against regressions of discriminative and generative forms. We ended up talking a bit about information theory. I gave my minimal description of Fisher Information. I recalled that I was taught some of this back in my PhD, but I forgot it all and re-learned it by using it in real data analyses. I feel like it would be a good subject for an *arXiv*-only post.

The question at hand today was this: You are given a set of data *x* that contain information about some quantity *y*. For a training subset, you are also given labels *y*, which are noisy. That is, the labels you are given do not exactly match the true values of *y*. Which contains more information about the true labels? The labels you are given or the data? This is a question answerable (under exceedingly strong assumptions) within information theory.
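
To make the comparison concrete, here is a toy version under exceedingly strong assumptions of my own (Gaussian truth, additive Gaussian noise); the function and numbers are purely illustrative, not the analysis we actually did:

```python
import math

# Toy Gaussian version of the question: the true label y ~ N(0, s_y2); the
# given labels are y plus Gaussian noise of variance s_n2, and likewise for
# whatever estimate the data support. For an additive Gaussian channel the
# mutual information is I(y; y + n) = 0.5 * ln(1 + s_y2 / s_n2) nats.
def gaussian_mi(s_y2, s_n2):
    return 0.5 * math.log(1.0 + s_y2 / s_n2)

s_y2 = 1.0
print(gaussian_mi(s_y2, 0.25))  # low-noise labels carry more information...
print(gaussian_mi(s_y2, 1.00))  # ...than a noisier channel
```

Whichever channel (labels or data) has the lower effective noise variance carries more information about the truth, under these assumptions.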

*[Research barely proceeds during this pandemic. Don't take these posts to be evidence of a lot of research activity here.]*

Mathieu Renzo (Flatiron) showed some calculations in Stars & Exoplanets meeting (now virtual) about how common-envelope stars might appear in *LISA*. The idea is that when stars with compact cores in binary systems orbitally decay, they hit a point at which they must merge into a single star. These sources might be in the *LISA* band for a while. My loyal reader knows that Adrian Price-Whelan (Flatiron) and I have found some very short-period binary companions in radial-velocity data from *APOGEE*; some too short-period to be orbiting outside their host stars! We have presumed that these signals are mis-classified asteroseismic oscillations. However, maybe they *could* be common-envelope? Renzo pointed out that there should be interesting spectral signatures if they are common-envelope. Let's check!

In a recent post I discussed the conclusions of a study aimed at computing a small but very important correction to the theoretical prediction of the anomalous magnetic moment of the muon. The interest of this lies in the fact that, among all the measurable parameters of the subnuclear world, the latter quantity is virtually the only one for which the Standard Model prediction exhibits a tension with the current experimental measurements.

After some discussion with the applied math research groups here at UCLA (in particular the groups led by Andrea Bertozzi and Deanna Needell), one of the members of these groups, Chris Strohmeier, has produced a proposal for a Polymath project to crowdsource in a single repository (a) a collection of public data sets relating to the COVID-19 pandemic, (b) requests for such data sets, (c) requests for data cleaning of such sets, and (d) submissions of cleaned data sets. (The proposal can be viewed as a PDF, and is also available on Overleaf). As mentioned in the proposal, this database would be slightly different in focus than existing data sets such as the COVID-19 data sets hosted on Kaggle, with a focus on producing high quality cleaned data sets. (Another relevant data set that I am aware of is the SafeGraph aggregated foot traffic data, although this data set, while open, is not quite public as it requires a non-commercial agreement to execute. Feel free to mention further relevant data sets in the comments.)

This seems like a very interesting and timely proposal to me and I would like to open it up for discussion, for instance by proposing some seed requests for data and data cleaning and to discuss possible platforms that such a repository could be built on. In the spirit of “building the plane while flying it”, one could begin by creating a basic github repository as a prototype and use the comments in this blog post to handle requests, and then migrate to a more high quality platform once it becomes clear what direction this project might move in. (For instance one might eventually move beyond data cleaning to more sophisticated types of data analysis.)

UPDATE, Mar 25: a prototype page for such a clearinghouse is now up at this wiki page.

UPDATE, Mar 27: the data cleaning aspect of this project largely duplicates the existing efforts at the United against COVID-19 project, so we are redirecting requests of this type to that project (and specifically to their data discourse page). The polymath proposal will now refocus on crowdsourcing a list of public data sets relating to the COVID-19 pandemic.

I had a new paper out last week, with Michèle Levi and Andrew McLeod. But to explain it, I’ll need to clarify something about our last paper.

Two weeks ago, I told you that Andrew and Michèle and I had written a paper, predicting what gravitational wave telescopes like LIGO see when black holes collide. You may remember that LIGO doesn’t just see colliding black holes: it sees colliding neutron stars too. So why didn’t we predict what happens when neutron stars collide?

Actually, we did. Our calculation doesn’t *just* apply to black holes. It applies to neutron stars too. And not just neutron stars: it applies to anything of roughly the right size and shape. Black holes, neutron stars, very large grapefruits…

That’s the magic of Effective Field Theory, the “zoom lens” of particle physics. Zoom out far enough, and any big, round object starts looking like a particle. Black holes, neutron stars, grapefruits, we can describe them all using the same math.

Ok, so we can describe both black holes and neutron stars. Can we tell the difference between them?

In our last calculation, no. In this one, yes!

Effective Field Theory isn’t just a zoom lens, it’s a *controlled approximation*. That means that when we “zoom out” we don’t just throw out anything “too small to see”. Instead, we approximate it, estimating how big of an effect it can have. Depending on how precise we want to be, we can include more and more of these approximated effects. If our estimates are good, we’ll include everything that matters, and get a good approximation for what we’re trying to observe.

At the precision of our last calculation, a black hole and a neutron star still look exactly the same. Our new calculation aims for a bit higher precision though. (For the experts: we’re at a higher order in spin.) The higher precision means that we can actually see the difference: our result changes for two colliding black holes versus two colliding grapefruits.

So does that mean I can tell you what happens when two neutron stars collide, according to our calculation? Actually, no. That’s not because we screwed up the calculation: it’s because *some of the properties of neutron stars are unknown*.

The Effective Field Theory of neutron stars has what we call “free parameters”, unknown variables. People have tried to estimate some of these (called “Love numbers” after the mathematician A. E. H. Love), but they depend on the details of how neutron stars work: what stuff they contain, how that stuff is shaped, and how it can move. To find them out, we probably can’t just calculate: we’ll have to measure, observe an actual neutron star collision and see what the numbers actually are.

That’s one of the purposes of gravitational wave telescopes. It’s not (as far as I know) something LIGO can measure. But future telescopes, with more precision, should be able to. By watching two colliding neutron stars and comparing to a high-precision calculation, physicists will better understand what those neutron stars are made of. In order to do that, they will need someone to do that high-precision calculation. And that’s why people like me are involved.

It’s family blogging time! Since school is out we need some kind of writing activity so we’re all blogging, not just me. I did not require any particular subject. CJ is blogging about the movies he’s watching in his friend group’s “movie club” — he has the Marvel bug now and is plowing through the whole collection on Disney+. AB’s blog is called “The Nasty Times: Foods that Were Never Meant To Be Eaten” and each entry is about a food she considers nasty. The first entry was about mushrooms and she is currently composing “Why Onions Do Not Belong in Sloppy Joes.” I know, I know, who doesn’t like mushrooms and onions? Well, me at AB’s age — I made my mom take them out of *everything*, much to her annoyance. Now I’m getting my comeuppance.

I have two big longboxes of comics in the basement, almost all from 1982-1986, and AB and I spent part of the morning starting to sort and organize them. Perfect example of a task that feels like productivity and is not important in any way and yet — satisfying. Also nice to see old friends again, covers I haven’t seen in years but are familiar to me in every detail. This one seemed fairly on point:

I am still thinking about the masks. Why so unpopular in the US? Maybe it works like this. You are told (correctly) that wearing a mask doesn’t provide strong protection. Let’s say (making up a number) it only reduces your chance of transmitting or contracting the virus by a half. To many people that is going to feel like nothing: “I’m not really protected, what’s the point?” But in the aggregate, an easy, cheap measure that reduces the number of transmissions by 50% would be extremely socially valuable.
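
Back-of-envelope, using the same made-up 50% figure: halving per-contact transmission halves the reproduction number, and that halving compounds across generations of infection.

```python
# Back-of-envelope with made-up numbers: a hypothetical reproduction number
# R0 without masks, halved by universal mask-wearing. The per-generation
# halving compounds over successive generations of infection.
R0 = 2.5
generations = 10
without_masks = R0 ** generations          # rough size of generation 10 per index case
with_masks = (R0 / 2) ** generations
print(without_masks / with_masks)          # 2**10: a 1024-fold reduction
```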

Made a big, creamy, cheesy casserole with rotini and a million artichokes and peas, the vegetables out of the freezer of course. Times like this bring out the 60s housewife in me. Everyone is saying it’s good to get out of the house and see the sun from time to time, even just on your porch, but there hasn’t really *been* any sun here; it’s Wisconsin-technically-spring, in the 40s and kind of dreary. I go play basketball with the kids in the driveway each day in the chill. CJ can beat me almost all the time now.

AB and I listened to all the songs on Spotify called “Coronavirus.” There are already a ton; we didn’t actually listen to all of them, there were too many. A lot of them are in Spanish.

Daniel Litt organized a number theory conference, all held on Zoom with more than 130 people watching. To my surprise, this worked really well. People are starting to organize lists of online seminars and at this point there are *more* seminars I could be “going” to each day than there are when life is normal.

I’ve heard talk about starting baseball with the All-Star Game and having the World Series at Christmas.

Some people are hoping that maybe we’re drastically underestimating the prevalence of infection; maybe the reason curves are starting to bend isn’t the effect of our social isolation measures but the fact that a substantial population has already been affected and acquired temporary immunity, without ever knowing they were sick, and so maybe we’re vastly overestimating the proportion of cases which turn into serious illnesses. Wouldn’t that be great?

At the moment I don’t know anyone who’s died but I know people who know people who’ve died. At this point, do most people in the United States know people who know people who’ve died?

By phone I discussed with Ana Bonaca (Harvard) this paper by Auge *et al*, which (very sensibly) looks at the possibility of doing asteroseismology from the ground. My loyal reader knows that this is something I have been thinking about for a long time (I think it is mentioned in the shouty slide deck linked to from this old blog post), and Bonaca has too. Auge *et al* show that they can get nu-max from the ground for very luminous (and hence large, and hence large-amplitude-oscillating, and hence long-period) giant stars. Can they also get delta-nu? They say that it is rarely possible. But their tool is something like a Fourier transform followed by searching for peaks. If your *main goal* was to determine delta-nu, this would not be the tool of choice, I think. *Not that I have a tool ready!* Bonaca and I resolved to take a look at this problem.

Coronavirus is forcing massive changes on the academic ecosystem, and here’s another:

We’re having a seminar on applied category theory at U. C. Riverside, organized by Joe Moeller and Christian Williams.

It will take place on Wednesdays at 5 pm UTC, which is 10 am in California or 1 pm on the east coast of the United States, or 6 pm in England. It will be held online via Zoom, here:

https://ucr.zoom.us/j/607160601

We will have discussions online here:

https://categorytheory.zulipchat.com/

The first two talks will be:

- Wednesday April 1st, John Baez: Structured cospans and double categories.

**Abstract.** One goal of applied category theory is to better understand networks appearing throughout science and engineering. Here we introduce “structured cospans” as a way to study networks with inputs and outputs. Given a functor $L \colon A \to X$, a structured cospan is a diagram in $X$ of the form
$L(a) \to x \leftarrow L(b).$
If $A$ and $X$ have finite colimits and $L$ is a left adjoint, we obtain a symmetric monoidal category whose objects are those of $A$ and whose morphisms are certain equivalence classes of structured cospans. However, this arises from a more fundamental structure: a symmetric monoidal *double* category where the horizontal 1-cells are structured cospans, not equivalence classes thereof. We explain the mathematics and illustrate it with an example from chemistry.

- Wednesday April 8th, Prakash Panangaden: A categorical view of conditional expectation.

**Abstract.** This talk is a fragment from a larger work on approximating Markov processes. I will focus on a functorial definition of conditional expectation without talking about how it was used. We define categories of cones — which are abstract versions of the familiar cones in vector spaces — of measures and related categories cones of $L_p$ functions. We will state a number of dualities and isomorphisms between these categories. Then we will define conditional expectation by exploiting these dualities: it will turn out that we can define conditional expectation with respect to certain morphisms. These generalize the standard notion of conditioning with respect to a sub-sigma algebra. Why did I use the plural? Because it turns out that there are two kinds of conditional expectation, one of which looks like a left adjoint (in the matrix sense not the categorical sense) and the other looks like a right adjoint. I will review concepts like image measure, Radon-Nikodym derivatives and the traditional definition of conditional expectation. This is joint work with Philippe Chaput, Vincent Danos and Gordon Plotkin.

Here is some information provided by Zoom. There are lots of ways to attend, though the simplest for most of you will be going to

https://ucr.zoom.us/j/607160601

at the scheduled times.

Topic: ACT@UCR seminar

Time: Apr 1, 2020 10:00 AM Pacific Time (US and Canada)

Every 7 days, 10 occurrence(s)

Apr 1, 2020 10:00 AM

Apr 8, 2020 10:00 AM

Apr 15, 2020 10:00 AM

Apr 22, 2020 10:00 AM

Apr 29, 2020 10:00 AM

May 6, 2020 10:00 AM

May 13, 2020 10:00 AM

May 20, 2020 10:00 AM

May 27, 2020 10:00 AM

Jun 3, 2020 10:00 AM

Please download and import the following iCalendar (.ics) files to your calendar system. Daily: https://ucr.zoom.us/meeting/u5Qqdu-oqDsrnNuE0v8386uhHBWr4GQpqA/ics?icsToken=98tyKu-oqTosGtKVsVyCY7ctA8Hib9_ykH9gv4YNike2W3ZGaivUAfAWFYNvAfmB

Join Zoom Meeting: https://ucr.zoom.us/j/607160601

Meeting ID: 607 160 601

One tap mobile +16699006833,,607160601# US (San Jose)

+13462487799,,607160601# US (Houston)

Dial by your location

```
+1 669 900 6833 US (San Jose)
+1 346 248 7799 US (Houston)
+1 646 876 9923 US (New York)
+1 253 215 8782 US
+1 301 715 8592 US
+1 312 626 6799 US (Chicago)
```

Meeting ID: 607 160 601

Find your local number: https://ucr.zoom.us/u/adkKXYyiHq

Join by SIP 607160601@zoomcrc.com

Join by H.323

162.255.37.11 (US West)

162.255.36.11 (US East)

221.122.88.195 (China)

115.114.131.7 (India Mumbai)

115.114.115.7 (India Hyderabad)

213.19.144.110 (EMEA)

103.122.166.55 (Australia)

209.9.211.110 (Hong Kong)

64.211.144.160 (Brazil)

69.174.57.160 (Canada)

207.226.132.110 (Japan)

Meeting ID: 607 160 601

The MIT Categories Seminar is an informal teaching seminar in category theory and its applications, with the occasional research talk. This spring they are meeting online each Thursday, 12 noon to 1pm Eastern Time.

The talks are broadcast over YouTube here, with simultaneous discussion on the Category Theory Community Server. (To join the channel, click here.) Talks are recorded and remain available on the YouTube channel.

Here are some forthcoming talks:

March 26: David Jaz Myers (Johns Hopkins University) — Homotopy type theory for doing category theory.

April 2: Todd Trimble (Western Connecticut State University) — Geometry of regular relational calculus.

April 9: John Baez (UC Riverside) — Structured cospans and Petri nets.

April 16: Joachim Kock (Universitat Autònoma de Barcelona) — to be announced.

April 23: Joe Moeller (UC Riverside) — to be announced.

Videos of many older talks are available here.

beautiful day, The Dog got some extra long walks

word is that there is a case or two of COVID-19 in the neighbourhood,

also first case among campus staff confirmed, and first case among Hershey staff

A. called, the true depth of shortages has been revealed by the local news there,

stores in Arizona are rationing ammunition, 200 rounds per customer

Comfortably Numb


So far, I confess, this pandemic is not shaping up for me like for Isaac Newton. It’s not just that I haven’t invented calculus or mechanics: I feel little motivation to think about research at all. Or to catch up on classic literature or films … or even to shower, shave, or brush my teeth. I’m quarantined in the house with my wife, our two kids, and my parents, so certainly there’s been plenty of family time, although my 7-year-old daughter would inexplicably rather play fashion games on her iPad than get personalized math lessons from the author of *Quantum Computing Since Democritus*.

Mostly, it seems, I’ve been spending the time sleeping. Or curled up in bed, phone to face, transfixed by the disaster movie that’s the world’s new reality. Have you ever had one of those nightmares where you know the catastrophe is approaching—whether that means a missed flight, a botched presentation at your old high school, or (perhaps) more people dying than in any event since WWII—but you don’t know exactly when, and you can do nothing to avert it? Yeah, that feeling is what I now close my eyes to *escape*. And then I wake up, and I’m back in bizarro-nightmare-land, where the US is in no rush whatsoever to test people or to build ventilators or hospitals to cope with the coming deluge, and where ideas that could save millions have no chance against rotting institutions.

If nothing else, I guess we now have a decisive answer to the question of why humanity can’t get its act together on climate change. Namely, if we can’t wrap our heads around a catastrophe that explodes exponentially over **a few weeks**—if those who denied or minimized it face no consequences even when they’re dramatically refuted before everyone’s eyes—then what chance could we possibly have against a catastrophe that explodes exponentially over a century? (Note that I reject the view that the virus was sent by some guardian angel as the only possible *solution* to climate change, one crisis cancelling another one. For one thing, I expect emissions to roar back as soon as this new Black Death is over; for another, the virus punishes public transportation but not cars.)

Anyway, I realized I needed something, not necessarily to take my mind off the crisis, but to break me out of an unproductive spiral. Also, what better time than the present for things that I wouldn’t normally have time for? So, continuing a tradition from 2008, 2009, 2011, 2013, 2015, and 2018, we’re going to do an Ask Me Anything session. Questions directly or tangentially related to the crisis (continuing the discussion from the previous thread) are okay, questions *totally unrelated* to the crisis are even okayer, goofball questions are great, and questions that I can involve my two kids in answering are greatest of all. Here are this year’s ground rules:

- 24 hours or until I get bored
- One question per person total
- **Absolutely no** multi-part questions
- Self-contained questions only—nothing that requires me to read a paper, watch a video, etc.
- Scan the previous AMAs to see if your question is already there
- Any sufficiently patronizing, hostile, or annoying questions might be left in the moderation queue, 100% at my discretion

So ask away! And always look on the bright side of life.

**Update (March 19): No more questions, please.** Thanks, everyone! It will take me a few days just to work through all the great questions that are already in the queue.

**Update (March 24):** Thanks again for the 90-odd questions! For your reading convenience, here are links to all my answers, with some answers that I’m happy with bolded.

- Could non-biological entities be conscious? (Short answer: presumably)
- Is online teaching here to stay, even after coronavirus passes?
- Would a parliament of randomly-chosen high SAT-scorers be better than the current US government? (Short answer: probably. Low bar!)
- Am I optimistic about NISQ algorithms like QAOA? (Short answer: no)
- Have I tried sitting under an apple tree? (Answer: No, only lying under the covers)
- Something something about radical bio-isolationism to defend against engineered super-plagues?
- **Will the corona crisis help teach people about climate change?**
- **Will the crisis make academics reevaluate the need for so many conferences?**
- How should we teach undergrad Theory of Computation?
- **How do I feel about the progress in TCS over the last decade?** (Short answer: good)
- Is MIP*=RE evidence for quantum being stronger in general? (Short answer: no)
- Should voters be blamed, or only politicians? (Short answer: voters)
- **Could quantum computing help with future pandemics?** (Short answer: conceivably, but let’s start with masks and ventilators)
- What’s my estimate for the number of COVID victims?
- Will Trump’s debased response to the pandemic make him lose the election? (Short answer: I hope so!)
- **What recently-read books do I recommend?**
- What are my daughter’s favorite fashion games?
- Are there quantum size-depth tradeoffs for problems in NC or RNC?
- How do I combat the feeling that the pandemic makes all my research useless? (Short answer: my research was useless already!)
- Is time already involved at the commutator level in QM?
- What potential use of QCs brings me the most joy? (Short answer: disproving people who claimed QCs were impossible!)
- Why aren’t I more terrified of the flu than of coronavirus? (Short answer: Do you live in a frigging cave?)
- **Why are physicists so ignorant of theoretical CS, if all their experiments rely on computers?**
- How much economic cost should we bear to flatten the curve?
- Which two historical figures would I like to watch debating the human condition? (Answer: Bertrand Russell and Jesus)
- Is CRISPR being used to help search for coronavirus vaccines?
- **Have I ever grown a beard?** (Short answer: sadly, yes)
- Why are hard-to-simulate Mikado sticks not “Mikado supremacy”?
- **What advice would I give a postdoc who dreams of pursuing big questions?** (Short answer: do it!)
- What recent research idea did I have that didn’t pan out?
- How will QCs help advance materials science? (Short answer: no one knows)
- Are Parity Games in P?
- Boxers or briefs?
- How do I know I retain transtemporal identity?
- **What positives will come out of this pandemic?**
- Will I try Beeminder?
- Am I worried about dysgenic fertility trends?
- How do you know if something’s conscious? (Short answer: you don’t)
- What non-political decree would I issue as king?
- Do Borel determinacy and the measurability of projective sets have metaphysical truth-values? (Short answer: dunno)
- Why haven’t we previously seen lots of viruses with asymptomatic spread?
- How is Gil Kalai faring now that his worldview has been shattered? (Short answer: he’s fine)
- **Have things gotten so awful that we should stop working on fundamental questions?**
- **Do non-physical realms exist?**
- Is consciousness a quantum phenomenon?
- Can one separate the completeness and soundness aspects of MIP*?
- **Why do I still feel the need to bring up Amanda Marcotte?** (Short answer: Because ironically, she showed the world why my teenage fears *weren’t* completely delusional)
- How would I apportion money to different research areas?
- Can one end music infringement lawsuits by putting 68 billion computer-generated melodies into the public domain?
- **What’s the second piece of information people should take from this blog?**
- How likely am I to win a Nobel Prize?
- What research area would I choose were I starting today?
- Will my math lessons for my daughter be made more widely available? (Short answer: That’s the plan!)
- **How should Turing-universality be defined for dynamical systems?**
- **Has the coronavirus crisis inspired me to take existential risks in general, including AI risks, more seriously?** (Short answer: yes)
- Why do world leaders say we’re now in a war? (Short answer: because we are)
- **Why doesn’t chemistry get more respect?**
- What other catastrophes do I worry might hit us in the next decade?
- **Have I ever experienced “true free will”?**
- Why do many top PhD programs now expect applicants to already have research experience? (Short answer: arms race)
- Do I believe in telepathy? (Answer: no)
- Is what’s mathematically possible a strict subset of what’s logically possible?
- What do I think of the Law of the Excluded Middle? (Short answer: 100% for, 0% against)
- What do I think of bounded arithmetic and proof complexity?
- Who are leading candidates for the next Turing Award?
- **Could an infinite amount of computation be done just before a Big Rip?**
- What is quantum discord and is it useful for anything?
- **What do I think of academics having to write diversity statements?** (Short answer: for diversity, against mandatory statements)
- **What’s my frank opinion of geometry-from-entanglement, and what does the new replica wormhole result tell us?**
- **What do I think of Sabine Hossenfelder’s view that impossibility theorems are only about theories, not about nature itself?**
- Do I own a gun, or have I considered getting one? (Short answer: no and no)
- Are the asymptotically best quantum error-correcting codes likely to be stabilizer codes?
- What was the flavor of the last piece of cake I ate?
- Why is it called computer science if we don’t do experiments? (Short answer: for starters, we do do experiments)
- Is there such a thing as non-maximal entanglement? (Short answer: yes)
- At what fatality rate do you order shelter-in-place?
- Are logistics and optimization promising applications of QC?
- What problems in quantum complexity do I think will be solved next?
- **What changes would I (and my kids) make to elementary education in the US?**
- **What do my kids think that my wife and I do at work?**
- Do I think Theory A and Theory B will join forces to tackle the P=?NP problem? (Short answer: No, I think Theory A will tackle it, while Theory B explains how it could’ve been tackled more elegantly)
- What’s a good quantum computing project idea?
- What’s the role of error-correcting codes in the PCP Theorem?
- Is the Blum Speedup Theorem true? (Answer: yes)
- **Which type of entanglement better captures the real world: tensor-product or commuting-operator?**
- Is Ewin Tang’s algorithm practical?
- **What motivates me to get out of bed?** (Answer: Dana yelling at me to come help with the kids)
- Would George Washington’s descendants, ruling as kings, be preferable to Trump?
- What big question would I love to solve?
- Am I planning a revision of *Quantum Computing Since Democritus*? (Short answer: no)

Sometimes an experimental result sparks enormous curiosity, inspiring a myriad of questions and ideas for further experimentation. In 2004, Geim and Novoselov, of The University of Manchester, isolated a single layer of graphene from bulk graphite with the “Scotch tape method,” for which they were awarded the 2010 Nobel Prize in Physics. That single result has branched out countless times, serving as a source of inspiration in as many different fields. We are now in the midst of such an array of branching-out in graphene research, and one branch gaining attention is the ultra-low friction observed between graphene and other surface materials.

Much has been learned about graphene in the past 15 years through an immense amount of research, most of it in non-mechanical realms (e.g., electron transport measurements, thermal conductivity, pseudo-magnetic fields in strain engineering). Recently, however, superlubricity, a mechanical phenomenon, has become a focus for many research groups. Mechanical measurements have famously shown graphene’s tensile strength to be hundreds of times that of the strongest steel, indisputably placing it atop the list of construction materials for a superhero suit. Superlubricity, a tribological property of graphene, is arguably just as impressive.

Tribology is the study of interacting surfaces in relative motion, including sources of friction and methods for its reduction. It’s not a recent discovery that coating a surface with graphite (many layers of graphene) can lower friction between two sliding surfaces. Current research focuses on the precise mechanisms, and the choice of surfaces, that minimize friction with one or several layers of graphene.

Research published in *Nature Materials* in 2018 measures friction between surfaces under constant load and velocity. The experiment includes two groups: one consisting of two graphene surfaces (a homogeneous junction), and another consisting of graphene and hexagonal boron nitride (a heterogeneous junction). The research group measures friction using atomic force microscopy (AFM): the hexagonal boron nitride (or graphene, for a homogeneous junction) is fixed to the stage of the AFM while the graphene slides atop it. The load is held constant at 20 𝜇N and the sliding velocity at 200 nm/s. Ultra-low friction is observed for homogeneous junctions when the underlying crystalline lattices of the two surfaces sit at a relative angle of 30 degrees. However, this ultra-low friction state is very unstable: upon sliding, the surfaces rotate toward a locked-in lattice alignment. Friction varies with the relative angle between the two surfaces’ crystalline lattices, from its minimum (ultra-low) value at 30 degrees to its maximum once locked-in alignment is reached. In the aligned state, friction is so large that shearing is rendered impossible with the experimental setup.

Friction varies with the relative angle of the crystalline lattices and is therefore anisotropic. (Wood offers a familiar example of anisotropy: it takes less force to split a log when the axe blade lands parallel to the grain than perpendicular to it, because the force required depends on the direction along which it is applied.) Frictional anisotropy is greater in homogeneous junctions because their tendency to rotate into a stuck, maximum-friction alignment is greater than in heterojunctions; in fact, heterogeneous junctions display frictional anisotropy three orders of magnitude smaller than homogeneous ones. The reason is lattice mismatch: because graphene and hBN have different lattices, the two crystalline structures are never truly parallel, so they never experience the locking effect of lattice alignment that homogeneous junctions do. Hence heterogeneous junctions never become stuck in the high-friction state that characterizes homogeneous ones, and they experience ultra-low friction during sliding at all relative lattice angles.

Presumably, to increase applicability, upscaling to much larger loads will be necessary. A large-scale, cost-effective method to dramatically reduce friction would undoubtedly have an enormous impact on a great number of industries. Cost efficiency is a key component of realizing graphene’s potential impact, not only as it applies to superlubricity but in all areas of application. As access to large amounts of affordable graphene increases, so will experiments in fabricating devices that exploit the extraordinary characteristics which have placed graphene and graphene-based materials on the front lines of materials research for the past couple of decades.

Heard a strange tale this week.

My neighbour was looking for supplies, taking relatives with health issues to a vacation home for extra isolation. Said he'd found small stores up in the mountains which still had stocks and bought what they needed.

Apparently, or so I'm told, there is a toilet paper manufacturer in northern Pennsylvania, which distributes locally (plausible, there is plenty of softwood up there).

Toilet Paper Seeds

But, the little cores in the centers of the rolls, those are apparently made in China, and are in short supply. So this toilet paper manufacturer is distributing the rolls without a core cardboard cylinder until they can set up to get alternative suppliers, or make their own.

Literally could not make this up.

So… it must be true, right?

I went to Trader Joe’s this morning. It was an extremely pleasant oasis of normality. Everything was as it always is, except for the guy standing out front apparently doing nothing but who I guessed was there to control inflow in case the store got too crowded. (Verified by a friend who was at the store early this afternoon, by which point the guy was only letting someone in when someone else came out.) When I was there, the shoppers were somewhat sparse, but even so there was a kind of awkward impromptu ballet of people trying to imitate repelling particles as best they could. My friends in New York are saying the grocery stores are out of flour, eggs, milk, meat, and pasta, but here everything is stocked as normal. I filled my cart really high, not because I’m hoarding (we have enough shelf-stable starch and cans and root vegetables to last us a while, we’re fine) but because I now know that when all four of us, one of them a hungry teenager who’s now taller than I am, are eating three meals a day in the house, we actually consume a lot more food than I usually buy.

I didn’t wear a mask to the store — but why didn’t I? Everyone is saying that you are probably not going to get COVID from touching contaminated surfaces, as long as you are good about handwashing. They think the spread is really person to person — he coughs on you, you cough on me. Wrapping a scarf around the lower part of your face isn’t an N95 mask (remember when I didn’t know what an N95 mask was?) but any form of barrier has to block some reasonable portion of whatever droplet cloud a person coughs out, right? And that’s the game, to block a reasonable proportion of transmissions, to get that exponential constant down below 1. A few people in the store were wearing masks, maybe 1 in 20.
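The arithmetic behind “get that exponential constant down below 1” can be made concrete with a toy calculation (illustrative numbers only; the baseline R0 here is an assumption, not a measured value): if each infected person would otherwise pass the virus to R0 others, and barriers block a fraction p of transmissions, the effective number becomes R = R0 × (1 − p), and everything hinges on whether that lands above or below 1.

```python
# Toy sketch, not epidemiology: effective reproduction number under
# partial blocking of transmissions. R0 = 2.5 is an assumed baseline.
def cases_after(generations, r):
    """Cases descended from one seed after n generations at rate r."""
    total = 1.0
    for _ in range(generations):
        total *= r
    return total

R0 = 2.5
for p in (0.0, 0.4, 0.7):
    r = R0 * (1 - p)
    print(f"block {p:.0%}: R = {r:.2f}, "
          f"cases per seed after 10 generations ~ {cases_after(10, r):.2f}")
```

The point of the sketch: blocking 70% of transmissions doesn’t just shrink the epidemic by 70%; it flips the sign of the exponent, so each generation of cases is smaller than the last.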

All the talk in the store was about the rumor that Governor Evers was signing a statewide shelter-in-place order, and when I got home I found out it was true. (Despite reassuring information about surfaces, I am trying not to take my phone out when I’m out in the world, to avoid potentially contaminating it.) Ours isn’t called “shelter in place,” it’s called “safer at home,” which I guess is meant to sound softer. What this is going to mean, I think, is that a lot of workplaces which are currently operating are going to stop. And that maybe I should have planned more state park walks with the kids last week because now it’s forbidden.

CJ’s middle school friends have a film club; they watch a movie and then discuss it for two hours the next day on FaceTime. He’s watching *Guardians of the Galaxy* right now. Last night we made Cincinnati chili, which I’ve never done before. Boiling the meat has always sounded gross to me but it really does make for a meaty-but-not-greasy chili. One small upside: I am making things you have to simmer for an hour, something I rarely do when I have to start dinner after I get home from work.

All in all, starting from the baseline that the news is very bad, the news is not bad. In Italy, which has been in hard lockdown for what, a week? the rate of new cases is starting to decline. (The mathematician Luca Trevisan is in northern Italy and his blog is a very good snapshot of what it’s like to be in the middle of the outbreak there.) China, after two months of lockdown and quite a long spell without major new infections, is starting to loosen up; what happens next seems pretty important. A big new wave of infection or have they really beaten it?

WHO launches global megatrial of the four most promising coronavirus treatments

very skeptical article on chloroquine or hydroxychloroquine therapy

Lovely day today, long walks with dog, too many people out and about.

The local hospital has set up a separate outside area for ER patients coming in with fevers and coughs, and restricted all visitors. Two cases currently confirmed in the county, no indication of the number of tests or lag times. HIPAA is going to really screw up any efforts to do contact tracing or general public health counter-measures.

Walking the dog through the forest above the bike path, saw people on the bike path a couple of hundred yards away, then one of them had a really bad coughing fit. They continued in the direction away from us. Wonder what will happen to them.

Managed to avoid news and social media most of the day, then got a somewhat over-concentrated dose around dinner time.


So, how about all that social distancing due to the growing pandemic, eh? There is a lot more staying-at-home these days than we’re normally used to, and I think it’s important to keep our brains active as well as our hands washed and our homes stocked with toilet paper.

To that end, I’m doing a little experiment: a series of informal videos I’m calling The Biggest Ideas in the Universe. (Very tempted to put an exclamation point every time I write that, but mostly resisting.) They will have nothing directly to do with viruses or pandemics, but hopefully will be a way for people to think and learn something new while we’re struggling through this somewhat surreal experience. Who knows, they may even be useful long after things have returned to normal.

The idea will be to have me talking about one Big Idea in each video, hopefully with a new installment released each week. I’ll invite viewers to leave questions here (where I’ll be linking to each video), and at YouTube. Then I’ll pick out some of the most interesting questions and make another short video addressing them.

Full set of videos will be available here on the blog, or directly on YouTube.

Here’s the introductory announcement:

Before anyone jumps in to tell me — yes, I am very amateur at this! My green-screen usage could definitely use an upgrade, for one thing. Happy to take suggestions as to how to improve the quality of the video production (quality of the substance is what it is, I’m afraid).

Consider this a very tiny gesture in the direction of sticking together and moving forward during some trying times. I hope everyone out there is staying as safe as possible.

Chloroquine for the 2019 novel coronavirus SARS-CoV-2

## International Journal of Antimicrobial Agents (Elsevier)

Volume 55, Issue 3, March 2020, 105923

So, anyone actually read this?

It is a "hypothesis paper" - referring to some preliminary clinical tests for SARS

*Maybe* there is something to this, but I don't see any new clinical preprints on medrXiv

When do you go the grocery store? If you’re concerned about your own risk of infection, the logic of exponential growth insists that today is always better than tomorrow. But the community is better served by each person waiting as long as they can, so as to slow the overall exponential constant.

What *is* the exponential constant? People are constantly graphing the number of confirmed cases in each country, state, and locality on a log-linear scale and watching the slope, but I don’t see how, in a principled way, to untangle the effects of increased testing from actual increases in infection. I guess if one hypothesizes that there’s something like a true mean rate, you could plot state-by-state nominal cases against tests done and see if you can fit exp(ct)*(tests per capita) to it. But there are state-to-state differences in testing criteria, state-to-state differences in mitigation strategy, etc.
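As a minimal sketch of that fitting idea, under strong simplifying assumptions (synthetic noiseless data, a single true growth rate, and case counts exactly proportional to testing): taking logs turns the model exp(ct)·(tests per capita) into a straight line, so an ordinary least-squares fit recovers c.

```python
import numpy as np

def fit_exponential_rate(days, nominal_cases, tests_per_capita):
    """Estimate c in nominal_cases ~ A * exp(c*t) * tests_per_capita
    by least squares on the log-transformed model. A toy sketch of
    the idea in the text; real data would need per-state testing
    criteria, noise models, etc."""
    y = np.log(np.asarray(nominal_cases) / np.asarray(tests_per_capita))
    c, log_A = np.polyfit(days, y, 1)
    return c, np.exp(log_A)

# Synthetic example: infections grow at c = 0.2/day while testing
# ramps up linearly, inflating the raw confirmed-case counts.
days = np.arange(1, 15)
tests_per_capita = 0.001 * days          # testing expands over time
true_infected = 50 * np.exp(0.2 * days)  # the underlying epidemic
nominal_cases = true_infected * tests_per_capita

c, A = fit_exponential_rate(days, nominal_cases, tests_per_capita)
print(f"recovered growth rate c = {c:.3f} per day")
```

On this idealized data the fit recovers the true rate exactly; the caveats in the paragraph above (testing criteria, mitigation differences) are precisely what breaks this on real numbers.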

AB and I made chocolate chip cookies today. Dr. Mrs. Q and CJ watched *Inside Out.* Weather’s warmer and I think we’ll get some driveway basketball in. We listened to “The Gambler” in honor of Kenny Rogers, deceased today. I had forgotten, or didn’t know, what an ice-cold love letter to death it is. “Every hand’s a winner, and every hand’s a loser, and the best that you can hope for is to die in your sleep.” Damn.

Native Voices - NIH

“Often entire families perished during virgin-soil epidemics because all members were stricken simultaneously, leaving no one capable of fetching water or preparing food.” —Henry Dobyns, “Diseases,”

At the most recent MSRI board of trustees meeting on Mar 7 (conducted online, naturally), Nicolas Jewell (a Professor of Biostatistics and Statistics at Berkeley, also affiliated with the Berkeley School of Public Health and the London School of Hygiene and Tropical Medicine), gave a presentation on the current coronavirus epidemic entitled “2019-2020 Novel Coronavirus outbreak: mathematics of epidemics, and what it can and cannot tell us”. The presentation (updated with Mar 18 data), hosted by David Eisenbud (the director of MSRI), together with a question and answer session, is now on YouTube:

(I am on this board, but could not make it to this particular meeting; I caught up on the presentation later, and thought it would be of interest to several readers of this blog.) While there is some mathematics in the presentation, it is relatively non-technical.

As I write this, a very large fraction of the research universities in the US (and much of the world) are either in a shutdown mode or getting there rapidly. On-campus work is being limited to "essential" operations. At my institution (and most of the ones I know about), "essential" means (i) research directly related to diagnosing/treating/understanding covid-19; (ii) minimal efforts necessary to keep experimental animals and cell lines going, as the alternative would be years or decades of lost work; (iii) maintenance of critical equipment that will be damaged otherwise; (iv) support for undergraduates unable to get home.

For people in some disciplines, this may not be *that* disruptive, but for experimentalists (or field researchers), this is an enormous, unplanned break in practice. Graduate students face uncertainty (even more than usual), and postdocs doubly so; I haven't seen anything online discussing their situation. An eight-week hitch in the course of a six-year PhD is frustrating, but in a limited-duration postdoc opportunity, it's disproportionately worse, and the economics faced by universities and industry will also complicate the job market for a while. Both groups are often far from their families.

If we'd experienced something like this before, I could offer time-worn wisdom, but we've never had circumstances like this in the modern (post-WWII) research era. This whole situation feels surreal to me. Frankly, focusing and concentrating on science and the routine parts of the job have been a challenge, and I figure it has to be worse for people not as ~~ancient~~ established. Here are a few thoughts, suggestions, and links as we move to get through this:

- While we may be *physically* socially distancing, please talk with your friends, family, and colleagues, by phone, skype, zoom, slack, wechat, whatever. Try not to get sucked into the cycle of breaking news and the toxic parts of social media. Please take advantage of your support structure, and if you need to talk to someone professional, please reach out. We're in this together - you don't have to face everything by yourself.
- Trying to set up some kind of routine and sticking to it is good. Faculty I know are trying to come up with ways to keep their folks intellectually engaged - regular group meetings + presentations by zoom; scheduled seminars and discussions via similar video methods across research groups and in some cases even across different universities. For beginning students, this is a great time to read (really read) the literature and, depending on your program, study for your candidacy/qualifier. Again, you don't have to do this alone; you can team up with partners on this. For students farther along, data analysis, paper writing, planning the next phase of your research, starting to work on the actual thesis writing, etc. are all possibilities. For postdocs interested in academia, this is potentially a time to comb the literature and think about what you would like to do as a research program. Some kind of schedule or plan is the way to divide this into manageable pieces instead of feeling like these are gigantic tasks.
- The Virtual March Meeting has continued to add talks.
- My friend Steve Simon's solid state course lectures are all available. They go with his book. They are also just one example of the variety of talks available from Oxford - here are the other physics ones.
- Here is a set of short pieces about topology in condensed matter from a few years ago.
- And here is a KITP workshop on this topic from this past fall.
- These are some very nice lecture notes about scientific computing using python. Here is something more in-depth on github. Could be a good time to learn this stuff....
- On the lighter side, here are both PhD Comics movies for free streaming.

Most people, when they picture exponential growth, think of speed. They think of something going faster and faster, more and more out of control. But in the beginning, exponential growth feels slow. A little bit leads to a little bit more, leads to a little bit more. It sneaks up on you.
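A toy doubling process shows just how slow the start feels (illustrative numbers only, not real case data): with cases doubling every three days, the first week barely registers, while a later week of the *same* process adds hundreds of times more.

```python
# Purely illustrative: a quantity doubling every 3 days over a month.
# The growth in the first week is tiny compared to the growth in the
# last week -- the "sneaks up on you" effect described above.
cases = [2 ** (day / 3) for day in range(31)]

first_week = cases[7] - cases[0]    # new cases, days 0-7
last_week = cases[30] - cases[23]   # new cases, days 23-30
print(f"new cases in the first week: {first_week:.0f}")
print(f"new cases in the last week:  {last_week:.0f}")
```

Same rule, same doubling time; only the starting point differs. That gap is the whole story of why exponential growth feels slow until suddenly it doesn’t.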

When the first cases of COVID-19 were observed in China in December, I didn’t hear about it. If it was in the news, it wasn’t news I read.

I’d definitely heard about it by the end of January. A friend of mine had just gotten back from a trip to Singapore. At the time, Singapore had a few cases from China, but no local transmission. She decided to work from home for two weeks anyway, just to be safe. The rest of us chatted around tea at work, shocked at the measures China was taking to keep the virus under control.

Italy reached our awareness in February. My Italian friends griped and joked about the situation. Denmark’s first case was confirmed on February 27, a traveler returning from Italy. He was promptly quarantined.

I was scheduled to travel on March 8, to a conference in Hamburg. On March 2, six days before, they decided to postpone. I was surprised: Hamburg is on the opposite side of Germany from Italy.

That week, my friend who went to Singapore worked from home again. This time, she wasn’t worried she brought the virus from Singapore: she was worried she might pick it up in Denmark. I was surprised: with so few cases (23 by March 6) in a country with a track record of thorough quarantines, I didn’t think we had anything to worry about. She disagreed. She remembered what happened in Singapore.

That was Saturday, March 7. Monday evening, she messaged me again. The number of cases had risen to 90. Copenhagen University asked everyone who traveled to a “high-risk” region to stay home for fourteen days.

On Wednesday, the university announced new measures. They shut down social events, large meetings, and work-related travel. Classes continued, but students were asked to sit as far as possible from each other. The Niels Bohr Institute was more strict: employees were asked to work from home, and classes were asked to switch online. The canteen would stay open, but would only sell packaged food.

The new measures lasted a day. On Thursday, the government of Denmark announced a lockdown, starting Friday. Schools were closed for two weeks, and public sector employees were sent to work from home. On Saturday, they closed the borders. There were 836 confirmed cases.

Exponential growth is the essence of life…but not of daily life. It’s been eerie, seeing the world around me change little by little and then lots by lots. I’m not worried for my own health. I’m staying home regardless. I know now what an exponential feels like.

P.S.: This blog has a no-politics policy. Please don’t comment on what different countries or politicians should be doing, or who you think should be blamed. Viruses have enough effect on the world right now, let’s keep viral arguments out of the comment section.

First off, let me say I do not wish to sound disrespectful to anybody here, least of all my colleagues, to most of whom goes my full esteem and respect (yes, not all of them, doh). Yet I feel compelled to write today about a sociological datum I have noticed in these difficult times of isolation, when all of us turn to the internet as our main outlet for rants and logorrhea, or to make the impending threat of a global catastrophe less heavy by sharing it with our peers, to exorcise our fears.

Just a short post to note that this year’s Abel prize has been awarded jointly to Hillel Furstenberg and Grigory Margulis “for pioneering the use of methods from probability and dynamics in group theory, number theory and combinatorics”. I was not involved in the decision making process of the Abel committee this year, but I certainly feel that the contributions of both mathematicians are worthy of the prize. Certainly both mathematicians have influenced my own work (for instance, Furstenberg’s proof of Szemeredi’s theorem ended up being a key influence in my result with Ben Green that the primes contain arbitrarily long arithmetic progressions); see for instance these blog posts mentioning Furstenberg, and these blog posts mentioning Margulis.

I recited the poem “Paul Revere’s Ride” to myself while walking across campus last week.

A few hours earlier, I’d cancelled the seminar that I’d been slated to cohost two days later. In a few hours, I’d cancel the rest of the seminars in the series. Undergraduates would begin vacating their dorms within a day. Labs would shut down, and postdocs would receive instructions to work from home.

I memorized “Paul Revere’s Ride” after moving to Cambridge, following tradition: As a research assistant at Lancaster University in the UK, I memorized e. e. cummings’s “anyone lived in a pretty how town.” At Caltech, I memorized “Kubla Khan.” Another home called for another poem. “Paul Revere’s Ride” brooked no competition: Campus’s red bricks run into Boston, where Revere’s story began during the 1700s.

Henry Wadsworth Longfellow, who lived a few blocks from Harvard, composed the poem. It centers on the British assault against the American colonies, at Lexington and Concord, on the eve of the Revolutionary War. A patriot learned of the British troops’ movements one night. He communicated the information to faraway colleagues by hanging lamps in a church’s belfry. His colleagues rode throughout the night, to “spread the alarm / through every Middlesex village and farm.” The riders included Paul Revere, a Boston silversmith.

The Boston-area bricks share their color with Harvard’s crest, crimson. So do the protrusions on the coronavirus’s surface in colored pictures.

The yard that I was crossing was about to “de-densify,” the red-brick buildings were about to empty, and my home was about to lock its doors. I’d watch regulations multiply, emails keep pace, and masks appear. Revere’s messenger friend, too, stood back and observed his home:

He climbed the tower of the church,

Up the wooden stairs, with stealthy tread,

To the belfry-chamber overhead, [ . . . ]

By the trembling ladder, steep and tall,

To the highest window in the wall,

Where he paused to listen and look down

A moment on the roofs of the town,

And the moonlight flowing over all.

I commiserated also with Revere, waiting on tenterhooks for his message:

Meanwhile, impatient to mount and ride,

Booted and spurred, with a heavy stride,

On the opposite shore walked Paul Revere.

Now he patted his horse’s side,

Now gazed on the landscape far and near,

Then impetuous stamped the earth,

And turned and tightened his saddle-girth…

The lamps ended the wait, and Revere rode off. His mission carried a sense of urgency, yet led him to serenity that I hadn’t expected:

He has left the village and mounted the steep,

And beneath him, tranquil and broad and deep,

Is the Mystic, meeting the ocean tides…

The poem’s final stanza kicks. Its message carries as much relevance to the 21st century as Longfellow, writing about the 1700s during the 1800s, could have dreamed:

So through the night rode Paul Revere;

And so through the night went his cry of alarm

To every Middlesex village and farm,—

A cry of defiance, and not of fear,

A voice in the darkness, a knock at the door,

And a word that shall echo forevermore!

For, borne on the night-wind of the Past,

Through all our history, to the last,

In the hour of darkness and peril and need,

The people will waken and listen to hear

The hurrying hoof-beats of that steed,

And the midnight message of Paul Revere.

Reciting poetry clears my head. I can recite on autopilot, while processing other information or admiring my surroundings. But the poem usually wins my attention at last. The rhythm and rhyme sweep me along, narrowing my focus. Reciting “Paul Revere’s Ride” takes me 5-10 minutes. After finishing that morning, I repeated the poem, and began repeating it again, until arriving at my institute on the edge of Harvard’s campus.

Isolation can benefit theorists. Many of us need quiet to study, capture proofs, and disentangle ideas. Many of us need collaboration; but email, Skype, Google Hangouts, and Zoom connect us. Many of us share and gain ideas through travel; but I can forfeit a little car sickness, air turbulence, and waiting in lines. Many of us need results from experimentalist collaborators, but experimental results often take a long time to gather even in the absence of pandemics. Many of us are introverts who enjoy a little self-isolation.

April is National Poetry Month in the United States. I often celebrate by intertwining physics with poetry in my April blog post. Next month, though, I’ll have other news to report. Besides, as my walk demonstrated, we need poetry now.

Paul Revere found tranquility on the eve of a storm. Maybe, when the night clears and doors reopen, science born of the quiet will flood journals. Aren’t we fortunate, as physicists, to lead lives steeped in a kind of poetry?

Next quarter, starting March 30, I will be teaching “Math 247B: Classical Fourier Analysis” here at UCLA. (The course should more accurately be named “Modern real-variable harmonic analysis”, but we have not gotten around to implementing such a name change.) This class (a continuation of Math 247A from the previous quarter, taught by my colleague, Monica Visan) will cover the following topics:

- Restriction theory and Strichartz estimates
- Decoupling estimates and applications
- Paraproducts; time frequency analysis; Carleson’s theorem

As usual, lecture notes will be made available on this blog.

Unlike previous courses, this one will be given online as part of UCLA’s social distancing efforts. In particular, the course will be open to anyone with an internet connection (no UCLA affiliation is required), though non-UCLA participants will not have full access to all aspects of the course, and there is the possibility that some restrictions on participation may be imposed if there are significant disruptions to class activity. For more information, see the course description. **UPDATE**: due to time limitations, I will not be able to respond to personal email inquiries about this class from non-UCLA participants in the course. Please use the comment thread to this blog post for such inquiries. I will also update the course description throughout the course to reflect the latest information about the course, both for UCLA students enrolled in the course and for non-UCLA participants.

It's been a remarkable week. There seems to be a consensus among US universities, based in part on CDC guidelines, and in part on the logistically and legally terrifying possibility of having to deal with dormitories full of quarantined undergraduates, that the rest of the 2019-2020 academic year will be conducted via online methods. This will be rough, but could well be a watershed moment for distance-education techniques. The companies that make the major software platforms (e.g., Zoom, Canvas) and host their web storage are facing a remarkable trial by fire when the nation's large universities all come back from break and hundreds of thousands of students try to use these tools at once.

At the same time that all this is going on, many doctoral programs around the country (including ours) that had not already done their graduate recruiting visitations were canceling open houses and trying to put together virtual experiences to do the job.

There is a lot to unpack here, but it's worth asking: Are people over-reacting? I don't think so, and over-reacting would be better than the alternative, anyway. Different estimates give a range of values, but it would appear that the age-averaged mortality rate of covid-19 is somewhere between 0.7% and 3%. (The current number in the US is something like 2.9%, but that's probably an overestimate due to appallingly too little testing; in the non-Wuhan parts of China it's like 0.6%, but in Italy it's over 3%.) The disease seems comparable in transmission to the annual influenza, which in the US is estimated to infect 35-40M people every year, and with a mortality rate of around 0.1% leads to something like 35-40K deaths per year. Given this, it's not unreasonable to think that, unchecked, there could be between 250K and 1.2M deaths from this in the US alone. A key issue in Italy stems from the hospitalization rate of around 10-15%. If the cases come too rapidly in time, there just aren't enough hospital beds. This is why flattening the curve is so important.
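The back-of-envelope arithmetic above is easy to reproduce. Here is a minimal sketch (purely illustrative; the infection counts and mortality rates are the figures quoted in this post, not independent epidemiological estimates):

```python
# Back-of-envelope estimate of unchecked US covid-19 deaths,
# reproducing the arithmetic in the post. All inputs are the
# post's stated assumptions, not independent data.

def expected_deaths(infected, mortality_rate):
    """Deaths = number infected x mortality rate."""
    return infected * mortality_rate

# Flu baseline: 35-40M US infections/year at ~0.1% mortality.
flu_low = expected_deaths(35e6, 0.001)
flu_high = expected_deaths(40e6, 0.001)

# Covid-19, assuming flu-like spread (35-40M infected) and a
# mortality rate somewhere between 0.7% and 3%.
covid_low = expected_deaths(35e6, 0.007)
covid_high = expected_deaths(40e6, 0.03)

print(f"flu:   {flu_low:,.0f} - {flu_high:,.0f} deaths")
print(f"covid: {covid_low:,.0f} - {covid_high:,.0f} deaths")
```

Running this recovers the flu's roughly 35-40K deaths per year and the post's unchecked covid-19 range of roughly 250K to 1.2M.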

It annoys me to see some people whom I generally respect scientifically seem to throw their numerical literacy out the window on this. We shouldn't freak out and panic, but we should understand the underlying math and assumptions and take this appropriately seriously.

**Update**: Notes from a meeting at UCSF (web archive version of link) hosted by, among others, Joe DeRisi. I first met Joe when we became Packard Fellows back in 2003. He's a brilliant and very nice guy, who with colleagues created the viral phylogeny chip that identified SARS as a previously unknown coronavirus and pinpointed its closest relatives.

On Wednesday, I asked several of my students what tools they use to collaborate online on their problem sets. Several of them mentioned Discord. I am currently trying to set up a Discord channel for my class. I imagine I am not the only one in this situation, so I am writing up my progress as I go here. If you have relevant knowledge, please leave an answer to this question or edit mine!

If you have questions to discuss, let’s do that in the comment thread here. And please promote this on twitter and wherever else math teachers gather!

I did a dry run with 6 of my students this afternoon. We spent 10 minutes introducing each other to the system, broke into two groups of 3 and spent 10 minutes solving a math problem (problem 19.2 here) and 30 minutes debriefing. Most students found it awkward but workable. Here are things I/we saw as good:

- The connection was reliable, with much less dropping out than the video-conferencing tools they report their other courses are using.
- The presence of the chat stream with integrated graphics and LaTeX encouraged people to write things down, just like we encourage students to write on blackboards when doing group work in class. This made it easier for me to jump in and out of conversations.
- It was pretty easy for me to jump back and forth from one group to the other. (Not sure how I would do with 4 or more groups, though.)
- We really did solve a nontrivial problem and have a nontrivial conversation about it!

Things we didn’t like

- Both groups needed, at one point, to draw an equation or commutative diagram like this one:

One group did the LaTeX; the other drew a commutative diagram in MS Paint and dropped it into the thread. Both expressed frustration about how much slower this was than drawing a picture in person.

- Some students felt that it was awkward not seeing the people they were talking to.

**Update (March 13):** One day after I put up this post—a post that many commenters criticized as too alarmist—the first covid cases were detected in Austin. As a result, UT Austin closed its campus (including my son’s daycare), and at 3:30am, the Austin Independent School District announced its decision to suspend all schools until further notice. All my remaining plans for the semester (including visits to Berkeley, Stanford, Harvard, CU Boulder, Fermilab, Yale, and CMU) are obviously cancelled. My family is now on lockdown, in our house, probably at least until the summer. The war on the virus has reached us. The “1939” analogy that I mentioned in the post turned out to be more precise than I thought: then, as now, there were intense debates about just how serious the crisis would be, but those debates never even had a chance to get settled by argument; events on the ground simply rendered them irrelevant.

**Scott’s foreword:** This week Steve Ebin, a longtime *Shtetl-Optimized* reader (and occasional commenter) from the San Francisco tech world, sent me the essay below. Steve’s essay fit too well with my own recent thoughts, and indeed with this blog’s title, for me not to offer to share it here—and to my surprise and gratitude, Steve agreed.

I guess there are only two things I’d add to what Steve wrote. First, some commenters took me to task for a misplaced emphasis in my last coronavirus post, and on further reflection, I now concede that they were right. When a preventable catastrophe strikes the world, what’s always terrified me most are *not* the ranting lunatics and conspiracy theorists, even if some of those lunatics actually managed to attain the height of power, from where they played a central role in the catastrophe. No, what’s terrified me more are the blank-faced bureaucrats who’ve signed the paperwork that amounted to death warrants. Like, for example, the state regulators who ordered the Seattle infectious disease expert to stop, after she’d had enough of the government’s failure to allow corona tests, took it upon herself to start testing anyway, and found lots of positive results. Notably, only some countries have empowered lunatics, but the blank-faced bureaucrats rule everywhere unless something stronger overrides them.

Second, I’ll forever ask myself what went wrong with me, that it took me until metaphorical 1939 to acknowledge the scale of an unfolding catastrophe (on more than a purely intellectual level)—even while others were trying to tell me way back in metaphorical 1933. Even so, better metaphorical 1939 than metaphorical 1946.

**Without further ado, Steve’s essay:**

The most expensive meal I ever ate was in San Francisco at a restaurant called Eight Tables. As the name implies, the restaurant has only eight tables. The meal cost $1,000 and featured 12 courses, prepared by award-winning chefs.

The most expensive meal a person ever ate was in late 2019, in China, and consisted of under-cooked bat meat. It cost trillions of dollars. The person who ate it, possibly a peasant, changed the course of the 21st century. The bat he ate contained a virus, and the virus threatened to spread from this man to the rest of humanity.

I’m making up some details, of course. Maybe the man wasn’t a peasant. Or he could have been a woman. Or the bat could have been a pangolin. Or maybe, through a lucky accident (the guy was a loner, perhaps), the virus might never have spread. That could have happened, but it didn’t. Or maybe sometimes that does happen and we don’t know it. These are just accidents of history.

I’m writing this on March 9, 2020. The good news is that the virus, in its current form, doesn’t kill children. I am so thankful for that. The bad news is that the virus does kill adults. The virus is like a grim reaper, culling the sick, the debilitated, and the elderly from the population. It attacks the pulmonary system. I heard a 25-year-old survivor describing how he became unable to control his breathing and could not fall asleep or he would die. Even for healthy young people, the prognosis is often poor.

There were Jews in Europe in the 1930s who sat around tables with the elders of their families and villages and debated whether to leave for America, or Palestine, or South America. Most of them, including my grandmother’s family, didn’t leave, and were largely exterminated. The virus of the time was Nazism, and it too attacked the pulmonary systems of the old and the debilitated, in that case with poisonous gasses.

When you grow up as I did, you are taught to have a paranoia in the back of your mind that there is a major disaster about to happen. That a holocaust, or something of that magnitude, might occur in your lifetime. And so you are never complacent. For your whole life, you’re looking and waiting for a history changing event. You try to ensure that you are willing to follow your thoughts to their logical conclusion and take the necessary actions as a result, unlike many of the Jews of 1930s Europe, who refused to confront the obstacle in front of them until it was too late, and unlike many politicians and world leaders today, who are doing the same.

And the conclusion we must now confront is clear. We are watching a once-in-a-century event unfold. Coronavirus–its mutations, its spawn–will change the course of human history. It will overwhelm our defense system and may kill millions. It may continue to mutate and kill millions more. We will develop painful social measures to slow its spread. We will produce vaccines and better treatment protocols. Some of this will help, but none of this will work perfectly. What will happen to society as this unfolds?

My favorite biblical verse comes from Ecclesiastes: To everything there is a season, and a time to every purpose under the heaven. A time to be born, and a time to die. A time to plant and a time to pluck that which is planted. And so on.

The season has changed, and the seven years of famine have begun.

I had a new paper up last Friday with Michèle Levi and Andrew McLeod, on a topic I hadn’t worked on before: colliding black holes.

I am an “amplitudeologist”. I work on particle physics calculations, computing “scattering amplitudes” to find the probability that fundamental particles bounce off each other. This sounds like the farthest thing possible from black holes. Nevertheless, the two are tightly linked, through the magic of something called Effective Field Theory.

Effective Field Theory is a kind of “zoom knob” for particle physics. You “zoom out” to some chosen scale, and write down a theory that describes physics at that scale. Your theory won’t be a complete description: you’re ignoring everything that’s “too small to see”. It will, however, be an *effective* description: one that, at the scale you’re interested in, is effectively true.

Particle physicists usually use Effective Field Theory to go between different theories of particle physics, to zoom out from strings to quarks to protons and neutrons. But you can zoom out even further, all the way out to astronomical distances. Zoom out far enough, and even something as massive as a black hole looks like just another particle.

In this picture, the force of gravity between black holes looks like particles (specifically, *gravitons*) going back and forth. With this picture, physicists can calculate what happens when two black holes collide with each other, making predictions that can be checked with new gravitational wave telescopes like LIGO.

Researchers have pushed this technique quite far. As the calculations get more and more precise (more and more “loops”), they have gotten more and more challenging. This is particularly true when the black holes are spinning, an extra wrinkle in the calculation that adds a surprising amount of complexity.

That’s where I came in. I can’t compete with the experts on black holes, but I certainly know a thing or two about complicated particle physics calculations. Amplitudeologists, like Andrew McLeod and me, have a grab-bag of tricks that make these kinds of calculations a lot easier. With Michèle Levi’s expertise working with spinning black holes in Effective Field Theory, we were able to combine our knowledge to push beyond the state of the art, to a new level of precision.

This project has been quite exciting for me, for a number of reasons. For one, it’s my first time working with gravitons: despite this blog’s name, I’d never published a paper on gravity before. For another, as my brother quipped when he heard about it, this is by far the most “applied” paper I’ve ever written. I mostly work with a theory called N=4 super Yang-Mills, a toy model we use to develop new techniques. This paper isn’t a toy model: the calculation we did should describe black holes out there in the sky, in the real world. There’s a decent chance someone will use this calculation to compare with actual data, from LIGO or a future telescope. That, in particular, is an absurdly exciting prospect.

Because this was such an applied calculation, it was an opportunity to explore the more applied part of my own field. We ended up using well-known techniques from that corner, but I look forward to doing something more inventive in future.

After the outbreak of the COVID-19 virus in China, Italy has found itself in the middle of the most acute health emergency it has experienced in living memory. And what's worse, other countries are sadly following it. As I write this piece, over 15,000 Italians have tested positive for the virus, and over 1,000 have already died. Based on very convincing analyses of data from China, we know that the real number of cases is higher by at least a factor of 20, if not 100. In these conditions, individuals have to protect themselves and help reduce the spread of the virus by all possible means - most importantly, by staying home and cutting all social contacts.

Muon g-2: lattice salad

A couple of weeks ago, a new lattice QCD calculation by a group known as BMW tried to tip the BSM community into depression by reporting that they could resolve the tension between theory and experiment. Their new result had a tiny uncertainty (of 0.6%), much smaller than any previous lattice computation.

As I've mentioned here several times, the anomalous magnetic moment of the muon is one of the most precisely measured quantities in the world, and its theoretical prediction has for several years been believed to differ slightly from the measured value. Since the theory was thought to be well understood and rather "clean", with uncertainty similar to the experimental one (yet with the two values substantially different), it has long been hoped that the Standard Model's cracks would be revealed there. Two new experiments should tell us more about this, including an experiment at Fermilab that should report data this year with potentially four times smaller experimental uncertainty than the previous result; an elementary description of the physics and the experiment is given on their website.

However, there were always two slightly murky parts of the theory calculation, where low-energy QCD rears its head appearing in loops. A nice summary of this is found in e.g. this talk from slide 33 onwards, and I will shamelessly steal some figures from there. These QCD loops appear as

Hadronic light-by-light, and hadronic vacuum polarisation (HVP) diagrams.

The calculation of both of these is tricky, but the light-by-light contribution is believed to be under control and small. The disagreement is in the HVP part. This corresponds to mesons appearing in the loop, but there is a clever trick, called the R-ratio approach, in which experimental cross-section data can be used together with the optical theorem to give a very precise prediction. Many groups have calculated this with results that agree very well.

On the other hand, it should be possible to calculate this HVP part by simulating QCD on the lattice. Previous lattice calculations disagreed somewhat, but also estimated their uncertainties to be large, comparable to the difference between their calculations and the experimental value or the value from the R-ratio. The new calculation claims that, with their new lattice QCD technique, they find that the HVP contribution should be large enough to remove the disagreement with experiment, with a tiny uncertainty. The paper is organised into a short letter of four pages, and then 79 pages of supplementary material. However, they conclude the letter with *"Obviously, our findings should be confirmed –or refuted– by other collaborations using other discretizations of QCD."*

Clearly I am not qualified to comment on their uncertainty estimate, but if the new result is true then, unless there has been an amazing statistical fluke across all groups performing the R-ratio calculation, *someone* has been underestimating their uncertainties (i.e. they have missed something big). So it is something of a relief to see an even newer paper attempting to reconcile the lattice and R-ratio HVP calculations, from the point of view of lattice QCD experts. The key phrase in the abstract is *"Our results may indicate a difficulty related to estimating uncertainties of the continuum extrapolation that deserves further attention."* They perform a calculation similar to BMW but with a different error estimate; they give a handy comparison of the different calculations in this plot:

*Update 12/03/20:* A new paper yesterday tries to shed some new light on the situation: apparently it has been known since 2008 that an HVP explanation of the muon anomalous magnetic moment discrepancy was unlikely, because it leads to other quantities being messed up. In particular, the same diagrams that appear above also appear in the determination of the electroweak gauge coupling, which is precisely measured at low energies from Thomson scattering, and then run up to the Z mass: $$ \alpha^{-1} (M_Z) = \alpha^{-1} (0) \bigg[ 1 - ... - \Delta \alpha^{(5)}_{\mathrm{HVP}} (M_Z) + ... \bigg] $$ where the ellipses denote other contributions. Adding the BMW lattice contribution there at low energies and extrapolating up, the new paper finds that the fit is spoiled for the W-boson mass and also for an observable constructed from the ratio of axial and vector couplings to the Z-boson: $$ A_{\ell} = \frac{2 \mathrm{Re}[ g_V^{\ell}/g_A^{\ell}]}{1 + (\mathrm{Re} [g_V^{\ell}/g_A^{\ell}])^2}$$ The key plot for this observable is:

In this blog’s now 15-year history, at Waterloo and then MIT and now UT Austin, I’ve tried to make it clear that I blog *always* as Scott, never as Dr. Aaronson of Such-and-Such Institution. (God knows I’ve written a few things that a prudent dean might prefer that I hadn’t—though if I couldn’t honestly say that, in what sense would I even enjoy “academic freedom”?) Today, though, for only about the second time, I’m also writing as a professor motivated by a duty of care toward his students.

A week ago, most of my grad students were in the Bay Area for a workshop; they then returned and spent a week hanging around the CS building like normal. Yesterday I learned that at least one of those students developed symptoms consistent with covid19. Of course, it’s much more likely to be a boring cold or flu—but still, in any sane regime, just to be certain, such a person would **promptly get tested**.

After quarantining himself, my student called the “24/7 covid19 hotline” listed in an email from the university’s president, but found no one answering the phone over the weekend. Yesterday he finally got through—only to be told, flatly, that he couldn’t be tested due to insufficient capacity. When I heard this, I asked my department chair and dean to look into the matter, and received confirmation that yeah, it sucks, but this is the situation.

If it’s true that, as I’ve read, the same story is currently playing itself out all over the country, then this presumably isn’t the fault of anyone in UT’s health service or the city of Austin. Rather, as they say in the movies, it goes all the way to the top, to the CDC director and ultimately the president—or rather, to the festering wound that now sits where the top used to be.

Speaking of movies, over the weekend Dana and I watched Contagion, as apparently many people are now doing. I confess that I’d missed it when it came out in 2011. I think it’s a cinematic masterpiece. It freely violates many of the rules of movie narrative: characters are neither done in by their own hubris, nor saved by their faith or by being A-list stars. But *Contagion* is also more than a glorified public service announcement about the importance of washing your hands. It wants to show you the reality of the human world of its characters, and *also* the reality of a virus, and how the two realities affect each other despite obeying utterly different logic. It will show a scene that’s important to the characters for human reasons, and then it will show you the same scene again, except this time making you focus on whose hand touched which surface in which order.

But for all its excellence and now-obvious prescience, there are two respects in which Contagion failed to predict the reality of 2020. The first is just a lucky throw of the RNA dice: namely, that the real coronavirus is perhaps an order of magnitude less fatal than the movie virus, and for some unknown reason it spares children. But the second difference is terrifying. All the public health authorities in the movie are ultra-empowered and competent. They do badass things like injecting themselves with experimental vaccines. If they stumble, it’s only in deeply understandable ways that any of us might (e.g., warning their own loved ones to evacuate a city before warning the public).

In other words, when the scriptwriters, writing their disaster movie, tried to imagine the worst, they failed to imagine a US government that would essentially abandon the public, by

(1) botching a simple test that dozens of other countries performed without issue,

(2) preventing anyone else from performing their own tests, and then

(3) turning around and using the lack of positive test results to justify its own inaction.

They failed to imagine a CDC that might as well not exist for all it would do in its hour of need: one that didn’t even bother to update its website on weekends, and stopped publishing data once the data became too embarrassing. The scriptwriters *did* imagine a troll gleefully spreading lies about the virus online, endangering anyone who listened to him. They failed to imagine a universe where that troll was the president.

“I mean, don’t get me wrong,” they told me. “Trump is a racist con artist, a demagogue, the precise thing that Adams and Hamilton and Franklin tried to engineer our republic to avoid. Just, don’t get so *depressed* about it all the time! Moaning about how we’re trapped in a freakishly horrible branch of the wavefunction, blah blah. I mean look on the bright side! What an incredible run of luck we’ve had, that we elected a president with the mental horizons of a sadistic toddler, and yet *in three years he hasn’t caused even one apocalypse*. You’re alive and healthy, your loved ones are alive and healthy. It could be a lot worse!”

The above, I suspect, is a sentiment that will now forever date any writing containing it to January 2020 or earlier.

How do I prepare my research talks? I usually just sit down with a pencil, some paper, and a cup of something warm, and draw/map out the story. Each box is a beat of the narrative, and ends up corresponding to one or two slides (if I’m doing … Click to continue reading this post

The post Talk Prep appeared first on Asymptotia.

Just to follow up:

- The APS is partnering with the Virtual March Meeting, as well as collecting talks and slides and linking them to the online meeting program.
- There is going to be a Virtual Science Forum this afternoon (Eastern Standard Time, Friday, March 6) using zoom as a meeting platform, featuring what would have been March Meeting invited talks by Florian Marquardt, Eun-Ah Kim, and Steve Girvin.
- The APS is working on refunds. All told, the society is going to lose millions of dollars on this.
- I am very surprised that the webpage for the APS April Meeting does not, as of this writing, have anything on it at all about this issue. I've already passed on my strong suggestion that they at least put up a notice that says "We are closely monitoring the situation and will make a firm decision about the status of the meeting by [date]."
- ~~The ACS has a notice on their page about their national meeting scheduled for Philadelphia on March 22-26. I'm rather surprised that they are still going ahead.~~ **Update:** ACS has now cancelled their spring meeting.
- The MRS seems to have nothing up yet regarding their April meeting.

People tend to have poor intuition about exponential functions. I'm not an alarmist, but it's important to consider: total US cases of covid-19 today are at the level Wuhan's were seven weeks ago. Hopefully the measures people are taking (social distancing, hand washing, dropping non-critical travel), plus seasonality of illness, plus lower population density, plus fewer smokers will help keep things comparatively manageable. The US government realistically will not take some of the steps taken by the Chinese government (e.g., wholesale travel restrictions, military-enforced quarantines).
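To make the exponential intuition concrete, here is a small sketch of what constant doubling does over seven weeks. The five-day doubling time and 100-case starting point are assumptions chosen for illustration, not figures from this post:

```python
# Illustrative only: a fixed doubling time turns a small case
# count into a large one over seven weeks. The 5-day doubling
# time and 100 starting cases are assumptions for illustration.

def cases_after(initial_cases, days, doubling_time_days):
    """Exponential growth: N(t) = N0 * 2**(t / T_double)."""
    return initial_cases * 2 ** (days / doubling_time_days)

# Seven weeks (49 days) of unchecked growth from 100 cases:
projection = cases_after(100, 49, 5)
print(f"{projection:,.0f} cases")  # ~ 89,000
```

Nearly three orders of magnitude in seven weeks, which is why "cases today are where Wuhan's were seven weeks ago" is not a reassuring statement.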

In response to the highly-acclaimed, largely accurate, yet somewhat opaque and authoritarian *Quantum Bullshit Detector* Twitter account (@BullshitQuantum),

I am pleased to introduce the democratised equivalent, the *Democratic Quantum Bullshit Detector* (@QuantumDemocrat), where all bullshit is determined by you, the people, via Twitter polls, expressing your First Amendment quantum rights on what is bullshit and what is not.

Interestingly, the first result (voting still open at the time of writing) indicates that three quarters of the community has faith in the original authoritarian *Quantum Bullshit Detector*, deeming it 'Not Bullshit'.

Happy bullshitting! And be thankful that you live in a world where we are all free to call bullshit!

Nb: the editor fully acknowledges the meaninglessness of Twitter polls, as was recently confirmed by a Twitter poll. But that is not to say we can't have fun, provide a platform to hold one another to account, and the polls and comments can't trigger useful dialogue, which I very much encourage.

Many people have suggested coating handles, doorknobs and so forth with virus-killing copper tape. It’s a shame that this isn’t being tried on a wider scale. In the meantime, though, here’s a related but different idea that I had last night.

Imagine we could coat every doorknob, every light switch, every railing, every other surface that people might touch in public buildings, with some long-lasting disgusting, sticky, slimy substance. For a variety of reasons, one probably wouldn’t use actual excrement, although it wouldn’t hurt if the substance *looked like* that. Or it could be a sickly neon green or red, to make it impossible to conceal when you’d gotten the substance on your hands.

What would be the result? Of course, people would avoid touching these surfaces. If they had to, they’d do so with a napkin or glove whenever possible. If they had to touch them bare-handedly, they’d rush to wash their hands with soap as soon as possible afterwards. Certainly they wouldn’t touch their faces before having washed their hands.

In short, they’d show exactly the behaviors that experts agree are among the most helpful, if our goal is to slow the spread of the coronavirus. In effect, we’d be plugging an unfortunate gap in our evolutionary programming—namely, that the surfaces where viruses can thrive aren’t intuitively disgusting to us, as (say) vomit or putrid meat are—by *making* those surfaces disgusting, as they ought to be in the middle of a pandemic.

Note that, even if it *somehow* turns out to be infeasible to coat all the touchable surfaces in public buildings with disgusting goo, you might still derive great personal benefit from *imagining* them so covered. If you manage to pull that off, it will yield just the right heuristic for when and how often you should now be washing your hands (and avoiding touching your face), with no need for additional conscious reflection.

Mostly, having the above thoughts made me grateful for my friend Robin Hanson. For as long as Robin is around, tweeting and blogging from his unique corner of mindspace, no one will ever be able to say that *my* ideas for how to control the coronavirus were the world’s weirdest or most politically tone-deaf.

Science communication is a gradual process. Anything we say is incomplete, prone to cause misunderstanding. Luckily, we can keep talking, give a new explanation that corrects those misunderstandings. This of course will lead to new misunderstandings. We then explain again, and so on. It sounds fruitless, but in practice our audience nevertheless gets closer and closer to the truth.

Last week, I tried to explain physicists’ notion of a fundamental particle. In particular, I wanted to explain what these particles *aren’t*: tiny, indestructible spheres, like Democritus imagined. Instead, I emphasized the idea of *fields*, interacting and exchanging energy, with particles as just the tip of the field iceberg.

I’ve given this kind of explanation before. And when I do, there are two things people often misunderstand. These correspond to two topics which use very similar language, but talk about different things. So this week, I thought I’d get ahead of the game and correct those misunderstandings.

The first misunderstanding: **None of that post was quantum**.

If you’ve heard physicists explain quantum mechanics, you’ve probably heard about wave-particle duality. Things we thought were waves, like light, also behave like particles; things we thought were particles, like electrons, also behave like waves.

If that’s on your mind, and you see me say particles don’t exist, maybe you think I mean waves exist instead. Maybe when I say “fields”, you think I’m talking about waves. Maybe you think I’m choosing one side of the duality, saying that waves exist and particles don’t.

To be 100% clear: **I am not saying that**.

Particles and waves, in quantum physics, are both manifestations of fields. Is your field just at one specific point? Then it’s a particle. Is it spread out, with a fixed wavelength and frequency? Then it’s a wave. These are the two concepts connected by wave-particle duality, where the same object can behave differently depending on what you measure. And both of them, to be clear, come from fields. Neither is the kind of thing Democritus imagined.
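To make this slightly more concrete, here is the standard textbook mode expansion of a free scalar field (this is generic QFT notation, not a formula from the post being discussed):

```latex
% The same field phi underlies both "particle" and "wave" behavior,
% depending on which state you build from it.
\[
  \phi(x) \;=\; \int \frac{d^3 p}{(2\pi)^3}\,\frac{1}{\sqrt{2E_p}}
  \left( a_p\, e^{-i p \cdot x} \;+\; a_p^\dagger\, e^{+i p \cdot x} \right)
\]
% A single excitation a_p^\dagger |0> has definite momentum: a plane wave.
% A superposition of such excitations, peaked around one position, is what
% we informally call a particle. Both are states of the one field phi.
```

Either way, the fundamental object is the field; "wave" and "particle" label two families of its states.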

The second misunderstanding: **This isn’t about on-shell vs. off-shell.**

Some of you have seen some more “advanced” science popularization. In particular, you might have listened to Nima Arkani-Hamed, of amplituhedron fame, talk about his perspective on particle physics. Nima thinks we need to reformulate particle physics, as much as possible, “on-shell”. “On-shell” means that particles obey their equations of motion; ordinary quantum calculations involve “off-shell” particles that violate those equations.
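For readers who haven't met the jargon: "on-shell" refers to the mass-shell condition, which in standard textbook notation (units with $c = 1$) reads:

```latex
\[
  p^\mu p_\mu \;=\; E^2 - \vec{p}^{\,2} \;=\; m^2
\]
% External, physical particles satisfy this relation between energy,
% momentum, and mass. Internal ("virtual") lines in Feynman diagrams
% are integrated over all momenta and need not satisfy it: they are
% the "off-shell" contributions.
```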

To again be clear: **I’m not arguing with Nima here.**

Nima (and other people in our field) will sometimes talk about on-shell vs off-shell as if it were about particles vs. fields. Normal physicists will write down a general field and let it be off-shell; we try to do calculations with particles that are on-shell. But once again, on-shell doesn’t mean Democritus-style. We still don’t know what a fully on-shell picture of physics will look like. Chances are it won’t look like the picture of sloshing, omnipresent fields we started with, at least not exactly. But it won’t bring back indivisible, unchangeable atoms. Those are gone, and we have no reason to bring them back.

For those in the Houston area:

Created by SMBC's Zach Weinersmith, BAHFest is a celebration of well-argued and thoroughly researched but *completely incorrect* scientific theory. Our brave speakers present their bad theories in front of a live audience and a panel of judges with real science credentials, who together determine who takes home the coveted BAHFest trophy. And eternal glory, of course. If you'd like to learn more about the event, you can check out these articles from the Wall Street Journal and NPR's Science Friday.

Our keynote for this year's event is the hilarious Phil Plait (AKA the Bad Astronomer)! Phil will be doing a book signing of his book "Death from the Skies" before and after the show.

The event is brought to you by BAHFest, and the graduate students in Rice University's Department of BioSciences. Click here for more information about the show, including how to purchase tickets. We hope to see you there!

[Full disclosure: I am one of the judges at this year's event.]

A long new article that appeared on the arXiv preprint repository last week is sending ripples through the world of particle physics phenomenology, as its main result, if proven correct, will completely wipe off the table the one long-standing inconsistency of the Standard Model of particle physics: the discrepancy between theoretical and experimental estimates of the so-called **anomalous magnetic moment of the muon**.