Planet Musings

September 27, 2020

John Baez: Banning Lead in Wetlands

A European Union commission has voted to ban the use of lead ammunition near wetlands and waterways! The proposal now needs to be approved by the European Parliament and Council. They are expected to approve the ban. If so, it will go into effect in 2022. The same commission, called REACH, may debate a complete ban on lead ammunition and fishing weights later this year.

Why does this matter? The European Chemicals Agency has estimated that as many as 1.5 million aquatic birds die annually from lead poisoning because they swallow some of the 5000 tonnes of lead shot that land in European wetlands each year. Water birds are more likely to be poisoned by lead because they mistake small lead shot pellets for stones they deliberately ingest to help grind their food.

In fact, about 20,000 tonnes of lead shot is fired each year in the EU, and 60,000 in the US. Eating game shot with lead is not good for you—but also, even low levels of lead in the environment can cause health damage and negative changes in behavior.

How much lead is too much? This is a tricky question, so I’ll just give some data. In the U.S., the geometric mean of the blood lead level among adults was 1.2 micrograms per deciliter (μg/dL) in 2009–2010. Blood lead concentrations in poisoning victims range from 30–80 µg/dL in children exposed to lead paint in older houses, 80–100 µg/dL in people working with pottery glazes, 90–140 µg/dL in individuals consuming contaminated herbal medicines, 110–140 µg/dL in indoor shooting range instructors, and as high as 330 µg/dL in those drinking fruit juices from glazed earthenware containers!

The amount of lead that US children are exposed to has been dropping, thanks to improved regulations:

However, what seem like low levels now may be high in the grand scheme of things. The amount of lead in the environment has increased by a factor of about 300 in the Greenland ice sheet during the past 3000 years. Most of this is due to industrial emissions:

• Amy Ng and Clair Patterson, Natural concentrations of lead in ancient Arctic and Antarctic ice, Geochimica et Cosmochimica Acta 45 (1981), 2109–2121.

September 26, 2020

David Hogg: writing linear algebra operators and implementations

I spent my research time this morning typing math into a document co-authored by Ana Bonaca (Harvard) about a likelihood function for asteroseismology. It is tedious and repetitive to type the math out in LaTeX, and I feel like I have done nearly the same typing over and over again. And, adding to the feeling of futility: the linear-algebra objects we define in the paper are more conceptual than operational. In many cases we don't actually construct these objects; we avoid constructing things that will make the execution either slow or imprecise. So in addition to typing these, I find myself also typing lots of non-trivial implementation advice.
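To make that kind of implementation advice concrete, here is a minimal Python sketch (not code from the paper; the function and variable names are just illustrative) of the generic trick at issue: evaluating a Gaussian log-likelihood without ever constructing the inverse covariance matrix, using a Cholesky factorization and a solve instead.

    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    def gaussian_loglike(y, mean, C):
        """Gaussian log-likelihood, computed without ever forming C^{-1} explicitly."""
        r = y - mean
        cf = cho_factor(C, lower=True)   # Cholesky factorization of the covariance C
        alpha = cho_solve(cf, r)         # solves C alpha = r
        logdet = 2.0 * np.sum(np.log(np.diag(cf[0])))
        return -0.5 * (r @ alpha + logdet + len(y) * np.log(2.0 * np.pi))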

September 25, 2020

Matt von Hippel: Which Things Exist in Quantum Field Theory

If you ever think metaphysics is easy, learn a little quantum field theory.

Someone asked me recently about virtual particles. When talking to the public, physicists sometimes explain the behavior of quantum fields with what they call “virtual particles”. They’ll describe forces coming from virtual particles going back and forth, or a bubbling sea of virtual particles and anti-particles popping out of empty space.

The thing is, this is a metaphor. What’s more, it’s a metaphor for an approximation. As physicists, when we draw diagrams with more and more virtual particles, we’re trying to use something we know how to calculate with (particles) to understand something tougher to handle (interacting quantum fields). Virtual particles, at least as you’re probably picturing them, don’t really exist.

I don’t really blame physicists for talking like that, though. Virtual particles are a metaphor, sure, a way to talk about a particular calculation. But so is basically anything we can say about quantum field theory. In quantum field theory, it’s pretty tough to say which things “really exist”.

I’ll start with an example, neutrino oscillation.

You might have heard that there are three types of neutrinos, corresponding to the three “generations” of the Standard Model: electron-neutrinos, muon-neutrinos, and tau-neutrinos. Each is produced in particular kinds of reactions: electron-neutrinos, for example, get produced by beta-plus decay, when a proton turns into a neutron, an anti-electron, and an electron-neutrino.

Leave these neutrinos alone though, and something strange happens. Detect what you expect to be an electron-neutrino, and it might have changed into a muon-neutrino or a tau-neutrino. The neutrino oscillated.

Why does this happen?

One way to explain it is to say that electron-neutrinos, muon-neutrinos, and tau-neutrinos don’t “really exist”. Instead, what really exists are neutrinos with specific masses. These don’t have catchy names, so let’s just call them neutrino-one, neutrino-two, and neutrino-three. What we think of as electron-neutrinos, muon-neutrinos, and tau-neutrinos are each some mix (a quantum superposition) of these “really existing” neutrinos, specifically the mixes that interact nicely with electrons, muons, and tau leptons respectively. When you let them travel, it’s these neutrinos that do the traveling, and due to quantum effects that I’m not explaining here you end up with a different mix than you started with.
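Here is a toy two-flavor version of that mixing in a few lines of Python, with a made-up mixing angle and phase rather than the measured three-neutrino parameters; it just shows how a pure “electron-type” superposition of mass states evolves into a different mix.

    import numpy as np

    theta = 0.6                                        # made-up mixing angle (radians)
    U = np.array([[np.cos(theta),  np.sin(theta)],     # rows: flavor states,
                  [-np.sin(theta), np.cos(theta)]])    # columns: mass states

    nu_e = U[0]                          # "electron-type" mix of the two mass states

    # While travelling, each mass state picks up its own quantum phase.
    delta = 1.3                          # made-up phase difference
    evolved = np.exp(-1j * np.array([0.0, delta])) * nu_e

    # Probability of detecting each flavor: no longer (1, 0) -- the neutrino oscillated.
    print(np.abs(U.conj() @ evolved) ** 2)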

This probably seems like a perfectly reasonable explanation. But it shouldn’t. Because if you take one of these mass-neutrinos, and interact with an electron, or a muon, or a tau, then suddenly it behaves like a mix of the old electron-neutrinos, muon-neutrinos, and tau-neutrinos.

That’s because both explanations are trying to chop the world up in a way that can’t be done consistently. There aren’t electron-neutrinos, muon-neutrinos, and tau-neutrinos, and there aren’t neutrino-ones, neutrino-twos, and neutrino-threes. There’s a mathematical object (a vector space) that can look like either.

Whether you’re comfortable with that depends on whether you think of mathematical objects as “things that exist”. If you aren’t, you’re going to have trouble thinking about the quantum world. Maybe you want to take a step back, and say that at least “fields” should exist. But that still won’t do: we can redefine fields, add them together or even use more complicated functions, and still get the same physics. The kinds of things that exist can’t be like this. Instead you end up invoking another kind of mathematical object, equivalence classes.

If you want to be totally rigorous, you have to go a step further. You end up thinking of physics in a very bare-bones way, as the set of all observations you could perform. Instead of describing the world in terms of “these things” or “those things”, the world is a black box, and all you’re doing is finding patterns in that black box.

Is there a way around this? Maybe. But it requires thought, and serious philosophy. It’s not intuitive, it’s not easy, and it doesn’t lend itself well to 3d animations in documentaries. So in practice, whenever anyone tells you about something in physics, you can be pretty sure it’s a metaphor. Nice describable, non-mathematical things typically don’t exist.

Doug Natelson: The Barnett Effect and cool measurement technique

 I've written before about the Einstein-deHaas effect - supposedly Einstein's only experimental result (see here, too) - a fantastic proof that spin really is angular momentum.  In that experiment, a magnetic field is flipped, causing the magnetization of a ferromagnet to reorient itself to align with the new field direction.  While Einstein and deHaas thought about amperean current loops (the idea that magnetization came from microscopic circulating currents that we would now call orbital magnetism), we now know that magnetization in many materials comes from the spin of the electrons.  When those spins reorient, angular momentum has to be conserved somehow, so it is transferred to/from the lattice, resulting in a mechanical torque that can be measured.

Less well-known is the complement, the Barnett effect.  Take a ferromagnetic material and rotate it. The mechanical rotational angular momentum gets transferred (via rather complicated physics, it turns out) at some rate to the spins of the electrons, causing the material to develop a magnetization along the rotational axis.  This seems amazing to me now, knowing about spin.  It must've really seemed nearly miraculous back in 1915 when it was measured by Barnett.

So, how did Barnett actually measure this, with the technology available in 1915?  Here's the basic diagram of the scheme from the original paper:


There are two rods that can each be rotated about its long axis.  The rods pass through counterwound coils, so that if there is a differential change in magnetic flux through the two coils, that generates a current that flows through the fluxmeter.  The Grassot fluxmeter is a fancy galvanometer - basically a coil suspended on a torsion fiber between poles of a magnet.  Current through that coil leads to a torque on the fiber, which is detected in this case by deflection of a beam of light bounced off a mirror mounted on the fiber.  The paper describes the setup in great detail, and getting this to work clearly involved meticulous experimental technique and care.  It's impressive how people were able to do this kind of work without all the modern electronics that we take for granted.  Respect.
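For a rough sense of scale (a back-of-envelope sketch of mine, not numbers from the paper): the Barnett effect makes a body rotating at angular frequency ω behave as if it sat in an effective field of about ω/γ, with γ the gyromagnetic ratio, so even a rod spun at an assumed 100 revolutions per second develops only a nanotesla-scale field.

    import numpy as np

    gamma_e = 1.76e11          # electron gyromagnetic ratio, rad s^-1 T^-1
    omega = 2 * np.pi * 100.0  # assumed rotation rate: 100 revolutions per second

    B_eff = omega / gamma_e    # effective Barnett field, roughly omega / gamma
    print(f"Effective Barnett field: {B_eff:.1e} T")   # a few nanotesla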

September 24, 2020

John Baez: Enayat on Nonstandard Numbers

Michael Weiss and I have been carrying on a dialog on nonstandard models of arithmetic, and after a long break we’re continuing, here:

• Michael Weiss and John Baez, Non-standard models of arithmetic (Part 18).

In this part we reach a goal we’ve been heading toward for a long time! We’ve been reading this paper:

• Ali Enayat, Standard models of arithmetic.

and we’ve finally gone through the main theorems and explained what they say. We’ll talk about the proofs later.

The simplest one is this:

• Every ZF-standard model of PA that is not V-standard is recursively saturated.

What does this theorem mean, roughly? Let me be very sketchy here, to keep things simple and give just a flavor of what’s going on.

Peano arithmetic is a well-known axiomatic theory of the natural numbers. People study different models of Peano arithmetic in some universe of sets, say U. If we fix our universe U there is typically one ‘standard’ model of Peano arithmetic, built using the set

\omega = \{0,\{0\},\{0,\{0\}\}, \dots \}

or in other words

\omega = \{0,1,2, \dots \}
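In case a concrete rendering helps, here is a tiny Python sketch (purely illustrative, nothing specific to Enayat’s paper) of the von Neumann construction displayed above, where each natural number is the set of all smaller ones:

    def von_neumann(n):
        """The first n+1 von Neumann naturals: 0 = {}, and k+1 = k ∪ {k}."""
        k = frozenset()
        naturals = [k]
        for _ in range(n):
            k = k | frozenset([k])   # successor of k
            naturals.append(k)
        return naturals

    for k in von_neumann(3):
        print(k)   # frozenset(), frozenset({frozenset()}), ...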

All models of Peano arithmetic not isomorphic to this one are called ‘nonstandard’. You can show that any model of Peano arithmetic contains an isomorphic copy of the standard model as an initial segment. This uniquely characterizes the standard model.

But different axioms for set theory give different concepts of U, the universe of sets. So the uniqueness of the standard model of Peano arithmetic is relative to that choice!

Let’s fix a choice of axioms for set theory: the Zermelo–Fraenkel or ‘ZF’ axioms are a popular choice. For the sake of discussion I’ll assume these axioms are consistent. (If they turn out not to be, I’ll correct this post.)

We can say the universe U is just what ZF is talking about, and only theorems of ZF count as things we know about U. Or, we can take a different attitude. After all, there are a couple of senses in which the ZF axioms don’t completely pin down the universe of sets.

First, there are statements in set theory that are neither provable nor disprovable from the ZF axioms. For any of these statements we’re free to assume it holds, or it doesn’t hold. We can add either it or its negation to the ZF axioms, and still get a consistent theory.

Second, a closely related sense in which the ZF axioms don’t uniquely pin down U is this: there are many different models of the ZF axioms.

Here I’m talking about models in some universe of sets, say V. This may seem circular! But it’s not really: first we choose some way to deal with set theory, and then we study models of the ZF axioms in this context. It’s a useful thing to do.

So fix this picture in mind. We start with a universe of sets V. Then we look at different models of ZF in V, each of which gives a universe U. U is sitting inside V, but from inside it looks like ‘the universe of all sets’.

Now, for each of these universes U we can study models of Peano arithmetic in U. And as I already explained, inside each U there will be a standard model of Peano arithmetic. But of course this depends on U.

So, we get lots of standard models of Peano arithmetic, one for each choice of U. Enayat calls these ZF-standard models of Peano arithmetic.

But there is one very special model of ZF in V, namely V itself. In other words, one choice of U is to take U = V. There’s a standard model of Peano arithmetic in V itself. This is an example of a ZF-standard model, but this is a very special one. Let’s call any model of Peano arithmetic isomorphic to this one V-standard.

Enayat’s theorem is about ZF-standard models of Peano arithmetic that aren’t V-standard. He shows that any ZF-standard model that’s not V-standard is ‘recursively saturated’.

What does it mean for a model M of Peano arithmetic to be ‘recursively saturated’? The idea is very roughly that ‘anything that can happen in any model, happens in M’.

Let me be a bit more precise. It means that if you write any computer program that prints out an infinite list of properties of an n-tuple of natural numbers, and there’s some model of Peano arithmetic that has an n-tuple with all these properties, then there’s an n-tuple of natural numbers in the model M with all these properties.

For example, there are models of Peano arithmetic that have a number x such that

0 < x
1 < x
2 < x
3 < x

and so on, ad infinitum. These are the nonstandard models. So a recursively saturated model must have such a number x. So it must be nonstandard.
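To make “computer program that prints out an infinite list of properties” concrete, here is a minimal Python sketch of such a program for exactly this example (just an illustration):

    from itertools import count

    def nonstandard_type():
        """Yield, one after another, the properties 0 < x, 1 < x, 2 < x, ...
        A recursively saturated model must contain an x satisfying all of them."""
        for n in count():
            yield f"{n} < x"

    properties = nonstandard_type()
    for _ in range(5):
        print(next(properties))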

In short, Enayat has found that ZF-standard models of Peano arithmetic in the universe V come in two drastically different kinds. They are either ‘as standard as possible’, namely V-standard. Or, they are ‘extremely rich’, containing n-tuples with all possible lists of consistent properties that you can print out with a computer program: they are recursively saturated.

I am probably almost as confused about this as you are. But Michael and I will dig into this more in our series of posts.

In fact we’ve been at this a while already. Here is a description of the whole series of posts so far:

Posts 1–10 are available as pdf files, formatted for small and medium screens.

Non-standard Models of Arithmetic 1: John kicks off the series by asking about recursively saturated models, and Michael says a bit about n-types and the overspill lemma. He also mentions the arithmetical hierarchy.

Non-standard Models of Arithmetic 2: John mentions some references, and sets a goal: to understand this paper:

• Ali Enayat, Standard models of arithmetic.

John describes his dream: to show that “the” standard model is a much more nebulous notion than many seem to believe. He says a bit about the overspill lemma, and Joel David Hamkins gives a quick overview of saturation.

Non-standard Models of Arithmetic 3: A few remarks on the ultrafinitists Alexander Yessenin-Volpin and Edward Nelson; also Michael’s grad-school friend who used to argue that 7 might be nonstandard.

Non-standard Models of Arithmetic 4: Some back-and-forth on Enayat’s term “standard model” (or “ZF-standard model”) for the omega of a model of ZF. Philosophy starts to rear its head.

Non-standard Models of Arithmetic 5: Hamlet and Polonius talk math, and Michael holds forth on his philosophies of mathematics.

Non-standard Models of Arithmetic 6: John weighs in with why he finds “the standard model of Peano arithmetic” a problematic phrase. The Busy Beaver function is mentioned.

Non-standard Models of Arithmetic 7: We start on Enayat’s paper in earnest. Some throat-clearing about Axiom SM, standard models of ZF, inaccessible cardinals, and absoluteness. “As above, so below”: how ZF makes its “gravitational field” felt in PA.

Non-standard Models of Arithmetic 8: A bit about the Paris-Harrington and Goodstein theorems. In preparation, the equivalence (of sorts) between PA and ZF¬∞. The universe V_ω of hereditarily finite sets and its correspondence with \mathbb{N}. A bit about Ramsey’s theorem (needed for Paris-Harrington). Finally, we touch on the different ways theories can be “equivalent”, thanks to a comment by Jeffrey Ketland.

Non-standard Models of Arithmetic 9: Michael sketches the proof of the Paris-Harrington theorem.

Non-Standard Models of Arithmetic 10: Ordinal analysis, the function growth hierarchies, and some fragments of PA. Some questions that neither of us knows how to answer.

Non-standard Models of Arithmetic 11: Back to Enayat’s paper: his definition of PA^T for a recursive extension T of ZF. This uses the translation of formulas of PA into formulas of ZF, \varphi\mapsto \varphi^\mathbb{N}. Craig’s trick and Rosser’s trick.

Non-standard Models of Arithmetic 12: The strength of PA^T for various T’s. PA^{ZF} is equivalent to PA^{ZFC+GCH}, but PA^{ZFI} is strictly stronger than PA^{ZF}. (ZFI = ZF + “there exists an inaccessible cardinal”.)

Non-standard Models of Arithmetic 13: Enayat’s “natural” axiomatization of PA^T, and his proof that this works. A digression into Tarski’s theorem on the undefinability of truth, and how to work around it. For example, while truth is not definable, we can define truth for statements with at most a fixed number of quantifiers.

Non-standard Models of Arithmetic 14: The previous post showed that PA^T implies Φ_T, where Φ_T is Enayat’s “natural” axiomatization of PA^T. Here we show the converse. We also interpret Φ_T as saying, “Trust T”.

Non-standard Models of Arithmetic 15: We start to look at Truth (aka Satisfaction). Tarski gave a definition of Truth, and showed that Truth is undefinable. Less enigmatically put, there is no formula True(x) in the language of Peano arithmetic (L(PA)) that holds exactly for the Gödel numbers of true sentences of Peano arithmetic. On the other hand, Truth for Peano arithmetic can be formalized in the language of set theory (L(ZF)), and there are other work-arounds. We give an analogy with the Cantor diagonal argument.

Non-standard Models of Arithmetic 16: We look at the nitty-gritty of True_d(x), the formula in L(PA) that expresses truth in PA for formulas with parse-tree depth at most d. We see how the quantifiers “bleed through”, and why this prevents us from combining the whole series of True_d(x)’s into a single formula True(x). We also look at the variant Sat_{Σ_n}(x,y).

Non-standard Models of Arithmetic 17: More about how True_d evades Tarski’s undefinability theorem. The difference between True_d and Sat_{Σ_n}, and how it doesn’t matter for us. How True_d captures Truth for models of arithmetic: PA ⊢ True_d(⌜φ⌝) ↔ φ, for any φ of parse-tree depth at most d. Sketch of why this holds.

Non-standard Models of Arithmetic 18: The heart of Enayat’s paper: characterizing countable nonstandard T-standard models of PA (Prop. 6, Thm. 7, Cor. 8). Refresher on types. Meaning of ‘recursively saturated’. Standard meaning of ‘nonstandard’; standard and nonstandard meanings of ‘standard’.

Non-standard Models of Arithmetic 19: We marvel a bit over Enayat’s Prop. 6, and especially Cor. 8. The triple-decker sandwich, aka three-layer cake: the omega of U, sitting inside U, sitting inside V. More about why the omegas of standard models of ZF are standard. Refresher on Φ_T. The smug confidence of a ZF-standard model.

Clifford Johnson: Early Career Musings

Because of a certain movie from earlier this Summer (which I have not yet got around to mentioning here on the blog), I’ve been doing a lot of interviews recently, so sorry in advance for my face showing up in all your media. And I know many will sneer because …


n-Category Café New Normed Division Algebra Found!

Hurwitz’s theorem says that there are only 4 normed division algebras over the real numbers, up to isomorphism: the real numbers, the complex numbers, the quaternions, and the octonions. The proof was published in 1923. It’s a famous result, and several other proofs are known. I’ve spent a lot of time studying them.

Thus you can imagine my surprise today when I learned Hurwitz’s theorem was false!

Abstract. We present an eight-dimensional even sub-algebra of the 2^4 = 16-dimensional associative Clifford algebra \mathrm{Cl}_{4,0} and show that its eight-dimensional elements denoted as \mathbf{X} and \mathbf{Y} respect the norm relation \| \mathbf{X} \mathbf{Y}\| = \| \mathbf{X} \| \| \mathbf{Y} \|, thus forming an octonion-like but associative normed division algebra, where the norms are calculated using the fundamental geometric product instead of the usual scalar product. The corresponding 7-sphere has a topology that differs from that of octonionic 7-sphere.

Even more wonderful is that the author has discovered that the unit vectors in his normed division algebra form a 7-sphere that is not homeomorphic to the standard 7-sphere. Exotic 7-spheres are a dime a dozen, but those merely fail to be diffeomorphic to the standard 7-sphere.

Well, no: this article must be wrong.

Doesn’t Communications in Algebra have some mechanism where a few editors look at every paper — just to be sure they make sense?

n-Category Café Symmetric Pseudomonoids

The category of cocommutative comonoid objects in a symmetric monoidal category is cartesian, with their tensor product serving as their product. This result seems to date back to here:

Dually, the category of commutative monoid objects in a symmetric monoidal category is cocartesian. This was proved in Fox’s suspiciously similar paper in Cocomm. Coalg.

I’m working on a paper with Todd Trimble and Joe Moeller, and right now we need something similar one level up — that is, for symmetric pseudomonoids. (For example, a symmetric pseudomonoid in Cat is a symmetric monoidal category.)

The 2-category of symmetric pseudomonoids in a symmetric monoidal 2-category should be cocartesian, with their tensor product serving as their coproduct. I imagine the coproduct universal property will hold only up to 2-iso.

Has someone proved this, so we don’t need to?

Hmm, I just noticed that this paper:

proves the result I want in the special case where the symmetric monoidal 2-category is Cat. Namely:

Theorem 2.3. The 2-category SMC of symmetric monoidal categories, strong monoidal functors, and monoidal natural transformations has 2-categorical biproducts.

Unfortunately their proof is not purely ‘formal’, so it doesn’t instantly generalize to other symmetric monoidal 2-categories. And surely the fact that the coproducts in SMC are biproducts must rely on the fact that Cat is a cartesian 2-category; this must fail for symmetric pseudomonoids in a general symmetric monoidal 2-category.

They do more:

Theorem 2.2. The 2-category SMC has all small products and coproducts, and products are strict.

September 23, 2020

Tommaso Dorigo: The Hunt For A Cool Rare Decay Of W Bosons

W bosons, what are they? To answer this question, let me first tell you that our world is made of matter held together by forces. If you look deep within, you will realize that matter is essentially constituted by "fermions": quarks and leptons, particles that possess a half-integer unit of spin, in a certain meaningful system of measurement units. Forces, on the other hand, are the result of fermions exchanging different particles called "bosons", particles that possess integer units of spin.


September 22, 2020

John Baez: Ascendancy vs. Reserve

Why is biodiversity ‘good’? To what extent is this sort of goodness even relevant to ecosystems—as opposed to us humans? I’d like to study this mathematically.

To do this, we’d need to extract some answerable questions out of the morass of subtlety and complexity. For example: what role does biodiversity play in the ability of ecosystems to be robust under sudden changes of external conditions? This is already plenty hard to study mathematically, since it requires understanding ‘biodiversity’ and ‘robustness’.

Luckily there has already been a lot of work on the mathematics of biodiversity and its connection to entropy. For example:

• Tom Leinster, Measuring biodiversity, Azimuth, 7 November 2011.

But how does biodiversity help robustness?

There’s been a lot of work on this. This paper has some inspiring passages:

• Robert E. Ulanowicz, Sally J. Goerner, Bernard Lietaer and Rocio Gomez, Quantifying sustainability: Resilience, efficiency and the return of information theory, Ecological Complexity 6 (2009), 27–36.

I’m not sure the math lives up to their claims, but I like these lines:

In other words, (14) says that the capacity for a system to undergo evolutionary change or self-organization consists of two aspects: It must be capable of exercising sufficient directed power (ascendancy) to maintain its integrity over time. Simultaneously, it must possess a reserve of flexible actions that can be used to meet the exigencies of novel disturbances. According to (14) these two aspects are literally complementary.

The two aspects are ‘ascendancy’, which is something like efficiency or being optimized, and ‘reserve capacity’, which is something like random junk that might come in handy if something unexpected comes up.

You know those gadgets you kept in the back of your kitchen drawer and never needed… until you did? If you’re aiming for ‘ascendancy’ you’d clear out those drawers. But if you keep that stuff, you’ve got more ‘reserve capacity’. They both have their good points. Ideally you want to strike a wise balance. You’ve probably sensed this every time you clean out your house: should I keep this thing because I might need it, or should I get rid of it?

I think it would be great to make these concepts precise. The paper at hand attempts this by taking a matrix of nonnegative numbers T_{i j} to describe flows in an ecological network. They define a kind of entropy for this matrix, very similar in look to Shannon entropy. Then they write this as a sum of two parts: a part closely analogous to mutual information, and a part closely analogous to conditional entropy. This decomposition is standard in information theory. This is their equation (14).
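Here is a small Python sketch of that standard decomposition for a made-up flow matrix: it splits the Shannon entropy of the normalized flows into a mutual-information part and a conditional-entropy remainder. It is the generic information-theoretic split, not a transcription of the paper’s equation (14), which as I understand it also scales these quantities by the total flow.

    import numpy as np

    def entropy_split(T):
        """Split the entropy of a nonnegative flow matrix T into mutual information
        (the 'ascendancy'-like part) plus the conditional-entropy remainder."""
        p = T / T.sum()                      # joint distribution over (source, target)
        p_i = p.sum(axis=1, keepdims=True)   # marginal over sources
        p_j = p.sum(axis=0, keepdims=True)   # marginal over targets
        nz = p > 0
        H = -np.sum(p[nz] * np.log(p[nz]))                 # joint entropy
        I = np.sum(p[nz] * np.log((p / (p_i * p_j))[nz]))  # mutual information
        return H, I, H - I                   # H - I = H(X|Y) + H(Y|X)

    T = np.array([[10.0, 2.0, 0.0],          # made-up flows between three compartments
                  [ 1.0, 5.0, 3.0],
                  [ 0.0, 4.0, 8.0]])
    print(entropy_split(T))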


The new idea of these authors is that in the context of an ecological network, the mutual information can be understood as ‘ascendancy’, while the conditional entropy can be understood as ‘reserve capacity’.

I don’t know if I believe this! But I like the general idea of a balance between ascendancy and reserve capacity.

They write:

While the dynamics of this dialectic interaction can be quite subtle and highly complex, one thing is boldly clear—systems with either vanishingly small ascendancy or insignificant reserves are destined to perish before long. A system lacking ascendancy has neither the extent of activity nor the internal organization needed to survive. By contrast, systems that are so tightly constrained and honed to a particular environment appear ‘‘brittle’’ in the sense of Holling (1986) or ‘‘senescent’’ in the sense of Salthe (1993) and are prone to collapse in the face of even minor novel disturbances. Systems that endure—that is, are sustainable—lie somewhere between these extremes. But, where?

David Hogg: systematics in abundances

I spent part of the weekend looking at issues with chemical abundances as a function of position and velocity in the Solar neighborhood. In the data, it looks like somehow the main-sequence abundances in the APOGEE and GALAH data sets are different for stars in different positions along the same orbit. That's bad for my Chemical Torus Imaging (tm) project with Adrian Price-Whelan (Flatiron)! But slowly we realized that the issue is that the abundances depend on stellar effective temperatures, and different temperatures are differently represented in different parts of the orbit. Phew. But there is a problem with the data. (Okay actually this can be a real, physical effect or a problem with the data; either way, we have to deal.) Time to call Christina Eilers (MIT), who is thinking about exactly this kind of abundance-calibration problem.

John Baez: Electric Cars

Some good news! According to this article, we’re rapidly approaching the tipping point when, even without subsidies, it will be as cheap to own an electric car as one that burns fossil fuels.

• Jack Ewing, The age of electric cars is dawning ahead of schedule, New York Times, September 20, 2020.

FRANKFURT — An electric Volkswagen ID.3 for the same price as a Golf. A Tesla Model 3 that costs as much as a BMW 3 Series. A Renault Zoe electric subcompact whose monthly lease payment might equal a nice dinner for two in Paris.

As car sales collapsed in Europe because of the pandemic, one category grew rapidly: electric vehicles. One reason is that purchase prices in Europe are coming tantalizingly close to the prices for cars with gasoline or diesel engines.

At the moment this near parity is possible only with government subsidies that, depending on the country, can cut more than $10,000 from the final price. Carmakers are offering deals on electric cars to meet stricter European Union regulations on carbon dioxide emissions. In Germany, an electric Renault Zoe can be leased for 139 euros a month, or $164.

Electric vehicles are not yet as popular in the United States, largely because government incentives are less generous. Battery-powered cars account for about 2 percent of new car sales in America, while in Europe the market share is approaching 5 percent. Including hybrids, the share rises to nearly 9 percent in Europe, according to Matthias Schmidt, an independent analyst in Berlin.

As electric cars become more mainstream, the automobile industry is rapidly approaching the tipping point when, even without subsidies, it will be as cheap, and maybe cheaper, to own a plug-in vehicle than one that burns fossil fuels. The carmaker that reaches price parity first may be positioned to dominate the segment.

A few years ago, industry experts expected 2025 would be the turning point. But technology is advancing faster than expected, and could be poised for a quantum leap. Elon Musk is expected to announce a breakthrough at Tesla’s “Battery Day” event on Tuesday that would allow electric cars to travel significantly farther without adding weight.

The balance of power in the auto industry may depend on which carmaker, electronics company or start-up succeeds in squeezing the most power per pound into a battery, what’s known as energy density. A battery with high energy density is inherently cheaper because it requires fewer raw materials and less weight to deliver the same range.

“We’re seeing energy density increase faster than ever before,” said Milan Thakore, a senior research analyst at Wood Mackenzie, an energy consultant which recently pushed its prediction of the tipping point ahead by a year, to 2024.

However, the article also points out that this tipping point concerns the overall lifetime cost of the vehicle! The sticker price of electric cars will still be higher for a while. And there aren’t nearly enough charging stations!

My next car will be electric. But first I’m installing solar power for my house. I’m working on it now.

September 21, 2020

John Baez: The Brownian Map


Nina Holden won the 2021 Maryam Mirzakhani New Frontiers Prize for her work on random surfaces and the mathematics of quantum gravity. I’d like to tell you what she did… but I’m so far behind I’ll just explain a bit of the background.

Suppose you randomly choose a triangulation of the sphere with n triangles. This is a purely combinatorial thing, but you can think of it as a metric space if each of the triangles is equilateral with all sides of length 1.

This is a distorted picture of what you might get, drawn by Jérémie Bettinelli:


The triangles are not drawn as equilateral, so we can fit this shape into 3d space. Visit Bettinelli’s page for images that you can rotate:

• Jérémie Bettinelli, Computer simulations of random maps.

I’ve described how to build a random space out of n triangles. In the limit n \to \infty, if you rescale the resulting space by a factor of n^{-1/4} so it doesn’t get bigger and bigger, it converges to a ‘random metric space’ with fascinating properties. It’s called the Brownian map.

This random metric space is on average so wrinkly and crinkly that ‘almost surely’—that is, with probability 1—its Hausdorff dimension is not 2 but 4. And yet it is almost surely homeomorphic to a sphere!

Rigorously proving this is hard: a mix of combinatorics, probability theory and geometry.

Ideas from physics are also important here. There’s a theory called ‘Liouville quantum gravity’ that describes these random 2-dimensional surfaces. So, physicists have ways of—nonrigorously—figuring out answers to some questions faster than the mathematicians!

A key step in understanding the Brownian map was this paper from 2013:

• Jean-François Le Gall, Uniqueness and universality of the Brownian map, Annals of Probability 41 (2013), 2880–2960.



The Brownian map is to surfaces what Brownian motion is to curves. For example, the Hausdorff dimension of Brownian motion is almost surely 2: twice the dimension of a smooth curve. For the Brownian map it’s almost surely 4, twice the dimension of a smooth surface.
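You can get a feel for the claim about Brownian motion numerically. Here is a crude Python sketch that box-counts a long 2D random walk (a discrete stand-in for Brownian motion); the fitted dimension creeps toward 2 as the walk gets longer, though convergence is slow, so don’t expect to see 2.00 exactly.

    import numpy as np

    rng = np.random.default_rng(0)

    # A long 2D random walk, rescaled into the unit square.
    path = np.cumsum(rng.normal(size=(500_000, 2)), axis=0)
    path -= path.min(axis=0)
    path /= path.max()

    # Box counting: how many boxes of side 1/n does the path visit?
    scales = [2**k for k in range(3, 9)]
    counts = [len(np.unique(np.floor(path * n).astype(int), axis=0)) for n in scales]

    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    print(f"Box-counting dimension estimate: {slope:.2f}")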

Let me just say one more technical thing. There’s a ‘space of all compact metric spaces’, and the Brownian map is actually a probability measure on this space! It’s called the Gromov-Hausdorff space, and it itself is a metric space… but not compact. (So no, we don’t have a set containing itself as an element.)


There’s a lot more to say about this… but I haven’t gotten very close to understanding Nina Holden’s work yet. She wrote a 7-paper series leading up to this one:

• Nina Holden and Xin Sun, Convergence of uniform triangulations under the Cardy embedding.

They show that random triangulations of a disk converge to a random metric on the disk, which can also be obtained from Liouville quantum gravity.

This is a much easier place to start learning this general subject:

• Ewain Gwynne, Random surfaces and Liouville quantum gravity.

One reason I find all this interesting is that when I worked on ‘spin foam models’ of quantum gravity, we were trying to develop combinatorial theories of spacetime that had nice limits as the number of discrete building blocks approached infinity. We were studying theories much more complicated than random 2-dimensional triangulations, and it quickly became clear to me how much work it would be to carefully analyze these. So it’s interesting to see how mathematicians have entered this subject—starting with a much more tractable class of theories, which are already quite hard.

While the theory I just described gives random metric spaces whose Hausdorff dimension is twice their topological dimension, Liouville quantum gravity actually contains an adjustable parameter that lets you force these metric spaces to become less wrinkled, with lower Hausdorff dimension. Taming the behavior of random triangulations gets harder in higher dimensions. Renate Loll, Jan Ambjørn and others have argued that we need to work with Lorentzian rather than Riemannian geometries to get physically reasonable behavior. This approach to quantum gravity is called causal dynamical triangulations.

Doug Natelson: Rice ECE assistant professor position in Quantum Engineering

The Department of Electrical and Computer Engineering at Rice University invites applications for a tenure track Assistant Professor Position in the area of experimental quantum engineering, broadly defined. Under exceptional circumstances, more experienced senior candidates may be considered. Specific areas of interest include, but are not limited to: quantum computation, quantum sensing, quantum simulation, and quantum networks.

The department has a vibrant research program in novel, leading-edge research areas, has a strong culture of interdisciplinary and multidisciplinary research with great national and international visibility, and is ranked #1 nationally in faculty productivity.* With multiple faculty involved in quantum materials, quantum devices, optics and photonics, and condensed matter physics, Rice ECE considers these areas as focal points of quantum engineering research in the coming decade. The successful applicant will be required to teach undergraduate courses and build a successful research program.

The successful candidate will have a strong commitment to teaching, advising, and mentoring undergraduate and graduate students from diverse backgrounds. Consistent with the National Research Council’s report, Convergence: Facilitating Transdisciplinary Integration of Life Sciences, Physical Sciences, Engineering, and Beyond, we are seeking candidates who have demonstrated ability to lead and work in research groups that “… [integrate] the knowledge, tools, and ways of thinking…” from engineering, mathematics, and computational, natural, social and behavioral sciences to solve societal problems using a convergent approach.

Applicants should submit a cover letter, curriculum vitae, statements of research and teaching interests, and at least three references through the Rice faculty application website: http://jobs.rice.edu/postings/24582. The deadline for applications is January 15, 2021; review of applications will commence November 15, 2020. The position is expected to be available July 1, 2021. Additional information can be found on our website: http://www.ece.rice.edu.

Rice University is a private university with a strong reputation for academic excellence in both undergraduate and graduate education and research. Located in the economically dynamic, internationally diverse city of Houston, Texas, 4th largest city in the U.S., Rice attracts outstanding undergraduate and graduate students from across the nation and around the world. Rice provides a stimulating environment for research, teaching, and joint projects with industry.

The George R. Brown School of Engineering ranks among the top 20 of undergraduate engineering programs (US News & World Report) and is strongly committed to nurturing the aspirations of faculty, staff, and students in an inclusive environment. Rice University is an Equal Opportunity Employer with commitment to diversity at all levels and considers for employment qualified applicants without regard to race, color, religion, age, sex, sexual orientation, gender identity, national or ethnic origin, genetic information, disability, or protected veteran status. We seek greater representation of women, minorities, people with disabilities, and veterans in disciplines in which they have historically been underrepresented; to attract international students from a wider range of countries and backgrounds; to accelerate progress in building a faculty and staff who are diverse in background and thought; and we support an inclusive environment that fosters interaction and understanding within our diverse community.

*http://news.rice.edu/2007/11/30/rices-electrical-engineering-and-computer-science-programs-rank-no-1/ Rice University is an Equal Opportunity Employer with commitment to diversity at all levels, and considers for employment qualified applicants without regard to race, color, religion, age, sex, sexual orientation, gender identity, national or ethnic origin, genetic information, disability or protected veteran status.

Doug Natelson: Tenure-track faculty position in Astronomy at Rice University

The Department of Physics and Astronomy at Rice University invites applications for a tenure-track faculty position in astronomy in the general field of galactic star formation, including the formation and evolution of planetary systems. We seek an outstanding theoretical, observational, or computational astronomer whose research will complement and extend existing activities in these areas within the Department. In addition to developing an independent and vigorous research program, the successful applicant will be expected to teach, on average, one undergraduate or graduate course each semester, and contribute to the service missions of the Department and University. The Department anticipates making the appointment at the assistant professor level. A Ph.D. in astronomy/astrophysics or related field is required. 

Applications for this position must be submitted electronically at http://jobs.rice.edu/postings/24588. Applicants will be required to submit the following: (1) cover letter; (2) curriculum vitae; (3) statement of research; (4) statement on teaching, mentoring, and outreach; (5) PDF copies of up to three publications; and (6) the names, affiliations, and email addresses of three professional references. Rice University is committed to a culturally diverse intellectual community. In this spirit, we particularly welcome applications from all genders and members of historically underrepresented groups who exemplify diverse cultural experiences and who are especially qualified to mentor and advise all members of our diverse student population. We will begin reviewing applications December 1, 2020. To receive full consideration, all application materials must be received by January 1, 2021. The appointment is expected to begin in July, 2021. 

Rice University is an Equal Opportunity Employer with a commitment to diversity at all levels, and considers for employment qualified applicants without regard to race, color, religion, age, sex, sexual orientation, gender identity, national or ethnic origin, genetic information, disability, or protected veteran status. We encourage applicants from diverse backgrounds to apply.


Scott Aaronson: Agent 3203.7: Guest post by Eliezer Yudkowsky

In his day, Agent 3203.7 had stopped people from trying to kill Adolf Hitler, Richard Nixon, and even, in the case of one unusually thoughtful assassin, Henry David Thoreau. But this was a new one on him.

“So…” drawled the seventh version of Agent 3203. His prosthetic hand crushed the simple 21st-century gun into fused metal and dropped it. “You traveled to the past in order to kill… of all people… Donald Trump. Care to explain why?”

The time-traveller’s eyes looked wild. Crazed. Nothing unusual. “How can you ask me that? You’re a time-traveler too! You know what he does!”

That was a surprising level of ignorance even for a 21st-century jumper. “Different timelines, kid. Some are pretty obscure. What the heck did Trump do in yours that’s worth taking your one shot at time travel to assassinate him of all people?”

“He’s destroying my world!”

Agent 3203.7 took a good look at where Donald Trump was pridefully addressing the unveiling of the Trump Taj Mahal in New Jersey, then took another good look at the errant time-traveler. “Destroying it how, exactly? Did Trump turn mad scientist in your timeline?”

“He’s President of the United States!”

Agent 3203.7 took another long stare at his new prisoner. He was apparently serious. “How did Trump become President in your timeline? Strangely advanced technology, subliminal messaging?”

“He was elected in the usual way,” the prisoner said bitterly.

Agent 3203.7 shook his head in amazement. Talk about shooting the messenger. “Kid, I doubt Trump was your timeline’s main problem.”

(thanks to Eliezer for giving me permission to reprint here)

David Hogg: scope of a paper on asteroseismology

Today Bonaca (Harvard) and I settled on a full scope for a first paper on asteroseismology of giant stars in the NASA TESS survey. We are going to find that our marginalized-likelihood formalism confirms beautifully many of the classical asteroseismology results; we are going to find that some get adjusted by us; we are going to find that some are totally invisible to us. And we will have reasons or discussion for all three kinds of cases. And then we will run on “everything” (subject to some cuts). That's a good paper! If we can do it. This weekend I need to do some writing to get our likelihood properly recorded in math.

John Preskill: Love in the time of thermo

An 81-year-old medical doctor has fallen off a ladder in his house. His pet bird hopped out of his reach, from branch to branch of a tree on the patio. The doctor followed via ladder and slipped. His servants cluster around him, the clamor grows, and he longs for his wife to join him before he dies. She arrives at last. He gazes at her face; utters, “Only God knows how much I loved you”; and expires.

I set the book down on my lap and looked up. I was nestled in a wicker chair outside the Huntington Art Gallery in San Marino, California. Busts of long-dead Romans kept me company. The lawn in front of me unfurled below a sky that—unusually for San Marino—was partially obscured by clouds. My final summer at Caltech was unfurling. I’d walked to the Huntington, one weekend afternoon, with a novel from Caltech’s English library.1

What a novel.

You may have encountered the phrase “love in the time of corona.” Several times. Per week. Throughout the past six months. Love in the Time of Cholera predates the meme by 35 years. Nobel laureate Gabriel García Márquez captured the inhabitants, beliefs, architecture, mores, and spirit of a Colombian city around the turn of the 20th century. His work transcends its setting, spanning love, death, life, obsession, integrity, redemption, and eternity. A thermodynamicist couldn’t ask for more-fitting reading.

Love in the Time of Cholera centers on a love triangle. Fermina Daza, the only child of a wealthy man, excels in her studies. She holds herself with poise and self-assurance, and she spits fire whenever others try to control her. The girl dazzles Florentino Ariza, a poet, who restructures his life around his desire for her. Fermina Daza’s pride impresses Dr. Juvenal Urbino, a doctor renowned for exterminating a cholera epidemic. After rejecting both men, Fermina Daza marries Dr. Juvenal Urbino. The two personalities clash, and one betrays the other, but they cling together across the decades. Florentino Ariza retains his obsession with Fermina Daza, despite having countless affairs. Dr. Juvenal Urbino dies by ladder, whereupon Florentino Ariza swoops in to win Fermina Daza over. Throughout the book, characters mistake symptoms of love for symptoms of cholera; and lovers block out the world by claiming to have cholera and self-quarantining.

As a thermodynamicist, I see the second law of thermodynamics in every chapter. The second law implies that time marches only forward, order decays, and randomness scatters information to the wind. García Márquez depicts his characters aging, aging more, and aging more. Many characters die. Florentino Ariza’s mother loses her memory to dementia or Alzheimer’s disease. A pawnbroker, she buys jewels from the elite whose fortunes have eroded. Forgetting the jewels’ value one day, she mistakes them for candies and distributes them to children.

The second law bites most, to me, in the doctor’s final words, “Only God knows how much I loved you.” Later, the widow Fermina Daza sighs, “It is incredible how one can be happy for so many years in the midst of so many squabbles, so many problems, damn it, and not really know if it was love or not.” She doesn’t know how much her husband loved her, especially in light of the betrayal that rocked the couple and a rumor of another betrayal. Her husband could have affirmed his love with his dying breath, but he refused: He might have loved her with all his heart, and he might not have loved her; he kept the truth a secret to all but God. No one can retrieve the information after he dies.2 

Love in the Time of Cholera—and thermodynamics—must sound like a mouthful of horseradish. But each offers nourishment, an appetizer and an entrée. According to the first law of thermodynamics, the amount of energy in every closed, isolated system remains constant: Physics preserves something. Florentino Ariza preserves his love for decades, despite Fermina Daza’s marrying another man, despite her aging.

The latter preservation can last only so long in the story: Florentino Ariza, being mortal, will die. He claims that his love will last “forever,” but he won’t last forever. At the end of the novel, he sails between two harbors—back and forth, back and forth—refusing to finish crossing a River Styx. I see this sailing as prethermalization: A few quantum systems resist thermalizing, or flowing to the physics analogue of death, for a while. But they succumb later. Florentino Ariza can’t evade the far bank forever, just as the second law of thermodynamics forbids his boat from functioning as a perpetuum mobile.

Though mortal within his story, Florentino Ariza survives as a book character. The book survives. García Márquez wrote about a country I’d never visited, and an era decades before my birth, 33 years before I checked his book out of the library. But the book dazzled me. It pulsed with the vibrancy, color, emotion, and intellect—with the fullness—of life. The book gained another life when the coronavirus hit. Thermodynamics dictates that people age and die, but the laws of thermodynamics remain.3 I hope and trust—with the caveat about humanity’s not destroying itself—that Love in the Time of Cholera will pulse in 350 years.

What’s not to love?

1Yes, Caltech has an English library. I found gems in it, and the librarians ordered more when I inquired about books they didn’t have. I commend it to everyone who has access.

2I googled “Only God knows how much I loved you” and was startled to see the line depicted as a hallmark of romance. Please tell your romantic partners how much you love them; don’t make them guess till the ends of their lives.

3Lee Smolin has proposed that the laws of physics change. If they do, the change seems to have to obey metalaws that remain constant.

September 20, 2020

n-Category Café Open Systems: A Double Categorical Perspective (Part 2)

Back to Kenny Courser’s thesis:

One thing Kenny does here is explain the flaws in a well-known framework for studying open systems: decorated cospans. Decorated cospans were developed by my student Brendan Fong. Since I was Brendan’s advisor at the time, a hefty helping of blame for not noticing the problems belongs to me! But luckily, Kenny doesn’t just point out the problems: he shows how to fix them. As a result, everything we’ve done with decorated cospans can be saved.

The main theorem on decorated cospans is correct; it’s just less useful than we’d hoped! The idea is to cook up categories where the morphisms are open systems. The objects of such a category could be something simple like finite sets, but morphisms from X to Y could be something more interesting, like ‘open graphs’:

Here X and Y are mapped into a third set in the middle, but this set in the middle is the set of nodes of a graph. We say the set in the middle has been ‘decorated’ with a graph.

Here’s how the original theory of decorated cospans seeks to make this precise.

Fong’s Theorem. Suppose C is a category with finite colimits, and make C into a symmetric monoidal category with its coproduct as the tensor product. Suppose F \colon (C,+) \to (\mathrm{Set},\times) is a lax symmetric monoidal functor. Define an F-decorated cospan to be a cospan

in C together with an element of F(N). Then there is a symmetric monoidal category with

  • objects of C as objects,
  • isomorphism classes of F-decorated cospans as morphisms.

I won’t go into many details, but let me say how to compose two decorated cospans, and also how this ‘isomorphism class’ business works.

Given two decorated cospans we compose their underlying cospans in the usual way, via pushout:

We get a cospan from X to Z. To decorate this we need an element of F(N +_Y M). So, we take the decorations we have on the cospans being composed, which together give an element of F(N) \times F(M), and apply this composite map:

F(N) \times F(M) \longrightarrow F(N+M) \longrightarrow F(N+_Y M)

Here the first map, called the laxator, comes from the fact that F is a lax monoidal functor, while the second comes from applying F to the canonical map N+M \to N+_Y M.

Since composing cospans involves a pushout, which is defined via a universal property, the composite is only well-defined up to isomorphism. So, to get an actual category, we take isomorphism classes of decorated cospans as our morphisms.
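If it helps to see the composition in code, here is a minimal Python sketch of composing two open graphs, with dicts standing in for functions and tagged pairs for the disjoint union; the pushout is computed with a little union-find, and the composite decoration is built just as in the composite map above: first combine the two edge sets (the laxator), then push their endpoints along the canonical map to the pushout. This is illustrative pseudocode of mine, not anything from Kenny’s thesis.

    def pushout(N, M, left, right):
        """Pushout of finite sets N <- Y -> M, with left: Y -> N and right: Y -> M
        given as dicts. Returns the inclusions of N and M into the pushout N +_Y M."""
        parent = {('N', n): ('N', n) for n in N}
        parent.update({('M', m): ('M', m) for m in M})

        def find(e):
            while parent[e] != e:
                parent[e] = parent[parent[e]]
                e = parent[e]
            return e

        for y in left:                                   # glue left(y) ~ right(y)
            parent[find(('N', left[y]))] = find(('M', right[y]))

        incl_N = {n: find(('N', n)) for n in N}
        incl_M = {m: find(('M', m)) for m in M}
        return incl_N, incl_M

    def compose(open1, open2):
        """Compose open graphs X -> N <- Y (edges E1 on N) and Y -> M <- Z (edges E2 on M)."""
        X, N, Y, f, g, E1 = open1
        _, M, Z, f2, g2, E2 = open2
        incl_N, incl_M = pushout(N, M, g, f2)            # glue the middle sets along Y
        edges = [(incl_N[s], incl_N[t]) for s, t in E1] + \
                [(incl_M[s], incl_M[t]) for s, t in E2]  # decoration on the pushout
        legs = ({x: incl_N[f[x]] for x in X}, {z: incl_M[g2[z]] for z in Z})
        return set(incl_N.values()) | set(incl_M.values()), edges, legs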

Here an isomorphism of cospans is a commuting diagram like this:

where h is an isomorphism. If the first cospan here has a decoration d \in F(N) and the second has a decoration d' \in F(N'), then we have an isomorphism of decorated cospans if F(h)(d) = d'.

So, that’s the idea. The theorem is true, and it works fine for some applications — but not so well for others, like the example of open graphs!

Why not? Well, let’s look at that example in detail. Given a finite set N, let’s define a graph on N to be a finite set E together with two functions s, t \colon E \to N. We call N the set of nodes, E the set of edges, and the functions s and t map each edge to its source and target, respectively. So, a graph on N is a way of choosing a graph whose set of nodes is N.

We can try to apply the above theorem taking C = \mathrm{FinSet} and letting F \colon \mathrm{FinSet} \to \mathrm{Set} be the functor sending each finite set N to the set of all graphs on N.

The first problem, which Brendan and I noticed right away, is that there’s not really a set of graphs on N. There’s a proper class! E ranges over all possible finite sets, and there’s not a set of all finite sets.

This should have set alarm bells ringing right away. But we used a standard dodge. In fact there are two. One is to replace \mathrm{FinSet} with an equivalent small category, and define a graph just as before but taking N and E to be objects in this equivalent category. Another is to invoke the axiom of universes. Either way, we get a set of graphs on each N.

Then Fong’s theorem applies, and we get a decorated cospan category with:

  • ‘finite sets’ as objects,
  • isomorphism classes of open graphs as morphisms.

Here I’m putting ‘finite sets’ in quotes because of the trickery I just mentioned, but it’s really not a big deal so I’ll stop now. An open graph has a finite set N of nodes, a finite set E of edges, maps s,t \colon E \to N saying the source and target of each edge, and two maps f \colon X \to N and g \colon Y \to N.

These last two maps are what make it an open graph going from X to Y:

Isomorphism classes of open graphs from X to Y are the morphisms from X to Y in our decorated cospan category.

But later, when Kenny was devising a bicategory of decorated cospans, we noticed a second problem. The concept of ‘isomorphic decorated cospan’ doesn’t behave well in this example: the concept of isomorphism is too narrowly defined!

Suppose you and I have isomorphic decorated cospans:

In the example at hand, this means you have a graph on the finite set N and I have a graph on the finite set N'. Call yours d \in F(N) and mine d' \in F(N').

We also have a bijection h \colon N \to N' such that

F(h)(d) = d'

What does this mean? I haven’t specified the functor F in detail so you’ll have to take my word for it, but it should be believable. It means that my graph is exactly like yours except that we replace the nodes of your graph, which are elements of N, by the elements of N' that they correspond to. But this means the edges of my graph must be exactly the same as the edges of yours. It’s not good enough for our graphs to have isomorphic sets of edges: they need to be equal!

For a more precise account of this, with pictures, read the introduction to Chapter 3 of Kenny’s thesis.

So, our decorated cospan category has ‘too many morphisms’. Two open graphs will necessarily define different morphisms if they have different sets of edges.

This set Kenny and me to work on a new formalism, structured cospans, that avoids this problem. Later Kenny and Christina Vasilakopoulou also figured out a way to fix the decorated cospan formalism. Kenny’s thesis explains all this, and also how structured cospans are related to the ‘new, improved’ decorated cospans.

But then something else happened! Christina Vasilakopoulou was a postdoc at U.C. Riverside while all this was going on. She and my grad student Joe Moeller wrote a paper on something called the monoidal Grothendieck construction, which plays a key role in relating structured cospans to the new decorated cospans. But the anonymous referee of their paper pointed out another problem with the old decorated cospans!

Briefly, the problem is that the functor $F \colon (\mathrm{FinSet},+) \to (\mathrm{Set},\times)$ that sends each $N$ to the set of graphs having $N$ as their set of vertices cannot be made lax monoidal in the desired way. To make $F$ lax monoidal, we must pick a natural transformation called the laxator:

$$\phi_{N,M} \colon F(N) \times F(M) \to F(N+M)$$

I used this map earlier when explaining how to compose decorated cospans.

The idea seems straightforward enough: given a graph on $N$ and a graph on $M$ we get a graph on their disjoint union $N+M$. This is true, and there is a natural transformation $\phi_{N,M}$ that does this.

But the definition of lax monoidal functor demands that the laxator make a certain hexagon commute! And it does not!

I won’t draw this hexagon here; you can see it at the link or in the intro to Chapter 3 of Kenny’s thesis, where he explains this problem. The problem arises because when we have three sets of edges, say $E, E', E''$, we typically have

$$(E + E') + E'' \ne E + (E' + E'')$$

There is a sneaky way to partially get around this problem, which he also explains: define graphs using a category equivalent to $\mathrm{FinSet}$ where $+$ is strictly associative, not just up to isomorphism!

This is good enough to make $F$ lax monoidal. But Kenny noticed yet another problem: $F$ is still not lax symmetric monoidal. If you try to show it is, you wind up needing two graphs to be equal: one with $E+E'$ as its set of edges, and another with $E'+E$ as its set of edges. These aren’t equal! And at this point it seems we hit a dead end. While there’s a category equivalent to $\mathrm{FinSet}$ where $+$ is strictly associative, there’s no such category where $+$ is also strictly commutative.
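To see concretely why strict associativity and commutativity fail, here is a minimal Python sketch (my own illustration, not from the post): it implements disjoint union of edge sets in the usual ‘tagged’ way and checks that the resulting sets are merely in bijection, not equal.

```python
# Minimal sketch: disjoint union of sets implemented by tagging elements,
# the usual set-theoretic construction behind E + E'.
def disjoint_union(A, B):
    """Return A + B as a set of tagged pairs."""
    return {(0, a) for a in A} | {(1, b) for b in B}

E  = {"e1", "e2"}
E1 = {"f1"}
E2 = {"g1"}

left  = disjoint_union(disjoint_union(E, E1), E2)   # (E + E') + E''
right = disjoint_union(E, disjoint_union(E1, E2))   # E + (E' + E'')

print(left == right)            # False: + is not strictly associative
print(len(left) == len(right))  # True: the two sets are in bijection

print(disjoint_union(E, E1) == disjoint_union(E1, E))  # False: + is not strictly commutative
```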

In the end, by going through all these contortions, we can use a watered-down version of Fong’s theorem to get a category with open graphs as morphisms, and we can make it monoidal — but not symmetric monoidal.

It’s clear that something bad is going on here. We are not following the tao of mathematics. We keep wanting things to be equal that should only be isomorphic. The problem is that we’re treating a graph on a set as extra structure on that set, when it’s actually extra stuff.

Structured cospans, and the new decorated cospans, are the solution! For example, the new decorated cospans let us use a category of graphs with a given set of nodes, instead of a mere set. Now $F(N)$ is a category rather than a set. This category lets us talk about isomorphic graphs with the same set of vertices, and all our problems evaporate.

n-Category Café Special Numbers in Category Theory

There are a few theorems in abstract category theory in which specific numbers play an important role. For example:

Theorem. Let $\mathsf{S}$ be the free symmetric monoidal category on an object $x$. Regard $\mathsf{S}$ as a mere category. Then there exists an equivalence $F \colon \mathsf{S} \to \mathsf{S}$ such that:

  • $F$ is not naturally isomorphic to the identity,
  • $F$ acts as the identity on all objects,
  • $F$ acts as the identity on all endomorphisms $f \colon x^{\otimes n} \to x^{\otimes n}$ except when $n = 6$.

This theorem would become false if we replaced $6$ by any other number.

The proof is lurking here. The point is that $\mathsf{S}$ is the groupoid of finite sets and bijections, so $\mathrm{hom}(x^{\otimes n}, x^{\otimes n})$ is the symmetric group $S_n$ — and of all the symmetric groups, only $S_6$ has an outer automorphism.

If we replaced the free symmetric monoidal category on one object by some higher-dimensional analogues we could create theorems with all sorts of crazy numbers showing up, like 24 or 240, since we could get homotopy groups of spheres.

Still, it’s a surprise when a theorem with purely category-theoretic assumptions has a specific number other than 0, 1, or 2 in its conclusion. We were talking about these on Category Theory Community Server. Here’s one pointed out by Peter Arndt:

Theorem. The only category for which the Yoneda embedding is the rightmost of a string of 5 adjoints is the category $\mathsf{Set}$.

The proof is here:

Here’s another one that Arndt pointed out:

Theorem. There are just 3 possible lengths of maximal chains of adjoint functors between compactly generated tensor-triangulated categories: 3, 5 and $\infty$.

The proof is here:

Reid Barton pointed out another:

Theorem. There are just 9 model category structures on $\mathsf{Set}$.

This was mentioned without proof by Tom Goodwillie on MathOverflow and explained here:

Do you know other nice theorems like this: hypotheses that sound like ‘general abstract nonsense’, with a surprising conclusion that involves a specific natural number other than 0, 1, and 2?

David Hoggtime-domain astronomy in Gotham

Today, Tyler Pritchard (NYU) and I assembled a group of time-domain-interested astrophysicists from around NYC (and a few who are part of the NYC community but more far-flung). In a two-hour meeting, all we did was introduce ourselves and our interests in time domain, multi-messenger, and Vera Rubin Observatory LSST, and then discuss what we might do, collectively, as a group. Time-domain expertise spanned an amazing range of scales, from asteroid search to exoplanet characterization to stellar rotation to classical novae to white-dwarf mergers with neutron stars, supernovae, light echoes, AGN variability, tidal-disruption events, and black-hole mergers. As we had predicted in advance, the group recognized a clear opportunity to create some kind of externally funded “Gotham” (the terminology we often use for NYC-area efforts these days) center for time-domain astrophysics.

Also, as we predicted, there was more confusion about whether we should be thinking about a real-time event broker for LSST. But we identified some themes in the group that might make for a good project: We have very good theorists working, who could help on physics-driven multi-messenger triggers. We have very good machine-learners working, who could help on data-driven triggers. And we have lots of non-supernovae (and weird-supernova) science cases among us. Could we make something that serves our collective science interests but is also extremely useful to global astrophysics? I think we could.

David Hogghow precisely can you locate K frequencies?

In Fourier transforms, or periodograms, or signal processing in general, when you look at a time stream that is generated by a single frequency f (plus noise, say) and has a total time length T, you expect the power-spectrum peak, or the likelihood function for f, to have a width that is no narrower than 1/T in the frequency direction. This is for extremely deep reasons that relate to—among other things—the uncertainty principle. You can't localize a signal in both frequency and time simultaneously.

Ana Bonaca (Harvard) and I are fitting combs of frequencies to light curves. That is, we are fitting a model with K frequencies, equally spaced. We are finding that the likelihood function has a peak that is a factor of K narrower than 1/T, in both the central-frequency direction, and the frequency-spacing direction. Is this interesting or surprising? I have ways to justify this point, heuristically. But is there a fundamental result here, and where is it in the literature?
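Here is a small numpy sketch of the first claim (my own illustration, with made-up numbers): the periodogram of a noisy sinusoid observed for a total time T has a peak whose width is of order 1/T, no matter how finely you sample the frequency grid.

```python
import numpy as np

# Toy illustration: a single sinusoid of frequency f0, observed for total time T.
T, f0, n = 100.0, 0.7, 4000            # duration, true frequency, number of samples
t = np.linspace(0.0, T, n)
y = np.sin(2 * np.pi * f0 * t) + 0.1 * np.random.randn(n)

# Evaluate the periodogram on a frequency grid much finer than 1/T.
freqs = np.linspace(f0 - 0.1, f0 + 0.1, 2001)
power = np.array([np.abs(np.sum(y * np.exp(-2j * np.pi * f * t)))**2 for f in freqs])

# Full width at half maximum of the peak, in frequency units.
half = power.max() / 2
fwhm = freqs[power > half][-1] - freqs[power > half][0]
print(f"FWHM = {fwhm:.4f}, compare 1/T = {1/T:.4f}")   # both of order 0.01
```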

September 18, 2020

Terence TaoExploring the toolkit of Jean Bourgain

I have uploaded to the arXiv my paper “Exploring the toolkit of Jean Bourgain“. This is one of a collection of papers to be published in the Bulletin of the American Mathematical Society describing aspects of the work of Jean Bourgain; other contributors to this collection include Keith Ball, Ciprian Demeter, and Carlos Kenig. Because the other contributors will be covering specific areas of Jean’s work in some detail, I decided to take a non-overlapping tack, and focus instead on some basic tools of Jean that he frequently used across many of the fields he contributed to. Jean had a surprising number of these “basic tools” that he wielded with great dexterity, and in this paper I focus on just a few of them:

  • Reducing qualitative analysis results (e.g., convergence theorems or dimension bounds) to quantitative analysis estimates (e.g., variational inequalities or maximal function estimates).
  • Using dyadic pigeonholing to locate good scales to work in or to apply truncations (see the short sketch after this list).
  • Using random translations to amplify small sets (low density) into large sets (positive density).
  • Combining large deviation inequalities with metric entropy bounds to control suprema of various random processes.
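As a tiny illustration of the dyadic pigeonholing item above (my own gloss, not a quote from the paper): if a nonnegative quantity splits over $J$ dyadic scales as

$$A \leq \sum_{j=1}^{J} a_j, \qquad a_j \geq 0,$$

then some scale $j_0$ obeys $a_{j_0} \geq A/J$; since the number of relevant dyadic scales is typically $J = O(\log N)$, restricting attention to this one “good” scale costs only a logarithmic factor.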

Each of these techniques is individually not too difficult to explain, and each was certainly employed on occasion by various mathematicians prior to Bourgain’s work; but Jean had internalized them to the point where he would instinctively use them as soon as they became relevant to a given problem at hand. I illustrate this at the end of the paper with an exposition of one particular result of Jean, on the Erdős similarity problem, in which his main result (that any sum {S = S_1+S_2+S_3} of three infinite sets of reals has the property that there exists a positive measure set {E} that does not contain any homothetic copy {x+tS} of {S}) is basically proven by a sequential application of these tools (except for dyadic pigeonholing, which turns out not to be needed here).

I had initially intended to also cover some other basic tools in Jean’s toolkit, such as the uncertainty principle and the use of probabilistic decoupling, but was having trouble keeping the paper coherent with such a broad focus (certainly I could not identify a single paper of Jean’s that employed all of these tools at once). I hope though that the examples given in the paper gives some reasonable impression of Jean’s research style.

Matt von HippelWhen and How Scientists Reach Out

You’ve probably heard of the myth of the solitary scientist. While Newton might have figured out calculus isolated on his farm, most scientists work better when they communicate. If we reach out to other scientists, we can make progress a lot faster.

Even if you understand that, you might not know what that reaching out actually looks like. I’ve seen far too many crackpots who approach scientific communication like a spammer: sending out emails to everyone in a department, commenting in every vaguely related comment section they can find. While commercial spammers hope for a few gullible people among the thousands they contact, that kind of thing doesn’t benefit crackpots. As far as I can tell, they communicate that way because they genuinely don’t know any better.

So in this post, I want to give a road map for how we scientists reach out to other scientists. Keep these steps in mind, and if you ever need to reach out to a scientist you’ll know what to do.

First, decide what you want to know. This may sound obvious, but sometimes people skip this step. We aren’t communicating just to communicate, but because we want to learn something from the other person. Maybe it’s a new method or idea, maybe we just want confirmation we’re on the right track. We don’t reach out just to “show our theory”, but because we hope to learn something from the response.

Then, figure out who might know it. To do this, we first need to decide how specialized our question is. We often have questions about specific papers: a statement we don’t understand, a formula that seems wrong, or a method that isn’t working. For those, we contact an author from that paper. Other times, the question hasn’t been addressed in a paper, but does fall under a specific well-defined topic: a particular type of calculation, for example. For those we seek out a specialist on that specific topic. Finally, sometimes the question is more general, something anyone in our field might in principle know but we happen not to. For that kind of question, we look for someone we trust, someone we have a prior friendship with and feel comfortable asking “dumb questions”. These days, we can supplement that with platforms like PhysicsOverflow that let us post technical questions and invite anyone to respond.

Note that, for all of these, there’s some work to do first. We need to read the relevant papers, bone up on a topic, even check Wikipedia sometimes. We need to put in enough work to at least try to answer our question, so that we know exactly what we need the other person for.

Finally, contact them appropriately. Papers will usually give contact information for one, or all, of the authors. University websites will give university emails. We’d reach out with something like that first, and switch to personal email (or something even more casual, like Skype or social media) only for people we already have a track record of communicating with in that way.

By posing and directing our questions well, scientists can reach out and get help when we struggle. Science is a team effort; we’re stronger when we work together.

Scott AaronsonIn a world like this one, take every ally you can get

Update (Sep. 17): Several people, here and elsewhere, wrote to tell me that while they completely agreed with my strategic and moral stance in this post, they think that it’s the ads of Republican Voters Against Trump, rather than the Lincoln Project, that have been most effective in changing Trump supporters’ minds. So please consider donating to RVAT instead or in addition! In fact, what the hell, I’ll match donations to RVAT up to $1000.


For the past few months, I’ve alternated between periods of debilitating depression and (thankfully) longer stretches when I’m more-or-less able to work. Triggers for my depressive episodes include reading social media, watching my 7-year-old daughter struggle with prolonged isolation, and (especially) contemplating the ongoing apocalypse in the American West, the hundreds of thousands of pointless covid deaths, and an election in 48 days that if I didn’t know such things were impossible in America would seem likely to produce a terrifying standoff as a despot and millions of his armed loyalists refuse to cede control. Meanwhile, catalysts for my relatively functional periods have included teaching my undergrad quantum information class, Zoom calls with my students, life on Venus?!? (my guess is no, but almost entirely due to priors), learning new math (fulfilling a decades-old goal, I’m finally working my way through Paul Cohen’s celebrated proof of the independence of the Continuum Hypothesis—more about that later!).

Of course, when you feel crushed by the weight of the world’s horribleness, it improves your mood to be able even just to prick the horribleness with a pin. So I was gratified that, in response to a previous post, Shtetl-Optimized readers contributed at least $3,000, the first $2,000 of which I matched, mostly to the Biden-Harris campaign but a little to the Lincoln Project.

Alas, a commenter was unhappy with the latter:

Lincoln Project? Really? … Pushing the Overton window rightward during a worldwide fascist dawn isn’t good. I have trouble understanding why even extremely smart people have trouble with this sort of thing.

Since this is actually important, I’d like to spend the rest of this post responding to it.

For me it’s simple.

What’s the goal right now? To defeat Trump. In the US right now, that’s the prerequisite to every other sane political goal.

What will it take to achieve that goal? Turnout, energizing the base, defending the election process … but also, if possible, persuading a sliver of Trump supporters in swing states to switch sides, or at least vote third party or abstain.

Who is actually effective at that goal? Well, no one knows for sure. But while I thought the Biden campaign had some semi-decent ads, the Lincoln Project’s best stuff seems better to me, just savagely good.

Why are they effective? The answer seems obvious: for the same reason why a jilted ex is a more dangerous foe than a stranger. If anyone understood how to deprogram a Republican from the Trump cult, who would it be: Alexandria Ocasio-Cortez, or a fellow Republican who successfully broke from the cult?

Do I agree with the Lincoln Republicans about most of the “normal” issues that Americans once argued about? Not at all. Do I hold them, in part, morally responsible for creating the preconditions to the current nightmare? Certainly.

And should any of that cause me to boycott them? Not in my moral universe. If Churchill and FDR could team up with Stalin, then surely we in the Resistance can temporarily ally ourselves with the rare Republicans who chose their stated principles over power when tested—their very rarity attesting to the nontriviality of their choice.

To my mind, turning one’s back on would-be allies, in a conflict whose stakes obviously overshadow what’s bad about those allies, is simultaneously one of the dumbest and the ugliest things that human beings can do. It abandons reason for moral purity and ends up achieving neither.

September 17, 2020

n-Category Café Making Life Hard For First-Order Logic

I mentioned in a previous post Sundholm’s rendition in dependent type theory of the Donkey sentence:

Every farmer who owns a donkey beats it.

For those who find this unnatural, I offered

Anyone who owns a gun should register it.

The idea then is that sentences of the form

Every $A$ that $R$s a $B$, $S$s it,

are rendered in type theory as

$$\prod_{z: \sum_{x: A} \sum_{y: B} R(x, y)} S(p(z), p(q(z))),$$ where $p$ and $q$ are the projections to first and second components. We see ‘it’ corresponds to $p(q(z))$.

This got me wondering if we could make life even harder for the advocate of first-order logic – let’s give them the typed version to be generous – by constructing a natural language sentence which it would be even more awkward to formalise.

One thing to note is that the type above has been formed from a dependency structure of the kind:

$$x: A, y: B, w: R(x, y) \vdash s: S(x, y).$$

Now I’ve been writing up on the nLab some of what Mike told us about 3-theories back here. Note in particular his comments on the limited dependencies which may be expressed in the ‘first-order 3-theory’:

one layer of dependency: there is one layer of types which cannot depend on anything, and then there is a second layer of types (“propositions” in the logical reading) which can depend on types in the first layer, but not on each other.

So to stretch this 3-theory we should look for a maximally dependent structure. Take something of the form

$$x: A, y: B(x), z: C(x, y) \vdash d: D(x, y, z).$$

Ideally the types will all be sets.

By the time quantification has been applied, the left hand side swept up into an iterated dependent sum and a dependent product formed, the proposition should be bristling with awkward anaphoric pronouns.

The best I could come up with was:

Whenever someone’s child speaks politely to them, they should praise them for it.

A gendered version with, say, men and daughters would help with the they/them clashes.

Whenever a man’s daughter speaks politely to him, he should praise her for it.
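Following the pattern above, and using some hypothetical type names of my own ($\mathrm{Man}$, $\mathrm{Daughter}(x)$, and so on), that sentence would come out as something like

$$\prod_{w: \sum_{x: \mathrm{Man}} \sum_{y: \mathrm{Daughter}(x)} \mathrm{SpeaksPolitelyTo}(y, x)} \mathrm{ShouldPraiseFor}(p(w), p(q(w)), q(q(w))),$$

with ‘he’ rendered as $p(w)$, ‘her’ as $p(q(w))$, and ‘it’ as $q(q(w))$.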

Any better candidates?

September 16, 2020

Terence TaoCourse announcement: Math 246A, complex analysis

Starting on Oct 2, I will be teaching Math 246A, the first course in the three-quarter graduate complex analysis sequence at the math department here at UCLA.  This first course covers much of the same ground as an honours undergraduate complex analysis course, in particular focusing on the basic properties of holomorphic functions such as the Cauchy and residue theorems, the classification of singularities, and the maximum principle, but there will be more of an emphasis on rigour, generalisation and abstraction, and connections with other parts of mathematics.  The main text I will be using for this course is Stein-Shakarchi (with Ahlfors as a secondary text), but I will also be using the blog lecture notes I wrote the last time I taught this course in 2016. At this time I do not expect to significantly deviate from my past lecture notes, though I do not know at present how different the pace will be this quarter when the course is taught remotely. As with my 247B course last spring, the lectures will be open to the public, though other coursework components will be restricted to enrolled students.

September 15, 2020

Noncommutative GeometryFAREWELL TO A GENIUS

We all learned with immense sorrow that Vaughan Jones died on Sunday September 6th. I met him in the late seventies when he was officially a student of André Haefliger but contacted me as a thesis advisor, which I became at a non-official level. I had done in my work on factors the classification of periodic automorphisms of the hyperfinite factor and Vaughan Jones undertook the task of

Scott AaronsonMy Utility+ podcast with Matthew Putman

Another Update (Sep. 15): Sorry for the long delay; new post coming soon! To tide you over—or just to distract you from the darkness figuratively and literally engulfing our civilization—here’s a Fortune article about today’s announcement by IBM of its plans for the next few years in superconducting quantum computing, with some remarks from yours truly.

Another Update (Sep. 8): A reader wrote to let me know about a fundraiser for Denys Smirnov, a 2015 IMO gold medalist from Ukraine who needs an expensive bone marrow transplant to survive Hodgkin’s lymphoma. I just donated and I hope you’ll consider it too!

Update (Sep. 5): Here’s another quantum computing podcast I did, “Dunc Tank” with Duncan Gammie. Enjoy!



Thanks so much to Shtetl-Optimized readers, so far we’ve raised $1,371 for the Biden-Harris campaign and $225 for the Lincoln Project, which I intend to match for $3,192 total. If you’d like to donate by tonight (Thursday night), there’s still $404 to go!

Meanwhile, a mere three days after declaring my “new motto,” I’ve come up with a new new motto for this blog, hopefully a more cheerful one:

When civilization seems on the brink of collapse, sometimes there’s nothing left to talk about but maximal separations between randomized and quantum query complexity.

On that note, please enjoy my new one-hour podcast on Spotify (if that link doesn’t work, try this one) with Matthew Putman of Utility+. Alas, my umming and ahhing were more frequent than I now aim for, but that’s partly compensated for by Matthew’s excellent decision to speed up the audio. This was an unusually wide-ranging interview, covering everything from SlateStarCodex to quantum gravity to interdisciplinary conferences to the challenges of teaching quantum computing to 7-year-olds. I hope you like it!

September 14, 2020

Tommaso DorigoWhat It Means To Be Anti-Science

"Anti-scientific thinking" is a bad disease of our time, and one which may affect a wide range of human beings, from illiterate fanatics such as anti-vaxxers and religious fundamentalists on one side, to highly-educated and brilliant individuals on the other side. It is a sad realization to see how diversified and strong has become this general attitude of denying the usefulness of scientific progress and research, especially in a world where science is behind every good thing you use in your daily life, from the internet to your cell-phone, or from anti-cavity toothpaste to hadron therapy against tumours.


September 13, 2020

Jordan EllenbergPandemic blog 34: teaching on the screen

A small proportion of UW-Madison courses were being given in person (until last week, that is), but not mine. I’m teaching two graduate courses, introduction to algebra (which I’ve taught several times before) and introduction to algebraic number theory, which I’ve taught before but not for quite a few years. And I’m teaching them sitting in my chair at home. So I thought I’d write down a bit about what that’s like, since depending on who you ask, we’ll never do it again (in which case it’s good to record the memory) or this is the way we’ll all teach in the future (in which case it’s good to record my first impression).

First of all, it’s tiring. Just as tiring as teaching in the classroom, even though I don’t have to leave my chair. This surprised me! But, introspecting, I think I actually draw energy from the state of being in a room with people, talking at the board, walking around, interacting. I usually leave class feeling less tired than when I walked in.

On the screen, no. I teach lectures at 10 and 11 and at noon when both are done I’m wiped out.

My rig, settled on after other setups kept glitching out: Notability open on iPad, where I write notes as if on a blackboard with the Apple Pencil; the iPad connected by physical cable to the laptop, screensharing to a window on the laptop; and that window shared in Microsoft Teams with the class while the laptop camera and mic capture my face and voice.

What I have not done:

  • Gotten a pro-quality microphone
  • Set up a curated “lecture space” from which to broadcast
  • Recorded lecture videos in advance so I can use the lecture hour for discussion
  • Used breakout rooms in Teams to let the students discuss among themselves

All of these seem like good ideas.

So far (but I am still in the part of both courses where the material isn’t too hard) the students and I seem to find this… OK. My handwriting is somewhat worse on the tablet than it is on the blackboard and it’s not great on the blackboard. The only student who has told me they prefer online is one who reports being too shy to speak in class, sometimes too shy even to attend, and who feels more able to participate by typing in the chat window with the camera turned off. That makes sense!

I have it easy — these courses have only thirty students each, so the logistical work of handling student questions, homework, etc. isn’t overwhelming. Teaching big undergraduate courses presents its own problems. What happens with calculus quizzes? In the spring it was reported that cheating was universal (there are lots of websites that will compute integrals for you in another window!) So we now have a system called Honorlock which inhabits the student’s browser, watches IP traffic for visits to cheating sites, and commandeers the student’s webcam (!) to check whether their eye motions indicate cheating (!!) This sounds awful and frankly kind of creepy and not worth it. And the students, unsurprisingly, hate it. But then how does assessment work? The obvious answer is to give exams which are open book and which measure something more contentful about the material than can be tested by a usual quiz. I can think of two problems:

  • Fluency with the basic manipulations (of both algebra and calculus) is actually one of the skills the class is meant to impart: yes, there are things a computer can do that it’s good to be able to do mentally. (I don’t think I place a complicated trig substitution in this category, but knowing that the integral of x^n is on the order of x^{n+1}, yes.)
  • Tests that measured understanding would be different from and a lot harder than what students are used to! And this is a crappy time to be an undergraduate. I don’t think it’s a great idea for their calculus course to become, without warning, much more difficult than the one they signed up for.

Jordan EllenbergPandemic blog 33: Smart Restart

I thought it was gonna work.

Really! I thought we could sort-of-open college again and not cause a big outbreak. Most of our students live here year-round. By all accounts, there have been fraternity parties all summer. We had a spike of cases in the campus area when bars opened back up at the end of June, which subsided when the county put back those restrictions (though never back down to the levels we’d seen in March, April, and May.) At the end of July I wrote “statewide, cases are growing and growing, and the situation is much worse in the South. I would fight back if you said this was a predictable consequence; nothing about this disease is predictable with any confidence. It could have worked.”

And maybe it could have; but it didn’t. As soon as school started last Wednesday, the percentage of student tests coming back positive started growing, about 20% higher every day. On Saturday, nine Greek houses were quarantined. A week into school, with about 8% of tests positive, the University halted in-person classes and completely quarantined two first-year dormitories with two hours’ notice. Food is being brought in three times a day. Hope you like your roommate.

A lot of people, unlike me, saw this coming.

Maybe we can beat this back. Who knows? We did in July. But this outbreak is bigger.

Public schools in Madison are fully online right now. With a summer to prepare it’s working better than it did last spring. But it’s not great, and I would guess that for poor kids it’s a lot worse than “not great.” Private schools are allowed to be open in grades K-2, and a court decision that came down today has, at least for now, allowed them to open to all grades. More outbreaks? To be a broken record, who knows? The argument for opening K-2 sounds pretty good to me; while it’s not definite, most people seem to think younger children are less likely to spread and contract the disease, and that age range is where having kids at home limits parents most. Schools in Georgia have been open, and there have been lots of school outbreaks, and those schools get closed for a while and then reopen, but it doesn’t seem to have created a big wave of cases statewide.

This article is good. Beating COVID isn’t all-or-nothing, but people seem to see it that way. If the bar’s open, that means it’s safe, and you can drink with whoever you want, as close as you want. No! Nothing is safe, if you mean safe safe. But also nothing is a guarantee of disaster. If everybody would do 50% of what they felt like doing, we could beat it. Or maybe 75%, who knows. But it feels like if we don’t insist on 0%, people will understand us to mean that 100% is OK. I don’t have any good ideas about how to fix this.

September 11, 2020

Matt von HippelTo Elliptics and Beyond!

I’ve been busy running a conference this week, Elliptics and Beyond.

After Amplitudes was held online this year, a few of us at the Niels Bohr Institute were inspired. We thought this would be the perfect time to hold a small online conference, focused on the Calabi-Yaus that have been popping up lately in Feynman diagrams. Then we heard from the organizers of Elliptics 2020. They had been planning to hold a conference in Mainz about elliptic integrals in Feynman diagrams, but had to postpone it due to the pandemic. We decided to team up and hold a joint conference on both topics: the elliptic integrals that are just starting to be understood, and the mysterious integrals that lie beyond. Hence, Elliptics and Beyond.

I almost suggested Buzz Lightyear for the logo but I chickened out

The conference has been fun thus far. There’s been a mix of review material bringing people up to speed on elliptic integrals and exciting new developments. Some are taking methods that have been successful in other areas and generalizing them to elliptic integrals, others have been honing techniques for elliptics to make them “production-ready”. A few are looking ahead even further, to higher-genus amplitudes in string theory and Calabi-Yaus in Feynman diagrams.

We organized the conference along similar lines to Zoomplitudes, but with a few experiments of our own. Like Zoomplitudes, we made a Slack space for the conference, so people could chat physics outside the talks. Ours was less active, though. I suspect that kind of space needs a critical mass of people, and with a smaller conference we may just not have gotten there. Having fewer people did allow us a more relaxed schedule, which in turn meant we could mostly keep things on-time. We had discussion sessions in the morning (European time), with talks in the afternoon, so almost everyone could make the talks at least. We also had a “conference dinner”, which went much better than I would have expected. We put people randomly into Zoom Breakout Rooms of five or six, to emulate the tables of an in-person conference, and folks chatted while eating their (self-brought of course) dinner. People seemed to really enjoy the chance to just chat casually with the other folks at the conference. If you’re organizing an online conference soon, I’d recommend trying it!

Holding a conference online means that a lot of people can attend who otherwise couldn’t. We had over a hundred people register, and while not all of them showed up there were typically fifty or sixty people on the Zoom session. Some of these were specialists in elliptics or Calabi-Yaus who wouldn’t ordinarily make it to a conference like this. Others were people from the rest of the amplitudes field who joined for parts of the conference that caught their eye. But surprisingly many weren’t even amplitudeologists, but students and young researchers in a variety of topics from all over the world. Some seemed curious and eager to learn, others I suspect just needed to say they had been to a conference. Both are responding to a situation where suddenly conference after conference is available online, free to join. It will be interesting to see if, and how, the world adapts.

Doug NatelsonThe power of a timely collaboration

Sometimes it takes a while to answer a scientific question, and sometimes that answer ends up being a bit unexpected.  Three years ago, I wrote about a paper from our group, where we had found, much to our surprise, that the thermoelectric response of polycrystalline gold wires varied a lot as a function of position within the wire, even though the metal was, by every reasonable definition, a good, electrically homogeneous material.  (We were able to observe this by using a focused laser as a scannable heat source, and measuring the open-circuit photovoltage of the device as a function of the laser position.)  At the time, I wrote "Annealing the wires does change the voltage pattern as well as smoothing it out.  This is a pretty good indicator that the grain boundaries really are important here."

What would be the best way to test the idea that somehow the grain boundaries within the wire were responsible for this effect?  Well, the natural thought experiment would be to do the same measurement in a single crystal gold wire, and then ideally do a measurement in a wire with, say, a single grain boundary in a known location.  

Fig. 4 from this paper
Shortly thereafter, I had the good fortune to be talking with Prof. Jonathan Fan at Stanford.  His group had, in fact, come up with a clever way to create single-crystal gold wires, as shown at right.  Basically they create a wire via lithography, encapsulate it in silicon oxide so that the wire is sitting in its own personal crucible, and then melt/recrystallize the wire.  Moreover, they could build upon that technique as in this paper, and create bicrystals with a single grain boundary.  Focused ion beam could then be used to trim these to the desired width (though in principle that can disturb the surface).

We embarked on a rewarding collaboration that turned out to be a long, complicated path of measuring many many device structures of various shapes, sizes, and dimensions.  My student Charlotte Evans, measuring the photothermoelectric (PTE) response of these, worked closely with members of Prof. Fan's group - Rui Yang grew and prepared devices, and Lucia Gan did many hours of back-scatter electron diffraction measurements and analysis, for comparison with the photovoltage maps.  My student Mahdiyeh Abbasi learned the intricacies of finite element modeling to see what kind of spatial variation of Seebeck coefficient \(S\) would be needed to reproduce the photovoltage maps.  

From Fig. 1 of our new paper. Panel g upper shows the local crystal misorientation as found from electron back-scatter diffraction, while panel g lower shows a spatial map of the PTE response. The two patterns definitely resemble each other (panel h), and this is seen consistently across many devices.

A big result of this was published this week in PNAS.  The surprising result:  Individual high-angle grain boundaries produce a PTE signal so small as to be unresolvable in our measurement system.  In contrast, though, the PTE measurement could readily detect tiny changes in Seebeck response that correlate with small local misorientations of the local single crystal structure.  The wire is still a single crystal, but it contains dislocations and disclinations and stacking faults and good old-fashioned strain due to interactions with the surroundings when it crystallized.  Some of these seem to produce detectable changes in thermoelectric response.  When annealed, the PTE features smooth out and reduce in magnitude, as some (but not all) of the structural defects and strain can anneal away.  

So, it turns out it's likely not the grain boundaries that cause Seebeck variations in these nanostructures - instead it's likely residual strain and structural defects from the thin film deposition process, something to watch out for in general for devices made by lithography and thin film processing.  Also, opto-electronic measurements of thermoelectric response are sensitive enough to detect very subtle structural inhomogeneities, an effect that can in principle be leveraged for things like defect detection in manufactured structures.  It took a while to unravel, but it is satisfying to get answers and see the power of the measurement technique.

Peter Rohde Rohde vs Zuckerberg: Part 7.6a

Hmm... that’s funny. I just tried and I’m pretty sure I can.
I mean, in all fairness, it is pretty offensive. Just look closely at the guy above the letter ‘u’ in the text at the bottom of the infographic. He’s stepping with a very long step, and is probably in a rush. He’s also leaning forwards, congruent with his haste, the reason being that the WC is offscreen left, which he is utterly determined to locate in time, lest his right hand – visibly attempting to hold in what needs to be held in, his raised right elbow being the dead visual giveaway – buckle and facilitate a deadly blast. This symptomatically overlaps with just being an old man with a bad back – a view substantiated by his hat – however it’s not, and you shouldn’t fall for it, because despite this amateur act of deception, the far greater issue is that he’s walking right onto the railway tracks (probably drunk), which is not only a health and safety issue, but an extremely bad example to set for our impressionable youth. This guy needs to grow up and get a job.
I’m glad this has been clarified, and I accept that with these highly detailed guidelines, I’m now in a position to ensure that the same mistake is never repeated again.
FYI, the choice is binary: you either click a button to disagree, or you don’t. There is nothing more than that, even if you press the button (I presume everyone presses the button).

September 09, 2020

Terence TaoVaughan Jones

Vaughan Jones, who made fundamental contributions in operator algebras and knot theory (in particular developing a surprising connection between the two), died this week, aged 67.

Vaughan and I grew up in extremely culturally similar countries, worked in adjacent areas of mathematics, shared (as of this week) a coauthor in Dima Shlyakhtenko, started out our career with the same postdoc position (as UCLA Hedrick Assistant Professors, sixteen years apart) and even ended up in sister campuses of the University of California, but surprisingly we only interacted occasionally, via chance meetings at conferences or emails on some committee business. I found him extremely easy to get along with when we did meet, though, perhaps because of our similar cultural upbringing.

I have not had much occasion to directly use much of Vaughan’s mathematical contributions, but I did very much enjoy reading his influential 1999 preprint on planar algebras (which, for some odd reason has never been formally published). Traditional algebra notation is one-dimensional in nature, with algebraic expressions being described by strings of mathematical symbols; a linear operator T, for instance, might appear in the middle of such a string, taking in an input x on the right and returning an output Tx on its left that might then be fed into some other operation. There are a few mathematical notations which are two-dimensional, such as the commutative diagrams in homological algebra, the tree expansions of solutions to nonlinear PDE (particularly stochastic nonlinear PDE), or the Feynman diagrams and Penrose graphical notations from physics, but these are the exception rather than the rule, and the notation is often still concentrated on a one-dimensional complex of vertices and edges (or arrows) in the plane. Planar algebras, by contrast, fully exploit the topological nature of the plane; a planar “operator” (or “operad”) inhabits some punctured region of the plane, such as an annulus, with “inputs” entering from the inner boundaries of the region and “outputs” emerging from the outer boundary. These algebras arose for Vaughan in both operator theory and knot theory, and have since been used in some other areas of mathematics such as representation theory and homology. I myself have not found a direct use for this type of algebra in my own work, but nevertheless I found the mere possibility of higher dimensional notation being the natural choice for a given mathematical problem to be conceptually liberating.

Peter Rohde The Peacock

I’m pleased to announce my new comic series, “The Peacock”.

Terence TaoZarankiewicz’s problem for semilinear hypergraphs

Abdul Basit, Artem Chernikov, Sergei Starchenko, Chiu-Minh Tran and I have uploaded to the arXiv our paper Zarankiewicz’s problem for semilinear hypergraphs. This paper is in the spirit of a number of results in extremal graph theory in which the bounds for various graph-theoretic problems or results can be greatly improved if one makes some additional hypotheses regarding the structure of the graph, for instance by requiring that the graph be “definable” with respect to some theory with good model-theoretic properties.

A basic motivating example is the question of counting the number of incidences between points and lines (or between points and other geometric objects). Suppose one has {n} points and {n} lines in a space. How many incidences can there be between these points and lines? The utterly trivial bound is {n^2}, but by using the basic fact that two points determine a line (or two lines intersect in at most one point), a simple application of Cauchy-Schwarz improves this bound to {n^{3/2}}. In graph theoretic terms, the point is that the bipartite incidence graph between points and lines does not contain a copy of {K_{2,2}} (there does not exist two points and two lines that are all incident to each other). Without any other further hypotheses, this bound is basically sharp: consider for instance the collection of {p^2} points and {p^2+p} lines in a finite plane {{\bf F}_p^2}, that has {p^3+p^2} incidences (one can make the situation more symmetric by working with a projective plane rather than an affine plane). If however one considers lines in the real plane {{\bf R}^2}, the famous Szemerédi-Trotter theorem improves the incidence bound further from {n^{3/2}} to {O(n^{4/3})}. Thus the incidence graph between real points and lines contains more structure than merely the absence of {K_{2,2}}.
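For readers who want to see where the {n^{3/2}} comes from, here is the standard double-counting argument (a routine computation spelled out by me, not quoted from the paper). Write $I$ for the number of incidences and $\deg(p)$ for the number of the $n$ lines through a point $p$. Since two lines meet in at most one point,

$$\sum_{p} \binom{\deg(p)}{2} \leq \binom{n}{2},$$

while Cauchy–Schwarz over the $n$ points gives $I^2 = \big(\sum_p \deg(p)\big)^2 \leq n \sum_p \deg(p)^2$. Combining,

$$\frac{I^2}{n} - I \leq \sum_p \deg(p)^2 - \sum_p \deg(p) = 2 \sum_p \binom{\deg(p)}{2} \leq n(n-1),$$

so $I \leq n^{3/2} + n$.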

More generally, bounding on the size of bipartite graphs (or multipartite hypergraphs) not containing a copy of some complete bipartite subgraph {K_{k,k}} (or {K_{k,\dots,k}} in the hypergraph case) is known as Zarankiewicz’s problem. We have results for all {k} and all orders of hypergraph, but for sake of this post I will focus on the bipartite {k=2} case.

In our paper we improve the {n^{3/2}} bound to a near-linear bound in the case that the incidence graph is “semilinear”. A model case occurs when one considers incidences between points and axis-parallel rectangles in the plane. Now the {K_{2,2}} condition is not automatic (it is of course possible for two distinct points to both lie in two distinct rectangles), so we impose this condition by fiat:

Theorem 1 Suppose one has {n} points and {n} axis-parallel rectangles in the plane, whose incidence graph contains no {K_{2,2}}‘s, for some large {n}.
  • (i) The total number of incidences is {O(n \log^4 n)}.
  • (ii) If all the rectangles are dyadic, the bound can be improved to {O( n \frac{\log n}{\log\log n} )}.
  • (iii) The bound in (ii) is best possible (up to the choice of implied constant).

We don’t know whether the bound in (i) is similarly tight for non-dyadic boxes; the usual tricks for reducing the non-dyadic case to the dyadic case strangely fail to apply here. One can generalise to higher dimensions, replacing rectangles by polytopes with faces in some fixed finite set of orientations, at the cost of adding several more logarithmic factors; also, one can replace the reals by other ordered division rings, and replace polytopes by other sets of bounded “semilinear descriptive complexity”, e.g., unions of boundedly many polytopes, or which are cut out by boundedly many functions that enjoy coordinatewise monotonicity properties. For certain specific graphs we can remove the logarithmic factors entirely. We refer to the preprint for precise details.

The proof techniques are combinatorial. The proof of (i) relies primarily on the order structure of {{\bf R}} to implement a “divide and conquer” strategy in which one can efficiently control incidences between {n} points and rectangles by incidences between approximately {n/2} points and boxes. For (ii) there is additional order-theoretic structure one can work with: first there is an easy pruning device to reduce to the case when no rectangle is completely contained inside another, and then one can impose the “tile partial order” in which one dyadic rectangle {I \times J} is less than another {I' \times J'} if {I \subset I'} and {J' \subset J}. The point is that this order is “locally linear” in the sense that for any two dyadic rectangles {R_-, R_+}, the set {[R_-,R_+] := \{ R: R_- \leq R \leq R_+\}} is linearly ordered, and this can be exploited by elementary double counting arguments to obtain a bound which eventually becomes {O( n \frac{\log n}{\log\log n})} after optimising certain parameters in the argument. The proof also suggests how to construct the counterexample in (iii), which is achieved by an elementary iterative construction.
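To spell out the ‘locally linear’ claim in the dyadic case (my own gloss of the argument sketched above, relying on the pruning step that removes rectangles contained in other rectangles): suppose $R = I \times J$ and $R' = I' \times J'$ both lie in $[R_-, R_+]$, where $R_- = I_- \times J_-$ and $R_+ = I_+ \times J_+$. Then $I$ and $I'$ both contain $I_-$, and $J$ and $J'$ both contain $J_+$; since two dyadic intervals are either nested or disjoint, $I, I'$ are nested and $J, J'$ are nested. If, say, $I \subseteq I'$, then we cannot also have $J \subsetneq J'$ (that would give $R \subsetneq R'$, which the pruning forbids), so $J' \subseteq J$ and hence $R \leq R'$; the case $I' \subseteq I$ is symmetric. Thus any two elements of $[R_-, R_+]$ are comparable.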

September 08, 2020

Doug NatelsonMaterials and popular material

This past week was a great one for my institution, as the Robert A. Welch Foundation and Rice University announced the creation of the Welch Institute for Advanced Materials.  Exactly how this is going to take shape and grow is still in the works, but the stated goals of materials-by-design and making Rice and Houston a global destination for advanced materials research are very exciting.  

Long-time readers of this blog know my view that the amazing physics of materials is routinely overlooked in part because materials are ubiquitous - for example, the fact that the Pauli principle in some real sense is what is keeping you from falling through the floor right now.  I'm working on refining a few key concepts/topics that I think are translatable to the general reading public.  Emergence, symmetry, phases of matter, the most important physical law most people have never heard about (the Pauli principle), quasiparticles, the quantum world (going full circle from the apparent onset of the classical to using collective systems to return to quantum degrees of freedom in qubits).   Any big topics I'm leaving out?

September 07, 2020

Terence TaoFractional free convolution powers

Dimitri Shlyakhtenko and I have uploaded to the arXiv our paper Fractional free convolution powers. For me, this project (which we started during the 2018 IPAM program on quantitative linear algebra) was motivated by a desire to understand the behavior of the minor process applied to a large random Hermitian {N \times N} matrix {A_N}, in which one takes the successive upper left {n \times n} minors {A_n} of {A_N} and computes their eigenvalues {\lambda_1(A_n) \leq \dots \leq \lambda_n(A_n)} in non-decreasing order. These eigenvalues are related to each other by the Cauchy interlacing inequalities

\displaystyle  \lambda_i(A_{n+1}) \leq \lambda_i(A_n) \leq \lambda_{i+1}(A_{n+1})

for {1 \leq i \leq n < N}, and are often arranged in a triangular array known as a Gelfand-Tsetlin pattern, as discussed in these previous blog posts.

When {N} is large and the matrix {A_N} is a random matrix with empirical spectral distribution converging to some compactly supported probability measure {\mu} on the real line, then under suitable hypotheses (e.g., unitary conjugation invariance of the random matrix ensemble {A_N}), a “concentration of measure” effect occurs, with the spectral distribution of the minors {A_n} for {n = \lfloor N/k\rfloor} for any fixed {k \geq 1} converging to a specific measure {k^{-1}_* \mu^{\boxplus k}} that depends only on {\mu} and {k}. The reason for this notation is that there is a surprising description of this measure {k^{-1}_* \mu^{\boxplus k}} when {k} is a natural number, namely it is the free convolution {\mu^{\boxplus k}} of {k} copies of {\mu}, pushed forward by the dilation map {x \mapsto k^{-1} x}. For instance, if {\mu} is the Wigner semicircular measure {d\mu_{sc} = \frac{1}{2\pi} (4-x^2)^{1/2}_+\ dx}, then {k^{-1}_* \mu_{sc}^{\boxplus k} = k^{-1/2}_* \mu_{sc}}. At the random matrix level, this reflects the fact that the minor of a GUE matrix is again a GUE matrix (up to a renormalizing constant).

As first observed by Bercovici and Voiculescu and developed further by Nica and Speicher, among other authors, the notion of a free convolution power {\mu^{\boxplus k}} of {\mu} can be extended to non-integer {k \geq 1}, thus giving the notion of a “fractional free convolution power”. This notion can be defined in several different ways. One of them proceeds via the Cauchy transform

\displaystyle  G_\mu(z) := \int_{\bf R} \frac{d\mu(x)}{z-x}

of the measure {\mu}, and {\mu^{\boxplus k}} can be defined by solving the Burgers-type equation

\displaystyle  (k \partial_k + z \partial_z) G_{\mu^{\boxplus k}}(z) = \frac{\partial_z G_{\mu^{\boxplus k}}(z)}{G_{\mu^{\boxplus k}}(z)} \ \ \ \ \ (1)

with initial condition {G_{\mu^{\boxplus 1}} = G_\mu} (see this previous blog post for a derivation). This equation can be solved explicitly using the {R}-transform {R_\mu} of {\mu}, defined by solving the equation

\displaystyle  \frac{1}{G_\mu(z)} + R_\mu(G_\mu(z)) = z

for sufficiently large {z}, in which case one can show that

\displaystyle  R_{\mu^{\boxplus k}}(z) = k R_\mu(z).

(In the case of the semicircular measure {\mu_{sc}}, the {R}-transform is simply the identity: {R_{\mu_{sc}}(z)=z}.)
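As a quick sanity check of this last fact (a standard computation written out by me, not taken from the paper): the Cauchy transform of the semicircular measure is $G_{\mu_{sc}}(z) = \frac{z - \sqrt{z^2-4}}{2}$ (with the branch chosen so that $G_{\mu_{sc}}(z) \sim 1/z$ at infinity), and it satisfies $G_{\mu_{sc}}(z) + 1/G_{\mu_{sc}}(z) = z$. Plugging this into the defining equation for the $R$-transform gives $R_{\mu_{sc}}(w) = w$. Combining this with the dilation rule $R_{c_* \mu}(z) = c\, R_\mu(c z)$ and the scaling $R_{\mu^{\boxplus k}} = k R_\mu$ recovers the fact quoted earlier that $k^{-1}_* \mu_{sc}^{\boxplus k} = k^{-1/2}_* \mu_{sc}$: both sides have $R$-transform $z \mapsto k^{-1} z$.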

Nica and Speicher also gave a free probability interpretation of the fractional free convolution power: if {A} is a noncommutative random variable in a noncommutative probability space {({\mathcal A},\tau)} with distribution {\mu}, and {p} is a real projection operator free of {A} with trace {1/k}, then the “minor” {[pAp]} of {A} (viewed as an element of a new noncommutative probability space {({\mathcal A}_p, \tau_p)} whose elements are minors {[pXp]}, {X \in {\mathcal A}} with trace {\tau_p([pXp]) := k \tau(pXp)}) has the law of {k^{-1}_* \mu^{\boxplus k}} (we give a self-contained proof of this in an appendix to our paper). This suggests that the minor process (or fractional free convolution) can be studied within the framework of free probability theory.

One of the known facts about integer free convolution powers {\mu^{\boxplus k}} is monotonicity of the free entropy

\displaystyle  \chi(\mu) = \int_{\bf R} \int_{\bf R} \log|s-t|\ d\mu(s) d\mu(t) + \frac{3}{4} + \frac{1}{2} \log 2\pi

and free Fisher information

\displaystyle  \Phi(\mu) = \frac{2\pi^2}{3} \int_{\bf R} \left(\frac{d\mu}{dx}\right)^3\ dx

which were introduced by Voiculescu as free probability analogues of the classical probability concepts of differential entropy and classical Fisher information. (Here we correct a small typo in the normalization constant of Fisher entropy as presented in Voiculescu’s paper.) Namely, it was shown by Shlyakhtenko that the quantity {\chi(k^{-1/2}_* \mu^{\boxplus k})} is monotone non-decreasing for integer {k}, and the Fisher information {\Phi(k^{-1/2}_* \mu^{\boxplus k})} is monotone non-increasing for integer {k}. This is the free probability analogue of the corresponding monotonicities for differential entropy and classical Fisher information that were established by Artstein, Ball, Barthe, and Naor, answering a question of Shannon.

Our first main result is to extend the monotonicity results of Shlyakhtenko to fractional {k \geq 1}. We give two proofs of this fact, one using free probability machinery, and a more self-contained (but less motivated) proof using integration by parts and contour integration. The free probability proof relies on the concept of the free score {J(X)} of a noncommutative random variable, which is the analogue of the classical score. The free score, also introduced by Voiculescu, can be defined by duality as measuring the perturbation with respect to semicircular noise, or more precisely

\displaystyle  \frac{d}{d\varepsilon} \tau( Z P( X + \varepsilon Z) )|_{\varepsilon=0} = \tau( J(X) P(X) )

whenever {P} is a polynomial and {Z} is a semicircular element free of {X}. If {X} has an absolutely continuous law {\mu = f\ dx} for a sufficiently regular {f}, one can calculate {J(X)} explicitly as {J(X) = 2\pi Hf(X)}, where {Hf} is the Hilbert transform of {f}, and the Fisher information is given by the formula

\displaystyle  \Phi(X) = \tau( J(X)^2 ).

One can also define a notion of relative free score {J(X:B)} relative to some subalgebra {B} of noncommutative random variables.

The free score interacts very well with the free minor process {X \mapsto [pXp]}, in particular by standard calculations one can establish the identity

\displaystyle  J( [pXp] : [pBp] ) = k {\bf E}( [p J(X:B) p] | [pXp], [pBp] )

whenever {X} is a noncommutative random variable, {B} is an algebra of noncommutative random variables, and {p} is a real projection of trace {1/k} that is free of both {X} and {B}. The monotonicity of free Fisher information then follows from an application of Pythagoras’s theorem (which implies in particular that conditional expectation operators are contractions on {L^2}). The monotonicity of free entropy then follows from an integral representation of free entropy as an integral of free Fisher information along the free Ornstein-Uhlenbeck process (or equivalently, free Fisher information is essentially the rate of change of free entropy with respect to perturbation by semicircular noise). The argument also shows when equality holds in the monotonicity inequalities; this occurs precisely when {\mu} is a semicircular measure up to affine rescaling.

After an extensive amount of calculation of all the quantities that were implicit in the above free probability argument (in particular computing the various terms involved in the application of Pythagoras’ theorem), we were able to extract a self-contained proof of monotonicity that relied on differentiating the quantities in {k} and using the differential equation (1). It turns out that if {d\mu = f\ dx} for sufficiently regular {f}, then there is an identity

\displaystyle  \partial_k \Phi( k^{-1/2}_* \mu^{\boxplus k} ) = -\frac{1}{2\pi^2} \lim_{\varepsilon \rightarrow 0} \sum_{\alpha,\beta = \pm} \int_{\bf R} \int_{\bf R} f(x) f(y) K(x+i\alpha \varepsilon, y+i\beta \varepsilon)\ dx dy \ \ \ \ \ (2)

where {K} is the kernel

\displaystyle  K(z,w) := \frac{1}{G(z) G(w)} (\frac{G(z)-G(w)}{z-w} + G(z) G(w))^2

and {G(z) := G_\mu(z)}. It is not difficult to show that {K(z,\overline{w})} is a positive semi-definite kernel, which gives the required monotonicity. It would be interesting to obtain some more insightful interpretation of the kernel {K} and the identity (2).

These monotonicity properties hint at the minor process {A \mapsto [pAp]} being associated to some sort of “gradient flow” in the {k} parameter. We were not able to formalize this intuition; indeed, it is not clear what a gradient flow on a varying noncommutative probability space {({\mathcal A}_p, \tau_p)} even means. However, after substantial further calculation we were able to formally describe the minor process as the Euler-Lagrange equation for an intriguing Lagrangian functional that we conjecture to have a random matrix interpretation. We first work in “Lagrangian coordinates”, defining the quantity {\lambda(s,y)} on the “Gelfand-Tsetlin pyramid”

\displaystyle  \Delta = \{ (s,y): 0 < s < 1; 0 < y < s \}

by the formula

\displaystyle  \mu^{\boxplus 1/s}((-\infty,\lambda(s,y)/s])=y/s,

which is well defined if the density of {\mu} is sufficiently well behaved. The random matrix interpretation of {\lambda(s,y)} is that it is the asymptotic location of the {\lfloor yN\rfloor^{th}} eigenvalue of the {\lfloor sN \rfloor \times \lfloor sN \rfloor} upper left minor of a random {N \times N} matrix {A_N} with asymptotic empirical spectral distribution {\mu} and with unitarily invariant distribution; thus {\lambda} is in some sense a continuum limit of Gelfand-Tsetlin patterns. For instance, the Cauchy interlacing laws in this asymptotic limit regime become

\displaystyle  0 \leq \partial_s \lambda \leq \partial_y \lambda.
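
To make the random matrix picture concrete, here is a small simulation (an illustration only, not from the paper), using the GUE as a convenient unitarily invariant ensemble whose empirical spectral distribution tends to the standard semicircle law; in that case the defining formula above specializes to {\lambda(s,y) = \sqrt{s}\, Q(y/s)}, where {Q} is the quantile function of the semicircle law.

import numpy as np

# Compare the floor(yN)-th smallest eigenvalue of the floor(sN) x floor(sN)
# top-left minor of an N x N GUE matrix with lambda(s,y) = sqrt(s) * Q(y/s).
rng = np.random.default_rng(1)
N = 1000
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H = (A + A.conj().T) / np.sqrt(4 * N)   # normalized so the spectrum fills [-2, 2]

# Quantile function of the standard semicircle law, via its CDF on a grid.
t = np.linspace(-2, 2, 20001)
cdf = 0.5 + t * np.sqrt(4 - t**2) / (4 * np.pi) + np.arcsin(t / 2) / np.pi
def Q(p):
    return np.interp(p, cdf, t)

s, y = 0.5, 0.3                          # a point of the pyramid, 0 < y < s < 1
minor_eigs = np.linalg.eigvalsh(H[:int(s * N), :int(s * N)])
print("eigenvalue number", int(y * N), "of the minor:", minor_eigs[int(y * N) - 1])
print("predicted lambda(s, y):", np.sqrt(s) * Q(y / s))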

After a lengthy calculation (involving extensive use of the chain rule and product rule), the equation (1) is equivalent to the Euler-Lagrange equation

\displaystyle  \partial_s L_{\lambda_s}(\partial_s \lambda, \partial_y \lambda) + \partial_y L_{\lambda_y}(\partial_s \lambda, \partial_y \lambda) = 0

where {L} is the Lagrangian density

\displaystyle  L(\lambda_s, \lambda_y) := \log \lambda_y + \log \sin( \pi \frac{\lambda_s}{\lambda_y} ).
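
For the reader's convenience, the partial derivatives of this density that enter the Euler-Lagrange equation are (a routine differentiation, spelled out here)

\displaystyle  L_{\lambda_s}(\lambda_s, \lambda_y) = \frac{\pi}{\lambda_y} \cot\left( \pi \frac{\lambda_s}{\lambda_y} \right), \qquad L_{\lambda_y}(\lambda_s, \lambda_y) = \frac{1}{\lambda_y} - \frac{\pi \lambda_s}{\lambda_y^2} \cot\left( \pi \frac{\lambda_s}{\lambda_y} \right).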

Thus the minor process is formally a critical point of the integral {\int_\Delta L(\partial_s \lambda, \partial_y \lambda)\ ds dy}. The quantity {\partial_y \lambda} measures the mean eigenvalue spacing at some location of the Gelfand-Tsetlin pyramid, and the ratio {\frac{\partial_s \lambda}{\partial_y \lambda}} measures mean eigenvalue drift in the minor process. This suggests that this Lagrangian density is some sort of measure of entropy of the asymptotic microscale point process emerging from the minor process at this spacing and drift. There is work of Metcalfe demonstrating that this point process is given by the Boutillier bead model, so we conjecture that this Lagrangian density {L} somehow measures the entropy density of this process.

September 05, 2020

Tommaso DorigoA Science Communication Proposal For Pandemic Times

Like every other aspect of human life, science communication has suffered a significant setback due to the ongoing Covid-19 pandemic. While regular meetings of scientific teams can be held effectively online, through Zoom or Skype, it is the big conferences that are suffering the biggest blow. And this is not good, for several reasons.

read more

September 04, 2020

Matt von HippelZero-Point Energy, Zero-Point Diagrams

Listen to a certain flavor of crackpot, or a certain kind of science fiction, and you’ll hear about zero-point energy. Limitless free energy drawn from quantum space-time itself, zero-point energy probably sounds like bullshit. Often it is. But lurking behind the pseudoscience and the fiction is a real physics concept, albeit one that doesn’t really work like those people imagine.

In quantum mechanics, the zero-point energy is the lowest energy a particular system can have. That number doesn’t actually have to be zero, even for empty space. People sometimes describe this in terms of so-called virtual particles, popping up from nothing in particle-antiparticle pairs only to annihilate each other again, contributing energy in the absence of any “real particles”. There’s a real force, the Casimir effect, that gets attributed to this, a force that pulls two metal plates together even with no charge or extra electromagnetic field. The same bubbling of pairs of virtual particles also gets used to explain the Hawking radiation of black holes.

I’d like to try explaining all of these things in a different way, one that might clear up some common misconceptions. To start, let’s talk about, not zero-point energy, but zero-point diagrams.

Feynman diagrams are a tool we use to study particle physics. We start with a question: if some specific particles come together and interact, what’s the chance that some (perhaps different) particles emerge? We start by drawing lines representing the particles going in and out, then connect them in every way allowed by our theory. Finally we translate the diagrams to numbers, to get an estimate for the probability. In particle physics slang, the number of “points” is the total number of particles: particles in, plus particles out. For example, let’s say we want to know the chance that two electrons go in and two electrons come out. That gives us a “four-point” diagram: two in, plus two out. A zero-point diagram, then, means zero particles in, zero particles out.

A four-point diagram and a zero-point diagram

(Note that this isn’t why zero-point energy is called zero-point energy, as far as I can tell. Zero-point energy is an older term from before Feynman diagrams.)

Remember, each Feynman diagram answers a specific question, about the chance of particles behaving in a certain way. You might wonder, what question does a zero-point diagram answer? The chance that nothing goes to nothing? Why would you want to know that?

To answer, I’d like to bring up some friends of mine, who do something that might sound equally strange: they calculate one-point diagrams, one particle goes to none. This isn’t strange for them because they study theories with defects.

For some reason, they didn’t like my suggestion to use this stamp on their papers

Normally in particle physics, we think about our particles in an empty, featureless space. We don’t have to, though. One thing we can do is introduce features in this space, like walls and mirrors, and try to see what effect they have. We call these features “defects”.

If there’s a defect like that, then it makes sense to calculate a one-point diagram, because your one particle can interact with something that’s not a particle: it can interact with the defect.

A one-point diagram with a wall, or “defect”

You might see where this is going: let’s say you think there’s a force between two walls, that comes from quantum mechanics, and you want to calculate it. You could imagine it involves a diagram like this:

A “zero-point diagram” between two walls

Roughly speaking, this is the kind of thing you could use to calculate the Casimir effect, that mysterious quantum force between metal plates. And indeed, it involves a zero-point diagram.

Here’s the thing, though: metal plates aren’t just “defects”. They’re real physical objects, made of real physical particles. So while you can think of the Casimir effect with a “zero-point diagram” like that, you can also think of it with a normal diagram, more like the four-point diagram I showed you earlier: one that computes, not a force between defects, but a force between the actual electrons and protons that make up the two plates.

A lot of the time when physicists talk about pairs of virtual particles popping up out of the vacuum, they have in mind a picture like this. And often, you can do the same trick, and think about it instead as interactions between physical particles. There’s a story of roughly this kind for Hawking radiation: you can think of a black hole event horizon as “cutting in half” a zero-point diagram, and see pairs of particles going out from the black hole…but you can also do a calculation that looks more like particles interacting with a gravitational field.

This also might help you understand why, contra the crackpots and science fiction writers, zero-point energy isn’t a source of unlimited free energy. Yes, a force like the Casimir effect comes “from the vacuum” in some sense. But really, it’s a force between two particles. And just like the gravitational force between two particles, this doesn’t give you unlimited free power. You have to do the work to move the particles back over and over again, using the same amount of power you gained from the force to begin with. And unlike the forces you’re used to, these are typically very small effects, as usual for something that depends on quantum mechanics. So it’s even less useful than more everyday forces for this.

Why do so many crackpots and authors expect zero-point energy to be a massive source of power? In part, this is due to mistakes physicists made early on.

Sometimes, when calculating a zero-point diagram (or any other diagram), we don’t get a sensible number. Instead, we get infinity. Physicists used to be baffled by this. Later, they understood the situation a bit better, and realized that those infinities were probably just due to our ignorance. We don’t know the ultimate high-energy theory, so it’s possible something happens at high energies to cancel those pesky infinities. Without knowing exactly what happened, physicists would estimate by using a “cutoff” energy where they expected things to change.

That kind of calculation led to an estimate you might have heard of, that the zero-point energy inside a single light bulb could boil all the world’s oceans. That estimate gives a pretty impressive mental image…but it’s also wrong.

This kind of estimate led to “the worst theoretical prediction in the history of physics”, that the cosmological constant, the force that speeds up the expansion of the universe, is 120 orders of magnitude higher than its actual value (if it isn’t just zero). If there really were energy enough inside each light bulb to boil the world’s oceans, the expansion of the universe would be quite different than what we observe.
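
If you’re curious where that number comes from, here’s the rough arithmetic (order-of-magnitude only, and assuming the cutoff sits at the Planck scale): the cutoff estimate gives a vacuum energy density of about {M_{\rm Planck}^4}, while the observed dark energy density corresponds to an energy scale of roughly {10^{-3}} electron-volts, so

\displaystyle  \frac{\rho_{\rm cutoff}}{\rho_{\rm observed}} \sim \left( \frac{M_{\rm Planck}}{10^{-3}\ {\rm eV}} \right)^4 \sim \left( \frac{10^{28}\ {\rm eV}}{10^{-3}\ {\rm eV}} \right)^4 \sim 10^{124},

which is where the “120 orders of magnitude” comes from.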

At this point, it’s pretty clear there is something wrong with these kinds of “cutoff” estimates. The only unclear part is whether that’s due to something subtle or something obvious. But either way, this particular estimate is just wrong, and you shouldn’t take it seriously. Zero-point energy exists, but it isn’t the magical untapped free energy you hear about in stories. It’s tiny quantum corrections to the forces between particles.

Noncommutative GeometryThe Noncommutative Geometry Seminar

The Noncommutative Geometry Seminar. This is a virtual seminar on topics in noncommutative geometry, which is open to anyone anywhere interested in noncommutative geometry. Starting September 22 we meet on Tuesdays at 15:00-16:00 (CET) via Zoom (link will be distributed via mail). Organizers: Giovanni Landi, Ryszard Nest, Walter van Suijlekom, Hang Wang. More information on the schedule, and also how to

September 01, 2020

Sean CarrollThe Biggest Ideas in the Universe | 24. Science

For the triumphant final video in the Biggest Ideas series, we look at a big idea indeed: Science. What is science, and why is it so great? And I also take the opportunity to dip a toe into the current state of fundamental physics — are predictions that unobservable universes exist really science? What if we never discover another particle? Is it worth building giant expensive experiments? Tune in to find out.

YouTube Video

Thanks to everyone who has watched along the way. It’s been quite a ride.

Scott AaronsonMy new motto

Update (Sep 1): Thanks for the comments, everyone! As you can see, I further revised this blog’s header based on the feedback and on further reflection.

The Right could only kill me and everyone I know.
The Left is scarier; it could convince me that it was my fault!

(In case you missed it on the blog’s revised header, right below “Quantum computers aren’t just nondeterministic Turing machines” and “Hold the November US election by mail.” I added an exclamation point at the end to suggest a slightly comic delivery.)

Update: A friend expressed concern that, because my new motto appears to “blame both sides,” it might generate confusion about my sympathies or what I want to happen in November. So to eliminate all ambiguity: I hereby announce that I will match all reader donations made in the next 72 hours to either the Biden-Harris campaign or the Lincoln Project, up to a limit of $2,000. Honor system; just tell me in the comments what you donated.

August 31, 2020

Sean CarrollThe Biggest Ideas in the Universe | 23. Criticality and Complexity

Spherical cows are important because they let us abstract away all the complications of the real world and think about underlying principles. But what about when the complications are the point? Then we enter the realm of complex systems — which, interestingly, has its own spherical cows. One such is the idea of a “critical” system, balanced at a point where there is interesting dynamics at all scales. We know a lot about such systems, without approaching anything like a complete understanding just yet.

YouTube Video

And here is the associated Q&A video:

YouTube Video

John PreskillIf the (quantum-metrology) key fits…

My maternal grandfather gave me an antique key when I was in middle school. I loved the workmanship: The handle consisted of intertwined loops. I loved the key’s gold color and how the key weighed on my palm. Even more, I loved the thought that the key opened something. I accompanied my mother to antique shops, where I tried unlocking chests, boxes, and drawers.


My grandfather’s antique key

I found myself holding another such key, metaphorically, during the autumn of 2018. MIT’s string theorists had requested a seminar, so I presented about quasiprobabilities. Quasiprobabilities represent quantum states similarly to how probabilities represent a swarm of classical particles. Consider the steam rising from asphalt on a summer day. Calculating every steam particle’s position and momentum would require too much computation for you or me to perform. But we can predict the probability that, if we measure every particle’s position and momentum, we’ll obtain such-and-such outcomes. Probabilities are real numbers between zero and one. Quasiprobabilities can assume negative and nonreal values. We call these values “nonclassical,” because they’re verboten to the probabilities that describe classical systems, such as steam. I’d defined a quasiprobability, with collaborators, to describe quantum chaos. 


David Arvidsson-Shukur was sitting in the audience. David is a postdoctoral fellow at the University of Cambridge and a visiting scholar in the other Cambridge (at MIT). He has a Swedish-and-southern-English accent that I’ve heard only once before and, I learned over the next two years, an academic intensity matched by his kindliness.1 Also, David has a name even longer than mine: David Roland Miran Arvidsson-Shukur. We didn’t know then, but we were destined to journey together, as postdoctoral knights-errant, on a quest for quantum truth.

David studies the foundations of quantum theory: What distinguishes quantum theory from classical? David suspected that a variation on my quasiprobability could unlock a problem in metrology, the study of measurements.


Suppose that you’ve built a quantum computer. It consists of gates—uses of, e.g., magnets or lasers to implement logical operations. A classical gate implements operations such as “add 11.” A quantum gate can implement an operation that involves some number \theta more general than 11. You can try to build your gate correctly, but it might effect the wrong \theta value. You need to measure \theta.

How? You prepare some quantum state | \psi \rangle and operate on it with the gate. \theta imprints itself on the state, which becomes | \psi (\theta) \rangle. Measure some observable \hat{O}. You repeat this protocol in each of many trials. The measurement yields different outcomes in different trials, according to quantum theory. The average amount of information that you learn about \theta per trial is called the Fisher information.
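
Here is a toy version of that first protocol in code (an illustration only, simpler than the setting of the paper discussed below): the gate is a rotation of a single qubit about the z axis by \theta, the input is | \psi \rangle = | + \rangle, and \hat{O} = \sigma_x; the classical Fisher information of the outcome statistics is estimated with a finite-difference derivative of the outcome probabilities.

import numpy as np

# Toy example: Fisher information about a rotation angle theta carried by
# sigma_x measurements on a single qubit prepared in |+>.
# Outcome probabilities: p(+1) = cos^2(theta/2), p(-1) = sin^2(theta/2).
# Classical Fisher information: F(theta) = sum_o (dp_o/dtheta)^2 / p_o.
def probs(theta):
    return np.array([np.cos(theta / 2) ** 2, np.sin(theta / 2) ** 2])

def fisher(theta, eps=1e-6):
    dp = (probs(theta + eps) - probs(theta - eps)) / (2 * eps)   # finite difference
    return np.sum(dp ** 2 / probs(theta))

for theta in [0.3, 0.7, 1.2]:
    print(f"theta = {theta}: Fisher information = {fisher(theta):.4f}")   # equals 1 here

For this particular choice, the information per trial happens to be independent of \theta.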


Let’s change this protocol. After operating with the gate, measure another observable, \hat{F}, and postselect: If the \hat{F} measurement yields a desirable outcome f, measure \hat{O}. If the \hat{F}-measurement doesn’t yield the desirable outcome, abort the trial, and begin again. If you choose \hat{F} and f adroitly, you’ll measure \hat{O} only when the trial will provide oodles of information about \theta. You’ll save yourself many \hat{O} measurements that would have benefited you little.2


Why does postselection help us? We could understand easily if the system were classical: The postselection would effectively improve the input state. To illustrate, let’s suppose that (i) a magnetic field implemented the gate and (ii) the input were metal or rubber. The magnetic field wouldn’t affect the rubber; measuring \hat{O} in rubber trials would provide no information about the field. So you could spare yourself \hat{O} measurements by postselecting on the system’s consisting of metal.

Magnet

Postselection on a quantum system can defy this explanation. Consider optimizing your input state | \psi \rangle, beginning each trial with the quantum equivalent of metal. Postselection could still increase the average amount of information provided about \theta per trial. Postselection can enhance quantum metrology even when postselection can’t enhance the classical analogue.

David suspected that he could prove this result, using, as a mathematical tool, the quasiprobability that collaborators and I had defined. We fulfilled his prediction, with Hugo Lepage, Aleks Lasek, Seth Lloyd, and Crispin Barnes. Nature Communications published our paper last month. The work bridges the foundations of quantum theory with quantum metrology and quantum information theory—and, through that quasiprobability, string theory. David’s and my quantum quest continues, so keep an eye out for more theory from us, as well as a photonic experiment based on our first paper.


I still have my grandfather’s antique key. I never found a drawer, chest, or box that it opened. But I don’t mind. I have other mysteries to help unlock.

 

1The morning after my wedding this June, my husband and I found a bouquet ordered by David on our doorstep.

2Postselection has a catch: The \hat{F} measurement has a tiny probability of yielding the desirable outcome. But, sometimes, measuring \hat{O} costs more than preparing | \psi \rangle, performing the gate, and postselecting. For example, suppose that the system is a photon. A photodetector will measure \hat{O}. Some photodetectors have a dead time: After firing, they take a while to reset, to be able to fire again. The dead time can outweigh the cost of the beginning of the experiment.

August 28, 2020

Matt von HippelGrants at the Other End

I’m a baby academic. Two years ago I got my first real grant, a Marie Curie Individual Fellowship from the European Union. Applying for it was a complicated process, full of Word templates and mismatched expectations. Two years later the grant is over, and I get another new experience: grant reporting.

Writing a report after a grant is sort of like applying for a grant. Instead of summarizing and justifying what you intend to do, you summarize and justify what you actually did. There are also Word templates. Grant reports are probably easier than grant applications: you don’t have to “hook” your audience or show off. But they are harder in one aspect: they highlight the different ways different fields handle uncertainty.

If you do experiments, having a clear plan makes sense. You buy special equipment and hire postdocs and even technicians to do specific jobs. Your experiments may or may not find what you hope for, but at least you can try to do them on schedule, and describe the setbacks when you can’t.

As a theorist, you’re more nimble. Your equipment is computers; your postdocs have their own research. Overall, it’s easy to pick up new projects as new ideas come in. As a result, your plans change more. New papers might inspire you to try new things. They might also discourage you, if you learn the idea you had won’t actually work. The field can move fast, and you want to keep up with it.

Writing my first grant report will be interesting. I’ll need to thread the gap between expectations and reality, to look back on my progress and talk about why. And of course, I have to do it in Microsoft Word.

August 26, 2020

Tommaso DorigoA New Method For Muon Energy Measurement In A Granular Calorimeter

Muons are very special particles. They are charged particles that obey the same physical laws and interaction phenomenology of electrons, but their 207 times heavier mass (105 MeV, versus the half MeV of electrons) makes them behave in an entirely different fashion.

For one thing, muons are not stable. As they weigh more than electrons, they may transform the excess weight into energy, undergoing a disintegration (muon decay) which produces an electron and two neutrinos. And since everything that is not prohibited is compulsory in the subnuclear world, this process happens with a mean lifetime of about 2.2 microseconds.

read more

August 23, 2020

Sean CarrollThe Biggest Ideas in the Universe | 22. Cosmology

Surely one of the biggest ideas in the universe has to be the universe itself, no? Or, as I claim, the very fact that the universe is comprehensible — as an abstract philosophical point, but also as the empirical observation that the universe we see is a pretty simple place, at least on the largest scales. We focus here mostly on the thermal history — how the constituents of the universe evolve as space expands and the temperature goes down.

YouTube Video

And here is the associated Q&A video:

YouTube Video

August 21, 2020

Sean CarrollHow to Make Educational Videos with a Tablet

We’re well into the Biggest Ideas in the Universe series, and some people have been asking how I make the actual videos. I explained the basic process in the Q&A video for Force, Energy, and Action – embedded below – but it turns out that not everyone watches every single video from start to finish (weird, I know), and besides the details have changed a little bit. And for some reason a lot of people want to do pedagogy via computer these days.

YouTube Video

Brief video tutorial starts at 44:45, but note that details have changed since this was made.

So let me explain my process here, starting with the extremely short version. You need:

  • A computer
  • A video camera (webcam okay)
  • A microphone (computer mic okay)
  • A tablet with writing instrument (e.g. iPad Pro and Apple Pencil)
  • Writing app on the tablet (e.g. Notability)
  • Screen capturing/video editing software on the computer (e.g. Screenflow)
  • Whatever wires and dongles are required to hook all that stuff together.

Hmm, looking over that list it doesn’t seem as simple as I thought. And this is the quick-and-easy version! But you can adapt the level of commitment to your own needs.

The most important step here is to capture your writing, in real time, on the video. (You obviously don’t have to include an image of yourself at all, but it makes things a bit more human, and besides who can possibly talk without making gestures, right?) So you need some kind of tablet to write on. I like the iPad Pro quite a bit, but note that not all iPad models are compatible with a Pencil (or other stylus). And writing with your fingers just doesn’t cut it here.

You also need an app that does that. I am quite fond of both Notability and Notes Plus. (I’m sure that non-iOS ecosystems have their own apps, but there’s no sense in which I’m familiar with the overall landscape; I can only tell you about what I use.) These two apps are pretty similar, with small differences at the edges. When I’m taking notes or marking up PDFs, I’m actually more likely to use Notes Plus, as its cutting/pasting is a bit simpler. And that’s what I used for the very early Biggest Ideas videos. But I got numerous requests to write on a dark background rather than a light one, which is completely reasonable. Notability has that feature and as far as I know Notes Plus does not. And it’s certainly more than good enough for the job.

Then you need to capture your writing, and your voice, and optionally yourself, onto video and edit it together. (Again, no guarantees that my methods are simplest or best, only that they are mine.) Happily there are programs that do everything you want at once: they will capture video from a camera, separately capture audio input, and also separately capture part or all of your computer screen, and/or directly from an external device. Then they will let you edit it all together how you like. Pretty sweet, to be honest.

I started out using Camtasia, which worked pretty well overall. But not perfectly, as I eventually discovered. It wasn’t completely free of crashes, which can be pretty devastating when you’re 45 minutes into an hour-long video. And capture from the iPad was pretty clunky; I had to show the iPad screen on my laptop screen, then capture that region into Camtasia. (The app is smart enough to capture either the whole screen, or any region on it.) By the way, did you know you can show your iPhone/iPad screen on your computer, at least with a Mac? Just plug the device into the computer, open up QuickTime, click “new movie recording,” and ask it to display from the mobile device. Convenient for other purposes.

But happily on Screenflow, which I’ve subsequently switched to, that workaround isn’t necessary; it will capture directly from your tablet (as long as it’s connected to your computer). And in my (very limited) experience it seems a bit more robust and user-friendly.

Okay, so you fire up your computer, open Screenflow, plug in your tablet, point your webcam at yourself, and you’re ready to go. Screenflow will give you a window in which you can make sure it’s recording all the separate things you need (tablet screen, your video, your audio). Hit “Record,” and do your thing. When you’re done, hit “Stop recording.”

What you now have is a Screenflow document that has different tracks corresponding to everything you’ve just recorded. I’m not going to do a full tutorial about editing things together — there’s a big internet out there, full of useful advice. But I will note that you will have to do some editing; it’s not completely effortless. Fortunately it is pretty intuitive once you get the hang of the basic commands. Here is what your editing window in Screenflow will look like.

Main panel at the top left, and all of your tracks at the bottom — in this case (top to bottom) camera video, audio, iPad capture, and static background image. The panel on the right toggles between various purposes; in this case it’s showing all the different files that go into making those tracks. (The video is chopped up into multiple files for reasons having to do with my video camera.) Note that I use a green screen, and one of the nice things about Screenflow is that it will render the green transparent for you with a click of a button. (Camtasia does too, but I’ve found that it doesn’t do as well.)

Editing features are quite good. You can move and split tracks, resize windows, crop windows, add text, set the overall dimensions, etc. One weird thing is that some of the editing features require that you hit Control or Shift or whatever, and when exactly you’re supposed to do this is not always obvious. But it’s all explained online somewhere.

So that’s the basic setup, or at least enough that you can figure things out from there. You also have to upload to YouTube or to your class website or whatever you so choose, but that’s up to you.

Okay now onto some optional details, depending on how much you want to dive into this.

First, webcams are not the best quality, especially the ones built-in to your laptop. I thought about using my iPhone as a camera — the lenses etc. on recent ones are quite good — but surprisingly the technology for doing this is either hard to find or nonexistent. (Of course you can make videos using your phone, but using your phone as a camera to make and edit videos elsewhere seems to be much harder, at least for me.) You can upgrade to an external webcam; Logitech has some good models. But after some experimenting I found it was better just to get a real video camera. Canon has some decent options, but if you already have a camera lying around it should be fine; we’re not trying to be Stanley Kubrick here. (If you’re getting the impression that all this costs money … yeah. Sorry.)

If you go that route, you have to somehow get the video from the camera to your computer. You can get a gizmo like the Cam Link that will pipe directly from a video camera to your computer, so that basically you’re using the camera as a web cam. I tried and found that it was … pretty bad? Really hurt the video quality, though it’s completely possible I just wasn’t setting things up properly. So instead I just record within the camera to an SD card, then transfer to the computer after the fact. For that you’ll need an SD to USB adapter, or maybe you can find a camera that can do it over wifi (mine doesn’t, sigh). It’s a straightforward drag-and-drop to get the video into Screenflow, but my camera chops them up into 20-minute segments. That’s fine, Screenflow sews them together seamlessly.

You might also think about getting a camera that can be controlled wirelessly, either via dedicated remote or by your phone, so that you don’t have to stand up and walk over to it every time you want to start and stop recording. (Your video will look slightly better if you place the camera away from you and zoom in a bit, rather than placing it nearby.) Sadly this is something I also neglected to do.

If you get a camera, it will record sound as well as video, but chances are that sound won’t be all that great (unless maybe you use a wireless lavalier mic? Haven’t tried that myself). Also your laptop mic isn’t very good, trust me. I have an ongoing podcast, so I am already over-equipped on that score. But if you’re relatively serious about audio quality, it would be worth investing in something like a Blue Yeti.

If you want to hear the difference between good and not-so-good microphones, listen to the Entropy video, then the associated Q&A. In the latter, by mistake I forgot to turn on the real mic, and had to use another audio track. (To be honest I forget whether it was from the video camera or my laptop.) I did my best to process the latter to make it sound reasonable, but the difference is obvious.

Of course if you do separately record video and audio, you’ll have to sync them together. Screenflow makes this pretty easy. When you import your video file, it will come with attached audio, but there’s an option to — wait for it — “detach audio.” You can then sync your other audio track (the track will display a waveform indicating volume, and just slide until they match up), and delete the original.

Finally, there’s making yourself look pretty. There is absolutely nothing wrong with just showing whatever office/home background you shoot in front of — people get it. But you can try to be a bit creative with a green screen, and it works much better than the glitchy Zoom backgrounds etc.

Bad news is, you’ll have to actually purchase a green screen, as well as something to hold it up. It’s a pretty basic thing, a piece of cloth or plastic. And, like it or not, if you go this route you’re also going to have to get lights to point at the green screen. If it’s not brightly lit, it’s much harder to remove it in the editor. The good news is, once you do all that, removing the green is a snap in Screenflow (which is much better at this than Camtasia, I found).

You’ll also want to light yourself, with at least one dedicated light. (Pros will insist on at least three — fill, key, and backlight — but we all have our limits.) Maybe this is not so important, but if you want a demonstration, my fondness for goofing up has once again provided for you — on the Entanglement Q&A video, I forgot to turn on the light. Difference in quality is there for you to judge.

My home office looks like this now. At least for the moment.

Oh right one final thing. If you’re making hour-long (or so) videos, the file sizes get quite big. The Screenflow project for one of my videos will be between 20 and 30 GB, and I export to an mp4 that is another 5 GB or so. It adds up if you make a lot of videos! So you might think about investing in an external hard drive. The other options are to save on a growing collection of SD cards, or just delete files once you’ve uploaded to YouTube or wherever. Neither of which was very palatable for me.

You can see my improvement at all these aspects over the series of videos. I upgraded my video camera, switched from light background to dark background on the writing screen, traded in Camtasia for Screenflow, and got better at lighting. I also moved the image of me from the left-hand side to the right-hand side of the screen, which I understand makes the captions easier to read.

I’ve had a lot of fun and learned a lot. And probably put more work into setting things up than most people will want to. But what’s most important is content! If you have something to say, it’s not that hard to share it.

August 18, 2020

Tommaso DorigoAssholery In Academia

Have you ever behaved like an a**hole? Or did you ever have the impulse to do so? Did you ever use your position, your status, your authority to please yourself by crushing some ego? Please answer this in good faith to yourself - nobody is looking over your shoulder. Take a breath. I know, it's hard to admit it. But we all have.

It is, after all, part of human nature. Humans are ready to make huge sacrifices to acquire a status or a position from which they can harass other human beings. Perhaps we have the unspoken urge to take revenge for the times when we were at the receiving end of such harassment. Or maybe we just tasted the sweet sensation of using one's power against somebody who can't fight back.

read more

August 16, 2020

Sean CarrollThe Biggest Ideas in the Universe | 21. Emergence

Little things can come together to make big things. And those big things can often be successfully described by an approximate theory that can be qualitatively different from the theory of the little things. We say that a macroscopic approximate theory has “emerged” from the microscopic one. But the concept of emergence is a bit more general than that, covering any case where some behavior of one theory is captured by another one even in the absence of complete information. An important and subtle example is (of course) how the classical world emerges from the quantum one.

YouTube Video

And here is the Q&A video. Sorry, I hadn’t realized that comments were showing up here on the blog! I have a crack team rushing to get that fixed.

YouTube Video

In the video I refer to a bunch of research papers, here they are:

Mad-Dog Everettianism: https://arxiv.org/abs/1801.08132

Locality from the Spectrum: https://arxiv.org/abs/1702.06142

Finite-Dimensional Hilbert Space: https://arxiv.org/abs/1704.00066

The Einstein Equation of State: https://arxiv.org/abs/gr-qc/9504004

Space from Hilbert Space: https://arxiv.org/abs/1606.08444

Einstein’s Equation from Entanglement: https://arxiv.org/abs/1712.02803

Quantum Mereology: https://arxiv.org/abs/2005.12938

August 13, 2020

Scott AaronsonThe Busy Beaver Frontier

Update (July 27): I now have a substantially revised and expanded version (now revised and expanded even a second time), which incorporates (among other things) the extensive feedback that I got from this blog post. There are new philosophical remarks, some lovely new open problems, and an even-faster-growing (!) integer sequence. Check it out!

Another Update (August 13): Nick Drozd now has a really nice blog post about his investigations of my Beeping Busy Beaver (BBB) function.


A life that was all covid, cancellations, and Trump, all desperate rearguard defense of the beleaguered ideals of the Enlightenment, would hardly be worth living. So it was an exquisite delight, these past two weeks, to forget current events and write an 18-page survey article about the Busy Beaver function: the staggeringly quickly-growing function that probably encodes a huge portion of all interesting mathematical truth in its first hundred values, if only we could know those values or exploit them if we did.

Without further ado, here’s the title, abstract, and link:

The Busy Beaver Frontier
by Scott Aaronson

The Busy Beaver function, with its incomprehensibly rapid growth, has captivated generations of computer scientists, mathematicians, and hobbyists. In this survey, I offer a personal view of the BB function 58 years after its introduction, emphasizing lesser-known insights, recent progress, and especially favorite open problems. Examples of such problems include: when does the BB function first exceed the Ackermann function? Is the value of BB(20) independent of set theory? Can we prove that BB(n+1) > 2^{BB(n)} for large enough n? Given BB(n), how many advice bits are needed to compute BB(n+1)? Do all Busy Beavers halt on all inputs, not just the 0 input? Is it decidable whether BB(n) is even or odd?

The article is slated to appear soon in SIGACT News. I’m grateful to Bill Gasarch for suggesting it—even with everything else going on, this was a commission I felt I couldn’t turn down!

Besides Bill, I’m grateful to the various Busy Beaver experts who answered my inquiries, to Marijn Heule and Andy Drucker for suggesting some of the open problems, to Marijn for creating a figure, and to Lily, my 7-year-old daughter, for raising the question about the first value of n at which the Busy Beaver function exceeds the Ackermann function. (Yes, Lily’s covid homeschooling has included multiple lessons on very large positive integers.)

There are still a few days until I have to deliver the final version. So if you spot anything wrong or in need of improvement, don’t hesitate to leave a comment or send an email. Thanks in advance!

Of course Busy Beaver has been an obsession that I’ve returned to many times in my life: for example, in that Who Can Name the Bigger Number? essay that I wrote way back when I was 18, in Quantum Computing Since Democritus, in my public lecture at Festivaletteratura, and in my 2016 paper with Adam Yedidia that showed that the values of all Busy Beaver numbers beyond the 7910th are independent of the axioms of set theory (Stefan O’Rear has since shown that independence starts at the 748th value or sooner). This survey, however, represents the first time I’ve tried to take stock of BusyBeaverology as a research topic—collecting in one place all the lesser-known theorems and empirical observations and open problems that I found the most striking, in the hope of inspiring not just contemplation or wonderment but actual progress.

Within the last few months, the world of deep mathematics that you can actually explain to a child lost two of its greatest giants: John Conway (who died of covid, and who I eulogized here) and Ron Graham. One thing I found poignant, and that I didn’t know before I started writing, is that Conway and Graham both play significant roles in the story of the Busy Beaver function. Conway, because most of the best known candidates for Busy Beaver Turing machines turn out, when you analyze them, to be testing variants of the notorious Collatz Conjecture—and Conway is the one who proved, in 1972, that the set of “Collatz-like questions” is Turing-undecidable. And Graham because of Graham’s number from Ramsey theory—a candidate for the biggest number that’s ever played a role in mathematical research—and because of the discovery, four years ago, that the 18th Busy Beaver number exceeds Graham’s number.

(“Just how big is Graham’s number? So big that the 17th Busy Beaver number is not yet known to exceed it!”)

Anyway, I tried to make the survey pretty accessible, while still providing enough technical content to sink one’s two overgrown front teeth into (don’t worry, there are no such puns in the piece itself). I hope you like reading it at least 1/BB(10) as much as I liked writing it.

Update (July 24): Longtime commenter Joshua Zelinsky gently reminded me that one of the main questions discussed in the survey—namely, whether we can prove BB(n+1) > 2^{BB(n)} for all large enough n—was first brought to my attention by him, Joshua, in a 2013 Ask-Me-Anything session on this blog! I apologize to Joshua for the major oversight, which has now been corrected. On the positive side, we just got a powerful demonstration both of the intellectual benefits of blogging, and of the benefits of sharing paper drafts on one’s blog before sending them to the editor!

August 11, 2020

Clifford JohnsonOnline Teaching Methods

Sharing my live virtual chalkboard while online teaching using Zoom (the cable for the iPad is for power only).

It is an interesting time for all of us right now, whatever our walk of life. For those of us who make our living by standing up in front of people and talking and/or leading discussion (as is the case for teachers, lecturers, and professors of various sorts), there has been a lot of rapid learning of new techniques and workflows as we scramble to keep doing that while also not gathering in groups in classrooms and seminar rooms. I started thinking about this last week (the week of 2nd March), prompted by colleagues in the physics department here at USC, and then tested it out last Friday (6th) live with students from my general relativity class (22 students). But they were in the room so that we could iron out any issues, and get a feel for what worked best. Since then, I gave an online research seminar to the combined Harvard/MIT/USC theoretical physics groups on Wednesday (cancelling my original trip to fly to the East Coast to give it in person), and that worked pretty well.

But the big test was this morning. Giving a two-hour lecture to my General Relativity class where we were really not all in the same room, but scattered over the campus and city (and maybe beyond), while being able to maintain a live play-by-play working environment on the board, as opposed to just showing slides. Showing slides (by doing screen-sharing) is great, but for the kind of physics techniques I’m teaching, you need to be able to show how to calculate, and bring the material to life - the old “chalk and talk” that people in other fields tend to frown upon, but which is so essential to learning how to actually *think* and navigate the language of physics, which is in large part the diagrams and equations. This is the big challenge lots of people are worried about with regards to going online - how do I do that? (Besides, making a full set of slides for every single lecture you might want to do for the next month or more seems to me like a mammoth task - I’d not want to do that.)

So I’ve arrived at a system that works for me, and I thought I’d share it with those of you who might not yet have found your own solution. Many of the things I will say may well be specific to me and my institution (USC) at some level of detail, but aspects of it will generalize to other situations. Adapt as applies to you.

Do share the link to this page with others if you wish to - I may well update it from time to time with more information.

Here goes:
[...] Click to continue reading this post

The post Online Teaching Methods appeared first on Asymptotia.

August 04, 2020

Mark Chu-CarrollThe Futility of Pi Denialism

To me, the strangest crackpots I’ve encountered through this blog are the π denialists.

When people have trouble with Cantor and differently sized infinities, I get it. It defies our intuitions. It doesn’t seem to make sense.

When you look at Gödel incompleteness theorem – it’s really hard to wrap your head around. It doesn’t seem to make sense. I get it.

When you talk about things like indescribable numbers, it’s crazy. How could it possibly be true? I get it.

But π?

It’s a pretty simple number: the ratio of a circle’s circumference to its diameter. There’s nothing all that difficult about what it means. And there are so many different ways of calculating it! We can use the nature of a circle, and derive series that compute it. We can write simple programs, do tactile demos, measure actual physical phenomena. And yet, there are people who fervently believe that it’s all a sham: that the value of π isn’t what we say it is. It’s exactly 4. Or it’s exactly 22/7. Or it’s exactly \frac{4}{\phi^{\frac{1}{2}}}. Or it’s not a number, it’s an acceleration.

It’s amazing. I constantly get mail – mostly from fans of either Jain (the author of the φ-based π mentioned above), or from followers of Miles Mathis (he of “π isn’t a ratio, it’s an acceleration” fame), insisting that I’m part of the great mathematical conspiracy to deny the true factual value of π.

And yet… It’s so simple to demonstrate how wrong that is.

My favorite version is a simple program.

Here’s the idea, followed by the code.

  • Take the unit square – the region of the graph from (0, 0) to (1, 1), and inside of it, an arc of the circle of radius 1 around (0,0).
  • Pick a random point, (x, y), anywhere inside of that square.
  • If the squared distance from the origin (x^2 + y^2) is less than one, then the point is inside the circle. If it isn’t, then it’s outside of the circle.
  • The probability, p, of any given random point being inside that circle is equal to the ratio of the area of the circle to the area of the square. The area of that quarter-circle region is \pi*1^2/4, and the area of the square is 1^2. So the probability is (1/4)\pi/1, or \pi/4.
  • So take a ton of random points, and count how many are inside the circle.
  • The ratio of points inside the circle to total random points is \pi/4. The more random points you do this with, the closer you get to π.

We can turn that into a simple Python program:

from random import random

def computePi(points):
    # Sample `points` random points in the unit square and count how many
    # fall inside the quarter circle of radius 1 centered at the origin.
    inside = 0
    for i in range(points):
        x = random()
        y = random()
        if (x*x + y*y) < 1.0:   # squared distance from the origin is < 1
            inside = inside + 1
    # The fraction inside approximates pi/4, so multiply by 4.
    return (inside*1.0)/points * 4.0


for i in range(30):
    pi = computePi(2**i)
    print(f"Pi at 2**{i} iterations = {pi}")

The exact value that you’ll get when you run this depends on the random number generator, and the initial seed value. If you don’t specify a seed, most random number libraries will use something like the low-order bits of the current system time in nanoseconds, so you’ll get slightly different results each time you run it. I just ran it, and got:

Pi at 2**0 iterations = 4.0
Pi at 2**1 iterations = 4.0
Pi at 2**2 iterations = 3.0
Pi at 2**3 iterations = 2.0
Pi at 2**4 iterations = 3.5
Pi at 2**5 iterations = 2.75
Pi at 2**6 iterations = 3.0625
Pi at 2**7 iterations = 3.125
Pi at 2**8 iterations = 3.109375
Pi at 2**9 iterations = 3.1875
Pi at 2**10 iterations = 3.171875
Pi at 2**11 iterations = 3.126953125
Pi at 2**12 iterations = 3.12109375
Pi at 2**13 iterations = 3.14013671875
Pi at 2**14 iterations = 3.169677734375
Pi at 2**15 iterations = 3.1324462890625
Pi at 2**16 iterations = 3.14453125
Pi at 2**17 iterations = 3.147247314453125
Pi at 2**18 iterations = 3.138519287109375
Pi at 2**19 iterations = 3.1364669799804688
Pi at 2**20 iterations = 3.1443214416503906
Pi at 2**21 iterations = 3.141223907470703
Pi at 2**22 iterations = 3.141301155090332
Pi at 2**23 iterations = 3.1419320106506348
Pi at 2**24 iterations = 3.1415367126464844
Pi at 2**25 iterations = 3.1421539783477783
Pi at 2**26 iterations = 3.1420511603355408
Pi at 2**27 iterations = 3.1415300369262695
Pi at 2**28 iterations = 3.141532242298126
Pi at 2**29 iterations = 3.1415965482592583

I suspect that I could do a lot better using a special number library to reduce or eliminate the floating point roundoff errors, but I don’t really think it’s worth the time. Just this much, using a really simple, obvious, intuitive method produces a better result than any of the numbers pushed by the crackpots.

To support that previous statement: the best crackpot value for π is the one based on the golden ratio. That version insists that the true value of π is 3.14460551103. But you can see – by using the simple metric of counting points inside and outside the circle – that the actual value is quite different from that.
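
And if the Monte Carlo scatter leaves any doubt, a rapidly converging series nails the digits. Here’s a quick sketch using Machin’s formula \pi = 16\arctan(1/5) - 4\arctan(1/239) (a standard identity, added here as a supplement to the original demo):

def arctan_series(x, terms=30):
    # Taylor series arctan(x) = x - x^3/3 + x^5/5 - ..., fast for small |x|
    total, power = 0.0, x
    for k in range(terms):
        total += (-1) ** k * power / (2 * k + 1)
        power *= x * x
    return total

# Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
print(16 * arctan_series(1 / 5) - 4 * arctan_series(1 / 239))
# prints 3.14159265358979..., nowhere near 3.14460551103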

That’s what makes this breed of denialism so stupid. π isn’t complicated: it’s a simple ratio. And it’s easy to test using simple concepts. Pi relates the diameter (or radius) of a circle to the circumference or area of that circle. So any test that works with circles can easily show you what π is. There’s nothing mysterious or counterintuitive or debatable about it. It is what it is, and you can test it yourself.

August 01, 2020

Jordan EllenbergPandemic blog 32: writing

Taylor Swift surprised everyone by releasing a surprise new album, which she wrote and recorded entirely during the quarantine. My favorite song on it is the poignant “Invisible String”

which has an agreeable Penguin Cafe Orchestra vibe, see e.g.

(The one thing about “Invisible String” is that people seem to universally read it as a song about how great it is to finally have found true love, but people, if you say

And isn’t it just so pretty to think
All along there was some
Invisible string
Tying you to me?

you are (following Hemingway at the end of The Sun Also Rises) saying it would be lovely to think there was some kind of karmic force-bond tying you and your loved one together, but that, despite what’s pretty, there isn’t, and you fly apart.)

Anyway, I too, like my fellow writer Taylor Swift, have been working surprisingly fast during this period of enforced at-homeness. Even with the kids here all the time, not going anywhere is somehow good writing practice. And this book I’m writing, the one that’s coming out next spring, is now almost done. I’m somewhat tetchy about saying too much before the book really exists, but it’s called Shape, there is a lot of different stuff in it, and I hope you’ll like it.

ResonaancesDeath of a forgotten anomaly

Anomalies come with a big splash, but often go down quietly. A recent ATLAS measurement, just posted on arXiv, killed a long-standing and by now almost forgotten anomaly from the LEP collider.  LEP was an electron-positron collider operating some time in the late Holocene. Its most important legacy is the very precise measurements of the interaction strength between the Z boson and matter, which to this day are unmatched in accuracy. In the second stage of the experiment, called LEP-2, the collision energy was gradually raised to about 200 GeV, so that pairs of W bosons could be produced. The experiment was able to measure the branching fractions for W decays into electrons, muons, and tau leptons.  These are precisely predicted by the Standard Model: they should be equal to 10.8%, independently of the flavor of the lepton (up to a very small correction due to the lepton masses).  However, LEP-2 found 

Br(W → τν)/Br(W → eν) = 1.070 ± 0.029,     Br(W → τν)/Br(W → μν) = 1.076 ± 0.028.

While the decays to electrons and muons conformed very well to the Standard Model predictions, 
there was a 2.8 sigma excess in the tau channel. The question was whether it was simply a statistical fluctuation or new physics violating the Standard Model's sacred principle of lepton flavor universality. The ratio Br(W → τν)/Br(W → eν) was later measured at the Tevatron, without finding any excess, however the errors were larger. More recently, there have been hints of large lepton flavor universality violation in B-meson decays, so it was not completely crazy to think that the LEP-2 excess was a part of the same story.  

The solution came 20 years after LEP-2: there is no large violation of lepton flavor universality in W boson decays. The LHC has already produced hundreds of millions of top quarks, and each of them (as far as we know) creates a W boson in the process of its disintegration. ATLAS used this big sample to compare the W boson decay rates to taus and to muons. Their result: 

Br(W → τν)/Br(W → μν) = 0.992 ± 0.013.

There is not the slightest hint of an excess here. But what is most impressive is that the error is smaller, by more than a factor of two, than at LEP-2! After the W boson mass, this is another precision measurement where a dirty hadron collider environment achieves better accuracy than an electron-positron machine. 
Yes, more of that :)   
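
As a back-of-the-envelope check of the numbers quoted above, here is a minimal sketch (treating each ratio independently and ignoring the correlations that the official combinations account for, which is why the LEP-2 pulls come out slightly below the quoted 2.8 sigma):

# Naive pulls of the quoted ratios against the lepton-universality prediction R = 1.
measurements = {
    "LEP-2  Br(W->tau)/Br(W->e) ": (1.070, 0.029),
    "LEP-2  Br(W->tau)/Br(W->mu)": (1.076, 0.028),
    "ATLAS  Br(W->tau)/Br(W->mu)": (0.992, 0.013),
}
for label, (value, error) in measurements.items():
    print(f"{label}: {(value - 1.0) / error:+.1f} sigma")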

Thanks to the ATLAS measurement, our knowledge of the W boson couplings has increased substantially, as shown in the picture (errors are 1 sigma): 


The current uncertainty is a few per mille. This is still worse than for the Z boson couplings to leptons, in which case the accuracy is better than per mille, but we're getting there... Within the present accuracy, the W boson couplings to all leptons are consistent with the Standard Model prediction, and with lepton flavor universality in particular. Some tensions appearing in earlier global fits are all gone. The Standard Model wins again, nothing to see here, we can move on to the next anomaly. 

July 27, 2020

John PreskillA quantum walk down memory lane

In elementary and middle school, I felt an affinity for the class three years above mine. Five of my peers had siblings in that year. I carpooled with a student in that class, which partnered with mine in holiday activities and Grandparents’ Day revues. Two students in that class stood out. They won academic-achievement awards, represented our school in science fairs and speech competitions, and enrolled in rigorous high-school programs.

Those students came to mind as I grew to know David Limmer. David is an assistant professor of chemistry at the University of California, Berkeley. He studies statistical mechanics far from equilibrium, using information theory. Though a theorist ardent about mathematics, he partners with experimentalists. He can pass as a physicist and keeps an eye on topics as far afield as black holes. According to his faculty page, I discovered while writing this article, he’s even three years older than I. 

I met David in the final year of my PhD. I was looking ahead to postdocking, as his postdoc fellowship was fading into memory. The more we talked, the more I thought, I’d like to be like him.

Playground

I had the good fortune to collaborate with David on a paper published by Physical Review A this spring (as an Editors’ Suggestion!). The project has featured in Quantum Frontiers as the inspiration for a rewriting of “I’m a little teapot.” 

We studied a molecule prevalent across nature and technologies. Such molecules feature in your eyes, solar-fuel-storage devices, and more. The molecule has two clumps of atoms. One clump may rotate relative to the other if the molecule absorbs light. The rotation switches the molecule from a “closed” configuration to an “open” configuration.

Molecular switch

These molecular switches are small, quantum, and far from equilibrium; so modeling them is difficult. Making assumptions offers traction, but many of the assumptions disagreed with David. He wanted general, thermodynamic-style bounds on the probability that one of these molecular switches would switch. Then, he ran into me.

I traffic in mathematical models, developed in quantum information theory, called resource theories. We use resource theories to calculate which states can transform into which in thermodynamics, as a dime can transform into ten pennies at a bank. David and I modeled his molecule in a resource theory, then bounded the molecule’s probability of switching from “closed” to “open.” I accidentally composed a theme song for the molecule; you can sing along with this post.

That post didn’t mention what David and I discovered about quantum clocks. But what better backdrop for a mental trip to elementary school or to three years into the future?

I’ve blogged about autonomous quantum clocks (and ancient Assyria) before. Autonomous quantum clocks differ from quantum clocks of another type—the most precise clocks in the world. Scientists operate the latter clocks with lasers; autonomous quantum clocks need no operators. Autonomy benefits you if you want a machine, such as a computer or a drone, to operate independently. An autonomous clock in the machine ensures that, say, the computer applies the right logical gate at the right time.

What’s an autonomous quantum clock? First, what’s a clock? A clock has a degree of freedom (e.g., a pair of hands) that represents the time and that moves steadily. When the clock’s hands point to 12 PM, you’re preparing lunch; when the clock’s hands point to 6 PM, you’re reading Quantum Frontiers. An autonomous quantum clock has a degree of freedom that represents the time fairly accurately and moves fairly steadily. (The quantum uncertainty principle prevents a perfect quantum clock from existing.)

Suppose that the autonomous quantum clock constitutes one part of a machine, such as a quantum computer, that the clock guides. When the clock is in one quantum state, the rest of the machine undergoes one operation, such as one quantum logical gate. (Experts: The rest of the machine evolves under one Hamiltonian.) When the clock is in another state, the rest of the machine undergoes another operation (evolves under another Hamiltonian).
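
Here’s a toy sketch of that structure (my own illustration in numpy, with a made-up clock size and made-up machine Hamiltonians, not the model from our paper): a clock register ticks around a dial under its own Hamiltonian, and which Hamiltonian acts on a qubit “machine” depends on where the hand points.

    # Toy clock-controlled Hamiltonian (illustration only; operators and
    # dimensions are made up, not taken from the paper).
    import numpy as np
    from scipy.linalg import expm

    d = 6                                    # number of clock positions on the dial

    # Clock Hamiltonian: a hopping term that moves population around the dial.
    shift = np.roll(np.eye(d), 1, axis=0)    # |j+1 mod d><j|
    H_clock = -(shift + shift.conj().T)

    # Machine (a single qubit) and two made-up Hamiltonians for it.
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)

    # Project onto the "12 PM" half of the dial and the "6 PM" half.
    P_noon = np.diag([1.0 if j < d // 2 else 0.0 for j in range(d)])
    P_six = np.eye(d) - P_noon

    # Total Hamiltonian: the clock ticks on its own, and the machine's
    # evolution depends on which half of the dial the hand occupies.
    H_total = (np.kron(H_clock, np.eye(2))
               + np.kron(P_noon, sz)
               + np.kron(P_six, sx))

    # Start with the hand at position 0 and the machine qubit in |0>.
    psi0 = np.kron(np.eye(d)[0], np.array([1.0, 0.0], dtype=complex))

    for t in (0.0, 1.0, 2.0, 3.0):
        psi_t = expm(-1j * t * H_total) @ psi0
        clock_marginal = (np.abs(psi_t.reshape(d, 2)) ** 2).sum(axis=1)
        print(f"t={t}: clock-position probabilities {clock_marginal.round(3)}")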

[Image: Clock 2]

Physicists have been modeling quantum clocks using the resource theory with which David and I modeled our molecule. The math with which we represented our molecule, I realized, coincided with the math that represents an autonomous quantum clock.

Think of the molecular switch as a machine that operates (mostly) independently and that contains an autonomous quantum clock. The rotating clump of atoms constitutes the clock hand. As a hand rotates down a clock face, so do the nuclei rotate downward. The hand effectively points to 12 PM when the switch occupies its “closed” position. The hand effectively points to 6 PM when the switch occupies its “open” position.

The nuclei account for most of the molecule’s weight; the electrons account for little. The electrons flit about a landscape shaped by the atomic clumps’ positions, and that landscape governs their behavior. So the electrons form the rest of the quantum machine controlled by the nuclear clock.

[Image: Clock 1]

Experimentalists can create and manipulate these molecular switches easily. For instance, experimentalists can set the atomic clump moving—can “wind up” the clock—with ultrafast lasers. In contrast, the only other autonomous quantum clocks that I’d read about live in theory land. Can these molecules bridge theory to experiment? Reach out if you have ideas!

And check out David’s theory lab on Berkeley’s website and on Twitter. We all need older siblings to look up to.

July 25, 2020

Jordan EllenbergPandemic blog 31: farmers’ market

First trip back to the Westside Community Market, which in ordinary times is an every-Saturday-morning trip for me. It feels like a model of people just sitting down and figuring out how to arrange for everyone to do the things they want to do in a way that minimizes transmission. We don’t have to eliminate every chance for someone to get COVID. If we cut transmission to a third of what it would otherwise be, that doesn’t mean a third as many people get COVID; it means the pandemic dies out instead of exploding. Safe is impossible, safer is important!
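
Here’s that arithmetic as a toy branching process (the reproduction numbers are made up for illustration, not estimates of anything):

    # Toy branching process: made-up numbers, just to show why cutting
    # transmission to a third matters more than "a third as many cases".
    R_uncut = 2.4            # hypothetical cases caused per case, no precautions
    R_cut = R_uncut / 3      # same contacts, transmission cut to a third

    cases_uncut, cases_cut = 100.0, 100.0
    for generation in range(1, 9):
        cases_uncut *= R_uncut
        cases_cut *= R_cut
        print(f"generation {generation}: "
              f"no precautions ~{cases_uncut:,.0f} cases, "
              f"cut to a third ~{cases_cut:,.1f} cases")
    # R > 1 explodes exponentially; R < 1 dies out. Safer really is important.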

They’ve reorganized everything so that the stalls are farther apart. Everybody’s wearing masks, both vendors and customers. There are several very visible hand-washing stations. Most of the vendors now take credit cards through Square, and at least one asked me to pay with Venmo. It’s easy for people to keep their distance (though the vendors told me it was more crowded earlier in the morning).

And of course it’s summer, the fields are doing what the fields do, the Flyte Farm blueberries, best in Wisconsin, are ready — I bought five pounds, and four containers of Murphy Farms cottage cheese. All you need is those two things for the perfect Wisconsin summer meal.