# Planet Musings

## April 23, 2019

### Georg von Hippel — Book Review: "Lattice QCD — Practical Essentials"

There is a new book about Lattice QCD, Lattice Quantum Chromodynamics: Practical Essentials by Francesco Knechtli, Michael Günther and Mike Peardon. At 140 pages, this is a pretty slim volume, so it is obvious that it does not aim to displace time-honoured introductory textbooks like Montvay and Münster, or the newer books by Gattringer and Lang or DeGrand and DeTar. Instead, as suggested by the subtitle "Practical Essentials", and as said explicitly by the authors in their preface, this book aims to prepare beginning graduate students for their practical work in generating gauge configurations and measuring and analysing correlators.

In line with this aim, the authors spend relatively little time on the physical or field-theoretic background; while some more advanced topics such as the Nielsen-Ninomiya theorem and the Symanzik effective theory are touched upon, the treatment of foundational topics is generally quite brief, and some topics, such as lattice perturbation theory or non-perturbative renormalization, are altogether omitted. The focus of the book is on Monte Carlo simulations, for which both the basic ideas and practically relevant algorithms — heatbath and overrelaxation for pure gauge fields, and hybrid Monte Carlo for dynamical fermions — are described in some detail, including the RHMC algorithm and advanced techniques such as determinant factorizations, higher-order symplectic integrators, and multiple-timescale integration. The techniques from linear algebra required to deal with fermions are also covered in some detail, from the basic ideas of Krylov space methods through concrete descriptions of the GMRES and CG algorithms, along with such important preconditioners as even-odd and domain decomposition, to the ideas of algebraic multigrid methods. Stochastic estimation of all-to-all propagators with dilution, the one-end trick and low-mode averaging are explained, as are techniques for building interpolating operators with specific quantum numbers, gauge link and quark field smearing, and the use of the variational method to extract hadronic mass spectra. Scale setting, the Wilson flow, and Lüscher's method for extracting scattering phase shifts are also discussed briefly, as are the basic statistical techniques for data analysis. Each chapter contains a list of references to the literature covering both original research articles and reviews and textbooks for further study.
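For readers who have never seen a lattice Monte Carlo update, the accept/reject step at the heart of all these algorithms fits in a few lines. The following is a deliberately minimal sketch of my own (not code from the book), for a 1-d free scalar field rather than a gauge theory; real codes use local action differences, heatbath or HMC updates, and far larger lattices:

```python
import math
import random

def action(phi, m2=1.0):
    """Euclidean action of a 1-d free scalar field with periodic boundaries."""
    n = len(phi)
    s = 0.0
    for i in range(n):
        dphi = phi[(i + 1) % n] - phi[i]
        s += 0.5 * dphi * dphi + 0.5 * m2 * phi[i] * phi[i]
    return s

def metropolis_sweep(phi, step=1.0):
    """One sweep of local Metropolis updates: propose a change at each site,
    then accept with probability min(1, exp(-dS))."""
    for i in range(len(phi)):
        old, s_old = phi[i], action(phi)
        phi[i] = old + random.uniform(-step, step)
        d_s = action(phi) - s_old
        if d_s > 0.0 and random.random() >= math.exp(-d_s):
            phi[i] = old  # reject: restore the previous field value

random.seed(1)
phi = [0.0] * 16
for _ in range(200):   # thermalize and sample (no real error analysis here!)
    metropolis_sweep(phi)
phi_sq = sum(x * x for x in phi) / len(phi)   # crude estimate of <phi^2>
```

Everything the book covers, from heatbath to RHMC, is ultimately a more sophisticated way of generating such Markov chains with the correct stationary distribution.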

Overall, I feel that the authors succeed very well at their stated aim of giving a quick introduction to the methods most relevant to current research in lattice QCD in order to let graduate students hit the ground running and get to perform research as quickly as possible. In fact, I am slightly worried that they may turn out to be too successful, since a graduate student having studied only this book could well start performing research, while having only a very limited understanding of the underlying field-theoretical ideas and problems (a problem that already exists in our field in any case). While this in no way detracts from the authors' achievement, and while I feel I can recommend this book to beginners, I nevertheless have to add that it should be complemented by a more field-theoretically oriented traditional textbook for completeness.

___
Note that I have deliberately not linked to the Amazon page for this book. Please support your local bookstore — nowadays, you can usually order online on their websites, and many bookstores are more than happy to ship books by post.

### Georg von Hippel — Looking for guest blogger(s) to cover LATTICE 2018

Since I will not be attending LATTICE 2018 for some excellent personal reasons, I am looking for a guest blogger or even better several guest bloggers from the lattice community who would be interested in covering the conference. Especially for advanced PhD students or junior postdocs, this might be a great opportunity to get your name some visibility. If you are interested, drop me a line either in the comment section or by email (my university address is easy to find).

### Georg von Hippel — Looking for guest bloggers to cover LATTICE 2019

My excellent reason for not attending LATTICE 2018 has become a lot bigger, much better at many things, and (if possible) even more beautiful — which means I won't be able to attend LATTICE 2019 either (I fully expect to attend LATTICE 2020, though). So once again I would greatly welcome guest bloggers willing to cover LATTICE 2019; if you are at all interested, send me an email or write a comment to this post.

### Backreaction — Comments on “Quantum Gravity in the Lab” in Scientific American

The April issue of Scientific American has an article by Tim Folger titled “Quantum Gravity in the Lab”. It is mainly about the experiment by the Aspelmeyer group in Vienna, which I wrote about three years ago here. Folger also briefly mentions the two other proposals by Bose et al and Marletto and Vedral. (For context and references see my recent blogpost about the possibility to

## April 22, 2019

### David Hogg — binary stars and lots more

Today was a very very special Stars meeting, at least from my perspective! I won't do it justice. Carles Badenes (Pitt) led us off with a discussion of how much needs to be done to get a complete picture of binary stars and their evolution. It's a lot! And a lot of the ideas here are very causal. For example: If you find that the binary fraction varies with metallicity, what does it really vary with? Since, after all, stellar age varies with metallicity, as do all the specific abundance ratios. And also star-formation environment! It will take lots of data and theory combined to answer these questions.

Andreas Flörs (ESO) spoke about the problem of fitting models to the nebular phase of late-time supernovae, where you want to see the different elements in emission and figure out what's being produced and decaying. The problem is: There are many un-modeled ions and the fits to the data are technically bad! How to fix this? We discussed Gaussian-process fixes, both stationary and non-stationary. And also model elaboration. And the connection between these two!

Helmer Koppelman (Kapteyn) showed some amazing structure in the overlap of ESA Gaia data and various spectroscopic surveys (including LAMOST and APOGEE and others). He was showing visualizations in the z-max vs azimuthal-action plane. We discussed ways in which it could be selection effects. It could be; it is always dangerous to plot the data in derived (rather than more closely observational) properties.

Tyson Littenberg (NASA Marshall) told us about white-dwarf–white-dwarf (see what I did with dashes there?) binaries in ESA LISA. He has performed an information-theoretic analysis for a realistic Milky Way simulation. He showed that many binaries will be very well localized; many thousands will be clearly detected; and some will get full 6-d kinematics because the chirp mass will be visible. Of course there are simplifying assumptions about the binary environments and accelerations, but there is no doubt that it will be incredible. Late in the day we discussed how you might model all the sea of sources that aren't individually detectable. But that said, everything to many tens of kpc in the MW will be visible, so incompleteness isn't a problem until you get seriously extragalactic. Amazing!

### Tommaso Dorigo — Anomaly Detection: Unsupervised Learning For New Physics Searches

Experimental particle physics, the field of research I have been involved in since my infancy as a scientist, consists of folks like you and me, who are enthusiastic about constructing new experiments and testing our understanding of Nature. Some spend their life materially designing and building the apparata, others are more attracted by torturing the data until they speak.

To be precise, data analysts can be divided further into two classes, as I was once taught by my friend Paolo Giromini (a colleague in the late CDF experiment, about whose chase for new physics I have written in my book "Anomaly!"). These are aptly called "gatherers" and "hunters".

## April 19, 2019

### Scott Aaronson — Not yet retired from research

Last night, two papers appeared on the quantum physics arXiv that my coauthors and I have been working on for more than a year, and that I’m pretty happy about.

The first paper, with Guy Rothblum, is Gentle Measurement of Quantum States and Differential Privacy (85 pages, to appear in STOC’2019). This is Guy’s first paper that has anything to do with quantum, and also my first paper that has anything to do with privacy. (What do I care about privacy? I just share everything on this blog…) The paper has its origin when I gave a talk at the Weizmann Institute about “shadow tomography” (a task where you have to measure quantum states very carefully to avoid destroying them), and Guy was in the audience, and he got all excited that the techniques sounded just like what they use to ensure privacy in data-mining, and I figured it was just some wacky coincidence and brushed him off, but he persisted, and it turned out that he was 100% right, and our two fields were often studying the same problems from different angles and we could prove it. Anyway, here’s the abstract:

In differential privacy (DP), we want to query a database about n users, in a way that “leaks at most ε about any individual user,” even conditioned on any outcome of the query. Meanwhile, in gentle measurement, we want to measure n quantum states, in a way that “damages the states by at most α,” even conditioned on any outcome of the measurement. In both cases, we can achieve the goal by techniques like deliberately adding noise to the outcome before returning it. This paper proves a new and general connection between the two subjects. Specifically, we show that on products of n quantum states, any measurement that is α-gentle for small α is also O(α)-DP, and any product measurement that is ε-DP is also O(ε√n)-gentle.

Illustrating the power of this connection, we apply it to the recently studied problem of shadow tomography. Given an unknown d-dimensional quantum state ρ, as well as known two-outcome measurements E_1,…,E_m, shadow tomography asks us to estimate Pr[E_i accepts ρ], for every i∈[m], by measuring few copies of ρ. Using our connection theorem, together with a quantum analog of the so-called private multiplicative weights algorithm of Hardt and Rothblum, we give a protocol to solve this problem using O((log m)^2 (log d)^2) copies of ρ, compared to Aaronson’s previous bound of ~O((log m)^4 (log d)). Our protocol has the advantages of being online (that is, the E_i’s are processed one at a time), gentle, and conceptually simple.

Other applications of our connection include new lower bounds for shadow tomography from lower bounds on DP, and a result on the safe use of estimation algorithms as subroutines inside larger quantum algorithms.
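The "deliberately adding noise to the outcome" idea from the abstract is easy to see concretely. Here is a minimal sketch (my own illustration, not code from the paper) of the classical Laplace mechanism for an ε-DP counting query:

```python
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) as the difference of two exponentials."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(database, predicate, epsilon):
    """An epsilon-DP counting query: a count has sensitivity 1 (one user can
    change it by at most 1), so Laplace noise of scale 1/epsilon suffices."""
    exact = sum(1 for row in database if predicate(row))
    return exact + laplace_noise(1.0 / epsilon)

random.seed(0)
ages = [23, 35, 41, 29, 62, 57, 31, 45]               # toy database of 8 users
noisy = dp_count(ages, lambda a: a > 40, epsilon=1.0)  # exact answer is 4
```

The noisy answer is unbiased, yet each query "leaks at most ε about any individual user"; in the gentle-measurement analogue, the database rows become quantum states and the leakage bound ε becomes the damage bound α.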

The second paper, with Robin Kothari, UT Austin PhD student William Kretschmer, and Justin Thaler, is Quantum Lower Bounds for Approximate Counting via Laurent Polynomials. Here’s the abstract:

Given only a membership oracle for S, it is well-known that approximate counting takes Θ(√(N/|S|)) quantum queries. But what if a quantum algorithm is also given “QSamples”—i.e., copies of the state |S⟩ = Σ_{i∈S} |i⟩—or even the ability to apply reflections about |S⟩? Our first main result is that, even then, the algorithm needs either Θ(√(N/|S|)) queries or else Θ(min{|S|^{1/3}, √(N/|S|)}) reflections or samples. We also give matching upper bounds.

We prove the lower bound using a novel generalization of the polynomial method of Beals et al. to Laurent polynomials, which can have negative exponents. We lower-bound Laurent polynomial degree using two methods: a new “explosion argument” that pits the positive- and negative-degree parts of the polynomial against each other, and a new formulation of the dual polynomials method.

Our second main result rules out the possibility of a black-box Quantum Merlin-Arthur (or QMA) protocol for proving that a set is large. More precisely, we show that, even if Arthur can make T quantum queries to the set S⊆[N], and also receives an m-qubit quantum witness from Merlin in support of S being large, we have T·m = Ω(min{|S|, √(N/|S|)}). This resolves the open problem of giving an oracle separation between SBP, the complexity class that captures approximate counting, and QMA.

Note that QMA is “stronger” than the queries+QSamples model in that Merlin’s witness can be anything, rather than just the specific state |S⟩, but also “weaker” in that Merlin’s witness cannot be trusted. Intriguingly, Laurent polynomials also play a crucial role in our QMA lower bound, but in a completely different manner than in the queries+QSamples lower bound. This suggests that the “Laurent polynomial method” might be broadly useful in complexity theory.
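For contrast with the quantum query bounds quoted above, here is a sketch (mine, not from the paper) of the naive classical approach: estimate |S| from uniform random membership queries.

```python
import random

def approx_count(membership, n, samples):
    """Estimate |S| for S a subset of {0, ..., n-1} via uniform random
    membership queries. Classically you need on the order of n/|S| queries
    for a constant-factor estimate (you must see some hits at all); the
    quantum query complexity, Theta(sqrt(n/|S|)), is quadratically better."""
    hits = sum(1 for _ in range(samples) if membership(random.randrange(n)))
    return n * hits / samples

random.seed(42)
n = 10_000
in_s = lambda i: i % 7 == 0            # S = multiples of 7, so |S| = 1429
estimate = approx_count(in_s, n, samples=20_000)
```

With 20,000 queries the estimate lands within a few percent of |S| = 1429; shrinking |S| makes the classical cost blow up linearly in n/|S|, which is where the quantum square-root saving bites.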

I need to get ready for our family’s Seder now, but after that, I’m happy to answer any questions about either of these papers in the comments.

Meantime, the biggest breakthrough in quantum complexity theory of the past month isn’t either of the above: it’s the paper by Anand Natarajan and John Wright showing that MIP*, or multi-prover interactive proof systems with entangled provers, contains NEEXP, or nondeterministic doubly-exponential time (!!). I’ll try to blog about this later, but if you can’t wait, check out this excellent post by Thomas Vidick.

### David Hogg — binaries

Great Astro Seminar today by Carles Badenes (Pitt), who has been studying binary stars, in the regime that you only have a few radial-velocity measurements. In this regime, you can tell that something is a binary, but you can't tell what its period or velocity amplitude is with any precision (and often almost no precision). He showed results relevant to progenitors of supernovae and other stellar explosions, and also exoplanet populations. Afterwards, Andy Casey (Monash) and I continued the discussion over drinks.

### n-Category Café — Univalence in (∞,1)-toposes

It’s been believed for a long time that homotopy type theory should be an “internal logic” for $(\infty,1)$-toposes, in the same way that higher-order logic (or extensional type theory) is for 1-toposes. Over the past decade a lot of steps have been taken towards realizing this vision, but one important gap that’s remained is the construction of sufficiently strict universe objects in model categories presenting $(\infty,1)$-toposes: the object classifiers of an $(\infty,1)$-topos correspond directly only to a kind of “weakly Tarski” universe in type theory, which would probably be tedious to use in practice (no one has ever seriously tried).

Yesterday I posted a preprint that closes this gap (in the cases of most interest), by constructing strict univalent universe objects in a class of model categories that suffice to present all Grothendieck $(\infty,1)$-toposes. The model categories are, perhaps not very surprisingly, left exact localizations of injective model structures on simplicial presheaves, which were previously known to model all the rest of type theory; the main novelty is a new more explicit “algebraic” characterization of the injective fibrations, enabling the construction of universes.

I don’t have time to write a long blog post today, but if you want a longer and more detailed overview you can just check out the chatty 7-page introduction to the paper, or the slides from my talk about it at last month’s Midwest HoTT seminar. Let me just note here some important points:

• The metatheory is ZFC plus as many inaccessible cardinals as we want universes.
• The type theory is Book HoTT, with univalence as an axiom.
• The model categories are simplicial, not cubical. It’s possible that similar methods would work cubically, but cubical sets are less well-studied in model category theory, and it’s not yet known what $(\infty,1)$-toposes can be presented by (what varieties of) cubical sets.
• The $(\infty,1)$-toposes are Grothendieck, not elementary, although the techniques might prove useful for the elementary case (e.g. via a Yoneda embedding).
• We only construct an interpretation of type theory into $(\infty,1)$-toposes, not the “strong internal language conjecture” that a “homotopy theory of type theories” is equivalent to the homotopy theory of some flavor of elementary $(\infty,1)$-topos — although again, the same techniques might prove useful for the latter.
• The initiality principle is not addressed; it remains as open or as solved as you previously believed it to be.
• The main remaining semantic open problem is constructing higher inductive types in such a way that the universes are closed under them. I am optimistic about this and have some ideas; in particular, it’s possible that some of the ideas used in cubical models could be adapted to the simplicial case.

### Matt von Hippel — The Black Box Theory of Everything

What is science? What makes a theory scientific?

There’s a picture we learn in high school. It’s not the whole story, certainly: philosophers of science have much more sophisticated notions. But for practicing scientists, it’s a picture that often sits in the back of our minds, informing what we do. Because of that, it’s worth examining in detail.

In the high school picture, scientific theories make predictions. Importantly, postdictions don’t count: if you “predict” something that already happened, it’s too easy to cheat and adjust your prediction. Also, your predictions must be different from those of other theories. If all you can do is explain the same results with different words you aren’t doing science, you’re doing “something else” (“metaphysics”, “religion”, “mathematics”…whatever the person you’re talking to wants to make fun of, but definitely not science).

Seems reasonable, right? Let’s try a thought experiment.

In the late 1950’s, the physics of protons and neutrons was still quite mysterious. They seemed to be part of a bewildering zoo of particles that no-one could properly explain. In the 60’s and 70’s the field started converging on the right explanation, from Gell-Mann’s eightfold way to the parton model to the full theory of quantum chromodynamics (QCD for short). Today we understand the theory well enough to package things into computer code: amplitudes programs like BlackHat for collisions of individual quarks, jet algorithms that describe how those quarks become signals in colliders, lattice QCD implemented on supercomputers for pretty much everything else.

Now imagine that you had a time machine, prodigious programming skills, and a grudge against 60’s era-physicists.

Suppose you wrote a computer program that combined the best of QCD in the modern world: BlackHat and more from the amplitudes side, the best jet algorithms and lattice QCD code, and more besides, until it could reproduce any calculation in QCD that anyone can do today. Further, suppose you don't care about silly things like making your code readable. Since I began the list above with BlackHat, we'll call the combined box of different codes BlackBox.

Now suppose you went back in time, and told the bewildered scientists of the 50’s that nuclear physics was governed by a very complicated set of laws: the ones implemented in BlackBox.

Your “BlackBox theory” passes the high school test. Not only would it match all previous observations, it could make predictions for any experiment the scientists of the 50’s could devise. Up until the present day, your theory would match observations as well as…well as well as QCD does today.

(Let’s ignore for the moment that they didn’t have computers that could run this code in the 50’s. This is a thought experiment, we can fudge things a bit.)

Now suppose that one of those enterprising 60’s scientists, Gell-Mann or Feynman or the like, noticed a pattern. Maybe they got it from an experiment scattering electrons off of protons, maybe they saw it in BlackBox’s code. They notice that different parts of “BlackBox theory” run on related rules. Based on those rules, they suggest a deeper reality: protons are made of quarks!

But is this “quark theory” scientific?

“Quark theory” doesn’t make any new predictions. Anything you could predict with quarks, you could predict with BlackBox. According to the high school picture of science, for these 60’s scientists quarks wouldn’t be scientific: they would be “something else”, metaphysics or religion or mathematics.

And in practice? I doubt that many scientists would care.

“Quark theory” makes the same predictions as BlackBox theory, but I think most of us understand that it’s a better theory. It actually explains what’s going on. It takes different parts of BlackBox and unifies them into a simpler whole. And even without new predictions, that would be enough for the scientists in our thought experiment to accept it as science.

I have two reasons for telling this story. First, I want to think about what happens when we get to a final theory, a “Theory of Everything”. It’s probably ridiculously arrogant to think we’re anywhere close to that yet, but nonetheless the question is on physicists’ minds more than it has been for most of history.

Right now, the Standard Model has many free parameters, numbers we can’t predict and must fix based on experiments. Suppose there are two options for a final theory: one that has a free parameter, and one that doesn’t. Once that one free parameter is fixed, both theories will match every test you could ever devise (they’re theories of everything, after all).

If we come up with both theories before testing that final parameter, then all is well. The theory with no free parameters will predict the result of that final experiment, the other theory won’t, so the theory without the extra parameter wins the high school test.

What if we do the experiment first, though?

If we do, then we’re in a strange situation. Our “prediction” of the one free parameter is now a “postdiction”. We’ve matched numbers, sure, but by the high school picture we aren’t doing science. Our theory, the same theory that was scientific if history went the other way, is now relegated to metaphysics/religion/mathematics.

I don’t know about you, but I’m uncomfortable with the idea that what is or is not science depends on historical chance. I don’t like the idea that we could be stuck with a theory that doesn’t explain everything, simply because our experimentalists were able to work a bit faster.

My second reason focuses on the here and now. You might think we have nothing like BlackBox on offer, no time travelers taunting us with poorly commented code. But we’ve always had the option of our own Black Box theory: experiment itself.

The Standard Model fixes some of its parameters from experimental results. You do a few experiments, and you can predict the results of all the others. But why stop there? Why not fix all of our parameters with experiments? Why not fix everything with experiments?

That’s the Black Box Theory of Everything. Each individual experiment you could possibly do gets its own parameter, describing the result of that experiment. You do the experiment, fix that parameter, then move on to the next experiment. Your theory will never be falsified, you will never be proven wrong. Sure, you never predict anything either, but that’s just an extreme case of what we have now, where the Standard Model can’t predict the mass of the Higgs.

What’s wrong with the Black Box Theory? (I trust we can all agree that it’s wrong.)

It’s not just that it can’t make predictions. You could make it a Black Box All But One Theory instead, that predicts one experiment and takes every other experiment as input. You could even make a Black Box Except the Standard Model Theory, that predicts everything we can predict now and just leaves out everything we’re still confused by.

The Black Box Theory is wrong because the high school picture of what counts as science is wrong. The high school picture is a useful guide, it’s a good rule of thumb, but it’s not the ultimate definition of science. And especially now, when we’re starting to ask questions about final theories and ultimate parameters, we can’t cling to the high school picture. We have to be willing to actually think, to listen to the philosophers and consider our own motivations, to figure out what, in the end, we actually mean by science.

### David Hogg — topological gravity; time domain

Much excellent science today. I am creating a Monday-morning check-in and parallel working time session for the undergraduates I work with. We spoke about box-least-squares for exoplanet transit finding, about FM-radio demodulators and what they have to do with timing-based approaches to planet finding, scientific visualization and its value in communication, and software development for science.

At lunch, the Brown-Bag talk (my favorite hour of the week) was by two CCPP PhD students. Cedric Yu (NYU) spoke about the topological form of general relativity. As my loyal reader could possibly know, I love the reformulation of GR in which you take the square-root of the metric (the tetrad, in the business). Yu showed that if you augment this with some spin fields, you can reformulate GR entirely in terms of topological invariants! That's amazing and beautiful. It relates to some cool things relating geometry and topology in old-school math. Oliver Janssen (NYU) spoke about the wave function of the Universe, and what it might mean for the initial conditions. There is a sign ambiguity, apparently, in the argument of an exponential in the action! That's a big deal. But the ideas are interesting because they force thinking about how quantum mechanics relates to the entire Universe (and hence gravity).

In addition to all this, today was the first-ever meeting of the NYU Time Domain Astrophysics group meeting, which brings together a set of people at NYU working in the time domain. It is super diverse, because we have people working on exoplanets, asteroseismology, stellar explosions, stellar mergers, black-hole binaries, tidal disruption events, and more. We are hoping to use our collective wisdom and power to help each other and also influence the time-domain observing projects in which many of us are involved.

### David Hogg — the Snail

As my loyal reader knows, I like to call the phase spiral in the vertical structure of the disk (what's sometimes called the Antoja spiral) by the name The Snail. Today I discussed with Suroor Gandhi (NYU) how we might use the Snail to measure the disk midplane, the local standard of rest (vertically), the mass density of the disk, and the run of this density with Galactocentric radius. We have a 9-parameter model to fit, in each angular-momentum slice. More as this develops!

### n-Category Café — Can 1+1 Have More Than Two Points?

I feel I’ve asked this before… but now I really want to know. Christian Williams and I are working on cartesian closed categories, and this is a gaping hole in my knowledge.

Question 1. Is there a cartesian closed category with finite coproducts such that there exist more than two morphisms from $1$ to $1 + 1$?

Cartesian closed categories with finite coproducts are a nice context for ‘categorified arithmetic’, since they have $0$, $1$, addition, multiplication and exponentiation. The example we all know is the category of finite sets. But every cartesian closed category with finite coproducts obeys what Tarski called the ‘high school algebra’ axioms:

$x + y \cong y + x$

$(x + y) + z \cong x + (y + z)$

$x \times 1 \cong x$

$x \times y \cong y \times x$

$(x \times y) \times z \cong x \times (y \times z)$

$x \times (y + z) \cong x \times y + x \times z$

$1^x \cong 1$

$x^1 \cong x$

$x^{(y + z)} \cong x^y \times x^z$

$(x \times y)^z \cong x^z \times y^z$

$(x^y)^z \cong x^{(y \times z)}$

together with some axioms involving $0$ which for some reason Tarski omitted: perhaps he was scared to admit that in this game we want $0^0 = 1$.
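Since in the category of finite sets $+$ is disjoint union, $\times$ is the cartesian product, and $x^y$ is the set of functions from $y$ to $x$, all of these isomorphisms reduce to arithmetic identities on cardinalities. A few lines of Python (my illustration, nothing categorical about it) can check them, including the convention $0^0 = 1$:

```python
def check_axioms(x, y, z):
    """Check the high-school-algebra axioms on natural-number cardinalities:
    + is disjoint union, * is cartesian product, and x ** y counts the
    functions from a y-element set to an x-element set (so 0 ** 0 == 1)."""
    assert x + y == y + x
    assert (x + y) + z == x + (y + z)
    assert x * 1 == x
    assert x * y == y * x
    assert (x * y) * z == x * (y * z)
    assert x * (y + z) == x * y + x * z
    assert 1 ** x == 1
    assert x ** 1 == x
    assert x ** (y + z) == x ** y * x ** z
    assert (x * y) ** z == x ** z * y ** z
    assert (x ** y) ** z == x ** (y * z)

for x in range(5):
    for y in range(5):
        for z in range(5):
            check_axioms(x, y, z)
assert 0 ** 0 == 1   # Python already uses the convention Tarski shied from
```

Of course this only verifies the axioms in the category of finite sets; the whole point of the questions below is how badly other cartesian closed categories with finite coproducts can deviate from it.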

So, one way to think about my question is: how weird can such a category be?

For all I know, the answer to Question 1 could be “no” but the answer to this one might still be “yes”:

Question 2. Let $C$ be a cartesian closed category with finite coproducts. Let $N$ be the full subcategory on the objects that are finite coproducts of the terminal object. Can $N$ be inequivalent to the category of finite sets?

In fact I’m so clueless that for all I know the answer to Question 1 could be “no” but the answer to this one might still be “yes”:

Question 3. Is there a cartesian closed category with finite coproducts such that there exist more than three morphisms from $1$ to $1 + 1 + 1$?

Or similarly for other numbers.

Question 4. Is there a category with finite coproducts and a terminal object such that there exist more than two morphisms from $1$ to $1 + 1$?

Just to stick my neck out, I’ll bet that the answer to this last one, at least, is “yes”.

## April 18, 2019

### David Hogg — Sagittarius dark matter?

It's a bad week, research-wise. But I did chat with Bonaca (Harvard) this morning, and she showed that it is at least possible (not confirmed yet, but possible) that the dark substructure we infer from the GD-1 stream has kinematics consistent with it having fallen into the Milky Way along with the Sagittarius dwarf galaxy. This, if true, could lead to all sorts of new inferences and measurements.

Reminder: The idea is that there is a gap and spur in the stream, which we think was caused by a gravitational interaction with an unseen, compact mass. We took radial-velocity data which pin down the kinematics of that mass, and put joint constraints on mass, velocity, and timing. Although these constraints span a large space, it would still be very remarkable, statistically, if the constraints overlap the Sagittarius galaxy stream.

Philosophically, this connects to interesting ideas in inference: We can assume that the dark mass has nothing to do with Sag. This is conservative, and we get conservative constraints on its properties. Or we can assume that it is associated with Sag. This is not conservative, but if we make the assumption, it will improve enormously what we can measure or predict. It really gets at the conditionality or subjectivity of inference.

### Jordan Ellenberg — In which I almost waste four dollars at Amazon

Instructive anecdote. I needed a somewhat expensive book and the UW library didn’t have it. So I decided to buy it. Had the Amazon order queued up and ready to go, $45 with free shipping, then had a pang of guilt about the destruction of the publishing industry and decided it was worth paying a little extra to order it directly from the publisher (Routledge). From the publisher it was $41, with free shipping.

I think it really did used to be true that the Amazon price was basically certain to be the best price. Not anymore. Shop around!

## April 17, 2019

### Scott Aaronson — Wonderful world

I was saddened about the results of the Israeli election. The Beresheet lander, which lost its engine and crashed onto the moon as I was writing this, seems like a fitting symbol for where the country is now headed. Whatever he was at the start of his career, Netanyahu has become the Israeli Trump—a breathtakingly corrupt strongman who appeals to voters’ basest impulses, sacrifices his country’s future and standing in the world for short-term electoral gain, considers himself unbound by laws, recklessly incites hatred of minorities, and (despite zero personal piety) cynically panders to religious fundamentalists who help keep him in power. Just like with Trump, it’s probably futile to hope that lawyers will swoop in and free the nation from the demagogue’s grip: legal systems simply aren’t designed for assaults of this magnitude.

(If, for example, you were designing the US Constitution, how would you guard against a presidential candidate who openly supported and was supported by a hostile foreign power, and won anyway? Would it even occur to you to include such possibilities in your definitions of concepts like “treason” or “collusion”?)

The original Zionist project—the secular, democratic vision of Herzl and Ben-Gurion and Weizmann and Einstein, which the Nazis turned from a dream to a necessity—matters more to me than most things in this world, and that was true even before I’d spent time in Israel and before I had a wife and kids who are Israeli citizens. It would be depressing if, after a century of wildly improbable victories against external threats, Herzl’s project were finally to eat itself from the inside. Of course I have analogous worries (scaled up by two orders of magnitude) for the US—not to mention the UK, India, Brazil, Poland, Hungary, the Philippines … the world is now engaged in a massive test of whether Enlightenment liberalism can long endure, or whether it’s just a metastable state between one Dark Age and the next. (And to think that people have accused me of uncritical agreement with Steven Pinker, the world’s foremost optimist!)

In times like this, one takes one’s happiness where one can find it.

So, yeah: I’m happy that there’s now an “image of a black hole” (or rather, of radio waves being bent around a black hole’s silhouette). If you really want to understand what the now-ubiquitous image is showing, I strongly recommend this guide by Matt Strassler.

I’m happy that integer multiplication can apparently now be done in O(n log n), capping a decades-long quest (see also here).

I’m happy that there’s now apparently a spectacular fossil record of the first minutes after the asteroid impact that ended the Cretaceous period. Even better will be if this finally proves that, yes, some non-avian dinosaurs were still alive on impact day, and had not gone extinct due to unrelated climate changes slightly earlier. (Last week, my 6-year-old daughter sang a song in a school play about how “no one knows what killed the dinosaurs”—the verses ran through the asteroid and several other possibilities. I wonder if they’ll retire that song next year.) If you haven’t yet read it, the New Yorker piece on this is a must.

I’m happy that my good friend Zach Weinersmith (legendary author of SMBC Comics), as well as the GMU economist Bryan Caplan (rabblerousing author of The Case Against Education, which I reviewed here), have a new book out: a graphic novel that makes a moral and practical case for open borders (!). Their book is now available for pre-order, and at least at some point cracked Amazon’s top 10. Just last week I met Bryan for the first time, when he visited UT Austin to give a talk based on the book. He said that meeting me (having known me only from the blog) was like meeting a fictional character; I said the feeling was mutual. And as for Bryan’s talk? It felt great to spend an hour visiting a fantasyland where the world’s economies are run by humane rationalist technocrats, and where walls are going down rather than up.

I’m happy that, according to this Vanity Fair article, Facebook will still ban you for writing that “men are scum” or that “women are scum”—having ultimately rejected the demands of social-justice activists that it ban only the latter sentence, not the former. According to the article, everyone on Facebook’s Community Standards committee agreed with the activists that this was the right result: dehumanizing comments about women have no place on the platform, while (superficially) dehumanizing comments about men are an important part of feminist consciousness-raising that require protection. The problem was simply that the committee couldn’t come up with any general principle that would yield that desired result, without also yielding bad results in other cases.

I’m happy that the 737 Max jets are grounded and will hopefully be fixed, with no thanks to the FAA. Strangely, while this was still the top news story, I gave a short talk about quantum computing to a group of Boeing executives who were visiting UT Austin on a previously scheduled trip. The title of the session they put me in? “Disruptive Computation.”

I’m happy that Greta Thunberg, the 15-year-old Swedish climate activist, has attracted a worldwide following and might win the Nobel Peace Prize. I hope she does—and more importantly, I hope her personal story will help galvanize the world into accepting things that it already knows are true, as with the example of Anne Frank (or for that matter, Gandhi or MLK). Confession: back when I was an adolescent, I often daydreamed about doing exactly what Thunberg is doing right now, leading a worldwide children’s climate movement. I didn’t, of course. In my defense, I wouldn’t have had the charisma for it anyway—and also, I got sidetracked by even more pressing problems, like working out the quantum query complexity of recursive Fourier sampling. But fate seems to have made an excellent choice in Thunberg. She’s not the prop of any adult—just a nerdy girl with Asperger’s who formed the idea to sit in front of Parliament every day to protest the destruction of the world, because she couldn’t understand why everyone else wasn’t.

I’m happy that the college admissions scandal has finally forced Americans to confront the farcical injustice of our current system—a system where elite colleges pretend to peer into applicants’ souls (or the souls of the essay consultants hired by the applicants’ parents?), and where they preen about the moral virtue of their “holistic, multidimensional” admissions criteria, which amount in practice to shutting out brilliant working-class Asian kids in favor of legacies and rich badminton players. Not to horn-toot, but Steven Pinker and I tried to raise the alarm about this travesty five years ago (see for example this post), and were both severely criticized for it. I do worry, though, that people will draw precisely the wrong lessons from the scandal. If, for example, a few rich parents resorted to outright cheating on the SAT—all their other forms of gaming and fraud apparently being insufficient—then the SAT itself must be to blame so we should get rid of it. In reality, the SAT (whatever its flaws) is almost the only bulwark we have against the complete collapse of college admissions offices into nightclub bouncers. This Quillette article says it much better than I can.

I’m happy that there will be a symposium from May 6-9 at the University of Toronto, to honor Stephen Cook and the (approximate) 50th anniversary of the discovery of NP-completeness. I’m happy that I’ll be attending and speaking there. If you’re interested, there’s still time to register!

Finally, I’m happy about the following “Sierpinskitaschen” baked by CS grad student and friend-of-the-blog Jess Sorrell, and included with her permission (Jess says she got the idea from Debs Gardner).

### Scott Aaronson — Just says in P

Recently a Twitter account called justsaysinmice started. The only thing this account does is repost breathless news articles about medical research breakthroughs that fail to mention that the effect in question was only observed in mice, and then add the words “IN MICE” to them. Simple concept, but it already seems to be changing the conversation about science reporting.

It occurred to me that we could do something analogous for quantum computing. While my own deep-seated aversion to Twitter prevents me from doing it myself, which of my readers is up for starting an account that just reposts one overhyped QC article after another, while appending the words “A CLASSICAL COMPUTER COULD ALSO DO THIS” to each one?

### Matt Strassler — A Non-Expert’s Guide to a Black Hole’s Silhouette

[Note added April 16: some minor improvements have been made to this article as my understanding has increased, specifically concerning the photon-sphere, which is the main region from which the radio waves are seen in the recently released image. See later blog posts for the image and its interpretation.]

About fifteen years ago, when I was a professor at the University of Washington, the particle physics theorists and the astronomer theorists occasionally would arrange to have lunch together, to facilitate an informal exchange of information about our adjacent fields. Among the many enjoyable discussions, one I was particularly excited about — as much as an amateur as a professional — was that in which I learned of the plan to make some sort of image of a black hole. I was told that this incredible feat would likely be achieved by 2020. The time, it seems, has arrived.

The goal of this post is to provide readers with what I hope will be a helpful guide through the foggy swamp that is likely to partly obscure this major scientific result. Over the last days I’ve been reading what both scientists and science journalists are writing in advance of the press conference Wednesday morning, and I’m finding many examples of jargon masquerading as English, terms poorly defined, and phrasing that seems likely to mislead. As I’m increasingly concerned that many non-experts will be unable to understand what is presented tomorrow, and what the pictures do and do not mean, I’m using this post to answer a few questions that many readers (and many of these writers) have perhaps not thought to ask.

A caution: I am not an expert on this subject. At the moment, I’m still learning about the more subtle issues. I’ll try to be clear about when I’m on the edge of my knowledge, and hopefully won’t make any blunders [but experts, please point them out if you see any!]

Which black holes are being imaged?

The initial plan behind the so-called “Event Horizon Telescope” (the name deserves some discussion; see below) has been to make images of two black holes at the centers of galaxies (the star-cities in which most stars are found.) These are not black holes formed by the collapse of individual stars, such as the ones whose collisions have been observed through their gravitational waves. Central galactic black holes are millions or even billions of times larger!  The ones being observed are

1. the ‘large and close’ black hole at the center of the Milky Way (the giant star-city we call home), and
2. the immense but much more distant black hole at the center of M87 (a spectacularly big star-megalopolis.)

Left: the Milky Way as seen in the night sky; we see our galaxy from within, and so cannot see its spiral shape directly.  The brightest region is toward the center of the galaxy, and deep within it is the black hole of interest, as big as a large star but incredibly small in this image.  Right: the enormous elliptically-shaped galaxy M87, which sports an enormous black hole (but again, incredibly small in this image) at its center.  The blue stripe is a jet of material hurled at near-light-speed from the region very close to the black hole.

Why go after both of these black holes at the same time? Just as the Sun and Moon appear the same size in our sky because of an accident — the Sun’s greater distance is almost precisely compensated by its greater size — these two black holes appear similarly sized, albeit very tiny, from our vantage point.

Our galaxy’s central black hole has a mass of about four million Suns, and its size is about twenty times wider than our Sun (in contrast to the black holes whose gravitational waves were observed, which are the size of a human city.) But from the center of our galaxy, it takes light tens of thousands of years to reach Earth, and at such a great distance, this big black hole appears as small as would a tiny grain of sand from a plane at cruising altitude. Try to see the sand grains on a beach from a plane window!

Meanwhile, although M87 lies about two thousand times further away, its central black hole has a mass and radius about two thousand times larger than our galaxy’s black hole. Consequently it appears roughly the same size on the sky.
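As a rough sanity check of this coincidence, here is a back-of-the-envelope sketch in Python. The specific numbers are my own round assumptions (roughly four million solar masses at 26,000 light-years for our galaxy’s black hole, and roughly 6.5 billion solar masses at 53 million light-years for M87’s), not official figures:

```python
# Back-of-the-envelope check: do the two black holes really appear
# about the same size on the sky? All input numbers are approximate.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg
ly = 9.461e15        # one light-year, m

def schwarzschild_radius(mass_kg):
    """Event-horizon radius r_s = 2GM/c^2 for a non-rotating black hole."""
    return 2 * G * mass_kg / c**2

def angular_diameter_uas(mass_kg, distance_m):
    """Apparent angular diameter 2*r_s/d of the horizon, in microarcseconds."""
    theta_rad = 2 * schwarzschild_radius(mass_kg) / distance_m
    return theta_rad * (180 / 3.141592653589793) * 3600 * 1e6

# Assumed round numbers (see lead-in above):
sgr_a = angular_diameter_uas(4e6 * M_sun, 26_000 * ly)    # Milky Way's black hole
m87   = angular_diameter_uas(6.5e9 * M_sun, 53e6 * ly)    # M87's black hole
print(f"Sgr A*: {sgr_a:.0f} microarcsec;  M87: {m87:.0f} microarcsec")
```

Both come out at a couple of tens of microarcseconds, within a factor of two of each other, which is the coincidence the astronomers are exploiting.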

We may get our first images of both black holes at Wednesday’s announcement, though it is possible that so far only one image is ready for public presentation.

How can one see something as black as a black hole?!?

First of all, aren’t black holes black? Doesn’t that mean they don’t emit (or reflect) any light? Yes, and yes. [With a caveat — Hawking radiation — but while that’s very important for extremely small black holes, it’s completely irrelevant for these ones.]

So how can we see something that’s black against a black night sky? Well, a black hole off by itself would indeed be almost invisible.  [Though detectable because its gravity bends the light of objects behind it, proving it was a black hole and not a clump of, say, dark matter would be tough.]

But the centers of galaxies have lots of ‘gas’ — not quite what we call ‘gas’ in school, and certainly not what we put in our cars. ‘Gas’ is the generalized term astronomers use for any collection of independently wandering atoms (or ions), atomic nuclei, and electrons; if it’s not just atoms it should really be called plasma, but let’s not worry about that here. Some of this ‘gas’ is inevitably orbiting the black hole, and some of it is falling in (and, because of complex interactions with magnetic fields, some is being blasted outward before it falls in.) That gas, unlike the black hole, will inevitably glow — it will emit lots of light. What the astronomers are observing isn’t the black hole; they’re observing light from the gas!

And by light, I don’t only mean what we can see with human eyes. The gas emits electromagnetic waves of all frequencies, including not only the visible frequencies but also much higher frequencies, such as those we call X-rays, and much lower frequencies, such as those we call radio waves. To detect these invisible forms of light, astronomers build all sorts of scientific instruments, which we call ‘telescopes’ even though they don’t involve looking into a tube as with traditional telescopes.

Is this really a “photograph” of [the gas in the neighborhood of] a black hole?

Yes and (mostly) no.  What you’ll be shown is not a picture you could take with a cell-phone camera if you were in a nearby spaceship.  It’s not visible light that’s being observed.  But it is invisible light — radio waves — and since all light, visible and not, is made from particles called ‘photons’, technically you could still say it is a “photo”-graph.

As I said, the telescope being used in this effort doesn’t have a set of mirrors in a tube like your friendly neighbor’s amateur telescope. Instead, it uses radio receivers to detect electromagnetic waves that have frequencies above what your traditional radio or cell phone can detect [in the hundred gigahertz range, over a thousand times above what your FM radio is sensitive to.]  Though some might call them microwaves, let’s just call them radio waves; it’s just a matter of definition.

So the images you will see are based on the observation of electromagnetic waves at these radio frequencies, but they are turned into something visible for us humans using a computer. That means the color of the image is inserted by the computer user and will be arbitrary, so pay it limited attention. It’s not the color you would see if you were nearby.  Scientists will choose a combination of the color and brightness of the image so as to indicate the brightness of the radio waves observed.

If you were nearby and looked toward the black hole, you’d see something else. The gas would probably appear colorless — white-hot — and the shape and brightness, though similar, wouldn’t be identical.

If I had radios for eyes, is this what I would see?

Suppose you had radio receivers for eyes instead of the ones you were born with; is this image what you would see if you looked at the black hole from a nearby planet?

Well, to some extent — but still, not exactly.  There’s another very important way that what you will see is not a photograph. It is so difficult to make this measurement that the image you will see is highly processed — that is to say, it will have been cleaned up and adjusted using special mathematics and powerful computers. Various assumptions are key ingredients in this image-processing. Thus, you will not be seeing the ‘true’ appearance of the [surroundings of the] black hole. You will be seeing the astronomers’ best guess as to this appearance based on partial information and some very intelligent guesswork. Such guesswork may be right, but as we may learn only over time, some of it may not be.

This guesswork is necessary.  To make a nice clear image of something so tiny and faint using radio waves, you’d naively need a radio receiver as large as the Earth. A trick astronomers use when looking at distant objects is to build gigantic arrays of large radio receivers, which can be combined together to make a ‘telescope’ much larger and more powerful than any one receiver. The tricks for doing this efficiently are beyond what I can explain here, but involve the term ‘interferometry’. Examples of large radio telescope arrays include ALMA, the center part of which can be seen in this image, which was built high up on a plateau between two volcanoes in the Atacama desert.

But even ALMA isn’t up to the task. And we can’t make a version of ALMA that covers the Earth. So the next best thing is to use ALMA and all of its friends, which are scattered at different locations around the Earth — an incomplete array of single and multiple radio receivers, combined using all the tricks of interferometry. This is a bit like (and not like) using a telescope that is powerful but has large pieces of its lens missing. You can get an image, but it will be badly distorted.
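Where does the “naively as large as the Earth” claim come from? The diffraction limit: a telescope of aperture D observing at wavelength λ cannot resolve angles much smaller than roughly λ/D radians. A quick sketch, assuming an observing wavelength of about 1.3 millimeters (radio waves near 230 gigahertz; these numbers are my assumptions, chosen for illustration):

```python
# Diffraction-limit sketch: angular resolution ~ wavelength / aperture.
lam = 1.3e-3          # assumed observing wavelength, m (~230 GHz radio waves)
D_earth = 1.27e7      # diameter of the Earth, m (the largest possible "dish")

theta_rad = lam / D_earth                                   # resolution, radians
theta_uas = theta_rad * (180 / 3.141592653589793) * 3600 * 1e6  # microarcseconds
print(f"Resolution of an Earth-sized dish: ~{theta_uas:.0f} microarcseconds")
```

That comes out around twenty microarcseconds — just about the apparent size of the two black holes, which is why nothing smaller than an Earth-spanning network of receivers will do.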

To figure out what you’re seeing, you must use your knowledge of your imperfect lens, and work backwards to figure out what your distorted image really would have looked like if you had a perfect lens.

Even that’s not quite enough: to do this, you need to have a pretty good guess about what you were going to see. That is where you might go astray; if your assumptions are wrong, you might massage the image to look like what you expected instead of how it really ought to look.   [Is this a serious risk?   I’m not yet expert enough to know the details of how delicate the situation might be.]

It is possible that in coming days and weeks there will be controversies about whether the image-processing techniques used and the assumptions underlying them have created artifacts in the images that really shouldn’t be there, or removed features that should have been there. This could lead to significant disagreements as to how to interpret the images. [Consider these concerns a statement of general prudence based on past experience; I don’t have enough specific expertise here to give a more detailed opinion.]

So in summary, this is not a photograph you’d see with your eyes, and it’s not a complete unadulterated image — it’s a highly-processed image of what a creature with radio eyes might see.  The color is arbitrary; some combination of color and brightness will express the intensity of the radio waves, nothing more.  Treat the images with proper caution and respect.

Will the image show what [the immediate neighborhood of] a black hole looks like?

Oh, my friends and colleagues! Could we please try to be a little more careful using the phrase ‘looks like’? The term has two very different meanings, and in contexts like this one, it really, really matters.

Let me put a spoon into a glass of water: on the left, I’ve drawn a diagram of what it looks like, and on the right, I’ve put a photograph of what it looks like.

On the left, a sketchy drawing of a spoon in water, showing what it “looks like” in truth; on the right, what it “looks like” to your eyes, distorted by the bending of the light’s path between the spoon and my camera.

You notice the difference, no doubt. The spoon on the left looks like a spoon; the spoon on the right looks like something invented by a surrealist painter.  But it’s just the effect of water and glass on light.

What’s on the left is a drawing of where the spoon is located inside the glass; it shows not what you would see with your eyes, but rather a depiction of the ‘true location’ of the spoon. On the right is what you will see with your eyes and brain, which is not showing you where the objects are in truth but rather is showing you where the light from those objects is coming from. The truth-depiction is drawn as though the light from the object goes straight from the object into your eyes. But when the light from an object does not take a straight path from the object to you — as in this case, where the light’s path bends due to the interaction of the light with the water and the glass — then the image created in your eyes or camera can significantly differ from a depiction of the truth.

The same issue arises, of course, with any sort of lens or mirror that isn’t flat. A room seen through a curved lens, or through an old, misshapen window pane, can look pretty strange.  And gravity? Strong gravity near a black hole drastically modifies the path of any light traveling close by!

In the figure below, the left panel shows a depiction of what we think the region around a black hole typically looks like in truth. There is a spinning disk of gas, called the accretion disk, a sort of holding station for the gas. At any moment, a small fraction of the gas, at the inner edge of the disk, has crept too close to the black hole and is starting to spiral in. There are also usually jets of material flying out, roughly aligned with the axis of the black hole’s rapid rotation and its magnetic field. As I mentioned above, that material is being flung out of the black hole’s neighborhood (not out of the black hole itself, which would be impossible.)

Left: a depiction ‘in truth’ of the neighborhood of a black hole, showing the disk of slowly in-spiraling gas and the jets of material funneled outward by the black hole’s magnetic field.  The black hole is not directly shown, but is significantly smaller than the inner edge of the disk.  The color is not meaningful.  Right: a simulation by Hotaka Shiokawa of how such a black hole may appear to the Event Horizon Telescope [if its disk is tipped up a bit more than in the image at left.]  The color is arbitrary; mainly the brightness matters.  The left side of the disk appears brighter than the right side due to a ‘Doppler effect’; on the left the gas is moving toward us, increasing the intensity of the radio waves, while on the right side it is moving away.  The dark area at the center is the black hole’s sort-of-silhouette; see below.

The image you will be shown, however, will perhaps look like the one on the right. That is an image of the radio waves as observed here at Earth, after the waves’ paths have been wildly bent — warped by the intense gravity near the black hole. Just as with any lens or mirror or anything similar, what you will see does not directly reveal what is truly there. Instead, you must infer what is there from what you see.

Just as you infer, when you see the broken twisted spoon in the glass, that probably the spoon is not broken or twisted, and the water and glass have refracted the light in familiar ways, so too we must make assumptions to understand what we’re really looking at in truth after we see the images tomorrow.

How serious are these assumptions?  Certainly, at their first attempt, astronomers will assume Einstein’s theory of gravity, which predicts how the light is bent around a black hole, is correct. But the details of what we infer from what we’re shown might depend upon whether Einstein’s formulas are precisely right. It also may depend on the accuracy of our understanding of and assumptions about accretion disks. Further complicating the procedure is that the rate and axis of the black hole’s rotation affects the details of the bending of the light, and since we’re not sure of the rotation yet for these black holes, that adds to the assumptions that must gradually be disentangled.

Because of these assumptions, we will not have an unambiguous understanding of the true nature of what appears in these first images.

Are we seeing the ‘shadow’ of a black hole?

A shadow: that’s what astronomers call it, but as far as I can tell, this word is jargon masquerading as English… the most pernicious type of jargon.

What’s a shadow, in English? Your shadow is a dark area on the ground created by you blocking the light from the Sun, which emits light and illuminates the area around you. How would I see your shadow? I’d look at the ground — not at you.

This is not what we are doing in this case. The gas is glowing, illuminating the region. The black hole is ‘blocking’ [caution! see below] some of the light. We’re looking straight toward the black hole, and seeing dark areas where illumination doesn’t reach us. This is more like looking at someone’s silhouette, not someone’s shadow!

With the Sun providing a source of illumination, a person standing between you and the Sun would appear as a silhouette that blocks part of the Sun, and would also create a shadow on the ground. [I’ve drawn the shadow slightly askew to avoid visual confusion.]  In the black hole images, illumination is provided by the glowing gas, and we’ll see a sort-of-silhouette [but see below!!!] of the black hole.  There’s nothing analogous to the ground, or to the person’s shadow, in the black hole images.

That being said, it’s much more clever than a simple silhouette, because of all that pesky bending of the light that the black hole is doing.  In an ordinary silhouette, the light from the illuminator travels in straight lines, and an object blocks part of the light.   But a black hole does not block your view of what’s behind it;  the light from the gas behind it gets bent around it, and thus can be seen after all!

Still, after you calculate all the bending, you find out that there’s a dark area from which no light emerges, which I’ll informally call a quasi-silhouette.  Just outside this is a ‘photon-sphere’, which creates a bright ring; I’ll explain this elsewhere.   That resembles what happens with certain lenses, in contrast to the person’s silhouette shown above, where the light travels in straight lines.  Imagine that a human body could bend light in such a way; a whimsical depiction of what that might look like is shown below:

If a human body could bend light the way a black hole does, it would distort the Sun’s appearance.  The light we’d expect to be blocked would instead be warped around the edges.  The dark area, no longer a simple outline of the body, would take on a complicated shape.

Note also that the black hole’s quasi-silhouette probably won’t be entirely dark. If material from the accretion disk (or a jet pointing toward us) lies between us and the black hole, it can emit light in our direction, partly filling in the dark region.

Thus the quasi-silhouette we’ll see in the images is not the outline of the black hole’s edge, but an effect of the light bending, and is in fact considerably larger than the black hole.  In truth it may be as much as 50% larger in radius than the event horizon, and the silhouette as seen in the image may appear 2.5 to 5 times larger (depending on how fast the black hole rotates) than the true event horizon — all due to the bent paths of the radio waves.

Interestingly, it turns out that the details of how the black hole is rotating don’t much affect the size of the quasi-silhouette. The black hole in the Milky Way is already understood well enough that astronomers know how big its quasi-silhouette ought to appear, even though we don’t know its rotation speed and axis.  The quasi-silhouette’s size in the image will therefore be an important test of Einstein’s formulas for gravity, even on day one.  If the size disagrees with expectations, expect a lot of hullabaloo.
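For the curious, here is where the size factors quoted above come from, for a non-rotating black hole in Einstein’s theory (a standard textbook result, which I’m sketching here rather than deriving): the photon-sphere sits at 1.5 times the event-horizon radius, and once the light-bending is accounted for, the dark region appears with a radius of √27/2 horizon radii; for a rapidly rotating black hole the horizon itself shrinks toward half the non-rotating value, which is what pushes the apparent ratio toward the upper end of the quoted range.

```python
import math

# Standard Schwarzschild (non-rotating) results, in units of the
# event-horizon radius r_s = 2GM/c^2:
r_photon_sphere = 1.5            # photon sphere at 1.5 r_s (the "50% larger")
r_shadow = math.sqrt(27) / 2     # apparent radius of the dark region, ~2.6 r_s
print(f"photon sphere: {r_photon_sphere} r_s")
print(f"apparent quasi-silhouette radius: {r_shadow:.2f} r_s")

# For a maximally rotating black hole the horizon shrinks toward r_s / 2
# while the shadow stays roughly the same size, so the shadow-to-horizon
# ratio grows toward the upper end of the quoted 2.5-to-5 range:
ratio_max_spin = r_shadow / 0.5
print(f"shadow/horizon ratio at maximal spin: ~{ratio_max_spin:.1f}")
```

So the ~2.6 figure is the non-rotating case, and the spread up to roughly 5 reflects how small the horizon itself can become for a rapidly spinning black hole.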

What is the event horizon, and is it present in the image?

The event horizon of a black hole, in Einstein’s theory of gravity, is not an object. It’s the edge of a region of no-return, as I’ve explained here. Anything that goes past that point can’t re-emerge, and nothing that happens inside can ever send any light or messages back out.

Despite what some writers are saying, we’re not expecting to see the event horizon in the images. As is hopefully clear by now, what astronomers are observing is (a) the radio waves from the gas near the black hole, after the waves have taken strongly curved paths, and (b) a quasi-silhouette of the black hole from which radio waves don’t emerge. But as I explained above, this quasi-silhouette is considerably larger than the event horizon, both in truth and in appearance. The event horizon does not emit any light, and it sits well inside the quasi-silhouette, not at its edge.

Still, what we’ll be seeing is closer to the event horizon than anything we’ve ever seen before, which is really exciting!   And if the silhouette has an unexpected appearance, we just might get our first hint of a breakdown of Einstein’s understanding of event horizons.  Don’t bet on it, but you can hope for it.

Can we hope to see the singularity of a black hole?

No, for two reasons.

First, there probably is no singularity in the first place. It drives me nuts when people say there’s a singularity inside a black hole; that’s just wrong. The correct statement is that in Einstein’s theory of gravity, singularities (i.e. unavoidable infinities) arise in the math — in the formulas for black holes (or more precisely, in the solutions to those formulas that describe black holes.) But a singularity in the math does not mean that there’s a singularity in nature!  It usually means that the math isn’t quite right — not that anyone made a mistake, but that the formulas that we’re using aren’t appropriate, and need to be modified in some way, because of some natural phenomenon that we don’t yet understand.

A singularity in a formula implies a mystery in nature — not a singularity in nature.

In fact, historically, in every previous case where singularities have appeared in formulas, it simply meant that the formula (or solution) was not accurate in that context, or wasn’t being used correctly. We already know that Einstein’s theory of gravity can’t be complete (it doesn’t accommodate quantum physics, which is also a part of the real world) and it would be no surprise if its incompleteness is responsible for these infinities. The math singularity merely signifies that the physics very deep inside a black hole isn’t understood yet.

[The same issue afflicts the statement that the Big Bang began with a singularity; the solution to the equations has a singularity, yes, but that’s very far from saying that nature’s Big Bang actually began with one.]

Ok, so how about revising the question: is there any hope of seeing the region deep inside the black hole where Einstein’s equations have a singularity? No. Remember what’s being observed is the stuff outside the black hole, and the black hole’s quasi-silhouette. What happens inside a black hole stays inside a black hole. Anything else is inference.

[More precisely, what happens inside a huge black hole stays inside a black hole for eons.  For tiny black holes it comes out sooner, but even so it is hopelessly scrambled in the form of ‘Hawking radiation.’]

Any other questions?

Maybe there are some questions that are bothering readers that I haven’t addressed here?  I’m super-busy this afternoon and all day Wednesday with non-physics things, but maybe if you ask the question early enough I can address it here before the press conference (at 9 am Wednesday, New York City time).   Also, if you find my answers confusing, please comment; I can try to further clarify them for later readers.

It’s a historic moment, or at least the first stage in a historic process.  To me, as I hope to all of you, it’s all very exciting and astonishing how the surreal weirdness of Einstein’s understanding of gravity, and the creepy mysteries of black holes, have suddenly, in just a few quick years, become undeniably real!

### Matt Strassler — The Black Hole ‘Photo’: What Are We Looking At?

The short answer: I’m really not sure yet.  [This post is now largely superseded by the next one, in which some of the questions raised below have now been answered.]

Neither are some of my colleagues who know more about the black hole geometry than I do. And at this point we still haven’t figured out what the Event Horizon Telescope experts do and don’t know about this question… or whether they agree amongst themselves.

[Note added: last week, a number of people pointed me to a very nice video by Veritasium illustrating some of the features of black holes, accretion disks and the warping of their appearance by the gravity of the black hole.  However, Veritasium’s video illustrates a non-rotating black hole with a thin accretion disk that is edge-on from our perspective; and this is definitely NOT what we are seeing!]

As I emphasized in my pre-photo blog post (in which I described carefully what we were likely to be shown, and the subtleties involved), this is not a simple photograph of ‘what’s actually there.’ We all agree that what we’re looking at is light from some glowing material around the solar-system-sized black hole at the heart of the galaxy M87.  But that light has been wildly bent on its path toward Earth, and so — just like a room seen through an old, warped window, and a dirty one at that — it’s not simple to interpret what we’re actually seeing. Where, exactly, is the material ‘in truth’, such that its light appears where it does in the image? Interpretation of the image is potentially ambiguous, and certainly not obvious.

The naive guess as to what to expect — which astronomers developed over many years, based on many studies of many suspected black holes — is crudely illustrated in the figure at the end of this post.  Material around a black hole has two main components:

• An accretion disk of ‘gas’ (really plasma, i.e. a very hot collection of electrons, protons, and other atomic nuclei), which may be thin and concentrated, or thick and puffy, or something more complicated.  The disk extends inward to within a few times the radius of the black hole’s event horizon, the point of no return; but how close it can get depends on how fast the black hole rotates.
• Two oppositely-directed jets of material, created somehow by material from the disk being concentrated and accelerated by magnetic fields tied up with the black hole and its accretion disk; the jets begin not far from the event horizon, but then extend outward all the way to the outer edges of the entire galaxy.

But even if this is true, it’s not at all obvious (at least to me) what these objects look like in an image such as we saw Wednesday. As far as I am currently aware, their appearance in the image depends on

• Whether the disk is thick and puffy, or thin and concentrated;
• How far the disk extends inward and outward around the black hole;
• The process by which the jets are formed and where exactly they originate;
• How fast the black hole is spinning;
• The orientation of the axis around which the black hole is spinning;
• The typical frequencies of the radio waves emitted by the disk and by the jets (compared to the frequency, about 230 Gigahertz, observed by the Event Horizon Telescope);

and perhaps other things. I can’t yet figure out what we do and don’t know about these things; and it doesn’t help that some of the statements made by the EHT scientists in public and in their six papers seem contradictory (and I can’t yet say whether that’s because of typos, misstatements by them, or [most likely] misinterpretations by me.)

So here’s the best I can do right now, for myself and for you. Below is a figure that is nothing but an illustration of my best attempt so far to make sense of what we are seeing. You can expect that some fraction of this figure is wrong. Increasingly I believe this figure is correct in cartoon form, though the picture on the left is too sketchy right now and needs improvement.  What I’ll be doing this week is fixing my own misconceptions and trying to become clear on what the experts do and don’t know. Experts are more than welcome to set me straight!

In short — this story is not over, at least not for me. As I gain a clearer understanding of what we do and don’t know, I’ll write more about it.

My personal confused and almost certainly inaccurate understanding [the main inaccuracy is that the disk and jets are fatter than shown, and connected to one another near the black hole; that’s important because the main illumination source may be the connection region; also jets aren’t oriented quite right] of how one might interpret the black hole image; all elements subject to revision as I learn more. Left: the standard guess concerning the immediate vicinity of M87’s black hole: an accretion disk oriented nearly face-on from Earth’s perspective, jets aimed nearly at and away from us, and a rotating black hole at the center.  The orientation of the jets may not be correct relative to the photo.  Upper right: The image after the radio waves’ paths are bent by gravity.  The quasi-silhouette of the black hole is larger than the ‘true’ event horizon, a lot of radio waves are concentrated at the ‘photon-sphere’ just outside (brighter at the bottom due to the black hole spinning clockwise around an axis slightly askew to our line of sight); some additional radio waves from the accretion disk and jets further complicate the image. Most of the disk and jets are too dim to see.  Lower Right: This image is then blurred out by the Event Horizon Telescope’s limitations, partly compensated for by heavy-duty image processing.

### Matt Strassler — The Black Hole ‘Photo’: Seeing More Clearly

Ok, after yesterday’s post, in which I told you what I still didn’t understand about the Event Horizon Telescope (EHT) black hole image (see also the pre-photo blog post in which I explained pedagogically what the image was likely to show and why), today I can tell you that quite a few of the gaps in my understanding are filling in (thanks mainly to conversations with Harvard postdoc Alex Lupsasca and science journalist Davide Castelvecchi, and to direct answers from professor Heino Falcke, who leads the Event Horizon Telescope Science Council and co-wrote a founding paper in this subject).  And I can give you an update to yesterday’s very tentative figure.

First: a very important point, to which I will return in a future post, is that as I suspected, it’s not at all clear what the EHT image really shows.   More precisely, assuming Einstein’s theory of gravity is correct in this context:

• The image itself clearly shows a black hole’s quasi-silhouette (called a ‘shadow’ in expert jargon) and its bright photon-sphere, where photons [particles of light — of all electromagnetic waves, including radio waves] can be gathered and focused.
• However, all the light (including the observed radio waves) coming from the photon-sphere was emitted from material well outside the photon-sphere; and the image itself does not tell you where that material is located.  (To quote Falcke: ‘this is a blessing and a curse’; insensitivity to the illumination source makes it easy to interpret the black hole’s role in the image but hard to learn much about the material near the black hole.) It’s a bit analogous to seeing a brightly shining metal ball while not being able to see what it’s being lit by… except that the photon-sphere isn’t an object.  It’s just a result of the play of the light [well, radio waves] directed by the bending effects of gravity.  More on that in a future post.
• When you see a picture of an accretion disk and jets drawn to illustrate where the radio waves may come from, keep in mind that it involves additional assumptions — educated assumptions that combine many other measurements of M87’s black hole with simulations of matter, gravity and magnetic fields interacting near a black hole.  But we should be cautious: perhaps not all the assumptions are right.  The image shows no conflicts with those assumptions, but neither does it confirm them on its own.

Just to indicate the importance of these assumptions, let me highlight a remark made at the press conference that the black hole is rotating quickly, clockwise from our perspective.  But (as the EHT papers state) if one doesn’t make some of the above-mentioned assumptions, one cannot conclude from the image alone that the black hole is actually rotating.  The interplay of these assumptions is something I’m still trying to get straight.

Second, if you buy all the assumptions, then the picture I drew in yesterday’s post is mostly correct except (a) the jets are far too narrow, and shown overly disconnected from the disk, and (b) they are slightly mis-oriented relative to the orientation of the image.  Below is an improved version of this picture, probably still not the final one.  The new features: the jets (now pointing in the right directions relative to the photo) are fatter and not entirely disconnected from the accretion disk.  This is important because the dominant source of illumination of the photon-sphere might come from the region where the disk and jets meet.

Updated version of yesterday’s figure: main changes are the increased width and more accurate orientation of the jets.  Working backwards: the EHT image (lower right) is interpreted, using mainly Einstein’s theory of gravity, as (upper right) a thin photon-sphere of focused light surrounding a dark patch created by the gravity of the black hole, with a little bit of additional illumination from somewhere.  The dark patch is 2.5 – 5 times larger than the event horizon of the black hole, depending on how fast the black hole is rotating; but the image itself does not tell you how the photon-sphere is illuminated or whether the black hole is rotating.  Using further assumptions, based on previous measurements of various types and computer simulations of material, gravity and magnetic fields, a picture of the black hole’s vicinity (upper left) can be inferred by the experts. It consists of a fat but tenuous accretion disk of material, almost face-on, some of which is funneled into jets, one heading almost toward us, the other in the opposite direction.  The material surrounds but is somewhat separated from a rotating black hole’s event horizon.  At this radio frequency, the jets and disk are too dim in radio waves to see in the image; only at (and perhaps close to) the photon-sphere, where some of the radio waves are collected and focused, are they bright enough to be easily discerned by the Event Horizon Telescope.

### Doug Natelson — Brief items, + "grant integrity"

As I have been short on time to do as much writing of my own as I would like, here are links to some good, fun articles:

• Ryan Mandelbaum at Gizmodo has a very good, lengthy article about the quest for high temperature superconductivity in hydrogen-rich materials.
• Natalie Wolchover at Quanta has a neat piece about the physics of synchronization
• Adam Mann in Nat Geo has a brief piece pointing toward this PNAS paper arguing the existence of a really weird state of matter for potassium under certain conditions.  Basically the structure consists of a comparatively well-defined framework of potassium with 1D channels, and the channels are filled with (and leak in 3D) liquid-like mobile potassium atoms.  Weird.

Not fun:  US Senator Grassley is pushing for an "expanded grant integrity probe" of the National Science Foundation.   The stated issue is concern that somehow foreign powers may be able to steal the results of federally funded research.  Now, there are legitimate concerns about intellectual property theft, industrial espionage, and the confidentiality of the grant review process.  The disturbing bit is the rhetoric about foreign actors learning about research results, and the ambiguity about whether international students fall under that label in his or other eyes.  Vague wording like that also appeared in a DOE memo reported by Science earlier in the year.  International students and scholars are an enormous source of strength for the US, not a weakness or vulnerability.  Policy-makers need to be reminded of this, emphatically.

### John Preskill — Long live Yale’s cemetery

Call me morbid, but the moment I arrived at Yale, I couldn’t wait to visit the graveyard.

I visited campus last February, to present the Yale Quantum Institute (YQI) Colloquium. The YQI occupies a building whose stone exterior honors Yale’s Gothic architecture and whose sleekness defies it. The YQI has theory and experiments, seminars and colloquia, error-correcting codes and small-scale quantum computers, mugs and laptop bumper stickers. Those assets would have drawn me like honey. But my host, Steve Girvin, piled molasses, fudge, and cookie dough on top: “you should definitely reserve some time to go visit Josiah Willard Gibbs, Jr., Lars Onsager, and John Kirkwood in the Grove Street Cemetery.”

Gibbs, Onsager, and Kirkwood pioneered statistical mechanics. Statistical mechanics is the physics of many-particle systems, energy, efficiency, and entropy, a measure of order. Statistical mechanics helps us understand why time flows in only one direction. As a colleague reminded me at a conference about entropy, “You are young. But you will grow old and die.” That conference featured a field trip to a cemetery at the University of Cambridge. My next entropy-centric conference took place next to a cemetery in Banff, Canada. A quantum-thermodynamics conference included a tour of an Oxford graveyard.1 (That conference reincarnated in Santa Barbara last June, but I found no cemeteries nearby. No wonder I haven’t blogged about it.) Why shouldn’t a quantum-thermodynamics colloquium lead to the Grove Street Cemetery?

Home of the Yale Quantum Institute

The Grove Street Cemetery lies a few blocks from the YQI. I walked from the latter to the former on a morning whose sunshine spoke more of springtime than of February. At one entrance stood a gatehouse that looked older than many of the cemetery’s residents.

“Can you tell me where to find Josiah Willard Gibbs?” I asked the gatekeepers. They handed me a map, traced routes on it, and dispatched me from their lodge. Snow had fallen the previous evening but was losing its battle against the sunshine. I sloshed to a pathway labeled “Locust,” waded along Locust until passing Myrtle, and splashed back and forth until a name caught my eye: “Gibbs.”

One entrance of the Grove Street Cemetery

Josiah Willard Gibbs stamped his name across statistical mechanics during the 1800s. Imagine a gas in a box, a system that illustrates much of statistical mechanics. Suppose that the gas exchanges heat with a temperature-$T$ bath through the box’s walls. After exchanging heat for a long time, the gas reaches thermal equilibrium: Large-scale properties, such as the gas’s energy, quit changing much. Imagine measuring the gas’s energy. What probability does the measurement have of outputting $E$? The Gibbs distribution provides the answer, $e^{ - E / (k_{\rm B} T) } / Z$. The $k_{\rm B}$ denotes Boltzmann’s constant, a fundamental constant of nature. The $Z$ denotes a partition function, which ensures that the probabilities sum to one.
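As a quick numerical illustration (a generic sketch, not anything from Gibbs’s own work): for a system with a handful of energy levels, the Gibbs probabilities follow directly from the formula above.

```python
import numpy as np

def gibbs_probabilities(energies, T, kB=1.0):
    """Return p_i = exp(-E_i / (kB*T)) / Z for each energy level E_i."""
    E = np.asarray(energies, dtype=float)
    # Subtract the minimum energy for numerical stability; the shift
    # cancels when we divide by the partition function Z.
    w = np.exp(-(E - E.min()) / (kB * T))
    return w / w.sum()  # dividing by Z = sum of Boltzmann weights

# Three energy levels, in units where kB = 1
p = gibbs_probabilities([0.0, 1.0, 2.0], T=1.0)
print(p)        # lower-energy states are exponentially more likely
print(p.sum())  # probabilities sum to one, thanks to Z
```

Raising $T$ flattens the distribution toward uniform; lowering it concentrates all the probability in the ground state.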

Gibbs lent his name to more than probabilities. A function of probabilities, the Gibbs entropy, prefigured information theory. Entropy features in the Gibbs free energy, which dictates how much work certain thermodynamic systems can perform. A thermodynamic system has many properties, such as temperature and pressure. How many can you control? The answer follows from the Gibbs-Duhem relation. You’ll be able to follow the Gibbs walk, a Yale alumnus tells me, once construction on Yale’s physical-sciences complex ends.

Back I sloshed along Locust Lane. Turning left onto Myrtle, then right onto Cedar, led to a tree that sheltered two tombstones. They looked like buddies about to throw their arms around each other and smile for a photo. The lefthand tombstone reported four degrees, eight service positions, and three scientific honors of John Gamble Kirkwood. The righthand tombstone belonged to Lars Onsager:

NOBEL LAUREATE*

[ . . . ]

*ETC.

Onsager extended thermodynamics beyond equilibrium. Imagine gently poking one property of a thermodynamic system. For example, recall the gas in a box. Imagine connecting one end of the box to a temperature-$T$ bath and the other end to a bath at a slightly higher temperature, $T' \gtrsim T$. You’ll have poked the system’s temperature out of equilibrium. Heat will flow from the hotter bath to the colder bath. Particles carry the heat, energy of motion. Suppose that the particles have electric charges. An electric current will flow because of the temperature difference. Similarly, heat can flow because of an electric potential difference, or a pressure difference, and so on. You can cause a thermodynamic system’s elbow to itch, Onsager showed, by tickling the system’s ankle.
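In the linear-response regime, that ankle-to-elbow coupling can be sketched numerically: fluxes respond linearly to thermodynamic forces through a coefficient matrix that, by Onsager’s reciprocal relations, is symmetric. (The numbers below are invented for illustration; only the symmetry is the point.)

```python
import numpy as np

# Hypothetical linear-response (Onsager) matrix for a thermoelectric system:
# row 0 = charge flux, row 1 = heat flux;
# column 0 = electric force, column 1 = thermal force.
L = np.array([[2.0, 0.3],
              [0.3, 5.0]])  # equal off-diagonal entries: Onsager reciprocity

assert np.allclose(L, L.T)  # reciprocal relations: L_ij = L_ji

forces = np.array([1.0, 0.0])  # apply only an electric force...
fluxes = L @ forces
print(fluxes)  # ...yet heat flows too, via the off-diagonal coupling
```

The same off-diagonal coefficient governs the reverse effect: a purely thermal force drives a charge flux of equal strength.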

To Onsager’s left lay John Kirkwood. Kirkwood had defined a quasiprobability distribution in 1933. Quasiprobabilities resemble probabilities but can assume negative and nonreal values. These behaviors can signal nonclassical physics, such as the ability to outperform classical computers. I generalized Kirkwood’s quasiprobability with collaborators. Our generalized quasiprobability describes quantum chaos, thermalization, and the spread of information through entanglement. Applying the quasiprobability across theory and experiments has occupied me for two-and-a-half years. Rarely has a tombstone pleased anyone as much as Kirkwood’s tickled me.

The Grove Street Cemetery opened my morning with a whiff of rosemary. The evening closed with a shot of adrenaline. I met with four undergrad women who were taking Steve Girvin’s course, an advanced introduction to physics. I should have left the conversation bled of energy: Since visiting the cemetery, I’d held six discussions with nine people. But energy can flow backward. The students asked how I’d come to postdoc at Harvard; I asked what they might major in. They described the research they hoped to explore; I explained how I’d constructed my research program. They asked if I’d had to work as hard as they to understand physics; I confessed that I might have had to work harder.

I left the YQI content, that night. Such a future deserves its past; and such a past, its future.

With thanks to Steve Girvin, Florian Carle, and the Yale Quantum Institute for their hospitality.

1Thermodynamics is a physical theory that emerges from statistical mechanics.

## April 16, 2019

### Doug Natelson — This week in the arxiv

A fun paper jumped out at me from last night's batch of preprints on the condensed matter arxiv.

arXiv:1904.06409 - Ivashtenko et al., Origami launcher
A contest at the International Physics Tournament asked participants to compete to see who could launch a standard ping-pong ball the highest using a launcher made from a single A4 sheet of paper (with folds).  The authors give a fun physics analysis of candidate folded structures.  First, they show that a model based on idealized continuum elasticity really does not work well at all, in large part because the act of creasing the paper (a layered fibrous composite) alters its mechanical properties quite a bit.  They then perform an analysis based on a dissipative mechanical model of a folded crease, matched with experimental studies, and are able to do a much better job of predicting how a particular scheme performs in experiments.  Definitely fun.

There were other interesting papers this week as well, but I need to look more carefully at them.

### Backreaction — The LHC has broken physics, according to this philosopher

The Large Hadron Collider (LHC) has not found any new, fundamental particles besides the Higgs boson. This is a slap in the face of thousands of particle physicists who were confident the mega-collider should see more: other new particles, additional dimensions of space, tiny black holes, dark matter, new symmetries, or something else entirely. None of these fancy things have shown up. The

## April 15, 2019

### Backreaction — How Heroes Hurt Science

Einstein, Superhero

Tell a story. That’s the number one advice from and to science communicators, throughout centuries and all over the globe. We can recite the seven archetypes forward and backward, will call at least three people hoping they disagree with each other, ask open-ended questions to hear what went on backstage, and trade around the always same anecdotes: Wilson checking the

### Tommaso Dorigo — Six Questions On AMVA4NewPhysics

The European Commission pays close attention to documenting the work of the projects that have benefited from its funding. With that intent, the AMVA4NewPhysics network has been described, along with its goals, in a 2016 article in Horizon magazine.

## April 14, 2019

### John Baez — Symposium on Compositional Structures 4

There’s yet another conference in this fast-paced series, and this time it’s in Southern California!

Symposium on Compositional Structures 4, 22–23 May, 2019, Chapman University, California.

The Symposium on Compositional Structures (SYCO) is an interdisciplinary series of meetings aiming to support the growing community of researchers interested in the phenomenon of compositionality, from both applied and abstract perspectives, and in particular where category theory serves as a unifying common language.
The first SYCO was in September 2018, at the University of Birmingham. The second SYCO was in December 2018, at the University of Strathclyde. The third SYCO was in March 2019, at the University of Oxford. Each meeting attracted about 70 participants.

We welcome submissions from researchers across computer science, mathematics, physics, philosophy, and beyond, with the aim of fostering friendly discussion, disseminating new ideas, and spreading knowledge between fields. Submission is encouraged for both mature research and work in progress, and by both established academics and junior researchers, including students.

Submission is easy, with no format requirements or page restrictions. The meeting does not have proceedings, so work can be submitted even if it has been submitted or published elsewhere. Think creatively—you could submit a recent paper, or notes on work in progress, or even a recent Masters or PhD thesis.

While no list of topics could be exhaustive, SYCO welcomes submissions with a compositional focus related to any of the following areas, in particular from the perspective of category theory:

• logical methods in computer science, including classical and quantum programming, type theory, concurrency, natural language processing and machine learning;

• graphical calculi, including string diagrams, Petri nets and reaction networks;

• languages and frameworks, including process algebras, proof nets, type theory and game semantics;

• abstract algebra and pure category theory, including monoidal category theory, higher category theory, operads, polygraphs, and relationships to homotopy theory;

• quantum algebra, including quantum computation and representation theory;

• tools and techniques, including rewriting, formal proofs and proof assistants, and game theory;

• industrial applications, including case studies and real-world problem descriptions.

This new series aims to bring together the communities behind many previous successful events which have taken place over the last decade, including “Categories, Logic and Physics”, “Categories, Logic and Physics (Scotland)”, “Higher-Dimensional Rewriting and Applications”, “String Diagrams in Computation, Logic and Physics”, “Applied Category Theory”, “Simons Workshop on Compositionality”, and the “Peripatetic Seminar in Sheaves and Logic”.

SYCO will be a regular fixture in the academic calendar, running regularly throughout the year, and becoming over time a recognized venue for presentation and discussion of results in an informal and friendly atmosphere. To help create this community, and to avoid the need to make difficult choices between strong submissions, in the event that more good-quality submissions are received than can be accommodated in the timetable, the programme committee may choose to defer some submissions to a future meeting, rather than reject them. This would be done based largely on submission order, giving an incentive for early submission, but would also take into account other requirements, such as ensuring a broad scientific programme. Deferred submissions can be re-submitted to any future SYCO meeting, where they would not need peer review, and where they would be prioritised for inclusion in the programme. This will allow us to ensure that speakers have enough time to present their ideas, without creating an unnecessarily competitive reviewing process. Meetings will be held sufficiently frequently to avoid a backlog of deferred papers.

### Invited speakers

John Baez, University of California, Riverside: Props in network theory.

Tobias Fritz, Perimeter Institute for Theoretical Physics: Categorical probability: results and challenges.

Nina Otter, University of California, Los Angeles: A unified framework for equivalences in social networks.

### Important dates

All times are anywhere-on-earth.

• Submission deadline: Wednesday 24 April 2019
• Author notification: Wednesday 1 May 2019
• Symposium dates: Wednesday 22 and Thursday 23 May 2019

### Submission

Submission is by EasyChair, via the following link:

Submissions should present research results in sufficient detail to allow them to be properly considered by members of the programme committee, who will assess papers with regards to significance, clarity, correctness, and scope. We encourage the submission of work in progress, as well as mature results. There are no proceedings, so work can be submitted even if it has been previously published, or has been submitted for consideration elsewhere. There is no specific formatting requirement, and no page limit, although for long submissions authors should understand that reviewers may not be able to read the entire document in detail.

### Programme Committee

• Miriam Backens, University of Oxford
• Ross Duncan, University of Strathclyde and Cambridge Quantum Computing
• Brendan Fong, Massachusetts Institute of Technology
• Stefano Gogioso, University of Oxford
• Chris Heunen, University of Edinburgh
• Dominic Horsman, University of Grenoble
• Martti Karvonen, University of Edinburgh
• Kohei Kishida, Dalhousie University (chair)
• Andre Kornell, University of California, Davis
• Martha Lewis, University of Amsterdam
• Samuel Mimram, École Polytechnique
• Benjamin Musto, University of Oxford
• Nina Otter, University of California, Los Angeles
• Simona Paoli, University of Leicester
• Dorette Pronk, Dalhousie University
• Pawel Sobocinski, University of Southampton
• Joshua Tan, University of Oxford
• Sean Tull, University of Oxford
• Dominic Verdon, University of Bristol
• Jamie Vicary, University of Birmingham and University of Oxford
• Maaike Zwart, University of Oxford

### n-Category Café — The ZX-Calculus for Stabilizer Quantum Mechanics

guest post by Fatimah Ahmadi and John van de Wetering

This is the second post of Applied Category Theory School 2019. We present Backens’ completeness proof for the ZX-calculus for stabilizer quantum mechanics.

## Introduction

Physicists from all fields are used to graphical calculi, the most common being Feynman diagrams. Feynman diagrams were introduced by Richard Feynman in the 1940s as a bookkeeping method for the messy calculations of quantum electrodynamics. In David Kaiser’s words,

In the hands of a postwar generation, a tool intended to lead quantum electrodynamics out of a decades-long morass helped transform physics.

The applications of Feynman diagrams are not limited to QED; they have spread to other areas of physics, such as cosmology and statistical and condensed matter physics. Another well-known example of a graphical calculus was invented by Roger Penrose for calculations with tensors; it is a convenient method for matching and contracting the indices of tensors.

However, neither of these methods was initially based on a rigorous mathematical setting. As mentioned earlier, they were introduced more as bookkeeping methods, based on what we expect to obtain if a calculation is done in the standard formalism.

In 2004, Samson Abramsky and Bob Coecke pioneered a new formalism for quantum mechanics using monoidal category theory which comes with a rigorous graphical calculus. They found that compact closed categories were well suited to describing quantum mechanics.

In this formalism, physical systems are objects of a compact closed category and a process between two systems is a morphism. A state of a system is also interpreted as a process, meaning a morphism from the monoidal unit to an object. The sequential application of processes is the composition of morphisms, and the joint system/process is given by the monoidal product of two objects/morphisms. This framework has classical physics in one extreme as the category of sets and quantum-like physics in another extreme.

Although one can recover Hilbert-space quantum mechanics from Categorical Quantum Mechanics (CQM), the framework aims to offer fresh insight into quantum mechanics at the foundational level. In this post, however, we will restrict ourselves to the categorical formulation of standard quantum physics: the category of finite-dimensional Hilbert spaces. We restrict the model even further and only introduce a graphical calculus, namely the ZX-calculus, as a useful tool for qubit-based quantum computation with post-selected measurements.

We assume a working knowledge of quantum computation; you may also find the introduction of the previous post useful.

The building blocks of quantum computers in the matrix formalism are:

1. qubits, which are encoded in the states of a two-dimensional Hilbert space, like the spin of an electron;
2. quantum gates, particularly the Clifford+T gates;
3. projective measurements, represented by self-adjoint operators.

Despite the indisputable usefulness of the matrix formalism in quantum computation, it is still a low-level language. Calculating the tensor products of matrices quickly becomes impractical, because the sizes of the matrices increase exponentially with the number of qubits. An alternative that scales better with the number of qubits is a graphical language. A graphical language consists of a set of generators, some ways to compose them into a diagram, and a set of rules that tell you when two diagrams are equal. The dominant graphical language in quantum computation is the circuit model. In the circuit model, time goes from left to right, each wire represents a qubit, boxes are gates, and the tensor product of two qubits is depicted by parallel wires.
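The exponential blow-up is easy to see directly (a generic sketch): an $n$-qubit operator built as a tensor (Kronecker) product is a $2^n \times 2^n$ matrix.

```python
import numpy as np
from functools import reduce

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # single-qubit Hadamard gate

def tensor_power(gate, n):
    """Kronecker product of n copies of a single-qubit gate."""
    return reduce(np.kron, [gate] * n)

for n in (1, 5, 10):
    U = tensor_power(H, n)
    print(n, U.shape)  # (2**n, 2**n): the matrix doubles in size per qubit
```

At 10 qubits the matrix already has over a million entries; a 50-qubit gate would not fit in any computer’s memory, which is exactly why higher-level languages are attractive.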

The circuit model makes tracking and simplifying a computation easier. For instance, take the circuit to the right: knowing that $V^2=Y^2=I$, that $Y$ and $U$ commute, and that $Y=E \otimes R$, one can transform this circuit into a simpler one.

However, the circuit model is inflexible and does not allow arbitrary topological deformation. In contrast, the ZX-calculus and other calculi inspired by CQM (like the ZW-calculus or ZH-calculus) have a set of built-in rewrite rules with fewer restrictions on the topology, and as a result allow rewrites that have no counterpart in the circuit model. These rules provide a systematic way to simplify diagrams. For instance, using the naturality of braiding in a symmetric monoidal category, we can simplify this diagram.

Any graphical calculus should satisfy at least three main conditions to be able to replace the matrix formalism:

• Universality, meaning any gate/unitary matrix has a corresponding diagram. It implies the language is general enough to describe all possible calculations.
• Soundness, meaning that if two diagrams are equal in the graphical calculus, $D_1=D_2$, then their matrix representations are also equal, $[\![ D_1 ]\!]=[\![ D_2 ]\!]$.
• Completeness, meaning that if the matrix representations of two diagrams are equal, $[\![ D_1 ]\!]=[\![ D_2 ]\!]$, then the two diagrams are equal, $D_1 = D_2$. Completeness entails that the rule set is powerful enough to prove all valid equalities.

Remark: Equal diagrams need not be identical; rather, there is a series of rewrite moves which transforms one into the other.
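For a taste of what a soundness check looks like, here is a matrix verification of one instance of the spider-fusion rule, using the standard interpretation of a one-input, one-output green spider as a phase gate (a sketch; the phases chosen are arbitrary):

```python
import numpy as np

def green(alpha):
    """Matrix of a green spider with one input, one output and phase alpha."""
    return np.diag([1.0, np.exp(1j * alpha)])

a, b = 0.7, 1.9
# Connecting two spiders along a wire is matrix composition;
# spider fusion says the two phases simply add.
fused = green(a) @ green(b)
assert np.allclose(fused, green(a + b))
```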

The ZX-calculus is a PROP: a monoidal category whose objects are the natural numbers, with addition as the monoidal product. There are only two types of generating morphism, red spiders and green spiders, which are decorated with an angle in $[0, 2 \pi)$ and can have an arbitrary number of inputs and outputs (in this picture, the red node is the node with a white $\alpha$ and the green node is the node with a black $\alpha$).

These morphisms can be combined by juxtaposition, which corresponds to the tensor product, or by connecting wires, which corresponds to ordinary composition of linear maps. Rewrite rules quotient the morphisms of the category. A functor from this category to the category of finite-dimensional Hilbert spaces gives the matrix-formalism interpretation. In this picture, soundness amounts to the functoriality of the functor, universality means this functor is surjective on morphisms, and completeness is captured by its injectivity on morphisms. In other words, if the ZX-calculus is to fully describe Hilbert-space quantum mechanics, this functor should be an equivalence.

The proofs of universality and soundness of the ZX-calculus are straightforward. For universality, we only need to show that each gate has a corresponding diagram. Soundness follows from checking the matrix representation of each rewrite rule. The difficulty lies in the completeness proof. In this post, we give the completeness proof of the ZX-calculus for stabilizer quantum mechanics. Note that this result ignores global scalars; that is, we consider equality modulo normalisation: $U \cong U^\prime$ if $U^\prime = \alpha U$ for some $\alpha \in \mathbb{C}$ with $\alpha \neq 0$.

The outline of the rest of this post is as follows: we introduce the ZX-calculus for stabilizer quantum mechanics, then briefly review stabilizer quantum mechanics and its relation to graph states. We show that there is a correspondence between stabilizer states and graph states with local Cliffords, and that this correspondence can be proven inside the ZX-calculus. Finally, since a pair of such diagrams are identical if and only if they correspond to the same quantum state, we can use this to show completeness of the calculus.

## The ZX-calculus for the stabilizer states

The basic elements of the ZX-calculus for stabilizer quantum mechanics are:

• wires, representing qubits (Wires are bendable, i.e. we have “cups” and “caps”, as well as straight wires.),
• spiders, the morphisms of the category. There are two types of spiders: green spiders are represented in the computational basis $\{ \frac{1}{\sqrt2}\begin{bmatrix} 1\\ 0 \end{bmatrix} , \frac{1}{\sqrt2}\begin{bmatrix} 0\\ 1 \end{bmatrix} \}$, and red spiders are represented in the Hadamard basis $\{ \frac{1}{\sqrt2}\begin{bmatrix} 1\\ 1 \end{bmatrix} , \frac{1}{\sqrt2}\begin{bmatrix} 1\\ -1 \end{bmatrix} \}$. The phase $\alpha$ lies in the set $\{ 0, \pi/2, \pi, -\pi/2 \}$. An unlabelled node carries zero phase.

• the tensor product: the tensor product of two diagrams is given by gluing them side by side,

• composition: to compose two spiders, we glue the diagrams vertically, and

• rewrite rules, which are shown in the following pictures:

Spider rules and identity

Bialgebra, Hopf algebra, and copying

$\pi$-copying, $\pi$-commutation, and color change

Euler decomposition of Hadamard. This rule cannot be derived from the categorical axioms; nonetheless, it is necessary for the completeness result.
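What the Euler decomposition rule asserts can be checked numerically on the matrix interpretations: the Hadamard agrees with a green $\pi/2$ phase, a red $\pi/2$ phase, and another green $\pi/2$ phase composed in sequence, up to a global scalar (which, as noted above, the calculus ignores). A quick sketch:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.diag([1, 1j])   # green pi/2 phase (Z-spider with one input, one output)
V = H @ S @ H          # red pi/2 phase (the same spider in the Hadamard basis)

# Euler decomposition: S V S equals H up to a global scalar.
M = S @ V @ S
scalar = M[0, 0] / H[0, 0]
assert np.isclose(abs(scalar), 1.0)     # a pure phase
assert np.allclose(M, scalar * H)       # equality modulo that phase
```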

### Stabilizer Quantum Mechanics

Stabilizer quantum computation is a non-universal fragment of quantum computation. Its basic elements are: 1. qubits prepared in $|0\rangle ^{\otimes n}$; 2. Clifford gates; and 3. projective measurements in the computational basis. By the Gottesman-Knill theorem, this fragment can be efficiently simulated by a classical computer; it is not even universal for classical computation. However, it remains an important fragment for fault-tolerant and measurement-based quantum computation.
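The structural fact behind the Gottesman-Knill theorem is that Clifford gates map Pauli operators to (phased) Pauli operators under conjugation, so a stabilizer state can be tracked by updating its generators rather than an exponentially large state vector. A small numpy check of this normalizing property:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.diag([1, 1j])

def conj(U, P):
    """Conjugate the Pauli P by the gate U."""
    return U @ P @ U.conj().T

# Clifford gates permute the Paulis (up to phase):
assert np.allclose(conj(H, X), Z)   # H swaps X and Z
assert np.allclose(conj(H, Z), X)
assert np.allclose(conj(S, X), Y)   # S sends X to Y
```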

### The completeness result

The proof of the main theorem is through an algorithmic procedure. It takes two diagrams with the same matrix representation. Then, separately, using the Choi-Jamiolkowski isomorphism, it transforms each of them into a state. Finally, by applying rewrite rules, it turns both of the diagrams into particular graph states with local Clifford operations which will represent the same state if and only if the diagrams are identical. We ignore the details of computation and only present the big picture of the proof. The interested reader can consult the paper for more detail on steps of the proof.

Definition 1: A group $G$ generated by elements of the Pauli group $P_n$ stabilizes a state $\psi$ if $U \psi = \psi$ for every element $U \in G$. We call this state a stabilizer state.

Remark: Any stabilizer state is specified uniquely by the generators of its stabilizer group. Thus, to specify the state, it is enough to give these generators.

Definition 2: A graph $(V, E)$ consists of a finite set of vertices $V$ and a finite set of edges $E$; all graphs here are undirected and simple.

Definition 3: Given a graph $(V, E)$ with $n$ vertices, $|V|=n$, the corresponding graph state is an n-qubit state whose stabilizer group is generated by $n$ operators, one per vertex: an $X$ operator on that vertex together with a $Z$ operator on each vertex connected to it. For instance, the generators of this graph are: $\{ X \otimes Z \otimes Z \otimes Z, Z \otimes X \otimes Z \otimes Z , Z \otimes Z \otimes X \otimes I , Z \otimes Z \otimes I \otimes X \}$
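This definition is easy to verify directly on a small example. The sketch below uses an illustrative 3-vertex path graph (not the graph from the figure), builds its graph state as $|+\rangle^{\otimes n}$ followed by a CZ on every edge, and checks that each generator fixes the state:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1])

def kron_all(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def cz(n, a, b):
    """CZ between qubits a and b of an n-qubit register, as a diagonal matrix."""
    d = np.ones(2 ** n)
    for idx in range(2 ** n):
        if (idx >> (n - 1 - a)) & 1 and (idx >> (n - 1 - b)) & 1:
            d[idx] = -1
    return np.diag(d)

n, edges = 3, [(0, 1), (1, 2)]           # path graph 0-1-2 (illustrative)
neighbors = {v: {u for e in edges if v in e for u in e if u != v}
             for v in range(n)}

# Graph state: |+>^n, then CZ on every edge.
state = np.ones(2 ** n) / np.sqrt(2 ** n)
for a, b in edges:
    state = cz(n, a, b) @ state

def generator(v):
    """X on vertex v, Z on each neighbour, identity elsewhere."""
    return kron_all([X if u == v else Z if u in neighbors[v] else I2
                     for u in range(n)])

for v in range(n):
    assert np.allclose(generator(v) @ state, state)  # each generator stabilizes |G>
```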

Definition 4: An n-qubit GS-LC diagram is a graph state modulo single-qubit Clifford operators; i.e., the $G$ shown in the picture is a graph and the $U$s are single-qubit Clifford operators applied to the nodes of the main graph.

Every graph state gives a stabilizer state, but not every stabilizer state is equal to a graph state. We therefore define equivalence classes of stabilizer states that can be transformed into each other by applying local Clifford operators. In this way, we can obtain a reduced GS-LC form for every stabilizer state.

Definition 5: A stabilizer state diagram in reduced GS-LC (written rGS-LC) form is a diagram in GS-LC form satisfying the following additional constraints:

• All vertex operators belong to the set:

• Two adjacent vertices must not both have vertex operators that include red nodes.

Theorem 6: Any stabilizer state is equivalent to an rGS-LC state.

Proposition 7: A ZX-calculus representation of a graph state is given by substituting

• each node of the graph state with a green node with one output, and
• each edge with a Hadamard gate connected to both green nodes of the edge.

For instance,

Definition 8: A pair of rGS-LC diagrams on the same number of qubits is called simplified if there are no pairs of qubits $p$,$q$ such that $p$ has a red node in its vertex operator in the first diagram but not in the second, $q$ has a red node in the second diagram but not in the first, and $p$ and $q$ are adjacent in at least one of the diagrams.

Proposition 9: Any pair of rGS-LC diagrams on $n$ qubits can be simplified.

Theorem 10: The two diagrams making up a simplified pair of rGS-LC diagrams are equal, i.e. they correspond to the same quantum state, if and only if they are identical.

Theorem 11: The ZX-calculus is complete for stabilizer quantum mechanics.

Proof: Given two different diagrams corresponding to the same matrix:

1. We first transform them into two states, using the fact that the category has duals.
2. Then we simplify the diagrams to obtain two simplified rGS-LC diagrams in ZX-representation.
3. We use the trick of the first step again to transform them back into two operators.
4. By Theorem 10, if they represent the same matrix, they must be identical graphs. For example, the CZ operator, $CZ=\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \end{bmatrix}$, is given by two distinct diagrams as below:

For the first step, we bend the lower part of two diagrams and replace the cup with an equivalent ZX graph state.

Then by applying $\pi$ rules and changing colors we get:

The final simplified diagram, by application of rewrite rules and other lemmas presented in this paper, will be:

For the right-hand side diagram, again we bend the diagrams, substitute the cup with ZX-calculus representation and finally apply the color change rule to get:

After applying rewrite rules, the above diagram turns into the same final simplified diagram as the left-hand side, and this completes the proof.

## Conclusion

During the 5 years since Backens’ paper was released, a lot has happened in the world of graphical calculi for quantum computation. The above rule-set was only complete for Clifford computation. We now have extensions of the rule-set that are complete for Clifford+T quantum computing, and for universal quantum computing, with the latter requiring only a single additional rule. There are also other graphical calculi in use. The ZW-calculus allows a natural description of Fermionic quantum computing, while the ZH-calculus can describe Toffoli–Hadamard circuits easily.

The ZX-calculus with the rule-set presented above has been used for a variety of applications. The representation of graph states is helpful in describing measurement-based quantum computation, while the rules allow one to reason easily about this model of computation. ZX-diagrams also seem to be a very natural language for describing lattice surgery on surface code quantum computers. Finally, the ZX-calculus has recently seen some use in circuit optimisation. In particular, using the rules described above, new ways to minimize the number of T gates, an important cost metric for fault-tolerant quantum computation, have been found in this paper. Some extensions of the rule-set seem promising for circuit optimisation as well. In particular, an extended Euler Decomposition rule has been used to prove non-trivial circuit identities.

## April 12, 2019

### Terence Tao — Nominations for 2020 Doob Prize now open

Just a brief announcement that the AMS is now accepting (until June 30) nominations for the 2020 Joseph L. Doob Prize, which recognizes a single, relatively recent, outstanding research book that makes a seminal contribution to the research literature, reflects the highest standards of research exposition, and promises to have a deep and long-term impact in its area. The book must have been published within the six calendar years preceding the year in which it is nominated. Books may be nominated by members of the Society, by members of the selection committee, by members of AMS editorial committees, or by publishers.  (I am currently on the committee for this prize.)  A list of previous winners may be found here.  The nomination procedure may be found at the bottom of this page.

### Terence Tao — Value patterns of multiplicative functions and related sequences

Joni Teräväinen and I have just uploaded to the arXiv our paper “Value patterns of multiplicative functions and related sequences“, submitted to Forum of Mathematics, Sigma. This paper explores how to use recent technology on correlations of multiplicative (or nearly multiplicative functions), such as the “entropy decrement method”, in conjunction with techniques from additive combinatorics, to establish new results on the sign patterns of functions such as the Liouville function ${\lambda}$. For instance, with regards to length 5 sign patterns

$\displaystyle (\lambda(n+1),\dots,\lambda(n+5)) \in \{-1,+1\}^5$

of the Liouville function, we can now show that at least ${24}$ of the ${32}$ possible sign patterns in ${\{-1,+1\}^5}$ occur with positive upper density. (Conjecturally, all of them do so, and this is known for all shorter sign patterns, but unfortunately ${24}$ seems to be the limitation of our methods.)
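These sign-pattern statements are easy to explore empirically. A small illustrative script (naive trial-division factoring, so keep $N$ modest) counts which length-5 sign patterns of the Liouville function actually occur in an initial segment:

```python
def Omega(n):
    """Number of prime factors of n, counted with multiplicity."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    return count + (1 if n > 1 else 0)

def liouville(n):
    return (-1) ** Omega(n)

N = 20_000
lam = [liouville(n) for n in range(1, N + 6)]  # lam[k] = lambda(k+1)
patterns = {tuple(lam[n:n + 5]) for n in range(N)}
print(f"{len(patterns)} of the 32 possible length-5 sign patterns occur for n <= {N}")
```

Of course, occurrence in an initial segment says nothing about positive density, which is what the theorem controls.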

The Liouville function can be written as ${\lambda(n) = e^{2\pi i \Omega(n)/2}}$, where ${\Omega(n)}$ is the number of prime factors of ${n}$ (counting multiplicity). One can also consider the variant ${\lambda_3(n) = e^{2\pi i \Omega(n)/3}}$, which is a completely multiplicative function taking values in the cube roots of unity ${\{1, \omega, \omega^2\}}$. Here we are able to show that all ${27}$ sign patterns in ${\{1,\omega,\omega^2\}^3}$ occur with positive lower density as sign patterns ${(\lambda_3(n+1), \lambda_3(n+2), \lambda_3(n+3))}$ of this function. The analogous result for ${\lambda}$ was already known (see this paper of Matomäki, Radziwiłł, and myself), and in that case it is even known that all sign patterns occur with equal logarithmic density ${1/8}$ (from this paper of myself and Teräväinen), but these techniques barely fail to handle the ${\lambda_3}$ case by itself (largely because the “parity” arguments used in the case of the Liouville function no longer control three-point correlations in the ${\lambda_3}$ case) and an additional additive combinatorial tool is needed. After applying existing technology (such as entropy decrement methods), the problem roughly speaking reduces to locating patterns ${a \in A_1, a+r \in A_2, a+2r \in A_3}$ for a certain partition ${G = A_1 \cup A_2 \cup A_3}$ of a compact abelian group ${G}$ (think for instance of the unit circle ${G={\bf R}/{\bf Z}}$, although the general case is a bit more complicated, in particular if ${G}$ is disconnected then there is a certain “coprimality” constraint on ${r}$, also we can allow the ${A_1,A_2,A_3}$ to be replaced by any ${A_{c_1}, A_{c_2}, A_{c_3}}$ with ${c_1+c_2+c_3}$ divisible by ${3}$), with each of the ${A_i}$ having measure ${1/3}$. 
An inequality of Kneser just barely fails to guarantee the existence of such patterns, but by using an inverse theorem for Kneser’s inequality in this previous paper of mine we are able to identify precisely the obstruction for this method to work, and rule it out by an ad hoc method.

The same techniques turn out to also make progress on some conjectures of Erdös-Pomerance and Hildebrand regarding patterns of the largest prime factor ${P^+(n)}$ of a natural number ${n}$. For instance, we improve results of Erdös-Pomerance and of Balog demonstrating that the inequalities

$\displaystyle P^+(n+1) < P^+(n+2) < P^+(n+3)$

and

$\displaystyle P^+(n+1) > P^+(n+2) > P^+(n+3)$

each hold for infinitely many ${n}$, by demonstrating the stronger claims that the inequalities

$\displaystyle P^+(n+1) < P^+(n+2) < P^+(n+3) > P^+(n+4)$

and

$\displaystyle P^+(n+1) > P^+(n+2) > P^+(n+3) < P^+(n+4)$

each hold for a set of ${n}$ of positive lower density. As a variant, we also show that we can find a positive density set of ${n}$ for which

$\displaystyle P^+(n+1), P^+(n+2), P^+(n+3) > n^\gamma$

for any fixed ${\gamma < e^{-1/3} = 0.7165\dots}$ (this improves on a previous result of Hildebrand with ${e^{-1/3}}$ replaced by ${e^{-1/2} = 0.6065\dots}$). A number of other results of this type are also obtained in this paper.
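The monotone patterns of ${P^+}$ are also easy to probe numerically; an illustrative script (trial division, small range) estimates how often the length-3 increasing and decreasing patterns occur:

```python
def largest_prime_factor(n):
    """Largest prime factor of n >= 2, by trial division."""
    p, d = 1, 2
    while d * d <= n:
        while n % d == 0:
            p, n = d, n // d
        d += 1
    return n if n > 1 else p

N = 10_000
P = {n: largest_prime_factor(n) for n in range(2, N + 5)}

inc = sum(1 for n in range(1, N) if P[n + 1] < P[n + 2] < P[n + 3])
dec = sum(1 for n in range(1, N) if P[n + 1] > P[n + 2] > P[n + 3])
print(f"increasing: {inc / N:.3f}, decreasing: {dec / N:.3f}")
```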

In order to obtain these sorts of results, one needs to extend the entropy decrement technology from the setting of multiplicative functions to that of what we call “weakly stable sets” – sets ${A}$ which have some multiplicative structure, in the sense that (roughly speaking) there is a set ${B}$ such that for all small primes ${p}$, the statements ${n \in A}$ and ${pn \in B}$ are roughly equivalent to each other. For instance, if ${A}$ is a level set ${A = \{ n: \Omega(n) = 0 \hbox{ mod } 3 \}}$, one would take ${B = \{ n: \Omega(n) = 1 \hbox{ mod } 3 \}}$; if instead ${A}$ is a set of the form ${\{ n: P^+(n) \geq n^\gamma\}}$, then one can take ${B=A}$. When one has such a situation, then very roughly speaking, the entropy decrement argument then allows one to estimate a one-parameter correlation such as

$\displaystyle {\bf E}_n 1_A(n+1) 1_A(n+2) 1_A(n+3)$

with a two-parameter correlation such as

$\displaystyle {\bf E}_n {\bf E}_p 1_B(n+p) 1_B(n+2p) 1_B(n+3p)$

(where we will be deliberately vague as to how we are averaging over ${n}$ and ${p}$), and then the use of the “linear equations in primes” technology of Ben Green, Tamar Ziegler, and myself then allows one to replace this average in turn by something like

$\displaystyle {\bf E}_n {\bf E}_r 1_B(n+r) 1_B(n+2r) 1_B(n+3r)$

where ${r}$ is constrained to be not divisible by small primes but is otherwise quite arbitrary. This latter average can then be attacked by tools from additive combinatorics, such as translation to a continuous group model (using for instance the Furstenberg correspondence principle) followed by tools such as Kneser’s inequality (or inverse theorems to that inequality).

### Matt von Hippel — Still Traveling, and a Black Hole

I’m still at the conference in Natal this week, so I don’t have time for a long post. The big news this week was the Event Horizon Telescope’s close-up of the black hole at the center of galaxy M87. If you’re hungry for coverage of that, Matt Strassler has some of his trademark exceptionally clear posts on the topic, while Katie Mack has a nice twitter thread.

## April 11, 2019

### Terence Tao — Conversions between standard polynomial bases

(This post is mostly intended for my own reference, as I found myself repeatedly looking up several conversions between polynomial bases on various occasions.)

Let ${\mathrm{Poly}_{\leq n}}$ denote the vector space of polynomials ${P:{\bf R} \rightarrow {\bf R}}$ of one variable ${x}$ with real coefficients of degree at most ${n}$. This is a vector space of dimension ${n+1}$, and the sequence of these spaces form a filtration:

$\displaystyle \mathrm{Poly}_{\leq 0} \subset \mathrm{Poly}_{\leq 1} \subset \mathrm{Poly}_{\leq 2} \subset \dots$

A standard basis for these vector spaces is given by the monomials ${x^0, x^1, x^2, \dots}$: every polynomial ${P(x)}$ in ${\mathrm{Poly}_{\leq n}}$ can be expressed uniquely as a linear combination of the first ${n+1}$ monomials ${x^0, x^1, \dots, x^n}$. More generally, if one has any sequence ${Q_0(x), Q_1(x), Q_2(x), \dots}$ of polynomials, with each ${Q_n}$ of degree exactly ${n}$, then an easy induction shows that ${Q_0(x),\dots,Q_n(x)}$ forms a basis for ${\mathrm{Poly}_{\leq n}}$.

In particular, if we have two such sequences ${Q_0(x), Q_1(x), Q_2(x),\dots}$ and ${R_0(x), R_1(x), R_2(x), \dots}$ of polynomials, with each ${Q_n}$ of degree ${n}$ and each ${R_k}$ of degree ${k}$, then ${Q_n}$ must be expressible uniquely as a linear combination of the polynomials ${R_0,R_1,\dots,R_n}$, thus we have an identity of the form

$\displaystyle Q_n(x) = \sum_{k=0}^n c_{QR}(n,k) R_k(x)$

for some change of basis coefficients ${c_{QR}(n,k) \in {\bf R}}$. These coefficients describe how to convert a polynomial expressed in the ${Q_n}$ basis into a polynomial expressed in the ${R_k}$ basis.

Many standard combinatorial quantities ${c(n,k)}$ involving two natural numbers ${0 \leq k \leq n}$ can be interpreted as such change of basis coefficients. The most familiar example is the binomial coefficients ${\binom{n}{k}}$, which measure the conversion from the shifted monomial basis ${(x+1)^n}$ to the monomial basis ${x^k}$, thanks to (a special case of) the binomial formula:

$\displaystyle (x+1)^n = \sum_{k=0}^n \binom{n}{k} x^k,$

thus for instance

$\displaystyle (x+1)^3 = \binom{3}{0} x^0 + \binom{3}{1} x^1 + \binom{3}{2} x^2 + \binom{3}{3} x^3$

$\displaystyle = 1 + 3x + 3x^2 + x^3.$

More generally, for any shift ${h}$, the conversion from ${(x+h)^n}$ to ${x^k}$ is measured by the coefficients ${h^{n-k} \binom{n}{k}}$, thanks to the general case of the binomial formula.
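A quick check of these conversion coefficients against numpy's polynomial expansion (a sketch for one choice of ${n}$ and ${h}$):

```python
from math import comb
import numpy as np

def shift_coeffs(n, h):
    """Coefficients converting (x+h)^n to the monomial basis: h^(n-k) C(n,k)."""
    return [h ** (n - k) * comb(n, k) for k in range(n + 1)]

n, h = 3, 2
# numpy represents h + x as [h, 1] (lowest degree first); raise it to the n-th power.
expanded = np.polynomial.polynomial.polypow([h, 1], n)
assert np.allclose(expanded, shift_coeffs(n, h))
print(shift_coeffs(n, h))  # [8, 12, 6, 1], i.e. (x+2)^3 = 8 + 12x + 6x^2 + x^3
```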

But there are other bases of interest too. For instance if one uses the falling factorial basis

$\displaystyle (x)_n := x (x-1) \dots (x-n+1)$

then the conversion from falling factorials to monomials is given by the Stirling numbers of the first kind ${s(n,k)}$:

$\displaystyle (x)_n = \sum_{k=0}^n s(n,k) x^k,$

thus for instance

$\displaystyle (x)_3 = s(3,0) x^0 + s(3,1) x^1 + s(3,2) x^2 + s(3,3) x^3$

$\displaystyle = 0 + 2 x - 3x^2 + x^3$

and the conversion back is given by the Stirling numbers of the second kind ${S(n,k)}$:

$\displaystyle x^n = \sum_{k=0}^n S(n,k) (x)_k$

thus for instance

$\displaystyle x^3 = S(3,0) (x)_0 + S(3,1) (x)_1 + S(3,2) (x)_2 + S(3,3) (x)_3$

$\displaystyle = 0 + x + 3 x(x-1) + x(x-1)(x-2).$
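Since these two conversions are mutually inverse changes of basis, the triangular matrices of signed Stirling numbers of the first kind and Stirling numbers of the second kind must multiply to the identity. A short check, building both tables from the standard recurrences (which also appear later in this post):

```python
import numpy as np

N = 8
s = np.zeros((N, N))  # signed Stirling numbers of the first kind
S = np.zeros((N, N))  # Stirling numbers of the second kind
s[0, 0] = S[0, 0] = 1
for n in range(1, N):
    for k in range(1, n + 1):
        s[n, k] = s[n - 1, k - 1] - (n - 1) * s[n - 1, k]
        S[n, k] = S[n - 1, k - 1] + k * S[n - 1, k]

# Inverse changes of basis between (x)_n and x^n:
assert np.allclose(s @ S, np.eye(N))
assert np.allclose(S @ s, np.eye(N))
# Matches the worked example (x)_3 = 2x - 3x^2 + x^3 above:
assert s[3, 1] == 2 and s[3, 2] == -3 and S[3, 2] == 3
```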

If one uses the binomial functions ${\binom{x}{n} = \frac{1}{n!} (x)_n}$ as a basis instead of the falling factorials, one of course can rewrite these conversions as

$\displaystyle \binom{x}{n} = \sum_{k=0}^n \frac{1}{n!} s(n,k) x^k$

and

$\displaystyle x^n = \sum_{k=0}^n k! S(n,k) \binom{x}{k}$

thus for instance

$\displaystyle \binom{x}{3} = 0 + \frac{1}{3} x - \frac{1}{2} x^2 + \frac{1}{6} x^3$

and

$\displaystyle x^3 = 0 + \binom{x}{1} + 6 \binom{x}{2} + 6 \binom{x}{3}.$

As a slight variant, if one instead uses rising factorials

$\displaystyle (x)^n := x (x+1) \dots (x+n-1)$

then the conversion to monomials yields the unsigned Stirling numbers ${|s(n,k)|}$ of the first kind:

$\displaystyle (x)^n = \sum_{k=0}^n |s(n,k)| x^k$

thus for instance

$\displaystyle (x)^3 = 0 + 2x + 3x^2 + x^3.$

One final basis comes from the polylogarithm functions

$\displaystyle \mathrm{Li}_{-n}(x) := \sum_{j=1}^\infty j^n x^j.$

For instance one has

$\displaystyle \mathrm{Li}_1(x) = -\log(1-x)$

$\displaystyle \mathrm{Li}_0(x) = \frac{x}{1-x}$

$\displaystyle \mathrm{Li}_{-1}(x) = \frac{x}{(1-x)^2}$

$\displaystyle \mathrm{Li}_{-2}(x) = \frac{x}{(1-x)^3} (1+x)$

$\displaystyle \mathrm{Li}_{-3}(x) = \frac{x}{(1-x)^4} (1+4x+x^2)$

$\displaystyle \mathrm{Li}_{-4}(x) = \frac{x}{(1-x)^5} (1+11x+11x^2+x^3)$

and more generally one has

$\displaystyle \mathrm{Li}_{-n-1}(x) = \frac{x}{(1-x)^{n+2}} E_n(x)$

for all natural numbers ${n}$ and some polynomial ${E_n}$ of degree ${n}$ (the Eulerian polynomials), which when converted to the monomial basis yields the (shifted) Eulerian numbers

$\displaystyle E_n(x) = \sum_{k=0}^n A(n+1,k) x^k.$

For instance

$\displaystyle E_3(x) = A(4,0) x^0 + A(4,1) x^1 + A(4,2) x^2 + A(4,3) x^3$

$\displaystyle = 1 + 11x + 11x^2 + x^3.$
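The polylogarithm identity can be checked coefficient by coefficient with exact integer arithmetic: expanding ${1/(1-x)^{n+2}}$ as ${\sum_j \binom{j+n+1}{n+1} x^j}$, the coefficient of ${x^m}$ in ${x E_n(x)/(1-x)^{n+2}}$ should equal ${m^{n+1}}$. A sketch for ${n=3}$, using the Eulerian coefficients listed above:

```python
from math import comb

n = 3
E3 = [1, 11, 11, 1]  # coefficients of E_3(x) = 1 + 11x + 11x^2 + x^3

def rhs_coeff(m):
    """Coefficient of x^m in x * E_3(x) / (1-x)^(n+2)."""
    return sum(E3[k] * comb(m - 1 - k + n + 1, n + 1)
               for k in range(len(E3)) if m - 1 - k >= 0)

# Should reproduce the coefficients of Li_{-4}(x) = sum_j j^4 x^j.
for m in range(1, 12):
    assert rhs_coeff(m) == m ** (n + 1)
```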

These particular coefficients also have useful combinatorial interpretations. For instance:

• The binomial coefficient ${\binom{n}{k}}$ is of course the number of ${k}$-element subsets of ${\{1,\dots,n\}}$.
• The unsigned Stirling numbers ${|s(n,k)|}$ of the first kind are the number of permutations of ${\{1,\dots,n\}}$ with exactly ${k}$ cycles. The signed Stirling numbers ${s(n,k)}$ are then given by the formula ${s(n,k) = (-1)^{n-k} |s(n,k)|}$.
• The Stirling numbers ${S(n,k)}$ of the second kind are the number of ways to partition ${\{1,\dots,n\}}$ into ${k}$ non-empty subsets.
• The Eulerian numbers ${A(n,k)}$ are the number of permutations of ${\{1,\dots,n\}}$ with exactly ${k}$ ascents.

These coefficients behave similarly to each other in several ways. For instance, the binomial coefficients ${\binom{n}{k}}$ obey the well known Pascal identity

$\displaystyle \binom{n+1}{k} = \binom{n}{k} + \binom{n}{k-1}$

(with the convention that ${\binom{n}{k}}$ vanishes outside of the range ${0 \leq k \leq n}$). In a similar spirit, the unsigned Stirling numbers ${|s(n,k)|}$ of the first kind obey the identity

$\displaystyle |s(n+1,k)| = n |s(n,k)| + |s(n,k-1)|$

and the signed counterparts ${s(n,k)}$ obey the identity

$\displaystyle s(n+1,k) = -n s(n,k) + s(n,k-1).$

The Stirling numbers of the second kind ${S(n,k)}$ obey the identity

$\displaystyle S(n+1,k) = k S(n,k) + S(n,k-1)$

and the Eulerian numbers ${A(n,k)}$ obey the identity

$\displaystyle A(n+1,k) = (k+1) A(n,k) + (n-k+1) A(n,k-1).$
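These recurrences make the coefficient tables straightforward to generate. For instance, the Eulerian recurrence reproduces the row ${A(4,k) = 1, 11, 11, 1}$ used in the example above:

```python
def eulerian(N):
    """Table of Eulerian numbers A[n][k] for 1 <= n <= N, via the recurrence."""
    A = [[0] * (N + 1) for _ in range(N + 1)]
    A[1][0] = 1  # the single permutation of {1} has zero ascents
    for n in range(1, N):
        for k in range(n + 1):
            A[n + 1][k] = ((k + 1) * A[n][k]
                           + (n - k + 1) * (A[n][k - 1] if k > 0 else 0))
    return A

A = eulerian(4)
print(A[4][:4])  # [1, 11, 11, 1]
```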

## April 10, 2019

### Clifford Johnson — It’s a Black Hole!

Yes, it’s a black hole all right. Following on from my reflections from last night, I can report that the press conference revelations were remarkable indeed. Above you see the image they revealed! It is the behemoth at the centre of the galaxy M87! This truly groundbreaking image is the …

The post It’s a Black Hole! appeared first on Asymptotia.

### Tommaso Dorigo — The Plot Of The Week - Multiple Pentaquark Candidates

It is a bit embarrassing to post here a graph of boring elementary particle signals, when the rest of the blogosphere is buzzing after the release of the first real black hole image from the Event Horizon collaboration. So okay, before going into pentaquarks, below is the image of the black hole at the center of M87, a big elliptical galaxy 54 million light years away.

### Matt Strassler — A Black Day (and a Happy One) In Scientific History

Wow.

Twenty years ago, astronomers Heino Falcke, Fulvio Melia and Eric Agol (a former colleague of mine at the University of Washington) pointed out that the black hole at the center of our galaxy, the Milky Way, was probably big enough to be observed — not with a usual camera using visible light, but using radio waves and clever techniques known as “interferometry”.  Soon it was pointed out that the black hole in M87, further but larger, could also be observed.  [How? I explained this yesterday in this post.]

And today, an image of the latter, looking quite similar to what we expected, was presented to humanity.  Just as with the discovery of the Higgs boson, and with LIGO’s first discovery of gravitational waves, nature, captured by the hard work of an international group of many scientists, gives us something definitive, uncontroversial, and spectacularly in line with expectations.

An image of the dead center of the huge galaxy M87, showing a glowing ring of radio waves from a disk of rapidly rotating gas, and the dark quasi-silhouette of a solar-system-sized black hole. Congratulations to the Event Horizon Telescope team!

I’ll have more to say about this later [have to do non-physics work today ] and in particular about the frustration of not finding any helpful big surprises during this great decade of fundamental science — but for now, let’s just enjoy this incredible image for what it is, and congratulate those who proposed this effort and those who carried it out.

### Richard Easther — Dark Stars

The media is full of stories about the impending release of the first ever images of a black hole – images that not only represent cutting edge science, but are the culmination of 200 years of speculation and theory moving ever closer to observation.

This is huge for astrophysics, and a stunning example of how the mundane rules of our tangible, everyday world give physicists the ability to make intellectual leaps into the unknown – and the long-thought unknowable.

## Dark Stars

The idea of a black hole can be traced to a calculation by English “natural philosopher” John Michell in 1783.

Along with his contemporaries, he imagined light as a stream of particles. Like balls thrown with superhuman force, these corpuscles emitted by the sun had no trouble reaching escape velocity and sailing away into space.

Michell pulled on a loose intellectual thread, asking how big a star would need to be before its own light could not escape its clutches. That is, when would the gravitational field of a star become so strong that, rather than setting off into the distance, light would fall back to the stellar surface?

Using no more than today’s high-school physics, Michell worked out that you could reach that point of no return by scaling up the sun by a factor of 500. A star 500 times greater in radius than the sun would be 125,000,000 (500x500x500) times bigger by volume, and thus would “weigh” 125 million solar masses.

At that size, such a giant would become a dark star – vanishing from view.
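Michell's scaling argument is easy to reproduce with modern constants: at fixed density the mass grows as the cube of the radius, so the escape velocity ${\sqrt{2GM/R}}$ grows linearly with radius, and one simply asks when it reaches the speed of light. A sketch (rounded constants):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg
R_sun = 6.957e8    # solar radius, m

v_esc_sun = (2 * G * M_sun / R_sun) ** 0.5  # ~618 km/s
scale = c / v_esc_sun                       # radius factor needed at solar density
print(round(scale))  # roughly 485 -- close to Michell's round figure of 500
```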

## From Newton to Einstein

Michell was working with the Newtonian model of gravity. In 1915, Einstein wove space, time and gravity into a single fabric with his General Theory of Relativity. Working in Berlin, at the heart of an empire locked in a global conflict, he hammered out a set of equations that explained gravity as the stretching of spacetime by massive objects. In the Einsteinian framework, the gravitational field of a single, stationary object is described by the Schwarzschild metric, named for artillery officer Karl Schwarzschild, who did the maths while being treated for a painful (and eventually fatal) disease in a hospital near the Russian front, in the months after Einstein published his original equations.

To derive the Schwarzschild metric, you first imagine the object as a single idealised point. In the vicinity of this point, the math tells us that space and time turn inside out. This makes it impossible for light – or anything else – to escape past the event horizon into the world beyond. Beyond the event horizon normality is restored, and to the outside observer, it marks the external boundary of the black hole.

Mathematically, the black hole itself consists solely of empty space — it is a pucker in the fabric of spacetime. It is not so much that a black hole has mass; rather, the black hole is a memory of where mass once was.

The event horizons you would have calculated for all known astronomical objects in 1915 were far smaller than their actual size, in contrast to the vast dark stars extrapolated by Michell. In other words, after Einstein, black holes came to be understood as the endpoint of a collapse.

## Relativistic Astrophysics

By the 1930s the pieces of what we now call “modern physics” – relativity, quantum mechanics, nuclear physics, along with a nascent understanding of particle physics – had been assembled. Armed with these tools, astrophysicists set about exploring the inner workings of stars. In the 1950s and 1960s we first observed pulsars and quasars (powered, respectively, by neutron stars and black holes), whose very existence is contingent upon these newly discovered laws of physics, and these discoveries inspired the new field of “relativistic astrophysics”.

Texas became the epicentre of this emerging field – and two New Zealanders had ringside seats in the early 1960s. One of them, Beatrice Tinsley, went on to do fundamental work on the evolving universe; the other was the mathematician Roy Kerr, who took the step beyond Schwarzschild by solving Einstein’s equations for a spinning black hole, giving us what is now known as the Kerr metric.

## Seeing is Believing

The first black holes to be discovered were 10 or 20 times the mass of the sun; the husks of massive stars that burn fast and quickly die. But there’s another category of black hole in our universe — the supermassive black holes that live at the hearts of galaxies.

And it is these giants that we are about to see, as it were, in person.

The coming images have been captured by the Event Horizon Telescope, a network of radio telescopes stretching from the South Pole to Greenland that can operate as a single instrument. This composite telescope has been trained on the centre of our own Milky Way galaxy, and the much more distant heart of the giant elliptical galaxy M87.

The Milky Way’s black heart corresponds to an object weighing millions of times as much as our sun; that of the M87 galaxy amounts to billions of times the mass of our sun. Perhaps poetically, they sit at either end of a similar scale to the dark stars originally imagined by Michell.

And, since astrophysical black holes are likely born spinning (since they inherit the rotation of whatever produced them), the images we are about to see may reveal not just a black hole, but a black hole for which the Kerr metric is a key part of our ability to understand it.

Of course, “seeing” a black hole is a paradox. What we see is the matter circling its event horizon, caught in the grip of the black hole’s gravitational field, a spinning vortex around a cosmic plughole. Detailed images of this accretion disk alone would be spectacular enough, but on top of that, we will see it apparently twisted into fantastical shapes as the light is bent along complex paths in the vicinity of the event horizon.

These images will test Einstein’s understanding of gravity, and provide unprecedented proof that black holes truly exist in our universe. Because seeing really is believing. Even for scientists.

Header image: Oliver James et al 2015 Class. Quantum Grav. 32 065001

### Clifford Johnson — Event!

Well, I’m off to get six hours of sleep before the big announcement tomorrow! The Event Horizon Telescope teams are talking about an announcement of “groundbreaking” results tomorrow at 13:00 CEST. Given that they set out to “image” the event horizon of a black hole, this suggests (suggests) that they … Click to continue reading this post

The post Event! appeared first on Asymptotia.

## April 09, 2019

### n-Category Café — Postdoctoral Researcher Position in Lisbon

Applications are invited for a postdoctoral researcher position in the “Higher Structures and Applications” research team, funded by the Portuguese funding body FCT.

The Higher Structures and Applications project uses higher algebraic structures to obtain new results in topology, geometry and algebra, and to develop applications in related areas of physics and in topological quantum computation.

The current team members are:

Pedro Boavida, João Esteves, Björn Gohla, João Faria Martins (U. Leeds), John Huerta, Pedro Lopes, Marco Mackaay (U. Algarve), Aleksandar Miković (U. Lusófona), Roger Picken, Marko Stošić. (IST, Lisbon, unless specified otherwise.)

Further details:

• location: Instituto Superior Técnico, University of Lisbon.
• duration: 2 years with a possible 6-month extension depending on funding.
• annual gross salary: €29,796.76, with meal subsidy and social security benefits.
• starting date: 1st July, 2019.

Candidates should send the following documents as pdf files at the latest by 30th April, 2019:

1. A CV with the names of at least two people willing to write a recommendation.
2. A research statement.
3. Copies of their most relevant publications.
4. PhD diploma.
5. PhD thesis.

by email to Roger Picken with copy to Marko Stosic and await an email acknowledgement.

### Noncommutative Geometry — Upcoming NCG meetings

• Noncommutative Geometry Festival 2019
• NCG and Representation Theory
• Noncommutative Geometry Spring Institute 2019, Nashville

### Clifford Johnson — Chutney Time!

It was Chutney time here! Well this was a few weekends back. I’m posting late. I used up some winter-toughened tomatoes from the garden that remained and slowly developed on last year’s vines. It was time to clear them for new planting, and so off all these tomatoes came, red, … Click to continue reading this post

The post Chutney Time! appeared first on Asymptotia.

### John Baez — Complex Adaptive System Design (Part 9)

Here’s our latest paper for the Complex Adaptive System Composition and Design Environment project:

• John Baez, John Foley and Joe Moeller, Network models from Petri nets with catalysts.

Check it out! And please report typos, mistakes, or anything you have trouble understanding! I’m happy to answer questions here.

### The idea

Petri nets are a widely studied formalism for describing collections of entities of different types, and how they turn into other entities. I’ve written a lot about them here. Network models are a formalism for designing and tasking networks of agents, which our team invented for this project. Here we combine these ideas! This is worthwhile because while both formalisms involve networks, they serve a different function, and are in some sense complementary.

A Petri net can be drawn as a bipartite directed graph with vertices of two kinds: places, drawn as circles, and transitions, drawn as squares:

When we run a Petri net, we start by placing a finite number of dots called tokens in each place:

This is called a marking. Then we repeatedly change the marking using the transitions. For example, the above marking can change to this:

and then this:

Thus, the places represent different types of entity, and the transitions are ways that one collection of entities of specified types can turn into another such collection.
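The firing rule just described — remove a transition's input tokens from the marking, add its output tokens — takes only a few lines of code. Here is a minimal sketch; the class and names are my own, not from the paper:

```python
from collections import Counter

class PetriNet:
    def __init__(self, transitions):
        # transitions: dict mapping a name to a pair
        # (input multiset, output multiset), each a Counter of places
        self.transitions = transitions

    def enabled(self, marking, name):
        # A transition can fire when the marking has enough input tokens.
        inp, _ = self.transitions[name]
        return all(marking[p] >= n for p, n in inp.items())

    def fire(self, marking, name):
        # Remove the inputs, add the outputs, return the new marking.
        if not self.enabled(marking, name):
            raise ValueError(f"transition {name} not enabled")
        inp, out = self.transitions[name]
        new = Counter(marking)
        new.subtract(inp)
        new.update(out)
        return +new  # unary + drops zero counts

# Example: a transition 'tau' turns one 'a' and one 'b' into two 'c's.
net = PetriNet({"tau": (Counter({"a": 1, "b": 1}), Counter({"c": 2}))})
m = Counter({"a": 2, "b": 1})      # a marking: two a-tokens, one b-token
m2 = net.fire(m, "tau")            # Counter({'c': 2, 'a': 1})
```

Repeatedly calling `fire` with different enabled transitions plays out exactly the sequence of marking changes pictured above.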

Network models serve a different function than Petri nets: they are a general tool for working with networks of many kinds. Mathematically a network model is a lax symmetric monoidal functor $G \colon \mathsf{S}(C) \to \mathsf{Cat},$ where $\mathsf{S}(C)$ is the free strict symmetric monoidal category on a set $C.$ Elements of $C$ represent different kinds of ‘agents’. Unlike in a Petri net, we do not usually consider processes where these agents turn into other agents. Instead, we wish to study everything that can be done with a fixed collection of agents. Any object $x \in \mathsf{S}(C)$ is of the form $c_1 \otimes \cdots \otimes c_n$ for some $c_i \in C;$ thus, it describes a collection of agents of various kinds. The functor $G$ maps this object to a category $G(x)$ that describes everything that can be done with this collection of agents.

In many examples considered so far, $G(x)$ is a category whose morphisms are graphs of some sort whose nodes are agents of types $c_1, \dots, c_n.$ Composing these morphisms corresponds to ‘overlaying’ graphs. Network models of this sort let us design networks where the nodes are agents and the edges are communication channels or shared commitments. In our first paper the operation of overlaying graphs was always commutative:

• John Baez, John Foley, Joe Moeller and Blake Pollard, Network models.

Subsequently Joe introduced a more general noncommutative overlay operation:

• Joe Moeller, Noncommutative network models.

This lets us design networks where each agent has a limit on how many communication channels or commitments it can handle; the noncommutativity lets us take a ‘first come, first served’ approach to resolving conflicting commitments.

Here we take a different tack: we instead take $G(x)$ to be a category whose morphisms are processes that the given collection of agents, $x,$ can carry out. Composition of morphisms corresponds to carrying out first one process and then another.

This idea meshes well with Petri net theory, because any Petri net $P$ determines a symmetric monoidal category $FP$ whose morphisms are processes that can be carried out using this Petri net. More precisely, the objects in $FP$ are markings of $P,$ and the morphisms are sequences of ways to change these markings using transitions, e.g.:

Given a Petri net, then, how do we construct a network model $G \colon \mathsf{S}(C) \to \mathsf{Cat},$ and in particular, what is the set $C$? In a network model the elements of $C$ represent different kinds of agents. In the simplest scenario, these agents persist in time. Thus, it is natural to take $C$ to be some set of ‘catalysts’. In chemistry, a reaction may require a catalyst to proceed, but it neither increases nor decreases the amount of this catalyst present. In everyday life, a door serves as a catalyst: it lets you walk through a wall, and it doesn’t get used up in the process!

For a Petri net, ‘catalysts’ are species that are neither increased nor decreased in number by any transition. For example, in the following Petri net, species $a$ is a catalyst:

but neither $b$ nor $c$ is a catalyst. The transition $\tau_1$ requires one token of type $a$ as input to proceed, but it also outputs one token of this type, so the total number of such tokens is unchanged. Similarly, the transition $\tau_2$ requires no tokens of type $a$ as input to proceed, and it also outputs no tokens of this type, so the total number of such tokens is unchanged.
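The catalyst condition is mechanical to check: a species is a catalyst exactly when every transition consumes as many of its tokens as it produces. A sketch, with transitions I invented to behave like the $\tau_1$ and $\tau_2$ described above (the paper's actual example is in a figure not reproduced here):

```python
from collections import Counter

def catalysts(places, transitions):
    """Return the species whose token count is unchanged by every transition.

    transitions: list of (input Counter, output Counter) pairs.
    """
    return {p for p in places
            if all(inp[p] == out[p] for inp, out in transitions)}

# tau1 consumes one 'a' and one 'b', and produces one 'a' and one 'c':
tau1 = (Counter({"a": 1, "b": 1}), Counter({"a": 1, "c": 1}))
# tau2 touches no 'a' at all, turning one 'c' into one 'b':
tau2 = (Counter({"c": 1}), Counter({"b": 1}))

print(catalysts({"a", "b", "c"}, [tau1, tau2]))  # {'a'}
```

Only `a` survives the check: both transitions leave its count fixed, while `b` and `c` are created or destroyed.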

In Theorem 11 of our paper, we prove that given any Petri net $P,$ and any subset $C$ of the catalysts of $P,$ there is a network model

$G \colon \mathsf{S}(C) \to \mathsf{Cat}$

An object $x \in \mathsf{S}(C)$ says how many tokens of each catalyst are present; $G(x)$ is then the subcategory of $FP$ where the objects are markings that have this specified amount of each catalyst, and morphisms are processes going between these.

From the functor $G \colon \mathsf{S}(C) \to \mathsf{Cat}$ we can construct a category $\int G$ by ‘gluing together’ all the categories $G(x)$ using the Grothendieck construction. Because $G$ is symmetric monoidal we can use an enhanced version of this construction to make $\int G$ into a symmetric monoidal category. We already did this in our first paper on network models, but by now the math has been better worked out here:

• Joe Moeller and Christina Vasilakopoulou, Monoidal Grothendieck construction.

The tensor product in $\int G$ describes doing processes ‘in parallel’. The category $\int G$ is similar to $FP,$ but it is better suited to applications where agents each have their own ‘individuality’, because $FP$ is actually a commutative monoidal category, where permuting agents has no effect at all, while $\int G$ is not so degenerate. In Theorem 12 of our paper we make this precise by more concretely describing $\int G$ as a symmetric monoidal category, and clarifying its relation to $FP.$

There are no morphisms between an object of $G(x)$ and an object of $G(x')$ when $x \not\cong x',$ since no transitions can change the amount of catalysts present. The category $FP$ is thus a ‘disjoint union’, or more technically a coproduct, of subcategories $FP_i$ where $i,$ an element of the free commutative monoid on $C,$ specifies the amount of each catalyst present.

The tensor product on $FP$ has the property that tensoring an object in $FP_i$ with one in $FP_j$ gives an object in $FP_{i+j},$ and similarly for morphisms. However, in Theorem 14 we show that each subcategory $FP_i$ also has its own tensor product, which describes doing one process after another while reusing catalysts.

This tensor product is a very cool thing. On the one hand it’s quite obvious: for example, if two people want to walk through a door, they can both do it, one at a time, because the door doesn’t get used up when someone walks through it. On the other hand, it’s mathematically interesting: it turns out to give a lot of examples of monoidal categories that can’t be made symmetric or even braided, even though the tensor product of objects is commutative! The proof boils down to this:

Here $i$ represents the catalysts, and $f$ and $f'$ are two processes which we can carry out using these catalysts. We can do either one first, but we get different morphisms as a result.

The paper has lots of pictures like this—many involving jeeps and boats, which serve as catalysts to carry people first from a base to the shore and then from the shore to an island. I think these make it clear that the underlying ideas are quite commonsensical. But they need to be formalized to program them into a computer—and it’s nice that doing this brings in some classic themes in category theory!

Some posts in this series:

Part 2. Metron’s software for system design.

Part 3. Operads: the basic idea.

Part 4. Network operads: an easy example.

Part 5. Algebras of network operads: some easy examples.

Part 6. Network models.

Part 7. Step-by-step compositional design and tasking using commitment networks.

Part 8. Compositional tasking using category-valued network models.

Part 9. Network models from Petri nets with catalysts.

## April 08, 2019

### Noncommutative Geometry — Noncommutative manifolds and their symmetries

A conference dedicated to Giovanni Landi on the occasion of his 60th Birthday,

### Doug Natelson — Brief items

A few brief items as I get ready to write some more about several issues:

• The NY Times posted this great video about using patterned hydrophobic/hydrophilic surfaces to get bouncing water droplets to spin.  Science has their own video, and the paper itself is here.
• Back in January Scientific American had this post regarding graduate student mental health.  This is a very serious, complex issue, thankfully receiving increased attention.
• The new Dark Energy Spectroscopic Instrument has had "first light."
• Later this week the Event Horizon Telescope will be releasing its first images of the supermassive black hole at the galactic center.
• SpaceX is getting ready to launch a Falcon Heavy carrying a big communications satellite.  The landing for these things is pretty science-fiction-like!

## April 07, 2019

### Tommaso Dorigo — Accelerating The Search For Dark Matter With Machine Learning

Yes, this is supposedly a particle physics blog, not a machine learning one - and yet, I have been finding myself blogging a lot more about machine learning than particle physics as of late. Why is that?
Well, the topic of algorithms that may dramatically improve our statistical inference from collider data is of course dear to my heart, and has been for at least two decades (my first invention, the "inverse bagging" algorithm, is dated 1992, when nobody even knew what bagging was). But the more incidental reason is that now _everybody_ is interested in the topic, and that means all of my particle physics and astroparticle physics colleagues.

## April 06, 2019

### Terence Tao — Prismatic cohomology

Last week, we had Peter Scholze give an interesting distinguished lecture series here at UCLA on “Prismatic Cohomology”, which is a new type of cohomology theory worked out by Scholze and Bhargav Bhatt. (Video of the talks will be available shortly; for now we have some notes taken by two notetakers in the audience on that web page.) My understanding of this (speaking as someone that is rather far removed from this area) is that it is progress towards the “motivic” dream of being able to define cohomology ${H^i(X/\overline{A}, A)}$ for varieties ${X}$ (or similar objects) defined over arbitrary commutative rings ${\overline{A}}$, and with coefficients in another arbitrary commutative ring ${A}$. Currently, we have various flavours of cohomology that only work for certain types of domain rings ${\overline{A}}$ and coefficient rings ${A}$:

• Singular cohomology, which roughly speaking works when the domain ring ${\overline{A}}$ is a characteristic zero field such as ${{\bf R}}$ or ${{\bf C}}$, but can allow for arbitrary coefficients ${A}$;
• de Rham cohomology, which roughly speaking works as long as the coefficient ring ${A}$ is the same as the domain ring ${\overline{A}}$ (or a homomorphic image thereof), as one can only talk about ${A}$-valued differential forms if the underlying space is also defined over ${A}$;
• ${\ell}$-adic cohomology, which is a remarkably powerful application of étale cohomology, but only works well when the coefficient ring ${A = {\bf Z}_\ell}$ is localised around a prime ${\ell}$ that is different from the characteristic ${p}$ of the domain ring ${\overline{A}}$; and
• Crystalline cohomology, in which the domain ring is a field ${k}$ of some finite characteristic ${p}$, but the coefficient ring ${A}$ can be a slight deformation of ${k}$, such as the ring of Witt vectors of ${k}$.

There are various relationships between the cohomology theories, for instance de Rham cohomology coincides with singular cohomology for smooth varieties in the limiting case ${A=\overline{A} = {\bf R}}$. The following picture Scholze drew in his first lecture captures these sorts of relationships nicely:

The new prismatic cohomology of Bhatt and Scholze unifies many of these cohomologies in the “neighbourhood” of the point ${(p,p)}$ in the above diagram, in which the domain ring ${\overline{A}}$ and the coefficient ring ${A}$ are both thought of as being “close to characteristic ${p}$” in some sense, so that the dilates ${pA, p\overline{A}}$ of these rings are either zero, or “small”. For instance, the ${p}$-adic ring ${{\bf Z}_p}$ is technically of characteristic ${0}$, but ${p {\bf Z}_p}$ is a “small” ideal of ${{\bf Z}_p}$ (it consists of those elements of ${{\bf Z}_p}$ of ${p}$-adic norm at most ${1/p}$), so one can think of ${{\bf Z}_p}$ as being “close to characteristic ${p}$” in some sense. Scholze drew a “zoomed in” version of the previous diagram to informally describe the types of rings ${A,\overline{A}}$ for which prismatic cohomology is effective:

To define prismatic cohomology rings ${H^i_\Delta(X/\overline{A}, A)}$ one needs a “prism”: a ring homomorphism from ${A}$ to ${\overline{A}}$ equipped with a “Frobenius-like” endomorphism ${\phi: A \to A}$ on ${A}$ obeying some axioms. By tuning these homomorphisms one can recover existing cohomology theories like crystalline or de Rham cohomology as special cases of prismatic cohomology. These specialisations are analogous to how a prism splits white light into various individual colours, giving rise to the terminology “prismatic”, and depicted by this further diagram of Scholze:

(And yes, Peter confirmed that he and Bhargav were inspired by the Dark Side of the Moon album cover in selecting the terminology.)

There was an abstract definition of prismatic cohomology (as being the essentially unique cohomology arising from prisms that obeyed certain natural axioms), but there was also a more concrete way to view them in terms of coordinates, as a “${q}$-deformation” of de Rham cohomology. Whereas in de Rham cohomology one worked with derivative operators ${d}$ that for instance applied to monomials ${t^n}$ by the usual formula

$\displaystyle d(t^n) = n t^{n-1} dt,$

prismatic cohomology in coordinates can be computed using a “${q}$-derivative” operator ${d_q}$ that for instance applies to monomials ${t^n}$ by the formula

$\displaystyle d_q (t^n) = [n]_q t^{n-1} d_q t$

where

$\displaystyle [n]_q = \frac{q^n-1}{q-1} = 1 + q + \dots + q^{n-1}$

is the “${q}$-analogue” of ${n}$ (a polynomial in ${q}$ that equals ${n}$ in the limit ${q=1}$). (The ${q}$-analogues become more complicated for more general forms than these.) In this more concrete setting, the fact that prismatic cohomology is independent of the choice of coordinates apparently becomes quite a non-trivial theorem.
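The two expressions for $[n]_q$ and its $q \to 1$ limit can be verified directly; a small sketch using exact rational arithmetic so the closed form $(q^n-1)/(q-1)$ can be compared without rounding error:

```python
from fractions import Fraction

def q_analogue(n, q):
    """[n]_q = 1 + q + ... + q^(n-1)."""
    return sum(q**k for k in range(n))

# At q = 1 the q-analogue reduces to n itself, recovering the ordinary
# derivative rule d(t^n) = n t^(n-1) dt from the q-derivative rule.
assert q_analogue(5, 1) == 5

# Away from q = 1 the sum agrees with the closed form (q^n - 1)/(q - 1).
q = Fraction(3, 2)
assert q_analogue(4, q) == (q**4 - 1) / (q - 1)   # both equal 65/8
```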

### Andrew Jaffe —

@TheMekons make the world alright, briefly, at the 100 Club, London.

### Backreaction — Away Note/Travel Update/Interna

I will be away next week, giving three talks at Brookhaven National Lab on Tuesday, April 9, and one at Yale April 10. Next upcoming lectures are Stuttgart on April 29 (in Deutsch), Barcelona on May 23, Mainz on June 11, (probably) Groningen on June 22, and Hamburg on July 5th. I may or may not attend this year’s Lindau Nobel Laureate meeting, and have a hard time making up my mind about

### Backreaction — Does the world need a larger particle collider? [video]

Another attempt to explain myself. Transcript below. I know you all wanted me to say something about the question of whether or not to build a new particle collider, one that is larger than even the Large Hadron Collider. And your wish is my command, so here we go. There seem to be a lot of people who think I’m an enemy of particle physics. Most of those people happen to be particle

### John Baez — Hidden Symmetries of the Hydrogen Atom

Here’s the math colloquium talk I gave at Georgia Tech this week:

Abstract. A classical particle moving in an inverse square central force, like a planet in the gravitational field of the Sun, moves in orbits that do not precess. This lack of precession, special to the inverse square force, indicates the presence of extra conserved quantities beyond the obvious ones. Thanks to Noether’s theorem, these indicate the presence of extra symmetries. It turns out that not only rotations in 3 dimensions, but also in 4 dimensions, act as symmetries of this system. These extra symmetries are also present in the quantum version of the problem, where they explain some surprising features of the hydrogen atom. The quest to fully understand these symmetries leads to some fascinating mathematical adventures.

I left out a lot of calculations, but someday I want to write a paper where I put them all in. This material is all known, but I feel like explaining it my own way.

In the process of creating the slides and giving the talk, though, I realized there’s a lot I don’t understand yet. Some of it is embarrassingly basic! For example, I give Greg Egan’s nice intuitive argument for how you can get some ‘Runge–Lenz symmetries’ in the 2d Kepler problem. I might as well just quote his article:

• Greg Egan, The ellipse and the atom.

He says:

Now, one way to find orbits with the same energy is by applying a rotation that leaves the sun fixed but repositions the planet. Any ordinary three-dimensional rotation can be used in this way, yielding another orbit with exactly the same shape, but oriented differently.

But there is another transformation we can use to give us a new orbit without changing the total energy. If we grab hold of the planet at either of the points where it’s travelling parallel to the axis of the ellipse, and then swing it along a circular arc centred on the sun, we can reposition it without altering its distance from the sun. But rather than rotating its velocity in the same fashion (as we would do if we wanted to rotate the orbit as a whole) we leave its velocity vector unchanged: its direction, as well as its length, stays the same.

Since we haven’t changed the planet’s distance from the sun, its potential energy is unaltered, and since we haven’t changed its velocity, its kinetic energy is the same. What’s more, since the speed of a planet of a given mass when it’s moving parallel to the axis of its orbit depends only on its total energy, the planet will still be in that state with respect to its new orbit, and so the new orbit’s axis must be parallel to the axis of the original orbit.

Rotations together with these ‘Runge–Lenz transformations’ generate an SO(3) action on the space of elliptical orbits of any given energy. But what’s the most geometrically vivid description of this SO(3) action?

Someone at my talk noted that you could grab the planet at any point of its path, and move to anywhere the same distance from the Sun, while keeping its speed the same, and get a new orbit with the same energy. Are all the SO(3) transformations of this form?

I have a bunch more questions, but this one is the simplest!
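The energy bookkeeping behind all of these repositioning moves is easy to check numerically: swinging the planet along a circle about the sun while leaving its velocity vector untouched changes neither $r$ nor the speed, so $E = \tfrac{1}{2}v^2 - GM/r$ is unchanged. A minimal sketch, with my own units ($GM = 1$) and numbers:

```python
from math import cos, sin, hypot

GM = 1.0

def energy(x, y, vx, vy):
    """Total energy per unit mass of a planet in an inverse-square field."""
    r = hypot(x, y)
    return 0.5 * (vx**2 + vy**2) - GM / r

# A planet at distance r = 2 from the sun, with some velocity.
x, y, vx, vy = 2.0, 0.0, 0.1, 0.6
E0 = energy(x, y, vx, vy)

# Swing it through 0.8 radians along a circular arc centred on the sun,
# keeping the velocity vector fixed, as in Egan's construction.
a = 0.8
x2, y2 = x * cos(a) - y * sin(a), x * sin(a) + y * cos(a)
E1 = energy(x2, y2, vx, vy)

assert abs(E0 - E1) < 1e-12  # same distance, same speed => same energy
```

Of course, this only confirms that the new initial condition has the same energy; the interesting part, which transformations arise and how they assemble into an SO(3) action, is exactly the question above.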

## April 05, 2019

### Scott Aaronson — Congratulations!

Congrats to Geoffrey Hinton, Yann LeCun, and Yoshua Bengio, who won the 2018 Turing Award for their work on deep learning (i.e., what used to be called neural nets). This might be the first Turing Award ever given for something where no one really understands why it works … and it’s years overdue.

Congrats to Avi Wigderson for winning the Knuth Prize. When I was asked to write a supporting nomination letter, my first suggestion was to submit a blank sheet of paper—since for anyone in theoretical computer science, there’s nothing that needs to be said about why Avi should win any awards we have. I hope Avi remains a guiding light of our community for many years to come.

And congrats to Mark Braverman for winning the Alan T. Waterman Award, one that I have some personal fondness for, along with materials scientist Jennifer Dionne. As Sasha Razborov once put it, after he (Sasha), I, and others recoiled from the task of proving the Linial-Nisan Conjecture, that polylog-wise independent distributions are indistinguishable from uniform by AC0 circuits, a “braver man” stepped in to do the job.

### Matt von Hippel — Nonperturbative Methods for Conformal Theories in Natal

I’m at a conference this week, on Nonperturbative Methods for Conformal Theories, in Natal on the northern coast of Brazil.

“Nonperturbative” means that most of the people at this conference don’t use the loop-by-loop approximation of Feynman diagrams. Instead, they try to calculate things that don’t require approximations, finding formulas that work even for theories where the forces involved are very strong. In practice this works best in what are called “conformal” theories; roughly speaking, these are theories that look the same whichever “scale” you use. Sometimes these theories are “integrable”: theories that can be “solved” exactly with no approximation. Sometimes these theories can be “bootstrapped”: starting with a guess and seeing how various principles of physics constrain it, mapping out a kind of “space of allowed theories”. Both approaches, integrability and bootstrap, are present at this conference.

This isn’t quite my community, but there’s a fair bit of overlap. We care about many of the same theories, like N=4 super Yang-Mills. We care about tricks to do integrals better, or to constrain mathematical guesses better, and we can trade these kinds of tricks and give each other advice. And while my work is typically “perturbative”, I did have one nonperturbative result to talk about, one which turns out to be more closely related to the methods these folks use than I had appreciated.

## April 04, 2019

### n-Category CaféCategory Theory 2019

I’ve announced this before, but registration is now open! Here we go…

Third announcement and call for contributions

Category Theory 2019

University of Edinburgh, 7–13 July 2019

Invited speakers:

plus an invited tutorial lecture on graphical linear algebra by

and a public event on Inclusion-exclusion in mathematics and beyond by

Submission for CT2019 is handled by EasyChair through this link

The deadline for submissions is May 1, with notification of acceptance on June 1. To submit, you will need to make an EasyChair account, which is a simple process. Submissions should be in the form of a brief (1 page) abstract.

Registration is independent of paper submission and can be done through the conference website.

The registration fee is £90 for early career participants, and £180 for established career participants, and will include tea breaks, lunches, and the conference dinner.

When you register, you may also apply for funding support. Successful applicants will receive funding for accommodation, but not for travel or the registration fee. Decisions on funding support will be given in mid-May.

Registration is open until May 1 if you are applying for funding support, or June 14 if you are not.

Enquiries may be directed by email to categorytheory2019@gmail.com.

We look forward to seeing you in Edinburgh!

The Organizing Committee: Steve Awodey, Richard Garner, Chris Heunen (chair), Tom Leinster, Christina Vasilakopoulou

The Scientific Committee: Steve Awodey (chair), Julie Bergner, Gabriella Bohm, John Bourke, Eugenia Cheng, Robin Cockett, Chris Heunen, Zurab Janelidze, Dominic Verity

### Tommaso Dorigo — The Plot Of The Week - ATLAS Dilepton Resonance Search

For the tenth anniversary of this blog being hosted by Science 2.0, which is coming in a few days, I decided to reinstate the habit I once had of picking and commenting each week on a result from high-energy physics research, a series I called "The Plot Of The Week". These days I am busier than I was when this blog first started being published here, so I am not sure I will be able to keep a weekly pace for this series; on the other hand I want to make an attempt, and the first step in that direction is this article.

## April 03, 2019

### Scott Aaronson — Beware of fake FOCS site!

As most of you in theoretical computer science will know, the submission deadline for the 2019 FOCS conference is this Friday, April 5. The FOCS’2019 program committee chair, my UT Austin colleague David Zuckerman, has asked me to warn everyone that a fake submission site was set up at aconf.org—apparently as a phishing scam—and is one of the first results to come up when you google “FOCS 2019.” Do not submit there! The true URL is focs2019.cs.jhu.edu; accept no substitutes!

Anyway, I’ve been thrashing for several weeks—just barely escaping spaghettification at the Email Event Horizon—but I hope to be back shortly with your regularly scheduled programming.

### Doug Natelson — The physics of vision

We had another great colloquium last week, this one by Stephanie Palmer of the University of Chicago.  One aspect of her research looks at the way visual information is processed.  In particular, not all of the visual information that hits your retina is actually passed along to your brain.  In that sense, your retina is doing a kind of image compression.

Your retina and brain are actually anticipating, effectively extrapolating the predictable parts of motion.  This makes sense - it takes around 50 ms for the neurons in your retina to spike in response to a visual stimulus like a flash of light.  That kind of delay would make it nearly impossible to do things like catch a hard-thrown ball or return a tennis serve.  You are able to do these things because your brain is telling you ahead of time where some predictably moving object should be.  A great demonstration of this is here.  It looks like the flashing radial lines are lagging behind the rotating "second hand", but they're not.  Instead, your brain is telling you predictive information about where the second hand should be.

People are able to do instrumented measurements of retinal tissue, looking at the firing of individual neurons in response to computer-directed visual stimuli.  Your retina has evolved both to do the anticipation, and to do a very efficient job of passing along the predictable part of visualized motion while not bothering to pass along much noise that might be on top of this.  Here is a paper that talks about how one can demonstrate this quantitatively, and here (sorry - can't find a non-pay version) is an analysis about how optimized the compression is at tossing noise and keeping predictive power.  Very cool stuff.

## March 29, 2019

### John Baez — Social Contagion Modeled on Random Networks

Check out the video of Daniel Cicala’s talk, the fourth in the Applied Category Theory Seminar here at U. C. Riverside. It was nicely edited by Paola Fernandez and uploaded by Joe Moeller.

Abstract. A social contagion may manifest as a cultural trend, a spreading opinion or idea or belief. In this talk, we explore a simple model of social contagion on a random network. We also look at the effect that network connectivity, edge distribution, and heterogeneity have on the diffusion of a contagion.

The talk slides are here.

• Mason A. Porter and James P. Gleeson, Dynamical systems on networks: a tutorial.

• Duncan J. Watts, A simple model of global cascades on random networks.
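The Watts cascade model referenced above is straightforward to simulate: each node adopts the contagion once the adopting fraction of its neighbours reaches a threshold. Here is a minimal sketch on an Erdős–Rényi random network; the parameter values are my own illustrative choices, not from the talk:

```python
import random

def watts_cascade(n=1000, p=0.01, threshold=0.18, seeds=5, rng=None):
    """Run a threshold cascade on a G(n, p) random network.

    Returns the final fraction of adopting nodes."""
    rng = rng or random.Random(0)

    # Build the random network: each pair is linked with probability p.
    nbrs = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                nbrs[i].add(j)
                nbrs[j].add(i)

    # Seed a few initial adopters, then iterate to a fixed point.
    active = set(rng.sample(range(n), seeds))
    changed = True
    while changed:
        changed = False
        for v in range(n):
            if v in active or not nbrs[v]:
                continue
            if len(nbrs[v] & active) / len(nbrs[v]) >= threshold:
                active.add(v)
                changed = True
    return len(active) / n

print(watts_cascade())
```

Varying `p` (connectivity) and `threshold` (node fragility) shows the phenomenon the talk is about: cascades are either tiny or global, with a sharp transition in between.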

### John Baez — The Pi Calculus: Towards Global Computing

Check out the video of Christian Williams’s talk in the Applied Category Theory Seminar here at U. C. Riverside. It was nicely edited by Paola Fernandez and uploaded by Joe Moeller.

Abstract. Historically, code represents a sequence of instructions for a single machine. Each computer is its own world, and only interacts with others by sending and receiving data through external ports. As society becomes more interconnected, this paradigm becomes more inadequate – these virtually isolated nodes tend to form networks of great bottleneck and opacity. Communication is a fundamental and integral part of computing, and needs to be incorporated in the theory of computation.

To describe systems of interacting agents with dynamic interconnection, in 1980 Robin Milner invented the pi calculus: a formal language in which a term represents an open, evolving system of processes (or agents) which communicate over names (or channels). Because a computer is itself such a system, the pi calculus can be seen as a generalization of traditional computing languages; there is an embedding of lambda into pi – but there is an important change in focus: programming is less like controlling a machine and more like designing an ecosystem of autonomous organisms.

We review the basics of the pi calculus, and explore a variety of examples which demonstrate this new approach to programming. We will discuss some of the history of these ideas, called “process algebra”, and see exciting modern applications in blockchain and biology.

“… as we seriously address the problem of modelling mobile communicating systems we get a sense of completing a model which was previously incomplete; for we can now begin to describe what goes on outside a computer in the same terms as what goes on inside – i.e. in terms of interaction. Turning this observation inside-out, we may say that we inhabit a global computer, an informatic world which demands to be understood just as fundamentally as physicists understand the material world.” — Robin Milner

The talk slides are here.

• Robin Milner, The polyadic pi calculus: a tutorial.

• Robin Milner, Communicating and Mobile Systems.

• Joachim Parrow, An introduction to the pi calculus.
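To get a feel for name mobility, here is a minimal Python sketch (my own illustration, not from the talk) in which a channel is itself sent over another channel, the key move that distinguishes the pi calculus from earlier process algebras:

```python
import queue
import threading

# Channels are modelled as queues; crucially, a channel can itself be
# sent over another channel. The pi-calculus terms in the comments use
# informal notation: request(r) receives name r, r<"hello"> sends on r.

def server(request):
    # request(r).r<"hello">.0 -- receive a reply channel, answer on it.
    reply = request.get()
    reply.put("hello")

def client(request, out):
    # (nu r)(request<r>.r(x)...) -- create a fresh private name r,
    # send it to the server, then listen on it.
    reply = queue.Queue()
    request.put(reply)
    out.append(reply.get())

request = queue.Queue()
out = []
t = threading.Thread(target=server, args=(request,))
t.start()
client(request, out)
t.join()
print(out[0])  # hello
```

The server never knew the name `reply` in advance; its interconnection topology changed dynamically when the name arrived, which is the "mobile communicating systems" aspect of Milner's quote.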

### Matt von Hippel — Hexagon Functions V: Seventh Heaven

I’ve got a new paper out this week, a continuation of a story that has threaded through my career since grad school. With a growing collaboration (now Simon Caron-Huot, Lance Dixon, Falko Dulat, Andrew McLeod, and Georgios Papathanasiou) I’ve been calculating six-particle scattering amplitudes in my favorite theory-that-does-not-describe-the-real-world, N=4 super Yang-Mills. We’ve been pushing to more and more “loops”: tougher and tougher calculations that approximate the full answer better and better, using the “word jumble” trick I talked about in Scientific American. And each time, we learn something new.

Now we’re up to seven loops for some types of particles, and six loops for the rest. In older blog posts I talked in megabytes: half a megabyte for three loops, 15 MB for four loops, 300 MB for five loops. I don’t have a number like that for six and seven loops: we don’t store the result in that way anymore, it just got too cumbersome. We have to store it in a simplified form, and even that takes 80 MB.

Some of what we learned has to do with the types of mathematical functions that we need: our “guess” for the result at each loop. We’ve honed that guess down a lot, and discovered some new simplifications along the way. I won’t tell that story here (except to hint that it has to do with “cosmic Galois theory”) because we haven’t published it yet. It will be out in a companion paper soon.

This paper focused on the next step, going from our guess to the correct six- and seven-loop answers. Here too there were surprises. For the last several loops, we’d observed a surprisingly nice pattern: different configurations of particles with different numbers of loops were related, in a way we didn’t know how to explain. The pattern stuck around at five loops, so we assumed it was the real deal, and guessed the new answer would obey it too.

Usually when scientists tell this kind of story, the pattern works, it’s a breakthrough, everyone gets a Nobel prize, etc. This time? Nope!

The pattern failed. And it failed in a way that was surprisingly difficult to detect.

The way we calculate these things, we start with a guess and then add what we know. If we know something about how the particles behave at high energies, or when they get close together, we use that to pare down our guess, getting rid of pieces that don’t fit. We kept adding these pieces of information, and each time the pattern seemed ok. It was only when we got far enough into one of these approximations that we noticed a piece that didn’t fit.

That piece was a surprisingly stealthy mathematical function, one that hid from almost every test we could perform. There aren’t any functions like that at lower loops, so we never had to worry about this before. But now, in the rarefied land of six-loop calculations, they finally start to show up.

We have another pattern, similar to the old one, that hasn’t broken yet. But at this point we’re cautious: things get strange as calculations get more complicated, and sometimes the nice simplifications we notice are just accidents. It’s always important to check.

This result was a long time coming. Coordinating a large project with such a widely spread collaboration is difficult, and sometimes frustrating. People get distracted by other projects, they have disagreements about what the paper should say, even scheduling Skype around everyone’s time zones is a challenge. I’m more than a little exhausted, but happy that the paper is out, and that we’re close to finishing the companion paper as well. It’s good to have results that we’ve been hinting at in talks finally out where the community can see them. Maybe they’ll notice something new!

### Robert Helling — Proving the Periodic Table

The year 2019 is the International Year of the Periodic Table, celebrating the 150th anniversary of Mendeleev's discovery. This prompts me to report on something that I learned in recent years when co-teaching "Mathematical Quantum Mechanics" with mathematicians, in particular with Heinz Siedentop: We know less about the mathematics of the periodic table than I thought.

In high school chemistry you learned that the periodic table comes about because of the orbitals in atoms. There is Hund's rule that tells you the order in which you have to fill the shells and, within them, the orbitals (s, p, d, f, ...). Then, in your second semester at university, you learn to derive those using Schrödinger's equation: You diagonalise the Hamiltonian of the hydrogen atom and find the shells in terms of the main quantum number $n$ and the orbitals in terms of the angular momentum quantum number $L$, where $L=0$ corresponds to s, $L=1$ to p and so on. And you fill the orbitals thanks to the Pauli exclusion principle. So, this proves the story of the chemists.
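The filling order the chemists use is commonly stated as the Madelung ($n+\ell$) rule: fill orbitals in order of increasing $n+\ell$, breaking ties by smaller $n$. A tiny sketch (the function name is mine):

```python
# Madelung rule: sort orbitals (n, l) by n + l, ties broken by n.
def madelung_order(max_n=4):
    orbitals = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    return sorted(orbitals, key=lambda nl: (nl[0] + nl[1], nl[0]))

LETTERS = "spdf"
labels = [f"{n}{LETTERS[l]}" for n, l in madelung_order()]
print(labels)
# -> ['1s', '2s', '2p', '3s', '3p', '4s', '3d', '4p', '4d', '4f']
```

Note the famous inversion: 4s comes before 3d, which a naive hydrogen-level ordering by $n$ alone would not give you; this is precisely where the interactions discussed below start to matter.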

Except that it doesn't: This is only true for the hydrogen atom. But the Hamiltonian for an atom with nuclear charge $Z$ and $N$ electrons (so we allow for ions) is (in convenient units)

$$H = -\sum_{i=1}^N \Delta_i -\sum_{i=1}^N \frac{Z}{|x_i|} + \sum_{i\lt j}^N\frac{1}{|x_i-x_j|}.$$

The story of the previous paragraph would be true if the last term, the Coulomb interaction between the electrons, were not there. In that case, there would be no interaction between the electrons, and we could solve a hydrogen-type problem for each electron separately and then anti-symmetrise the wave functions in the end in a Slater determinant to take into account their fermionic nature. But of course, in the real world, the Coulomb interaction is there, and it contributes like $N^2$ to the energy, so it is of the same order (for almost neutral atoms) as the $ZN$ of the electron-nucleus potential.

The approximation of dropping the electron-electron Coulomb interaction is well known in condensed matter systems, where the resulting theory is known as a "Fermi gas". There it gives you band structure (which is then used to explain how a transistor works).

[Figure: band structure in an NPN transistor]
Also in that case, you pretend there is only one electron in the world, which feels the periodic electric potential created by the nuclei and all the other electrons; the latter no longer show up in the wave function but only as a charge density.

For atoms, you could try to tell a similar story by taking the inner electrons into account by saying that the most important effect of the ee-Coulomb interaction is to shield the potential of the nucleus, thereby making the effective $Z$ for the outer electrons smaller. This picture would of course be true if there were no correlations between the electrons and if all the inner electrons were spherically symmetric in their distribution around the nucleus and much closer to it than the outer ones. But this sounds more like a daydream than a controlled approximation.

In the condensed matter situation, the standing of the Fermi gas is much better, as there you can invoke renormalisation group arguments: the conductivities you are interested in are long-wavelength compared to the lattice structure, so we are in the infrared limit, and the Coulomb interaction is indeed an irrelevant term in more than one Euclidean dimension (and yes, in 1D the Fermi gas is not the whole story; there is the Luttinger liquid as well).

But for atoms, I don't see how you would invoke such RG arguments.

So what can you do (with regards to actually proving the periodic table)? In our class, we teach how Lieb and Simon showed that in the $N=Z\to \infty$ limit (which in some sense can also be viewed as the semi-classical limit when you bring $\hbar$ back in), the ground state energy $E^Q$ of the Hamiltonian above is in fact approximated by the ground state energy $E^{TF}$ of the Thomas-Fermi model (the simplest of all density functional theories, where instead of the multi-particle wave function you only use the one-particle electronic density $\rho(x)$ and approximate the kinetic energy by a term like $\int \rho^{5/3}$, which is exact for the free Fermi gas in empty space):

$$E^Q(Z) = E^{TF}(Z) + O(Z^2)$$

where by a simple scaling argument $E^{TF}(Z) \sim Z^{7/3}$. More recently, people have computed more terms in this asymptotic expansion, which goes in powers of $Z^{-1/3}$: the second term ($O(Z^{6/3})=O(Z^2)$) is known, and people have put a lot of effort into $O(Z^{5/3})$, but it should be clear that this technology is still very, very far from proving anything "periodic", which would be $O(Z^0)$. So don't hold your breath hoping to find the periodic table from this approach.
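The simple scaling argument can be made explicit. The Thomas-Fermi functional (with constants suppressed) is

$$E^{TF}[\rho] = \int \rho^{5/3}\,dx - Z\int \frac{\rho(x)}{|x|}\,dx + \frac{1}{2}\iint \frac{\rho(x)\rho(y)}{|x-y|}\,dx\,dy,$$

and under the rescaling $\rho(x) = Z^2\,\bar\rho(Z^{1/3}x)$ (which preserves the normalisation up to a factor, $\int\rho = Z\int\bar\rho$) each of the three terms picks up exactly a factor $Z^{7/3}$, so for $N=Z$ the minimal energy scales as $E^{TF}(Z) = Z^{7/3}\,E^{TF}(1)$.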

On the other hand, chemistry of the periodic table (where the column is supposed to predict chemical properties of the atom expressed in terms of the orbitals of the "valence electrons") works best for small atoms. So, another sensible limit appears to be to keep $N$ small and fixed and only send $Z\to\infty$. Of course this is not really describing atoms but rather highly charged ions.

The advantage of this approach is that in the above Hamiltonian, you can absorb the $Z$ of the electron-nucleus interaction into a rescaling of $x$, which then lets $Z$ reappear in front of the electron-electron term as $1/Z$. In this limit, one can then try to treat the ugly unwanted ee-term perturbatively.
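Written out, the rescaling goes as follows: substituting $x_i \to x_i/Z$ in the Hamiltonian above gives

$$H = Z^2\left( -\sum_{i=1}^N \Delta_i - \sum_{i=1}^N \frac{1}{|x_i|} + \frac{1}{Z}\sum_{i\lt j}^N \frac{1}{|x_i-x_j|}\right),$$

so up to the overall factor $Z^2$, the electron-electron repulsion indeed comes with a small coupling $1/Z$, which is what makes perturbation theory in this limit sensible.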

Friesecke (from TUM) and collaborators have made impressive progress in this direction and in this limit they could confirm that for $N < 10$ the chemists' picture is actually correct (with some small corrections). There are very nice slides of a seminar talk by Friesecke on these results.

Of course, as a practitioner, this will not surprise you (after all, chemistry works), but it is nice to know that mathematicians can actually prove things in this direction. Still, there is some way to go, even 150 years after Mendeleev.

## March 26, 2019

David Brooks writes in the New York Times that we should figure out how to bottle the civic health southwest Nebraska enjoys:

Everybody says rural America is collapsing. But I keep going to places with more moral coherence and social commitment than we have in booming urban areas. These visits prompt the same question: How can we spread the civic mind-set they have in abundance?

For example, I spent this week in Nebraska, in towns like McCook and Grand Island. These places are not rich. At many of the schools, 50 percent of the students receive free or reduced-cost lunch. But they don’t have the pathologies we associate with poverty.

Maybe that’s because those places aren’t high in poverty! The poverty rate in McCook is 9.6%; in Grand Island it’s 15%. The national rate is 12.3%. Here’s a Census page with those numbers. What about the lunches? 50 percent of students receiving free or reduced-price lunch sounds like a lot, unless you know that slightly more than half of all US public school students are eligible for free and reduced-price lunch. (Brooks says “receive,” not “are eligible for,” but it’s the latter statistics that are widely reported and I’m guessing that’s what he means; apologies if I’m wrong.)

Crime is low. Many people leave their homes and cars unlocked.

Is it? And do they? I didn’t immediately find city-level crime data that looked rock solid to me, but if you trust city-data.com, crime in Grand Island roughly tracks national levels while crime in McCook is a little lower. And long-time Grand Island resident Gary Christensen has a different take than Brooks does:

Gary Christensen, a Grand Island resident for over 68 years says times are changing.
“It was a community that you could leave you doors open leave the keys in your car and that kind of thing, and nobody ever bothered it. But those days are long gone,” said Gary Christensen, resident.

One way you can respond to this is to say I’m missing the point of Brooks’s article. Isn’t he just saying civic involvement is important and it’s healthy when people feel a sense of community with their neighbors? Are the statistics really that important?

Yes. They’re important. Because what Brooks is really doing here is inviting us to lower ourselves into a warm, comfortable stereotype: that where the civic virtues are to be found in full bloom, where people are “just folks,” is in the rural parts of Nebraska, not in New Orleans, or Seattle, or Laredo, or Madison, and most definitely not in Brooklyn or Brookline or Bethesda. But he can’t just say “you know how those people are.” There needs to be some vaguely evidentiary throat-clearing before you launch into what you were going to say anyway.

Which is that Nebraska people are simple dewy real Americans, not like you, urbanized coastal reader of the New York Times. I don’t buy it. McCook, Nebraska sounds nice; but it sounds nice in the same way that urbanized coastal communities are nice. You go someplace and talk to a guy who’s on the city council, you’re gonna be talking to a guy who cares about his community and thinks a lot about how to improve it. Even in Bethesda.

Constantly they are thinking: Does this help my town or hurt it? And when you tell them that this pervasive civic mind-set is an unusual way to be, they look at you blankly because they can’t fathom any other.

There’s Brooks in a nutshell. The only good people are the people who don’t know any better than to be good. By saying so, he condescends to his subjects, his readers, and himself all at once. I don’t buy it. I’ll bet people in southwest Nebraska can fathom a lot more than Brooks thinks they can. I think they probably fathom David Brooks better than he fathoms them.