Planet Musings

September 28, 2022

Doug NatelsonNews items, Nobel speculation

 Some news items of interest:

  • Three weeks old now, but this story about IBM cooling down their enormous dilution refrigerator setup got my attention (as someone with ultralow temperature scientific roots).  IBM did this to demonstrate that this kind of large-scale cooling is possible, since it may be necessary for some implementations (whether superconducting or spin-based) of quantum computing.  To give a sense of scale, Oxford Instruments used to rate their dilution refrigerators based on their cooling power at 100 mK (how much heat could you dump into the system and still have it maintain a steady 100 mK temperature).  The system I worked on in grad school was pretty large, a model 400, meaning it had 400 microwatts of cooling power at 100 mK.  The new IBM setup can handle six dilution refrigerator units with a total cooling power of 10 mW (25 times more cooling capacity) at 100 mK, and with plenty of room for lots of electronic hardware.  Dil fridges are somewhat miraculous, in that they give access to temperatures far below what is readily available in nature thanks to the peculiarities of the 3He/4He mixture phase diagram. 
  • This retraction and the related news article are quite noteworthy.  The claim of room temperature superconductivity in carbon-containing hydrogen-rich material at very high pressures (written about here) has been retracted by the editors of Nature over the objection of the authors.    The big issue, as pointed out by Hirsch and van der Marel, is about the magnetic susceptibility data, the subtraction of a temperature-dependent background, and concerns whether this was done incorrectly (or misleadingly/fraudulently).  
  • Can we all agree, after looking at images like the one in this release, that STM and CO-functionalized-tip AFM are truly amazing techniques that show molecules really do look like chemistry structural diagrams from high school?
  • Quanta magazine has a characteristically excellent article about patterns arising when curved elastic surfaces are squished flat.  
  • They also have an article about this nice experiment (which I have not read in detail).  I need to look at this further, but it's a safe bet that many will disagree with the claim (implied by the article headline) that this has now completely solved the high temperature superconductivity problem.  
And it's that time of year again to speculate about the Nobel prizes.  It would seem that condensed matter is probably due, given the cyclic history of the physics prize.  There are many candidates that have been put forward in previous years (topological insulators; metamaterials; quantum cascade lasers; twisted materials; multiferroics; anyons; my always-wrong favorite of geometric phases) as well as quantum optics (Bell's inequalities).  I suspect the odds-on favorite for the medicine prize would be mRNA-based vaccines, but I don't know the field at all.  Feel free to gossip in the comments.

September 27, 2022

David Hoggsignal processing vs forward modeling

Abby Shaum (CUNY) and I are trying to write up a paper about our work treating oscillating stars as something like FM radios: We use the oscillation modes as carrier frequencies and find any orbital companions through phase or frequency variations of that carrier signal. Today we discussed the difference between doing that and forward modeling the signal. The former is signal processing. The latter is a generative model. Very different! And in many senses forward modeling is more principled. But I still think (and hope) that signal processing has a place in astronomy.
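
To make the contrast concrete, here is a minimal sketch of the signal-processing route, assuming a single mode with sinusoidal phase modulation. Every number below is invented for illustration; this is a toy, not the pipeline from the paper.

    import numpy as np
    from scipy.signal import hilbert

    # Toy "FM radio" view of an oscillating star: one mode acts as a carrier
    # whose phase is modulated by an orbital companion (all values made up).
    fs = 10.0                       # samples per day
    t = np.arange(0, 1000, 1 / fs)  # days
    f_carrier = 1.2                 # mode frequency (cycles per day)
    f_orbit = 0.01                  # companion orbital frequency (cycles per day)
    phase_amp = 0.3                 # phase-modulation amplitude (radians)

    signal = np.sin(2 * np.pi * f_carrier * t
                    + phase_amp * np.sin(2 * np.pi * f_orbit * t))

    # Demodulate: build the analytic signal, unwrap its phase, and subtract
    # the carrier ramp to expose the companion-induced phase wobble.
    phase = np.unwrap(np.angle(hilbert(signal)))
    residual = phase - 2 * np.pi * f_carrier * t

    # The residual oscillates at f_orbit with amplitude ~phase_amp.
    freqs = np.fft.rfftfreq(residual.size, d=1 / fs)
    power = np.abs(np.fft.rfft(residual - residual.mean()))
    print("recovered companion frequency:", freqs[power.argmax()], "cycles/day")

The forward-modeling alternative would instead fit a generative model for the whole time series (carrier plus orbit) directly to the data, rather than demodulating first.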

John BaezThis Week’s Finds – Lecture 1

 

Young diagrams are combinatorial structures that show up in a myriad of applications. In this talk I explain how Young diagrams classify conjugacy classes in the symmetric groups, introduce the representation theory of finite groups, and start to explain how Young diagrams classify irreducible representations of the symmetric groups.
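
As a quick illustration of the first claim (my snippet, not part of the talk): Young diagrams with n boxes are exactly the partitions of n, and these count the conjugacy classes of S_n, since a permutation's conjugacy class is determined by its cycle type.

    # Enumerate partitions of n as weakly decreasing tuples; their number
    # equals the number of conjugacy classes of the symmetric group S_n.
    def partitions(n, max_part=None):
        if max_part is None:
            max_part = n
        if n == 0:
            yield ()
            return
        for k in range(min(n, max_part), 0, -1):
            for rest in partitions(n - k, k):
                yield (k,) + rest

    for n in range(1, 7):
        print(n, len(list(partitions(n))))  # 1, 2, 3, 5, 7, 11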

For more details, read my paper “Young diagrams and classical groups” here:

http://math.ucr.edu/home/baez/twf/

To attend the talks on Zoom go here.

John BaezYoung Diagrams and Classical Groups

Young diagrams can be used to classify an enormous number of things.   My first one or two This Week’s Finds seminars will be on Young diagrams and classical groups. Here are some lecture notes:

Young diagrams and classical groups.

I probably won’t cover all this material in the seminar. The most important part is the stuff up to and including the classification of irreducible representations of the “classical monoid” End(C^n). (People don’t talk about classical monoids, but they should.)

Just as a reminder: my talks will be on Thursdays at 3:00 pm UK time in Room 6206 of the James Clerk Maxwell Building at the University of Edinburgh. The first will be on September 22nd, and the last on December 1st.

To attend on Zoom, go here:

https://ed-ac-uk.zoom.us/j/82270325098
Meeting ID: 822 7032 5098
Passcode: XXXXXX36

Here the X’s stand for the name of a famous lemma in category theory.

You can see videos of my talks here.

Also, you can discuss them on the Category Theory Community Server if you go here.

Tommaso DorigoK-Capture Muon Tomography: A Proof Of Principle

In a recent post in this blog I discussed the idea of exploiting the properties of negative muons for a new kind of imaging technique of unknown volumes of material. The idea is based on the fact that negative muons stopped inside matter have a lifetime that is modified by nuclear interactions, so that a precise detection of their lifetime and point of decay becomes a means of inferring the composition of unknown volumes. Here, I want to offer the results of a quick simulation of the processes, to show that the idea is not so far-fetched.
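
To convey the gist in code first, here is a back-of-the-envelope Monte Carlo sketch (mine, not the simulation from the post). The effective lifetimes are approximate literature values; a real analysis would fit the full decay-time spectrum rather than a sample mean.

    import random

    # Negative muons stopped in matter decay with an effective lifetime
    # shortened by nuclear capture, so the decay-time distribution tags
    # the material. Lifetimes in microseconds (approximate values).
    TAU_CARBON = 2.03   # mu- effective lifetime in carbon
    TAU_IRON = 0.21     # mu- effective lifetime in iron

    rng = random.Random(42)
    for name, tau in [("carbon", TAU_CARBON), ("iron", TAU_IRON)]:
        times = [rng.expovariate(1.0 / tau) for _ in range(10_000)]
        est = sum(times) / len(times)   # mean of an exponential = tau
        print(f"{name}: estimated lifetime ~ {est:.3f} us (true {tau} us)")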

Different techniques for muon tomography


September 26, 2022

Mark GoodsellPointless particle theorists?

The latest (twitter) storm in a teacup on this subject nearly prompted me to return to my soapbox, but now my favoured news source (not the Onion but the Grauniad) has an article attacking particle theorists. Yet again, the usual suspect with books to sell and a career to save is dunking on particle physics: supposedly a finished branch of science that should give up and go home now that we've found the Higgs boson, because we can't prove there's a new particle just round the corner, and isn't the Standard Model great anyway? The problem is, of course, that the narrative of maverick (Top Gun Maverick!) outsiders revealing the "truth" that elites in an ivory tower are spending oodles of public money on failed ideas, and that we need a whistleblower to expose the fraud, gains such traction with the taxpaying public.

I even see other physicists defending this public anti-vax-style conspiracy-theory propagation as good for the field, on the grounds that it rattles people's cages and gets people to question their assumptions. The sociology of this is quite interesting, because there are people within the field who either work on niche theories or just want to take down adjacent fields, and would like to see the popular paradigms brought down a peg or two, presumably naively believing that this will lead to more resources (human, cash or just attention) being sent in their direction. But of course public disparaging of scientists can only ever lead to a general reduction of public trust and a shrinking of the pie for everyone. There exist many internal mechanisms for theorists to (re)consider what they are working on, e.g.:

  • Grant proposals. The good thing about writing them is that they give you an opportunity to think deeply about what you really want to work on. Boring or uninnovative proposals just don't get funded. Of course, the evaluation systems may be terrible and biased, and may reward work that is not too far from the reviewer's/panel's interests ... there is much room for improvement; but at least the writing part can be useful.
  • Researcher evaluations. At least here in France we must make a declaration of our activities and plans every year, and write a longer report every few years. This serves a similar purpose to the above.
  • New hires/promotions. Groups want to hire people who are working on interesting stuff. Hiring someone permanently is an endorsement of a field.
  • Citations, talk invitations etc. While people may cite your work just because they are working on something similar, and people like to invite their friends for talks, sufficiently interesting or new work will persuade people to follow it up and will garner attention.
These are all group mechanisms whereby scientists evaluate each other and what they are doing themselves. I am sure someone has studied the game theory of it; indeed, as individual researchers trying to succeed in our careers, we all have to adopt strategies to "win", and it is a shockingly competitive system at every stage. Of course, promoting a new idea can be difficult -- we are in a constant battle for attention (maybe writing a blog is a good strategy?) -- but if there is something really promising, people will not ignore it. Ambulance chasing (where tens or hundreds of papers follow a new result) is a sign that plenty of people are ready to do exactly that. If a maverick outsider really had a great idea, there would not be a shortage of people willing to follow. To take an example, if "the foundations of physics" really offered opportunities for rapid important progress, people would vote with their feet. I see examples of this all the time with people trying out Quantum Computing, Machine Learning, etc.

    I'll let you in on a secret, therefore: the target of the bile is a straw man. I don't know anyone hired as a BSM model builder in recent years. People became famous for it in the 90s/early 00s because there was no big experiment running and the field was dreaming big. Now we have the LHC, and that has focussed imaginations much more. People hired now as phenomenologists may also do some ambulance chasing on the side, but it is not their bread and butter. Inventing models is usually a difficult and imaginative task, aimed at connecting often disparate ideas, but it's not the only task of a phenomenologist: the much bigger ones are understanding existing models, and trying to connect theory to experiments!


    In defence of ambulance chasing (retch)

    When an experiment announces something unexpected (as happens quite frequently!) what is the correct response? According to our outsider, presumably we should just wait for it to go away and for the Standard Model to be reaffirmed. People in the field instead take the view that we should be curious and try to explain it; the best ideas come with new features or explain more than one anomaly. What should we do with wrong explanations? Should we be punished for not coming up with proven theories? Do we need external policing of our curiosity? What does ambulance chasing really cost? The attraction for many departments of forming a theory group is that theorists are cheap -- they don't need expensive experiments or engineers/technicians/people to wash the test tubes. The reward for coming up with a failed theory is usually nothing; but it costs almost nothing too. So why the bitterness? Of course, we can begrudge people becoming famous for coming up with fanciful science fictions -- the mechanisms for identifying promising ideas are far from perfect -- but usually they have come up with something with at least some degree of novelty.

    When looking at CVs, it's very easy to spot and discount 'ambulance citations.' By the way, another phenomenon is to sign 'community papers', where tens or hundreds of authors group-source a white paper on a popular topic; and a third is to write a review of a hot subject. Both of these work very well to generate citations. Should we stop doing them too? In the end, the papers that count are the ones with an interesting result or idea, and there is no sure mechanism for writing them. In the aftermath of every ambulance-chasing cycle there are almost always papers that have some interesting nugget of an idea in them -- something that would not have been suggested otherwise, or computations that would otherwise not have been thought of -- and that hopefully brings us closer to discoveries.

    Recent progress

    We have an amazing collider experiment -- the LHC -- which will run for another ten years or so at high luminosities. We can either take the view in advance that it will tell us nothing about the energy frontier, or we can try to make the most of it. The fundamental problems with our understanding of physics have not been solved; I wrote a response to a similar article in 2020 and I stand by my opinion of the state of the field, and you can look there for my laundry list of problems that we are trying to make sense of. What has changed since then? Here are just a few things, biased by my own interests:

    Muon g-2

    The measurement of the muon g-2 by Fermilab confirmed the earlier anomalous measurement. However, we now have the problem that a series of lattice QCD groups have a calculation implying that the Standard Model prediction is closer to the measurement, in contradiction with the R-ratio method. Someone has underestimated their uncertainties, but we don't know who! This is a problem for theorists working with the experiments; perhaps the new MUonE experiment will help resolve it?

    CDF measurement of the W mass

    As reported everywhere, the CDF experiment at the Tevatron (the previous energy-frontier collider, which shut down ten years ago) analysed its data and reported a measurement of the mass of the W boson in enormous disagreement with the Standard Model, at the level of 7 standard deviations. If confirmed, it would signal new physics around the TeV scale. Since the W boson mass is just about the most generic thing that can be modified by new particles near the electroweak scale, there are any number of new theories that can explain it (as the arXiv this year will attest). There is also a 4 standard deviation tension with a measurement at the LHC, which has a much larger uncertainty. Another LHC measurement is now needed to settle the issue, but this may take a long time, as it is a difficult measurement to make at the LHC. (Maybe we should just not bother?) Other than lots of fanciful (and dull) model building, this has recentred theory efforts on how to extract information from the W boson mass in new theories, which is a problem of precision calculations and hugely interesting ...

    The XENON1T anomaly disappeared

    The XENON collaboration released new results this summer showing that the anomaly at low recoils they had found disappeared with more data. While this immediately killed many theories proposed to explain it, the lasting effect is that people have given serious thought to low-mass dark matter models that could have explained it, and have come up with new ways to search for them. Without looking, we don't know if they are there!

    An anomaly in extragalactic background light was found

    A four standard deviation anomaly was reported in the extra-galactic background light (EBL), i.e. there is too much light coming from outside galaxies in every direction! This would naturally be explained by an axion-like particle decaying -- indeed, measurements of the EBL have long been used as constraints. (Maybe we should never have come up with axions?)

    The LHC reported three other anomalies

    In analysing the data of run 2, three different searches reported anomalies of three standard deviations. Explanations for them have been suggested; perhaps we should see whether they are correlated with other searches, or find new ways of corroborating the possible signals? Or just not bother?

    Run 3 has started

    Run 3 of the LHC has started with a slightly higher energy, and then stopped due to technical problems. It will be some time before significant luminosity is collected, and our experimentalists are looking at new types of searches that might lead to discoveries. Their main motivation is that new signatures equal new reach. Our colleagues certainly need justification or interpretations for their results, but whether the models really offer explanations of other types of new physics (e.g. dark matter) is of course a concern, not the main one. The reason to do an experiment at the LHC is curiosity-based -- experimentalists are not children looking for theorists' latest whims. The point is that we should test the theories every which way we can, because we don't know what we will find. A good analogy might be zoologists looking for new species: they might want to go to a previously unexplored region of the Earth, or they might find a new way of looking at regions they've been to before, e.g. by turning over rocks that they would otherwise have stepped over.

    Long Lived Particles

    One of these classes is long-lived particles (LLPs) -- which I have written about before on here -- and they have also caught the imagination of theorists. In fact, I'm working with experimentalists with the aim of making their searches more widely applicable.

    SMEFT

    Two years ago I wrote that I thought the field was less inclined to follow hot topics, and that this is healthy. This is still the case. However, some hot topics do exist, and one of them is the Standard Model Effective Field Theory (SMEFT). There is now rapid development of all manner of aspects, from phenomenology to higher-order extensions to matching, etc.

    Machine Learning

    Another example is machine learning, which is becoming more prevalent and interesting, especially its interface between theory and experiments.

    Of course, there are many more developments and I'm sure many I'm not aware of. Obviously this is a sign of a field in big trouble!

    September 23, 2022

    Matt von HippelCabinet of Curiosities: The Nested Toy

    I had a paper two weeks ago with a Master’s student, Alex Chaparro Pozo. I haven’t gotten a chance to talk about it yet, so I thought I should say a few words this week. It’s another entry in what I’ve been calling my cabinet of curiosities, interesting mathematical “objects” I’m sharing with the world.

    I calculate scattering amplitudes, formulas that give the probability that particles scatter off each other in particular ways. While in principle I could do this with any particle physics theory, I have a favorite: a “toy model” called N=4 super Yang-Mills. N=4 super Yang-Mills doesn’t describe reality, but it lets us figure out cool new calculation tricks, and these often end up useful in reality as well.

    Many scattering amplitudes in N=4 super Yang-Mills involve a class of mathematical functions called polylogarithms. These functions are especially easy to work with, but they aren't the whole story. Once we start considering more complicated situations (what if two particles collide, and eight particles come out?) we need more complicated functions, called elliptic polylogarithms.
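
    For the curious: inside its region of convergence, a polylogarithm is just the series Li_s(z) = sum over k >= 1 of z^k / k^s. Here is a purely illustrative numerical check (my snippet, not from the paper) against the classical closed form Li_2(1/2) = pi^2/12 - ln(2)^2/2.

        import math

        # Truncated polylogarithm series; converges quickly for |z| < 1.
        def polylog(s, z, terms=200):
            return sum(z**k / k**s for k in range(1, terms + 1))

        series = polylog(2, 0.5)
        closed = math.pi**2 / 12 - math.log(2)**2 / 2
        print(series, closed)  # both ~0.5822405...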

    A few years ago, some collaborators and I figured out how to calculate one of these elliptic scattering amplitudes. We didn’t do it as well as we’d like, though: the calculation was “half-done” in a sense. To do the other half, we needed new mathematical tools, tools that came out soon after. Once those tools were out, we started learning how to apply them, trying to “finish” the calculation we started.

    The original calculation was pretty complicated. Two particles colliding, eight particles coming out, meant that in total we had to keep track of ten different particles. That gets messy fast. I’m pretty good at dealing with six particles, not ten. Luckily, it turned out there was a way to pretend there were six particles only: by “twisting” up the calculation, we found a toy model within the toy model: a six-particle version of the calculation. Much like the original was in a theory that doesn’t describe the real world, these six particles don’t describe six particles in that theory: they’re a kind of toy calculation within the toy model, doubly un-real.

    Not quintuply-unreal though

    With this nested toy model, I was confident we could do the calculation. I wasn’t confident I’d have time for it, though. This ended up making it perfect for a Master’s thesis, which is how Alex got into the game.

    Alex worked his way through the calculation, programming and transforming, going from one type of mathematical function to another (at least once because I'd forgotten to tell him the right functions to use, oops!) There were more details and subtleties than expected, but in the end everything worked out.

    Then, we were scooped.

    Another group figured out how to do the full, ten-particle problem, not just the toy model. That group was just “down the hall”…or would have been “down the hall” if we had been going to the office (this was 2021, after all). I didn’t hear about what they were working on until it was too late to change plans.

    Alex left the field (not, as far as I know, because of this). And for a while, because of that especially thorough scooping, I didn’t publish.

    What changed my mind, in part, was seeing the field develop in the meantime. It turns out toy models, and even nested toy models, are quite useful. We still have a lot of uncertainty about what to do, how to use the new calculation methods and what they imply. And usually, the best way to get through that kind of uncertainty is with simple, well-behaved toy models.

    So I thought, in the end, that this might be useful. Even if it’s a toy version of something that already exists, I expect it to be an educational toy, one we can learn a lot from. So I’ve put it out into the world, as part of this year’s cabinet of curiosities.

    Terence TaoA counterexample to the periodic tiling conjecture

    Rachel Greenfeld and I have just uploaded to the arXiv our announcement “A counterexample to the periodic tiling conjecture“. This is an announcement of a longer paper that we are currently in the process of writing up (and hope to release in a few weeks), in which we disprove the periodic tiling conjecture of Grünbaum-Shephard and Lagarias-Wang. This conjecture can be formulated in both discrete and continuous settings:

    Conjecture 1 (Discrete periodic tiling conjecture) Suppose that {F \subset {\bf Z}^d} is a finite set that tiles {{\bf Z}^d} by translations (i.e., {{\bf Z}^d} can be partitioned into translates of {F}). Then {F} also tiles {{\bf Z}^d} by translations periodically (i.e., the set of translations can be taken to be a periodic subset of {{\bf Z}^d}).

    Conjecture 2 (Continuous periodic tiling conjecture) Suppose that {\Omega \subset {\bf R}^d} is a bounded measurable set of positive measure that tiles {{\bf R}^d} by translations up to null sets. Then {\Omega} also tiles {{\bf R}^d} by translations periodically up to null sets.

    The discrete periodic tiling conjecture can be easily established for {d=1} by the pigeonhole principle (as first observed by Newman), and was proven for {d=2} by Bhattacharya (with a new proof given by Greenfeld and myself). The continuous periodic tiling conjecture was established for {d=1} by Lagarias and Wang. By an old observation of Hao Wang, one of the consequences of the (discrete) periodic tiling conjecture is that the problem of determining whether a given finite set {F \subset {\bf Z}^d} tiles by translations is (algorithmically and logically) decidable.

    On the other hand, once one allows tilings by more than one tile, it is well known that aperiodic tile sets exist, even in dimension two – finite collections of discrete or continuous tiles that can tile the given domain by translations, but not periodically. Perhaps the most famous examples of such aperiodic tilings are the Penrose tilings, but there are many other constructions; for instance, there is a construction of Ammann, Grünbaum, and Shephard of eight tiles in {{\bf Z}^2} which tile aperiodically. Recently, Rachel and I constructed a pair of tiles in {{\bf Z}^d} that tiled a periodic subset of {{\bf Z}^d} aperiodically (in fact we could even make the tiling question logically undecidable in ZFC).

    Our main result is then

    Theorem 3 Both the discrete and continuous periodic tiling conjectures fail for sufficiently large {d}. Also, there is a finite abelian group {G_0} such that the analogue of the discrete periodic tiling conjecture for {{\bf Z}^2 \times G_0} is false.

    This suggests that the techniques used to prove the discrete periodic conjecture in {{\bf Z}^2} are already close to the limit of their applicability, as they cannot handle even virtually two-dimensional discrete abelian groups such as {{\bf Z}^2 \times G_0}. The main difficulty is in constructing the counterexample in the {{\bf Z}^2 \times G_0} setting.

    The approach starts by adapting some of the methods of a previous paper of Rachel and myself. The first step is to make the problem easier to solve by disproving a “multiple periodic tiling conjecture” instead of the traditional periodic tiling conjecture. As stated, Theorem 3 asserts the existence of a “tiling equation” {A \oplus F = {\bf Z}^2 \times G_0} (where one should think of {F} and {G_0} as given, and the tiling set {A} as unknown), which admits solutions, all of which are non-periodic. It turns out that it is enough to instead assert the existence of a system

    \displaystyle  A \oplus F^{(m)} = {\bf Z}^2 \times G_0, m=1,\dots,M

    of tiling equations, which admits solutions, all of which are non-periodic. This is basically because one can “stack” together a system of tiling equations into an essentially equivalent single tiling equation in a slightly larger group. The advantage of this reformulation is that it creates a “tiling language”, in which each sentence {A \oplus F^{(m)} = {\bf Z}^2 \times G_0} in the language expresses a different type of constraint on the unknown set {A}. The strategy then is to locate a non-periodic set {A} which one can try to “describe” by sentences in the tiling language that are obeyed by this non-periodic set, and which are “structured” enough that one can capture their non-periodic nature through enough of these sentences.

    It is convenient to replace sets by functions, so that this tiling language can be translated to a more familiar language, namely the language of (certain types of) functional equations. The key point here is that the tiling equation

    \displaystyle  A \oplus (\{0\} \times H) = G \times H

    for some abelian groups {G, H} is precisely asserting that {A} is a graph

    \displaystyle  A = \{ (x, f(x)): x \in G \}

    of some function {f: G \rightarrow H} (this is sometimes referred to as the “vertical line test” in U.S. undergraduate math classes). Using this translation, it is possible to encode a variety of functional equations relating one or more functions {f_i: G \rightarrow H} taking values in some finite group {H} (such as a cyclic group).
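
    Here is that observation in miniature (a toy illustration with small finite groups, not taken from the paper): the translates of A by {0} x H partition G x H exactly when A is the graph of a function.

        # Toy groups G = Z/6, H = Z/4, written additively as ranges.
        G, H = range(6), range(4)

        f = {x: (x * x) % 4 for x in G}      # an arbitrary function f: G -> H
        A = {(x, f[x]) for x in G}           # its graph
        covered = [(x, (y + h) % 4) for (x, y) in A for h in H]
        assert len(set(covered)) == len(G) * len(H)  # exact tiling: no overlaps or gaps

        # A set failing the "vertical line test" fails to tile:
        B = {(0, 0), (0, 1)} | {(x, 0) for x in range(2, 6)}
        covered_B = [(x, (y + h) % 4) for (x, y) in B for h in H]
        assert len(set(covered_B)) < len(G) * len(H)  # column 0 doubly covered, column 1 missed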

    The non-periodic behaviour that we ended up trying to capture was that of a certain “{p}-adically structured function” {f_p: {\bf Z} \rightarrow ({\bf Z}/p{\bf Z})^\times} associated to some fixed and sufficiently large prime {p} (in fact for our arguments any prime larger than {48}, e.g., {p=53}, would suffice), defined by the formula

    \displaystyle  f_p(n) := \frac{n}{p^{\nu_p(n)}} \hbox{ mod } p

    for {n \neq 0} and {f_p(0)=1}, where {\nu_p(n)} is the number of times {p} divides {n}. In other words, {f_p(n)} is the last non-zero digit in the base {p} expansion of {n} (with the convention that the last non-zero digit of {0} is {1}). This function is not periodic, and yet obeys a lot of functional equations; for instance, one has {f_p(pn) = f_p(n)} for all {n}, and also {f_p(pn+j)=j} for {j=1,\dots,p-1} (and in fact these two equations, together with the condition {f_p(0)=1}, completely determine {f_p}). Here is what the function {f_p} looks like (for {p=5}):
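
    As a concrete check of this definition and the functional equations above (an illustrative snippet, not from the announcement):

        # f(p, n): last non-zero digit of n in base p, with f(p, 0) = 1.
        def f(p, n):
            if n == 0:
                return 1
            while n % p == 0:   # strip factors of p
                n //= p
            return n % p        # residue in {1, ..., p-1}

        p = 5
        assert all(f(p, p * n) == f(p, n) for n in range(-200, 200))
        assert all(f(p, p * n + j) == j
                   for n in range(-200, 200) for j in range(1, p))
        print([f(p, n) for n in range(1, 26)])  # values of f_5 at n = 1..25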

    It turns out that we cannot describe this one-dimensional non-periodic function directly via tiling equations. However, we can describe two-dimensional non-periodic functions such as {(n,m) \mapsto f_p(An+Bm+C)} for some coefficients {A,B,C} via a suitable system of tiling equations. A typical such function looks like this:

    A feature of this function is that when one restricts it to a row or diagonal, the resulting one-dimensional function exhibits “{p}-adic structure” in the sense that it behaves like a rescaled version of {f_p}; see the announcement for a precise version of this statement. It turns out that the converse is essentially true: after excluding some degenerate solutions in which the function is constant along one or more of the columns, all two-dimensional functions which exhibit {p}-adic structure along (non-vertical) lines must behave like one of the functions {(n,m) \mapsto f_p(An+Bm+C)} mentioned earlier, and in particular are non-periodic. The proof of this result is strongly reminiscent of the type of reasoning needed to solve a Sudoku puzzle, and so we have adopted some Sudoku-like terminology in our arguments to provide intuition and visuals. One key step is to perform a shear transformation to the puzzle so that many of the rows become constant, as displayed in this example,

    and then perform a “Tetris” move of eliminating the constant rows to arrive at a secondary Sudoku puzzle which one then analyzes in turn:

    It is the iteration of this procedure that ultimately generates the non-periodic {p}-adic structure.

    David HoggDagstuhl, day 2

    Today was day 2 of Machine Learning for Science: Bridging Data-driven and Mechanistic Modeling at Schloss Dagstuhl. Many great things happened. Here are two highlights:

    Bernhard Schölkopf (MPI-IS), in a discussion session, asked what the key questions were for machine learning as a field. I love this question! Astronomy and physics do, I think, have key questions, which guide research and contextualize choices. Machine learning does not really, or if it does, the questions are implicit. I want to work on this.

    Philipp Hennig (Tübingen) gave an energizing talk about the relationship between simulations of the world and observations of (or data about) the world. He argued (convincingly!) that we should not think of these as totally different things, and that learning from data and simulating a process could or even should always be integrated and done together. He demonstrated this with a simple model of infectious disease, but the point is extremely general.

    David Hoggthe chevron

    A research highlight today was the Flatiron Galactic dynamics internal group meeting. We discussed kinematic features in the Milky Way halo that have appeared in ESA Gaia DR3 in maybe this paper. We looked at data and (toy) simulations. I'm interested in whether the features appear in metallicity or abundances. The arguments that Neige Frankel (CITA) and I worked out this summer for The Snail look like they maybe work for all phase-space overdensities caused by perturbations?

    September 22, 2022

    John PreskillWe’re founding a quantum-thermodynamics hub!

    We’re building a factory in Maryland. 

    It’ll tower over the University of Maryland campus, a behemoth of 19th-century brick and 21st-century glass across from the football field. Turbines will turn, and gears will grind, where students now sip lattes near the Stadium Drive parking lot. The factory’s fuel: steam, quantum physics, and ambition. Its goal: to create an epicenter for North American quantum thermodynamics.

    The factory is metaphorical, of course. Collaborators and I are establishing a quantum-thermodynamics hub centered at the University of Maryland. The hub is an abstraction—a community that’ll collaborate on research, coordinate gatherings, host visitors, and raise the public’s awareness of quantum thermodynamics. But I’d rather envision the hub as a steampunk factory that pumps out discoveries and early-career scientists.

    Quantum thermodynamics has burgeoned over the past decade, especially in Europe. At the beginning of my PhD, I read paper after paper that acknowledged COST, a funding agency established by the European Union. COST dedicated a grant to thermodynamics guided by the mathematics and concepts of quantum information theory. The grant funded students, travel, and the first iterations of an annual conference that continues today. Visit Germany, Finland, France, Britain (which belonged to the European Union when I began my PhD), or elsewhere across the pond, and you’ll stumble across quantum-thermodynamics strongholds. Hotspots burn also in Brazil, Israel, Singapore, and elsewhere.

    Inspired by our international colleagues, collaborators and I are banding together. Since I founded a research group last year, Maryland has achieved a critical mass of quantum thermodynamicists: Chris Jarzynski reigns as a king of the field of fluctuation relations, equalities that help us understand why time flows in only one direction. Sebastian Deffner, I regard as an academic older brother to look up to. And I run the Quantum-Steampunk Laboratory.

    We’ve built railroads to research groups across the continent and steamers to cross the ocean. Other members of the hub include Kanu Sinha, a former Marylander who studies open systems in Arizona; Steve Campbell, a Dublin-based prover of fundamental bounds; and two experts on quantum many-body systems: former Marylander Amir Kalev and current Marylander Luis Pedro García-Pintos. We’re also planning collaborations with institutions from Canada to Vienna.

    The hub will pursue a threefold mission of research, community building, and outreach. As detailed on our research webpage, “We aim to quantify how, thermodynamically, decoherence and the spread of information lead to emergent phenomena: classical objectivity and the flow of time.” To grow North America’s quantum-thermodynamics community, we’ll run annual symposia and an international conference. Our visitors’ program will create the atmosphere of a local watering hole. Outreach will include more posts on this blog—including by guest authors—a quantum-steampunk short-story contest (expect details this fall), and more.

    Come visit us by dirigible, train, or gyropter. Air your most thought-provoking quantum-thermodynamics discoveries in a seminar with us, and solicit feedback. Find collaborators, and learn about the latest. The factory wheels are beginning to turn.

    With thanks to the John Templeton Foundation for the grant to establish the hub.

    September 21, 2022

    Sean Carroll The Biggest Ideas in the Universe: Space, Time, and Motion

    Just in case there are any blog readers out there who haven’t heard from other channels: I have a new book out! The Biggest Ideas in the Universe: Space, Time, and Motion is Volume One of a planned three-volume series. It grew out of the videos that I did in 2020, trying to offer short and informal introductions to big ideas in physics. Predictably, they grew into long and detailed videos. But they never lost their informal charm, especially since I didn’t do that much in the way of research or preparation.

    For the book, by contrast, I actually did research and preparation! So the topics are arranged a bit more logically, the presentation is a bit more thorough and coherent, and the narrative is sprinkled with fun anecdotes about the philosophy and history behind the development of these ideas. In this volume, “these ideas” cover classical physics, from Aristotle through Newton up through Einstein.

    The gimmick, of course, is that we don't shy away from using equations. The goal of this book is to fill the gap between what you generally get as a professional physics student, whom the teacher can rely on to spend years of study and hours of doing homework problems, and what you get as an interested amateur, where it is assumed that you are afraid of equations or can't handle them. I think equations are not so scary, and that essentially everyone can handle them, if they are explained fully along the way. So there are no prerequisites, but we will teach you about calculus and vectors and all that stuff along the way. Not enough to actually solve the equations and become a pro, but enough to truly understand what the equations are saying. If it all works, this will open up a new way of looking at the universe for people who have been denied it for a long time.

    The payoff at the end of the book is Einstein's theory of general relativity and its prediction of black holes. You will understand what Einstein's equation really says, and why black holes are an inevitable outcome of that equation. That's something most people who get an undergraduate university degree in physics don't get to see.

    Table of contents:

    • Introduction
    • 1. Conservation
    • 2. Change
    • 3. Dynamics
    • 4. Space
    • 5. Time
    • 6. Spacetime
    • 7. Geometry
    • 8. Gravity
    • 9. Black Holes
    • Appendices

    Available wherever books are available: Amazon * Barnes and Noble * BAM * IndieBound * Bookshop.org * Apple Books.

    September 20, 2022

    David HoggDagstuhl, day 1

    Today was day 1 of Machine Learning for Science: Bridging Data-driven and Mechanistic Modeling at Schloss Dagstuhl. The first day was mainly about applications of machine learning, in Earth science, livestock management, astrophysics (dark matter), cells, and mechanical engineering. I had many thoughts and realizations. Here are a few random ones:

    The problems that appear in Earth science, and the data types, are very similar to those that appear in astrophysics! But in Earth science, biology is a big driver of global processes, and there is no good mechanistic model for (say) how plants grow and take up carbon.

    The world is filled with mobile phones with good cameras, and the methods we could be employing to do science in a distributed way are way, way under-used.

    Cells are incredibly complicated. The mechanistic model involves literally thousands of individual processes. Our model for the cell is as complicated as our model for the entire Earth system (which, by the way, depends on cells!), or even more complicated.

    In the areas of the cell and the Earth, a theme was that the investigators want to preserve the causal structure we believe, and just use the machine learning to replace one tiny piece, with a data-driven model. Related: You can think of the machine learning as an effective theory for something (a sub-part of the problem) that doesn't work well from first principles. That's a good idea!

    David Hoggstellar noise as a physical process

    Today I was privileged to be part of a great and productive meeting between Jesse Cisewski (Wisconsin), Megan Bedell (Flatiron), and Lily Zhao (Flatiron) about noise sources in extreme-precision radial-velocity measurements. The conversation was inspired by the realization (obvious, really) that any physical effect on the surface of stars (spots, plages, convection patterns, p-modes, flares) that affects the radial-velocity measurement must (unless the Universe is truly adversarial) leave other imprints on the spectrum at the same signal-to-noise or even higher. This means that any claim that RV measurements are affected by spots (say) should be backed up by an observation in the spectrum, orthogonal to the RV signal, that supports the claim. We discussed relevant research and decided to jointly read this paper before our next meeting!

    September 19, 2022

    John BaezSeminar on This Week’s Finds

    Here’s something new: I’ll be living in Edinburgh until January! I’m working with Tom Leinster at the University of Edinburgh, supported by a Leverhulme Fellowship.

    One fun thing I’ll be doing is running seminars on some topics from my column This Week’s Finds. They’ll take place on Thursdays at 3:00 pm UK time in Room 6206 of the James Clerk Maxwell Building, home of the Department of Mathematics. The first will be on September 22nd, and the last on December 1st.

    We’re planning to

    1) make the talks hybrid on Zoom so that people can participate online:

    https://ed-ac-uk.zoom.us/j/82270325098
    Meeting ID: 822 7032 5098
    Passcode: XXXXXX36

    Here the X’s stand for the name of a famous lemma in category theory.

    2) make lecture notes available on my website.

    3) record them and eventually make them publicly available on my YouTube channel.

    4) have a Zulip channel on the Category Theory Community Server dedicated to discussion of the seminars: it’s here.

    More details soon!

    The theme for these seminars is representation theory, interpreted broadly. The topics are:

    • Young diagrams
    • Dynkin diagrams
    • q-mathematics
    • The three-strand braid group
    • Clifford algebras and Bott periodicity
    • The threefold and tenfold way
    • Exceptional algebras

    Seven topics are listed, but there will be 11 seminars, so it’s not a one-to-one correspondence: each topic is likely to take one or two weeks. Here are more detailed descriptions:

    Young diagrams

    Young diagrams are combinatorial structures that show up in a myriad of applications. Among other things, they classify conjugacy classes in the symmetric groups Sn, irreducible representations of Sn, irreducible representations of the groups SL(n) over any field of characteristic zero, and irreducible unitary representations of the groups SU(n).

    Dynkin diagrams

    Coxeter and Dynkin diagrams classify a wide variety of structures, most notably Coxeter groups, lattices having such groups as symmetries, and simple Lie algebras. The simply laced Dynkin diagrams also classify the Platonic solids and quivers with finitely many indecomposable representations. This tour of Coxeter and Dynkin diagrams will focus on the connections between these structures.

    q-mathematics

    A surprisingly large portion of mathematics generalizes to something called q-mathematics, involving a parameter q. For example, there is a subject called q-calculus that reduces to ordinary calculus at q = 1. There are important applications of q-mathematics to the theory of quantum groups and also to algebraic geometry over Fq, the finite field with q elements. These seminars will give an overview of q-mathematics and its
    applications.
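
    As a tiny illustration of the q → 1 limit (an invented example, not from the seminar notes): the q-derivative (D_q f)(x) = (f(qx) - f(x))/((q - 1)x) sends x^n to [n]_q x^(n-1), where the q-integer [n]_q = (q^n - 1)/(q - 1) tends to n as q tends to 1, recovering the ordinary derivative.

        # q-derivative of f at x; undefined at q = 1, but the limit exists.
        def q_derivative(f, q, x):
            return (f(q * x) - f(x)) / ((q - 1) * x)

        cube = lambda x: x**3
        x = 2.0
        for q in (2.0, 1.1, 1.001):
            n_q = (q**3 - 1) / (q - 1)   # the q-integer [3]_q
            print(q, q_derivative(cube, q, x), n_q * x**2)
        # As q -> 1 both columns approach 3 * x**2 = 12, the ordinary derivative.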

    The three-strand braid group

    The three-strand braid group has striking connections to the trefoil knot, rational tangles, the modular group PSL(2, Z), and modular forms. This group is also the simplest of the Artin–Brieskorn groups, a class of groups which map surjectively to the Coxeter groups. The three-strand braid group will be used as the starting point for a tour of these topics.

    Clifford algebras and Bott periodicity

    The Clifford algebra Cln is the associative real algebra freely generated by n anticommuting elements that square to -1. I will explain their role in geometry and superstring theory, and the origin of Bott periodicity in topology in facts about Clifford algebras.

    The threefold and tenfold way

    Irreducible real group representations come in three kinds, a fact arising from the three associative normed real division algebras: the real numbers, complex numbers and quaternions. Dyson called this the threefold way. When we generalize to superalgebras this becomes part of a larger classification, the tenfold way. We will examine these topics and their applications to representation theory, geometry and physics.

    Exceptional algebras

    Besides the three associative normed division algebras over the real numbers, there is a fourth one that is nonassociative: the octonions. They arise naturally from the fact that Spin(8) has three irreducible 8-dimensional representations. We will explain the octonions and sketch how the exceptional Lie algebras and the exceptional Jordan algebra can be constructed using octonions.

    September 17, 2022

    n-Category Café Seminar on This Week's Finds

    Here’s something new: I’m living in Edinburgh until January! I’ll be working with Tom Leinster at the University of Edinburgh, supported by a Leverhulme Fellowship.

    One fun thing I’ll be doing is running seminars on some topics from my column This Week’s Finds. They’ll take place on Thursdays at 3:00 pm UK time in Room 6206 of James Clerk Maxwell Building, home of the Department of Mathematics. The first will be on September 22nd, and the last on December 1st.

    We’re planning to

    1) make the talks hybrid on Zoom so that people can participate online:

    https://ed-ac-uk.zoom.us/j/82270325098
    Meeting ID: 822 7032 5098
    Passcode: XXXXXX36

    Here the X’s stand for the name of a famous lemma in category theory.

    2) record them and eventually make them publicly available on my YouTube channel.

    3) have a Zulip channel on the Category Theory Community Server dedicated to discussion of the seminars: it’s here.

    More details soon!

    I have the topics planned out….

    The theme for these seminars is representation theory, interpreted broadly. The topics are:

    • Young diagrams
    • Dynkin diagrams
    • q-mathematics
    • The three-strand braid group
    • Clifford algebras and Bott periodicity
    • The threefold and tenfold way
    • Exceptional algebras

    Seven topics are listed, but there will be 11 seminars, so it’s not a one-to-one correspondence: each topic is likely to take one or two weeks. Here are more detailed descriptions:

    Young diagrams

    Young diagrams are combinatorial structures that show up in a myriad of applications. Among other things, they classify conjugacy classes in the symmetric groups Sn, irreducible representations of Sn, irreducible representations of the groups SL(n) over any field of characteristic zero, and irreducible unitary representations of the groups SU(n).

    Dynkin diagrams

    Coxeter and Dynkin diagrams classify a wide variety of structures, most notably Coxeter groups, lattices having such groups as symmetries, and simple Lie algebras. The simply laced Dynkin diagrams also classify the Platonic solids and quivers with finitely many indecomposable representations. This tour of Coxeter and Dynkin diagrams will focus on the connections between these structures.

    q-mathematics

    A surprisingly large portion of mathematics generalizes to something called q-mathematics, involving a parameter q. For example, there is a subject called q-calculus that reduces to ordinary calculus at q = 1. There are important applications of q-mathematics to the theory of quantum groups and also to algebraic geometry over F_q, the finite field with q elements. These seminars will give an overview of q-mathematics and its applications.

    The three-strand braid group

    The three-strand braid group has striking connections to the trefoil knot, rational tangles, the modular group PSL(2, Z), and modular forms. This group is also the simplest of the Artin–Brieskorn groups, a class of groups which map surjectively to the Coxeter groups. The three-strand braid group will be used as the starting point for a tour of these topics.

    Clifford algebras and Bott periodicity

    The Clifford algebra Cln is the associative real algebra freely generated by n anticommuting elements that square to -1. Baez will explain their role in geometry and superstring theory, and the origin of Bott periodicity in topology in facts about Clifford algebras.

    The threefold and tenfold way

    Irreducible real group representations come in three kinds, a fact arising from the three associative normed real division algebras: the real numbers, complex numbers and quaternions. Dyson called this the threefold way. When we generalize to superalgebras this becomes part of a larger classification, the tenfold way. We will examine these topics and their applications to representation theory, geometry and physics.

    Exceptional algebras

    Besides the three associative normed division algebras over the real numbers, there is a fourth one that is nonassociative: the octonions. They arise naturally from the fact that Spin(8) has three irreducible 8-dimensional representations. We will explain the octonions and sketch how the exceptional Lie algebras and the exceptional Jordan algebra can be constructed using octonions.

    n-Category Café Young Diagrams and Classical Groups

    Young diagrams can be used to classify an enormous number of things. My first one or two This Week’s Finds seminars will be on Young diagrams and classical groups. Here are some lecture notes:

    Young diagrams and classical groups.

    I probably won’t cover all this material in the seminar. The most important part is the stuff up to and including the classification of irreducible representations of the “classical monoid” End(C^n). (People don’t talk about classical monoids, but they should.)

    Just as a reminder: my talks will be on Thursdays at 3:00 pm UK time in Room 6206 of the James Clerk Maxwell Building at the University of Edinburgh. The first will be on September 22nd, and the last on December 1st.

    If you’re actually in town, there’s a tea on the fifth floor that starts 15 minutes before my talk. If you’re not, you can attend on Zoom:

    https://ed-ac-uk.zoom.us/j/82270325098
    Meeting ID: 822 7032 5098
    Passcode: XXXXXX36

    Here the X’s stand for the name of a famous lemma in category theory.

    My talks should eventually show up on my YouTube channel.

    Also, you can discuss them on the Category Theory Community Server if you go here.

    September 16, 2022

    Matt von HippelAt Elliptic Integrals in Fundamental Physics in Mainz

    I’m at a conference this week. It’s named Elliptic Integrals in Fundamental Physics, but I think of it as “Elliptics 2022”, the latest in a series of conferences on elliptic integrals in particle physics.

    It’s in Mainz, which you can tell from the Gutenberg street art

    Elliptics has been growing in recent years, hurtling into prominence as a subfield of amplitudes (which is already a subfield of theoretical physics). This has led to growing lists of participants and a more and more packed schedule.

    This year walked all of that back a bit. There were three talks a day: two one-hour talks by senior researchers and one half-hour talk by a junior researcher. The rest, as well as the whole last day, are geared to discussion. It’s an attempt to go back to the subfield’s roots. In the beginning, the Elliptics conferences drew together a small group to sort out a plan for the future, digging through the often-confusing mathematics to try to find a baseline for future progress. The field has advanced since then, but some of our questions are still almost as basic. What relations exist between different calculations? How much do we value fast numerics, versus analytical understanding? What methods do we want to preserve, and which aren’t serving us well? To answer these questions, it helps to get a few people together in one place, not to silently listen to lectures, but to question and discuss and hash things out. I may have heard a smaller range of topics at this year’s Elliptics, but due to the sheer depth we managed to probe on those fewer topics I feel like I’ve learned much more.

    Since someone always asks, I should say that the talks were not recorded, but slides are being posted online, so if you're interested in the topic you can look there. A few people discussed new developments, some just published and some yet to be published. I discussed the work I talked about last week, and got a lot of good feedback and ideas about how to move forward.

    Doug NatelsonSurprising spin transport in insulating VO2

    Monoclinic VO2,
    adapted from here

    As I wrote last year, techniques have been developed in the last decade or two that use the inverse spin Hall effect as a tool for measuring the transport of angular momentum in insulators.  We just applied this approach to look at the thermally driven transport of spin-carrying excitations (the spin Seebeck effect) in thin films of vanadium dioxide at low temperatures.  VO2 is a strongly correlated transition metal oxide that has a transition at 65 °C between a high-temperature (rutile structure) metallic state with 1D vanadium chains, and a low-temperature (monoclinic structure) insulating state in which the vanadium atoms have formed dimers, as shown at right.  I circled one V-V dimer in purple.  

    The expectation, going back almost 50 years, is that in each dimer the unpaired d electrons on the vanadium atoms form a singlet, and thus the insulating state should be magnetically very boring.  That's why the results of our recently published paper are surprising.  In the "nonlocal" geometry (with a heater wire and a detector wire separated laterally on the surface of a VO2 film), we see a clear spin Seebeck signal that increases at temperatures below 30-40 K, indicating that some spin-carrying excitations are being thermally excited and diffusing from the heater to the detector.   One natural suspect would be thermally activated triplet excitations called triplons, and we are continuing to take data in other geometries to try to nail down whether that is what is happening here.  

    This has been a fun project, in part because I get a real kick out of the fact that this measurement technique is so simple and first-year undergrad physics says that you should see nothing.  We are running ac current back and forth in one wire at a few Hz, and measuring the voltage across a neighboring wire at twice that frequency, on an insulating substrate.  Instead of seeing nothing, because of the hidden action of spin in the insulator and spin-orbit physics in the wires, we see a clear signal that depends nontrivially on magnetic field magnitude and direction as well as temperature.  Gets me every time.  
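
    For concreteness, here is a cartoon of that measurement in Python (all numbers invented; a real lock-in amplifier replaces the crude multiply-and-average below). The heating, and hence the thermally driven voltage, follows the square of the drive current, which is what puts the signal at twice the drive frequency.

        import numpy as np

        fs = 10_000.0                  # sample rate (Hz)
        t = np.arange(0, 10, 1 / fs)   # seconds
        f_drive = 7.0                  # heater drive frequency (Hz)

        current = np.sin(2 * np.pi * f_drive * t)
        heating = current**2           # = 1/2 - cos(2*pi*(2*f_drive)*t)/2
        noise = 1e-6 * np.random.default_rng(0).normal(size=t.size)
        v_detect = 1e-6 * heating + noise   # detector voltage rides on 2f

        # Lock-in detection at 2f: multiply by a reference and average.
        ref = np.cos(2 * np.pi * 2 * f_drive * t)
        amplitude_2f = 2 * np.mean(v_detect * ref)
        print(f"2f component ~ {amplitude_2f:.2e} V (expect about -5e-07 V)")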

    September 14, 2022

    Scott Aaronson I had a dream

    As I slept fitfully, still recovering from COVID, I had one of the more interesting dreams of my life:

    I was desperately trying to finish some PowerPoint slides in time to give a talk. Uncharacteristically for me, one of the slides displayed actual code. This was a dream, so nothing was as clear as I’d like, but the code did something vaguely reminiscent of Rosser’s Theorem—e.g., enumerating all proofs in ZFC until it finds the lexicographically first proof or disproof of a certain statement, then branching into cases depending on whether it’s a proof or a disproof. In any case, it was simple enough to fit on one slide.

    Suddenly, though, my whole presentation was deleted. Everything was ruined!

    One of the developers of PowerPoint happened to be right there in the lecture hall (of course!), so I confronted him with my laptop and angrily demanded an explanation. He said that I must have triggered the section of Microsoft Office that tries to detect and prevent any discussion of logical paradoxes that are too dangerous for humankind—the ones that would cause people to realize that our entire universe is just an illusion, a sandbox being run inside an AI, a glitch-prone Matrix. He said it patronizingly, as if it should’ve been obvious: “you and I both know that the Paradoxes are not to be talked about, so why would you be so stupid as to put one in your presentation?”

    My reaction was to jab my finger in the guy’s face, shove him, scream, and curse him out. At that moment, I wasn’t concerned in the slightest about the universe being an illusion, or about glitches in the Matrix. I was concerned about my embarrassment when I’d be called in 10 minutes to give my talk and would have nothing to show.

    My last thought, before I woke with a start, was to wonder whether Greg Kuperberg was right and I should give my presentations in Beamer, or some other open-source software, and then I wouldn’t have had this problem.

    A coda: I woke a bit after 7AM Central and started to write this down. But then—this is now real life (!)—I saw an email saying that a dozen people were waiting for me in a conference room in Europe for an important Zoom meeting. We’d gotten the time zones wrong; I’d thought that it wasn’t until 8AM my time. If not for this dream causing me to wake up, I would’ve missed the meeting entirely.

    September 13, 2022

    Tommaso Dorigo20 Post-Doctoral Research Positions Open At INFN

    Are you a recent (<8 years) Ph.D. graduate in fundamental physics who wants to work in Italy? This post is for you. The INFN is opening 20 positions for foreigners who would like to join a research group in one of the INFN sections (there are 25 across Italy). The positions are for one year, renewable, and the salary is competitive: it is roughly at the level of a starting associate professor in Italy. Note that Ph.D. students who plan to graduate before November 1st 2023 can also apply!


    Also, the 8-year limit can be waived if you spent time on maternity leave, doing military service, or in illness. The winning candidates are expected to start their contract before November 2023.


    September 12, 2022

    n-Category Café The Algebra of Grand Unified Theories

    Fans of the n-Category Café might like The Cartesian Café, where Timothy Nguyen has long, detailed conversations with mathematicians. We recently talked about the fascinating mathematical patterns in the Standard Model that led people to invent grand unified theories:

    For more details, go here:


    • John Baez and John Huerta, The algebra of grand unified theories, Bulletin of the American Mathematical Society 47 (2010), 483–552.

    September 11, 2022

    n-Category Café Joint Mathematics Meetings 2023

    This is the biggest annual meeting of mathematicians:

    • Joint Mathematics Meetings 2023, Wednesday January 4 - Saturday January 7, 2023, John B. Hynes Veterans Memorial Convention Center, Boston Marriott Hotel, and Boston Sheraton Hotel, Boston, Massachusetts.

    As part of this huge meeting, the American Mathematical Society is having a special session on Applied Category Theory on Thursday January 5th.

    I hear there will be talks by Eugenia Cheng and Olivia Caramello!

    You can submit an abstract to give a talk. The deadline is Tuesday, September 13, 2022.

    It should be lots of fun. There will also be tons of talks on other subjects.

    However, there’s a registration fee which is pretty big unless you’re a student, unemployed, or even better, a ‘non-mathematician guest’. (I assume you’re not allowed to give a talk if you’re a non-mathematician.)

    The special session is called SS 96 and it comes in two parts: one from 8 am to noon, and the other from 1 pm to 5 pm. It’s being run by these participants of this summer’s Mathematical Research Community on applied category theory:

    • Charlotte Aten, University of Denver
    • Pablo S. Ocal, University of California, Los Angeles
    • Layla H. M. Sorkatti, Southern Illinois University
    • Abigail Hickok, University of California, Los Angeles

    This Mathematical Research Community, in turn, was run by Daniel Cicala, Simon Cho, Nina Otter, Valeria de Paiva and me, and I think we’re all coming to the special session. At least I am.

    September 10, 2022

    Peter Rohde MoodSnap now in German

    One of the goals of my free mood diary app MoodSnap is to make it accessible to as many people as possible. Today I’m pleased to announce that MoodSnap has been fully localised for the German language, with more languages in progress.

    You can get MoodSnap on the Apple AppStore and find out more at the MoodSnap homepage.


    Tommaso Dorigo Marek Karliner On Exotic Doubly-Heavy Hadrons

    On the final day of the ICNFP 2022 conference in Kolympari (Greece), we got to listen to an enlightening presentation by Prof. Marek Karliner (Tel Aviv University), an absolute authority on the theory of hadron spectroscopy.


    September 09, 2022

    Matt von Hippel Cabinet of Curiosities: The Coaction

    I had two more papers out this week, continuing my cabinet of curiosities. I’ll talk about one of them today, and the other in (probably) two weeks.

    This week, I’m talking about a paper I wrote with an excellent Master’s student, Andreas Forum. Andreas came to me looking for a project on the mathematical side. I had a rather nice idea for his project at first, to explain a proof in an old math paper so it could be used by physicists.

    Unfortunately, the proof I sent him off to explain didn’t actually exist. Fortunately, by the time we figured this out Andreas had learned quite a bit of math, so he was ready for his next project: a coaction for Calabi-Yau Feynman diagrams.

    We chose to focus on one particular diagram, called a sunrise diagram for its resemblance to a sun rising over the sea.

    Feynman diagrams depict paths traveled by particles. The paths are a metaphor, or organizing tool, for more complicated calculations: computations of the chances that fundamental particles behave in different ways. Each diagram encodes a complicated integral. This one shows one particle splitting into many, then those many particles reuniting into one.

    Do the integrals in Feynman diagrams, and you get a variety of different mathematical functions. Many of them integrate to functions called polylogarithms, and we've gotten really really good at working with them. We can integrate them up, simplify them, and sometimes we can guess them so well we don't have to do the integrals at all! We can do all of that because we know how to break polylogarithm functions apart, with a mathematical operation called a coaction. The coaction chops polylogarithms up into simpler parts, parts that are easier to work with.

    More complicated Feynman diagrams give more complicated functions, though. Some of them give what are called elliptic functions. You can think of these functions as involving a geometrical shape, in this case a torus.

    Other functions involve more complicated geometrical shapes, in some cases very complicated. For example, some involve the Calabi-Yau manifolds studied by string theorists. These sunrise diagrams are some of the simplest to involve such complicated geometry.

    Other researchers had proposed a coaction for elliptic functions back in 2018. When they derived it, though, they left a recipe for something more general. Follow the instructions in the paper, and you could in principle find a coaction for other diagrams, even the Calabi-Yau ones, if you set it up right.

    I had an idea for how to set it up right, and in the grand tradition of supervisors everywhere I got Andreas to do the dirty work of applying it. Despite the delay of our false start and despite the fact that this was probably in retrospect too big a project for a normal Master’s thesis, Andreas made it work!

    Our result, though, is a bit weird. The coaction is a powerful tool for polylogarithms because it chops them up finely: keep chopping, and you get down to very simple functions. Our coaction isn’t quite so fine: we don’t chop our functions into as many parts, and the parts are more mysterious, more difficult to handle.

    We think these are temporary problems, though. The recipe we applied turns out to be a recipe with a lot of choices to make, less like Julia Child and more like one of those books where you mix-and-match recipes. We believe the community can play with the parameters of this recipe, finding new versions of the coaction for new uses.

    This is one of the shiniest of the curiosities in my cabinet this year; I hope it gets put to good use.

    Matt Strassler Protons and Charm Quarks: A Lesson From Virtual Particles

    There’s been a lot of chatter lately about a claim that charm quarks are found in protons. While the evidence for this claim of “intrinsic charm” (a name that goes back decades) is by no means entirely convincing yet, it might in fact be true… sort of. But the whole idea sounds very confusing. A charm quark has a larger mass than a proton: about 1.2 GeV/c² vs. 0.938 GeV/c². On the face of it, suggesting there are charm quarks in protons sounds as crazy as suggesting that a football could have a lead brick inside it without you noticing any difference.

    What’s really going on? It’s a long story, and subtle even for experts, so it’s not surprising that most articles about it for lay readers haven’t been entirely clear. At some point I’ll write a comprehensive explanation, but that will require a longer post (or series of posts), and I don’t want to launch into that until my conceptual understanding of important details is complete.

    Feynman diagram suggesting a photon is sometimes an electron-positron pair.

    But in the meantime, here’s a related question: how can a particle with zero mass (zero rest mass, to be precise) spend part of its time as a combination of objects that have positive mass? For instance, a photon [a particle of light, including both visible and invisible forms of light] has zero rest mass. [Note, however, that it has non-zero gravitational mass]. Meanwhile electrons and positrons [the anti-particles of electrons] both have positive rest mass. So what do people mean when they say “A photon can be an electron-positron pair part of the time”? This statement comes with a fancy “Feynman diagram”, in which the photon is shown as the wavy line, time is running left to right, and the loop represents an electron and a positron created from the photon.

    Photons and Electron-Positron Pairs

    This phrase is often heard in particle physics classes, such as in this one, where you’ll see it explicitly stated that “A photon can be an e+ e− pair part of the time“. It’s also common to find it mentioned in scientific journalism and in most books about particle physics.

    Many non-experts have problems with this mysterious statement, and rightly so.

    • A photon, like any particle with zero rest mass, has to travel at the cosmic speed limit, a.k.a. “c“, a.k.a. 300,000 kilometers/second, a.k.a. the speed of light.
    • But an electron has a positive rest mass, and a positron has the same rest mass as an electron, so they must travel slower than the cosmic speed limit.

    So how can a photon, traveling at exactly the speed of light, spend any time at all as a pair of particles with non-zero rest mass? Wouldn’t this force it to slow down, or something?

    The well-meaning but incorrect answer, found in many books and articles, is that the electron and positron can come into existence as long as they do so only for a very short time, via the uncertainty principle. This implies that a massless particle can turn into particles with mass as long as the time involved is short enough. This would also then seem to imply that a photon, too, can travel below the speed limit, as long as it does so for a very short time. But if that were true, then either its overall speed would be slightly slower than c, or, to make up for lost time, it would occasionally have to travel above the speed limit!

    Huh??!??!??!??!??!?!!

    Fortunately, this distressing line of argument is wrong-headed. In fact, the statement “A photon can be an e+ e− pair part of the time” is shorthand. It’s harmless shorthand in a physics class, where the math quickly makes clear what is and isn’t meant by it, but it’s misleading otherwise. The precise statement is that, via the uncertainty principle, “a photon can be a virtual electron/virtual positron pair part of the time.” This word “virtual” makes a world of difference.

    As I’ve written elsewhere, a “virtual particle” is not a particle. It is a general disturbance in a field, and abides by far fewer rules than a true particle, which is a steady, regular ripple in a field. A true particle has a definite rest mass: every real electron, and every real positron, always has a rest mass of 0.000511 GeV/c². But a virtual electron can have any mass — in fact it can have any mass-squared. That includes negative mass-squared, in which case it would have an imaginary mass! [From pre-college math: the square root of any negative number is, by definition, an “imaginary number.”]

    When the photon turns into a virtual electron and a virtual positron and back again, energy and momentum are conserved (i.e. the total amounts of momentum and energy don’t change during each stage of the process). This fact is hard-coded into Feynman diagrams, which is why professional physicists don’t get confused about this point. The conservation of these quantities would fail completely if a massless particle were to turn into two particles with positive rest mass. (In fact this is why massless particles cannot decay; see here, rule 2.) Instead, if the virtual electron had the electron’s usual mass of 0.000511 GeV/c², the positron would have to have imaginary mass. And vice versa. More generally, the math quickly tells you that either

    • both of these virtual particles have zero mass, or
    • at least one has imaginary mass.
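    To see why, here is the standard kinematic check, in units where c = 1 (my paraphrase of the textbook algebra, not a quote from this post). Conservation says the pair's total energy and momentum equal the photon's, and the photon's invariant mass vanishes, so

    $$0 = (E_1 + E_2)^2 - |\vec{p}_1 + \vec{p}_2|^2 = m_1^2 + m_2^2 + 2\,(E_1 E_2 - \vec{p}_1 \cdot \vec{p}_2).$$

    For real particles with positive energies and $m_i^2 \geq 0$, each $E_i \geq |\vec{p}_i|$, so every term on the right is non-negative and the sum can only vanish if $m_1^2 = m_2^2 = 0$; if instead one mass-squared is positive, the other must be negative, i.e. an imaginary mass.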

    So these are not in any sense normal electrons and positrons!

    Instead, a better way to understand the virtual electron/positron pair is as a general disturbance in the electron field, a disturbance whose total rest mass is zero. [Note: both electrons and positrons are ripples in the electron field; there’s no separate positron field.] The photon, usually a ripple in the electromagnetic field, can occasionally become a disturbance in the electron field, but when it does so it retains its zero rest mass, and always travels at the cosmic speed limit. It’s misleading, confusing and inaccurate to view it as a combination of a real electron and a real positron.

    The uncertainty principle prohibits such non-particle disturbances from existing for very long. Only true particles are able to exist indefinitely. That’s why these virtual disturbances come and go quickly.

    Protons and Charm Quark/Anti-quark Pairs

    Similarly, a proton may briefly contain a virtual charm quark and a virtual charm anti-quark, while always retaining its mass of 0.938 GeV/c². Such a disturbance in the charm quark field need not, in this case, have zero rest mass, but it must have rest mass less than that of a proton. Nature is not somehow sneaking real charm quarks into a proton. That would violate basic rules, such as energy and momentum conservation.

    But is that really a big deal? The proton contains many gluons, which, like photons, have zero rest mass, and which, analogously to photons, spend a little of their time as a combination of a virtual charm quark and a virtual charm anti-quark. And so, simply because protons contain gluons, there are automatically virtual charm quark/anti-quark pairs in protons — i.e., it’s obvious that there are disturbances in the charm quark field inside a proton! So what’s up? Presumably the claim that “there is intrinsic charm in the proton” must mean something more significant than just this!

    Indeed, it does. The question is whether a large fraction of the virtual charm quark/anti-quark pairs originate from something other than individual gluons. In other words, are there other sources of disturbances in the charm quark field that cannot be accounted for using simple Feynman diagrams like the one we started with? Are these disturbances more common and more important within the proton than the mere presence of gluons would have led us to expect? The claim — still to be confirmed with better data and additional analysis — is that they are.

    Hopefully this gives you a better picture of what all the chatter is about. But I imagine that for some of you a question has arisen: which particles in a proton are actually real? That’s a subtle point, which I’ll have to address in a later post.

    September 06, 2022

    Matt Strassler Welcome!

    Hi all, and welcome! On this site, devoted to sharing the excitement and meaning of science, you’ll find a blog (posts begin below) and reference articles (accessible from the menus above). The site is being upgraded, so you’ll see some ongoing changes; if you notice technical problems, please let me know.  [Sep. 6: I’m aware spam filtering has been too aggressive today.] Thanks, and enjoy!

    Don’t forget to leave (polite) comments, and keep an eye out for days when I take direct questions from readers. Oh, and for the moment you can follow me on Twitter and Facebook. 

    Matt Strassler Celebrating the Standard Model: Checking The Electric Charges of Quarks

    A post for general readers who’ve heard of quarks; if you haven’t, you might find this article useful:

    Yesterday I showed you that the usual argument that determines the electric charges of the various types of quarks uses circular reasoning and has a big loophole in it. (The up quark, for example, has charge 2/3, but the usual argument would actually allow it to have any charge!) But today I’m going to show you how this loophole can easily be closed — and we’ll need only addition, subtraction and fractions to close it.

    Throughout this post I’ll shorten “electric charge” to just “charge”.

    A Different Way to Check Quark Charges

    Our approach will be to study the process in which an electron and a positron (the electron’s anti-particle) collide, disappear (“annihilate”), and are converted into one or another type of quark and the corresponding anti-quark; see Figure 1. The rate for this process to occur, and the rate of a similar one in which a muon and anti-muon are produced, are all we will need to know.

    In an electron-positron collision, many things may happen. Among the possibilities, the electron and positron may be converted into two new particles. The new particles may have much more mass (specifically, rest mass) than the electron and positron do, if the collision is energetic enough. This is why physicists can use collisions of particles with small mass to discover unknown particles with large mass.

    Figure 1: (Top) an electron and positron, each carrying energy Ee, collide head-on. (Bottom) from the collision with total energy 2Ee, a quark and anti-quark may emerge, as long as Ee is bigger than the quark’s rest mass M times c².

    In particular, for any quark of mass M, it is possible for an electron-positron collision to produce that quark and a corresponding anti-quark as long as the electron’s energy Ee is greater than the quark’s mass-energy Mc². As Ee is gradually increased from low values, more and more types of quark/anti-quark pairs can be produced.

    This turns out to be a particularly interesting observation in the range where 1 GeV < Ee < 10 GeV, i.e. when the total collision energy (2 Ee) is between 2 and 20 GeV. If Ee is any lower, the effects of the strong nuclear force make the production of quarks extremely complicated (as we’ll see in another post). But when the collision energy is above 2 GeV, things start to settle down, and become both simple and interesting.

    In Figure 2 is the actual data showing the production rate for electron-positron collisions to lead to quark/anti-quark pairs of any type, no matter what their “flavor” or “color” (i.e. type or version), for different collision energies. What’s shown on the horizontal axis is not Ee but the collision energy 2Ee. The up, down and strange quarks have small masses, so they can be produced almost everywhere throughout this Figure. We should expect interesting changes to occur at or around twice each quark’s mass-energy, namely at twice the charm quark’s mass (around 3 GeV) and twice bottom’s mass (around 9 GeV). You can see this expectation is borne out: there are big spikes in the data just above those locations. But then, after a few wiggles, things flatten out and become simple. It’s from these simple regions that we can gain simple insights through simple methods.

    Figure 2: Data showing the production rate for electron-positron collisions to produce particles containing quarks and anti-quarks, as a function of the collision energy (2Ee) in GeV. Ignoring all complex details, we see the regions between 2 and 3 GeV, between 5 and 10 GeV, and 11 to 40 GeV are particularly simple.

    The production rate is simple in these regions because

    • the weak nuclear force plays no important role in this process until 2Ee is about 40 GeV;
    • the strong nuclear force is increasingly unimportant as 2Ee increases above 2 GeV, especially in the regions with simple behavior, except where there are spikes in Figure 2;
    • the gravitational and Higgs forces are too tiny to have any effect;
    • and therefore the process can be understood using only electromagnetism, the very simplest of the elementary forces.

    With simple knowledge about how electromagnetism produces quark/anti-quark pairs from electron-positron annihilation, we can learn crucial information from these simple regions of Figure 2. This turns out to be easy; we don’t have to go into any detail.

    A Simple Fact

    Here’s the observation that makes it possible to measure the quark charges.

    In electron-positron collisions, the rate for producing a new particle/anti-particle pair via the electromagnetic force (as in Figure 1) is simply proportional to the square of the new particle’s electric charge.

    Why the square? Proof comes from quantum physics, but here’s a strong argument. If the rate were proportional to the charge itself, that would be weird. For positive charge, the rate for production would be positive, but a particle with negative charge would be produced with a negative rate… and how can a production rate be negative? (Would we be unproducing a particle that wasn’t there to start with?) So, no: the rate must be proportional to something that’s always positive.

    Also, the rate has to depend on the charge, since electrically neutral particles can’t be affected, much less produced, by electromagnetism. For similar reasons, the rate ought to be small for particles whose charge is small. The simplest positive quantity which satisfies these requirements is the square of the charge.

    Meanwhile, if the particle comes in multiple versions, as quarks do (we call those versions “colors”), then each version gets produced in this same way as in Figure 1.

    Figure 3: As in Figure 1, but with a muon and anti-muon produced. This is the process we will compare quark/anti-quark production to.

    So a particle with charge Q that comes in N versions will have a production rate proportional to NQ².

    This is all we will need to know!

    A Simple Strategy

    The way to make everything simple, allowing us to avoid any hard calculations at all, is to compare the production of quarks and anti-quarks with the similar production of muons and anti-muons in electron-positron collisions (see Figure 3), both of which can be measured at each collision energy 2Ee.

    Specifically, we will calculate the ratio “R”:

    • R = Rate (e⁺e⁻ → quark + anti-quark) / Rate (e⁺e⁻ → muon + anti-muon)

    R will change with the collision energy as more and more types of quarks can be produced. The reason this is a good idea is that in electromagnetism, muon/anti-muon production is almost identical to quark/anti-quark production, except for simple details, so almost everything cancels out of this ratio.

    • R = Rate (e⁺e⁻ → quark + anti-quark) / Rate (e⁺e⁻ → muon + anti-muon)
      • = (sum of NQ² for all quark types produced) / (NQ² for muons)
      • = (sum of NQ² for all quark types produced) / (1)

    In the last line, I used the fact that for muon/anti-muon pairs, NQ² = 1; that’s because muons have the same charge as electrons (Q = −1, so Q² = +1) and they don’t have “color” — there’s only one version of a muon — so N = 1. Meanwhile, N = 3 for all quarks, so

    • R = (sum of 3Q² for all quark types produced)
      • = 3 ✕ (sum of Q² for all quark types produced)

    which is amazingly simple for anything involving quantum field theory and particle physics!

    A Simple Prediction

    Therefore, since the Standard Model says [using notation that “Qu” means “electric charge of the u quark“]:

    • Up, Charm, Top (u,c,t): Qu = Qc = Qt = 2/3
    • Down, Strange, Bottom (d,s,b): Qd = Qs = Qb = -1/3

    we get three predictions that we can compare with data:

    • for small 2Ee in the 2 – 3 GeV range, we can produce up, down and strange quarks, so
      • R = 3(Qu² + Qd² + Qs²) = 3(4/9 + 1/9 + 1/9) = 4/3 + 1/3 + 1/3 = 2
    • for intermediate 2Ee > 3 GeV or so, we add the charm quark:
      • R = 4/3 + 1/3 + 1/3 + 4/3 = 10/3 ≈ 3.33
    • for large 2Ee > 10 GeV or so, we add the bottom quark, so
      • R = 4/3 + 1/3 + 1/3 + 4/3 + 1/3 = 11/3 ≈ 3.67
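    If you want to check the arithmetic, here is a few-line calculation (my own sketch, not from the original post) applying the NQ² rule from above:

        # Quark electric charges in units of the proton charge.
        CHARGES = {"u": 2/3, "d": -1/3, "s": -1/3, "c": 2/3, "b": -1/3}
        N_COLORS = 3

        def R(quarks):
            """R = 3 * (sum of Q^2 over the quark types light enough to produce)."""
            return N_COLORS * sum(CHARGES[q] ** 2 for q in quarks)

        print(R("uds"))    # 2.0     (2-3 GeV region)
        print(R("udsc"))   # ~3.33   (above ~3 GeV)
        print(R("udscb"))  # ~3.67   (above ~10 GeV)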

    Comparison with Data

    What does the data, taken over many years at many experiments, say? I’ve plotted it in Figure 4, along with the three predictions for R that I just calculated for you. The data scatters around because the measurements aren’t perfect (and I haven’t shown the uncertainty bars), but you can see the trends by eye. The predictions of the Standard Model work well — not perfectly, as they’re always a little below the data, but close in each region.

    Figure 4: Data (black dots) showing R (the ratio of quark/anti-quark pair production to muon/anti-muon pair production in electron-positron collisions) as a function of the collision energy 2Ee. Horizontal colored lines show the three predictions for R in the regions where the data is simple and 3, 4 or 5 of the quarks are produced. The minor jumpiness in the data is due to measurement imperfections. That the predictions nearly work is already sufficient to verify the number of colors is 3 and the quark charges are correct.

    If the Standard Model were wrong, the data and predictions could easily be far apart. For instance, the loophole I pointed out last time would allow Qu = Qc = 1 and Qd = Qs = Qb = 0. But then the predicted R in the three simple regions would have been 3, 6, and 6; that would have been way off. Unless the charges are very close to those predicted in the Standard Model, predictions are far from the data, and so the loophole from last time is now closed.

    Also, predictions can’t explain the data for any other value of N, so the number of quark “colors” is verified to be 3. And meanwhile, the fact that these predictions almost match the data confirms that we were right to largely ignore all the other forces in the Standard Model. In this sense, many facets of the Standard Model are being simultaneously tested here.

    But what about the fact that the data always runs about 10% above the prediction? It turns out this is due to the fact that the strong nuclear force cannot, in fact, be completely ignored. The process in which an electron and positron annihilate and produce a quark, an anti-quark and a gluon is large enough that we must include it if we want a more accurate prediction. Accounting for this makes the agreement much better. It also leads us to a more complex topic for another post I’ll produce soon: the variable strength of the strong nuclear force.

    Matt Strassler The Hunger for Power: Geopolitics and Particle Physics

    A few days after Russia invaded Ukraine (I will not call it a “war,” as that might offend Czar Vlad and his friends) for the nth time, my thoughts turned to the consequences for the CERN laboratory and for upcoming research at the Large Hadron Collider [LHC].   It was clear that Putin would blackmail Europe using his oil and gas supplies, leading to a spike in energy prices and a corresponding spike in CERN’s budget.

    Of course I didn’t foresee the heat waves and drought that have swept Europe, or the maintenance problems at France’s nuclear plants, which have made the energy crisis that much worse.  (Even though global climate change is now quite obvious, and the trends are partially predictable, one can’t predict what will happen in any given year.)  I am not familiar with the budgetary consequences of these higher energy prices for CERN operations, but they cannot be good.

    Now comes word via the Wall Street Journal that power shortages, rather than mere budget considerations, may require CERN to cut back its substantial energy usage, in order to stabilize the power grid.  The LHC, which just restarted in July after a couple of years of upgrades, is the largest power consumer at the CERN lab. Much of that power is used to keep the giant machine extremely cold so that its powerful magnets can function.  It takes many weeks to cool the accelerator down to 1.9 degrees above absolute zero (i.e. a bit colder than the temperatures found in deep space, far from any stars) so one can’t just flip the LHC on and off like a light switch.  Clearly CERN will try first to curtail other operations in times of a power crunch and keep the LHC cold, so as not to have to shut it down for months at a time.  But I would not be surprised if this year’s LHC run is somewhat curtailed, for one or another reason.

    That’s too bad, but it’s just the way it is.  We have far bigger problems at a time of war (oops, I wasn’t going to call it that…)  And we are fortunate that, in contrast to the 1930s, when the ill-timed discovery of the neutron coincided with Hitler’s rise to dictatorship and led to a rapid nuclear arms race, ongoing particle physics research focuses on much less dangerous questions — despite what the current ridiculous conspiracy theories about CERN may claim.  I am grateful that, unlike our ancestors a couple of generations back, we particle physicists do not currently face the risk of putting new, catastrophic power in the hands of power-hungry, blood-thirsty autocrats.  (No, Vlad, of course I don’t mean anyone in particular; and besides, you already can destroy entire continents of people if you feel like it.)

    CERN was intended as a peace project: a pan-Western-European organization, founded after a time of war that had torn the continent apart.  In its convention, it is stated that “The Organization shall have no concern with work for military requirements and the results of its experimental and theoretical work shall be published or otherwise made generally available.”  After the end of the Cold War, it further expanded to the rest of the continent and even beyond.  It is not yet seriously wounded by this new period of conflict in Europe. But like everyone else from Vladivostok to Lisbon, it feels the pain.

    Tommaso Dorigo Are Particle Masses Fixed Or Not?

    A reader of this blog left an interesting question in the comments thread of the article I wrote on recent ATLAS results two days ago. As I tried to answer the question exhaustively, I think the material might be of interest to other readers here, so I decided to make an independent post of it, adding some more detail.

    John asks whether it is possible that what we see, when we plot the mass of a particle, is the true distribution of values of the particle - i.e. that the particle does not have only one mass, but a distribution of values. The question is not an idle one! So let us discuss it below. I will make a few points to clarify matters.

    1. We estimate by proxy


    September 05, 2022

    n-Category Café Compositional Constructions of Automata

    guest post by Ruben van Belle and Miguel Lopez

    In this post we will detail a categorical construction of automata following the work of Albasini, Sabadini, and Walters. We first recall the basic definitions of various automata, and then outline the construction of the aforementioned authors as well as generalizations and examples.

    A finite deterministic automaton consists of

    • a finite set $Q$ (state space),

    • an initial state $q_0 \in Q$ and a set $F \subseteq Q$ of accepting states,

    • a finite set $A$ of input symbols and a transition map $\tau_a : Q \to Q$ for every $a \in A$.

    Imagine a frog in a pond with 100 lily pads, which form a 10 by 10 grid. Suppose that the frog sits on a lily pad and can only make jumps of one unit, i.e. the frog can jump forwards, backwards, left or right.

    Let $Q$ be the set of lily pads. Let $q_0$ be the lily pad the frog is currently sitting on and let $q_1$ be the lily pad the frog wants to go to. Let $A$ be the set of symbols $\{f, b, l, r\}$. A jump forward corresponds to an input of $f$ and to the map $\tau_f : Q \to Q$ moving the frog forward (if possible) in the grid. Similarly, the symbols $b$, $l$ and $r$, with respective maps $\tau_b$, $\tau_l$ and $\tau_r$, correspond to jumps backwards, left and right, whenever these are possible. The data $(Q, q_0, \{q_1\}, A, (\tau_a)_{a \in A})$ form a finite deterministic automaton.
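    Concretely, one could code the frog automaton up in a few lines of Python (an illustrative sketch of the definition above, not anything from the post):

        MOVES = {"f": (1, 0), "b": (-1, 0), "l": (0, -1), "r": (0, 1)}

        def tau(state, symbol, size=10):
            """Transition map: hop one pad in the given direction, if possible."""
            (row, col), (dr, dc) = state, MOVES[symbol]
            new = (row + dr, col + dc)
            # A jump off the grid is impossible; the frog stays put.
            return new if 0 <= new[0] < size and 0 <= new[1] < size else state

        def accepts(word, q0, accepting):
            """Run the automaton on a word of input symbols."""
            q = q0
            for a in word:
                q = tau(q, a)
            return q in accepting

        print(accepts("ffr", (0, 0), {(2, 1)}))  # True: two hops forward, one right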

    The transition maps in the definition of a finite deterministic automaton are morphisms in $\mathbf{FinSet}$, the category of finite sets and functions. We can generalize the idea of a finite deterministic automaton by letting the transition maps be morphisms in more general categories. In particular we will look at Kleisli categories of monads on $\mathbf{FinSet}$ or $\mathbf{Set}$.

    If the transition map is a map in the Kleisli category of the powerset monad, i.e. functions $M_a : Q \to \mathcal{P}(Q)$, we obtain finite non-deterministic automata. In our example that would mean that the frog can choose between a set of different jumps for a given input. If the frog is sitting on lily pad $q$, then $\tau_f(q)$ is the set of all reachable lily pads that are in front of the frog.

    If we let the transition map be a morphism in the Kleisli category of the monad of finitely supported measures on $\mathbf{Set}$, we obtain weighted automata, and if the transition map is a morphism in the Kleisli category of the distribution monad, we obtain Markov automata. In the case of the frog this would mean that the frog prefers some lily pads over others and this preference is given in terms of distributions. If the frog sits on lily pad $q$, then $\tau_f(q)$ is a distribution on the reachable lily pads in front of the frog.

    In the following, we will first consider categories of Markov automata as presented in The compositional construction of Markov processes II by Albasini, Sabadini, and Walters, and then generalize this to $T$-automata for commutative strong monads $T$ on a monoidal category $\mathcal{C}$. We will study categories where the morphisms are these generalized automata and look at the monoidal structure and properties this category inherits from $\mathcal{C}$ and $\mathcal{C}_T$.

    Markov Automata

    Here, we will consider a definition of automata that deviates from the classical literature. A Markov automaton $Q^X_{Y;A,B}$ consists of the following data:

    • A set of states, also denoted $Q$.

    • Two sets of parallel interfaces $A$ and $B$ (playing the role of inputs/signals).

    • Two sets of sequential interfaces $X$ and $Y$, together with functions $\gamma_Q : X \to Q$ and $\delta_Q : Y \to Q$ (playing the role of start/stop states).

    • Transition matrices $Q_{a,b}$, indexed by the parallel interfaces, such that for all $q \in Q$: $\sum_{q' \in Q} \sum_{a \in A,\, b \in B} (Q_{a,b})_{q,q'} = 1$.

    Markov automata form a symmetric monoidal category in two distinctly different ways: one in which we compose automata in parallel and one in which we compose in sequence.

    Parallel Composition

    Let $\mathbf{ParAut}$ be the category with

    • objects as finite sets $A, B, C, \dots$

    • morphisms $A \to B$ as weighted automata whose parallel interfaces are the sets $A$ and $B$.

    The parallel composite of morphisms $A \to B \to C$ is defined to be the weighted automaton with states $Q \times R$ and whose top and bottom interfaces are the respective set products $X \times Z$ and $Y \times W$, with sequential interface functions $\gamma_Q \times \gamma_R$ and $\delta_Q \times \delta_R$. The parallel interfaces are $A$ and $C$, with transition matrices defined by

    $$(Q \| R)_{a,c} = \sum_{b \in B} Q_{a,b} \otimes R_{b,c}$$

    where $\otimes$ denotes the Kronecker product of matrices. This is shown diagrammatically in the figure below.

    Parallel Composition.
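    In code, the formula above is a double loop over interface symbols with a Kronecker product inside the sum. A small numpy sketch (mine, for illustration; transition matrices stored as a dict keyed by pairs of interface symbols):

        import numpy as np

        def parallel_compose(Q, R, A, B, C):
            """(Q || R)_{a,c} = sum over b in B of kron(Q_{a,b}, R_{b,c})."""
            composite = {}
            for a in A:
                for c in C:
                    terms = [np.kron(Q[a, b], R[b, c]) for b in B]
                    composite[a, c] = sum(terms)
            return composite

    The state space of the composite is the product $Q \times R$, which is exactly what the Kronecker product implements on the matrix side.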

    The tensor product of two objects $A$ and $B$ in $\mathbf{ParAut}$ is their cartesian product $A \times B$. On morphisms $Q^X_{Y;A,B}$ and $R^Z_{W;C,D}$, the tensor product has as states $Q \times R$, transition matrices

    $$(Q \times R)_{(a,c),(b,d)} = Q_{a,b} \otimes R_{c,d},$$

    and $X \times Z$ and $Y \times W$ as sequential interfaces, with interface functions $\gamma_Q \times \gamma_R$ and $\delta_Q \times \delta_R$ respectively. Note that the identity object $I = \{*\}$ is the one-element set.

    The tensor product in ParAut.

    To show $\mathbf{ParAut}$ is symmetric, we will use the fact that there is a monoidal functor

    $$\mathrm{Par} : \mathbf{Rel} \to \mathbf{ParAut}$$

    on the symmetric monoidal category $\mathbf{Rel}$ of (finite) sets and relations. This functor acts as the identity on objects, $\mathrm{Par}(A) = A$, and on morphisms $R \subseteq A \times B$ by associating to $R$ the following automaton:

    • Each of the sequential interfaces and the state space are the singleton set $X = Y = Q = \{*\}$.

    • The left and right parallel interfaces are $A$ and $B$ respectively, and the transition matrices (in this case numbers) are given by $$\mathrm{Par}(R)(a,b)_{\ast,\ast} = \begin{cases} 1, & (a,b) \in R \\ 0, & \text{otherwise} \end{cases}$$

    for $a \in A$ and $b \in B$. Symmetry in $\mathbf{ParAut}$ then follows from $\mathrm{Par}$ being a monoidal functor. Using this technology we will show some nice properties of $\mathbf{ParAut}$ by pushing them forward through $\mathbf{Rel}$.

    Proposition. $\mathbf{ParAut}$ is compact closed.

    This in particular allows us to define the notion of feedback. To show that $\mathbf{Rel}$ (and hence $\mathbf{ParAut}$) is compact closed, we instead show that each object in $\mathbf{Rel}$ is a Frobenius algebra and apply the following lemma.

    Lemma. If an object $A$ in a monoidal category forms a Frobenius algebra, then it is dual to itself.

    Let $\Delta_A : A \to A \times A$ be the diagonal map, thought of as a morphism in $\mathbf{Rel}$, and $p_A : A \to 1$ the unique morphism to the terminal object. Moreover, given a morphism $R$, let $R^{\mathrm{op}}$ denote the opposite relation (by simply reversing the arrow). Then the monoid structure for the Frobenius algebra of an object $A$ in $\mathbf{Rel}$ is given by $(A, \Delta_A^{\mathrm{op}}, p_A^{\mathrm{op}})$ and, dually, the comonoid structure by $(A, \Delta_A, p_A)$.

    Sequential Composition

    Let $\mathbf{SeqAut}$ be the category with

    • objects as finite sets $X, Y, Z, \dots$

    • morphisms $X \to Y$ as weighted automata $Q^X_{Y;A,B}$ whose sequential interfaces are the sets $X$ and $Y$.

    We define the sequential composite of two morphisms $Q^X_{Y;A,B}$ and $R^Y_{Z;C,D}$ to be the weighted automaton $Q \circ R$ whose states are given by the disjoint union $(Q + R)/\sim$ quotiented by the relation $\delta_Q(y) \sim \gamma_R(y)$. The parallel interfaces are $A + C$ and $B + D$, and the top and bottom interfaces are $X$ and $Z$ with the respective interface functions

    $$\begin{aligned} \gamma &: X \to Q \to Q + R \to (Q+R)/\sim \\ \delta &: Z \to R \to Q + R \to (Q+R)/\sim. \end{aligned}$$

    If we let $q, q' \in Q$ and $r, r' \in R$, and let $[s]$ denote the equivalence class of a state $s$ in $Q + R$, then the transition matrices are given by

    $$\begin{aligned} [(Q \circ R)_{a,b}]_{[q],[q']} &= \sum_{s \in [q],\, s' \in [q']} [Q_{a,b}]_{s,s'} \\ [(Q \circ R)_{c,d}]_{[r],[r']} &= \sum_{s \in [r],\, s' \in [r']} [R_{c,d}]_{s,s'} \end{aligned}$$

    where all other entries are zero. This is shown diagrammatically below.

    Sequential Composition.

    The tensor product of $Q^X_{Y;A,B}$ and $R^Z_{W;C,D}$ has states $Q + R$, without any quotienting. The parallel interfaces are again $A + C$ and $B + D$, while the top and bottom interfaces $X + Z$ and $Y + W$ have interface functions $\gamma_Q + \gamma_R$ and $\delta_Q + \delta_R$. The transition matrices are given by $$\begin{aligned} [(Q + R)_{a,b}]_{q,q'} &= [Q_{a,b}]_{q,q'} \\ [(Q + R)_{c,d}]_{r,r'} &= [R_{c,d}]_{r,r'} \end{aligned}$$ where all other entries are again zero.

    The tensor product in SeqAut.

    Just as in the case of $\mathbf{ParAut}$, one can show $\mathbf{SeqAut}$ is a compact closed symmetric monoidal category by defining a monoidal functor $\mathrm{Seq} : \mathbf{Rel} \to \mathbf{SeqAut}$.

    $T$-Automata

    We now generalize the preceding categories to work with a commutative strong monad. Let $(\mathcal{C}, \otimes, I)$ be a (strict) symmetric monoidal category and let $(T, \mu, \eta)$ be a commutative monad on $\mathcal{C}$ with strength $\sigma$. Let $\mathcal{C}_T$ be the Kleisli category of this monad. The strength induces a monoidal structure on $\mathcal{C}_T$ (see Folklore: Monoidal Kleisli categories). We will denote the tensor product and unit object on $\mathcal{C}_T$ also by $\otimes$ and $I$.

    Definition. A $T$-automaton consists of

    • an object $Q$ in $\mathcal{C}$,

    • morphisms $\gamma_Q : X \to Q$ and $\delta_Q : Y \to Q$ in $\mathcal{C}$ (sequential interfaces), and

    • objects $A$ and $B$ in $\mathcal{C}$ (parallel interfaces), together with a map $M_Q : Q \otimes A \rightsquigarrow Q \otimes B$ in $\mathcal{C}_T$ (transition morphism).

    Parallel composition

    Consider two $T$-automata $\mathcal{Q} := (Q, \gamma_Q, \delta_Q, M_Q : Q \otimes A \rightsquigarrow Q \otimes B)$ and $\mathcal{R} := (R, \gamma_R, \delta_R, M_R : R \otimes B \rightsquigarrow R \otimes C)$.

    The parallel composite of $\mathcal{R}$ and $\mathcal{Q}$ is given by the $T$-automaton $$\mathcal{Q} \circ_p \mathcal{R} := \big(Q \otimes R,\ \gamma_Q \otimes \gamma_R,\ \delta_R \otimes \delta_Q,\ (1_R \otimes M_Q) \diamond (1_Q \otimes M_R)\big),$$ where $\diamond$ is used to mean composition in $\mathcal{C}_T$.

    Let $\mathbf{ParAut}(T)$ be the category that has the same objects as $\mathcal{C}$, where a morphism from $A$ to $B$ is a $T$-automaton whose parallel interfaces are given by $A$ and $B$, and where composition is given by parallel composition.

    Given two $T$-automata $\mathcal{Q} := (Q, \gamma_Q, \delta_Q, M_Q : Q \otimes A \rightsquigarrow Q \otimes B)$ and $\mathcal{R} := (R, \gamma_R, \delta_R, M_R : R \otimes C \rightsquigarrow R \otimes D)$, we can form a new $T$-automaton $$\mathcal{Q} \otimes_p \mathcal{R} := (Q \otimes R,\ \gamma_Q \otimes \gamma_R,\ \delta_R \otimes \delta_Q,\ M_Q \otimes M_R),$$ which we call the parallel tensor product. The parallel tensor product $\otimes_p$ together with the object $I$ defines a monoidal structure on $\mathbf{ParAut}(T)$.

    We will now define a (strict) monoidal functor $\mathrm{Par} : \mathcal{C}_T \to \mathbf{ParAut}(T)$. The functor acts as the identity on objects and sends a morphism $f : A \rightsquigarrow B$ in $\mathcal{C}_T$ to the $T$-automaton $(I, \mathrm{Id}_I, \mathrm{Id}_I, \mathrm{Id}_I \otimes f)$.

    Because the monoidal structure on $\mathcal{C}$ is symmetric, so is the monoidal structure on $\mathbf{ParAut}(T)$, using the monoidal functor $\mathrm{Par}$.

    Note that $\mathrm{Par}$ sends Frobenius algebras to Frobenius algebras and is surjective on objects.

    Proposition. If $\mathcal{C}_T$ is compact closed, so is $\mathbf{ParAut}(T)$.

    Remark. We have tried to also generalize the sequential operations to $T$-automata, but we will leave the details of this construction for future work.

    Examples

    Consider the following game in a casino: the player is asked to place a bet; let's say the player decides to place a bet of £$a$. After the bet is placed, the dealer blindly picks a coin from a bag of (fair and unfair) coins; suppose the dealer picks coin $b$, which has probability $p$ of landing on heads. Then the dealer flips the coin. If the coin lands on heads, the player wins £$a$. If it lands on tails, the player loses the placed bet of £$a$.

    We represent the bank balance of the player by $Q := \{0, 1, \ldots, N\}$. We furthermore suppose that the player enters the casino with £$x$ and has decided to leave the casino when their bank balance reaches a state in $Y \subseteq Q$.

    Let $A$ be the set of possible bets, i.e. $A := \{0, 1, \ldots, N\}$, and let $B$ be the bag of coins the dealer picks from. For a coin $b \in B$, we denote the probability of heads by $p_b$. For a bet $a \in A$ and a bank balance $q$ such that $q \geq a$, let $M(a,q)$ be the measure on $B \times Q$ defined by $$(b, q') \mapsto \begin{cases} p_b & \text{if } q' = q + a \\ 1 - p_b & \text{if } q' = q - a \\ 0 & \text{otherwise.} \end{cases}$$ This measures the chance of moving from a bank balance of $q$ to a balance of $q'$ when playing the game.

    For a bet $a$ and a bank balance $q$ such that $q < a$, let $M(a,q)$ be the measure on $B \times Q$ defined by

    $$(b, q') \mapsto \begin{cases} 1 & \text{if } q' = q \\ 0 & \text{otherwise,} \end{cases}$$

    as the player is not allowed to bet more money than they have.

    This defines the Kleisli map $Q \times A \rightsquigarrow Q \times B$, and the data $(Q, x, Y, M)$ form a weighted automaton. Note that $0 \in Q$ is a so-called deadlock state, since the player's bank balance won't change anymore after reaching $0$. In particular, if $0$ is not an element of $Y$, the player will have to keep playing the game forever.
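    As a sanity check of the definition, here is $M(a,q)$ as a small Python function (my own sketch; it returns the finitely supported measure as a dict on $B \times Q$):

        def M(a, q, p_heads):
            """Measure on B x Q for bet `a` at balance `q`.
            `p_heads` maps each coin b to its probability of heads."""
            if q < a:
                # Illegal bet: the balance stays put.
                return {(b, q): 1.0 for b in p_heads}
            measure = {}
            for b, p in p_heads.items():
                measure[(b, q + a)] = p        # win: balance goes up by a
                measure[(b, q - a)] = 1.0 - p  # lose: balance goes down by a
            return measure

        # Example: two coins, one fair and one heavily biased towards tails.
        print(M(2, 5, {"fair": 0.5, "biased": 0.1}))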

    Suppose now that the casino has multiple tables with different bags of coins, and suppose that two friends go to the casino together. Then parallel composition combines the information of the two friends playing at the same table, i.e. the game is played with the same coin. The parallel product and the sequential sum both combine the information of the two friends playing at different tables. The product measures the chance that person 1 reaches state $q'$ and person 2 reaches state $r'$, while the sum measures the chance that person 1 reaches state $q'$ or person 2 reaches state $r'$.

    If the player starts with £$x$ and decides to move to a different table when reaching £$y$ and to stop playing when their bank balance is £$z$, we can model the combination of the two games with the sequential composition.

    Doug Natelson Coming next month....


    I'm going to be presenting a continuing education course starting next month, trying to give a general audience introduction to some key ideas about condensed matter and materials physics.   

    From the intro flyer:  "The world and the materials that compose it are full of profound phenomena often overlooked by the uninitiated, from quasiparticles to the quantum world. Did you know that there are hundreds of states of matter? Have you ever wondered why objects can’t pass through each other and why stars don’t collapse? What do sports fans doing the wave or a traffic slowdown on the 610 Loop have to do with electrical conduction in metal? Why are raindrops wet and how do snowflakes achieve their delicate sixfold symmetry? Learn how physics affects everything around you, defining the very laws of nature. Spanning physics, chemistry, materials science, electrical engineering and even a bit of biology, this course brings the foundations of everyday physics to life and shares some of the most intriguing research emerging today."

    Here is the link for registration for the course.  (The QR code I'd originally posted seems to point to the wrong class.)

    (My posting has been less frequent as I continue to work on preparing this class.  Should be fun.)
      

    Tommaso Dorigo Highlights From ATLAS

    Bill Murray gave a nice summary of recent results from the ATLAS collaboration at the ICNFP conference this morning, and I will nit-pick a few graphs from his presentation to show the level of detail of investigations in subnuclear processes that the Large Hadron Collider (LHC) is providing these days, as seen from the lens of one of its two main microscopes, the wondrous ATLAS detector.
    The LHC is taking data. What, again?


    September 04, 2022

    Scott Aaronson What I’ve learned from having COVID

    1. The same thing Salman Rushdie learned: either you spend your entire life in hiding, or eventually it’ll come for you. Years might pass. You might emerge from hiding once, ten times, a hundred times, be fine, and conclude (emotionally if not intellectually) that the danger must now be over, that if it were going to come at all then it already would have, that maybe you’re even magically safe. But this is just the nature of a Poisson process: 0, 0, 0, followed by 1.
    2. First comes the foreboding (in my case, on the flight back home from the wonderful CQIQC meeting in Toronto)—“could this be COVID?”—the urge to reassure yourself that it isn’t, the premature relief when the test is negative. Only then, up to a day later, comes the second vertical line on the plastic cartridge.
    3. I’m grateful for the vaccines, which have up to a 1% probability of having saved my life. My body was as ready for this virus as my brain would’ve been for someone pointing a gun at my head and demanding to know a proof of the Karp-Lipton Theorem. All the same, I wish I also could’ve taken a nasal vaccine, to neutralize the intruder at the gate. Through inaction, through delays, through safetyism that’s ironically caused millions of additional deaths, the regulatory bureaucracies of the US and other nations have a staggering amount to answer for.
    4. Likewise, Paxlovid should’ve been distributed like candy, so that everyone would have a supply and could start the instant they tested positive. By the time you’re able to book an online appointment and send a loved one to a pharmacy, a night has likely passed and the Paxlovid is less effective.
    5. By the usual standards of a cold, this is mild. But the headaches, the weakness, the tiredness … holy crap the tiredness. I now know what it’s like to be a male lion or a hundred-year-old man, to sleep for 20 hours per day and have that feel perfectly appropriate and normal. I can only hope I won’t be one of the long-haulers; if I were, this could be the end of my scientific career. Fortunately the probability seems small.
    6. You can quarantine in your bedroom, speak to your family only through the door, have meals passed to you, but your illness will still cast a penumbra on everyone around you. Your spouse will be stuck watching the kids alone. Other parents won’t let their kids play with your kids … and you can’t blame them; you’d do the same in their situation.
    7. It’s hard to generalize from a sample size of 1 (or 2 if you count my son Daniel, who recovered from a thankfully mild case half a year ago). Readers: what are your COVID stories?

    September 02, 2022

    Matt von Hippel Cabinet of Curiosities: The Cubic

    Before I launch into the post: I got interviewed on Theoretically Podcasting, a new YouTube channel focused on beginning grad student-level explanations of topics in theoretical physics. If that sounds interesting to you, check it out!

    This Fall is paper season for me. I’m finishing up a number of different projects, on a number of different things. Each one was its own puzzle: a curious object found, polished, and sent off into the world.

    Monday I published the first of these curiosities, along with Jake Bourjaily and Cristian Vergu.

    I’ve mentioned before that the calculations I do involve a kind of “alphabet“. Break down a formula for the probability that two particles collide, and you find pieces that occur again and again. In the nicest cases, those pieces are rational functions, but they can easily get more complicated. I’ve talked before about a case where square roots enter the game, for example. But if square roots appear, what about something even more complicated? What about cubic roots?

    What about 1024th roots?

    Occasionally, my co-authors and I would say something like that at the end of a talk and an older professor would scoff: “Cube roots? Impossible!”

    You might imagine these professors were just being unreasonable skeptics, the elderly-but-distinguished scientists from that Arthur C. Clarke quote. But while they turned out to be wrong, they weren’t being unreasonable. They were thinking back to theorems from the 60’s, theorems which seemed to argue that these particle physics calculations could only have a few specific kinds of behavior: they could behave like rational functions, like logarithms, or like square roots. Theorems which, as they understood them, would have made our claims impossible.

    Eventually, we decided to figure out what the heck was going on here. We grabbed the simplest example we could find (a cube root involving three loops and eleven gluons in N=4 super Yang-Mills…yeah) and buckled down to do the calculation.

    When we want to calculate something specific to our field, we can reference textbooks and papers, and draw on our own experience. Much of the calculation was like that. A crucial piece, though, involved something quite a bit less specific: calculating a cubic root. And for things like that, you can tell your teachers we use only the very best: Wikipedia.

    Check out the Wikipedia entry for the cubic formula. It’s complicated, in ways the quadratic formula isn’t. It involves complex numbers, for one. But it’s not that crazy.

    What those theorems from the 60’s said (and what they actually said, not what people misremembered them as saying), was that you can’t take a single limit of a particle physics calculation, and have it behave like a cubic root. You need to take more limits, not just one, to see it.

    It turns out, you can even see this just from the Wikipedia entry. There’s a big cube root sign in the middle there, equal to some variable “C”. Look at what’s inside that cube root. You want that part inside to vanish. That means two things need to cancel: Wikipedia labels them $\Delta_1$ and $\sqrt{\Delta_1^2 - 4\Delta_0^3}$. Do some algebra, and you’ll see that for those to cancel, you need $\Delta_0 = 0$.

    So you look at the limit $\Delta_0 \to 0$. This time you need not just some algebra, but some calculus. I’ll let the students in the audience work it out, but at the end of the day, you should notice how $C$ behaves when $\Delta_0$ is small. It isn’t like $\sqrt[3]{\Delta_0}$. It’s like just plain $\Delta_0$. The cube root goes away.

    It can come back, but only if you take another limit: not just $\Delta_0 \to 0$, but $\Delta_1 \to 0$ as well. And that’s just fine according to those theorems from the 60’s. So our cubic curiosity isn’t impossible after all.
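    If you'd rather see it numerically than do the calculus, here's a quick check (my own sketch, not from our paper): pick the branch of $C$ that vanishes, shrink $\Delta_0$, and watch $C/\Delta_0$ settle to a constant (namely $\Delta_1^{-1/3}$) instead of blowing up the way a cube root would.

        # Branch of the cubic-formula cube root that vanishes as Delta_0 -> 0.
        d1 = 2.0
        for d0 in [1e-2, 1e-3, 1e-4]:
            C = ((d1 - (d1**2 - 4 * d0**3) ** 0.5) / 2) ** (1 / 3)
            print(d0, C / d0)  # ratio settles near d1 ** (-1/3) ~ 0.794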

    Our calculation wasn’t quite this simple, of course. We had to close a few loopholes, checking our example in detail using more than just Wikipedia-based methods. We found what we thought was a toy example that turned out to be even more complicated, involving roots of a degree-six polynomial (one that has no “formula”!).

    And in the end, polished and in their display case, we’ve put our examples up for the world to see. Let’s see what people think of them!

    Scott Aaronson Win a $250,000 Scott Aaronson Grant for Advanced Precollege STEM Education!

    Back in January, you might recall, Skype cofounder Jaan Tallinn’s Survival and Flourishing Fund (SFF) was kind enough to earmark $200,000 for me to donate to any charitable organizations of my choice. So I posted a call for proposals on this blog. You “applied” to my “foundation” by simply sending me an email, or leaving a comment on this blog, with a link to your organization’s website and a 1-paragraph explanation of what you wanted the grant for, and then answering any followup questions that I had.

    After receiving about 20 awesome proposals in diverse areas, in the end I decided to split the allotment among organizations around the world doing fantastic, badly-needed work in math and science enrichment at the precollege level. These included Canada/USA Mathcamp, AddisCoder, a magnet school in Maine, a math circle in Oregon, a math enrichment program in Ghana, and four others. I chose to focus on advanced precollege STEM education both because I have some actual knowledge and experience there, and because I wanted to make a strong statement about an underfunded cause close to my heart that’s recently suffered unjust attacks.

    To quote the immortal Carl Sagan, from shortly before his death:

    [C]hildren with special abilities and skills need to be nourished and encouraged. They are a national treasure. Challenging programs for the “gifted” are sometimes decried as “elitism.” Why aren’t intensive practice sessions for varsity football, baseball, and basketball players and interschool competition deemed elitism? After all, only the most gifted athletes participate. There is a self-defeating double standard at work here, nationwide.

    Anyway, the thank-you notes from the programs I selected were some of the most gratifying emails I’ve ever received.

    But wait, it gets better! After reading about the Scott Aaronson Speculation Grants on this blog, representatives from a large, reputable family foundation contacted me to say that they wanted to be involved too. This foundation, which wishes to remain anonymous at this stage although not to the potential grant recipient, intends to make a single US$250,000 grant in the area of advanced precollege STEM education. They wanted my advice on where their grant should go.

    Of course, I could’ve simply picked one of the same wonderful organizations that SFF and I helped in the first round. On reflection, though, I decided that it would be more on the up-and-up to issue a fresh call for proposals.

So: do you run a registered 501(c)(3) nonprofit dedicated to advanced precollege STEM education? If so, email me or leave a comment here by Friday, September 9, telling me a bit about what your organization does and what more it could do with an extra $250K. Include a rough budget, if that will help convince me that you can actually make productive use of that amount and that it won't just sit in your bank account. Organizations that received a Scott Aaronson Speculation Grant the last time are welcome to reapply; newcomers are also welcome.

    I’ll pass up to three finalists along to the funder, which will then make a final decision as to the recipient. The funder will be directly in touch with the potential grantee(s) and will proceed with its intake, review and due diligence process.

    We expect to be able to announce a recipient on or around October 24. Can’t wait to see what people come up with!

    September 01, 2022

    Matt Strassler Relatively Confused: Is It True That Nothing Can Exceed Light Speed?

    A post for general readers:

    Einstein’s relativity. Everybody’s heard of it, many have read about it, a few have learned some of it.  Journalists love to write about it.  It’s part of our culture; it’s always in the air, and has been for over a century.

    Most of what’s in the air, though, is in the form of sound bites, partly true but often misleading.  Since Einstein’s view of relativity (even more than Galileo’s earlier one) is inherently confusing, the sound bites turn a maze into a muddled morass.

    For example, take the famous quip: “Nothing can go faster than the speed of light.”  (The speed of light is denoted “c“, and is better described as the “cosmic speed limit”.) This quip is true, and it is false, because the word “nothing” is ambiguous, and so is the phrase “go faster”. 

    What essential truth lies behind this sound bite?

    Faster Than Light? An Example.

    Let’s first see how it can lead us astray.

    At the Large Hadron Collider [LHC], the world’s largest particle accelerator, protons collide with other protons. Just before a collision happens, a proton that has been accelerated to 99.999999% of c comes in from the right, and another accelerated to the same speed comes in from the left.

    Figure 1: From the perspective of someone standing in the LHC control room, two protons about to collide are each moving at almost the speed of light in opposite directions.

    From your perspective, observing these two protons (using fancy electronics, not your eyes) from the LHC control room, the distance between the two protons is decreasing by almost the speed of light from the right and almost the speed of light from the left.  So it would seem that the distance between them is closing by almost twice the speed of light — by 1.99999998 c, if you want to be precise.  And if the distance between them is decreasing by almost twice the speed of light, then, well, their relative speed (as you see it) is faster than the speed of light.

    That must be wrong, somehow, mustn’t it? Otherwise it would violate relativity… right?

    But no.  It’s not wrong.  From your perspective, the two protons are approaching each other at nearly twice the speed of light.

    Even simpler: point two laser pointers at one another, and turn them on at the same moment. The light beams will each be traveling at the speed of light, and the distance between them, from your perspective, will be decreasing at twice the speed of light.

    When a flash of lightning occurs, light rushes off in all directions. The light moving north moves at c. The light moving south moves at c. From our perspective, standing on the ground, the distance between the light moving north and the light moving south grows at 2c. That’s all there is to it.

But I thought — wait — huh? — that can't be ri — didn't Einst…? — you're full of… — I just don't believe… — … …can't be true!?! — well I [sound of head exploding.]

    This is the kind of problem that sound bites lead to. Let’s take the two-proton example apart, and see what “nothing” and “go faster” actually mean.

    The Can’t-Exceed-c Rule

    What Einstein’s relativity actually says is this:

    • From the perspective of any observer who measures the speeds of physical objects (using a careful laying out of aligned rulers for distance and synchronized clocks for time), no objects will ever be measured to be moving faster than the cosmic speed limit c, also known as “the speed of light in empty space.”

    Let’s call this the can’t-exceed-c rule[This statement of the rule is okay as long as gravity isn’t too important; if gravity matters a lot, then it needs revision… which I’ll return to in a future post.]

    The two protons individually don’t violate this rule: we, as observers in the LHC control room, view both of the colliding protons as moving slower than c.  And notice: the can’t-exceed-c rule says nothing about the relative speed of two objects as observed by a bystander.

    But still, there is a potential threat to the rule lurking here. Suppose we somehow accelerated an observer (let’s name him Peter) to the same speed and direction as the proton coming from the left.  Peter would then move along with that proton, and would view it as stationary. From our perspective, as illustrated in Figure 1, we might think Peter would then view the proton from the right as approaching at nearly twice the speed of light.  That would violate the can’t-exceed-c rule.

    Similarly, if Paula were traveling with the proton coming from the right, we might think she’d view the proton from the left as moving at nearly twice c.

    Here’s where relativity of space and time as Einstein intuited it, and as experiments confirm, steps in to save the can’t-exceed-c rule. The point is this: because Peter is in motion relative to us, Peter’s view of space and time is not the same as ours. This is key. Because of his differing views, Peter will lay out clocks and rulers differently from how we, sitting in the LHC control room, would do so. Therefore, the way that Peter measures speed — the distance covered by a moving object over a certain amount of time — is different from how we are doing so. 

    And that’s why, even though we measure the two protons approaching each other at nearly twice light speed, Peter will measure the right-moving proton approaching him below light speed. (The analogous statement would be true for Paula, traveling along with the other proton.)

Figure 2: From the perspective of an observer traveling with the proton on the left, that proton is stationary, while we are moving at nearly light speed, and the second proton is moving even closer to — but still below — light speed. That this picture can be consistent with Figure 1 is not obvious; after all, it took Einstein to figure it out. It requires non-intuitive (but experimentally confirmed) observer-dependent distortions of space and time.

    In summary: from the perspective of an observer traveling with either proton, 

    • we and the LHC control room are approaching at a speed of 0.99999999 c
• the other proton is approaching at a speed of 0.9999999999999998 c.

Meanwhile, from our perspective,

    • Both protons are approaching us (from opposite directions) at a speed of 0.99999999 c

Different observers simply disagree. Yet all are simultaneously correct. This is characteristic of things that are relative (i.e., perspective-dependent). [You and I think the Sun is bright, but an observer out at Pluto would think the Sun is dim; there's no logical contradiction, and all of us are correct.] The difference, again, stems from different (but equally valid) ways of measuring distances and durations.

    [One thing we do have to agree on: if A views B as approaching with speed v, then B must also view A as approaching with speed v. Our perspective and those of Peter and Paula are indeed all consistent with this requirement.]
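If you want to check those numbers yourself, the standard relativistic velocity-addition formula, w = (u+v)/(1+uv/c²), converts the control-room speeds into Peter's or Paula's measurement. A minimal sketch in exact arithmetic (the formula is textbook special relativity, not something specific to this post):

    from fractions import Fraction

    # Work in units where c = 1; each proton moves at 0.99999999 c
    # as measured from the LHC control room.
    u = v = Fraction(99999999, 100000000)

    # Einstein's velocity addition: the speed of one proton as
    # measured by an observer riding along with the other.
    w = (u + v) / (1 + u * v)

    print(w > u)         # True: faster than either proton appears to us
    print(w < 1)         # True: still below the cosmic speed limit
    print(float(1 - w))  # the tiny shortfall below c, about 5e-17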

Notice that no one's perspective violates the can't-exceed-c rule. The universe preserves this rule, as described by the math of relativity, no matter how you try to trick it into an exception.  Yet that math doesn't promise or assure that this rule applies to the "relative speed" of two objects — if "relative speed" is defined as the rate of change of the distance between two objects as measured by a bystander.

    As always, though, definitions matter. If instead we defined the “relative speed” of two objects to be the speed of the second object as measured by an observer traveling with the first, or vice versa, independent of any bystander, then indeed that “relative speed” cannot exceed c. This is the definition implicitly used by physicists most of the time. But as with all definitions, it’s an arbitrary choice. It’s not the intuitive one most non-physicists would use.

    From Sound Bite to Understanding

    So what does “nothing can go faster than the speed of light” really mean?  “Nothing” means “No physical object”, and “Go Faster…” means “can be measured by an observer to be moving…” And altogether the sound bite means,

    • “no physical object, from the perspective of any particular observer, can ever be measured to be moving faster than the speed of light in empty space, c.”

    Even this comes with additional fine print. (The observer should be inertial and must make the measurement using an inertial frame of reference, and gravity had better be irrelevant; and I’ve left out some details about what “observer” really means and how the measurement is really made.) I’ll return to some of that fine print another time. But I hope it’s clearer now what the initial glib statement doesn’t mean.

    Fundamentally, a sound bite or TL;DR approach to science can’t ever work. Many of the remarkable and non-intuitive features of the universe are within the grasp of non-scientists, but they require more than a single sentence, or even a paragraph. What sound bites do in relativity is similar to what they do in politics: they make us think we understand something, while actually obstructing the path to knowledge.

    Scott Aaronson My Quantum Information Science II Lecture Notes: The wait is over!

    Here they are [PDF].

    They’re 155 pages of awesome—for a certain extremely specific definition of “awesome”—which I’m hereby offering to the world free of charge (for noncommercial use only, of course). They cover material that I taught, for the first time, in my Introduction to Quantum Information Science II undergrad course at UT Austin in Spring 2022.

    The new notes pick up exactly where my older QIS I lecture notes left off, and they presuppose familiarity with the QIS I material. So, if you’re just beginning your quantum information journey, then please start with my QIS I notes, which presuppose only linear algebra and a bit of classical algorithms (e.g., recurrence relations and big-O notation), and which self-containedly explain all the rules of QM, moving on to (e.g.) quantum circuits, density matrices, entanglement entropy, Wiesner’s quantum money, QKD, quantum teleportation, the Bell inequality, interpretations of QM, the Shor 9-qubit code, and the algorithms of Deutsch-Jozsa, Bernstein-Vazirani, Simon, Shor, and Grover. Master all that, and you’ll be close to the quantum information research frontier of circa 1996.

    My new QIS II notes cover a bunch of topics, but the main theme is “perspectives on quantum computing that go beyond the bare quantum circuit model, and that became increasingly central to the field from the late 1990s onwards.” Thus, it covers:

    • Hamiltonians, ground states, the adiabatic algorithm, and the universality of adiabatic QC
    • The stabilizer formalism, the 1996 Gottesman-Knill Theorem on efficient classical simulation of stabilizer QC, my and Gottesman’s 2004 elaborations, boosting up to universality via “magic states,” transversal codes, and the influential 2016 concept of stabilizer rank
    • Bosons and fermions: the formalism of Fock space and of creation and annihilation operators, connection to the permanents and determinants of matrices, efficient classical simulation of free fermionic systems (Valiant’s 2002 “matchcircuits”), the 2001 Knill-Laflamme-Milburn (KLM) theorem on universal optical QC, BosonSampling and its computational complexity, and the current experimental status of BosonSampling
    • Cluster states, Raussendorf and Briegel’s 2000 measurement-based quantum computation (MBQC), and Gottesman and Chuang’s 1999 “gate teleportation” trick
    • Matrix product states, and Vidal’s 2003 efficient classical simulation of “slightly entangled” quantum computations

    Extra bonus topics include:

    • The 2007 Broadbent-Fitzsimons-Kashefi (BFK) protocol for blind and authenticated QC; brief discussion of later developments including Reichardt-Unger-Vazirani 2012 and Mahadev 2018
    • Basic protocols for quantum state tomography
    • My 2007 work on PAC-learnability of quantum states
    • The “dessert course”: the black hole information problem, and the Harlow-Hayden argument on the computational hardness of decoding Hawking radiation

    Master all this, and hopefully you’ll have the conceptual vocabulary to understand a large fraction of what people in quantum computing and information care about today, how they now think about building scalable QCs, and what they post to the quant-ph arXiv.

    Note that my QIS II course is complementary to my graduate course on quantum complexity theory, for which the lecture notes are here. There’s very little overlap between the two (and even less overlap between QIS II and Quantum Computing Since Democritus).

    The biggest, most important topic related to the QIS II theme that I didn’t cover was topological quantum computing. I’d wanted to, but it quickly became clear that topological QC begs for a whole course of its own, and that I had neither the time nor the expertise to do it justice. In retrospect, I do wish I’d at least covered the Kitaev surface code.

    Crucially, these lecture notes don’t represent my effort alone. I worked from draft scribe notes prepared by the QIS II students, who did a far better job than I had any right to expect (including creating the beautiful figures). My wonderful course TA and PhD student Daniel Liang, along with students Ethan Tan, Samuel Ziegelbein, and Steven Han, then assembled everything, fixed numerous errors, and compiled the bibliography. I’m grateful to all of them. At the last minute, we had a LaTeX issue that none of us knew how to fix—but, in response to a plea, Shtetl-Optimized reader Pablo Cingolani generously volunteered to help, completed the work by the very next day (I’d imagined it taking a month!), and earned a fruit basket from me in gratitude.

    Anyway, let me know of any mistakes you find! We’ll try to fix them.

    August 31, 2022

    Scott Aaronson Busy Beaver Updates: Now Even Busier

    Way back in the covid-filled summer of 2020, I wrote a survey article about the ridiculously-rapidly-growing Busy Beaver function. My survey then expanded to nearly twice its original length, with the ideas, observations, and open problems of commenters on this blog. Ever since, I’ve felt a sort of duty to blog later developments in BusyBeaverology as well. It’s like, I’ve built my dam, I’ve built my lodge, I’m here in the pond to stay!

    So without further ado:

• This May, Pavel Kropitz found a machine demonstrating that $$ BB(6) \ge 10^{10^{10^{10^{10^{10^{10^{10^{10^{10^{10^{10^{10^{10^{10}}}}}}}}}}}}}} $$ (15 times)—thereby blasting through his own 2010 record, that BB(6) ≥ 10^{36,534}. Or, for those tuning in from home: Kropitz constructed a 6-state, 2-symbol, 1-tape Turing machine that runs for at least the above number of steps when started on an initially blank tape, and then halts. The machine was analyzed and verified by Pascal Michel, the modern keeper of Busy Beaver lore. In my 2020 survey, I'd relayed an open problem posed by my then 7-year-old daughter Lily: namely, what's the first n such that BB(n) exceeds A(n), the nth value of the Ackermann function? (One common version of the Ackermann function is sketched in code after this list.) All that's been proven is that this n is at least 5 and at most 18. Kropitz and Michel's discovery doesn't settle the question—titanic though it is, the new lower bound on BB(6) is still less than A(6) (!!)—but in light of this result, I now strongly conjecture that the crossover happens at either n=6 or n=7. Huge congratulations to Pavel and Pascal!
    • Tristan Stérin and Damien Woods wrote to tell me about a new collaborative initiative they’ve launched called BB Challenge. With the participation of other leading figures in the neither-large-nor-sprawling Busy Beaver world, Tristan and Damien are aiming, not only to pin down the value of BB(5)—proving or disproving the longstanding conjecture that BB(5)=47,176,870—but to do so in a formally verified way, with none of the old ambiguities about which Turing machines have or haven’t been satisfactorily analyzed. In my survey article, I’d passed along a claim that, of all the 5-state machines, only 25 remained to be analyzed, to understand whether or not they run forever—the “default guess” being that they all do, but that proving it for some of them might require fearsomely difficult number theory. With their more formal and cautious approach, Tristan and Damien still count 1.5 million (!) holdout machines, but they hope to cut down that number extremely rapidly. If you’re feeling yourself successfully nerd-sniped, please join the quest and help them out!
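Since Lily's question turns on the Ackermann function, here is one common two-argument version of it in Python (a sketch; conventions vary between authors, so this isn't necessarily the survey's exact definition, and only the smallest diagonal values can be computed directly):

    import sys
    from functools import lru_cache

    sys.setrecursionlimit(1_000_000)

    @lru_cache(maxsize=None)
    def ack(m, n):
        # Ackermann-Peter function: total and computable, but it
        # outgrows every primitive recursive function.
        if m == 0:
            return n + 1
        if n == 0:
            return ack(m - 1, 1)
        return ack(m - 1, ack(m, n - 1))

    # The diagonal A(n) = ack(n, n) is what BB(n) gets compared to.
    for n in range(4):
        print(n, ack(n, n))  # 1, 3, 7, 61; A(4) is already astronomical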

    August 30, 2022

    Peter Rohde The Aardvark

    I’m pleased to announce my new comic series, “The Aardvark”.

    “The Aardvark”, by Dr Peter Rohde.

    The post The Aardvark appeared first on Peter Rohde.

    August 28, 2022

    John BaezJoint Mathematics Meetings 2023

    This is the biggest annual meeting of mathematicians:

Joint Mathematics Meetings 2023, Wednesday January 4 – Saturday January 7, 2023, John B. Hynes Veterans Memorial Convention Center, Boston Marriott Hotel, and Boston Sheraton Hotel, Boston, Massachusetts.

    As part of this huge meeting, the American Mathematical Society is having a special session on Applied Category Theory on Thursday January 5th.

    I hear there will be talks by Eugenia Cheng and Olivia Caramello!

    You can submit an abstract to give a talk. The deadline is Tuesday, September 13, 2022.

    It should be lots of fun. There will also be tons of talks on other subjects.

    However, there’s a registration fee which is pretty big unless you’re a student or, even better, a ‘nonmathematician guest’. (I assume you’re not allowed to give a talk if you’re a nonmathematician.)

    The special session is called SS 96 and it comes in two parts: one from 8 am to noon, and the other from 1 pm to 5 pm. It’s being run by these participants of this summer’s Mathematical Research Community on applied category theory:

    • Charlotte Aten, University of Denver
    • Pablo S. Ocal, University of California, Los Angeles
    • Layla H. M. Sorkatti, Southern Illinois University
    • Abigail Hickok, University of California, Los Angeles

    This Mathematical Research Community was run by Daniel Cicala, Simon Cho, Nina Otter, Valeria de Paiva and me, and I think we’re all coming to the special session. At least I am!

    August 26, 2022

    Matt von HippelWhy the Antipode Was Supposed to Be Useless

    A few weeks back, Quanta Magazine had an article about a new discovery in my field, called antipodal duality.

    Some background: I’m a theoretical physicist, and I work on finding better ways to make predictions in particle physics. Folks in my field make these predictions with formulas called “scattering amplitudes” that encode the probability that particles bounce, or scatter, in particular ways. One trick we’ve found is that these formulas can often be written as “words” in a kind of “alphabet”. If we know the alphabet, we can make our formulas much simpler, or even guess formulas we could never have calculated any other way.

    Quanta’s article describes how a few friends of mine (Lance Dixon, Ömer Gürdoğan, Andrew McLeod, and Matthias Wilhelm) noticed a weird pattern in two of these formulas, from two different calculations. If you flip the “words” around, back to front (an operation called the antipode), you go from a formula describing one collision of particles to a formula for totally different particles. Somehow, the two calculations are “dual”: two different-seeming descriptions that secretly mean the same thing.

    Quanta quoted me for their article, and I was (pleasantly) baffled. See, the antipode was supposed to be useless. The mathematicians told us it was something the math allows us to do, like you’re allowed to order pineapple on pizza. But just like pineapple on pizza, we couldn’t imagine a situation where we actually wanted to do it.

    What Quanta didn’t say was why we thought the antipode was useless. That’s a hard story to tell, one that wouldn’t fit in a piece like that.

    It fits here, though. So in the rest of this post, I’d like to explain why flipping around words is such a strange, seemingly useless thing to do. It’s strange because it swaps two things that in physics we thought should be independent: branch cuts and derivatives, or particles and symmetries.
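To make "flipping the words around" concrete: on the shuffle Hopf algebra of words, the antipode reverses each word and attaches a sign. Here's a toy sketch of just that reversal (my gloss; for actual amplitudes the full story also involves working modulo product terms):

    def antipode(word):
        # Antipode on the shuffle Hopf algebra of words: reverse the
        # letters and attach the sign (-1)**len(word).
        return (-1) ** len(word), word[::-1]

    # A three-letter "word" in some symbol alphabet:
    print(antipode(("a", "b", "c")))  # (-1, ('c', 'b', 'a'))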

    Let’s start with the first things in each pair: branch cuts, and particles.

    The first few letters of our “word” tell us something mathematical, and they tell us something physical. Mathematically, they tell us ways that our formula can change suddenly, and discontinuously.

Take a logarithm, the inverse of e^x. You're probably used to plugging in positive numbers and getting out something reasonable that changes in a smooth and regular way: after all, e^x is always positive, right? But in mathematics, you don't have to just use positive numbers. You can use negative numbers. Even more interestingly, you can use complex numbers. And if you take the logarithm of a complex number, and look at the imaginary part, it looks like this:

    Mostly, this complex logarithm still seems to be doing what it’s supposed to, changing in a nice slow way. But there is a weird “cut” in the graph for negative numbers: a sudden jump, from \pi to -\pi. That jump is called a “branch cut”.
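You can see the jump numerically by sneaking up on the negative real axis from above and below (a quick check of the standard branch, not something from the original post):

    import numpy as np

    # Im(log z) jumps from +pi to -pi across the negative real axis.
    for eps in (1e-3, 1e-6, 1e-9):
        above = np.log(complex(-1, +eps)).imag  # approaches +pi
        below = np.log(complex(-1, -eps)).imag  # approaches -pi
        print(eps, above, below)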

    As physicists, we usually don’t like our formulas to make sudden changes. A change like this is an infinitely fast jump, and we don’t like infinities much either. But we do have one good use for a formula like this, because sometimes our formulas do change suddenly: when we have enough energy to make a new particle.

    Imagine colliding two protons together, like at the LHC. Colliding particles doesn’t just break the protons into pieces: due to Einstein’s famous E=mc^2, it can create new particles as well. But to create a new particle, you need enough energy: mc^2 worth of energy. So as you dial up the energy of your protons, you’ll notice a sudden change: you couldn’t create, say, a Higgs boson, and now you can. Our formulas represent some of those kinds of sudden changes with branch cuts.

    So the beginning of our “words” represent branch cuts, and particles. The end represents derivatives and symmetries.

    Derivatives come from the land of calculus, a place spooky to those with traumatic math class memories. Derivatives shouldn’t be so spooky though. They’re just ways we measure change. If we have a formula that is smoothly changing as we change some input, we can describe that change with a derivative.

    The ending of our “words” tell us what happens when we take a derivative. They tell us which ways our formulas can smoothly change, and what happens when they do.

    In doing so, they tell us about something some physicists make sound spooky, called symmetries. Symmetries are changes we can make that don’t really change what’s important. For example, you could imagine lifting up the entire Large Hadron Collider and (carefully!) carrying it across the ocean, from France to the US. We’d expect that, once all the scared scientists return and turn it back on, it would start getting exactly the same results. Physics has “translation symmetry”: you can move, or “translate” an experiment, and the important stuff stays the same.

    These symmetries are closely connected to derivatives. If changing something doesn’t change anything important, that should be reflected in our formulas: they shouldn’t change either, so their derivatives should be zero. If instead the symmetry isn’t quite true, if it’s what we call “broken”, then by knowing how it was “broken” we know what the derivative should be.

    So branch cuts tell us about particles, derivatives tell us about symmetries. The weird thing about the antipode, the un-physical bizarre thing, is that it swaps them. It makes the particles of one calculation determine the symmetries of another.

    (And lest you’ve heard about particles with symmetries, like gluons and SU(3)…this is a different kind of thing. I don’t have enough room to explain why here, but it’s completely unrelated.)

    Why the heck does this duality exist?

    A commenter on the last post asked me to speculate. I said there that I have no clue, and that’s most of the answer.

    If I had to speculate, though, my answer might be disappointing.

Most of the things in physics we call "dualities" have fairly deep physical meanings, linked to twisting spacetime in complicated ways. AdS/CFT isn't fully explained, but it seems to be related to something called the holographic principle, the idea that gravity ties together the inside of space with the boundary around it. T duality, an older concept in string theory, is explained: it's a consequence of how strings "see" the world in terms of things to wrap around and things to spin around. In my field, one of our favorite dualities links back to this as well: the amplitude-Wilson loop duality is linked to a fermionic T-duality.

The antipode doesn't twist spacetime, it twists the mathematics. And it may be that it matters only because the mathematics is so constrained that it's forced to happen.

    The trick that Lance Dixon and co. used to discover antipodal duality is the same trick I used with Lance to calculate complicated scattering amplitudes. It relies on taking a general guess of words in the right “alphabet”, and constraining it: using mathematical and physical principles it must obey and throwing out every illegal answer until there’s only one answer left.

Currently, there are some hints that the principles used for the different calculations linked by antipodal duality are "antipodal mirrors" of each other: that different principles have the same implication when the duality "flips" them around. If so, then it could be that this duality is in some sense just a coincidence: not a coincidence limited to a few calculations, but a coincidence limited to a few principles. Thought of in this way, it might not tell us a lot about other situations; it might not really be "deep".

    Of course, I could be wrong about this. It could be much more general, could mean much more. But in that context, I really have no clue what to speculate. The antipode is weird: it links things that really should not be physically linked. We’ll have to see what that actually means.

    August 24, 2022

    Doug NatelsonA couple of quantum info papers

Surfacing from being submerged by other responsibilities (including jury duty today), I wanted to point out two papers.

    This preprint (based on a talk at the Solvay Conference - the 2022 one, not the 1927 one) from John Preskill provides a nice overview of quantum information, at a very accessible level for non-experts.  It’s nice seeing this perspective.

This paper is much more technical and nitty-gritty. I've written before about my curiosity concerning how good the performance of quantum computing devices would be if they were made with the process control and precision of modern state-of-the-art semiconductor fabs. Here is a look at superconducting qubits made with a CMOS-compatible process, using e-beam lithography and other steps developed for 300 mm wafer fabrication.

    More soon….


    August 23, 2022

    Peter Rohde Climbing the Zinalrothorn (4,221m)

    Full video of the Zinalrothorn (4,221m) climb, Zermatt, Switzerland.

    The post Climbing the Zinalrothorn (4,221m) appeared first on Peter Rohde.

    August 18, 2022

    Jordan EllenbergSky Carp

The former Beloit Snappers, the high-A farm team of the Miami Marlins, are now the Beloit Sky Carp, and they play in a brand new park, ABC Supply Stadium, naming rights courtesy of Diane Hendricks's massive Beloit-based roofing supply company. Why Sky Carp? Apparently it is a slang term for a goose. Not in my circles. I assumed the strange name was to provide a kind of uniformity among the Marlins' farm teams, which are all named after sea creatures of some kind. But, as CJ points out, a snapper is a fish too!

I really like minor league ball, even though on the night we saw the Sky Carp edge the West Michigan Whitecaps, there weren't really any big-time prospects on the field. Maybe the biggest was Trei Cruz, named Trei because he's the son of Jose Cruz Jr., who was the son of Jose Cruz Sr. (Should the oldest Cruz have been a Hall of Famer? I'm not sure, but he deserved better than 0.4% of the vote and an early exit on his first ballot.) The youngest Cruz made two extremely nifty, almost to the point of showoffy, plays at short and drew a walk in a tough spot. I think there was an attempted steal of home in this game but I was getting a bison-elk dog and I only saw the end of the play.

    A fun and cheap night out. Beloit is only an hour’s drive from Madison, and the stadium is a short walk to the historic downtown. Cities like Beloit work hard to give people reasons to travel there and the Sky Carp are a pretty good reason.

    August 15, 2022

    John PreskillRocks that roll

    In Terry Pratchett’s fantasy novel Soul Music, rock ’n roll arrives in Ankh-Morpork. Ankh-Morpork resembles the London of yesteryear—teeming with heroes and cutthroats, palaces and squalor—but also houses vampires, golems, wizards, and a sentient suitcase. Against this backdrop, a young harpist stumbles upon a mysterious guitar. He forms a band with a dwarf and with a troll who plays tuned rocks, after which the trio calls its style “Music with Rocks In.” The rest of the story consists of satire, drums, and rocks that roll. 

    The topic of rolling rocks sounds like it should elicit more yawns than an Elvis concert elicited screams. But rocks’ rolling helped recent University of Maryland physics PhD student Zackery Benson win a National Research Council Fellowship. He and his advisor, Wolfgang Losert, converted me into a fan of granular flow.

    What I’ve been studying recently. Kind of.

    Grains make up materials throughout the galaxy, such as the substance of avalanches. Many granular materials undergo repeated forcing by their environments. For instance, the grains that form an asteroid suffer bombardment from particles flying through outer space. The gravel beneath train tracks is compressed whenever a train passes. 

    Often, a pattern characterizes the forces in a granular system’s environment. For instance, trains in a particular weight class may traverse some patch of gravel, and the trains may arrive with a particular frequency. Some granular systems come to encode information about those patterns in their microscopic configurations and large-scale properties. So granular flow—little rocks that roll—can impact materials science, engineering, geophysics, and thermodynamics.

    Granular flow sounds so elementary, you might expect us to have known everything about it since long before the Beatles’ time. But we didn’t even know until recently how to measure rolling in granular flows. 

Envision a grain as a tiny sphere, like a globe of the Earth. Scientists focused mostly on how far grains are translated through space in a flow, analogously to how far a globe travels across a desktop if flicked. Recently, scientists measured how far a grain rotates about one axis, like a globe fixed in a frame. Sans frame, though, a globe can spin about more than one axis—about three independent axes. Zack performed the first measurement of all the rotations and translations of all the particles in a granular flow.

    Each grain was an acrylic bead about as wide as my pinky nail. Two holes were drilled into each bead, forming an X, for reasons I’ll explain. 

    Image credit: Benson et al., Phys. Rev. Lett. 129, 048001 (2022).

    Zack dumped over 10,000 beads into a rectangular container. Then, he poured in a fluid that filled the spaces between the grains. Placing a weight atop the grains, he exerted a constant pressure on them. Zack would push one of the container’s walls inward, compressing the grains similarly to how a train compresses gravel. Then, he’d decompress the beads. He repeated this compression cycle many times.

    Image credit: Benson et al., Phys. Rev. E 103, 062906 (2021).

    Each cycle consisted of many steps: Zack would compress the beads a tiny amount, pause, snap pictures, and then compress a tiny amount more. During each pause, the camera activated a fluorescent dye in the fluid, which looked clear in the photographs. Lacking the fluorescent dye, the beads showed up as dark patches. Clear X’s cut through the dark patches, as dye filled the cavities drilled into the beads. From the X’s, Zack inferred every grain’s orientation. He inferred how every grain rotated by comparing the orientation in one snapshot with the orientation in the next snapshot. 

    Image credit: Benson et al., Phys. Rev. Lett. 129, 048001 (2022).

    Wolfgang’s lab had been trying for fifteen years to measure all the motions in a granular flow. The feat required experimental and computational skill. I appreciated the chance to play a minor role, in analyzing the data. Physical Review Letters published our paper last month.

    From Zack’s measurements, we learned about the unique roles played by rotations in granular flow. For instance, rotations dominate the motion in a granular system’s bulk, far from the container’s walls. Importantly, the bulk dissipates the most energy. Also, whereas translations are reversible—however far grains shift while compressed, they tend to shift oppositely while decompressed—rotations are not. Such irreversibility can contribute to materials’ aging.

    In Soul Music, the spirit of rock ’n roll—conceived of as a force in its own right—offers the guitarist the opportunity to never age. He can live fast, die young, and enjoy immortality as a legend, for his guitar comes from a dusty little shop not entirely of Ankh-Morpork’s world. Such shops deal in fate and fortune, the author maintains. Doing so, he takes a dig at the River Ankh, which flows through the city of Ankh-Morpork. The Ankh’s waters hold so much garbage, excrement, midnight victims, and other muck that they scarcely count as waters:

    And there was even an Ankh-Morpork legend, wasn’t there, about some old drum [ . . . ] that was supposed to bang itself if an enemy fleet was seen sailing up the Ankh? The legend had died out in recent centuries, partly because this was the Age of Reason and also because no enemy fleet could sail up the Ankh without a gang of men with shovels going in front.

    Such a drum would qualify as magic easily, but don’t underestimate the sludge. As a granular-flow system, it’s more incredible than you might expect.

    August 07, 2022

    Jordan EllenbergMalcolm Gladwell doesn’t understand why anyone would play tennis

    Deeply weird article by Malcolm Gladwell about his plan to save high school sports, which he sees as breaking down under the pressure of premature specialization and elitism.

    To the extent that we cater to the 90th percentile, we make a sport psychologically forbidding to the 50th percentile. I mean, if your high school has four tennis players who have been honing their topspin forehands and kick-serves for 10 years, why would someone who grew up playing with their siblings on public courts on the weekends want to try out for the team?

    Well, CJ plays high school tennis. He plays junior varsity. He didn’t have to try out. There are elite players on his team, including the fourth-best player in Wisconsin. So why does he play?

    Hold that thought. Gladwell makes a similar point about the sport he himself competes in, cross country.

    If you were a mediocre runner, would you go out for the Corning cross country team? I doubt it. You couldn’t keep up in practice. And you wouldn’t matter. Corning sent eight runners to the state championships, and its eighth-place finisher, a young man named Ryan, was over 2 minutes slower than its best runner. Was anyone even watching when Ryan crossed the line? A sport that focuses its reward structure entirely on the top five finishers limits attention to those top five finishers. By the time Ryan came across the line, the championship was already decided.

    But by this point in the article, Gladwell has already explained why you’d go out for the cross country team!

    I won’t belabor the obvious about cross country. It is insanely fun. Races take place during the glory days of fall. The courses are typically in beautiful parts of the country. Cross country meets don’t feel like sporting events; they feel like outdoor festivals—except everyone is fit, as opposed to high. Everyone should be so lucky as to run cross country.

    But for Gladwell, this is somehow not enough. You have to matter. But why? Mattering is overrated. Kids play sports because sports, as Gladwell says, are fun to play. CJ’s games don’t matter to whether his high school wins the state championship. But they matter to him! The coaches make an effort to match players against kids from the opposing team with roughly similar skill and that leads to good games, games you care about while you’re playing them.

    Gladwell proposes a weird Rawlsian scheme where your cross country team’s performance is heavily dependent on how well your slowest runners do. OK, you could do that, and then it would all come down to Ryan. But is that what Ryan wants? I don’t think CJ wishes the varsity team’s fortunes depended on whether he could land the jump serve he’s just starting to learn. That sounds incredibly stressful. I think it’s fine not to matter, and if we teach kids there’s no point in playing unless you’re part of the final score, we’re teaching them something kind of bad about sports.

    August 06, 2022

    Doug NatelsonBrief items - talks, CHIPS, and a little reading

    An administrative role I've taken on for the last month that will run through September has been eating quite a bit of my time, but I wanted to point out some interesting items:

    July 24, 2022

    Jacques Distler HL ≠ HS

    There’s a nice new paper by Kang et al, who point out something about class-S theories that should be well-known, but isn’t.

In the (untwisted) theories of class-S, the Hall-Littlewood index, at genus-0, coincides with the Hilbert Series of the Higgs branch. The Hilbert series counts the $\hat{B}_R$ operators that parametrize the Higgs branch (each contributes $\tau^{2R}$ to the index). The Hall-Littlewood index also includes contributions from $D_{R(0,j)}$ operators (which contribute $(-1)^{2j+1}\tau^{2(1+R+j)}$ to the index). But, for the untwisted theories of class-S, there is a folk-theorem that there are no $D_{R(0,j)}$ operators at genus-0, and so the Hilbert series and Hall-Littlewood index agree.

For genus $g>0$, the gauge symmetry1 cannot be completely Higgsed on the Higgs branch of the theory. For the theory of type $J=\text{ADE}$, there's a $U(1)^{\text{rank}(J)g}$ unbroken at a generic point on the Higgs branch2. Correspondingly, the SCFT contains $D_{R(0,0)}$ multiplets which, when you move out onto the Higgs branch and flow to the IR, flow to the $D_{0(0,0)}$ multiplets3 of the free theory.

What Kang et al point out is that the same is true at genus-0, when you include enough $\mathbb{Z}_2$-twisted punctures. They do this by explicitly calculating the Hall-Littlewood index in a series of examples.

    But it’s nice to have a class of examples where that hard work is unnecessary.

Consider the $J=D_N$ theory. The punctures in the $\mathbb{Z}_2$-twisted sector are labeled by nilpotent orbits in $\mathfrak{g}=\mathfrak{sp}(N-1)$. The twisted full puncture is $[1^{2(N-1)}]$ and the twisted simple puncture is $[2(N-1)]$. Consider a 4-punctured sphere with two twisted full punctures and two twisted simple punctures. The manifest $\mathfrak{sp}(N-1)_{2N}\times\mathfrak{sp}(N-1)_{2N}$ symmetry is enhanced to $\mathfrak{sp}(2N-2)_{2N}$. In a certain S-duality frame, this is a Lagrangian field theory: $SO(2N)$ with $2(N-1)$ hypermultiplets in the vector representation. That matter content is insufficient to Higgs the $SO(2N)$ completely. At a generic point of the Higgs branch, there's an $SO(2)=U(1)$ unbroken.

We can construct the corresponding $D_{R(0,0)}$ operator, where $R=N-1$. Organize the scalars in the hypermultiplets into complex scalars $\phi^i_a,\tilde{\phi}^i_{\tilde{a}}$, where $i=1,\dots,2N$ is an $SO(2N)$ vector index, and $a,\tilde{a}=1,\dots,2(N-1)$ span the $4(N-1)$-dimensional defining representation of $Sp(2N-2)$. Let $\Phi^{ij}=-\Phi^{ji}$ be the complex scalar in the adjoint of $SO(2N)$. Then the superconformal primary of the $D_{R(0,j)}$ multiplet with $R=N-1,\ j=0$ is

(1)  $\epsilon_{i_1 i_2\dots i_{2N}}\phi^{i_1}_{a_1}\phi^{i_2}_{a_2}\cdots\phi^{i_{2N-2}}_{a_{2N-2}}\Phi^{i_{2N-1}i_{2N}}$

which we see is in the traceless, completely anti-symmetric rank-$(2N-2)$ tensor representation of $Sp(2N-2)$ (the representation with Dynkin labels $(0,\dots,0,1)$). This has $\Delta=1+2R+j=2N-1$ and contributes $-\tau^{2N}\chi(0,\dots,0,1)$ to the Hall-Littlewood index.

The above statement takes a little bit of work. At zero gauge coupling, the formula $\Delta=2N-1$ obviously holds. We need to worry that, at finite gauge coupling, this operator recombines with other operators to form a long superconformal multiplet (whose conformal dimension is not fixed). The relevant recombination formula is $A^{2R+1}_{R-1,0(0,0)} = \hat{C}_{R-1(0,0)}\oplus D_{R(0,0)}\oplus \overline{D}_{R(0,0)}\oplus \hat{B}_{R+1}$, where we denote a long multiplet by $A^\Delta_{R,r(j_1,j_2)}$. One can check that the free theory has no candidate $\hat{B}_N$ operator transforming in the appropriate representation of the flavour symmetry. So (1) necessarily remains in a short superconformal multiplet and $\Delta$ is independent of the gauge coupling.

Similarly, you can replace one of the twisted full punctures with $[2,1^{2N-4}]$. The resulting SCFT has a Lagrangian description as $SO(2N-1)+(2N-3)(V)+(N-1)(1)$.

[Figure: the corresponding 4-punctured sphere, with twisted simple punctures $[2(N-1)]$ at $z_1$ and $z_2$, the twisted full puncture $[1^{2(N-1)}]$ at $z_3$, and $[2,1^{2(N-2)}]$ at $z_4$; the two fixtures, carrying $(N-2)(V)$ and $(N-1)(V)+(N-1)(1)$ of hypermultiplet matter, are joined by an $SO(2N-1)$ gauge group.]

Again, this matter content leaves an unbroken $U(1)$ at a generic point on the Higgs branch. The $D_{R(0,0)}$ multiplet (for $R=(2N-3)/2$) in the traceless rank-$(2N-3)$ completely anti-symmetric tensor representation of $Sp(2N-3)$, constructed by the analogue of (1), has $\Delta=2N-2$ and contributes $-\tau^{2N-1}\chi(0,\dots,0,1)$ to the Hall-Littlewood index.
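As a sanity check on the quantum numbers quoted above, the $D_{R(0,j)}$ bookkeeping ($\Delta = 1+2R+j$, with index contribution $(-1)^{2j+1}\tau^{2(1+R+j)}$) can be verified symbolically; this is just the arithmetic from the text, transcribed:

    import sympy as sp

    N, R, j = sp.symbols("N R j")

    Delta = 1 + 2*R + j       # dimension of the D_{R(0,j)} primary
    exponent = 2*(1 + R + j)  # power of tau in its index contribution

    # First example: R = N - 1, j = 0 -> Delta = 2N - 1, tau^(2N)
    print(sp.expand(Delta.subs({R: N - 1, j: 0})),
          sp.expand(exponent.subs({R: N - 1, j: 0})))

    # Second example: R = (2N - 3)/2, j = 0 -> Delta = 2N - 2, tau^(2N - 1)
    Rval = sp.Rational(1, 2) * (2*N - 3)
    print(sp.expand(Delta.subs({R: Rval, j: 0})),
          sp.expand(exponent.subs({R: Rval, j: 0})))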

These examples were rather special, in that they had an S-duality frame in which they were Lagrangian field theories. Generically that won't be the case. But there's no reason to expect that theories with an S-duality frame in which they are Lagrangian should be distinguished in this regard. And, indeed, Kang et al find that the presence of $D_{R(0,0)}$ operators in the spectrum persists in examples with no Lagrangian field theory realization.


1 For genus $g$ and $n$ punctures, the class-S theory can be presented (in multiple ways) as a "gauge theory" with $3g-3+n$ simple factors in the gauge group. This statement has to be modified slightly in the presence of "atypical" punctures in the twisted theory.

    2 Here, I’m taking “Higgs branch” to mean the branch on which the gauge symmetry is maximally-Higgsed.

3 The superconformal primary of the $D_{0(0,0)}$ multiplet is the complex scalar in the free $\mathcal{N}=2$ vector multiplet. Its superconformal descendents include the photino and the imaginary-self-dual part of the field strength.

    July 21, 2022

    Terence TaoUsing the Smith normal form to manipulate lattice subgroups and closed torus subgroups

Let {M_{n \times m}({\bf Z})} denote the space of {n \times m} matrices with integer entries, and let {GL_n({\bf Z})} be the group of invertible {n \times n} matrices with integer entries. The Smith normal form takes an arbitrary matrix {A \in M_{n \times m}({\bf Z})} and factorises it as {A = UDV}, where {U \in GL_n({\bf Z})}, {V \in GL_m({\bf Z})}, and {D} is a rectangular diagonal matrix, by which we mean that the principal {\min(n,m) \times \min(n,m)} minor is diagonal, with all other entries zero. Furthermore the diagonal entries of {D} are {\alpha_1,\dots,\alpha_k,0,\dots,0} for some {0 \leq k \leq \min(n,m)} (which is also the rank of {A}) with the numbers {\alpha_1,\dots,\alpha_k} (known as the invariant factors) principal divisors with {\alpha_1 | \dots | \alpha_k}. The invariant factors are uniquely determined; but there can be some freedom to modify the invertible matrices {U,V}. The Smith normal form can be computed easily; for instance, in SAGE, it can be computed by calling the {{\tt smith\_form()}} function from the matrix class. The Smith normal form is also available for other principal ideal domains than the integers, but we will only be focused on the integer case here. For the purposes of this post, we will view the Smith normal form as a primitive operation on matrices that can be invoked as a "black box".
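For instance, the diagonal factor {D} can also be computed in Python with sympy (a sketch; unlike SAGE's {{\tt smith\_form()}}, sympy returns only {D}, not the unimodular factors {U,V}):

    from sympy import Matrix, ZZ
    from sympy.matrices.normalforms import smith_normal_form

    # The matrix from Example 2 below.
    A = Matrix([[1, 2, 3],
                [3, -2, 1],
                [1, 2, 3]])

    # The nonzero diagonal entries of D are the invariant factors --
    # here 1 and 8 (possibly up to sign, which sympy may not normalize).
    print(smith_normal_form(A, domain=ZZ))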

    In this post I would like to record how to use the Smith normal form to computationally manipulate two closely related classes of objects:

    • Subgroups {\Gamma \leq {\bf Z}^d} of a standard lattice {{\bf Z}^d} (or lattice subgroups for short);
    • Closed subgroups {H \leq ({\bf R}/{\bf Z})^d} of a standard torus {({\bf R}/{\bf Z})^d} (or closed torus subgroups for short).
    (This arose for me due to the need to actually perform (with a collaborator) some numerical calculations with a number of lattice subgroups and closed torus subgroups.) It’s possible that all of these operations are already encoded in some existing object classes in a computational algebra package; I would be interested to know of such packages and classes for lattice subgroups or closed torus subgroups in the comments.

    The above two classes of objects are isomorphic to each other by Pontryagin duality: if {\Gamma \leq {\bf Z}^d} is a lattice subgroup, then the orthogonal complement

    \displaystyle  \Gamma^\perp := \{ x \in ({\bf R}/{\bf Z})^d: \langle x, \xi \rangle = 0 \forall \xi \in \Gamma \}

    is a closed torus subgroup (with {\langle,\rangle: ({\bf R}/{\bf Z})^d \times {\bf Z}^d \rightarrow {\bf R}/{\bf Z}} the usual Fourier pairing); conversely, if {H \leq ({\bf R}/{\bf Z})^d} is a closed torus subgroup, then

    \displaystyle  H^\perp := \{ \xi \in {\bf Z}^d: \langle x, \xi \rangle = 0 \forall x \in H \}

    is a lattice subgroup. These two operations invert each other: {(\Gamma^\perp)^\perp = \Gamma} and {(H^\perp)^\perp = H}.

    Example 1 The orthogonal complement of the lattice subgroup

    \displaystyle  2{\bf Z} \times \{0\} = \{ (2n,0): n \in {\bf Z}\} \leq {\bf Z}^2

    is the closed torus subgroup

    \displaystyle  (\frac{1}{2}{\bf Z}/{\bf Z}) \times ({\bf R}/{\bf Z}) = \{ (x,y) \in ({\bf R}/{\bf Z})^2: 2x=0\} \leq ({\bf R}/{\bf Z})^2

    and conversely.

    Let us focus first on lattice subgroups {\Gamma \leq {\bf Z}^d}. As all such subgroups are finitely generated abelian groups, one way to describe a lattice subgroup is to specify a set {v_1,\dots,v_n \in \Gamma} of generators of {\Gamma}. Equivalently, we have

    \displaystyle  \Gamma = A {\bf Z}^n

    where {A \in M_{d \times n}({\bf Z})} is the matrix whose columns are {v_1,\dots,v_n}. Applying the Smith normal form {A = UDV}, we conclude that

    \displaystyle  \Gamma = UDV{\bf Z}^n = UD{\bf Z}^n

    so in particular {\Gamma} is isomorphic (with respect to the automorphism group {GL_d({\bf Z})} of {{\bf Z}^d}) to {D{\bf Z}^n}. In particular, we see that {\Gamma} is a free abelian group of rank {k}, where {k} is the rank of {D} (or {A}). This representation also allows one to trim the representation {A {\bf Z}^n} down to {U D'{\bf Z}^k}, where {D' \in M_{d \times k}} is the matrix formed from the {k} left columns of {D}; the columns of {UD'} then give a basis for {\Gamma}. Let us call this a trimmed representation of {A{\bf Z}^n}.

    Example 2 Let {\Gamma \leq {\bf Z}^3} be the lattice subgroup generated by {(1,3,1)}, {(2,-2,2)}, {(3,1,3)}, thus {\Gamma = A {\bf Z}^3} with {A = \begin{pmatrix} 1 & 2 & 3 \\ 3 & -2 & 1 \\ 1 & 2 & 3 \end{pmatrix}}. A Smith normal form for {A} is given by

    \displaystyle  A = \begin{pmatrix} 3 & 1 & 1 \\ 1 & 0 & 0 \\ 3 & 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 8 & 0 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} 3 & -2 & 1 \\ -1 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}

    so {A{\bf Z}^3} is a rank two lattice with a basis of {(3,1,3) \times 1 = (3,1,3)} and {(1,0,1) \times 8 = (8,0,8)} (and the invariant factors are {1} and {8}). The trimmed representation is

    \displaystyle  A {\bf Z}^3 = \begin{pmatrix} 3 & 1 & 1 \\ 1 & 0 & 0 \\ 3 & 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 8 \\ 0 & 0 \end{pmatrix} {\bf Z}^2 = \begin{pmatrix} 3 & 8 \\ 1 & 0 \\ 3 & 8 \end{pmatrix} {\bf Z}^2.

    There are other Smith normal forms for {A}, giving slightly different representations here, but the rank and invariant factors will always be the same.
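    For readers who like to check such computations mechanically, here is a minimal sketch (again assuming SymPy) that verifies the factorisation of Example 2 and recovers the trimmed basis:

    from sympy import Matrix

    A = Matrix([[1, 2, 3], [3, -2, 1], [1, 2, 3]])
    U = Matrix([[3, 1, 1], [1, 0, 0], [3, 1, 0]])
    D = Matrix([[1, 0, 0], [0, 8, 0], [0, 0, 0]])
    V = Matrix([[3, -2, 1], [-1, 1, 0], [1, 0, 0]])

    assert U * D * V == A                            # the Smith normal form factorisation
    assert abs(U.det()) == 1 and abs(V.det()) == 1   # U, V are unimodular
    Dtrim = D[:, :2]                                 # keep the k = 2 left columns of D
    print(U * Dtrim)                                 # columns (3,1,3) and (8,0,8): a basis of A*Z^3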

    By the above discussion we can represent a lattice subgroup {\Gamma \leq {\bf Z}^d} by a matrix {A \in M_{d \times n}({\bf Z})} for some {n}; this representation is not unique, but we will address this issue shortly. For now, we focus on the question of how to use such data representations of subgroups to perform basic operations on lattice subgroups. There are some operations that are very easy to perform using this data representation:

    • (Applying a linear transformation) if {T \in M_{d' \times d}({\bf Z})}, so that {T} is also a linear transformation from {{\bf Z}^d} to {{\bf Z}^{d'}}, then {T} maps lattice subgroups to lattice subgroups, and clearly maps the lattice subgroup {A{\bf Z}^n} to {(TA){\bf Z}^n} for any {A \in M_{d \times n}({\bf Z})}.
    • (Sum) Given two lattice subgroups {A_1 {\bf Z}^{n_1}, A_2 {\bf Z}^{n_2} \leq {\bf Z}^d} for some {A_1 \in M_{d \times n_1}({\bf Z})}, {A_2 \in M_{d \times n_2}({\bf Z})}, the sum {A_1 {\bf Z}^{n_1} + A_2 {\bf Z}^{n_2}} is equal to the lattice subgroup {A {\bf Z}^{n_1+n_2}}, where {A = (A_1 A_2) \in M_{d \times n_1 + n_2}({\bf Z})} is the matrix formed by concatenating the columns of {A_1} with the columns of {A_2}.
    • (Direct sum) Given two lattice subgroups {A_1 {\bf Z}^{n_1} \leq {\bf Z}^{d_1}}, {A_2 {\bf Z}^{n_2} \leq {\bf Z}^{d_2}}, the direct sum {A_1 {\bf Z}^{n_1} \times A_2 {\bf Z}^{n_2}} is equal to the lattice subgroup {A {\bf Z}^{n_1+n_2}}, where {A = \begin{pmatrix} A_1 & 0 \\ 0 & A_2 \end{pmatrix} \in M_{d_1+d_2 \times n_1 + n_2}({\bf Z})} is the block matrix formed by taking the direct sum of {A_1} and {A_2}. (All three operations are sketched in code right after this list.)
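    A minimal sketch of these three operations, with lattice subgroups represented by the integer matrices whose columns generate them (the helper names here are invented for this illustration):

    from sympy import Matrix, zeros

    def apply_map(T, A):
        """Image T(A*Z^n) = (T*A)*Z^n of a lattice subgroup under T."""
        return T * A

    def lattice_sum(A1, A2):
        """A1*Z^{n1} + A2*Z^{n2}: concatenate the generating columns."""
        return A1.row_join(A2)

    def direct_sum(A1, A2):
        """A1*Z^{n1} x A2*Z^{n2}: block-diagonal concatenation."""
        top = A1.row_join(zeros(A1.rows, A2.cols))
        bottom = zeros(A2.rows, A1.cols).row_join(A2)
        return top.col_join(bottom)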

    One can also use Smith normal form to detect when one lattice subgroup {B {\bf Z}^m \leq {\bf Z}^d} is a subgroup of another lattice subgroup {A {\bf Z}^n \leq {\bf Z}^d}. Using Smith normal form factorization {A = U D V}, with invariant factors {\alpha_1|\dots|\alpha_k}, the relation {B {\bf Z}^m \leq A {\bf Z}^n} is equivalent after some manipulation to

    \displaystyle  U^{-1} B {\bf Z}^m \leq D {\bf Z}^n.

    The group {U^{-1} B {\bf Z}^m} is generated by the columns of {U^{-1} B}, so this gives a test to determine whether {B {\bf Z}^{m} \leq A {\bf Z}^{n}}: the {i^{th}} row of {U^{-1} B} must be divisible by {\alpha_i} for {i=1,\dots,k}, and all other rows must vanish.

    Example 3 To test whether the lattice subgroup {\Gamma'} generated by {(1,1,1)} and {(0,2,0)} is contained in the lattice subgroup {\Gamma = A{\bf Z}^3} from Example 2, we write {\Gamma'} as {B {\bf Z}^2} with {B = \begin{pmatrix} 1 & 0 \\ 1 & 2 \\ 1 & 0\end{pmatrix}}, and observe that

    \displaystyle  U^{-1} B = \begin{pmatrix} 1 & 2 \\ -2 & -6 \\ 0 & 0 \end{pmatrix}.

    The first row is of course divisible by {1}, and the last row vanishes as required, but the second row is not divisible by {8}, so {\Gamma'} is not contained in {\Gamma} (but {4\Gamma'} is); a similar computation also verifies that {\Gamma} is conversely contained in {\Gamma'}.
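    The divisibility test of Example 3 can be sketched in code as follows (reusing {U} and the invariant factors {1, 8} from the Smith normal form in Example 2):

    from sympy import Matrix

    U = Matrix([[3, 1, 1], [1, 0, 0], [3, 1, 0]])
    B = Matrix([[1, 0], [1, 2], [1, 0]])   # generators (1,1,1), (0,2,0) as columns
    alphas, k = [1, 8], 2

    M = U.inv() * B                        # exact inverse; entries stay integers here
    contained = all(M[i, j] % alphas[i] == 0 for i in range(k) for j in range(B.cols)) \
        and all(M[i, j] == 0 for i in range(k, M.rows) for j in range(B.cols))
    print(M)          # [[1, 2], [-2, -6], [0, 0]]
    print(contained)  # False: the second row is not divisible by 8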

    One can now test whether {B{\bf Z}^m = A{\bf Z}^n} by testing whether {B{\bf Z}^m \leq A{\bf Z}^n} and {A{\bf Z}^n \leq B{\bf Z}^m} simultaneously hold (there may be more efficient ways to do this, but this is already computationally manageable in many applications). This in principle addresses the issue of non-uniqueness of representation of a subgroup {\Gamma} in the form {A{\bf Z}^n}.

    Next, we consider the question of representing the intersection {A{\bf Z}^n \cap B{\bf Z}^m} of two subgroups {A{\bf Z}^n, B{\bf Z}^m \leq {\bf Z}^d} in the form {C{\bf Z}^p} for some {p} and {C \in M_{d \times p}({\bf Z})}. We can write

    \displaystyle  A{\bf Z}^n \cap B{\bf Z}^m = \{ Ax: Ax = By \hbox{ for some } x \in {\bf Z}^n, y \in {\bf Z}^m \}

    \displaystyle  = (A 0) \{ z \in {\bf Z}^{n+m}: (A B) z = 0 \}

    where {(A B) \in M_{d \times n+m}({\bf Z})} is the matrix formed by concatenating {A} and {B}, and similarly for {(A 0) \in M_{d \times n+m}({\bf Z})} (here we use the change of variable {z = \begin{pmatrix} x \\ -y \end{pmatrix}}). We apply the Smith normal form to {(A B)} to write

    \displaystyle  (A B) = U D V

    where {U \in GL_d({\bf Z})}, {D \in M_{d \times n+m}({\bf Z})}, {V \in GL_{n+m}({\bf Z})} with {D} of rank {k}. We can then write

    \displaystyle  \{ z \in {\bf Z}^{n+m}: (A B) z = 0 \} = V^{-1} \{ w \in {\bf Z}^{n+m}: Dw = 0 \}

    \displaystyle  = V^{-1} (\{0\}^k \times {\bf Z}^{n+m-k})

    (making the change of variables {w = Vz}). Thus we can write {A{\bf Z}^n \cap B{\bf Z}^m = C {\bf Z}^{n+m-k}} where {C \in M_{d \times n+m-k}({\bf Z})} consists of the right {n+m-k} columns of {(A 0) V^{-1} \in M_{d \times n+m}({\bf Z})}.

    Example 4 With the lattice {A{\bf Z}^3} from Example 2, we shall compute the intersection of {A{\bf Z}^3} with the subgroup {{\bf Z}^2 \times \{0\}}, which one can also write as {B{\bf Z}^2} with {B = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{pmatrix}}. We obtain a Smith normal form

    \displaystyle  (A B) = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} 3 & -2 & 1 & 0 & 1 \\ 1 & 2 & 3 & 1 & 0 \\ 1 & 2 & 3 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 \end{pmatrix}

    so {k=3}. We have

    \displaystyle  (A 0) V^{-1} = \begin{pmatrix} 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 3 & 0 & -8 \\ 0 & 0 & 1 & 0 & 0 \end{pmatrix}

    and so we can write {A{\bf Z}^3 \cap B{\bf Z}^2 = C{\bf Z}^2} where

    \displaystyle  C = \begin{pmatrix} 0 & 0 \\ 0 & -8 \\ 0 & 0 \end{pmatrix}.

    One can trim this representation if desired, for instance by deleting the first column of {C} (and replacing {{\bf Z}^2} with {{\bf Z}}). Thus the intersection of {A{\bf Z}^3} with {{\bf Z}^2 \times \{0\}} is the rank one subgroup generated by {(0,-8,0)}.
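    Here is a sketch of the same computation in SymPy, reusing the matrix {V} and rank {k=3} quoted above (the Smith normal form itself is taken as given, since different software may return a different but equally valid factorisation):

    from sympy import Matrix, zeros

    A = Matrix([[1, 2, 3], [3, -2, 1], [1, 2, 3]])
    V = Matrix([[3, -2, 1, 0, 1],
                [1, 2, 3, 1, 0],
                [1, 2, 3, 0, 0],
                [1, 0, 0, 0, 0],
                [0, 1, 1, 0, 0]])
    k = 3

    A0 = A.row_join(zeros(3, 2))   # the matrix (A 0); m = 2 columns from B
    W = A0 * V.inv()               # equals (A 0) V^{-1}
    C = W[:, k:]                   # the right n+m-k = 2 columns
    print(C)                       # [[0, 0], [0, -8], [0, 0]]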

    A similar calculation allows one to represent the pullback {T^{-1} (A {\bf Z}^n) \leq {\bf Z}^{d'}} of a subgroup {A{\bf Z}^n \leq {\bf Z}^d} via a linear transformation {T \in M_{d \times d'}({\bf Z})}, since

    \displaystyle T^{-1} (A {\bf Z}^n) = \{ x \in {\bf Z}^{d'}: Tx = Ay \hbox{ for some } y \in {\bf Z}^n \}

    \displaystyle  = (I 0) \{ z \in {\bf Z}^{d'+n}: (T A) z = 0 \}

    where {(I 0) \in M_{d' \times d'+n}({\bf Z})} is the concatenation of the {d' \times d'} identity matrix {I} and the {d' \times n} zero matrix. Applying the Smith normal form to write {(T A) = UDV} with {D} of rank {k}, the same argument as before allows us to write {T^{-1}(A{\bf Z}^n) = C {\bf Z}^{d'+n-k}} where {C \in M_{d' \times d'+n-k}({\bf Z})} consists of the right {d'+n-k} columns of {(I 0) V^{-1} \in M_{d' \times d'+n}({\bf Z})}.

    Among other things, this allows one to describe lattices given by systems of linear equations and congruences in the {A{\bf Z}^n} format. Indeed, the set of lattice vectors {x \in {\bf Z}^d} that solve the system of congruences

    \displaystyle  \alpha_i | x \cdot v_i \ \ \ \ \ (1)

    for {i=1,\dots,k}, some natural numbers {\alpha_i}, and some lattice vectors {v_i \in {\bf Z}^d}, together with an additional system of equations

    \displaystyle  x \cdot w_j = 0 \ \ \ \ \ (2)

    for {j=1,\dots,l} and some lattice vectors {w_j \in {\bf Z}^d}, can be written as {T^{-1}(A {\bf Z}^k)} where {T \in M_{k+l \times d}({\bf Z})} is the matrix with rows {v_1,\dots,v_k,w_1,\dots,w_l}, and {A \in M_{k+l \times k}({\bf Z})} is the diagonal matrix with diagonal entries {\alpha_1,\dots,\alpha_k}. Conversely, any subgroup {A{\bf Z}^n} can be described in this form by first using the trimmed representation {A{\bf Z}^n = UD'{\bf Z}^k}, at which point membership of a lattice vector {x \in {\bf Z}^d} in {A{\bf Z}^n} is seen to be equivalent to the congruences

    \displaystyle  \alpha_i | U^{-1} x \cdot e_i

    for {i=1,\dots,k} (where {k} is the rank, {\alpha_1,\dots,\alpha_k} are the invariant factors, and {e_1,\dots,e_d} is the standard basis of {{\bf Z}^d}) together with the equations

    \displaystyle  U^{-1} x \cdot e_j = 0

    for {j=k+1,\dots,d}. Thus one can obtain a representation in the form (1), (2) with {l=d-k}, and {v_1,\dots,v_k,w_1,\dots,w_{d-k}} to be the rows of {U^{-1}} in order.

    Example 5 With the lattice subgroup {A{\bf Z}^3} from Example 2, we have {U^{-1} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & -3 & 1 \\ 1 & 0 & -1 \end{pmatrix}}, and so {A{\bf Z}^3} consists of those triples {(x_1,x_2,x_3)} which obey the (redundant) congruence

    \displaystyle  1 | x_2,

    the congruence

    \displaystyle  8 | -3x_2 + x_3

    and the identity

    \displaystyle  x_1 - x_3 = 0.

    Conversely, one can use the above procedure to convert the above system of congruences and identities back into a form {A' {\bf Z}^{n'}} (though depending on which Smith normal form one chooses, the end result may be a different representation of the same lattice group {A{\bf Z}^3}).
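    If one wants to automate such membership checks, a minimal sketch (using SymPy, and reusing the matrix {U^{-1}} quoted in Example 5) might look as follows:

    from sympy import Matrix

    Uinv = Matrix([[0, 1, 0], [0, -3, 1], [1, 0, -1]])

    def in_lattice(x):
        y = Uinv * Matrix(x)
        return y[0] % 1 == 0 and y[1] % 8 == 0 and y[2] == 0

    print(in_lattice([3, 1, 3]))   # True  (a basis vector of the lattice)
    print(in_lattice([8, 0, 8]))   # True  (the other basis vector)
    print(in_lattice([1, 1, 1]))   # False (8 does not divide -3+1 = -2)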

    Now we apply Pontryagin duality. We claim the identity

    \displaystyle  (A{\bf Z}^n)^\perp = \{ x \in ({\bf R}/{\bf Z})^d: A^Tx = 0 \}

    for any {A \in M_{d \times n}({\bf Z})} (where {A^T \in M_{n \times d}({\bf Z})} induces a homomorphism from {({\bf R}/{\bf Z})^d} to {({\bf R}/{\bf Z})^n} in the obvious fashion). This can be verified by direct computation when {A} is a (rectangular) diagonal matrix, and the general case then easily follows from a Smith normal form computation (one can presumably also derive it from the category-theoretic properties of Pontryagin duality, although I will not do so here). So closed torus subgroups that are defined by a system of linear equations (over {{\bf R}/{\bf Z}}, with integer coefficients) are represented in the form {(A{\bf Z}^n)^\perp} of an orthogonal complement of a lattice subgroup. Using the trimmed form {A{\bf Z}^n = U D' {\bf Z}^k}, we see that

    \displaystyle  (A{\bf Z}^n)^\perp = \{ x \in ({\bf R}/{\bf Z})^d: (UD')^T x = 0 \}

    \displaystyle  = (U^{-1})^T \{ y \in ({\bf R}/{\bf Z})^d: (D')^T y = 0 \}

    \displaystyle  = (U^{-1})^T (\frac{1}{\alpha_1} {\bf Z}/{\bf Z} \times \dots \times \frac{1}{\alpha_k} {\bf Z}/{\bf Z} \times ({\bf R}/{\bf Z})^{d-k}),

    giving an explicit representation “in coordinates” of such a closed torus subgroup. In particular we can read off the isomorphism class of a closed torus subgroup as the product of a finite number of cyclic groups and a torus:

    \displaystyle (A{\bf Z}^n)^\perp \equiv ({\bf Z}/\alpha_1 {\bf Z}) \times \dots \times ({\bf Z}/\alpha_k{\bf Z}) \times ({\bf R}/{\bf Z})^{d-k}.

    Example 6 The orthogonal complement of the lattice subgroup {A{\bf Z}^3} from Example 2 is the closed torus subgroup

    \displaystyle  (A{\bf Z}^3)^\perp = \{ (x_1,x_2,x_3) \in ({\bf R}/{\bf Z})^3: x_1 + 3x_2 + x_3

    \displaystyle  = 2x_1 - 2x_2 + 2x_3 = 3x_1 + x_2 + 3x_3 = 0 \};

    using the trimmed representation of {(A{\bf Z}^3)^\perp}, one can simplify this a little to

    \displaystyle  (A{\bf Z}^3)^\perp = \{ (x_1,x_2,x_3) \in ({\bf R}/{\bf Z})^3: 3x_1 + x_2 + 3x_3

    \displaystyle  = 8 x_1 + 8x_3 = 0 \}

    and one can also write this as the image of the group {\{ 0\} \times (\frac{1}{8}{\bf Z}/{\bf Z}) \times ({\bf R}/{\bf Z})} under the torus isomorphism

    \displaystyle  (y_1,y_2,y_3) \mapsto (y_3, y_1 - 3y_2, y_2 - y_3).

    In other words, one can write

    \displaystyle  (A{\bf Z}^3)^\perp = \{ (y,0,-y) + (0,-\frac{3a}{8},\frac{a}{8}): y \in {\bf R}/{\bf Z}; a \in {\bf Z}/8{\bf Z} \}

    so that {(A{\bf Z}^3)^\perp} is isomorphic to {{\bf R}/{\bf Z} \times {\bf Z}/8{\bf Z}}.
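    As a sanity check on this parametrisation, here is a small sketch in exact rational arithmetic (using only the Python standard library) verifying that every point {(y, -3a/8, a/8 - y)} satisfies the two defining equations of {(A{\bf Z}^3)^\perp} modulo {1}:

    from fractions import Fraction

    def on_subgroup(y, a):
        x1, x2, x3 = y, Fraction(-3 * a, 8), Fraction(a, 8) - y
        eq1 = 3 * x1 + x2 + 3 * x3   # from the trimmed representation
        eq2 = 8 * x1 + 8 * x3
        return eq1.denominator == 1 and eq2.denominator == 1   # integers, i.e. 0 mod 1

    assert all(on_subgroup(Fraction(p, 7), a) for p in range(7) for a in range(8))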

    We can now dualize all of the previous computable operations on subgroups of {{\bf Z}^d} to produce computable operations on closed subgroups of {({\bf R}/{\bf Z})^d}. For instance:

    • To form the intersection or sum of two closed torus subgroups {(A_1 {\bf Z}^{n_1})^\perp, (A_2 {\bf Z}^{n_2})^\perp \leq ({\bf R}/{\bf Z})^d}, use the identities

      \displaystyle  (A_1 {\bf Z}^{n_1})^\perp \cap (A_2 {\bf Z}^{n_2})^\perp = (A_1 {\bf Z}^{n_1} + A_2 {\bf Z}^{n_2})^\perp

      and

      \displaystyle  (A_1 {\bf Z}^{n_1})^\perp + (A_2 {\bf Z}^{n_2})^\perp = (A_1 {\bf Z}^{n_1} \cap A_2 {\bf Z}^{n_2})^\perp

      and then calculate the sum or intersection of the lattice subgroups {A_1 {\bf Z}^{n_1}, A_2 {\bf Z}^{n_2}} by the previous methods. Similarly, the operation of direct sum of two closed torus subgroups dualises to the operation of direct sum of two lattice subgroups.
    • To determine whether one closed torus subgroup {(A_1 {\bf Z}^{n_1})^\perp \leq ({\bf R}/{\bf Z})^d} is contained in (or equal to) another closed torus subgroup {(A_2 {\bf Z}^{n_2})^\perp \leq ({\bf R}/{\bf Z})^d}, simply use the preceding methods to check whether the lattice subgroup {A_2 {\bf Z}^{n_2}} is contained in (or equal to) the lattice subgroup {A_1 {\bf Z}^{n_1}}.
    • To compute the pull back {T^{-1}( (A{\bf Z}^n)^\perp )} of a closed torus subgroup {(A{\bf Z}^n)^\perp \leq ({\bf R}/{\bf Z})^d} via a linear transformation {T \in M_{d' \times d}({\bf Z})}, use the identity

      \displaystyle T^{-1}( (A{\bf Z}^n)^\perp ) = (T^T A {\bf Z}^n)^\perp.

      Similarly, to compute the image {T( (B {\bf Z}^m)^\perp )} of a closed torus subgroup {(B {\bf Z}^m)^\perp \leq ({\bf R}/{\bf Z})^{d'}}, use the identity

      \displaystyle T( (B{\bf Z}^m)^\perp ) = ((T^T)^{-1} B {\bf Z}^m)^\perp.

    Example 7 Suppose one wants to compute the sum of the closed torus subgroup {(A{\bf Z}^3)^\perp} from Example 6 with the closed torus subgroup {\{0\}^2 \times {\bf R}/{\bf Z}}. This latter group is the orthogonal complement of the lattice subgroup {{\bf Z}^2 \times \{0\}} considered in Example 4. Thus we have {(A{\bf Z}^3)^\perp + (\{0\}^2 \times {\bf R}/{\bf Z}) = (C{\bf Z}^2)^\perp} where {C} is the matrix from Example 4; discarding the zero column, we thus have

    \displaystyle (A{\bf Z}^3)^\perp + (\{0\}^2 \times {\bf R}/{\bf Z}) = \{ (x_1,x_2,x_3): -8x_2 = 0 \}.
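    Combining this with the isomorphism-class formula above gives a quick sanity check in code (a sketch, again using SymPy’s {{\tt smith\_normal\_form}}):

    from sympy import Matrix, ZZ
    from sympy.matrices.normalforms import smith_normal_form

    C = Matrix([[0, 0], [0, -8], [0, 0]])
    D = smith_normal_form(C, domain=ZZ)
    alphas = [D[i, i] for i in range(min(D.rows, D.cols)) if D[i, i] != 0]
    d, k = C.rows, len(alphas)
    print(alphas, d - k)   # [8] and 2: the sum is isomorphic to Z/8Z x (R/Z)^2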

    Robert HellingGiving the Playground Express a Spin

     The latest addition to our single chip computer zoo is Adafruit's Circuit Playground Express. It is sold for about $30 and comes with a lot of GPIO pins, 10 RGB LEDs, a small speaker, lots of sensors (including acceleration, temperature, IR, ...) and 1.5MB of flash ROM. The excuse for buying it is that I might interest the kids in it (being better equipped on board than an Arduino while being less complex than a Raspberry Pi).


    As the ten LEDs are arranged in a circle around the board's edge, I thought a natural idea for a first project using the accelerometer would be to simulate a ball rolling around the circumference.



    The video does not really capture the visual impression due to overexposure of the lit LEDs.

    The Circuit Playground Express comes with a graphical programming language (like Scratch) and an embedded version of Python. But you can also program it directly in C using the Arduino IDE, which is what I did since that is what I am familiar with.

    Here is the source code (as always with GPL 2.0)
    // A first project simulating a ball rolling around the Playground Express

    #include <Adafruit_CircuitPlayground.h>

    uint8_t pixeln = 0;   // currently lit pixel (0..9)
    float phi = 0.0;      // angular position of the ball in radians
    float phid = 0.10;    // angular velocity

    void setup() {
      CircuitPlayground.begin();
      CircuitPlayground.speaker.enable(1);
      Serial.begin(9600);   // the loop below prints phi to the serial console
    }

    // Map an angle in radians to one of the ten NeoPixels around the board
    int phi2pix(float alpha) {
       alpha *= 180.0 / 3.14159265;   // radians to degrees
       alpha += 60.0;                 // angular offset of pixel 0 on the board
       if (alpha < 0.0)
          alpha += 360.0;
       if (alpha > 360.0)
          alpha -= 360.0;

       return (int) (alpha / 36.0);   // ten pixels, 36 degrees apart
    }

    void loop() {
        static uint8_t lastpix = 0;
        float ax = CircuitPlayground.motionX();
        float ay = CircuitPlayground.motionY();
        phid += 0.001 * (cos(phi) * ay - sin(phi) * ax);  // tangential acceleration from the board's tilt
        phi += phid;                                      // integrate the angular velocity
        phid *= 0.997;                                    // mild damping, so the ball slows down
        Serial.print(phi);

        while (phi < 0.0) 
          phi += 2.0 * 3.14159265;

        while (phi > 2.0 * 3.14159265)
          phi -= 2.0 * 3.14159265;


        pixeln = phi2pix(phi);
     
        if (pixeln != lastpix) {
          if (CircuitPlayground.slideSwitch())
            CircuitPlayground.playTone(2000, 5);  // 2000 Hz for 5 ms (the frequency value is garbled in the post; 2000 is a guess)
          lastpix = pixeln;
        }
        CircuitPlayground.clearPixels(); 
        CircuitPlayground.setPixelColor(pixeln, CircuitPlayground.colorWheel(25 * pixeln));
        delay(0);
    }

    July 18, 2022

    Terence TaoRequest for comments from the ICM Structure Committee

    [This post is collectively authored by the ICM structure committee, whom I am currently chairing – T.]

    The ICM structure committee is responsible for the preparation of the Scientific Program of the International Congress of Mathematicians (ICM). It decides the structure of the Scientific Program, in particular,

    • the number of plenary lectures,
    • the sections and their precise definition,
    • the target number of talks in each section,
    • other kind of lectures, and
    • the arrangement of sections.

    (The actual selection of speakers and the local organization of the ICM are handled separately by the Program Committee and Organizing Committee respectively.)

    Our committee can also propose more radical changes to the format of the congress, although certain components of the congress, such as the prize lectures and satellite events, are outside the jurisdiction of this committee. For instance, in 2019 we proposed the addition of two new categories of lectures, “special sectional lectures” and “special plenary lectures”, which are broad and experimental categories of lectures that do not fall under the traditional format of a mathematician presenting their recent advances in a given section, but can instead highlight (for instance) emerging connections between two areas of mathematics, or present a “big picture” talk on a “hot topic” from an expert with the appropriate perspective. These new categories made their debut at the recently concluded virtual ICM, held on July 6-14, 2022.

    Over the next year or so, our committee will conduct our deliberations on proposed changes to the structure of the congress for the next ICM (to be held in-person in Philadelphia in 2026) and beyond. As part of the preparation for these deliberations, we are soliciting feedback from the general mathematics community (on this blog and elsewhere) on the current state of the ICM, and any proposals to improve that state for the subsequent congresses; we had issued a similar call on this blog back in 2019. This time around, of course, the situation is complicated by the extraordinary and exceptional circumstances that led to the 2022 ICM being moved to a virtual platform on short notice, and so it is difficult for many reasons to hold the 2022 virtual ICM as a model for subsequent congresses. On the other hand, the scientific program had already been selected by the 2022 ICM Program Committee prior to the invasion of Ukraine, and feedback on the content of that program will be of great value to our committee.

    Among the specific questions (in no particular order) for which we seek comments are the following:

    1. Are there suggestions to change the format of the ICM that would increase its value to the mathematical community?
    2. Are there suggestions to change the format of the ICM that would encourage greater participation and interest in attending, particularly with regards to junior researchers and mathematicians from developing countries?
    3. The special sectional and special plenary lectures were introduced in part to increase the emphasis on the quality of exposition at ICM lectures. Has this in fact resulted in a notable improvement in exposition, and should any alterations be made to the special lecture component of the ICM?
    4. Is the balance between plenary talks, sectional talks, special plenary and sectional talks, and public talks at an optimal level?  There is only a finite amount of space in the calendar, so any increase in the number or length of one of these types of talks will come at the expense of another.
    5. The ICM is generally perceived to be more important to pure mathematics than to applied mathematics.  In what ways can the ICM be made more relevant and attractive to applied mathematicians, or should one not try to do so?
    6. Are there structural barriers that cause certain areas or styles of mathematics (such as applied or interdisciplinary mathematics) or certain groups of mathematicians to be under-represented at the ICM?  What, if anything, can be done to mitigate these barriers?
    7. The recently concluded virtual ICM had a sui generis format, in which the core virtual program was supplemented by a number of physical “overlay” satellite events. Are there any positive features of that format which could potentially be usefully adapted to such congresses? For instance, should there be any virtual or hybrid components at the next ICM?

    Of course, we do not expect these complex and difficult questions to be resolved within this blog post, and debating these and other issues would likely be a major component of our internal committee discussions.  Nevertheless, we would value constructive comments towards the above questions (or on other topics within the scope of our committee) to help inform these subsequent discussions.  We therefore welcome and invite such commentary, either as responses to this blog post, or sent privately to one of the members of our committee.  We would also be interested in having readers share their personal experiences at past congresses, and how it compares with other major conferences of this type.   (But in order to keep the discussion focused and constructive, we request that comments here refrain from discussing topics that are out of the scope of this committee, such as suggesting specific potential speakers for the next congress, which is a task instead for the 2022 ICM Program Committee. Comments that are specific to the recently concluded virtual ICM can be made instead at this blog post.)

    July 14, 2022

    John PreskillQuantum Encryption in a Box

    Over the last few decades, transistor density has become so high that classical computers have run into problems with some of the quirks of quantum mechanics. Quantum computers, on the other hand, exploit these quirks to revolutionize the way computers work. They promise secure communications, simulation of complex molecules, ultrafast computations, and much more. The fear of being left behind as this new technology develops is now becoming pervasive around the world. As a result, there are large, near-term investments in developing quantum technologies, with parallel efforts aimed at attracting young people into the field of quantum information science and engineering in the long-term.

    I was not surprised then that, after completing my master’s thesis in quantum optics at TU Berlin in Germany, I was invited to participate in a program called Quanten 1×1 and hosted by the Junge Tueftler (Young Tinkerers) non-profit, to get young people excited about quantum technologies. As part of a small team, we decided to develop tabletop games to explain the concepts of superposition, entanglement, quantum gates, and quantum encryption. In the sections that follow, I will introduce the thought process that led to the design of one of the final products on quantum encryption. If you want to learn more about the other games, you can find the relevant links at the end of this post.

    The price of admission into the quantum realm

    How much quantum mechanics is too much? Is it enough for people to know about the health of Schrödinger’s cat, or should we use a squishy ball with a smiley face and an arrow on it to get people excited about qubits and the Bloch sphere? In other words, what is the best way to go beyond metaphors and start delving into the real stuff? After all, we are talking about cutting-edge quantum technology here, which requires years of study to understand. Even the quantum experts I met with during the project had a hard time explaining their work to lay people.

    Since there is no standardized way to explain these topics outside a university, the goal of our project was to try different models to teach quantum phenomena and make the learning as entertaining as possible. Compared to methods where people passively absorb the information, our tabletop-games approach leverages people’s curiosity and leads to active learning through trial and error.

    A wooden quantum key generator (BB84)

    Everybody has secrets

    Most of the (sensitive) information that is transmitted over the Internet is encrypted. This means that only those with the right “secret key” can unlock the digital box and read the private message within. Without the secret key used to decrypt, the message looks like gibberish – a series of random characters. To encrypt the billions of messages being exchanged every day (over 300 billion emails alone), the Internet relies heavily on public-key cryptography and so-called one-way functions. These mathematical functions allow one to generate a public key to be shared with everyone, from a private key kept to themselves. The public key plays the role of a digital padlock that only the private key can unlock. Anyone (human or computer) who wants to communicate with you privately can get a digital copy of your padlock (by copying it from a pinned tweet on your Twitter account, for example), put their private message inside a digital box provided by their favorite app or Internet communication protocol running behind the scenes, lock the digital box using your digital padlock (public-key), and then send it over to you (or, accidentally, to anyone else who may be trying to eavesdrop). Ingeniously, only the person with the private key (you) can open the box and read the message, even if everyone in the world has access to that digital box and padlock.

    But there is a problem. Current one-way functions hide the private key within the public key in a way that powerful enough quantum computers can reveal. The implications of this are pretty staggering. Your information (bank account, email, bitcoin wallet, etc) as currently encrypted will be available to anyone with such a computer. This is a very serious issue of global importance. So serious indeed, that the President of the United States recently released a memo aimed at addressing this very issue. Fortunately, there are ways to fight quantum with quantum. That is, there are quantum encryption protocols that not even quantum computers can break. In fact, they are as secure as the laws of physics.

    Quantum Keys

    A popular way of illustrating how quantum encryption works is through single photon sources and polarization filters. In classroom settings, this often boils down to lasers and small polarizing filters a few meters apart. Although lasers are pretty cool, they emit streams of photons (particles of light), not single photons needed for quantum encryption. Moreover, measuring polarization of individual photons (another essential part of this process) is often very tricky, especially without the right equipment. In my opinion the concept of quantum mechanical measurement and the collapse of wave functions is not easily communicated in this way.

    Inspired by wooden toys and puzzles my mom bought for me as a kid after visits to the dentist, I tried to look for a more physical way to visualize the experiment behind the famous BB84 quantum key distribution protocol. After a lot of back and forth between the drawing board and laser cutter, the first quantum key generator (QeyGen) was built. 

    How does the box work?

    Note: This short description leaves out some details. For a deeper dive, I recommend watching the tutorial video on our Youtube channel.

    The quantum key generator (QeyGen) consists of an outer and an inner box. The outer box is used by the person generating the secret key, while the inner box is used by the person with whom they wish to share that key. The sender prepares a coin in one of two states (heads = 0, tails = 1) and inserts it either into slot 1 (horizontal basis) or slot 2 (vertical basis) of the outer box. The receiver then measures the state of the coin in one of the same two bases by sliding the inner box to the left (horizontal basis, slot 1) or right (vertical basis, slot 2). Crucially, if the bases to prepare and measure the coin match, then both sender and receiver get the same value for the coin. But if the basis used to prepare the coin doesn’t match the measurement basis, the value of the coin collapses into one of the two allowed states in the measurement basis with 50/50 chance. Because of this design, the box can be used to illustrate the BB84 protocol that allows two distant parties to create and share a secure encryption key.

    Simulating the BB84 protocol

    The following is a step by step tutorial on how to play out the BB84 protocol with the QeyGen. You can play it with two (Alice, Bob) or three (Alice, Bob, Eve) people. It is useful to know right from the start that this protocol is not used to send private messages, but is instead used to generate a shared private key that can then be used with various encryption methods, like the one-time pad, to send secret messages.

    BB84 Protocol:

    1. Alice secretly “prepares” a coin by inserting it facing-towards (0) or facing-away (1) from her into one of the two slots (bases) on the outer box. She writes down the value (0 or 1) and basis (horizontal or vertical) of the coin she just inserted.
    2. (optional) Eve, the eavesdropper, tries to “measure” the coin by sliding the inner box left (horizontal basis) or right (vertical basis), before putting the coin back through the outer box without anyone noticing.
    3. Bob then secretly measures the coin in a basis of his choice and writes down the value (0 or 1) and basis (horizontal and vertical) as well.
    4. Steps 1 and 3 are then repeated several times. The more times Alice and Bob go through this process, the more secure their secret key will be.

    Sharing the key while checking for eavesdroppers:

    1. Alice and Bob publicly discuss which bases they used at each “prepare” and “measure” step, and cross out the values of the coin corresponding to the bases that didn’t match (about half of them on average; here, it would be rounds 1,3,5,6,7, and 11).
    2. Then, they publicly announce the first few (or a random subset of the) values that survive the previous step (i.e. have matching bases; here, it is rounds 2 and 4). If the values match for each round, then it is safe to assume that there was no eavesdrop attack. The remaining values are kept secret and can be used as a secure key for further communication.
    3. If the values of Alice and Bob don’t match, Eve must have measured the coin (before Bob) in the wrong basis (hence, randomizing its value) and put it back in the wrong orientation from the one Alice had originally chosen. Having detected Eve’s presence, Alice and Bob switch to a different channel of communication and try again.

    Note that the more rounds Alice and Bob choose for the eavesdropper detection, the higher the chance that the channel of communication is secure, since N rounds that all return the same value for the coin mean a 2^{-N} chance that Eve got lucky and guessed Alice’s inputs correctly. To put this in perspective, a 20-round check for Eve provides a 99.9999% guarantee of security. Of course, the more rounds used to check for Eve, the fewer secure bits are left for Alice and Bob to share at the end. On average, after a total of 2(N+M) rounds, with N rounds dedicated to Eve, we get an M-bit secret key.
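    For readers who want to experiment beyond the wooden box, here is a minimal simulation sketch in Python (my own illustration, not part of the project materials; all names are invented for this example). Matching bases reproduce the prepared value, mismatched bases randomise it, and an intercepting Eve leaves detectable disturbances:

    import random

    def measure(value, prep_basis, meas_basis):
        """Matching bases return the prepared value; otherwise a fair coin flip."""
        return value if prep_basis == meas_basis else random.randint(0, 1)

    def bb84(rounds, eve_present, check_rounds):
        alice_vals = [random.randint(0, 1) for _ in range(rounds)]
        alice_bases = [random.choice("HV") for _ in range(rounds)]
        bob_bases = [random.choice("HV") for _ in range(rounds)]
        bob_vals = []
        for v, pb, mb in zip(alice_vals, alice_bases, bob_bases):
            if eve_present:                     # Eve measures, then reinserts the coin
                eb = random.choice("HV")
                v, pb = measure(v, pb, eb), eb  # the coin now carries Eve's basis
            bob_vals.append(measure(v, pb, mb))
        # Sifting: keep only the rounds where Alice's and Bob's bases matched,
        # then publicly compare the first check_rounds of them to look for Eve.
        sifted = [(a, b) for a, b, pa, pbb in
                  zip(alice_vals, bob_vals, alice_bases, bob_bases) if pa == pbb]
        check, key = sifted[:check_rounds], sifted[check_rounds:]
        eve_detected = any(a != b for a, b in check)
        return eve_detected, [a for a, _ in key]

    print(bb84(rounds=100, eve_present=True, check_rounds=20))   # almost always flags Eve
    print(bb84(rounds=100, eve_present=False, check_rounds=20))  # no alarm, shared key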

    What do people learn?

    When we play with the box, we usually encounter three main topics that we discuss with the participants.

    1. qm states and quantum particles: We talk about superposition of quantum particles and draw an analogy from the coin to polarized photons.
    2. qm measurement and basis: We ask about the state of the coin and discuss how we actually define a state and a basis for a coin. By using the box, we emphasize that the measurement itself (in which basis the coin is observed) can directly affect the state of the coin and collapse its “wavefunction”.
    3. BB84 protocol: After a little playtime of preparing and measuring the coin with the box, we introduce the steps to perform the BB84 protocol as described above. The penny-dropping moment (pun intended) often happens when the participants realize that a spy intervening between preparation and measurement can change the state of the coin, leading to contradictions in the subsequent eavesdrop test of the protocol and exposing the spy.

    I hope that this small outline has provided a rough idea of how the box works and why we developed it. If you have access to a laser cutter, I highly recommend making a QeyGen for yourself (link to files below). For any further questions, feel free to contact me at t.schubert@fu-berlin.de.

    Resources and acknowledgments

    Project page Junge Tueftler: tueftelakademie.de/quantum1x1
    Video series for the QeyGen: youtube.com/watch?v=YmdoAP1TJRo
    Laser cut files: thingiverse.com/thing:5376516

    The program was funded by the Federal Ministry of Education and Research (Germany) and was a collaboration between Junge Tueftler and the Technical University of Berlin.
    A special thanks to Robert from Project Sci.Com who helped me with the development.

    July 11, 2022

    John PreskillIf I could do science like Spider-Man

    A few Saturdays ago, I traveled home from a summer school at which I’d been lecturing in Sweden. Around 8:30 AM, before the taxi arrived, I settled into an armchair in my hotel room and refereed a manuscript from a colleague. After reaching the airport, I read an experimental proposal for measuring a quantity that colleagues and I had defined. I drafted an article for New Scientist on my trans-Atlantic flight, composed several emails, and provided feedback about a student’s results (we’d need more data). Around 8 PM Swedish time, I felt satisfyingly exhausted—and about ten hours of travel remained. So I switched on Finnair’s entertainment system and navigated to Spider-Man: No Way Home.

    I found much to delight. Actor Alfred Molina plays the supervillain Doc Ock with charisma and verve that I hadn’t expected from a tentacled murderer. Playing on our heartstrings, Willem Dafoe imbues the supervillain Norman Osborn with frailty and humanity. Three characters (I won’t say which, for the spoiler-sensitive) exhibit a playful chemistry. To the writers who thought to bring the trio together, I tip my hat. I tip my hat also to the special-effects coders who sweated over reconciling Spider-Man’s swoops and leaps with the laws of mechanics.

    I’m not a physicist who picks bones with films for breaking physical laws. You want to imagine a Mirror Dimension controlled by a flying erstwhile surgeon? Go for it. Falling into a vat of electric eels endows you with the power to control electricity? Why not. Films like Spider-Man’s aren’t intended to portray physical laws accurately; they’re intended to portray people and relationships meaningfully. So I raised nary an eyebrow at characters’ zipping between universes (although I had trouble buying teenage New Yorkers who called adults “sir” and “ma’am”).

    Anyway, no hard feelings about the portrayal of scientific laws. The portrayal of the scientific process, though, entertained me even more than Dr. Strange’s trademark facetiousness. In one scene, twelfth grader Peter Parker (Spider-Man’s alter-ego) commandeers a high-school lab with two buddies. In a fraction of a night, the trio concocts cures for four supervillains whose evil stems from physical, chemical, and biological accidents (e.g., falling into the aforementioned vat of electric eels).1 And they succeed. In a few hours. Without test subjects or even, as far as we could see, samples of their would-be test subjects. Without undergoing several thousand iterations of trying out their cures, failing, and tweaking their formulae—or even undergoing one iteration.

    I once collaborated with an experimentalist renowned for his facility with superconducting qubits. He’d worked with a panjandrum of physics years before—a panjandrum who later reminisced to me, “A theorist would propose an experiment, [this experimentalist would tackle the proposal,] and boom—the proposal would work.” Yet even this experimentalist’s team invested a year in an experiment that he’d predicted would take a month.

    Worse, the observatory LIGO detected gravitational waves in 2016 after starting to take data in 2002…after beginning its life during the 1960s.2 

    Recalling the toil I’d undertaken all day—and only as a theorist, not even as an experimentalist charged with taking data through the night—I thought, I want to be like Spider-Man. Specifically, I want to do science like Spider-Man. Never mind shooting webs out of my wrists or swooping through the air. Never mind buddies in the Avengers, a Greek-statue physique, or high-tech Spandex. I want to try out a radical new idea and have it work. On the first try. Four times in a row on the same day. 

    Daydreaming in the next airport (and awake past my bedtime), I imagined what a theorist could accomplish with Spider-Man’s scientific superpowers. I could calculate any integral…write code free of bugs on the first try3…prove general theorems in a single appendix!

    Too few hours later, I woke up at home, jet-lagged but free of bites from radioactive calculators. I got up, breakfasted, showered, and settled down to work. Because that’s what scientists do—work. Long and hard, including when those around us are dozing or bartering frequent-flyer miles, such that the satisfaction of discoveries is well-earned. I have to go edit a paper now, but, if you have the time, I recommend watching the latest Spider-Man movie. It’s a feast of fantasy.

    1And from psychological disorders, but the therapy needed to cure those would doom any blockbuster.

    2You might complain that comparing Peter Parker’s labwork with LIGO’s is unfair. LIGO required the construction of large, high-tech facilities; Parker had only to cure a lizard-man of his reptilian traits and so on. But Tony Stark built a particle accelerator in his basement within a few hours, in Iron Man; and superheroes are all of a piece, as far as their scientific exploits are concerned.

    3Except for spiders?

    July 10, 2022

    Robert HellingVoting systems, once more

     Over the last few days, I have been involved in some heated Twitter discussions around a possible reform of the voting system for the German parliament. Those have sharpened my understanding of one or two things and that's why I think it's worthwhile writing a blog post about it.

    The root of the problem is that the system currently in use tries to optimise two goals which are not necessarily compatible: Proportional representation (the number of seats for a party should be proportional to the votes received) and local representation (each constituency being represented by at least one MP). If you only wanted to optimise the first, you would not have constituencies but collect all votes in one big bucket and assign seats accordingly to the competing parties; if you only wanted to optimise the second goal, you would use a first past the post (FPTP) voting system like in the UK or the US.

    In a nutshell (glossing over some additional complications), the current system is as follows: We start by assuming there are twice as many seats in parliament as there are constituencies. Each voter has two different votes. The first is an FPTP vote that determines a local candidate who will definitely get a seat in parliament. The second vote is the proportional vote that determines the percentage of seats for the parties. The parties will then send further MPs to reach their allocated lot, but the winners of the constituencies are counted as well, and the parties only "fill up" the remaining seats from their party list. So far so good: you have achieved both goals. There is one winner MP from each constituency and the parties have seats proportional to the number of (second) votes. Great.

    Well, except if a party wins more constituencies than it is assigned seats according to the proportional vote. This was not so much of a problem some decades ago, when there were two major parties (conservative and social democrat) and one or two smaller ones. The two major parties would somehow share the constituency wins, but since those wins make up only half of the total number of seats, they would not be many more than each party's share of total seats (which would typically be well above 30% or even 40%).

    The voting system's solution to this problem is to increase the total number of seats to the minimal total number such that each party's share of total seats according to the proportional vote is at least as high as the number of constituencies it won.

    But these days, the two former big parties have lost a lot of their support (winning only 20-25% in the last election), with four additional parties also represented and not getting many fewer votes than the two former big ones. In the constituencies it is not rare to win an FPTP seat with less than 30% of the constituency's votes, and in the last election as little as 18% was enough to win a seat. This led to the parliament having 736 seats compared to the nominal size of 598, and there were polls not long before that election which suggested 800+ seats or possibly even over 1000.

    A particular case is the CSU, the conservative party here in Bavaria (which is nominally a different party from the CDU, the conservative party in the rest of Germany. In Bavaria, the CDU is not competing, while in the rest of the country, the CSU is not on the ballot): Still being relative winners here, they won all but one constituency but got only about 30% of the votes in Bavaria, which translates to slightly above 5% of all votes in Germany.

    According to a general sentiment, 700+ seats is far too big (for a functioning parliament and also cost-wise), so the system should be reformed. But people differ on how to reform it. A mathematically simple solution would be to increase the size of the constituencies and thereby decrease their total number, so there would be fewer constituency winners to be matched by proportional votes. But that solution is not very popular, the main argument being that the constituencies would become too big for reasonable contact between local MPs and their constituents. Another likely reason nobody really likes to talk about is that redrawing district lines that drastically would probably cause a lot of infighting in all the parties, because candidatures would have to be completely redistributed, with many established candidates losing their jobs. So that is off the table; after all, it is the parties in parliament which decide about the voting system by simple majority (with boundary conditions set by relatively vague rules in the constitution).

    There is now a proposal by the governing social democrat-green-liberal coalition. The main idea is to weaken the FPTP component in the constituencies while maintaining the proportional vote: Winning a constituency no longer guarantees you a seat in parliament. If your party wins more constituencies than its share of total seats according to the proportional vote, the constituency seats where the party's relative majority was smallest are allocated to the runner-up (as that candidate's party still has to be allocated seats according to the proportional vote). This breaks FPTP, but keeps the proportional representation as well as the principle of each constituency sending at least one MP, while fixing the total number of seats in parliament at the magic 598.

    The conservatives in opposition do not like this idea (being traditionally the relatively strongest parties and thus tending to win more constituencies). You can calculate how many seats each party would get assuming the last election's votes: All parties would have to give up about 18% of their seats, except for the CSU, the Bavarian conservatives, who would lose about 25%, since some fine print I have not explained so far favours parties winning relatively many constituencies directly.

    The conservatives also have a proposal. They are willing to give up proportionality in favour of maintaining FPTP and fixing the number of seats at 598: They propose to assign 299 of the seats according to FPTP to constituency winners and to distribute only the remaining 299 seats proportionally. So they don't want to include the constituency winners in the proportional calculation.

    This is the starting point of the Twitter discussions, with both sides accusing the other of making an undemocratic proposal. One side says a parliament where the majorities do not necessarily represent majorities in the population (and with current data likely would not) is not democratic, while the other side argues that denying a seat to a candidate who won his/her constituency (even by a small relative majority) is not democratic.

    Of course it is a total coincidence that each side is arguing for the system that would be better for them (the governing coalition's proposal hurting everybody almost equally, only the CSU a bit more, while the conservative proposal actually benefits the conservatives quite a bit while particularly hurting the smaller parties that win few constituencies or none at all).

    [Charts: projected seat distributions under the two proposals, based on the last election's results]

    (Ampel being the governing coalition, Union being the conservative parties).

    Of course, both proposals are in a mathematical sense "democratic", each in its own logic emphasising different legitimate aspects (accurate proportional representation vs accurate representation of local winners).

    Beyond the understandable preference for a system that favours one's own political side, I think a more honest discussion would be about which of these legitimate aspects is actually more relevant for the political discourse. If a lot of debates ran along geographic lines (north against south, east against west, or even rural vs urban), then yes, it would be very important that the local entities are as accurately represented as possible to get the outcomes of these debates right. That would favour FPTP as making sure local communities are most honestly represented.

    If, however, typical debates run along other fault lines, for example progressive vs conservative or pro-business vs pro social wealth redistribution, then we should make sure the views of the population are optimally represented. And that would be in favour of a strict proportional representation.

    Guess what I think is actually the case.

    All that in addition to a political tradition in which "calling your local MP or representative" is a much less common thing than in anglo-saxon countries, and to studies showing that even shortly after a general election fewer than a quarter of voters can name at least two of their constituency's candidates, which casts serious doubt on the idea that people make an informed decision at the local level rather than along party lines (where parties are only needed to ensure there is just one candidate per party in the FPTP system, while being the central entity for proportional votes).

    PS: The governing coalition's proposal has some ambiguities as well (as I demonstrate here --- in German).

    July 09, 2022

    Terence TaoICM and IMU award ceremony begins tomorrow

    I’m currently in Helsinki, Finland for the General Assembly meeting of the International Mathematical Union (IMU), which runs the International Congress of Mathematicians (ICM) as well as several other events and initiatives. In particular the assembly voted on the location of the 2026 ICM; it will be held in Philadelphia, USA (with the general assembly being held in New York, USA).

    Tomorrow the IMU award ceremony will take place, where the recipients of the various IMU awards (such as the Fields medal) will be revealed and honored. Event information can be found at this Facebook Event page, and will also be streamed at this Youtube page; participants who have registered at the virtual ICM can also view it from the web page links they would have received in email in the last few days. (Due to high demand, registration for the virtual ICM has unfortunately reached the capacity of the live platform; but lectures will be made available on the IMU Youtube channel a few hours after they are given.) The virtual ICM program will begin the day after the award ceremony, beginning with the lectures of the prize laureates.

    We have an unofficial ICM Discord server set up to follow the virtual ICM as it happens, with events set up for the prize ceremony and individual days of the congress, as well as for individual sections, as well as more recreational channels, such as a speculation page for the IMU prize winners. There are also a number of other virtual ICM satellite events that are being held either simultaneously with, or close to, the virtual ICM; I would like to draw particular attention to the satellite public lectures by Williamson (July 8), Giorgi (July 11), and Tokieda (July 13), which were also highlighted in my previous blog post. (EDIT: I would also like to mention the now-live poster room for the short communications.)

    After the virtual ICM concludes, I will solicit feedback on this blog (in my capacity as chair of the IMU Structure Committee) on all aspects of that congress, as well as suggestions for future congresses; but I am not formally requesting such feedback at this present time.

    July 07, 2022

    Peter Rohde MoodSnap! Technology for mental health

    In this article published in The Spectator, I discuss the importance of incorporating mental health into digital health ecosystems like Apple Health, Google Fit and Fitbit. Mental health is as important as physical health, and the two are intimately connected. It would be remiss for digital health ecosystems to overlook this important aspect of our wellbeing.

    Featuring in the article is my new open-source project MoodSnap for real-time tracking of mental health.


    June 27, 2022

    John PreskillQuantum connections

    We were seated in the open-air back of a boat, motoring around the Stockholm archipelago. The Swedish colors fluttered above our heads; the occasional speedboat zipped past, rocking us in its wake; and wildflowers dotted the bank on either side. Suddenly, a wood-trimmed boat glided by, and the captain waved from his perch.

    The gesture surprised me. If I were in a vehicle of the sort most familiar to me—a car—I wouldn’t wave to other drivers. In a tram, I wouldn’t wave to passengers on a parallel track. Granted, trams and cars are closed, whereas boats can be open-air. But even as a pedestrian in a downtown crossing, I wouldn’t wave to everyone I passed. Yet, as boat after boat pulled alongside us, we received salutation after salutation.

    The outing marked the midpoint of the Quantum Connections summer school. Physicists Frank Wilczek, Antti Niemi, and colleagues coordinate the school, which draws students and lecturers from across the globe. Although sponsored by Stockholm University, the school takes place at a century-old villa whose name I wish I could pronounce: Högberga Gård. The villa nestles atop a cliff on an island in the archipelago. We ventured off the island after a week of lectures.

    Charlie Marcus lectured about materials formed from superconductors and semiconductors; John Martinis, about superconducting qubits; Jianwei Pan, about quantum advantages; and others, about symmetries, particle statistics, and more. Feeling like an ant among giants, I lectured about quantum thermodynamics. Two other lectures linked quantum physics with gravity—and in a way you might not expect. I appreciated the opportunity to reconnect with the lecturer: Igor Pikovski.

    Cruising around Stockholm

    Igor doesn’t know it, but he’s one of the reasons why I joined the Harvard-Smithsonian Institute for Theoretical Atomic, Molecular, and Optical Physics (ITAMP) as an ITAMP Postdoctoral Fellow in 2018. He’d held the fellowship beginning a few years before, and he’d earned a reputation for kindness and consideration. Also, his research struck me as some of the most fulfilling that one could undertake.

    If you’ve heard about the intersection of quantum physics and gravity, you’ve probably heard of approaches other than Igor’s. For instance, physicists are trying to construct a theory of quantum gravity, which would describe black holes and the universe’s origin. Such a “theory of everything” would reduce to Einstein’s general theory of relativity when applied to planets and would reduce to quantum theory when applied to atoms. In another example, physicists leverage quantum technologies to observe properties of gravity. Such technologies enabled the observatory LIGO to register gravitational waves—ripples in space-time. 

    Igor and his colleagues pursue a different goal: to observe phenomena whose explanations depend on quantum theory and on gravity.

    In his lectures, Igor illustrated with an experiment first performed in 1975. The experiment relies on what happens if you jump: You gain energy associated with resisting the Earth’s gravitational pull—gravitational potential energy. A quantum object’s energy determines how the object’s quantum state changes in time. The experimentalists applied this fact to a beam of neutrons. 

    They put the beam in a superposition of two locations: closer to the Earth’s surface and farther away. The closer component changed in time in one way, and the farther component changed another way. After a while, the scientists recombined the components. The two interfered with each other similarly to the waves created by two raindrops falling near each other on a puddle. The interference evidenced gravity’s effect on the neutrons’ quantum state.
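    A back-of-the-envelope version of that phase argument (my gloss, under textbook idealisations): a component with energy E evolves in time by a phase factor e^{-iEt/\hbar}. The two components differ in gravitational potential energy by \Delta E = mg\Delta z, with m the neutron mass, g the gravitational acceleration, and \Delta z the height difference between the paths. Over a traversal time T, they therefore accumulate a relative phase

    \displaystyle \Delta\varphi = \frac{\Delta E \, T}{\hbar} = \frac{m g \Delta z \, T}{\hbar},

    which is what the interference pattern of the recombined beams records.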

    Summer-school venue. I’d easily say it’s gorgeous but not easily pronounce its name.

    The experimentalists approximated gravity as dominated by the Earth alone. But other masses can influence the gravitational field noticeably. What if you put a mass in a superposition of different locations? What would happen to space-time?

    Or imagine two quantum particles too far apart to interact with each other significantly. Could a gravitational field entangle the particles by carrying quantum correlations from one to the other?

    Physicists including Igor ponder these questions…and then ponder how experimentalists could test their predictions. The more an object influences gravity, the more massive the object tends to be, and the more easily the object tends to decohere—to spill the quantum information that it holds into its surroundings.

    The “gravity-quantum interface,” as Igor entitled his lectures, epitomizes what I hoped to study in college, as a high-school student entranced by physics, math, and philosophy. What’s more curious and puzzling than superpositions, entanglement, and space-time? What’s more fundamental than quantum theory and gravity? Little wonder that connecting them inspires wonder.

    But we humans are suckers for connections. I appreciated the opportunity to reconnect with a colleague during the summer school. Boaters on the Stockholm archipelago waved to our cohort as they passed. And who knows—gravitational influences may even have rippled between the boats, entangling us a little.

    Requisite physicist-visiting-Stockholm photo

    With thanks to the summer-school organizers, including Pouya Peighami and Elizabeth Yang, for their invitation and hospitality.