Planet Musings

August 09, 2025

Justin Wilson: Phases of a Game Show, Part 2

In a previous post, we discussed a phase transition that occurred in the piping above you on a game show. In the scenario, you are led on stage in front of a large audience. After a brief time, the audience votes on how “likeable” you are. The catch is that the votes aren’t simply tallied; each one turns a spigot on a lattice of piping above your head. Water is then released, and if enough people like you, the passages close off, keeping you dry. This exciting game show[1] was described in that post:

Each “like” turns a spigot off, stopping water from flowing through one pipe in a grid overhead. Once voting ends, water is dumped into the system. If it can find a path to the bottom, you get soaked. [Emphasis added] The better your “likeability,” the less likely spigots open a path for water to flow and the drier you stay. That’s your prize for this game show (and hey, you also get the knowledge that people out there like you).

This system models a type of phase transition known as percolation.

The full post is here:

I highlighted above a key phrase: “If it can find a path to the bottom, you get soaked.” What I didn’t say, but should have, is that the water was being forced through the pipes, not just dropping down due to gravity. This is an important point, since our phases and phase transition change dramatically if we just let gravity do the work. When the water is “forced,” it can travel back up pipes if that helps it find its way out and onto your head; when only gravity is present, it only falls down the pipes. To facilitate gravity, we’ll turn the pipes 45 degrees, and if we insert water at a single point on top, it could look like this:

Testing our gravity setup by putting in water at only one pipe up top. Notice that it never goes back up a pipe, only down.

This setup is a different problem called directed percolation. It also has a phase transition, but one that is different in some fundamental ways from regular percolation.


Before we explore its stranger properties, we can ask, “At what likeability threshold do you remain dry?” Well, the transition happens at a likeability of 35.53%![2] This system is a lot more generous, keeping you dry even when a majority of people dislike you. This number comes from numerical computations, which have been done rather precisely, and we can even compute it ourselves. In fact, you can see this clearly with this plot:

Notice that as we make the system bigger and bigger, the chance of getting soaked increases below a likeability of 35.53% and decreases above it. This is the same hallmark of a phase transition as we saw in the previous case.
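The numerical experiment is simple enough to sketch yourself. Below is a minimal Monte Carlo sketch (my own, not the code behind these plots; the periodic side walls and the water entering along the entire top row are assumptions about the setup):

```python
import numpy as np

def soaked(likeability, width, height, rng):
    """One realization of directed percolation on the tilted grid.

    Each wet site feeds two pipes, down-left and down-right, and each
    pipe is independently closed ("liked") with probability `likeability`.
    Water is dumped along the whole top row; returns True if any of it
    reaches the bottom.
    """
    wet = np.ones(width, dtype=bool)
    for _ in range(height):
        down_left = wet & (rng.random(width) > likeability)
        down_right = wet & (rng.random(width) > likeability)
        wet = down_left | np.roll(down_right, 1)   # periodic side walls
        if not wet.any():
            return False                           # water died out: dry
    return True

def soak_probability(likeability, width=40, height=340, trials=200, seed=0):
    rng = np.random.default_rng(seed)
    return np.mean([soaked(likeability, width, height, rng)
                    for _ in range(trials)])

for l in (0.30, 0.3553, 0.40):   # below, at, and above the threshold
    print(f"likeability {l:.4f}: soaked {soak_probability(l):.0%} of the time")
```

Run as is, the lowest likeability should soak you nearly every time, the highest should leave you dry nearly every time, and the transition sits in between.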

We can also look at the water as it flows down the system to see the clusters that make it from top to bottom:

The “Soaked” phase (left), the transition point (middle), and the “Dry” phase (right) as well as the water’s flow through the system (blue).

There is still a fractal-looking pattern at the transition point. Given all these similarities to the regular percolation problem from the last post, what is different? And why is that plot so long and skinny? If gravity wants to pull you down, is that somehow altering the motion downward, making it distinct from the motion left or right?

Well, if you go back to the two plots above, you’ll notice a few things that really make them differ from the percolation plots. In the fine print of the first, I’ve noted that the vertical distance is L^1.58, so for a horizontal size of 40, the vertical size is roughly 40^1.58 ≈ 340! That is definitely not a square. And in the second plot, there appears to be more vertical distance than horizontal distance. What is special about this 1.58 number[3]? It turns out it’s a critical exponent in this problem, a universal aspect of directed percolation that distinguishes it from regular percolation. We will call z = 1.58 the dynamical critical exponent, since it is revealed as water flows down in time (dynamically). This dynamical exponent z can reveal itself in these “long and skinny” setups, but be masked by a square setup.

Universality and the finite size of our system

One thing we took away in the previous post was that we lose any sense of scale at this type of phase transition[4]. But whenever we have “only” thousands of pipes, the size of the system provides a scale! This is the main reason why we begin to see smooth curves and not sharp jumps in quantities. If the system of pipes were infinite (and we had infinite time for the water to flow down), the probability of getting soaked would be 100% below a likeability of 35.53% and 0% above it. For physical systems, finite size is often not a huge issue, since the scale is closer to the 10^23 atoms present in macroscopic systems, and so even things that are technically smooth curves look very sharp.

The problem of size becomes more severe with directed percolation because horizontal and vertical distances start behaving differently thanks to gravity. In this case, if we lay out our nice grid of 10 × 10, 20 × 20, or 30 × 30, we start to notice that the likeability threshold where you stop getting soaked seems to depend on the size of the system more than before. In actuality it doesn’t, but at these small sizes you are definitely getting soaked well into the so-called “Dry Phase” we previously labeled. This is seen in the red curves here, where each bigger square has a curve underneath the last:

Gravity has done something to the system. Flowing down is different from flowing left or right. In fact, if we flow down by some amount h and over to the right by some distance w, then at the directed percolation transition point

h ∼ w^1.58

The amount water flows down is related to how far it flows to the right or left by this weird, fractional power of w. This 1.58 is z, our new dynamical critical exponent, which is a universal feature of directed percolation[5]. It tells us that if we make a system 30 pipes wide, it should extend roughly 30^1.58 ≈ 216 pipes in height to begin picking out the phase transition effectively. The blue curves in the above plot show this; notice how they all converge on one point. That point is the phase transition, and it is revealed even at small sizes! To understand why, just think about how the curves are changing as we make the system bigger and bigger.
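To get a feel for the aspect ratios this implies, a couple of lines suffice (taking z = 1.58 from above):

```python
z = 1.58                        # dynamical critical exponent
for w in (10, 20, 30, 40, 60):
    print(f"width {w:2d} -> height ~ {w**z:.0f}")
# width 30 -> height ~ 216, width 40 -> height ~ 340, as in the text
```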

The red curves will still converge to the phase transition, but it takes larger system sizes for it to reveal itself. This is related to the property that at the phase transition there is no longer a sense of scale, but away from the transition the vertical scale of clusters could be so large that our puny 60-by-60 grid cannot even begin to reveal it. So if we sit at, say, a likeability of 0.4 in the 60-by-60 grid, we can say that the vertical size of a typical cluster is most likely more than 60.

A different phase transition but connections to new types of physics

We might call this “gravity mode” of our game show “easy mode,” since it requires fewer audience members to like you, but the implications here are wide. This type of phase transition has been seen in many kinds of local dynamics where there is a preferred configuration or state. These are called absorbing-state phase transitions, and they are a property of certain random dynamical systems. Gravity has provided the distinction here, but more generically, causality and time itself provide that direction, leading to dynamics that obey the same universality as directed percolation.

[1] Trademark pending.

[2] Usually, you’ll see 0.6447 quoted instead, but that’s just 1 − 0.3553, which counts open pipes instead of closed as we’re doing.

[3] I should note that we have this number to much higher precision than the two decimal places presented here; see the Wikipedia entry.

[4] This is a second-order or continuous phase transition. Most transitions in the water phase diagram are first-order transitions, which still retain a scale.

[5] To drive this point home: even if we change the lattice, this power law will remain intact. Sometimes it shows up in completely different scenarios too, like in absorbing-state phase transitions.

August 08, 2025

Matt von Hippel: Newsworthiness Bias

I had a chat about journalism recently, and I had a realization about just how weird science journalism, in particular, is.

Journalists aren’t supposed to be cheerleaders. Journalism and PR have very different goals (which is why I keep those sides of my work separate). A journalist is supposed to be uncompromising, to write the truth even if it paints the source in a bad light.

Norms are built around this. Serious journalistic outlets usually don’t let sources see pieces before they’re published. The source doesn’t have the final say in how they’re portrayed: the journalist reserves the right to surprise them if justified. Investigative journalists can be superstars, digging up damning secrets about the powerful.

When a journalist starts a project, the piece might turn out positive, or negative. A politician might be the best path forward, or a disingenuous grifter. A business might be a great investment opportunity, or a total scam. A popular piece of art might be a triumph, or a disappointment.

And a scientific result?

It might be a fraud, of course. Scientific fraud does exist, and is a real problem. But it’s not common, really. Pick a random scientific paper, filter by papers you might consider reporting on in the first place, and you’re very unlikely to find a fraudulent result. Science journalists occasionally report on spectacularly audacious scientific frauds, or frauds in papers that have already made the headlines. But you don’t expect fraud in the average paper you cover.

It might be scientifically misguided: flawed statistics, a gap in a proof, a misuse of concepts. Journalists aren’t usually equipped to ferret out these issues, though. Instead, this is handled in principle by peer review, and in practice by the scientific community outside of the peer review process.

Instead, for a scientific result, the most common negative judgement isn’t that it’s a lie, or a mistake. It’s that it’s boring.

And certainly, a good science journalist can judge a paper as boring. But there is a key difference between doing that, and judging a politician as crooked or a popular work of art as mediocre. You can write an article about the lying candidate for governor, or the letdown Tarantino movie. But if a scientific result is boring, and nobody else has covered it…then it isn’t newsworthy.

In science, people don’t usually publish their failures, their negative results, their ho-hum obvious conclusions. That fills the literature with only the successes, a phenomenon called publication bias. It also means, though, that scientists try to make their results sound more successful, more important and interesting, than they actually are. Some of the folks fighting the replication crisis have coined a term for this: they call it importance hacking.

The same incentives apply to journalists, especially freelancers. Starting out, it was far from clear that I could make enough to live on. I felt like I had to make every lead count, to find a newsworthy angle on every story idea I could find, because who knew when I would find another one? Over time, I learned to balance that pull better. Now that I’m making most of my income from consulting instead, the pressure has eased almost entirely: there are things I’m tempted to importance-hack for the sake of friends, but nothing that I need to importance-hack to stay in the black.

Doing journalism on the side may be good for me personally at the moment, but it’s not really a model. Much like we need career scientists, even if their work is sometimes boring, we need career journalists, even if they’re sometimes pressured to overhype.

So if we don’t want to incentivize science journalists to be science cheerleaders, what can we do instead?

In science, one way to address publication bias is with pre-registered studies. A scientist sets out what they plan to test, and a journal agrees to publish the result, no matter what it is. You could imagine something like this for science journalism. I once proposed a recurring column where every month I would cover a random paper from arXiv.org, explaining what it meant to accomplish. I get why the idea was turned down, but I still think about it.

In journalism, the arts offer the closest parallel with a different approach. There are many negative reviews of books, movies, and music, and most of them merely accuse the art of being boring, not evil. These exist because they focus on popular works that people pay attention to anyway, so that any negative coverage has someone to convince. You could imagine applying this model to science, though it could be a bit silly. I’m envisioning a journalist who writes an article every time Witten publishes, rating some papers impressive and others disappointing, the same way a music journalist might cover every Taylor Swift album.

Neither of these models is really satisfactory. You could imagine an even more adversarial model, where journalists run around accusing random scientists of wasting the government’s money, but that seems dramatically worse.

So I’m not sure. Science is weird, and hard to accurately value: if we knew how much something mattered already, it would be engineering, not science. Journalism is weird: it’s public-facing research, where the public facing is the whole point. Their combination? Even weirder.

Doug Natelson: Brief items - Static electricity, quantum geometry, Hubbard model, + news

It's been a busy time that has cut into my blogging, but I wanted to point out some links from the past couple of weeks.

  • Physics Today has a cover article this past issue about what is colloquially known as static electricity, but what is more technically described as triboelectricity, the transfer of charge between materials by rubbing.  I just wrote about this six months ago, and the detailed mechanisms remain poorly understood.  Large surface charge densities (like \(10^{12}\) electronic charges per square cm) can be created this way on insulators, leading to potential differences large enough to jump a spark from your finger to the door handle.  This can also lead to sizable static electric fields near surfaces, which can reveal local variations in material properties.
  • That leads right into this paper (which I learned about from here) about the extreme shapes of the heads of a family of insects called treehoppers.  These little crawlies have head and body shapes that often have cuspy, pointy bits that stick out - spines, horns, etc.  As we learn early on in electrostatics, elongated and pointy shapes tend to lead to large local electric fields and field gradients.  The argument of this paper is that the spiky body and cranial morphology can help these insects better sense electric field distributions, and this makes it easier for them to find their way and avoid predators. 
  • This manuscript on the arXiv this week is a particularly nice, pedagogical review article (formatted for Rev Mod Phys) about quantum geometry and Berry curvature in condensed matter systems.  I haven't had the chance to read it through, but I think this will end up being very impactful and a true resource for students to learn about these topics.
  • Another very pretty recent preprint is this one, which examines the electronic phase diagram of twisted bilayers of WSe2, with a relative twist angle of 4.6°.  Much attention has been paid to the idea that moiré lattices can be in a regime seemingly well described by a Hubbard-like model, with an on-site Coulomb repulsion energy \(U\) and an electronic bandwidth \(W\).  This paper shows an exceptionally clean example of this, where disorder seems to be very weak, electron temperatures are quite cold, and phase diagrams are revealed that look remarkably like the phenomena seen in the cuprate superconductors (superconducting "domes" as a function of charge density adjacent to antiferromagnetic insulating states, and with "strange metal" linear-in-\(T\) resistance in the normal state near the superconducting charge density).  Results like this make me more optimistic about overcoming some of the major challenges in using twisted van der Waals materials as simulators of hard-to-solve Hamiltonians.
I was all set to post this earlier today, with no awful news for once about science in the US that I felt compelled to discuss, but I got sidetracked by real work.  Then, late this afternoon, this executive order about federal grants was released.  

I can't sugar coat it - it's awful.  Ignoring a large volume of inflammatory rhetoric, it contains this gem, for instance:  "The grant review process itself also undermines the interests of American taxpayers."   It essentially tries to bar any new calls for proposals until a new (and problematic) process is put in place at every agency (see Sect. 3(c)).  Also, it says "All else being equal, preference for discretionary awards should be given to institutions with lower indirect cost rates."  Now, indirect cost rates are set by negotiations between institutions and the government.   Places that only do very small volumes of research have low rates, so get ready for MIT to get fewer grants and Slippery Rock University to get more.  The only certainty is that the nation's lawyers are going to have a field day with all the suits that will come out of this.

August 05, 2025

Jordan Ellenberg: Predicament

I just learned that the origin of this word is “that which is predicated,” which is to say, more or less, any condition that can be described or specified, whether good, bad, or neutral. Not much different in this respect from the word “situation,” that which is situated. In other words: the present English meaning of “predicament” — “a difficult problem” — must be some kind of fossilization of a now-forgotten euphemistic phrase akin to the current “We have a situation.”

Scott Aaronson: ChatGPT and the Meaning of Life: Guest Post by Harvey Lederman

Scott Aaronson’s Brief Foreword:

Harvey Lederman is a distinguished analytic philosopher who moved from Princeton to UT Austin a few years ago. Since his arrival, he’s become one of my best friends among the UT professoriate. He’s my favorite kind of philosopher, the kind who sees scientists as partners in discovering the truth, and also has a great sense of humor. He and I are both involved in UT’s new AI and Human Objectives Initiative (AHOI), which is supported by Open Philanthropy.

The other day, Harvey emailed me an eloquent meditation he wrote on what will be the meaning of life if AI doesn’t kill us all, but “merely” does everything we do better than we do it. While the question is of course now extremely familiar to me, Harvey’s erudition—bringing to bear everything from speculative fiction to the history of polar exploration—somehow brought the stakes home for me in a new way.

Harvey mentioned that he’d sent his essay to major magazines but hadn’t had success. So I said, why not a Shtetl-Optimized guest post? Harvey replied—what might be the highest praise this blog has ever received—well, that would be even better than the national magazine, as it would reach more relevant people.

And so without further ado, I present to you…


ChatGPT and the Meaning of Life, by Harvey Lederman

For the last two and a half years, since the release of ChatGPT, I’ve been suffering from fits of dread. It’s not every minute, or even every day, but maybe once a week, I’m hit by it—slackjawed, staring into the middle distance—frozen by the prospect that someday, maybe pretty soon, everyone will lose their job.

At first, I thought these slackjawed fits were just a phase, a passing thing. I’m a philosophy professor; staring into the middle distance isn’t exactly an unknown disease among my kind. But as the years have begun to pass, and the fits have not, I’ve begun to wonder if there’s something deeper to my dread. Does the coming automation of work foretell, as my fits seem to say, an irreparable loss of value in human life?

The titans of artificial intelligence tell us that there’s nothing to fear. Dario Amodei, CEO of Anthropic, the maker of Claude, suggests that “historical hunter-gatherer societies might have imagined that life is meaningless without hunting,” and “that our well-fed technological society is devoid of purpose.” But of course, we don’t see our lives that way. Sam Altman, the CEO of OpenAI, sounds so similar, the text could have been written by ChatGPT. Even if the jobs of the future will look as “fake” to us as ours do to “a subsistence farmer”, Altman has “no doubt they will feel incredibly important and satisfying to the people doing them.”

Alongside these optimists, there are plenty of pessimists who, like me, are filled with dread. Pope Leo XIV has decried the threats AI poses to “human dignity, labor and justice”. Bill Gates has written about his fear that “if we solved big problems like hunger and disease, and the world kept getting more peaceful: What purpose would humans have then?” And Douglas Hofstadter, the computer scientist and author of Gödel, Escher, Bach, has spoken eloquently of his terror and depression at “an oncoming tsunami that is going to catch all of humanity off guard.”

Who should we believe? The optimists with their bright visions of a world without work, or the pessimists who fear the end of a key source of meaning in human life?


I was brought up, maybe like you, to value hard work and achievement. In our house, scientists were heroes, and discoveries grand prizes of life. I was a diligent, obedient kid, and eagerly imbibed what I was taught. I came to feel that one way a person’s life could go well was to make a discovery, to figure something out.

I had the sense already then that geographical discovery was played out. I loved the heroes of the great Polar Age, but I saw them—especially Roald Amundsen and Robert Falcon Scott—as the last of their kind. In December 1911, Amundsen reached the South Pole using skis and dogsleds. Scott reached it a month later, in January 1912, after ditching the motorized sleds he’d hoped would help, and man-hauling the rest of the way. As the black dot of Amundsen’s flag came into view on the ice, Scott was devastated to reach this “awful place”, “without the reward of priority”. He would never make it back.

Scott’s motors failed him, but they spelled the end of the great Polar Age. Even Amundsen took to motors on his return: in 1924, he made a failed attempt for the North Pole in a plane, and, in 1926, he successfully flew over it, in a dirigible. Already by then, the skis and dogsleds of the decade before were outdated heroics of a bygone world.

We may be living now in a similar twilight age for human exploration in the realm of ideas. Akshay Venkatesh, whose discoveries earned him the 2018 Fields Medal, mathematics’ highest honor, has written that the “mechanization of our cognitive processes will alter our understanding of what mathematics is”. Terry Tao, a 2006 Fields Medalist, expects that in just two years AI will be a copilot for working mathematicians. He envisions a future where thousands of theorems are proven all at once by mechanized minds.

Now, I don’t know any more than the next person where our current technology is headed, or how fast. The core of my dread isn’t based on the idea that human redundancy will come in two years rather than twenty, or, for that matter, two hundred. It’s a more abstract dread, if that’s a thing, dread about what it would mean for human values, or anyway my values, if automation “succeeds”: if all mathematics—and, indeed all work—is done by motor, not by human hands and brains.

A world like that wouldn’t be good news for my childhood dreams. Venkatesh and Tao, like Amundsen and Scott, live meaningful lives, lives of purpose. But worthwhile discoveries like theirs are a scarce resource. A territory, once seen, can’t be seen first again. If mechanized minds consume all the empty space on the intellectual map, lives dedicated to discovery won’t be lives that humans can lead.

The right kind of pessimist sees here an important argument for dread. If discovery is valuable in its own right, the loss of discovery could be an irreparable loss for humankind.

A part of me would like this to be true. But over these last strange years, I’ve come to think it’s not. What matters, I now think, isn’t being the first to figure something out, but the consequences of the discovery: the joy the discoverer gets, the understanding itself, or the real life problem their knowledge solves. Alexander Fleming discovered penicillin, and through that work saved thousands, perhaps millions of lives. But if it were to emerge, in the annals of an outlandish future, that an alien discovered penicillin thousands of years before Fleming did, we wouldn’t think that Fleming’s life was worse, just because he wasn’t first. He eliminated great suffering from human life; the alien discoverer, if they’re out there, did not. So, I’ve come to see, it’s not discoveries themselves that matter. It’s what they bring about.


But the advance of automation would mean the end of much more than human discovery. It could mean the end of all necessary work. Already in 1920, the Czech playwright Karel Capek asked what a world like that would mean for the values in human life. In the first act of R.U.R.—the play which introduced the modern use of the word “robot”—Capek has Henry Domin, the manager of Rossum’s Universal Robots (the R.U.R. of the title), offer his corporation’s utopian pitch. “In ten years”, he says, their robots will “produce so much corn, so much cloth, so much everything” that “There will be no poverty.” “Everybody will be free from worry and liberated from the degradation of labor.” The company’s engineer, Alquist, isn’t convinced. Alquist (who, incidentally, ten years later, will be the only human living, when the robots have killed the rest) retorts that “There was something good in service and something great in humility”, “some kind of virtue in toil and weariness”.

Service—work that meets others’ significant needs and wants—is, unlike discovery, clearly good in and of itself. However we work—as nurses, doctors, teachers, therapists, ministers, lawyers, bankers, or, really, anything at all—working to meet others’ needs makes our own lives go well. But, as Capek saw, all such work could disappear. In a “post-instrumental” world, where people are comparatively useless and the bots meet all our important needs, there would be no needed work for us to do, no suffering to eliminate, no diseases to cure. Could the end of such work be a better reason for dread?

The hardline pessimists say that it is. They say that the end of all needed work would not only be a loss of some value to humanity, as everyone should agree. For them it would be a loss to humanity on balance, an overall loss that couldn’t be compensated in another way.

I feel a lot of pull to this pessimistic thought. But once again, I’ve come to think it’s wrong. For one thing, pessimists often overlook just how bad most work actually is. In May 2021, Luo Huazhang, a 31-year-old ex-factory worker in Sichuan, wrote a viral post, entitled “Lying Flat is Justice”. Luo had searched at length for a job that, unlike his factory job, would allow him time for himself, but he couldn’t find one. So he quit, biked to Tibet and back, and commenced his lifestyle of lying flat, doing what he pleased, reading philosophy, contemplating the world. The idea struck a chord with overworked young Chinese, who, it emerged, did not find “something great” in their “humility”. The movement inspired memes, selfies flat on one’s back, and even an anthem.

That same year, as the Great Resignation in the United States took off, the subreddit r/antiwork played to similar discontent. Started in 2013, under the motto “Unemployment for all, not only the rich!”, the forum went viral in 2021, starting with a screenshot of a quitting worker’s texts to his supervisor (“No thanks. Have a good life”), and culminating in labor actions, first supporting striking workers at Kellogg’s by spamming their job application site, and then attempting to support a similar strike at McDonald’s. It wasn’t just young Chinese who hated their jobs.

In Automation and Utopia: Human Flourishing in a World without Work, the Irish lawyer and philosopher John Danaher imagines an antiwork techno-utopia, with plenty of room for lying flat. As Danaher puts it: “Work is bad for most people most of the time.” “We should do what we can to hasten the obsolescence of humans in the arena of work.”

The young Karl Marx would have seen both Domin’s and Danaher’s utopias as a catastrophe for human life. In his notebooks from 1844, Marx describes an ornate and almost epic process, where, by meeting the needs of others through production, we come to recognize the other in ourselves, and through that recognition, come at last to self-consciousness, the full actualization of our human nature. The end of needed work, for the Marx of these notes, would be the impossibility of fully realizing our nature, the end, in a way, of humanity itself.

But such pessimistic lamentations have come to seem to me no more than misplaced machismo. Sure, Marx’s and my culture, the ethos of our post-industrial professional class, might make us regret a world without work. But we shouldn’t confuse the way two philosophers were brought up with the fundamental values of human life. What stranger narcissism could there be than bemoaning the end of others’ suffering, disease, and need, just because it deprives you of the chance to be a hero?


The first summer after the release of ChatGPT—the first summer of my fits of dread—I stayed with my in-laws in Val Camonica, a valley in the Italian alps. The houses in their village, Sellero, are empty and getting emptier; the people on the streets are old and getting older. The kids that are left—my wife’s elementary school class had, even then, a full complement of four—often leave for better lives. But my in-laws are connected to this place, to the houses and streets where they grew up. They see the changes too, of course. On the mountains above, the Adamello, Italy’s largest glacier, is retreating faster every year. But while the shows on Netflix change, the same mushrooms appear in the summer, and the same chestnuts are collected in the fall.

Walking in the mountains of Val Camonica that summer, I tried to find parallels for my sense of impending loss. I thought about William Shanks, a British mathematician who calculated π to 707 digits by hand in 1873 (he made a mistake at 527; almost 200 digits were wrong). He later spent years of his life, literally years, on a table of the reciprocals of the primes up to one-hundred and ten thousand, calculating in the morning by hand, and checking it over in the afternoon. That was his life’s work. Just sixty years after his death, though, already in the 1940s, the table on which his precious mornings were spent, the few mornings he had on this earth, could be made by a machine in a day.

I feel sad thinking about Shanks, but I don’t feel grief for the loss of calculation by hand. The invention of the typewriter, and the death of handwritten notes seemed closer to the loss I imagined we might feel. Handwriting was once a part of your style, a part of who you were. With its decline some artistry, a deep and personal form of expression, may be lost. When the bots help with everything we write, couldn’t we too lose our style and voice?

But more than anything I thought of what I saw around me: the slow death of the dialects of Val Camonica and the culture they express. Chestnuts were at one time so important for nutrition here, that in the village of Paspardo, a street lined with chestnut trees is called “bread street” (“Via del Pane”). The hyper-local dialects of the valley, outgrowths sometimes of a single family’s inside jokes, have words for all the phases of the chestnut. There’s a porridge made from chestnut flour that, in Sellero goes by ‘skelt’, but is ‘pult’ in Paspardo, a cousin of ‘migole’ in Malonno, just a few villages away. Boiled, chestnuts are tetighe; dried on a grat, biline or bascocc, which, seasoned and boiled become broalade. The dialects don’t just record what people eat and ate; they recall how they lived, what they saw, and where they went. Behind Sellero, every hundred-yard stretch of the walk up to the cabins where the cows were taken to graze in summer, has its own name. Aiva Codaola. Quarsanac. Coran. Spi. Ruc.

But the young people don’t speak the dialect anymore. They go up to the cabins by car, too fast to name the places along the way. They can’t remember a time when the cows were taken up to graze. Some even buy chestnuts in the store.

Grief, you don’t need me to tell you, is a complicated beast. You can grieve for something even when you know that, on balance, it’s good that it’s gone. The death of these dialects, of the stories told on summer nights in the mountains with the cows, is a loss reasonably grieved. But you don’t hear the kids wishing more people would be forced to stay or speak this funny-sounding tongue. You don’t even hear the old folks wishing they could go back fifty years—in those days it wasn’t so easy to be sure of a meal. For many, it’s better this way, not the best it could be, but still better, even as they grieve what they stand to lose and what they’ve already lost.

The grief I feel, imagining a world without needed work, seems closest to this kind of loss. A future without work could be much better than ours, overall. But, living in that world, or watching as our old ways passed away, we might still reasonably grieve the loss of the work that once was part of who we were.


In the last chapter of Edith Wharton’s Age of Innocence, Newland Archer contemplates a world that has changed dramatically since, thirty years earlier, before these newfangled telephones and five-day trans-Atlantic ships, he renounced the love of his life. Awaiting a meeting that his free-minded son Dallas has organized with Ellen Olenska, the woman Newland once loved, he wonders whether his son, and this whole new age, can really love the way he did and does. How could their hearts beat like his, when they’re always so sure of getting what they want?

There have always been things to grieve about getting old. But modern technology has given us new ways of coming to be out of date. A generation born in 1910 did their laundry in Sellero’s public fountains. They watched their grandkids grow up with washing machines at home. As kids, my in-laws worked with their families to dry the hay by hand. They now know, abstractly, that it can all be done by machine. Alongside newfound health and ease, these changes brought, as well, a mix of bitterness and grief: grief for the loss of gossip at the fountains or picnics while bringing in the hay; and also bitterness, because the kids these days just have no idea how easy they have it now.

As I look forward to the glories that, if the world doesn’t end, my grandkids might enjoy, I too feel prospective bitterness and prospective grief. There’s grief, in advance, for what we now have that they’ll have lost: the formal manners of my grandparents they’ll never know, the cars they’ll never learn to drive, and the glaciers that will be long gone before they’re born. But I also feel bitter about what we’ve been through that they won’t have to endure: small things like folding the laundry, standing in security lines or taking out the trash, but big ones too—the diseases which will take our loved ones that they’ll know how to cure.

All this is a normal part of getting old in the modern world. But the changes we see could be much faster and grander in scale. Amodei of Anthropic speculates that a century of technological change could be compressed into the next decade, or less. Perhaps it’s just hype, but—what if it’s not? It’s one thing for a person to adjust, over a full life, to the washing machine, the dishwasher, the air-conditioner, one by one. It’s another, in five years, to experience the progress of a century. Will I see a day when childbirth is a thing of the past? What about sleep? Will our ‘descendants’ have bodies at all?

And this round of automation could also lead to unemployment unlike any our grandparents saw. Worse, those of us working now might be especially vulnerable to this loss. Our culture, or anyway mine—professional America of the early 21st century—has apotheosized work, turning it into a central part of who we are. Where others have a sense of place—their particular mountains and trees—we’ve come to locate ourselves with professional attainment, with particular degrees and jobs. For us, ‘workists’ that so many of us have become, technological displacement wouldn’t just be the loss of our jobs. It would be the loss of a central way we have of making sense of our lives.

None of this will be a problem for the new generation, for our kids. They’ll know how to live in a world that could be—if things go well—far better overall. But I don’t know if I’d be able to adapt. Intellectual argument, however strong, is weak against the habits of years. I fear they’d look at me, stuck in my old ways, with the same uncomprehending look that Dallas Archer gives his dad, when Newland announces that he won’t go see Ellen Olenska, the love of his life, after all. “Say”, as Newland tries to explain to his dumbfounded son, “that I’m old fashioned, that’s enough.”


And yet, the core of my dread is not about aging out of work before my time. I feel closest to Douglas Hofstadter, the author of Gödel, Escher, Bach. His dread, like mine, isn’t only about the loss of work today, or the possibility that we’ll be killed off by the bots. He fears that even a gentle superintelligence will be “as incomprehensible to us as we are to cockroaches.”

Today, I feel part of our grand human projects—the advancement of knowledge, the creation of art, the effort to make the world a better place. I’m not in any way a star player on the team. My own work is off in a little backwater of human thought. And I can’t understand all the details of the big moves by the real stars. But even so, I understand enough of our collective work to feel, in some small way, part of our joint effort. All that will change. If I were to be transported to the brilliant future of the bots, I wouldn’t understand them or their work enough to feel part of the grand projects of their day. Their work would have become, to me, as alien as ours is to a roach.


But I’m still persuaded that the hardline pessimists are wrong. Work is far from the most important value in our lives. A post-instrumental world could be full of much more important goods—from rich love of family and friends, to new undreamt-of works of art—which would more than compensate the loss of value from the loss of our work.

Of course, even the values that do persist may be transformed in almost unrecognizable ways. In Deep Utopia: Life and Meaning in a Solved World, the futurist and philosopher Nick Bostrom imagines how things might look. In one of the most memorable sections of the book—right up there with an epistolary novella about the exploits of Pignolius the pig (no joke!)—Bostrom says that even child-rearing may be something that we, if we love our children, would come to forego. In a truly post-instrumental world, a robot intelligence could do better for your child, not only in teaching the child to read, but also in showing unbreakable patience and care. If you’ll snap at your kid, when the robot would not, it would only be selfishness for you to get in the way.

It’s a hard question whether Bostrom is right. At least some of the work of care isn’t like eliminating suffering or ending mortal disease. The needs or wants are small-scale stuff, and the value we get from helping each other might well outweigh the fact that we’d do it worse than a robot could.

But even supposing Bostrom is right about his version of things, and we wouldn’t express our love by changing diapers, we could still love each other. And together with our loved ones and friends, we’d have great wonders to enjoy. Wharton has Newland Archer wonder at five-day transatlantic ships. But what about five-day journeys to Mars? These days, it’s a big deal if you see the view from Everest with your own eyes. But Olympus Mons on Mars is more than twice as tall.

And it’s not just geographical tourism that could have a far expanded range. There’d be new journeys of the spirit as well. No humans would be among the great writers or sculptors of the day, but the fabulous works of art a superintelligence could make could help to fill our lives. Really, for almost any aesthetic value you now enjoy—sentimental or austere, minute or magnificent, meaningful or jocular—the bots would do it much better than we have ever done.

Humans could still have meaningful projects, too. In 1976, about a decade before any of Altman, Amodei or even I were born, the Canadian philosopher Bernhard Suits argued that “voluntary attempts to overcome unnecessary obstacles” could give people a sense of purpose in a post-instrumental world. Suits calls these “games”, but the name is misleading; I prefer “artificial projects”. The projects include things we would call games like chess, checkers and bridge, but also things we wouldn’t think of as games at all, like Amundsen’s and Scott’s exploits to the Pole. Whatever we call them, Suits—who’s followed here explicitly by Danaher, the antiwork utopian and, implicitly, by Altman and Amodei—is surely right: even as things are now, we get a lot of value from projects we choose, whether or not they meet a need. We learn to play a piece on the piano, train to run a marathon, or even fly to Antartica to “ski the last degree” to the Pole. Why couldn’t projects like these become the backbone of purpose in our lives?

And we could have one real purpose, beyond the artificial ones, as well. There is at least one job that no machine can take away: the work of self-fashioning, the task of becoming and being ourselves. There’s an aesthetic accomplishment in creating your character, an artistry of choice and chance in making yourself who you are. This personal style includes not just wardrobe or tattoos, not just your choice of silverware or car, but your whole way of being, your brand of patience, modesty, humor, rage, hobbies and tastes. Creating this work of art could give some of us something more to live for.


Would a world like that leave any space for human intellectual achievement, the stuff of my childhood dreams? The Buddhist Pali Canon says that “All conditioned things are impermanent—when one sees this with wisdom, one turns away from suffering.” Apparently, in this text, the intellectual achievement of understanding gives us a path out of suffering. To arrive at this goal, you don’t have to be the first to plant your flag on what you’ve understood; you just have to get there.

A secular version of this idea might hold, more simply, that some knowledge or understanding is good in itself. Maybe understanding the mechanics of penicillin matters mainly because of what it enabled Fleming and others to do. But understanding truths about the nature of our existence, or even mathematics, could be different. That sort of understanding plausibly is good in its own right, even if someone or something has gotten there first.

Venkatesh the Fields Medalist seems to suggest something like this for the future of math. Perhaps we’ll change our understanding of the discipline, so that it’s not about getting the answers, but instead about human understanding, the artistry of it perhaps, or the miracle of the special kind of certainty that proof provides.

Philosophy, my subject, might seem an even more promising place for this idea. For some, philosophy is a “way of life”. The aim isn’t necessarily an answer, but constant self-examination for its own sake. If that’s the point, then in the new world of lying flat, there could be a lot of philosophy to do.

I don’t myself accept this way of seeing things. For me, philosophy aims at the truth as much as physics does. But I of course agree that there are some truths that it’s good for us to understand, whether or not we get there first. And there could be other parts of philosophy that survive for us, as well. We need to weigh the arguments for ourselves, and make up our own minds, even if the work of finding new arguments comes to belong to a machine.

I’m willing to believe, and even hope that future people will pursue knowledge and understanding in this way. But I don’t find, here, much consolation for my personal grief. I was trained to produce knowledge, not merely to acquire it. In the hours when I’m not teaching or preparing to teach, my job is to discover the truth. The values I imbibed—and I told you I was an obedient kid—held that the prize goes for priority.

Thinking of this world where all we learn is what the bots have discovered first, I feel sympathy with Lee Sedol, the champion Go player who retired after his defeat by Google’s AlphaGo in 2016. For him, losing to AI “in a sense, meant my entire world was collapsing”. “Even if I become the number one, there is an entity that cannot be defeated.” Right or wrong, I would feel the same about my work, in a world with an automated philosophical champ.

But Sedol and I are likely just out of date models, with values that a future culture will rightly revise. It’s been more than twenty years since Garry Kasparov lost to IBM’s Deep Blue, but chess has never been more popular. And this doesn’t seem some new-fangled twist of the internet age. I know of no human who quit the high-jump after the invention of mechanical flight. The Greeks sprinted in their Olympics, though they had, long before, domesticated the horse. Maybe we too will come to value the sport of understanding with our own brains.


Frankenstein, Mary Shelley’s 1818 classic of the creations-kill-creator genre, begins with an expedition to the North Pole. Robert Walton hopes to put himself in the annals of science and claim the Pole for England, when he comes upon Victor Frankenstein, floating in the Arctic Sea. It’s only once Frankenstein warms up, that we get into the story everyone knows. Victor hopes he can persuade Walton to turn around, by describing how his own quest for knowledge and glory went south.

Frankenstein doesn’t offer Walton an alternative way of life, a guide for living without grand goals. And I doubt Walton would have been any more personally consoled by the glories of a post-instrumental future than I am. I ended up a philosopher, but I was raised by parents who, maybe like yours, hoped for doctors or lawyers. They saw our purpose in answering real needs, in, as they’d say, contributing to society. Lives devoted to families and friends, fantastic art and games could fill a wondrous future, a world far better than it has ever been. But those aren’t lives that Walton or I, or our parents for that matter, would know how to be proud of. It’s just not the way we were brought up.

For the moment, of course, we’re not exactly short on things to do. The world is full of grisly suffering, sickness, starvation, violence, and need. Frankenstein is often remembered with the moral that thirst for knowledge brings ruination, that scientific curiosity killed the cat. But Victor Frankenstein makes a lot of mistakes other than making his monster. His revulsion at his creation persistently prevents him, almost inexplicably, from feeling the love or just plain empathy that any father should. On top of all we have to do to help each other, we have a lot of work to do, in engineering as much as empathy, if we hope to avoid Frankenstein’s fate.

But even with these tasks before us, my fits of dread are here to stay. I know that the post-instrumental world could be a much better place. But its coming means the death of my culture, the end of my way of life. My fear and grief about this loss won’t disappear because of some choice consolatory words. But I know how to relish the twilight too. I feel lucky to live in a time where people have something to do, and the exploits around me seem more poignant, and more beautiful, in the dusk. We may be some of the last to enjoy this brief spell, before all exploration, all discovery, is done by fully automated sleds.

n-Category Café: (BT) Diversity from (LC) Diversity

Guest post by Mark Meckes

Around 2010, in papers that both appeared in print in 2012, two different mathematical notions were introduced and given the name “diversity”.

One, introduced by Tom Leinster and Christina Cobbold, is already familiar to regular readers of this blog. Say $X$ is a finite set, and for each $x, y \in X$ we have a number $Z(x,y) = Z(y,x) \in [0,1]$ that specifies how “similar” $x$ and $y$ are. (Typically we also assume $Z(x,x) = 1$.) Fix a parameter $q \in [0,\infty]$. If $p$ is a probability distribution on $X$, then the quantity
$$D_q^Z(p) = \left(\sum_{x \in \mathrm{supp}(p)} \left( \sum_{y \in \mathrm{supp}(p)} Z(x,y)\, p(y)\right)^{q-1} p(x)\right)^{1/(1-q)}$$
(with the cases $q = 1, \infty$ defined by taking limits) can be interpreted as the “effective number of points” in $X$, taking into account both the similarities between points as quantified by $Z$ and the weights specified by $p$. Its logarithm $\log D_q^Z(p)$ is a refinement of the $q$-Rényi entropy of $p$. The main motivating example is when $X$ is a set of species of organisms present in an ecosystem, and $D_q^Z(p)$ quantifies the “effective number of species” in $X$, accounting for both similarities between species and their relative abundances. This family of quantities turns out to subsume many of the diversity measures previously introduced in the theoretical ecology literature, and they are now often referred to as Leinster–Cobbold diversities.
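For readers who like to compute along, here is a direct numerical transcription of this definition (a sketch of my own, valid for $q \notin \{1, \infty\}$, with a made-up similarity matrix):

```python
import numpy as np

def lc_diversity(Z, p, q):
    """Leinster-Cobbold diversity D_q^Z(p), for q not in {1, infinity}."""
    s = p > 0                                # sums run over the support of p
    ordinariness = Z[np.ix_(s, s)] @ p[s]    # inner sum over y
    return (p[s] @ ordinariness ** (q - 1)) ** (1 / (1 - q))

# three equally abundant species, two of them very similar
Z = np.array([[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
p = np.full(3, 1/3)
print(lc_diversity(Z, p, q=2))   # about 1.73 "effective species", not 3
```

With two of the three species nearly identical, the effective number of species sits well below 3, as it should.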

The parameter $q$ determines how much $D_q^Z(p)$ counts the very “rare” points (those for which $p(x)$ is very small). An interesting question from an ecological point of view is: given $X$ and $Z$, which probability distribution $p$ maximizes the diversity $D_q^Z(p)$? It turns out that the answer is independent of $q$. Moreover, if $X$ is a metric space and $Z(x,y) = e^{-d(x,y)}$, this maximum diversity
$$D(X) := \max_p D_q^Z(p)$$
is an isometric invariant closely related to the magnitude of $X$. It also extends in a natural way to compact metric spaces.

Independently, David Bryant and Paul Tupper defined a diversity on a set $X$ to be a $[0,\infty)$-valued function $\delta$ on the finite subsets of $X$ which satisfies:

  • $\delta(A) = 0$ if $A$ has at most one element, and

  • $\delta(A \cup B) \le \delta(A \cup C) + \delta(C \cup B)$ whenever $C \neq \emptyset$.

I will refer to a diversity in this sense as a BT diversity. If $\delta$ were defined only on sets with at most two elements, this would amount to the definition of a metric. In fact, if $d$ is a metric on $X$, then
$$\delta(A) = \mathrm{diam}(A) := \max_{a,b \in A} d(a,b)$$
defines a BT diversity on $X$, so BT diversities are actually a generalization of metrics.

Here as well, the motivation for the name “diversity” comes from an example in theoretical ecology: suppose $X$ is a set of species in a phylogenetic tree $T$. Define $\delta(A)$ to be the length of the smallest subtree of $T$ containing $A$. Then $\delta$ is a BT diversity, known in the literature as phylogenetic diversity. However, just as with the maximum diversity discussed above, most of the subsequent work on BT diversities has focused on geometric examples.

So we now have two seemingly quite different geometric notions, introduced about the same time, going by strikingly similar names for conceptually similar reasons. One can’t help wondering, do they have something to do with each other? In particular, could maximum (LC) diversity be an example of a BT diversity?

In a new paper with Gautam Ashwarya, Dongbin Li, and Mokshay Madiman, we show that, after a minor tweak, maximum diversity does give rise to a BT diversity. The minor tweak is necessary to handle the first condition in the definition of BT diversity: if $X$ is a metric space and $x \in X$, it’s easy to check that $D(\{x\}) = 1$, whereas a BT diversity must satisfy $\delta(\{x\}) = 0$. This can be dealt with in the simplest imaginable way:

Theorem 1. Let $X$ be a metric space. For each nonempty finite $A \subseteq X$ set $\delta(A) = D(A) - 1$, and define also $\delta(\emptyset) = 0$. Then $\delta$ is a BT diversity on $X$.

(In the paper itself, we adopt the term complexity when referring to the quantities $\log D_q^Z(p)$ and $\log D(X)$, and state most of the results in terms of complexity instead of maximum diversity; we further deduce from Theorem 1 that the complexity $\log D(X)$ is also a BT diversity. This terminology is used partly to cut down on the potential confusion created by using “diversity” in multiple ways. It also alludes to the relationship between $\log D_q^Z(p)$ and Rényi entropy, which is widely understood as a measure of “complexity”. Further connections between LC complexity and Rényi entropy are the subject of forthcoming work that I hope to be able to tell you more about soon! But for the remainder of this blog post I’ll stick to the maximum diversity formulation.)
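Theorem 1 is also easy to spot-check numerically. The sketch below (my own, not code from the paper) uses the fact quoted above that the maximizing $p$ is independent of $q$: computing at $q = 2$, where $D_2^Z(p) = 1/(p^\top Z p)$, turns maximum diversity into a small quadratic program.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

def max_diversity(points):
    """Maximum diversity D(A) of a finite subset of Euclidean space,
    with Z(x, y) = exp(-d(x, y)).  The maximizing p is independent of q,
    so we work at q = 2, where D_2^Z(p) = 1 / (p^T Z p)."""
    Z = np.exp(-cdist(points, points))
    n = len(points)
    res = minimize(lambda p: p @ Z @ p, np.full(n, 1 / n),
                   bounds=[(0, 1)] * n,
                   constraints={"type": "eq", "fun": lambda p: p.sum() - 1})
    return 1 / res.fun

def delta(*sets):   # the BT candidate of Theorem 1
    return max_diversity(np.vstack(sets)) - 1

# delta(A u B) <= delta(A u C) + delta(C u B) for random planar point sets
rng = np.random.default_rng(1)
A, B, C = (rng.normal(size=(4, 2)) for _ in range(3))
print(delta(A, B) <= delta(A, C) + delta(C, B))   # True, as Theorem 1 demands
```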

Interestingly, maximum diversity has some properties that are quite nice and natural, but turn out to make it intriguingly different from the heretofore most thoroughly studied BT diversities. For example, $D = 1 + \delta$ has the following subadditivity property, which is not shared by the functional $1 + \mathrm{diam}$:

Theorem 2. Let $X$ be a metric space, and let $A_1, \ldots, A_n \subseteq X$ be compact subsets. Then
$$D\left(\bigcup_{i=1}^n A_i \right) \le \sum_{i=1}^n D(A_i).$$

Maximum diversity actually satisfies a much stronger property called fractional subadditivity, which arises naturally in inequalities for entropy. Another special case of fractional subadditivity is the following.

Theorem 3. Let $X = \{x_1, \ldots, x_n\}$ be a finite metric space. Then
$$\frac{D(X)}{n} \le \frac{1}{n} \sum_{i=1}^n \frac{D(X \setminus \{x_i\})}{n-1}.$$

Theorem 3 can be interpreted as saying that the “complexity per element” of $X$ is at most the average complexity per element of a randomly chosen subset of cardinality $n-1$. This captures the natural intuition that as the size of a metric space increases, its complexity per element decreases.

In the setting of $\mathbb{R}^n$, many examples of BT diversities are homogeneous, in the sense that $\delta(\lambda A) = \lambda \delta(A)$ for all $\lambda \ge 0$ and nonempty finite $A \subseteq \mathbb{R}^n$, and either sublinear, meaning homogeneous and also satisfying
$$\delta(A + B) \le \delta(A) + \delta(B),$$
or else linear, where we have equality in the condition above. For example, the diameter is a sublinear diversity. (Diversities with these properties are the focus of recent work by Bryant and Tupper.)

By contrast, maximum diversity has no simple homogeneity property; in fact its complex behavior with respect to scaling is part of what gives it such rich geometric interest. And at least in one dimension, the diversity $\delta = \log D$ satisfies the following superlinearity properties.

Theorem 4. Let $\delta$ be the diversity $\delta = \log D$ defined on compact subsets of $\mathbb{R}$. Then
$$\delta(A + B) \ge \delta(A) + \delta(B)$$
and
$$\delta(\lambda A + (1-\lambda)B) \ge \lambda \delta(A) + (1-\lambda) \delta(B)$$
for every $0 \le \lambda \le 1$ and nonempty compact $A, B \subseteq \mathbb{R}$.

The first inequality in Theorem 4 can be regarded as a generalization of the Cauchy–Davenport inequality in $\mathbb{R}$, and the second as a version of the Brunn–Minkowski inequality in $\mathbb{R}$. (In fact, since Lebesgue measure can be recovered from maximum diversity, it implies the Brunn–Minkowski inequality in $\mathbb{R}$.) It is an open question, for which we know some partial results, whether Theorem 4 can be extended to higher dimensions.

In conclusion, our results make (at least) the following points:

  • The seemingly independent mathematical notions of diversity introduced by Leinster and Cobbold on the one hand, and Bryant and Tupper on the other hand, are actually closely connected.

  • Maximum diversity, in the sense of LC diversities, leads to a geometrically interesting example of a BT diversity whose behavior is quite different from many of the previously studied examples of BT diversities.

  • Maximum diversity, at least in certain contexts, satisfies a number of inequalities which extend important classical inequalities, and it would be especially interesting to push this line of inquiry further.

Please read the paper itself for more detail and other remarks (it’s short!).

August 04, 2025

Clifford JohnsonHarvest

There’s a lot of joyful knife-work in my future. #bolognese #summersalad –cvj

The post Harvest appeared first on Asymptotia.

John BaezThe Kepler Problem (Part 9)

Today I want to make a little digression into the quaternions. We won’t need this for anything later—it’s just for fun. But it’s quite beautiful.

We saw in Part 8 that if we take the spin of the electron into account, we can think of bound states of the hydrogen atom as spinor-valued functions on the 3-sphere. Here a ‘spinor’ is a pair of complex numbers.

But we can also think of a spinor as a quaternion! And we can think of the 3-sphere as the unit sphere in the quaternions! So bound states of hydrogen have a nice quaternionic description.

We can go further using quaternionic analysis.

It took a long time for people to figure out the best generalization of complex analysis to the quaternions. Complex analytic functions are incredibly nice, and important in physics. But when you try to generalize them to ‘quaternion analytic functions’, your first few guesses are unlikely to work well. A guy named Rudolf Fueter figured out the right definition:

• Rudolf Fueter, Über die analytische Darstellung der regulären Funktionen einer Quaternionenvariablen, Commentarii Mathematici Helvetici 8 (1936), 371–378.

More recently, some very good mathematical physicists have been further developing this subject:

• Anthony Sudbery, Quaternionic analysis, Mathematical Proceedings of the Cambridge Philosophical Society 85 (1979), 199–225.

• Igor Frenkel and Matvei Libine, Quaternionic analysis, representation theory and physics, Advances in Mathematics 217 (2008), 1806–1877.

Using this, we can describe a lot of hydrogen atom bound states as quaternion analytic functions! And even better, the Dirac operator on spinor-valued functions on the 3-sphere, which I described in Part 8, has a nice description in these terms.

To be a bit more precise: we start by describing a bound state of hydrogen as a function

\psi \colon S^3 \to \mathbb{H}

obeying

\int_{S^3} |\psi(q)|^2 < \infty

Here \mathbb{H} is the quaternions and S^3 is the sphere of quaternions with length 1, which forms a group isomorphic to \text{SU}(2). But we’ll show that the functions in a dense subspace of this sort extend to functions \psi \colon \mathbb{H} - \{0\} \to \mathbb{H} that obey a quaternionic analogue of the Cauchy–Riemann equations. Remember, those are the equations obeyed by complex analytic functions. So, hydrogen atom bound states are giving us ‘quaternion analytic functions’ on \mathbb{H} - \{0\}.

You’ll notice I removed the point 0 from the quaternions here. That’s because we allow functions that blow up at 0: that is, approach infinity for very small quaternions.

But of course we also allow functions that blow up at ∞: that is, approach infinity for very large quaternions. In fact there’s a nice symmetry here. To make this evident, we can take the quaternions and add on an extra point called ∞. This gives a space called the quaternionic projective line or \mathbb{H}P^1 for short. It’s a 4-sphere, with 0 as the south pole and ∞ as the north pole. The quaternions q with |q| = 1 form the equator of this 4-sphere. This equator is our friend S^3.

All this is just like what people often do with the complex numbers. They take the complex plane and add on an extra point called ∞. This gives a space called the complex projective line or \mathbb{C}P^1. It’s a 2-sphere with 0 as the south pole and ∞ as the north pole. Thus, it’s also called the Riemann sphere.

Anyway, the idea is that, apart from some niggles which I will mention later, bound states of the hydrogen atom are the same as quaternion analytic functions from \mathbb{H}P^1 - \{0,\infty\} to \mathbb{H}.

You can take any one of these states and write it as a linear combination of two: one that blows up only at 0, and one that blows up only at ∞. This has an interesting interpretation. We’ve already seen that bound states of the hydrogen atom are spinor-valued functions on the 3-sphere, and a certain Dirac-like operator D acts on these states. The states that are linear combinations of eigenvectors of D with positive eigenvalues correspond to the analytic functions that blow up only at \infty. And the states that are linear combinations of eigenvectors of D with negative eigenvalues correspond to analytic functions that blow up only at 0.

All this is analogous to familiar things people do in the complex case. The introduction of Frenkel and Libine’s paper explains the analogy.

Okay, let’s get started!

Here is the quaternionic Cauchy–Riemann equation:

\frac{\partial \psi}{\partial q_0} +  i \frac{\partial \psi}{\partial q_1} + j \frac{\partial \psi}{\partial q_2} + k \frac{\partial \psi }{\partial q_3} = 0

Here \psi is some quaternion-valued function defined on some open subset of the quaternions, and q_0, \dots, q_3 are the usual real coordinates on \mathbb{H} for which any quaternion q is of the form

q = q_0 + q_1 i + q_2 j + q_3 k

For any open set O \subseteq \mathbb{H}, people say a function \psi \colon O \to \mathbb{H} is regular if it’s differentiable in the usual real sense and the quaternionic Cauchy–Riemann equation holds. In Theorem 1 of his paper, Sudbery shows that any regular function is infinitely differentiable in the usual real sense, in fact real-analytic.
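
To make “regular” concrete, here is a minimal sympy sketch (my own check, not from the post) verifying that the classical Cauchy–Fueter kernel \bar{q}/|q|^4, a standard example of a regular function on \mathbb{H} - \{0\}, satisfies the quaternionic Cauchy–Riemann equation:

import sympy as sp

q0, q1, q2, q3 = sp.symbols('q0 q1 q2 q3', real=True)

def qmul(x, y):
    # Multiply quaternions given as 4-tuples (a, b, c, d) = a + bi + cj + dk.
    a1, b1, c1, d1 = x
    a2, b2, c2, d2 = y
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

rho = q0**2 + q1**2 + q2**2 + q3**2
psi = (q0/rho**2, -q1/rho**2, -q2/rho**2, -q3/rho**2)   # conj(q)/|q|^4

e = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]  # 1, i, j, k
coords = [q0, q1, q2, q3]

# Cauchy-Riemann operator: d/dq0 + i d/dq1 + j d/dq2 + k d/dq3 (left mult.)
cr = [sp.Integer(0)] * 4
for mu in range(4):
    dpsi = tuple(sp.diff(c, coords[mu]) for c in psi)
    cr = [a + b for a, b in zip(cr, qmul(e[mu], dpsi))]
print([sp.simplify(c) for c in cr])   # [0, 0, 0, 0]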

Let U_k be the space of regular functions on \mathbb{H} - \{0\} that are homogeneous of degree k \in \mathbb{Z}, meaning that

\psi(\alpha q) = \alpha^k \psi(q)  \qquad \qquad \forall q \in \mathbb{H} - \{0\}, \alpha \in \mathbb{R} - \{0\}

Clearly any function \psi \in U_k is determined by its restriction to the unit sphere S^3 \subset \mathbb{H}. But in the proof of his Theorem 7, Sudbery shows something less obvious: the restriction is an eigenfunction of the Dirac-like operator D that I mentioned in Part 8!

To prove this, the trick is to write the quaternionic Cauchy–Riemann operator

\overline{\partial} = \frac{\partial}{\partial q_0} +  i \frac{\partial}{\partial q_1} + j \frac{\partial}{\partial q_2} + k \frac{\partial}{\partial q_3}

in something like polar coordinates, involving a radial derivative but also the operator D that I introduced in Part 8. The radial derivative of a homogeneous function \psi is easy to work out, and then using \overline{\partial} \psi = 0 we can show

D(\psi|_{S^3}) = k \psi|_{S^3}

So, Sudbery shows that

\psi \in U_k \implies D(\psi|_{S^3}) = k \psi|_{S^3}

(although he uses different notation).

We saw last time that the Dirac operator on the 3-sphere is

\partial\!\!\!/ = D + \tfrac{3}{2}

So, we get

\psi \in U_k \implies \partial\!\!\!/(\psi|_{S^3}) = (k + \tfrac{3}{2}) \psi|_{S^3}

With more work (see my paper) we can show the converse: any eigenfunction of the Dirac operator with eigenvalue k + \tfrac{3}{2} is the restriction of a function in U_k.

Thus, each eigenspace of the Dirac operator on the 3-sphere can be seen as the space of all regular functions \psi \colon \mathbb{H} - \{0\} \to \mathbb{H} that are homogeneous of some particular degree.

So, we can think of hydrogen atom bound states, or at least those that are finite linear combinations of energy eigenstates, as regular functions

\psi \colon \mathbb{H} - \{0\} \to \mathbb{H}

And these finite linear combinations are dense in the space of all hydrogen atom bound states!

To summarize in a sensationalistic way: hydrogen is quaternionic!

Nitty-gritty details

I’ve skimmed over some details. Please stop here unless you really love the quaternions. But to get everything from Part 8 to mesh nicely with what we’re doing now, we need to think of spinors as quaternions in a good way. We need to choose an isomorphism of real vector spaces

\alpha \colon \mathbb{C}^2 \xrightarrow{\sim} \mathbb{H}

in such a way that

• multiplication by -i\sigma_1, -i\sigma_2 and -i\sigma_3 on \mathbb{C}^2 correspond to left multiplication by the quaternions i,j and k on \mathbb{H}, and

• multiplication by i on \mathbb{C}^2 corresponds to right multiplication by the quaternion i.

In case you know some algebra and are wondering what’s really going on here, the idea is that \mathbb{H} is both a left and a right module of itself in the usual way. We can make it into a 2-dimensional complex vector space in a unique way such that multiplication by i is right multiplication by the quaternion i. Since left and right multiplication commute, this makes \mathbb{H} into a 2-dimensional complex vector space on which \mathbb{H} acts complex-linearly by left multiplication.

But \mathbb{C}^2 is also a 2-dimensional complex vector space on which \mathbb{H} acts complex-linearly, with i, j, k acting as matrix multiplication by -i \sigma_1, -i \sigma_2, -i \sigma_3.

All this suggests that with these structures chosen, \mathbb{H} and \mathbb{C}^2 are isomorphic as complex vector spaces on which \mathbb{H} acts complex-linearly!

But how do we find such an isomorphism

\alpha \colon \mathbb{C}^2 \xrightarrow{\sim} \mathbb{H} ?

I got confused for a while, but here’s a systematic approach. Suppose we have such an isomorphism. We must have

\alpha(1) = (x,y)

for some numbers x,y \in \mathbb{C}. We want

\alpha(i) = \alpha(1 i) = \alpha(1) i = (ix,iy)

but we also want

\alpha(i) = \alpha(i 1) = -i \sigma_1 \alpha(1) = (-iy,-ix)

(I’m going to skip lots of computational steps and focus on explaining the strategy.) So, we must have

(ix,iy) = (-iy,-ix)

or in other words

y = -x

Because we’re assuming \alpha is complex-linear (where we multiply quaternions on the right by a + bi), we can assume without loss of generality that

x = 1

Then we have

\alpha(1) = (1,-1)

and

\alpha(i) = (i,-i)

But we also must have

\alpha(j) = \alpha(j 1) = -i \sigma_2 \alpha(1) = -i \sigma_2 (1,-1) = (1,1)

and

\alpha(k) = \alpha(k 1) = -i \sigma_3 \alpha(1) = -i \sigma_3 (1,-1) = (-i,-i)

So, we must have

\alpha(a + bi + cj + dk) = (a+bi+c-di,-a-bi+c-di)

Of course we still need to check that this actually works: that it has the desired properties in my bulleted list. But it does.

The formula is not something I was able to instantly guess.
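
Since the derivation above skipped steps, a direct check is reassuring. Here is a minimal numpy sketch (my own verification, not from the post) that this \alpha intertwines left multiplication by i, j, k with -i\sigma_1, -i\sigma_2, -i\sigma_3, and right multiplication by the quaternion i with multiplication by the complex number i:

import numpy as np

def qmul(x, y):
    # Multiply quaternions given as real 4-vectors (a, b, c, d).
    a1, b1, c1, d1 = x
    a2, b2, c2, d2 = y
    return np.array([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                     a1*b2 + b1*a2 + c1*d2 - d1*c2,
                     a1*c2 - b1*d2 + c1*a2 + d1*b2,
                     a1*d2 + b1*c2 - c1*b2 + d1*a2])

def alpha(q):
    a, b, c, d = q
    return np.array([a + 1j*b + c - 1j*d, -a - 1j*b + c - 1j*d])

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

i_q, j_q, k_q = np.eye(4)[1], np.eye(4)[2], np.eye(4)[3]

q = np.random.default_rng(0).normal(size=4)   # a random quaternion

for u, m in [(i_q, s1), (j_q, s2), (k_q, s3)]:
    assert np.allclose(alpha(qmul(u, q)), (-1j*m) @ alpha(q))  # left mult.
assert np.allclose(alpha(qmul(q, i_q)), 1j * alpha(q))         # right mult. by i
print("alpha intertwines both actions")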

August 03, 2025

John BaezThe Kepler Problem (Part 4)

The Kepler problem is the study of a particle moving in an attractive inverse square force. In classical mechanics, this problem shows up when you study the motion of a planet around the Sun in the Solar System. In quantum mechanics, it shows up when you study the motion of an electron around a proton in a hydrogen atom.

In Part 2 we saw that the classical Kepler problem has, besides energy and the three components of angular momentum, three more conserved quantities: the components of the eccentricity vector!

This was discovered long ago, in 1710, by the physicist Jakob Hermann. But thanks to Noether, we now know that in classical mechanics, conserved quantities come from symmetries. In the Kepler problem, conservation of energy comes from time translation symmetry, while conservation of the angular momentum comes from rotation symmetry. Which symmetries give conservation of the eccentricity vector?

As we shall see, these symmetries are rotations in 4-dimensional space. These include the obvious rotations in 3-dimensional space which give angular momentum. The other 4-dimensional rotations act in a much less obvious way, and give the eccentricity vector.

In fact, we’ll see that the Kepler problem can be rephrased in terms of a free particle moving around on a sphere in 4-dimensional space. This is a nice explanation of the 4-dimensional rotation symmetry.

After that we’ll see a second way to rephrase the Kepler problem: in terms of a massless, relativistic free particle moving at the speed of light on a sphere in 4-dimensional space. Our first formulation will not involve relativity. This second will.

All this is very nice. You can read some fun explanations of the first formulation here:

• Greg Egan, The ellipse and the atom.

• John Baez, Planets in the fourth dimension.

But how could you guess this 4-dimensional rotation symmetry if you didn’t know about it already? One systematic approach uses Poisson brackets. I won’t explain these, just dive in and use them!

Remember, the particle in the Kepler problem has various observables, which are all ultimately functions of its position and momentum:

• position: \vec q

• momentum: \vec p

• energy: E = \tfrac{1}{2} p^2 - \tfrac{1}{q}

• angular momentum: \vec L = \vec q \times \vec p

• the eccentricity vector: \vec e = \vec p \times \vec L - \tfrac{\vec q}{q}

I’ll use conventions where the Poisson brackets of the components of position q_k and momentum p_\ell are taken to be

\{q_k,p_\ell\} = \delta_{k\ell}

From this, using the rules for Poisson brackets, we can calculate the Poisson brackets of everything else. For starters:

\{E, L_k\} = \{E, e_k\} = 0

These equations are utterly unsurprising, since they are equivalent to saying that angular momentum \vec L and the eccentricity vector \vec e are conserved. More interestingly, we have

\begin{array}{ccl}  \{L_j, L_k\} &=& \epsilon_{jk\ell} L_\ell  \\  \{e_j, L_k\} &=& \epsilon_{jk\ell} e_\ell \\  \{e_j, e_k \} &=& -2E \epsilon_{jk\ell} L_\ell  \end{array}

where all the indices go from 1 to 3, I’m summing over repeated indices even if they’re both subscripts, and \epsilon_{jk\ell} are the Levi–Civita symbols.
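
If you’d like to check these brackets without grinding through the algebra by hand, here is a minimal sympy sketch (my own, using the convention \{q_k, p_\ell\} = \delta_{k\ell} and the formulas for E, \vec L and \vec e above):

import sympy as sp

q1, q2, q3, p1, p2, p3 = sp.symbols('q1 q2 q3 p1 p2 p3', real=True)
qs, ps = [q1, q2, q3], [p1, p2, p3]

def pb(f, g):
    # Canonical Poisson bracket {f, g}.
    return sum(sp.diff(f, qi)*sp.diff(g, pi) - sp.diff(f, pi)*sp.diff(g, qi)
               for qi, pi in zip(qs, ps))

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

r = sp.sqrt(q1**2 + q2**2 + q3**2)
E = (p1**2 + p2**2 + p3**2)/2 - 1/r
L = cross(qs, ps)                               # angular momentum
pxL = cross(ps, L)
e = [pxL[k] - qs[k]/r for k in range(3)]        # eccentricity vector

print(sp.simplify(pb(L[0], L[1]) - L[2]))       # {L_1, L_2} - L_3 = 0
print(sp.simplify(pb(e[0], L[1]) - e[2]))       # {e_1, L_2} - e_3 = 0
print(sp.simplify(pb(e[0], e[1]) + 2*E*L[2]))   # {e_1, e_2} + 2E L_3 = 0
print(sp.simplify(pb(E, L[0])), sp.simplify(pb(E, e[0])))  # 0 0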

Now, the factor of -2E above is annoying. But on the region of phase space where E < 0—that is, the space of bound states, where the particle carries out an elliptical orbit—we can define a new vector to deal with this annoyance:

\displaystyle{    \vec M = \frac{\vec e}{\sqrt{-2E}} }

Now we easily get

\begin{array}{ccl}  \{L_j, L_k\} &=& \epsilon_{jk\ell} L_\ell  \\  \{L_j, M_k\} &=& \epsilon_{jk\ell} M_\ell \\  \{M_j, M_k \} &=& \epsilon_{jk\ell} L_\ell  \end{array}

This is nicer, but we can simplify it even more if we introduce some new vectors that are linear combinations of \vec L and \vec M, namely half their sum and half their difference:

\vec A = \tfrac{1}{2} (\vec L + \vec M),  \qquad \vec B = \tfrac{1}{2}(\vec L - \vec M)

Then we get

\begin{array}{ccl}  \{ A_j, A_k\} &=&  \epsilon_{jk\ell} A_\ell \\  \{ B_j, B_k\} &=&  \epsilon_{jk\ell} B_\ell  \\  \{ A_j, B_k\} &=& 0  \end{array}

So, the observables A_j and B_k contain the same information as the angular momentum and eccentricity vectors, but now they commute with each other!

What does this mean?

Well, when you’re first learning math the Levi–Civita symbols \epsilon_{jk\ell} may seem like just a way to summarize the funny rules for cross products in 3-dimensional space. But as you proceed, you ultimately learn that \mathbb{R}^3 with its cross product is the Lie algebra of the Lie group \mathrm{SO}(3) of rotations in 3-dimensional space. From this viewpoint, the Levi–Civita symbols are nothing but the structure constants for the Lie algebra \mathfrak{so}(3): that is, a way of describing the bracket operation in this Lie algebra in terms of basis vectors.

So, what we’ve got here are two commuting copies of \mathfrak{so}(3), one having the A_j as a basis and the other having the B_k as a basis, both with the Poisson bracket as their Lie bracket.

A better way to say the same thing is that we’ve got a single 6-dimensional Lie algebra

\mathfrak{so}(3) \oplus \mathfrak{so}(3)

having both the A_j and B_k as basis. But then comes the miracle:

\mathfrak{so}(3) \oplus \mathfrak{so}(3) \cong \mathfrak{so}(4)

The easiest way to see this is to realize that S^3, the unit sphere in 4 dimensions, is itself a Lie group with Lie algebra isomorphic to \mathfrak{so}(3). Namely, it’s the unit quaternions!—or equivalently, the Lie group \mathrm{SU}(2). The Lie algebra of \mathrm{SU}(2) is called \mathfrak{su}(2). Thanks to another miracle, this is isomorphic to \mathfrak{so}(3):

\mathfrak{so}(3) \cong \mathfrak{su}(2)

This will be very important in what’s to come.

Like any Lie group, \mathrm{SU}(2) acts on itself via left and right translations, which commute. But these are actually ways of rotating S^3. So, you get a map of Lie algebras from \mathfrak{so}(3) \oplus \mathfrak{so}(3) to \mathfrak{so}(4), and you can check that this is an isomorphism.

So in this approach, the 4th dimension pops out of the fact that the Kepler problem has conserved quantities that give two commuting copies of \mathfrak{so}(3). By Noether’s theorem, it follows that conservation of angular momentum and the eccentricity vector must come from a hidden symmetry: symmetry under some group whose Lie algebra is \mathfrak{so}(4).

And indeed, it turns out that the group \mathrm{SO}(4) acts on the bound states of the Kepler problem in a way that commutes with time evolution!

But how can we understand this fact?

Historically, it seems that the first explanation was found in the quantum-mechanical context. In 1926, even before Schrödinger came up with his famous equation, Pauli used conservation of angular momentum and the eccentricity vector to determine the spectrum of hydrogen. But I believe he was using what we now call Lie algebra methods, not bringing in the group \mathrm{SO}(4).

In 1935, Vladimir Fock, famous for the ‘Fock space’ in quantum field theory, explained this 4-dimensional rotation symmetry by setting up an equivalence between hydrogen atom bound states and functions on the 3-sphere! In the following year, Valentine Bargmann, later famous for being Einstein’s assistant, connected Pauli and Fock’s work using group representation theory.

All this is quantum mechanics. It seems the first global discussion of this symmetry in the classical context was given by Bacry, Ruegg, and Souriau in 1966, leading to important work by Souriau and Moser in the early 1970s. Since then, much more has been done. You can learn about a lot of it from these two books, which are my constant companions these days:

• Victor Guillemin and Shlomo Sternberg, Variations on a Theme by Kepler, Providence, R.I., American Mathematical Society, 1990.

• Bruno Cordani, The Kepler Problem: Group Theoretical Aspects, Regularization and Quantization, with Application to the Study of Perturbations, Birkhäuser, Boston, 2002.

But let me try to summarize a bit of this material.

One way to understand the \mathrm{SO}(4) symmetry for bound states of the Kepler problem is the result of Hamilton that I explained last time: for a particle moving around an elliptical orbit in the Kepler problem, its momentum moves round and round in a circle.

I’ll call these circles Hamilton’s circles. Hamilton’s circles are not arbitrary circles in \mathbb{R}^3. Using the inverse of stereographic projection, we can map \mathbb{R}^3 to the unit 3-sphere:

\begin{array}{rccl}  f \colon &\mathbb{R}^3 &\to & S^3    \subset \mathbb{R}^4  \\  \\  & \vec p    &\mapsto &  \displaystyle{\left(\frac{p^2 - 1}{p^2 +1}, \frac{2 \vec p}{p^2 + 1}\right).}  \end{array}

This map sends Hamilton’s circles in \mathbb{R}^3 to great circles in S^3. Furthermore, this construction gives all the great circles in S^3 except those that go through the north and south poles, (\pm 1, 0,0,0). These missing great circles correspond to periodic orbits in the Kepler problem where a particle starts with momentum zero, falls straight to the origin, and bounces back the way it came. If we include these degenerate orbits, every great circle on the unit 3-sphere is the path traced out by the momentum in some solution of the Kepler problem.
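
A quick sympy sanity check (my own, not from the post) that f really lands on the unit 3-sphere:

import sympy as sp

p1, p2, p3 = sp.symbols('p1 p2 p3', real=True)
P2 = p1**2 + p2**2 + p3**2
f = [(P2 - 1)/(P2 + 1), 2*p1/(P2 + 1), 2*p2/(P2 + 1), 2*p3/(P2 + 1)]
print(sp.simplify(sum(c**2 for c in f)))   # prints 1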

Let me reemphasize: in this picture, points of S^3 correspond not to positions but to momenta in the Kepler problem. As time passes, these points move along great circles in S^3... but not at constant speed.

How is their dynamics related to geodesic motion on the 3-sphere? We can understand this as follows. In Part 2 we saw that

L^2 + M^2 =  - \frac{1}{2E}

and using the fact that \vec L \cdot \vec M = 0 (so that A^2 = B^2 = \tfrac{1}{4}(L^2 + M^2)), an easy calculation gives

E \; = \; -\frac{1}{8A^2} \; = \; -\frac{1}{8B^2}

In the 3-sphere picture, the observables A_j become functions on the cotangent bundle T^\ast S^3. These functions are just the components of momentum for a particle on S^3, defined using a standard basis of right-invariant vector fields on S^3 \cong \mathrm{SU}(2). Similarly, the observables B_j are the components of momentum using a standard basis of left-invariant vector fields. It follows that

K = 8A^2 = 8B^2

is the Hamiltonian for a nonrelativistic free particle on S^3 with an appropriately chosen mass. Such a particle moves around a great circle on S^3 at constant speed. Since the Kepler Hamiltonian E is a function of K, particles governed by this Hamiltonian move along the same trajectories—but typically not at constant speed!

Both K and the Kepler Hamiltonian E = -1/K are well-defined smooth functions on the symplectic manifold that Souriau dubbed the Kepler manifold:

T^+ S^3 = \{ (x,p) : \; x \in S^3, \, p \in T_x S^3, \, p \ne 0 \}

This is the cotangent bundle of the 3-sphere with the zero cotangent vectors removed, so that E = -1/K is well-defined.

All this is great. But even better, there’s yet another picture of what’s going on, which brings relativity into the game!

We can also think of T^+ S^3 as a space of null geodesics in the Einstein universe: the manifold \mathbb{R} \times S^3 with the Lorentzian metric

dt^2 - ds^2

where dt^2 is the usual Riemannian metric on the real line (‘time’) and ds^2 is the usual metric on the unit sphere (‘space’). In this picture x \in S^3 describes the geodesic’s position at time zero, while the null cotangent vector p + \|p\| dt describes its 4-momentum at time zero. Beware: in this picture two geodesics count as distinct if we rescale p by any positive factor other than 1. But this is good: physically, it reflects the fact that in relativity, massless particles can have different 4-momentum even if they trace out the same path in spacetime.

In short, the Kepler manifold T^+ S^3 also serves as the classical phase space for a free massless spin-0 particle in the Einstein universe!

And here’s the cool part: the Hamiltonian for such a particle is

\sqrt{K} = \sqrt{-1/E}

So it’s a function of both the Hamiltonians we’ve seen before. Thus, time evolution given by this Hamiltonian carries particles around great circles on the 3-sphere… at constant speed, but at a different speed than the nonrelativistic free particle described by the Hamiltonian K.

In the next episode, I’ll quantize this whole story. We’ll get an interesting outlook on the quantum mechanics of the hydrogen atom.

August 02, 2025

John BaezThe Kepler Problem (Part 8)

Now comes the really new stuff. I want to explain how the hydrogen atom is in a certain sense equivalent to a massless spin-½ particle in the ‘Einstein universe’. This is the universe Einstein believed in before Hubble said the universe was expanding! It has a 3-sphere S^3 for space, and this sphere stays the same size as time passes.

Today I’ll just lay the groundwork. To study relativistic spin-½ quantum particles, we need to understand the Dirac operator. So, we need to bring out the geometrical content of what we’ve already done.

The main trick is to see the 3-sphere as the group \text{SU}(2), which acts on itself in two ways, via left and right translations. We get all the rotational symmetries of the 3-sphere this way. In Part 4 we studied operators called A_i and B_i on L^2(S^3), which are the self-adjoint generators of these left and right translations. We saw that

A^2 = B^2

Today we’ll see that A^2 = B^2 is proportional to the Laplacian on the unit 3-sphere!

But then I want to look at spinor fields on the 3-sphere. We can think of elements of L^2(S^3) \otimes \mathbb{C}^2 as spinor fields on the 3-sphere if we trivialize the spinor bundle using the action of \text{SU}(2) as right translations on S^3 \cong \text{SU}(2). You could use left translations, but you have to pick one or the other, and we’ll use right translations.

In some notes from a course he gave at Harvard, Peter Kronheimer used this trick to study the Dirac operator \partial\!\!\!/ on these spinor fields:

• Peter Kronheimer, Bott periodicity and harmonic theory on the 3-sphere.

I’ll explain the geometry behind his computations using some ideas I got from Paul Schwahn. Then I’ll show that the hydrogen atom Hamiltonian, thought of as an operator on L^2(S^3) \otimes \mathbb{C}^2, is

\displaystyle{ H = - \frac{1}{2 (\partial\!\!\!/ - \frac{1}{2})^2} }

Next time we’ll use this to relate hydrogen to the massless relativistic spin-½ particle on the Einstein universe.

Okay, on to business!

The Laplacian on the 3-sphere

Let’s start with the Laplacian on the 3-sphere. From what we’ve already seen, the operators -iB_j are a basis of left-invariant vector fields on S^3. Each vector field -iB_j gives a tangent vector at the identity of \text{SU}(2), namely

-\frac{i}{2} \sigma_j \in \mathfrak{su}(2)

What is the length of this vector if we give \text{SU}(2) the usual Riemannian metric on the unit 3-sphere? Exponentiating this vector we get

\exp(-\frac{i}{2} \sigma_j t)

which is the identity precisely when t is an integer times 4\pi. Since a great circle on the unit sphere has circumference 2\pi, this vector must have length ½. It follows that the vector fields

X_j = -2i B_j

have unit length everywhere, and one can check that they form an orthonormal basis of vector fields on S^3. We thus define the (positive definite) Laplacian on S^3 to be the differential operator

\displaystyle{ \Delta = - \sum_{j = 1}^3 X_j^2 = 4B^2 }

In Part 5 we saw that

\displaystyle{ L^2(S^3) \cong \bigoplus_j V_j \otimes V_j }

where V_j is the spin-j representation of \text{SU}(2). We also saw that

\phi \in V_j \otimes V_j \implies B^2 \phi = j(j+1) \phi

It follows that

\phi \in V_j \otimes V_j \implies \Delta \phi = 4j(j+1) \phi

But chemists like to work with n = 2j+1 instead: they call this the ‘principal quantum number’ for a state of the hydrogen atom. Since

4j(j+1) = 4j^2 + 4j = n^2 - 1

it follows that

\phi \in V_j \otimes V_j \implies \Delta \phi = (n^2 - 1) \phi

so the eigenvalues of the Laplacian on the unit 3-sphere are n^2 - 1 where n ranges over all positive integers.

Tensoring \Delta with the identity we obtain a differential operator on L^2(S^3) \otimes \mathbb{C}^2, which by abuse of notation we again call \Delta. We know from Part 7 that the hydrogen atom Hamiltonian is

\displaystyle{ H = - \frac{1}{8(B^2 + \frac{1}{4})} }

but now we know \Delta = 4B^2, so

\displaystyle{ H = - \frac{1}{2(\Delta + 1)} }

The Dirac operator on the 3-sphere

Next we turn to the Dirac operator.

Up to isomorphism there is only one choice of spin structure on S^3, namely the trivial bundle. To get this we can trivialize the tangent bundle of S^3 \cong \text{SU}(2) using left translations. This lets us identify the oriented orthonormal frame bundle of S^3 with the trivial bundle

S^3 \times \text{SO}(3) \to S^3

This gives a way to identify the spin bundle on S^3 with the trivial bundle

S^3 \times \text{SU}(2) \to S^3

This in turn lets us identify spinor fields on S^3 with \mathbb{C}^2-valued functions.

There are at least two important connections on the tangent bundle of S^3:

• One is the Cartan connection: a vector field is covariantly constant with respect to this connection if and only if it is invariant under left translations on S^3 \cong \text{SU}(2).

• The other is the Levi–Civita connection: the unique torsion-free connection for which parallel translation preserves the metric.

Parallel translation with respect to the Cartan connection also preserves the metric, but the Cartan connection is flat and has torsion, while the Levi–Civita connection is curved and torsion-free.

Each of these connections lifts uniquely to a connection on the spin bundle which then gives a Dirac-like operator. The Cartan connection gives covariant derivative operators \nabla^c_j on L^2(S^3) \otimes \mathbb{C}^2 with

\nabla^c_j = X_j \otimes 1

while the Levi–Civita connection gives covariant derivative operators \nabla_j with

\nabla_j = X_j \otimes 1 + \tfrac{i}{2}(1 \otimes \sigma_j)

We can define a Dirac operator \partial\!\!\!/ on L^2(S^3) \otimes \mathbb{C}^2 using the Levi–Civita connection:

\partial\!\!\!/ = -i (1 \otimes \sigma_j) \nabla_j

Here I’m summing over repeated indices, and I’m not worrying about superscripts versus subscripts because we can raise and lower indices to our heart’s content using the standard metric on the unit 3-sphere.

I should warn you that this Dirac operator has an i in it, to make it self-adjoint! This may be nonstandard, but it will make our life easier.

On the other hand, Kronheimer defined a Dirac-like operator D using the Cartan connection:

D = -i (1 \otimes \sigma_j) \nabla^c_j

An easy calculation shows how \partial\!\!\!/ and D are related:

\begin{array}{ccl}  \partial\!\!\!/ &=& -i (1 \otimes \sigma_j) \nabla_j \\ [3pt]  &=& -i (1 \otimes \sigma_j) \left( \nabla^c_j + \frac{i}{2}(1 \otimes \sigma_j)\right) \\ [3pt]  &=& D + \frac{3}{2}  \end{array}

where we use \sigma_j^2 = 1 and the 3-dimensionality of space.

Let us compute D^2. Using the identities

\sigma_j \sigma_k = \delta_{jk} + i \epsilon_{jk\ell} \sigma_\ell

\epsilon_{jk\ell} X_j X_k = 2 X_\ell

D = -i X_j \otimes \sigma_j

we obtain

\begin{array}{ccl}  D^2 &=& -X_j X_k \otimes \sigma_j \sigma_k \\ [2pt]  &=& -X_j X_k \otimes (\delta_{jk} + i \epsilon_{jk\ell} \sigma_\ell ) \\ [2pt]  &=& \Delta \otimes 1 - 2i X_\ell \otimes \sigma_\ell \\ [2pt]  &=& \Delta - 2 D  \end{array}

It follows that \Delta = D(D+2), so

\Delta + 1 = (D+1)^2 = (\partial\!\!\!/ - \frac{1}{2})^2
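
The only input from the Pauli matrices in this computation is the identity \sigma_j \sigma_k = \delta_{jk} + i \epsilon_{jk\ell} \sigma_\ell, which is easy to confirm numerically. A minimal numpy sketch of my own:

import numpy as np

sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)

eps = np.zeros((3, 3, 3))
for j, k, l in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[j, k, l], eps[k, j, l] = 1, -1          # Levi-Civita symbol

for j in range(3):
    for k in range(3):
        rhs = (j == k)*np.eye(2) + 1j*sum(eps[j, k, l]*sigma[l] for l in range(3))
        assert np.allclose(sigma[j] @ sigma[k], rhs)
print("Pauli identity verified")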

Combining this with our earlier formula for the hydrogen atom Hamiltonian:

\displaystyle{ H = - \frac{1}{2(\Delta + 1)} }

we can now express the Hamiltonian for the hydrogen atom in terms of the Dirac operator on the 3-sphere:

\displaystyle{ H \; = \; - \frac{1}{2(\partial\!\!\!/ - \frac{1}{2})^2} }

This is pretty cool. We will exploit this later.

Further details

That was the main result we’ll need, but while working on this I got interested in understanding the eigenvectors and eigenvalues of the Dirac operator in more detail. Here are some facts about those.

Since \partial\!\!\!/ maps each finite-dimensional subspace

V_j \otimes V_j \otimes \mathbb{C}^2 \subset L^2(S^3) \otimes \mathbb{C}^2

to itself, and it is self-adjoint on these subspaces, each of these subspaces has an orthonormal basis of eigenvectors. So, consider an eigenvector: suppose \psi \in V_j \otimes V_j \otimes \mathbb{C}^2 has

\partial\!\!\!/ \psi = \lambda \psi.

Then we must have

(\lambda - \frac{1}{2})^2 \psi = (\partial\!\!\!/ - \frac{1}{2})^2 \psi = (\Delta + 1)\psi = n^2 \psi

where n = 2j+1 as usual, so we must have \lambda - \frac{1}{2} = \pm n. Thus, the only possible eigenvalues of \partial\!\!\!/ on the subspace V_j \otimes V_j \otimes \mathbb{C}^2 are \pm n + \frac{1}{2}, or in other words:

\psi \in V_j \otimes V_j \otimes \mathbb{C}^2 \implies \partial\!\!\!/ \psi =  \lambda \psi \text{ for } \lambda = \pm (2j+1) + \frac{1}{2}

To go further, we can use some results from Kronheimer. First, he shows the spectrum of \partial\!\!\!/ is symmetric about the origin. To do this he identifies \mathbb{C}^2 with the quaternions, and thus L^2(S^3) \otimes \mathbb{C}^2 with a space of quaternion-valued functions on the 3-sphere. Then quaternionic conjugation gives a conjugate-linear operator

\dagger \colon L^2(S^3) \otimes \mathbb{C}^2 \to L^2(S^3) \otimes \mathbb{C}^2

with \dagger^2 = 1. He then proves a result, using his Dirac-like operator D, that implies

\partial\!\!\!/ \psi = \lambda \psi \; \iff \; \partial\!\!\!/ \psi^\dagger = -\lambda \psi^\dagger

Thus the Dirac operator \partial\!\!\!/ has as a negative eigenvalue for each positive eigenvalue, and their multiplicities are the same!

Second, he proved a result which implies that the eigenspace

F_\lambda = \{ \psi \in L^2(S^3) \otimes \mathbb{C}^2 : \; \partial\!\!\!/ \psi = \lambda \psi \}

has dimension

\text{dim}(F_\lambda) = (\lambda+\frac{1}{2})(\lambda-\frac{1}{2})

when \lambda \in \mathbb{Z} + \frac{1}{2} and zero otherwise. Thus every number that’s an integer plus \frac{1}{2} is an eigenvalue of the Dirac operator on the 3-sphere—except \pm\frac{1}{2}. Also, while we’ve already seen that

V_j \otimes V_j \otimes \mathbb{C}^2 = F_{n+\frac{1}{2}} \oplus F_{-n +\frac{1}{2}}

where n = 2j+1, this additional result implies that these two summands have different dimensions, namely n(n+1) and n(n-1), respectively. Their total dimension is 2n^2, as we already knew! We knew it because this is the number of electron states in the shell with principal quantum number n.

So, a bit more than half the electron states in the nth shell are positive eigenvectors of the Dirac operator on the 3-sphere, while a bit fewer than half are negative eigenvectors. Weird but true!

For an explicit basis of eigenvectors of the Dirac operator on S^3, see:

• Fabio Di Cosmo and Alessandro Zampini, Some notes on Dirac operators on the S^2 and S^3 spheres.

John BaezThe Kepler Problem (Part 7)

I’ve explained a cool way to treat bound states of the hydrogen atom as wavefunctions on a sphere in 4-dimensional space. But so far I’ve been neglecting the electron’s spin. Now let’s throw that in too!

This will wind up leading us in some surprising directions. So far I’ve just been reviewing known ideas, but now we’re getting into my new paper:

Second quantization for the Kepler problem.

It starts out being quite routine: to include spin, we just tensor our previous Hilbert space L^2(S^3) with a copy of \mathbb{C}^2 describing the electron’s spin. The resulting space

L^2(S^3) \otimes \mathbb{C}^2

is the Hilbert space of bound states of a spinor-valued version of the Schrödinger equation for the hydrogen atom.

Beware: this is a simplification of a more careful treatment of hydrogen using the Dirac equation: it neglects all spin-dependent terms in the Hamiltonian, like spin-orbit interactions. These spin-dependent terms give corrections that go to zero in the limit where the speed of light approaches infinity. So what we’re doing now is giving a nonrelativistic treatment of the hydrogen atom, but taking into account the fact that the electron is a spin-½ particle.

Things get fun now. The Hilbert space L^2(S^3) \otimes \mathbb{C}^2 becomes a unitary representation of \text{SU}(2) in three important ways. The first two come from the actions of \text{SU}(2) on L^2(S^3) by left and right translation, which I explained in Part 5. The third comes from the natural action of \text{SU}(2) on \mathbb{C}^2. All three of these actions of \text{SU}(2) on L^2(S^3) \otimes \mathbb{C}^2 commute with each other. We thus get a unitary representation of \text{SU}(2) \times \text{SU}(2) \times \text{SU}(2) on L^2(S^3) \otimes \mathbb{C}^2.

It is useful to spell this out at the Lie algebra level. In Part 5, I introduced self-adjoint operators A_j and B_j on L^2(S^3): the self-adjoint generators of the left and right translation actions of \text{SU}(2), respectively. Now we’ll tensor these operators with the identity on \mathbb{C}^2 and get operators on L^2(S^3) \otimes \mathbb{C}^2, which by abuse of notation we’ll denote with the same names: A_j and B_j. But we’ll also introduce spin angular momentum operators

S_j = 1 \otimes \frac{1}{2} \sigma_j

on L^2(S^3) \otimes \mathbb{C}^2. These operators obey the following commutation relations:

\begin{array}{cclcccl}    [A_j, A_k] &=&  i\epsilon_{jk\ell} A_\ell  &\quad &  [A_j, B_k] &=& 0 \\ [2pt]    [B_j, B_k] &=&  i\epsilon_{jk\ell} B_\ell &&  [A_j, S_k] &=& 0  \\ [2pt]   [S_j, S_k] &=&  i\epsilon_{jk\ell} S_\ell && [B_j, S_k] &=& 0  \end{array}

Once we have 3 commuting actions of \text{SU}(2) on a Hilbert space we can get more by mixing and matching them. I won’t go overboard and describe all 2^3 = 8 of them, but I’ll mention some that we need for physics. First we can define orbital angular momentum operators

L_j = A_j + B_j

These obey

\begin{array}{ccl}     [L_j, L_k] &=&  i\epsilon_{jk\ell} L_\ell \\  [2pt]    [S_j, S_k] &=& i \epsilon_{jk\ell} S_\ell \\  [2pt]    [L_j, S_k] &=&  0  \end{array}

Physically speaking, the L_j generate an action of \text{SU}(2) that rotates the position of the electron in space while not changing its spin state, just as the S_j rotate the electron’s spin state while not changing its position.

Adding the spin and orbital angular momentum, we get total angular momentum operators

J_j = L_j + S_j

which obey

[J_j, J_k] = i \epsilon_{jk\ell} J_\ell

These generate an action of \text{SU}(2) that rotates the electron’s wavefunction along with its spin state!

Finally, we define a Hamiltonian for our new hydrogen atom with spin:

\displaystyle{   H \; = \; - \frac{1}{8(A^2 + \frac{1}{4})} \; = \; - \frac{1}{8(B^2 + \frac{1}{4})} }

This is just the Hamiltonian H_0 for the simplified hydrogen atom neglecting spin that we studied in Part 5, tensored with the identity operator on \mathbb{C}^2. Thus it has the same spectrum, but the multiplicity of each eigenvalue has doubled. This Hamiltonian H commutes with all the operators A_j, B_j, S_j, and thus also L_j and J_j.

Now we can reuse our work from Part 5 and decompose our new Hilbert space into eigenspaces of the Hamiltonian H, labeled by n = 1, 2, 3, \dots, and the orbital angular momentum operator L^2, labeled by \ell = 0 , \dots, n-1. We get this:

\displaystyle{  L^2(S^3) \otimes \mathbb{C}^2 \cong      \bigoplus_{n = 1}^\infty \bigoplus_{\ell = 0}^{n-1} V_\ell \otimes \mathbb{C}^2 }

where V_\ell is the spin-\ell representation of the \text{SU}(2) that rotates the electron’s position but not its spin.

In Part 5 we saw a basis |n, \ell, m \rangle of L^2(S^3). If we tensor that with the standard basis of \mathbb{C}^2, we get an orthonormal basis |n , \ell, m, s \rangle of L^2(S^3) \otimes \mathbb{C}^2 where:

• the principal quantum number n ranges over positive integers;

• the azimuthal quantum number \ell ranges from 0 to n-1 in integer steps;

• the magnetic quantum number m ranges from -\ell to \ell in integer steps;

• the spin quantum number s is +\frac{1}{2} or -\frac{1}{2}.
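
As a tiny consistency check (my own, not in the post), we can count these basis states shell by shell and recover the familiar 2n^2 states with principal quantum number n:

# For each n, count states |n, l, m, s>: l = 0..n-1, m = -l..l, s = +-1/2.
for n in range(1, 6):
    count = sum((2*l + 1) * 2 for l in range(n))
    print(n, count, count == 2*n**2)   # 2 n^2 states in the n-th shell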

The calculations we did in Part 5 now imply that

\begin{array}{ccl}     A^2 |n, \ell, m, s \rangle &=& B^2 |n, \ell, m, s \rangle \; =  \;    \frac{1}{4}( n^2 - 1) |n, \ell, m, s\rangle  \\  [8pt]  H |n, \ell, m, s \rangle &=& \displaystyle{ - \frac{1}{2n^2}\,  |n, \ell, m, s \rangle } \\ [12pt]      L^2 |n, \ell, m, s\rangle &=& \ell(\ell + 1) |n , \ell, m, s \rangle  \\ [3pt]      L_3 |n , \ell, m, s \rangle &=& m |n , \ell, m, s \rangle  \\ [3pt]      S^2 |n , \ell, m, s \rangle &=& \frac{3}{4} |n , \ell, m, s \rangle  \\  [3pt]      S_3 |n , \ell, m, s \rangle &=& s |n , \ell, m, s \rangle    \end{array}

Combining this with the textbook treatment of the hydrogen atom, it follows that L^2(S^3) \otimes \mathbb{C}^2 is indeed unitarily equivalent to the subspace of L^2(\mathbb{R}^3) \otimes \mathbb{C}^2 consisting of bound states of the spinor-valued Schrödinger equation

i \frac{\partial \psi}{\partial t} = -\frac{1}{2} \nabla^2 \psi - \frac{1}{r} \psi

with the operators H, L_j and S_j having their usual definitions:

\begin{array}{ccl}      H &=&  -\frac{1}{2} \nabla^2 - \frac{1}{r}   \\ [12pt]  L_j &=&  -i\epsilon_{jk\ell} x_k \frac{\partial}{\partial x_\ell}  \\ [10pt]  S_j &=& \frac{1}{2} \sigma_j     \end{array}

In short, the Hamiltonian H on L^2(S^3) \otimes \mathbb{C}^2 is unitarily equivalent to the Hamiltonian on bound states of the hydrogen atom defined in the usual way! We’ve turned hydrogen into a festival of commuting \text{SU}(2) actions.

Next we’ll do something a bit wild, and new.

n-Category Café Jack Morava

Today I heard from David Benson that Jack Morava died yesterday. This comes as such a huge shock that I can’t help but hope Benson was somehow misinformed. Morava has been posting comments to the n-Café and sending emails to me even very recently.

This is all I know, now.

John BaezPolarities (Part 6)

I’ve been working with Adittya Chaudhuri on some ideas related to this series of blog articles, and now our paper is done!

• John Baez and Adittya Chaudhuri, Graphs with polarities.

Abstract. In fields ranging from business to systems biology, directed graphs with edges labeled by signs are used to model systems in a simple way: the nodes represent entities of some sort, and an edge indicates that one entity directly affects another either positively or negatively. Multiplying the signs along a directed path of edges lets us determine indirect positive or negative effects, and if the path is a loop we call this a positive or negative feedback loop. Here we generalize this to graphs with edges labeled by a monoid, whose elements represent ‘polarities’ possibly more general than simply ‘positive’ or ‘negative’. We study three notions of morphism between graphs with labeled edges, each with its own distinctive application: to refine a simple graph into a complicated one, to transform a complicated graph into a simple one, and to find recurring patterns called ‘motifs’. We construct three corresponding symmetric monoidal double categories of ‘open’ graphs. We study feedback loops using a generalization of the homology of a graph to homology with coefficients in a commutative monoid. In particular, we describe the emergence of new feedback loops when we compose open graphs using a variant of the Mayer–Vietoris exact sequence for homology with coefficients in a commutative monoid.


Read the whole series:

Part 1: Causal loop diagrams, and more generally graphs with edges labeled by elements of a monoid.

Part 2: graphs with edges labeled by elements of a ring.

Part 3: hyperrings and hyperfields.

Part 4: rigs from hyperrings.

Part 5: pulling back and pushing forwards edge labels on labeled graphs.

Part 6: a paper called ‘Graphs with polarities’ with Adittya Chaudhuri, summarizing some of the work here but also much more.


August 01, 2025

Jordan EllenbergJohn O’Hara, “Wise Guy”

The story opens:

Most of the people in the damn place were hacking away at their disgusting lunch, but I was still drinking Martinis and sitting alone in this thing that I guess could be called a booth, although it wasn’t even the height of my shoulder….. Those that came in together would blab-blab about what they were going to drink, and then, when they would order their drinks, they would have the same things they always had. Those that came in by themselves would light their silly cigarettes and bore the bartender with their phony politeness, just to prove to anybody at all that they knew the bartender.

This reads pretty differently than most of the stories in the anthology I’m reading (Hellbox). I would imagine that anybody who reads mid-20c American fiction would have the same reaction I did to this scene of a man drinking alone in New York, moodily hating all the chatty phonies within earshot — oh, this is a Catcher in the Rye imitation thing. But “Wise Guy” was published in the New Yorker on May 18, 1945 — Salinger’s first story in Caulfield’s voice, “I’m Crazy,” doesn’t come out until December.

I don’t think Salinger was imitating O’Hara. My sense is that the germ of Catcher in the Rye already existed for Salinger during the war.

But even if he was — I mean, the O’Hara is fine, but you go back and look at any of Catcher-era Salinger and it’s like nothing else. Yes, superficially, it’s cranky like this O’Hara passage but every sentence sings with life, joyous despite itself.

Matt von HippelMicrodosing Vibe Physics

Have you heard of “vibe physics”?

The phrase “vibe coding” came first. People have been using large language models like ChatGPT to write computer code (and not the way I did last year). They chat with the model, describing what they want to do and asking the model to code it up. You can guess the arguments around this, from people who are convinced AI is already better than a human programmer to people sure the code will be riddled with errors and vulnerabilities.

Now, there are people claiming not only to do vibe coding, but vibe physics: doing theoretical physics by chatting with an AI.

I think we can all agree that’s a lot less plausible. Some of the people who do vibe coding actually know how to code, but I haven’t seen anyone claiming to do vibe physics who actually understands physics. They’re tech entrepreneurs in the most prominent cases, random people on the internet otherwise. And while a lot of computer code is a minor tweak on something someone has already done, theoretical physics doesn’t work that way: if someone has already come up with your idea, you’re an educator, not a physicist.

Still, I think there is something to keep in mind about the idea of “vibe physics”, related to where physics comes from.

Here’s a question to start with: go back a bit before the current chat-bot boom. There were a ton of other computational and mathematical tools. Theorem-proving software could encode almost arbitrary mathematical statements in computer code and guarantee their accuracy. Statistical concepts like Bayes’ rule described how to reason from evidence to conclusions, not flawlessly but as well as anyone reliably can. We had computer simulations for a wealth of physical phenomena, and approximation schemes for many others.

With all those tools, why did we still have human physicists?

That is, go back before ChatGPT, before large language models. Why not just code up a program that starts with the evidence and checks which mathematical model fits it best?

In principle, I think you really could have done that. But you could never run that program. It would take too long.

Doing science 100% correctly and reliably is agonizingly slow, and prohibitively expensive. You cannot check every possible model, nor can you check those models against all the available data. You must simplify your problem, somehow, even if it makes your work less reliable, and sometimes incorrect.

And for most of history, humans have provided that simplification.

A physicist isn’t going to consider every possible model. They’re going to consider models that are similar to models they studied, or similar to models others propose. They aren’t going to consider all the evidence. They’ll look at some of the evidence, the evidence other physicists are talking about and puzzled by. They won’t simulate the consequences of their hypotheses in exhaustive detail. Instead, they’ll guess, based on their own experience, a calculation that captures what they expect to be relevant.

Human physicists provided the unreliable part of physics, the heuristics. The “vibe physics”, if you will.

AI is also unreliable, also heuristic. But humans still do this better than AI.

Part of the difference is specificity. These AIs are trained on all of human language, and then perhaps fine-tuned on a general class of problems. A human expert has spent their life fine-tuning on one specific type of problem, and their intuitions, their heuristics, their lazy associations and vibes, all will be especially well-suited to problems of that type.

Another part of the difference, though, is scale.

When you talk to ChatGPT, it follows its vibes into paragraphs of text. If you turn on reasoning features, you make it check its work in the background, but it still is generating words upon words inside, evaluating those words, then generating more.

I suspect, for a physicist, the “control loop” is much tighter. Many potential ideas get ruled out a few words in. Many aren’t even expressed in words at all, just concepts. A human physicist is ultimately driven by vibes, but they check and verify those vibes, based on their experience, at a much higher frequency than any current AI system can achieve.

(I know almost nothing about neuroscience. I’m just basing this on what it can feel like, to grope through a sentence and have it assemble itself as it goes into something correct, rather than having to go back and edit it.)

As companies get access to bigger datacenters, I suspect they’ll try to make this loop tighter, to get AI to do something closer to what (I suspect, it appears) humans do. And then maybe AI will be able to do vibe physics.

Even then, though, you should not do vibe physics with the AI.

If you look at the way people describe doing vibe physics, they’re not using the AI for the vibes. They’re providing the vibes, and the AI is supposed to check things.

And that, I can confidently say, is completely ass-backwards. The AI is a vibe machine, it is great at vibes. Substituting your vibes will just make it worse. On the other hand, the AI is awful at checking things. It can find published papers sometimes, which can help you check something. But it is not set up to do the math, at least not unless the math can be phrased as a simple Python script or an IMO problem. In order to do anything like that, it has to call another type of software to verify. And you could have just used that software.

Theoretical physics is still not something everyone can do. Proposing a crackpot theory based on a few papers you found on Google and a couple YouTube videos may make you feel less confident than proposing a crackpot theory based on praise from ChatGPT and a list of papers it claims have something to do with your idea, which makes it more tempting. But it’s still proposing a crackpot theory. If you want to get involved, there’s still no substitute for actually learning how physics works.

Scott Aaronson Quantum Complexity Theory Student Project Showcase #5 (2025 Edition)!

Sorry for the long blog-hiatus! I was completely occupied for weeks, teaching an intensive course on theoretical computer science to 11-year-olds (!), at a math camp in St. Louis that was also attended by my 8-year-old son. Soon I’ll accompany my 12-year-old daughter to a science camp in Connecticut, where I’ll also give lectures.

There’s a great deal to say about these experiences, but for now: it’s been utterly transformative and life-affirming to spend my days in teaching precocious, enthusiastic, non-jaded children the material I love in the real world, rather than (let’s say) in scrolling through depressing world news and social media posts and emails from hateful trolls on my phone. It’s made me feel 25 years younger (well, at least until I need to walk up a flight of stairs). It’s made me want to go back to actual research and teaching, which besides family and friends have been the main sources of joy in my life.


So on that note, and without further ado: I hereby present the latest Quantum Complexity Theory Student Project Showcase! As the name suggests, this is where I share a selection of the best research projects, from the students who took my graduate class on Quantum Complexity Theory at UT Austin this past spring.

See here for the four previous iterations of the Showcase:

(As you can see, the timing hasn’t been 100% consistent.)

I expect that, as in past editions, many of this year’s projects will lead to published research papers, or at the very least, preprints on the arXiv.


And now, really without further ado, the projects!

(1) Quantum Hermite Transform and Gaussian Goldreich-Levin, by Vishnu Iyer and Siddhartha Jain.

Vishnu and Sid propose a new primitive for quantum algorithms—the Hermite transform, as opposed to the Fourier transform—and give at least one successful example of its use. They’d be eager to know if anyone can think of other applications!

(2) Quantum Statistical Witness Indistinguishability, by Shafik Nassar and Ronak Ramachandran.

In modern cryptography, even if it isn’t statistical zero-knowledge (SZK), a proof protocol might have the weaker property of being statistically witness-indistinguishable (SWI): that is, Arthur can’t tell which of two possible yes-witnesses Merlin holds. Here Shafik and Ronak initiate the study of quantum SWI, and prove the basic properties of this notion, such as the equivalence of honest and dishonest verifier. Hopefully this will serve as a springboard for someone to find an actual QSWI protocol for an interesting problem.

(3) A Zero-Knowledge Protocol for Keyed Unitary Families, by David Joy and Angela Zhang.

Continuing the theme of quantum zero-knowledge, David and Angela give a protocol by which Merlin can convince Arthur that he knows a unitary relating one pure state to another, without revealing the unitary. Again continuing a theme, applications of this protocol are sought!

(4) On Query Lower Bounds for Aaronson-Kuperberg Unitary Synthesis Circuits, by Arko Banerjee.

Back in 2006, when we formulated our so-called “Unitary Synthesis Conjecture,” Greg Kuperberg and I showed that if a quantum algorithm applies an n-qubit unitary U(f) after making a single query to a Boolean function f, then as we range over f’s, there can be at most 4^n possible values of U(f). Here Arko improves our bound to 2^n, which is tight. He also tries extremely hard to generalize our bound to the two-query case, not quite succeeding but proving partial results that hopefully will be helpful to others.

(5) Quantum Search with Non-Interacting Bosons and Fermions, by Aravind Karthigeyan.

This one really made me think. Aravind studies the problem of search for a single marked vertex, on the complete graph with N vertices, using either M bosons or M fermions that can hop between the vertices. With M bosons, he shows that the search succeeds in Θ(√(N/M)) time with high probability, which is just the usual runtime for Grover search with M parallel searchers. With fermions, by contrast, he shows that more time is needed. Why? Because of the Pauli Exclusion Principle! The fermions all “step on each others’ toes,” competing to be the one that jumps onto the marked vertex, which limits the advantage of having M fermions searching in parallel.

(6) Limits to Pseudodeterminism in Quantum Communication Protocols, by Jiawei Li.

Jiawei revisits the famous Hidden Matching Problem, which is known to have an exponential gap between its randomized one-way communication complexity of ~√n, and its quantum one-way communication complexity of ~log(n). He makes a new observation about this problem: namely, if you want the exponential quantum communication advantage, then you must content yourself with a protocol that can generate many different possible correct answers with appreciable probabilities (i.e., that generates large min-entropy). In other words, no good quantum protocol for the problem is pseudodeterministic. This complements, for example, my and Shih-Han Hung’s work, which showed the same statement for quantum supremacy experiments based on Random Circuit Sampling, or the long line of works that showed it for experiments that violate the Bell/CHSH inequality.

Congratulations to my students for their accomplishments, and thanks to them for giving me permission to include their work in this showcase!

July 31, 2025

Tommaso DorigoExtrasensorial Plot Premonition

In the previous article here, I tangentially examined a situation that arises often in collaborative data analysis: the digestion of the results in scientific graphs. The focus of that discussion was the building of a sceptical thinking attitude in my student - it is a really important asset in experimental science.


Jordan EllenbergOne more observation about Tom Lehrer

For every insult, it is possible to conceive of an ideal type who displays all the features the insult conveys, but in whom they are somehow virtues rather than deficits, and Tom Lehrer was that for “smart-ass.”

Also, I thought I’d seen just about every speck of Tom Lehrer content there was, but nope — here he is doing a promo for the new Dodge models of 1967.

July 29, 2025

David Hoggintegrating out nuisances

Further inspired by yesterday's post about binary fitting, I worked today on the treatment of nuisance parameters that have known distributions. These can sometimes be treated as noise. Let me explain:

If I had to cartoon inference (or measurement) in the face of nuisance parameters, I would say that frequentists profile (optimize) over the nuisances and Bayesians marginalize (integrate) over the nuisances. In general frequentists cannot integrate over anything, because there is no measure in any of the parameter spaces. But sometimes there is a measure. In particular, when there is a compact symmetry:

We know (or very strongly believe) that all possible orientations of a binary-star orbit are equally likely. In this model (or under this normal assumption) we have a distribution over two angles (theta and phi for that orbit pole, say); it is the distribution set by the compact group SO(3). Thus we can treat the orientation as a noise source with known distribution and integrate over it, just like we would any other noise source. So, in this case (and many cases like it) we can integrate (marginalize) even as frequentists. That is, there are frequentism-safe marginalizations possible in binary-star orbit fitting. This should drop the 12-parameter fits (for ESA Gaia data) down to 8-parameter, if I have done my math right.
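To make the contrast concrete, here is a toy sketch (my own illustration, not Hogg's code) of profiling versus marginalizing over a nuisance angle phi with a known uniform distribution; the model and all names are invented for the example:

```python
import numpy as np

# Hypothetical toy model: data = theta * cos(phi) + noise, where phi is a
# nuisance angle whose distribution we know to be uniform on [0, 2*pi).
def log_likelihood(theta, phi, data):
    return -0.5 * np.sum((data - theta * np.cos(phi)) ** 2)

phis = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)

def profiled(theta, data):
    # the frequentist default: optimize the nuisance away
    return max(log_likelihood(theta, p, data) for p in phis)

def marginalized(theta, data):
    # allowed here because phi has a known (compact-group) measure:
    # integrate exp(logL) against the uniform measure d(phi)/(2*pi)
    # (a real implementation would use logsumexp for numerical safety)
    lls = np.array([log_likelihood(theta, p, data) for p in phis])
    return np.log(np.mean(np.exp(lls)))
```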

July 28, 2025

n-Category Café The Duflo Isomorphism and the Harmonic Oscillator Hamiltonian

Quick question. Classically the harmonic oscillator Hamiltonian is often written $\frac{1}{2}(p^2 + q^2)$, while quantum mechanically it gets some extra ‘ground state energy’ making the Hamiltonian

$$H = \frac{1}{2}(p^2 + q^2 + 1)$$

I’m wondering if there’s any way to see the extra $+\frac{1}{2}$ here as arising from the Duflo isomorphism. I’m stuck because this would seem to require thinking of $H$ as lying in the center of the universal enveloping algebra of some Lie algebra, and while it is in the center of the universal enveloping algebra of the Heisenberg algebra, that Lie algebra is nilpotent, so it seems the Duflo isomorphism doesn’t give any corrections.

Whenever someone says “quick question”, I’m unable to give them a quick answer. Is that the case here?

David Hoggbinary stars with periods of exactly one year

On Friday, Kareem El-Badry (Caltech) gave a seminar about looking for (and finding!) stars in binary orbits around dark or much darker companions, like black holes, neutron stars, and white dwarfs. He showed results that involve ESA Gaia astrometry, and he noted that the Gaia Mission has no sensitivity to orbital periods right at one year, or within an inverse mission-length frequency difference of the inverse-year frequency. After the talk I objected that these are not exactly degenerate; El-Badry said that the inferences blow up there.

I spent some time on the weekend thinking about this point, and I now understand it: There is a particular one-year orbit that a star can have (around a darker companion) such that the photocenter of the system makes a motion that is identical to the apparent parallax motion. Thus there is an exact degeneracy between the parallax and a certain one-year orbit.

Does that mean that we can't measure orbits at one year (or, for that matter, parallaxes)? No, it does not. After all, the parallax ellipse has a particular celestial (angular) shape and phase. But it might require some kind of reparameterization of orbits near one-year periods. I think I know how to do that. Should we find the missing binaries? (Oh and by the way, this degeneracy means that, in a strict frequentist sense, Gaia can't measure parallaxes at all without additional information.)

John PreskillLittle ray of sunshine

A common saying goes, you should never meet your heroes, because they’ll disappoint you. But you shouldn’t trust every common saying; some heroes impress you more, the better you know them. Ray Laflamme was such a hero.

I first heard of Ray in my undergraduate quantum-computation course. The instructor assigned two textbooks: the physics-centric “Schumacher and Westmoreland” and “Kaye, Laflamme, and Mosca,” suited to computer scientists. Back then—in 2011—experimentalists were toiling over single quantum logic gates, implemented on pairs and trios of qubits. Some of today’s most advanced quantum-computing platforms, such as ultracold atoms, resembled the scrawnier of the horses at a racetrack. My class studied a stepping stone to those contenders: linear quantum optics (quantum light). Laflamme, as I knew him then, had helped design the implementation. 

Imagine my awe upon meeting Ray the following year, as a master’s student at the Perimeter Institute for Theoretical Physics. He belonged to Perimeter’s faculty and served as a co-director of the nearby Institute for Quantum Computing (IQC). Ray was slim, had thinning hair of a color similar to mine, and wore rectangular glasses frames. He often wore a smile, too. I can hear his French-Canadian accent in my memory, but not without hearing him smile at the ends of most sentences.

Photo credit: IQC

My master’s program entailed a research project, which I wanted to center on quantum information theory, one of Ray’s specialties. He met with me and suggested a project, and I began reading relevant papers. I then decided to pursue research with another faculty member and a postdoc, eliminating my academic claim on Ray’s time. But he agreed to keep meeting with me. Heaven knows how he managed; institute directorships devour one’s schedule like ravens dining on a battlefield. Still, we talked approximately every other week.

My master’s program intimidated me, I confessed. It crammed graduate-level courses, which deserved a semester each, into weeks. My class raced through Quantum Field Theory I and Quantum Field Theory II—a year’s worth of material—in part of an autumn. General relativity, condensed matter, and statistical physics swept over us during the same season. I preferred to learn thoroughly, deeply, and using strategies I’d honed over two decades. But I didn’t have time, despite arriving at Perimeter’s library at 8:40 every morning and leaving around 9:30 PM.

In response, Ray confessed that his master’s program had intimidated him. Upon completing his undergraduate degree, Ray viewed himself as a nobody from nowhere. He chafed in the legendary, if idiosyncratically named, program he attended afterward: Part III of the Mathematical Tripos at the University of Cambridge. A Cambridge undergraduate can earn a master’s degree in three steps (tripos) at the Department of Applied Mathematics and Theoretical Physics. Other students, upon completing bachelor’s degrees elsewhere, undertake the third step to earn their master’s. Ray tackled this step, Part III.

He worked his rear off, delving more deeply into course material than lecturers did. Ray would labor over every premise in a theorem’s proof, including when nobody could explain the trickiest step to him.1 A friend and classmate helped him survive. The two studied together, as I studied with a few fellow Perimeter students; and Ray took walks with his friend on Sundays, as I planned lunches with other students on weekends.

Yet the program’s competitiveness appalled Ray. All students’ exam scores appeared on the same piece of paper, posted where everyone could read it. The department would retain the highest scorers in its PhD program; the other students would have to continue their studies elsewhere. Hearing about Ray’s program, I appreciated more than ever the collaboration characteristic of mine.

Ray addressed that trickiest proof step better than he’d feared, come springtime: his name appeared near the top of the exam list. Once he saw the grades, a faculty member notified him that his PhD advisor was waiting upstairs. Ray didn’t recall climbing those stairs, but he found Stephen Hawking at the top.

As one should expect of a Hawking student, Ray studied quantum gravity during his PhD. But by the time I met him, Ray had helped co-found quantum computation. He’d also extended his physics expertise as far from 1980s quantum gravity as one can, by becoming an experimentalist. The nobody from nowhere had earned his wings—then invented novel wings that nobody had dreamed of. But he descended from the heights every other week, to tell stories to a nobody of a master’s student.

The author’s copy of “Kaye, Laflamme, and Mosca”…
…in good company.

Seven and a half years later, I advertised openings in the research group I was establishing in Maryland. A student emailed from the IQC, whose co-directorship Ray had relinquished in 2017. The student had seen me present a talk, it had inspired him to switch fields into quantum thermodynamics, and he asked me to co-supervise his PhD. His IQC supervisor had blessed the request: Ray Laflamme.

The student was Shayan Majidy, now a postdoc at Harvard. Co-supervising him with Ray Laflamme reminded me of cooking in the same kitchen as Julia Child. I still wonder how I, green behind the ears, landed such a gig. Shayan delighted in describing the difference between his supervisors’ advising styles. An energetic young researcher,2 I’d respond to emails as early as 6:00 AM. I’d press Shayan about literature he’d read, walk him through what he hadn’t grasped, and toss a paper draft back and forth with him multiple times per day. Ray, who’d mellowed during his career, mostly poured out support and warmth like hollandaise sauce. 

Once, Shayan emailed Ray and me to ask if he could take a vacation. I responded first, as laconically as my PhD advisor would have: “Have fun!” Ray replied a few days later. He elaborated on his pleasure at Shayan’s plans and on how much Shayan deserved the break.

When I visited Perimeter in 2022, Shayan insisted on a selfie with both his PhD advisors.

This June, an illness took Ray earlier than expected. We physicists lost an intellectual explorer, a co-founder of the quantum-computing community, and a scientist of my favorite type: a wonderful physicist who was a wonderful human being. Days after he passed, I was holed up in a New York hotel room, wincing over a web search. I was checking whether a quantum system satisfies certain tenets of quantum error correction, and we call those tenets the Knill–Laflamme conditions. Our community will keep checking the Knill–Laflamme conditions, keep studying quantum gates implementable with linear optics, and more. Part of Ray won’t leave us anytime soon—the way he wouldn’t leave a nobody of a master’s student who needed a conversation.

1For the record, some of the most rigorous researchers I know work in Cambridge’s Department of Applied Mathematics and Theoretical Physics today. I’ve even blogged about some.

2As I still am, thank you very much.

July 27, 2025

Jordan EllenbergRIPPP Tom Lehrer

“Rest, Imagining Poisoning Park Pigeons,” that is.

On the occasion of Lehrer’s passing at 97 let us remember the most gloriously comic and enthusiastic celebration of mass death ever put to jaunty music.

July 26, 2025

Jordan EllenbergDoing less with less

I wrote an opinion piece in the Milwaukee Journal-Sentinel expressing my view that attacks on science funding and international students are bad. This will probably not be very controversial to readers of this blog. But I do think it’s worthwhile to say this in public, because I do think science is in general pretty popular and well-liked, certainly more so than politicians. “Does anybody really care what’s published in local papers?” I’ve been told, again and again, that yes, people do care. And my own experience with gerrymandering is that it really did make a difference, in the long term, that gerrymandering went from something very obscure to something people had heard of and generally agreed was kind of scummy. The long, slow work of letters to the editor and public conversations was part of that. As Bryna Kra just reminded me, the actual financial situation of NSF and NIH is not determined when the President signs a bill; it has a lot more to do with the messy, political, hard-to-predict appropriations process, which has months to go still. So it’s far from too late to be talking about this — I encourage you to write your own!

Science Homecoming is an interesting project in this vein, asking scientists, wherever they now work, to write letters to their hometown papers. Click on the county you grew up in, it’ll suggest papers to submit to. (But don’t you already know what your hometown paper is? Sadly, if you’re around my age, your memory of what newspapers exist in your county of birth may be out of date.)

July 25, 2025

Clifford JohnsonFantastic Collaboration!

Well, I can now officially mention that I've been part of the filmmaking team (in a way) working hard to bring you an enjoyable and interesting Fantastic Four movie! I think it has been about two and a half years (?) since this all began. This was a nearly perfect model of how science consulting can work in film. I worked with everyone, wherever I was needed, with the director, writers, producers, director of photography, VFX teams, set design, and so on. They made me feel welcome and part of whatever creative team I was talking to, which was great. They were open to lots of ideas right from when they were starting out thinking about tone, story ideas, and so forth, right through to final (key) tweaks right at the end of the process as recently as mere weeks ago.

It began early on with great conversations with Matt Shakman and his writing team about the fact that Reed Richards is first and foremost a curiosity-driven physicist (and so quite different from the engineer we have in Tony Stark, whom we see RDJ bring out so well), and how things like his dedication to his work (and the outlook on things that comes from such work) might play out in terms of family dynamics, personal relationships, etc., without it turning into the tedious cliches about scientists somehow not being able to navigate the world of human relationships. Obviously, I could speak to this as a physicist who works on precisely the things Reed works on, as well as a family man, and as someone who remembers that it's still all about telling a story. And there are so many stories to tell at that intersection... Anyway, I think these early conversations (as well as suggestions I made in many sets of notes along the way) helped inform (even if only a little bit? who knows?) what Pedro Pascal brought to the character. This aspect of the film is one of the things I'm most pleased about seeing up on screen.

Beyond that, you'll see lots of things I gave them that I'm also delighted to see made it to the film, in many scenes. This includes (but is not limited to!): [...]


n-Category Café 2-Rig Conjectures Proved?

Kevin Coulembier has come out with a paper claiming to prove some conjectures that Todd Trimble, Joe Moeller and I made in 2-Rig extensions and the splitting principle:

The conjectures concern 2-rigs over a field k of characteristic zero. Here they are:

Conjecture 8.6. $\mathsf{Rep}(\mathrm{M}(n,k))$ is the free 2-rig on an object of bosonic subdimension n.

Conjecture 8.7. $\mathsf{Rep}(\mathrm{GL}(n,k))$ is the free 2-rig on an object of bosonic dimension n.

Conjecture 8.8. If an object has bosonic dimension n, then it also has bosonic subdimension n. If an object has fermionic dimension n, then it also has fermionic subdimension n.

However, Coulembier says he needed to fix our definition of ‘bosonic dimension’ to prove some of these conjectures. We had said

• a 2-rig over a field k is a Cauchy complete k-linear symmetric monoidal category

and an object x in a 2-rig

  • is a line object if there’s an object y with $x \otimes y \cong I$.

  • is a bosonic line object if it is a line object and the symmetry $x \otimes x \to x \otimes x$ is the identity morphism.

  • is a fermionic line object if it is a line object and the symmetry $x \otimes x \to x \otimes x$ is minus the identity morphism.

  • has bosonic dimension n if its nth exterior power is a bosonic line object.

  • has fermionic dimension n if its nth symmetric power is a fermionic line object.

  • has bosonic subdimension n if its (n+1)st exterior power vanishes.

  • has fermionic subdimension n if its (n+1)st symmetric power vanishes.

However, Coulembier noted that with these definitions, we get some very strange objects of bosonic dimension n. Namely, any fermionic line object has bosonic dimension n for every even natural number n. The reason is that its second tensor power, and thus all its even tensor powers, are bosonic line objects. However, a fermionic line object does not have bosonic subdimension n for any natural number n, since its exterior powers are the same as its tensor powers, and none of these vanish.

So, Coulembier rules out this case: he defines an object to have bosonic dimension n if its (n+1)st exterior power vanishes and it is not a fermionic line object.

If one does this, one should similarly change the definition of an object with ‘fermionic dimension n’ to rule out bosonic line objects.

It’s great to see some interest in 2-rig theory! Anyone who wants more conjectures to tackle can try the big conjecture in the introduction to our paper, or Conjectures 34–37 in Tannaka reconstruction and the monoid of matrices.

By the way, Todd and I have already proved Conjecture 8.6 in that paper. But we used very different techniques than Coulembier’s, so it’s interesting to see his new proof.

Matt von HippelValue in Formal Theory Land

What makes a physics theory valuable?

You may think that a theory’s job is to describe reality, to be true. If that’s the goal, we have a whole toolbox of ways to assess its value. We can check if it makes predictions and if those predictions are confirmed. We can assess whether the theory can cheat to avoid the consequences of its predictions (falsifiability) and whether its complexity is justified by the evidence (Occam’s razor, and statistical methods that follow from it).

But not every theory in physics can be assessed this way.

Some theories aren’t even trying to be true. Others may hope to have evidence some day, but are clearly not there yet, either because the tests are too hard or the theory hasn’t been fleshed out enough.

Some people specialize in theories like these. We sometimes say they’re doing “formal theory”, working with the form of theories rather than whether they describe the world.

Physics isn’t mathematics. Work in formal theory is still supposed to help describe the real world. But that help might take a long time to arrive. Until then, how can formal theorists know which theories are valuable?

One option is surprise. After years tinkering with theories, a formal theorist will have some idea of which sorts of theories are possible and which aren’t. Some of this is intuition and experience, but sometimes it comes in the form of an actual “no-go theorem”, a proof that a specific kind of theory cannot be consistent.

Intuition and experience can be wrong, though. Even no-go theorems are fallible, both because they have assumptions which can be evaded and because people often assume they go further than they do. So some of the most valuable theories are valuable because they are surprising: because they do something that many experienced theorists think is impossible.

Another option is usefulness. Here I’m not talking about technology: these are theories that may or may not describe the real world and can’t be tested in feasible experiments, they’re not being used for technology! But they can certainly be used by other theorists. They can show better ways to make predictions from other theories, or better ways to check other theories for contradictions. They can be a basis that other theories are built on.

I remember, back before my PhD, hearing about the consistent histories interpretation of quantum mechanics. I hadn’t heard much about it, but I did hear that it allowed calculations that other interpretations didn’t. At the time, I thought this was an obvious improvement: surely, if you can’t choose based on observations, you should at least choose an interpretation that is useful. In practice, it doesn’t quite live up to the hype. The things it allows you to calculate are things other interpretations would say don’t make sense to ask, questions like “what was the history of the universe” instead of observations you can test like “what will I see next?” But still, being able to ask new questions has proven useful to some, and kept a community interested.

Often, formal theories are judged on vaguer criteria. There’s a notion of explanatory power, of making disparate effects more intuitively part of the same whole. There’s elegance, or beauty, which is the theorist’s Occam’s razor, favoring ideas that do more with less. And there’s pure coolness, where a bunch of nerds are going to lean towards ideas that let them play with wormholes and multiverses.

But surprise, and usefulness, feel more solid to me. If you can find someone who says “I didn’t think this was possible”, then you’ve almost certainly done something valuable. And if you can’t do that, “I’d like to use this” is an excellent recommendation too.

David Hogghow significant is your anomaly?

So imagine that you have a unique data set Y, and in that data set Y you measure a bunch of parameters θ by a bunch of different methods. Then you find, in your favorite analysis, your estimate of one particular parameter is way out of line: All of physics must be wrong! How do you figure out the significance of your result?

If you only ever have data Y, you can't answer this question very satisfactorily: You searched Y for an anomaly, and now you want to test the significance. That's why so many a posteriori anomaly results end up going away: That search probably tested way more hypotheses than you think it did, so any significances should be reduced accordingly.
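A quick simulation makes the point (my own toy illustration, not from the post): scan a hundred null measurements for the most anomalous one, and a "3-sigma" result is no longer rare.

```python
import numpy as np

# Under the null, each of K measurements gives a standard-normal z-score.
# Reporting the most extreme of K searches inflates apparent significance.
rng = np.random.default_rng(42)
K, trials = 100, 20000
z_max = np.abs(rng.standard_normal((trials, K))).max(axis=1)
print("P(max |z| > 3 | null, K=100):", (z_max > 3).mean())
# ~0.24, versus ~0.0027 for one pre-registered test: the scan makes a
# "3-sigma anomaly" roughly a hundred times more likely to appear by chance.
```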

The best approach is to use only part of your data (somehow) to search, and then use a found anomaly to propose a hypothesis test, and then test that test in the held-out or new data. But that often isn't possible, or it is already too late. But if you can do this, then there is usually a likelihood ratio that is decisive about the significance of the anomaly!

I discussed all these issues with Kate Storey-Fisher (Stanford) and Abby Williams (Chicago) today, as we are trying to finish a paper on the anomalous amplitude of the kinematic dipole in quasar samples.

July 24, 2025

Tommaso DorigoReference Letters

Lately I have been writing lots of reference letters for students who are applying to Ph.D. positions in Physics, and in so doing I have found myself pondering on the dubious usefulness of that exercise. So let me share a bit of my thoughts on the matter here.

Reference letters are meant to be an important input for academic selections, because they provide first-hand information on the previous experience of the candidates, from scholars who are supposed to be authoritative enough to be trusted, and unconcerned enough to provide an unbiased assessment.


David Hoggfinding emission lines (and other oddities) in hot stars

I showed my robust spectral decomposition (dimensionality reduction) and residuals to the MPIA Binaries group today. There was much useful feedback (including that my H-gamma was actually H-delta; embarrassing!). One comment was that the model isn't truly a causal separation between star and lines, so there will be some mean lines in the star model; lines aren't entirely outliers. That's true! The group suggested that I iterate to remove stars with lines from the training set.

After the meeting, I implemented some of that, but problems like this have a pathology: If you carefully remove stars with high residuals at some wavelength, then the training data will be deficient, or low, at that wavelength. And then the model will go lower, and then more stars will have excess at that wavelength and: Disaster. So when I implemented it, I required a 2-sigma deviation, and I removed both high and low outliers. I don't know if this will work, but I am testing now.
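In case the iteration is hard to picture, here is a minimal sketch of it (the details, like using the mean spectrum as a stand-in for the actual dimensionality-reduction model, are my assumptions):

```python
import numpy as np

def prune_training_set(spectra, n_iter=5, k=2.0):
    """Iteratively drop spectra deviating by more than k sigma at any
    wavelength, clipping BOTH high and low outliers so that one-sided
    removal doesn't drag the model progressively lower."""
    keep = np.ones(len(spectra), dtype=bool)
    for _ in range(n_iter):
        mean = spectra[keep].mean(axis=0)    # stand-in for the real model
        sigma = spectra[keep].std(axis=0)    # per-wavelength scatter
        outlier = np.abs(spectra - mean) > k * sigma
        keep = ~outlier.any(axis=1)
    return keep
```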

July 23, 2025

Doug NatelsonResearch experience for teachers - why NSF education funds matter

The beginning of a RET poster session
Research Experience for Teachers (RET) programs are an example of the kind of programs that the National Science Foundation funds which are focused on K12 (and broader) education. This summer I hosted a high school physics teacher in my lab for 6 weeks, where he worked on a brief project, with one of my doctoral students helping out in a mentoring role.  Just yesterday was the big poster session for all of the participants in the program, and it was very enjoyable to talk with a whole cadre of high school science teachers from across the greater Houston area about their projects and their experiences.  

Readers may be more familiar with the sibling Research Experience for Undergraduates (REU) programs, which give undergraduate students the chance to work for 10 weeks or so in a lab that is very likely not at their home institution.  REUs are a great way for students interested in research to get broad exposure to new topics, meet people and acquire new skills, and for some, figure out whether they like research (and maybe which topics are exciting to them).  The educational goal of REUs is clear:  providing direct research experience to interested undergrads, ideally while advancing a research project and for some small fraction of students resulting in an eventual publication.  

RET programs are different:  They are intended as professional development.  The teachers are exposed to new topics, hopefully a fun research environment, and they are encouraged to think carefully about how they can take the concepts they learn and translate those for the classroom.  I am very much not an expert in education research, but there is evidence (see here, for example) that teachers who participate in these programs get a great deal of satisfaction and have lower attrition from teaching professions.  (Note that it's hard to do statistics well on questions like that, since the population of teachers that seek out opportunities like this may be a special subset of the total population of teachers.)  An idea that makes sense to me:  Enhancing the motivation and job satisfaction of a teacher can have a larger cumulative impact on educating students than an individual research project for a single student.

It would be a great shame if RET and REU programs are victims of large-scale cuts at NSF.  The NSF is the only science agency with education as part of its mission (at least historically).  All the more reason to try to persuade appropriators to not follow the draconian presidential budget request for the agency.


Mark GoodsellEntangled colliders

There are several interesting papers on the arXiv today. One of them, arXiv:2507.15949, involves my former PhD supervisor. It's on the subject of Quantum Entanglement at collider experiments, and relates back to a paper of his from 1992 that I didn't know about (there's a great line in the new paper where the authors complain that their earlier paper was ignored). (Quantum) Entanglement is the phenomenon where two or more particles are in a special state so that their properties are related, but we don't know what those properties are until we measure them. In Quantum Mechanics we would say that the actual state is not decided until we measure them, and this leads to 'spooky action at a distance' because by measuring one particle we appear to set the corresponding property of the other. An alternative explanation would be that there is some hidden quantity or 'hidden variable' where both particles secretly know all along what state they are in. However, surprisingly it's possible to discriminate between these two cases, and set up quantitative tests known as 'Bell inequalities': you can make a measurement and, if the result of that measurement exceeds a certain value, then a hidden variable theory cannot explain it. Experiments to test this using photons at low energies, performed in the early 80s by Alain Aspect and others, violated Bell inequalities and thus confirmed the Quantum Mechanical interpretation.

In recent years, experimentalists have become interested in performing similar tests using different particles at higher energies; it is legitimate to ask "is this true for fermions?" or "does this break down at high energy?" Apparently similar questions were asked in the early 90s at LEP where electrons and positrons were collided (instead of protons at the LHC) and the 1992 paper pointed out that they were not really testing Bell Inequalities. The new paper revisits the older argument, and applies it to the new case of e.g. proton collisions producing a top-antitop pair. They argue that the quantity of interest for the Bell Inequality is the spin density matrix:
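In the standard spin-correlation formalism for a top-antitop pair, the spin density matrix is parametrized by polarization vectors B± and a correlation matrix C; this is the conventional form, and I'm assuming it matches the paper's notation:

$$\rho = \frac{1}{4}\Big[\mathbf{1}\otimes\mathbf{1} + \sum_i B^{+}_i\,\sigma_i\otimes\mathbf{1} + \sum_j B^{-}_j\,\mathbf{1}\otimes\sigma_j + \sum_{i,j} C_{ij}\,\sigma_i\otimes\sigma_j\Big]$$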

But what can actually be measured is the differential cross-section (the rate of production of particles in a certain angular volume):
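In the same (assumed) conventional notation, the measured angular distribution of the decay products reads:

$$\frac{1}{\sigma}\,\frac{d\sigma}{d\cos\theta^{+}_i\,d\cos\theta^{-}_j} = \frac{1}{4}\Big(1 + B^{+}_i\cos\theta^{+}_i + B^{-}_j\cos\theta^{-}_j - C_{ij}\cos\theta^{+}_i\cos\theta^{-}_j\Big)$$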

The symbols B and C appear in both expressions: when performing experimental tests of Bell inequalities they are identified with each other. Since the differential cross-section can be measured, the measurement for the Bell Inequality can then be made and tested. However, the authors of the new paper claim that, in order to identify the two sets of symbols, it is necessary to use Quantum Field Theory: the second equation is a prediction based on QFT from the first. In other words, the logic is circular, and Quantum Mechanics has been assumed -- so it's not surprising that the Bell inequality is violated!

I haven't worked on this topic myself, so it will be interesting to see if there is some pushback from the authors of papers such as arXiv:2003.02280 (who proposed such top-antitop studies). 


Fermi decay constant -- at three loops!

I also want to point out arXiv:2507.15946 by Stephen Martin, who has computed the decay rate of the muon in the Standard Model at three loops. This quantity is incredibly important; it is measured very precisely, and so we use it to extract the underlying parameters of the Standard Model -- or, any theory beyond it. But since it's a complicated process, this is a tricky computation, even at low loop order. The results in this paper will be useful for all sorts of calculations, such as extracting the Higgs boson's self-coupling -- and working out whether the universe is metastable!

July 22, 2025

David Hoggwrote like the wind; frequentist vs Bayes on sparsity

My goal this year in Heidelberg is to move forward all writing projects. I didn't really want to start new projects, but of course I can't help myself, hence the previous post. But today I crushed the writing: I wrote four pages in the book that Rix (MPIA) wants me to write, and I got more than halfway done with a Templeton Foundation pre-proposal that I'm thinking about, and I partially wrote up the method of the robust dimensionality reduction that I was working on over the weekend. So it was a good day.

That said, I don't think that the iteratively reweighted least squares implementation that I am using in my dimensionality reduction has a good probabilistic interpretation. That is, it can't be described in terms of a likelihood function. This is related to the fact that frequentist methods that enforce sparsity (like L1 regularization) don't look anything like Bayesian methods that encourage sparsity (like massed priors). I don't know how to present these issues in any paper I try to write.
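For readers who haven't met it, iteratively reweighted least squares looks something like this generic sketch (my own illustration, not Hogg's implementation); the ad hoc reweighting inside the loop is exactly the part that resists a likelihood interpretation:

```python
import numpy as np

def irls(A, y, n_iter=20, scale=1.0):
    """Robust linear fit: repeatedly re-solve a weighted least-squares
    problem, down-weighting points with large residuals (Cauchy-style)."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - A @ x
        w = 1.0 / (1.0 + (r / scale) ** 2)   # weights recomputed from residuals
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)[0]
    return x
```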

Justin WilsonWelcome to the Quantum World: Where Certainty Ends and Possibility Begins

1. The Classical vs. Quantum World

In our everyday experience of the world, things have precise positions, speeds, and outcomes. You throw a baseball—you know where it’s going. But when we zoom in to the world of atoms and particles, things get weird — and the rules change.


2. The Probabilistic Nature (Uncertainty and Superposition)

🗨️ Metaphor:

"Imagine flipping a coin, while it is spinning in mid-air, it spins in mid-air being both at heads and tails at the same time, with the probability of being heads or tails is still 50-50. At this point, if we want to describe the state of this system (the coin), it would be a combination of both heads and tails — until you look, and then you can say whether the coin landed on heads or tails. That’s how particles behave in the quantum world: they exist in a state made of both heads and tails, a superposition of states, until they’re measured.

🎯 Main idea:

  • Quantum Particles don’t have exact positions or velocities—just probabilities.

  • Measurement collapses the particle’s wavefunction to a definite value.

Let’s look more closely at the idea that Particles behave probabilistically

In classical mechanics, we think of a particle as a tiny object with a definite position and velocity at any time. But in quantum mechanics, particles like electrons are described by a wavefunction, a mathematical function that tells you the probability of finding the particle in different places. You can think of the particle not as a dot but as a fuzzy cloud, where the denser the cloud in one spot, the more likely the particle is to be found there.

This is why we say: "Particles don't have exact positions or velocities—just probabilities."

🎵 The Wave Nature of Matter

In our everyday life, we see systems that exhibit wave properties. Things like sound waves, water waves (surface waves), waves on a cable (vibrating), or if you live in certain places, you may experience seismic waves. These are all classical physics examples that are described by wave equations, where the disturbance propagates through a medium or field, transferring energy without necessarily transferring matter.

For example, when waves meet (i.e., waves in water), they combine through a process called interference. This can take a few forms:

· Constructive Interference: When the crests (high points) and troughs (low points) of two waves line up, they reinforce each other, creating a larger wave. Think of two ripples on a pond colliding and forming a bigger splash.

· Destructive Interference: When a crest meets a trough, they cancel out to some extent—sometimes completely—resulting in a smaller or flat wave.

This blending of energy is happening constantly in light, sound, water waves, and even quantum systems.

Below in Figure 1 is an example of superpositions of waves. The top image shows complete constructive interference and the bottom image shows complete destructive interference. Each wave reaches a maximum of 1 and a minimum of -1; that value of 1 is the wave's amplitude. For complete constructive interference, the superposition of the two waves (superposition means at each position you add the two waves together) reaches 2 at the maxima and -2 at the minima. For complete destructive interference, the two waves cancel out completely (equal 0) when added point by point; this situation is often called completely out of phase. Using the same two points as in the constructive example, wave 1 equals 1 where wave 2 equals -1. In fact, at every point the two waves are equal but of opposite sign (one is +1, say, and the other is -1), so their superposition is 0 everywhere.

Figure 1: Top image showing complete constructive interference, while the bottom image displays complete destructive interference.

Below in Figure 2, the waves are slightly shifted along the position axis (x-axis). Using our same points as before, you can see that the superposition wave no longer quite reaches 2 and -2; its extremes are less than 2 and greater than -2. This is because each wave's maxima and minima now occur at different points in space, and the same is true of the superposition wave at every point. Imagine you fix wave 2 and slowly pull wave 1 to the right (wave 1 is then phase-shifted relative to wave 2). The peaks and troughs of the superposition wave shrink toward 0, and once the maxima of wave 1 line up with the minima of wave 2, the superposition is 0 at every point: the complete destructive interference we saw in Figure 1. Now, if you continue to pull wave 1 to the right, the superposition wave starts growing again, and if you keep pulling, it returns to the complete constructive pattern of Figure 1.

Figure 2: Two waves shifted relative to one another along the x-axis (position axis).

Notice the superposition wave (like the other waves) starts to repeat its pattern. The distance over which the pattern repeats defines the superposition wave's wavelength 𝛌. Now imagine you had lots of waves, each shifted relative to our wave 1. At some positions, constructive interference (though not necessarily complete constructive interference) produces a maximum amplitude, the highest point of the wave (the crest of a water wave); at other positions, destructive interference produces the minimum amplitude (the trough); and in between are the intermediate amplitudes that make up the entire wave. Hopefully, this simplistic model helps us understand how waves form and how you can get a big wave from many small waves.
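If you want to play with this yourself, the whole construction is a few lines of numpy (a minimal sketch of the figures above; the variable names are mine):

```python
import numpy as np

x = np.linspace(0, 4 * np.pi, 1000)
delta = 0.0                    # phase shift: 0 -> constructive, pi -> destructive
wave1 = np.sin(x)
wave2 = np.sin(x + delta)
total = wave1 + wave2          # superposition: add the waves point by point
print(total.max())             # ~2.0 for delta=0; exactly 0 for delta=np.pi
```

Sliding delta from 0 to pi interpolates from Figure 1's constructive pattern, through the intermediate case of Figure 2, to complete cancellation.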

Another feature of waves is their wavelength, which describes how far a wave travels in space before repeating the same pattern over and over. If you remember the mathematical functions sine and cosine, they are waves that repeat in space and have a wavelength. Now the important part: the momentum, p, of these waves is inversely proportional to their wavelength, p ∝ 1/𝛌 (the de Broglie relation p = h/𝛌, where h is Planck’s constant). So if you have a short wavelength, you have a large momentum, and vice versa.

These waves follow classical equations — disturbances that move through a medium, transferring energy. But in quantum mechanics, the wave isn't a ripple in water or air — it’s a probability wave.

Now comes the key idea: wave-particle duality. Particles act like waves. And waves behave very differently from particles in one crucial way:

A wave that's localized in space (i.e., sharply peaked in position) must be made by combining many different wavelengths. Think of a big wave in the ocean: it is formed by lots of smaller waves coming together. Combining waves this way also means the wave contains a wide range of momenta.

Correspondingly, a wave with a well-defined momentum (a single wavelength) must be spread out in space.

For example, consider music: a pure note on a tuning fork (single frequency = defined momentum) lasts long but is hard to pin down in time (spread out), while a short drumbeat is localized in time (defined position) but contains a spread of frequencies (momentum uncertainty).


This trade-off is a fundamental mathematical property of waves, captured by the Fourier transform. The Fourier transform expresses any wave as a combination of sines and cosines (packaged into complex exponentials), and it tells you exactly which wavelengths, and in what amounts, make up a given wave.
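You can check the trade-off numerically with a Fourier transform (a sketch of my own, not from the post): the narrower the bump in position, the broader its spread of wavelengths.

```python
import numpy as np

x = np.linspace(-50, 50, 4096)
for width in (0.5, 5.0):
    packet = np.exp(-x**2 / (2 * width**2))    # bump localized to ~width
    spectrum = np.abs(np.fft.fft(packet))
    k = np.fft.fftfreq(x.size, d=x[1] - x[0])  # wavenumbers (momenta, up to h)
    dk = np.sqrt(np.sum(k**2 * spectrum**2) / np.sum(spectrum**2))
    print(f"position width {width}: momentum spread {dk:.3f}")
# The packet that is 10x narrower in position is ~10x broader in momentum.
```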

3. The Heisenberg Uncertainty Principle: Knowing Less to Understand More

One of the most famous — and misunderstood — ideas in quantum mechanics is the Heisenberg Uncertainty Principle.

It’s often summed up like this: You can’t know both where something is and how fast it’s moving — at the same time — with perfect precision.

At first glance, that sounds like a problem with our measuring tools, as if we just need better microscopes or sensors. But that’s not it.

This principle isn’t about technological limitations — it’s a fundamental property of nature.

What does it mean?

In classical physics, if you know where a car is and how fast it’s going, you can predict exactly where it’ll be a few seconds later. But in the quantum world, if you try to pin down the position of a particle more precisely, you automatically become less certain about its momentum (its speed and direction) — and vice versa.

It’s not because the particle is misbehaving — it’s because particles aren’t like tiny billiard balls. They behave like waves, and waves don’t have sharp edges.


🌀 Wave Metaphor

Think of a musical note. If a sound wave is spread out in time — like a long, steady tone — it has a very precise frequency (pitch). But if it’s a short, sharp “ping,” its frequency becomes less certain. You trade time for pitch.

In the same way, if a particle’s wave is sharply localized in space (you know where it is), the range of its momentum values must broaden. If the wave is spread out (you don’t know exactly where it is), the momentum is better defined.


🔬 So what’s uncertain?

It’s not that the particle is jittering around randomly. Instead:

  • Before measurement, a particle’s position and momentum are both described by a range of probabilities.

  • The more tightly you narrow one, the more uncertain the other becomes.

The Heisenberg Uncertainty Principle can be written down as,

𝚫p𝚫x ≥ ℏ/2

  • 𝚫x is the uncertainty in position

  • 𝚫p is the uncertainty in momentum

  • ℏ is the reduced Planck constant (a very small number)

Let’s try to understand this formula a little better. In quantum mechanics, particles like electrons aren’t just little dots — they also act like waves. This means we describe them with wave packets, which are like short ripples or pulses confined to a limited region of space.

To make a wave packet that’s narrow in space (so we know roughly where the particle is), we have to combine many different waves (i.e., sine waves) with various wavelengths and frequencies (think back to our above example of waves).

That’s because a single sine wave, for example, stretches out infinitely — it doesn’t give you a clear position. Only by mixing waves with different wavelengths (and therefore different momenta) can we build a localized bump.

So: Precise position → requires many different wavelengths → high momentum uncertainty.

Now reverse it. If we only use one sine wave, it has a very clear wavelength (momentum), but it stretches out forever — the particle could be anywhere.

So: Precise momentum → means the particle is spread out → high position uncertainty.

This trade-off is at the heart of the uncertainty principle:

𝚫p𝚫x ≥ ℏ/2

Here, 𝚫x is the uncertainty in position, 𝚫p is the uncertainty in momentum, and ℏ is the reduced Planck constant, a very tiny number.

The key message: The more precisely you know where something is, the less precisely you can know how fast it's going — and vice versa.

Imagine building a short splash on a pond with water waves (see Figure 3):

  • A small, sharp splash uses many different ripple sizes (frequencies).

  • A pure, smooth ripple has just one frequency but spreads out.

That’s the uncertainty principle in action, hiding in the rhythm of waves.

Figure 3: The left figure shows the sharp splash, while the right figure illustrates the smooth ripple.

So as we become more and more certain about the location of a particle (𝚫x getting smaller and smaller, heading to 0), 𝚫p gets larger and larger, heading to ∞. If we knew x exactly, we would know nothing about the momentum p of the particle, since the uncertainty 𝚫p would be infinite.

The Core Idea:

You can’t precisely know both where something is (position) and how fast it’s going or in what direction (momentum) at the same time. The more accurately you try to measure one, the fuzzier the other becomes.

🧠 Everyday Analogy:

Imagine you're trying to photograph a speeding car at night.

  • If you use a fast shutter, you can see exactly where the car is, but the picture will be blurry — you can’t tell how fast it was going.

  • If you use a slow shutter, you get a motion blur — which tells you how fast it was moving, but now you don’t know exactly where it was.

That’s the uncertainty principle in action: precision in one area means fuzziness in the other.

Again, this isn’t just a limitation of our instruments — it's a fundamental property of nature. It's like the universe itself has this built-in fuzziness at tiny scales.

This principle also tells us why electrons don't just spiral into the nucleus of an atom.

Because you can’t precisely know both the position and momentum of a particle at the same time.

If an electron got too close to the nucleus, its position would be very well known (i.e., tightly confined in space). According to the uncertainty principle, its momentum would then become highly uncertain. And since kinetic energy is computed directly from momentum, large momentum fluctuations mean large kinetic energy.

This tells us that confining the electron too tightly costs energy — a lot of energy. That energy cost balances out the attractive pull of the nucleus. The result? The electron occupies a fuzzy “cloud” of most likely locations (remember it is based on probabilities)— what we call an orbital — and it doesn't just fall in.
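Here is the standard back-of-the-envelope version of that argument (textbook material, not spelled out in the post). Confine the electron to a radius r, so 𝚫p ~ ℏ/r, and the total energy is roughly

E(r) ≈ ℏ²/(2mr²) − e²/(4πε₀r).

Shrinking r makes the positive kinetic term blow up faster than the negative Coulomb term, so E(r) has a minimum: setting dE/dr = 0 gives r = 4πε₀ℏ²/(me²) ≈ 0.53 Å, the Bohr radius, with E ≈ −13.6 eV. The uncertainty principle alone lands you on the size and energy of the hydrogen atom.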

This quantum balancing act gives rise to stable atoms, the periodic table, chemistry, etc.

Wave-particle duality

Wave-particle duality is one of the most astonishing ideas in modern physics. It says that tiny things—like electrons and light—can behave like particles and waves, depending on how you look at them.

  • Waves (like ocean waves, or ripples in a pond, or even sound waves) are spread out, continuous disturbances. They travel, they can interfere with each other (creating bigger or smaller waves), and they bend around corners. You can't point to "one wave" and say it's at a single, precise location.

  • Particles (like a baseball, or a tiny pebble) are distinct, localized objects. They have a definite position, mass, and can be tracked as they move from one point to another.

The Classical Difference: In our ordinary experience, something is clearly either a wave or a particle. Never both.

🌍 In the Classical World

In everyday experience:

  • Objects are either particles (like baseballs) or waves (like sound or water ripples).

  • Particles have defined positions and travel along clear paths.

  • Waves are spread out, overlap, and interfere, but they don't "exist" in a single spot.

Think of throwing a rock into a pond—either you're dealing with the rock or the ripples it creates, never both at once.

⚛️ In the Quantum World

The Quantum Twist: Wave-Particle Duality

But when we zoom down to the incredibly tiny, fundamental level of reality – the quantum realm – things get weird. Particles like electrons, and even light itself (which we classically considered a wave), don't always fit neatly into one category. This is wave-particle duality:

  • Light, for instance, can behave like a spread-out wave (which is why it can create interference patterns, just like water waves). But it can also act like a stream of tiny, discrete particles called photons (which is how it knocks electrons off a metal surface in the photoelectric effect, acting like tiny billiard balls).

  • Similarly, electrons (which we think of as particles making up atoms) can, under certain experimental conditions, exhibit wave-like behavior, creating interference patterns as if they were spread out and passing through multiple places at once. Yet, when we try to pinpoint their location, they act like a localized particle.

This means a single electron, shot toward a double slit, doesn't just go through one slit—it behaves as if it explores all possibilities at once, producing an interference pattern typical of waves.

🤔 So What Does This Mean?

The amazing part is that a quantum entity isn't just sometimes a wave and sometimes a particle. Instead, it possesses both wave-like and particle-like properties simultaneously, and the act of observation or the type of experiment we perform determines which aspects we will observe. You can't observe both characteristics at the same exact time in the same experiment.

This seemingly paradoxical idea is a cornerstone of quantum mechanics and is absolutely essential for understanding how the universe works at its most fundamental level. It underpins all modern technologies from lasers and transistors to medical imaging and the very concept of quantum computing.

The objects aren't just "here or there"—they are probabilistic ripples, until observed.

Wave-particle duality is nature’s way of whispering: “The world is more nuanced than it seems.”


July 18, 2025

Doug NatelsonThe latest on US science funding

The US House and Senate appropriations subcommittees have now completed their markups on the bills relevant to the FY26 appropriations for NSF, NASA, and NIST.  The AAAS has an interactive dashboard with current information here if you want to click and look at all the science-related agencies.   Other agencies still need to go through the Senate subcommittees. 

Just a reminder of how this is supposed to work.  The House and Senate mark up their own versions of the detailed appropriations bills.  In principle these are passed by each chamber (with the Senate versions for practical purposes requiring 60/100 votes of support because of the filibuster).  Then a conference committee hashes out the differences between the bills, and the conference version of the bills is then voted on by each chamber (again, needing 60/100 votes to pass in the Senate).  Finally, the president signs the spending bills.  In the fantasy land of Schoolhouse Rock, which largely described events until the 1990s, these annual spending bills are supposed to be passed in time for the start of the new fiscal year on October 1.  In practice, Congress has been deeply dysfunctional for years, and there have been a lot of continuing resolutions, late budgets, and mammoth omnibus spending bills.  

To summarize:

  • NSF - House recommendation = $6.997B (a 20.7% cut from FY25), Senate = $9B (a 2% increase from FY25).  These are in sharp contrast to the presidential budget request (PBR) of a 55.8% cut.
  • NASA - House = flat from FY25, Senate = $24.9B (0.2% increase).  
  • NIST - House = $1.28B (10.6% increase from FY25), Senate = $1.6B (38.3% increase from FY25)
  • NOAA - House = $5.7B (28.3% increase from FY25), Senate = $6.1B (36.3% increase from FY25)
DOE has gone through the House, where the Office of Science is slated for a 1.9% increase, in contrast to a 13.9% cut in the PBR.  

If you are eligible and able to do so, please keep pushing.  As I wrote a few days ago, this is a long-term project, since appropriations happen every year.  As long as you're making your opinions known, it's good to push on representatives and senators that they need to hold the agency leadership accountable to actually spend what congress appropriates. 

A science post soon....

Matt von HippelHype, Incentives, and Culture

To be clear, hype isn’t just lying.

We have a word for when someone lies to convince someone else to pay them, and that word is fraud. Most of what we call hype doesn’t reach that bar.

Instead, hype lives in a gray zone of affect and metaphor.

Some hype is pure affect. It’s about the subjective details, it’s about mood. “This is amazing” isn’t a lie, or at least, isn’t a lie you can check. They might really be amazed!

Some hype relies on metaphor. A metaphor can’t really be a lie, because a metaphor is always incomplete. But a metaphor can certainly be misleading. It can associate something minor with something important, or add emotional valence that isn’t really warranted.

Hype lies in a gray zone…and precisely because it lives in a gray zone, not everything that looks like hype is intended to be hype.

We think of hype as a consequence of incentives. Scientists hype their work to grant committees to get grants, and hype it more to the public for prestige. Companies hype their products to sell them, and their business plans to draw in investors.

But what looks like hype can also be language, and culture.

To many people in the rest of the world, the way Americans talk about almost everything is hype. Everything is bigger and nicer and cooler. This isn’t because Americans are under some sort of weird extra career incentives, though. It’s just how they expect to talk, how they learned to talk, how everyone around them normally talks.

Similarly, people in different industries are used to talking differently. Depending on what work you do, you interpret different metaphors in different ways. What might seem like an enthusiastic endorsement in one industry might be dismissive in another.

In the end, it takes two to communicate: a speaker, and an audience. Speakers want to get their audience excited, and hopefully, if they don’t want to hype, to understand something of the truth. That means understanding how the audience communicates enthusiasm, and how it differs from the speaker. It means understanding language, and culture.

Justin WilsonScientists Discover a New Phase of Game Show!

You’ve seen the headlines: “Scientists Discover a New Phase of Matter!” They usually go something like, “You’ve heard of solids, liquids, and gases—but now there’s a fourth (fifth) phase: [insert buzzword here].”1 You might think about these phases in terms of temperature: heat up ice and it melts, a phase change! And yes, that is a phase transition. But temperature is just one knob we can turn, and phases are far richer than just “solid, liquid, and gas.” In fact, new phases are surprisingly common, and to understand why, let’s play a little game.


What is the Percolating Phase?

Imagine you’re on a game show called Are You Likeable? (the least-likeable game show). The rules of the game are simple:

  1. You stand on the stage and try to win over the audience.

  2. Each audience member votes whether they like you or not.

  3. But the twist: votes aren’t tallied—they control a system of pipes above your head.

That system of pipes looks something like this2

A game where you get soaked or remain dry based on whether the audience votes, which randomly turns a spigot on or off. Image generated by ChatGPT.

Each “like” turns a spigot off, stopping water from flowing through one pipe in a grid overhead3. Once voting ends, water is dumped into the system. If it can find a path to the bottom, you get soaked. The better your “likeability,” the less likely spigots open a path for water to flow and the drier you stay. That’s your prize for this game show (and hey, you also get the knowledge that people out there like you).

This system models a type of phase transition known as percolation4.

But where is the phase transition?

Aside from asking when my game show will be green-lit5, we can ask: When are you most likely to get wet? If the audience is huge, and fewer than 50% of them like you, it’s nearly guaranteed that the water will find a path—you’ll be soaked. But if more than 50% like you, chances are good that you’ll stay dry.

This is a phase transition6: a sudden change in the system’s behavior. For a moderately sized audience, it looks something like this:

The phase diagram for when your likeability gets you soaked in this game show. The line represents your chance of getting soaked.

Your probability of getting soaked forms a curve that sharpens up with increasing audience size—becoming a near step at 50%. That is known as the percolation threshold.

This is hard to visualize, though; luckily this problem admits some very nice pictures. For instance, here is the problem with a large number of pipes:

Animated gif of the water flowing through with certain “Likeability” scores. (Refresh if the animation stopped.)

If a pipe is blue, water is in it, and all the blue clusters flow down from the top. Notice what happens around 50%: even though spigots are being turned off at random, the flow from top to bottom is entirely cut off near this value. Something is happening within the system that allows water to pass through on one side of the transition and not the other.

A closer look at the transition

To dig deeper, we simulate what happens at this threshold. Each spigot is either open or closed (randomly determined). If we visualize the grid (say 1024×1024 spigots), it looks like visual static: black and white dots with no obvious pattern7:

But now let’s color each connected cluster of open spigots—where water could flow and fill up this section of pipes. Suddenly, structure emerges. Some clusters are small, others large. If one spans from top to bottom, water flows and we’re in the percolating phase. If not, we’re in the non-percolating phase. At the transition (within the static above), we get this for the twelve largest clusters:
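If you want to reproduce this kind of picture, here is a rough sketch (mine, leaning on footnote 7’s site-percolation simplification and scipy’s cluster labeling) that colors the twelve largest clusters at the threshold:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import label

rng = np.random.default_rng(1)
L, p_c = 1024, 0.592746                 # site-percolation threshold (square lattice)
grid = rng.random((L, L)) < p_c         # True = open site
clusters, _ = label(grid)               # 4-neighbor connected components
sizes = np.bincount(clusters.ravel())
sizes[0] = 0                            # label 0 is the closed background
top12 = np.argsort(sizes)[-12:]         # labels of the twelve largest clusters
img = np.zeros((L, L), dtype=int)
for rank, lab in enumerate(top12, start=1):
    img[clusters == lab] = rank
plt.imshow(img, cmap="tab20", interpolation="nearest")
plt.axis("off")
plt.show()
```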

At exactly the percolation threshold (50% for the pipes above), there’s no single dominant cluster, but also no clear typical size. Instead, there’s a wide distribution of cluster sizes. The critical state behaves differently than either phase.

Scale Invariance: A Hallmark of Criticality

Let’s zoom out. Suppose we double the grid to 2048×2048.

The largest clusters are definitely larger—the largest here contains 400,000 pipes/spigots, while for the previous 1024×1024 case the largest contained 180,000—but the pattern still looks… the same. We doubled the linear size, but if we slightly blur our vision, we cannot distinguish these two plots (even though one is quadruple the area of the other). Look at 512×512—even that looks similar:

You would be hard-pressed to say which one is larger if you blurred your eyes. The same difficulty persists even down to 256×256 or 128×128.

This is called scale invariance—there is no characteristic length scale at the phase transition. It’s one of the defining features of what are known as second-order phase transitions.

It also explains why, at the threshold, you have a 50% chance of getting soaked. The largest cluster might span the system, but it might just as well fall short. There’s no guarantee either way.

Fractals in the Flow

These clusters within the above pictures don’t look like regular 2D structures or even 1D lines; they are, in fact, fractals. They aren’t exactly self-similar, but they are statistically similar across scales. They fill space with a fractal dimension: not quite 1D, not quite 2D. In two-dimensional percolation, the clusters have a dimension of 91/48 ≈ 1.896—a universal number shared by all systems in this class, regardless of lattice type or other microscopic details.
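You can roughly check that 91/48 yourself. Here is a sketch under my own assumptions (site percolation at its threshold, as in footnote 7): measure how the largest cluster’s mass M grows with system size L and fit M ~ L^(91/48). Expect noisy agreement at these modest sizes.

```python
import numpy as np
from scipy.ndimage import label

rng = np.random.default_rng(0)
p_c = 0.592746                          # site-percolation threshold
Ls, masses = [64, 128, 256, 512], []
for L in Ls:
    biggest = []
    for _ in range(10):                 # average over a handful of samples
        clusters, _ = label(rng.random((L, L)) < p_c)
        counts = np.bincount(clusters.ravel())
        counts[0] = 0                   # ignore the closed background
        biggest.append(counts.max())
    masses.append(np.mean(biggest))

# The slope of log M against log L estimates the fractal dimension.
slope = np.polyfit(np.log(Ls), np.log(masses), 1)[0]
print(f"estimated fractal dimension: {slope:.2f} (exact: 91/48 = {91/48:.3f})")
```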

This is part of the beauty of percolation: It shows us visually the underlying mathematical structure and even reveals some universality of phase transitions.

Why This Matters

Percolation is just one example, but it captures the essence of what physicists mean when they talk about “phases of matter.” It isn’t always about exotic particles or extreme temperatures turning gas into plasma. Sometimes, it’s about whether a liquid can find its way through a series of pipes. It’s about symmetry, structure, and emergence.

You’ve experienced water’s phases: ice, liquid, steam. But nature offers many more—some with no neat label like “solid” or “gas.” The theory of phase transitions explains them. Percolation is a window into that wider world.

1

What’s funny to me about this is how we use ice/water/vapor to expound on this. But water’s phase diagram is complex and deserves its own post. One point of interest: water and water vapor can be smoothly connected to each other without going through any phase transition. That and there’s something like 20 phases of ice.

2

Forgive me the ChatGPT weirdness: the drip on the left pipe and the weird long pipe on the right are just LLM quirks. Kind of like how hard it is to get AI to draw a full wine glass. (Or maybe now it can?)

3

Audience members can’t influence each other here. Assume the spigots are randomized and a stern librarian keeps everyone silent.

4

Technically, this is bond percolation on a square lattice.

5

NBC, call me!

6

A second-order phase transition is one where the change is continuous, but its derivatives (like heat capacity or cluster size) diverge. Percolation is a particularly visually clean example.

7

I have switched from bond percolation to site percolation to make plotting and cluster finding easier. The universal features do not depend on this.

July 17, 2025

July 15, 2025

Mark GoodsellThe Ant Mill

Jesper Grimstrup kindly sent me an electronic copy of his new book, The Ant Mill. He was also kind enough to give me some feedback on a first version of this review.


It has a foreword by Peter Woit, who has commented briefly about the book on his blog; the author also has a substack. The subtitle is 'How theoretical high-energy physics descended into groupthink, tribalism and mass production of research' so you would expect it to be the sort of thing that I would take a strong objection to. However, I am the sort of person who likes to read things that challenge me; the only thing that gets under my skin in this book is attacking the whole of academia in public.

The story is an interweaving of the author's personal experiences in academia with his general observations. This personal story and his experiences are interesting, much as I expect those of anyone who has spent several years in academia would be. And he has clearly spent a lot of time thinking about thinking about research. I love meta-activities of this sort; the best example that I know of is You and Your Research by Hamming, which I stumbled on as a postdoc. Indeed, the existence of these sorts of things that are shared by young researchers is actually evidence against the central thesis of Grimstrup's book.

The market for books attacking High-Energy Physics seems to be burgeoning. On the one hand Hossenfelder believes that we have become 'lost in math,' and on the other Woit believes we are not mathematical enough; both attack string theory as a failed program. Grimstrup's book is in the mathematical camp, with the novelty that he piles scorn on all popular approaches to quantum gravity, in particular loop quantum gravity and noncommutative geometry, since he has come into closest contact with them. His observations about string theorists are mainly about the shoddy way that he was treated during his time at the NBI, with several egregious examples of bad behaviour. We are led to conclude that it is not just string theorists who have formed a closed tribe, but that there are several such groups crowding out innovation.

Grimstrup refers to his own research program and gives examples of how it has just generally been ignored within academia. For example, he starts the book with a copy of a grant application by a 31-year-old Niels Bohr for an entire institute, and contrasts this with a grant application of his own that was refused and that effectively ended his career within academia (my understanding is that at the NBI in Copenhagen it is common to ask for and obtain grants to pay your own salary and prolong temporary contracts). He writes that he does not do this to compare himself to Niels Bohr, but inadvertently that is the impression the book gave me -- not that he is being self-aggrandising, but that you can almost feel his frustration coming through the pages that his expectations did not meet reality. It seems like bait at times, inviting anyone who disagrees with the general thesis to attack him personally. Instead, I will have a look at his papers with an open mind, after writing this review, and keep my thoughts on them to myself.

The book made me think of how many of us enter academia. We grow up reading popular science accounts idolising physicists from a century ago. And it made me think more of the self-actualisation messages that were rammed down all our throats in the popular culture in the 80s and 90s: follow your dreams, stick to your principles, be true to yourself, this is the most important thing in life and you shouldn't worry about money, just be happy. And: working hard and getting good grades is the way to get to the top. The problem is that this is largely obsolete: it's based on the world that existed post-World War Two, when there was a scarcity of labour and an economic boom. Then -- if you were from the right background and your face fit -- you could work hard, get a PhD and walk into a permanent academic job (yes, this is a caricature). Science was respected and so were scientists; high-energy physics was at the top of the tree because of the connection with technological advancements and nuclear weapons. That world doesn't exist any more; while in many ways this is for the better, it is undeniable that we live in a world of much greater competition, and public skepticism about science is increasing.

The scientific community has expanded, as has the population; and more importantly, education throughout the world and global travel and communication have meant that the number of people around the world who are involved in research is much greater than it was. Grimstrup notes that increasing the size of the academic community has led to fundamental changes of behaviour: professionalisation of research and groupthink, and that this leads to an increasing incentive to work on mainstream topics. He has done bibliographical research to demonstrate this (in papers and presented in the book). It is clearly true that the Matthew effect exists in many branches of society, and therefore also in academia; governments wanting to exert some form of oversight in exchange for the funds that they provide has definitely led to changes in incentives for researchers. One aspect of this is that it is hard to judge the work of people from other fields, but we are required to do so; and then it is difficult to argue with quantitative measures such as number of papers, citations, h-indices. Then of course the measure becomes the target for certain people.

Grimstrup rails against all these changes; he clearly believed that the correct thing to do for an aspiring researcher would be to work on their own ideas, stick to their principles and not compromise. They should work for a long time, in isolation, on a major paper, put it on arxiv.org, and the next day their colleagues would read it and ask interesting questions about it. Fame and fortune would follow. The thing that shocked Grimstrup was that not only did people not care about any papers he posted, but a young competitor even once told him that some ideas are simply not worth pursuing even though they may be interesting. For sure, this is horrible and shocking behaviour, and does not reflect well on the anonymous person who said it.

For my part I am still naive enough to think that if new ideas are good, someone will recognise them as such, and network effects will make them known. I know that many researchers already think more deeply about what they are doing than he gives us credit for: and we discuss it, during seminars, over a drink with colleagues, in the coffee-breaks of conferences, during our annual or five-year reviews, or in grant applications. When I discussed this review with a string-theorist colleague they remarked "of course we know the situation sucks!" I think Grimstrup is therefore wrong to tar everyone with the same brush: the diversity in our community has increased greatly with time, and this means that there are indeed strong incentives to take a risk on a novel idea, because the rewards of opening a new research direction are immense! Being the originator of an idea, or the first to recognise the merit in even an old forgotten idea, can yield tremendous results and even greater recognition nowadays thanks to the same effects. Hence, starting a new field, or even a subfield, is something that most researchers aspire to; the rewards for doing so are even greater now than in times gone by, and the evidence that this is possible is even given in this book: the existence of several communities working on different approaches to quantum gravity. He argues that these are now old and stale, but my point is that the way that they were able to take root at all is an example of how this can happen. There are many subfields that have sprung up more recently, and in other branches of HEP there are of course many examples. Nowadays things can change very quickly: a new good idea will be very rapidly jumped on once it is recognised, and people are constantly on the lookout.

Grimstrup also, like Lee Smolin, divides researchers into visionaries and technicians. He then complains that the technicians have taken over, with lots of disparaging comments about them digging endless holes. He further complains that there is an incentive to collaborate in modern research: only collaborators survive in the system; he has evidence that being a lone wolf is a poor survival strategy. He believes that we should work on our own; yet at the same time visionaries need to collaborate with technicians. I found this very jarring. Other than the facile placing of people into boxes, he is overlooking the benefits of collaboration -- his opinion is that it is just about inflating the number of papers one person can sign (and for sure there are people who cynically do this). But to me, discussing with other people, even just explaining something, is often the quickest way to generate genuinely new ideas or solutions to problems that we may never have come up with alone. At the same time, there are plenty of people who do write papers alone; to take a leaf from his book and share a personal story, I once had a comment on a postdoc application that I had no single-author papers and therefore did not demonstrate independence. Hence, there are incentives and a good reason for young researchers to work alone sometimes. I then wrote a single-author paper, as I have occasionally done since (and got the fellowship next time I applied); I would agree that there is a pleasure and some advantages in doing this, but to do this all the time would mean I would risk missing out on lots of new ideas and other perspectives, as well as the pleasure of regular interactions with collaborators, and it would also limit the scope of my projects, where I benefit from others' expertise. Or collaborations may just be working with a student, pursuing my ideas (hopefully they contribute some of their own!) and imparting my knowledge in the process. This is why I do not think that encouraging people to predominantly cloister themselves away to work alone for a long time is the most productive or healthy approach.

The book also has a very narrow focus as to the goal of high-energy physics. For the author, the quest is for "the next theory," but in essence this means a theory of quantum gravity, which he acknowledges would be far from being able to be tested with any present or near-future data. Otherwise, we should look for a mathematically rigorous definition of quantum field theory; he hopes these will be one and the same thing. This latter problem has proven to be both very hard and not obviously useful -- it is certainly not obvious that the solution should even be unique, for example a theory of strings would cure ultra-violet divergences, and the question of whether strings should be necessary for such a theory is one that I know people have tried to explore. I also recently attended a talk by Michael Douglas where he reviewed recent attempts at rigorous QFT, so it is a subject that is regarded as important but very difficult, and still being explored by a small number of people. Regarding quantum gravity, some people in the community have taken the opinion that if you have no data, it is not a good problem, and are working on other things. Or people try to make contact with data using e.g. EFT approaches to measuring quantum effects of gravity. The string theory community might say that we do have a theory of quantum gravity, in fact we have a landscape of them, and try e.g. to use it to answer questions about black hole information. But at the same time some people then complain that the leading string theorists have moved on to other things: there are lots of important open fundamental problems, and we just do not know how they are interlinked, if at all!

Grimstrup's insistence that the solution to what he sees as problems is to shrink competition and also encourage research outside of academia reminded me of another Dane, the subject of another book I read recently: King Cnut, famous for (presumably apocryphally) standing on the beach in front of his ministers and commanding the tide to turn back. Otherwise Grimstrup hopes for a crisis, perhaps one provoked by his book. He explicitly states that he does not want to fuel the anti-establishment or anti-academic movements, but I suspect that the only crises we might suffer would not be good for the field. Perhaps one is already taking place in the US; perhaps people will take his message to heart despite his protests and start a DOGE-style decimation of research. Necessarily, in science we mark our own homework: only other scientists are capable of judging the claims of their peers. If we start opening this up to question then we will only end with government appointees deciding what are acceptable topics and directions, or shutting public funding down altogether. What would be left over would surely be even greater competition for scarce resources.

For me, the solution to the problems in the book, to the extent that I agree with them, is to regularly remind ourselves that we should always maintain a childlike curiosity and not close our minds to new ideas and new possibilities. This is the message from the text of Hamming, and very well put in the writings of Feynman (who Grimstrup bizarrely dismisses as a technician compared to Bohr). Otherwise of course in science it is necessary to have a community spirit, to realise that we are all trying to make progress in the best way we know how, and to help each other do so; and it is necessary to maintain healthy competition as a motivator. But both conflicting instincts -- to compete and to group into communities -- are vital parts of human nature and denying this has been the mistake of utopians throughout history. 

I am also sure that many of the complaints that Grimstrup assigns to high-energy physics could also be applied to society more generally. So instead of trying to hold back or reverse the societal changes of the last century we should try to work with them as best we can. We have to accept that we live now in an attention economy; and this gives new opportunities: blogging, social media, writing articles in science magazines or popular press, etc. Since Grimstrup is now, interestingly, an independent scientist, perhaps tying his own research program so closely with his book is embracing the modern world at last, and creating a brand as a radical outside thinker, that will be attractive to private backers. He promotes the path that he has followed, crowdfunding his research or seeking support of patrons, as a possible path for the independently minded once they have completed their training in academia, and in this I wish him well: he is clearly serious, determined and sincere. But while this is now part of twenty-first century society, many people have noticed that this modern trend is a return to the nineteenth century (or even earlier, e.g. Leonardo da Vinci being invited to France by François I) where a wealthy patron was the only source of funding.



July 14, 2025

Clifford JohnsonThe Power of the String Equation

[More technical post follows.] I've been working on this project with (UCSB postdoc) Maciej Kolanowski on and off for a while now, but only in the last couple of weeks did I have the time to hunker down and help push the writing of the results to the finish. For your Sunday reading pleasure, it is already up on the arXiv here (it came out Thursday but I've been too busy to pause to post about it - partly because I've begun work on writing up the next paper in the backlog). The title is "Extended JT supergravity and random matrix models: The power of the string equation", and it is co-authored with Maciej Kolanowski.

In a way, it is a natural continuation of work from 2023 and 2024 that I've described here and here. At a meeting at the Institute for Advanced Study in December 2023 I described in a talk (YouTube video here, look in particular from minute 35) something miraculous I'd discovered concerning capturing certain special supergravity (and black hole) behaviour using a random matrix model. The effective physics is [...]


July 11, 2025

Matt von HippelDid the South Pole Telescope Just Rule Out Neutrino Masses? Not Exactly, Followed by My Speculations

Recently, the South Pole Telescope’s SPT-3G collaboration released new measurements of the cosmic microwave background, the leftover light from the formation of the first atoms. By measuring this light, cosmologists can infer the early universe’s “shape”: how it rippled on different scales as it expanded into the universe we know today. They compare this shape to mathematical models, equations and simulations which tie together everything we know about gravity and matter, and try to see what it implies for those models’ biggest unknowns.

Some of the most interesting such unknowns are neutrino masses. We know that neutrinos have mass because they transform as they move, from one type of neutrino to another. Those transformations let physicists measure the differences between neutrino masses, but by themselves, they don’t say what the actual masses are. All we know from particle physics, at this point, is a minimum: in order for the neutrinos to differ in mass enough to transform in the way they do, the total mass of the three flavors of neutrino must be at least 0.06 electron-Volts.

(Divided by the speed of light squared to get the right units, if you’re picky about that sort of thing. Physicists aren’t.)

Neutrinos also influenced the early universe, shaping it in a noticeably different way than heavier particles that bind together into atoms, like electrons and protons, did. That effect, observed in the cosmic microwave background and in the distribution of galaxies in the universe today, lets cosmologists calculate a maximum: if neutrinos are more massive than a certain threshold, they could not have the effects cosmologists observe.

Over time as measurements improved, this maximum has decreased. Now, the South Pole Telescope has added more data to the pool, and combining it with prior measurements…well, I’ll quote their paper:

Ok, it’s probably pretty hard to understand what that means if you’re not a physicist. To explain:

  1. There are two different hypotheses for how neutrino masses work, called “hierarchies”. In the “normal” hierarchy, the neutrinos go in the same order as the particles they interact with via the weak nuclear force: electron neutrinos are lighter than muon neutrinos, which are lighter than tau neutrinos. In the “inverted” hierarchy, they come in the opposite order, and the electron neutrino is the heaviest. Both of these are consistent with the particle-physics data.
  2. Confidence is a statistics thing, which could take a lot of unpacking to define correctly. To give a short but likely tortured-sounding explanation: when you rule out a hypothesis at a certain confidence level, you’re saying that, if that hypothesis were true, there would only be a 100%-minus-that-level chance that you would see what you actually observed. (See the small conversion sketch after this list.)
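For the statistically inclined, here is a tiny conversion sketch (my addition, not from the paper) translating those tail probabilities into the “sigma” units particle physicists favor:

```python
from scipy.stats import norm

# Tail probabilities quoted below: 2.1% (normal) and 0.01% (inverted).
for name, p in [("normal hierarchy", 0.021), ("inverted hierarchy", 0.0001)]:
    print(f"{name}: p = {p:.4f} -> about {norm.isf(p):.1f} sigma")
# The 99.99994% "discovery" standard mentioned later corresponds to
# the famous five-sigma threshold.
```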

So, what are the folks at the South Pole Telescope saying? They’re saying that if you put all the evidence together (that’s roughly what that pile of acronyms at the beginning means), then the result would be incredibly uncharacteristic for either hypothesis for neutrino masses. If you had “normal” neutrino masses, you’d only see these cosmological observations 2.1% of the time. And if you had inverted neutrino masses instead, you’d only see these observations 0.01% of the time!

That sure makes it sound like neither hypothesis is correct, right? Does it actually mean that?

I mean, it could! But I don’t think so. Here I’ll start speculating on the possibilities, from least likely in my opinion to most likely. This is mostly my bias talking, and shouldn’t be taken too seriously.

5. Neutrinos are actually massless

This one is really unlikely. The evidence from particle physics isn’t just quantitative, but qualitative. I don’t know if it’s possible to write down a model that reproduces the results of neutrino oscillation experiments without massive neutrinos, and if it is, it would be a very bizarre model that would almost certainly break something else. This is essentially a non-starter.

4. This is a sign of interesting new physics

I mean, it would be nice, right? I’m sure there are many proposals at this point, tweaks that add a few extra fields with some hard-to-notice effects to explain the inconsistency. I can’t rule this out, and unlike the last point there isn’t anything about it that seems impossible. But we’ve had a lot of odd observations, and so far this hasn’t happened.

3. Someone did statistics wrong

This happens more often. Any argument like this is a statistical argument, and while physicists keep getting better at statistics, they’re not professional statisticians. Sometimes there’s a genuine misunderstanding that goes into testing a model, and once it gets resolved the problem goes away.

2. The issue will go away with more data

The problem could also just…go away. 97.9% confidence sounds huge…but in physics, the standards are higher: you need 99.99994% to announce a new discovery. Physicists do a lot of experiments and observations, and sometimes, they see weird things! As the measurements get more precise, we may well see the disagreement melt away, and cosmology and particle physics both point to the same range for neutrino masses. It’s happened to many other measurements before.

1. We’re reaching the limits of our current approach to cosmology

This is probably not actually the most likely possibility, but it’s my list, what are you going to do?

There are basic assumptions behind how most theoretical physicists do cosmology. These assumptions are reasonably plausible, and seem to be needed to do anything at all. But they can be relaxed. Our universe looks like it’s homogeneous on the largest scales: the same density on average, in every direction you look. But the way that gets enforced in the mathematical models is very direct, and it may be that a different, more indirect, approach has more flexibility. I’ll probably be writing about this more in future, hopefully somewhere journalistic. But there are some very cool ideas floating around, gradually getting fleshed out more and more. It may be that the answer to many of the mysteries of cosmology right now is not new physics, but new mathematics: a new approach to modeling the universe.

Justin WilsonTwo Dimensional Materials have gone crazy!

There are a ton of two-dimensional materials these days. You’ve probably heard of graphene, a single layer of carbon atoms arranged in a hexagonal grid.

In graphene, carbon atoms sit at the vertices of these hexagons. Photo by Andrew Draper on Unsplash

In 2018, everything changed when two layers of graphene were twisted to reveal superconductivity! The twist itself is interesting (I briefly discussed it in a previous post), but the key takeaway is that these materials now come with an extra knob for accessing new phases of matter. It’s remarkable. We can first think of these materials like Lego blocks:

Photo by Omar Flores on Unsplash

Each layer is a different material: mix and match, and you might discover an exotic new phase. This “Lego” idea had already been in the air before 2018, but the physics since then has shown that it’s not just about stacking—we can twist too, creating not just patterns, but new ways for electrons to move.


Two hexagonal layers twisted on top of each other, creating a moiré pattern.
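If you want to play with this yourself, here is a quick sketch (my own construction, not from the post) of a moiré pattern: superpose a triangular lattice with a copy rotated by a few degrees. Real graphene is a honeycomb, i.e. a triangular lattice with a two-atom basis, but the bare triangular Bravais lattice already shows the beat pattern.

```python
import numpy as np
import matplotlib.pyplot as plt

def triangular_lattice(theta, n=30):
    """Points of a triangular lattice rotated by theta radians."""
    a1, a2 = np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3) / 2])
    i, j = np.meshgrid(np.arange(-n, n + 1), np.arange(-n, n + 1))
    pts = np.outer(i.ravel(), a1) + np.outer(j.ravel(), a2)
    c, s = np.cos(theta), np.sin(theta)
    return pts @ np.array([[c, -s], [s, c]]).T   # rotate every point

for pts, color in [(triangular_lattice(0.0), "tab:blue"),
                   (triangular_lattice(np.radians(5)), "tab:red")]:
    plt.scatter(pts[:, 0], pts[:, 1], s=3, c=color, alpha=0.5)
ax = plt.gca()
ax.set_aspect("equal")
ax.set_xlim(-15, 15)
ax.set_ylim(-15, 15)
plt.show()
```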

We knew these patterns would occur, but we didn’t realize we could make such a stack superconduct. Now we can stack and twist to great effect. Of course, twisted bilayer graphene isn’t about to revolutionize high-speed trains (it goes superconducting at only 4K1), but the way it goes superconducting is eerily reminiscent of higher-temperature superconductors. That means it might help us understand those other materials better.

And once people started twisting, they didn’t stop. We now have twisted multilayers of graphene, transition-metal dichalcogenide (TMD) bilayers2, and more. But it doesn’t end there; you can also apply magnetic fields, electric fields, and pattern the lattice in sophisticated ways. With all that in mind, here’s a short and incomplete survey of some of the exotic phases in these materials:

“Fractional… what now?”

All of these phases are exceptionally hard to understand and model. Some of the best minds in the field are actively working on them. One particularly exciting phase is the fractional Chern insulator, which could be useful for quantum computing.

But even setting aside applications, what’s astonishing is that all of these phenomena come from nothing more than electrons moving on a lattice and experiencing a few fields. Nature seems to treat electrons like Play-Doh, shaping them into wildly different quantum phases.

This is a deep and fundamental question: What can be accomplished using electrons alone?

1

That’s -452.47 degrees Fahrenheit.

2

To this day, I still can’t say the full name, so I just say “TMD.”

Doug NatelsonUS science funding - now time to push on the House appropriators

Some not-actively-discouraging news out of Washington DC yesterday:  The Senate appropriations committee is doing its markups of the various funding bills (which all technically originated in the House), and it appears that they have pushed to keep the funding for NASA and NSF (which are bundled in the same bill with the Department of Justice for no obvious reason) at FY24 levels.  See here as well.  

This is not yet a done deal within the Senate, but it's better than many alternatives.  If you are a US citizen or permanent resident and one of your senators is on the appropriations committee, please consider calling them to reinforce how devastating massive budget cuts to these agencies would be.  I am told that feedback to any other senators is also valuable, but appropriators are particularly important here.

The House appropriations committee has not yet met to mark up their versions.  They had been scheduled to do so earlier this week but punted it for an unknown time.  Their relevant subcommittee membership is here.  Again, if you are a constituent of one of these representatives, your calls would be particularly important, though it doesn't hurt for anyone to make their views heard to their representative.  If the House version aligns with the presidential budget request, then a compromise between the two might still lead to 30% cuts to NSF and NASA, which would (IMO) still be catastrophic for the agencies and US science and competitiveness.

This is a marathon, not a sprint.  There are still many looming difficulties - staffing cuts are well underway.   Spending of already appropriated funds at agencies like NSF is way down, leading to the possibility that the executive branch may just order (or not-order-but-effectively-order) agencies not to spend and then claw back the funds.  This year and in future years they could decide to underspend appropriations knowing that any legal resistance will take years and cost a fortune to work its way through the courts.  This appropriations battle is also an annual affair - even if the cuts are forestalled for now (it is unlikely that the executive would veto all the spending bills over science agency cuts), this would have to happen again next year, and so on.

Still, right now, there is an opportunity to push against funding cuts.  Failing to try would be a surrender.

(Obligatory notice:  yes, I know that there are large-scale budgetary challenges facing the US; I don't think destroying government investment in science and engineering research is an intelligent set of spending cuts.)

July 10, 2025

Scott Aaronson Trump and Iran, by popular request

I posted this on my Facebook, but several friends asked me to share more widely, so here goes:

I voted against Trump three times, and donated thousands to his opponents. I’d still vote against him today, seeing him as a once-in-a-lifetime threat to American democracy and even to the Enlightenment itself.

But last night I was also grateful to him for overruling the isolationists and even open antisemites in his orbit, striking a blow against the most evil regime on the planet, and making it harder for that regime to build nuclear weapons. I acknowledge that his opponents, who I voted for, would’ve probably settled for a deal that would’ve resulted in Iran eventually getting nuclear weapons, and at any rate getting a flow of money to redirect to Hamas, Hezbollah, and the Houthis.

May last night’s events lead to the downfall of the murderous ayatollah regime altogether, and to the liberation of the Iranian people from 46 years of oppression. To my many, many Iranian friends: I hope all your loved ones stay safe, and I hope your great people soon see better days. I say this as someone whose wife and 8-year-old son are right now in Tel Aviv, sheltering every night from Iranian missiles.

Fundamentally, I believe not only that evil exists in the world, but that it’s important to calibrate evil on a logarithmic scale. Trump (as I’ve written on this blog for a decade) terrifies me, infuriates me, and embarrasses me, and through his evisceration of American science and universities, has made my life noticeably worse. On the other hand, he won’t hang me from a crane for apostasy, nor will he send a ballistic missile to kill my wife and son and then praise God for delivering them into his hands.


Update: I received the following comment on this post, which filled me with hope, and demonstrated more moral courage than perhaps every other anonymous comment in this blog’s 20-year history combined. To this commenter and their friends and family, I wish safety and eventually, liberation from tyranny.

I will keep my name private for clear reasons. Thank you for your concern for Iranians’ safety and for wishing the mullah regime’s swift collapse. I have fled Tehran and I’m physically safe but mentally, I’m devastated by the war and the internet blackout (the pretext is that Israeli drones are using our internet). Speaking of what the mullahs have done, especially outrageous was the attack on the Weizmann Institute. I hope your wife and son remain safe from the missiles of the regime whose thugs have chased me and my friends in the streets and imprisoned my friends for simple dissent. All’s well that ends well, and I hope this all ends well.

July 09, 2025

Doug NatelsonNew updates + tetrahedra, tunneling times, and more

Here are a number of items from the past week or so that I think readers of this blog might find interesting:
  • Essentially all the news pertaining to the US federal funding of science continues to be awful.  This article from Science summarizes the situation well, as does this from The Guardian and this editorial in the Washington Post. I do like the idea of a science fair of cancelled grants as a way to try to get the allegedly bipartisan appropriators to notice just how bad the consequences of the proposed cuts would be.
  • On a more uplifting note, mathematicians have empirically demonstrated a conjecture originally made by John Conway, that it is possible to make a tetrahedral pyramid that, under gravity, has only one stable orientation.  Quanta has a nice piece on this with a cool animated gif, and here is the actual preprint about it.  It's all about mass distributions and moments of inertia about edges.  As others have pointed out including the authors, this could be quite useful for situations like recent lunar lander attempts that seem to have a difficult time not toppling over.
  • A paper last week in Nature uses photons and a microcavity to try to test how long it takes photons to tunnel through a classically forbidden region.  In this setup, it is mathematically legit to model the photons as if they have an effective mass, and one can model the barrier they need to traverse in terms of an effective potential energy.  Classically, if the kinetic energy of the particle of interest is less than the potential energy of the barrier, the particle is forbidden inside the barrier.  I've posted about the issue of tunneling time repeatedly over the years (see here for a 2020 post containing links), because I think it's a fascinating problem both conceptually and as a puzzle for experimentalists (how does one truly do a fair test of this?).  The take-away from this paper is that the more classically forbidden the motion, the shorter the deduced tunneling time.  This has been seen in other experiments testing this idea.  A key element of novelty in the new paper is the claim that the present experiment seems (according to the authors) to not be reasonably modeled by Bohmian mechanics.  I'd need to read this in more depth to better understand it, as I had thought that Bohmian mechanics applied to problems like this is generally indistinguishable in predictions from conventional quantum mechanics, basically by design.
  • In other non-condensed matter news, there is an interstellar comet transiting the solar system right now.  This is very cool - it's only the third such object detected by humans, but to be fair we've only really been looking for a few years.  This suggests that moderately sized hunks of material are likely passing through from interstellar space all the time, and the Vera C. Rubin Observatory will detect a boatload of them.  My inner science fiction fan is hoping that the object changes its orbit at perihelion by mysterious means.  
This week is crunch time for a final push on US congressional appropriators to try to influence science agency budgets in FY26.  I urge you to reach out if this matters to you.  Likewise, I think it's more than reasonable to ask congress why the NSF is getting kicked out of its headquarters with no plan for an alternative agency location, so that the HUD secretary can have a palatial second home in that building.

July 08, 2025

Terence TaoSalem Prize now accepting nominations for 2025

The Salem prize was established in 1968 and named in honor of Raphaël Salem (1898-1963), a mathematician famous notably for his deep study of the links between Fourier series and number theory and for pioneering applications of probabilistic methods to these fields. It was not awarded from 2019-2022, due to both the COVID pandemic and the death of Jean Bourgain who had been almost single-handedly administering the prize, but is now active again, being administered by Akshay Venkatesh and the IAS. I chair the scientific committee for this prize, whose other members are Guy David and Mikhail Sodin. Last year, the prize was awarded to Miguel Walsh and Yilin Wang.

Nominations for the 2025 Salem Prize are now open until September 15th. Nominations should include a CV of the nominee and a nomination letter explaining the significance of the nominee’s work. Supplementary documentation, such as supporting letters of recommendation or key publications, can additionally be provided, but are not required.

Nominees may be individuals from any country or institution. Preference will be given to nominees who have received their PhD in the last ten years, although this rule may be relaxed if there are mitigating personal circumstances, or if there have been few Salem prize winners in recent years.  Self-nominations will not be considered, nor are past Prize winners or Scientific Committee members eligible.

The prize does not come with a direct monetary award, but winners will be invited to visit the IAS and to give a lecture associated with the award of the prize.

See also the previous year’s announcement of the Salem prize nomination period.

July 07, 2025

Matt Strassler Extreme and Dumb Cuts to US Science

As many of you are no doubt aware, in the past few days the US Congress voted to make major cuts to scientific research, and the president signed the bill. The government’s National Science Foundation has been cut by more than half, which means that its actual science budget has been cut by much more than that after you account for fixed costs. So vast, sudden and draconian are these cuts that it will take a long time for me and others in the field to figure out what has actually happened.

The reductions seem extreme, quite arbitrary and very poorly thought out. As an example, the LIGO observatory (the Laser Interferometer Gravitational-Wave Observatory, whose amazing discoveries, such as this one and this one, earned the United States a Nobel Prize in 2017) is being cut in half. There are currently two interferometers, one in Washington state and one in Louisiana, but one has been largely defunded in this bill, if I understand correctly.

I can see the logic: the scientists have two interferometers, but in tough times they ought to be able to get along with just one, right?

Well, that’s like cutting off one of a runner’s legs. Two were built because two were needed.

With just one, the signal from most gravitational wave events is so weak that you can’t distinguish it from noise. Other interferometers around the world just aren’t working well enough to make up for throwing away one of LIGO’s two. (And besides, you need three or four interferometers around the world to be able to know precisely where in the sky the waves are coming from, knowledge which can make other major discoveries possible.)

According to Science magazine, “In a two-sentence email to Science, an NSF spokesperson said the plan reflects `a strategic alignment of resources in a constrained fiscal environment.’ “

This is not strategic. This is stupid. The amount of money saved, less than 10 cents per year per US citizen, is very small compared to what we as a nation have already spent on this wonderful facility, and cutting LIGO in half makes it dramatically less than half as good — so this is actually a big waste of money both past and future. The decision to make this cut in this way is nothing short of ridiculous and incompetent.

[Not to mention that “constrained fiscal environment” is quite a phrase when you’re increasing the budget deficit rather than shrinking it.]

I fear there are many other similar examples to be found.

July 06, 2025

Tommaso DorigoHighlights From MODE And EUCAIF

After a month of intense travel, which among other things included attendance at the MODE Workshop in Crete and the EUCAIF conference in Sardinia, I am back to northern Sweden. Besides significantly improving my well-being, given the horrible heat wave that hit Southern and Central Europe in the past few weeks, the move north allows me to finally give a relaxed look back at the most relevant information I gathered at those events, and other relevant things.


June 29, 2025

Scott Aaronson BusyBeaver(6) is really quite large

For overdetermined reasons, I’ve lately found the world an increasingly terrifying and depressing place. It’s gotten harder and harder to concentrate on research, or even popular science writing. Every so often, though, something breaks through that wakes my inner child, reminds me of why I fell in love with research thirty years ago, and helps me forget about the triumphantly strutting factions working to destroy everything I value.

Back in 2022, I reported an exciting advance in BusyBeaverology: namely, whereas we previously knew merely that BB(6) > 10^{36,534}, Pavel Kropitz managed to show that

BB(6) > ^{15}10.

For those tuning in from home, here BB(6) is the 6th Busy Beaver number, i.e. the maximum number of steps that a 6-state Turing machine with a {0,1} alphabet can take before halting, when run on an initially all-0 input tape. Also, the left-superscript means tetration, or iterated exponentiation: for example, ^{15}10 means 10 to the 10 to the 10 and so on 15 times.
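If tetration is new to you, a few lines of Python (my illustration, not Aaronson’s) pin down the definition:

```python
def tet(b, n):
    """Left-superscript tetration ^{n}b: a tower of n copies of b."""
    x = 1
    for _ in range(n):
        x = b ** x
    return x

print(tet(10, 2))   # 10**10
print(tet(2, 4))    # 2**(2**(2**2)) = 65536
# tet(10, 15), the tower in the new bound, is hopelessly beyond evaluation.
```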

By comparison, last year the international “BBchallenge” team determined that BB(5) is “merely” 47,176,870 (see also Quanta magazine’s superb feature article on that milestone). So, between 5 and 6 is where the Busy Beaver function makes its leap, from the millions to beyond the bounds of observable reality.

But if you thought that was the end of the BB(6) story, think again! Eleven days ago, Tristan Sterin, who organized the BBchallenge team, emailed to tell me that a team member with the handle “mxdys” improved the BB(6) bound yet further, to

BB(6) > ^{10,000,000}10

(i.e., 10 to the 10 to the 10 and so on 10 million times), with a correctness proof in Coq. Then, three days ago, Tristan wrote again to say that mxdys has improved the bound again, to

$$ BB(6) \gt ^{^{{^9}2}2}2 $$

I.e., BB(6) is at least 2 tetrated to the 2 tetrated to the 2 tetrated to the 9. So in particular, BB(6) is at least 2 pentated to the 5, where pentation is iterated tetration, i.e. the operation that is to tetration as tetration is to exponentiation, exponentiation is to multiplication, and multiplication is to addition.

Last week, when we “merely” knew that BB(6) > ^{10,000,000}10, I talked to a journalist who asked me to give an intuitive sense of how big such a number is. So I said, imagine you had ^{10,000,000}10 grains of sand. Then you could … well, uh … you could fill about ^{10,000,000}10 copies of the observable universe with that sand. I hope that helps people visualize it!

The journalist also asked: have these new discoveries about BB(6) caused me to rethink any broader beliefs about the Busy Beaver function? And I mean, yes and no: it was always completely within the realm of possibility that BB(6) would already be, not some puny little thing like 10^{36,534}, but way out in iteration land. Now that we know for sure that it is, though, maybe I ought to conjecture that the value of BB(n) becomes independent of the ZFC axioms of set theory already when n is 7 or 8 or 9, rather than when it’s 20 or 30 or whatever. (Currently, we know that BB(n) becomes independent of ZFC only when n=643.)


Unrelated Update: I’m just now returning to the US from STOC’2025 in Prague, where I saw lots of old friends and learned many interesting new things, again helping to distract me from the state of the world! Maybe I’ll write about some of those things in a future post. For now, though, anyone who’s interested in my STOC plenary lecture, entitled “The Status of Quantum Speedups,” can check out the PowerPoint slides here.

June 27, 2025

Justin WilsonWater and its phases

I’m working on a much longer post on phases and phase transitions for next week1, but in the meantime, let me share with you some cool facts about water and its “phases.”

We all know about solids, liquids, and gases from school. Heat up ice, and you get water; heat up water, and you get vapor. We may even have been slightly baffled if we saw this phase diagram with “pressure” added to the mix:


Phase diagram of water. This file is licensed under CC BY-SA 3.0 .

I see here a solid phase, a liquid phase, and a gas phase, but what is this “Critical point”? If you tune your temperature and pressure just right, you can smoothly cross over from liquid to gas without ever undergoing a phase transition. Without getting into the molecular details, we can think of phases as particular valleys between mountains, and water wants to reach the absolute lowest point. Sometimes there are two valleys, but one is lower, and sometimes there is just one valley.

In fact, this “number of valleys” is why we see this odd behavior. If we sit at 100 degrees C and decrease or increase the pressure, there are two energy minima2—two valleys. At small pressure, the deepest valley is on the gas side, and at large pressure, the deepest valley is on the liquid side. As you then tune pressure across that one-bar point, one valley gets deeper than the other—it’s the true minimum! Yet, to get from one valley to the next, you need some energy to get you over that mountain in between. That’s the phase transition. However, that's not the only option. As the temperature increases, the mountain in between gets smaller and smaller until, at the critical point, it finally disappears, and the two valleys merge.
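Here is a cartoon of that valley picture (my own sketch, not water’s actual free energy): a Landau-style double well f(x) = x^4 - a*x^2, where the barrier between the two minima shrinks as a decreases and vanishes at a = 0, the analogue of the critical point.

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-1.5, 1.5, 400)
for a in (1.0, 0.5, 0.0):               # a = 0 plays the role of the critical point
    plt.plot(x, x**4 - a * x**2, label=f"a = {a}")
plt.xlabel("order parameter (think: density difference)")
plt.ylabel("free energy (arbitrary units)")
plt.legend()
plt.show()
```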

Without two distinct valleys, there is no mountain to scale and no phase transition. Liquid smoothly and easily becomes gas. At temperatures above the critical point, you cannot meaningfully distinguish liquid from gas. OK, so perhaps we only have two phases?

Not quite; look at this more fleshed-out version of the phase diagram:

When ice forms, it adopts a low-energy crystal structure. However, there are numerous crystal structures to choose from. In fact, as you change pressure and temperature, it can completely reorganize how the ice bonds together into a crystal. This leads to over 20 phases of ice, labeled by some of the Roman numerals above.3

Then what are the phases? Solids undergo their own phase transitions—structural phase transitions. Are these not phases of matter? If they are, then we have already exceeded our three phases of matter just within water. But phases go beyond temperature and pressure. They also possess a multitude of interesting properties, particularly at that critical point. We'll cover some of that in detail next week.

1

We’ll be making our own phase! Related, of course, to a known phase transition.

2

For most of the phase diagram, there is one absolute minimum, and the other is a “metastable” or local minimum.

3

For those interested, this Wikipedia article has a lot of information on the phases of ice.

June 25, 2025

Scott Aaronson Guess I’m A Rationalist Now

A week ago I attended LessOnline, a rationalist blogging conference featuring many people I’ve known for years—Scott Alexander, Eliezer Yudkowsky, Zvi Mowshowitz, Sarah Constantin, Carl Feynman—as well as people I’ve known only online and was delighted to meet in person, like Joe Carlsmith and Jacob Falkovich and Daniel Reeves. The conference was at Lighthaven, a bewildering maze of passageways, meeting-rooms, sleeping quarters, gardens, and vines off Telegraph Avenue in Berkeley, which has recently emerged as the nerd Shangri-La, or Galt’s Gulch, or Shire, or whatever. I did two events at this year’s LessOnline: a conversation with Nate Soares about the Orthogonality Thesis, and an ask-me-anything session about quantum computing and theoretical computer science (no new ground there for regular consumers of my content).

What I’ll remember most from LessOnline is not the sessions, mine or others’, but the unending conversation among hundreds of people all over the grounds, which took place in parallel with the sessions and before and after them, from morning till night (and through the night, apparently, though I’ve gotten too old for that). It felt like a single conversational archipelago, the largest in which I’ve ever taken part, and the conference’s real point. (Attendees were exhorted, in the opening session, to skip as many sessions as possible in favor of intense small-group conversations—not only because it was better but also because the session rooms were too small.)

Within the conversational blob, just making my way from one building to another could take hours. My mean free path was approximately five feet, before someone would notice my nametag and stop me with a question. Here was my favorite opener:

“You’re Scott Aaronson?! The quantum physicist who’s always getting into arguments on the Internet, and who’s essentially always right, but who sustains an unreasonable amount of psychic damage in the process?”

“Yes,” I replied, not bothering to correct the “physicist” part.

One night, I walked up to Scott Alexander, who, sitting on the ground with his large bald head and a blanket he was using as a robe, resembled a monk. “Are you enjoying yourself?” he asked.

I replied, “you know, after all these years of being coy about it, I think I’m finally ready to become a Rationalist. Is there, like, an initiation ritual or something?”

Scott said, “Oh, you were already initiated a decade ago; you just didn’t realize it at the time.” Then he corrected himself: “two decades ago.”

The first thing I did, after coming out as a Rationalist, was to get into a heated argument with Other Scott A., Joe Carlsmith, and other fellow-Rationalists about the ideas I set out twelve years ago in my Ghost in the Quantum Turing Machine essay. Briefly, my argument was that the irreversibility and ephemerality of biological life, which contrasts with the copyability, rewindability, etc. of programs running on digital computers, and which can ultimately be traced back to microscopic details of the universe’s initial state, subject to the No-Cloning Theorem of quantum mechanics, which then get chaotically amplified during brain activity … might be a clue to a deeper layer of the world, one that we understand about as well as the ancient Greeks understood Newtonian physics, but which is the layer where mysteries like free will and consciousness will ultimately need to be addressed.

I got into this argument partly because it came up, but partly also because this seemed like the biggest conflict between my beliefs and the consensus of my fellow Rationalists. Maybe part of me wanted to demonstrate that my intellectual independence remained intact—sort of like a newspaper that gets bought out by a tycoon, and then immediately runs an investigation into the tycoon’s corruption, as well as his diaper fetish, just to prove it can.

The funny thing, though, is that all my beliefs are the same as they were before. I’m still a computer scientist, an academic, a straight-ticket Democratic voter, a liberal Zionist, a Jew, etc. (all identities, incidentally, well-enough represented at LessOnline that I don’t even think I was the unique attendee in the intersection of them all).

Given how much I resonate with what the Rationalists are trying to do, why did it take me so long to identify as one?

Firstly, while 15 years ago I shared the Rationalists’ interests, sensibility, and outlook, and their stances on most issues, I also found them bizarrely, inexplicably obsessed with the question of whether AI would soon become superhumanly powerful and change the basic conditions of life on earth, and with how to make the AI transition go well. Why that, as opposed to all the other sci-fi scenarios one could worry about, not to mention all the nearer-term risks to humanity?

Suffice it to say that empirical developments have since caused me to withdraw my objection. Sometimes weird people are weird merely because they see the future sooner than others. Indeed, it seems to me that the biggest thing the Rationalists got wrong about AI was to underestimate how soon the revolution would happen, and to overestimate how many new ideas would be needed for it (mostly, as we now know, it just took lots more compute and training data). Now that I, too, spend some of my time working on AI alignment, I was able to use LessOnline in part for research meetings with colleagues.

A second reason I didn’t identify with the Rationalists was cultural: they were, and are, centrally a bunch of twentysomethings who “work” at an ever-changing list of Berkeley- and San-Francisco-based “orgs” of their own invention, and who live in group houses where they explore their exotic sexualities, gender identities, and fetishes, sometimes with the aid of psychedelics. I, by contrast, am a straight, monogamous, middle-aged tenured professor, married to another such professor and raising two kids who go to normal schools. Hanging out with the Rationalists always makes me feel older and younger at the same time.

So what changed? For one thing, with the march of time, a significant fraction of Rationalists now have marriages, children, or both—indeed, a highlight of LessOnline was the many adorable toddlers running around the Lighthaven campus. Rationalists are successfully reproducing! Some because of explicit pronatalist ideology, or because they were persuaded by Bryan Caplan’s arguments in Selfish Reasons to Have More Kids. But others simply because of the same impulses that led their ancestors to do the same for eons. And perhaps because, like the Mormons or Amish or Orthodox Jews, but unlike typical secular urbanites, the Rationalists believe in something. For all their fears around AI, they don’t act doomy, but buzz with ideas about how to build a better world for the next generation.

At a LessOnline parenting session, hosted by Julia Wise, I was surrounded by parents who worry about the same things I do: how do we raise our kids to be independent and agentic yet socialized and reasonably well-behaved, technologically savvy yet not droolingly addicted to iPad games? What schooling options will let them accelerate in math, save them from the crushing monotony that we experienced? How much of our own lives should we sacrifice on the altar of our kids’ “enrichment,” versus trusting Judith Rich Harris that such efforts quickly hit a point of diminishing returns?

A third reason I didn’t identify with the Rationalists was, frankly, that they gave off some (not all) of the vibes of a cult, with Eliezer as guru. Eliezer writes in parables and koans. He teaches that the fate of life on earth hangs in the balance, that the select few who understand the stakes have the terrible burden of steering the future. Taking what Rationalists call the “outside view,” how good is the track record for this sort of thing?

OK, but what did I actually see at Lighthaven? I saw something that seemed to resemble a cult only insofar as the Beatniks, the Bloomsbury Group, the early Royal Society, or any other community that believed in something did. When Eliezer himself—the bearded, cap-wearing Moses who led the nerds from bondage to their Promised Land in Berkeley—showed up, he was argued with like anyone else. Eliezer has in any case largely passed his staff to a new generation: Nate Soares and Zvi Mowshowitz have found new and, in various ways, better ways of talking about AI risk; Scott Alexander has for the last decade written the blog that’s the community’s intellectual center; figures from Kelsey Piper to Jacob Falkovich to Aella have taken Rationalism in new directions, from mainstream political engagement to the … err … statistical analysis of orgies.

I’ll say this, though, on the naysayers’ side: it’s really hard to make dancing to AI-generated pop songs about Bayes’ theorem and Tarski’s definition of truth not feel cringe, as I can now attest from experience.

The cult thing brings me to the deepest reason I hesitated for so long to identify as a Rationalist: namely, I was scared that if I did, people whose approval I craved (including my academic colleagues, but also just randos on the Internet) would sneer at me. For years, I searched for some way of explaining this community’s appeal so reasonable that it would silence the sneers.

It took years of psychological struggle, and (frankly) solidifying my own place in the world, to follow the true path, which of course is not to give a shit what some haters think of my life choices. Consider: five years ago, it felt obvious to me that the entire Rationalist community might be about to implode, under existential threat from Cade Metz’s New York Times article, as well as RationalWiki and SneerClub and all the others laughing at the Rationalists and accusing them of every evil. Yet last week at LessOnline, I saw a community that has never thrived more, with a beautiful real-world campus, excellent writers on every topic who felt like this was the place to be, and even a crop of kids. How many of the sneerers are living such fulfilled lives? To judge from their own angry, depressed self-disclosures, probably not many.

But are the sneerers right that, even if the Rationalists are enjoying their own lives, they’re making other people’s lives miserable? Are they closet far-right monarchists, like Curtis Yarvin? I liked how The New Yorker put it in its recent, long and (to my mind) devastating profile of Yarvin:

The most generous engagement with Yarvin’s ideas has come from bloggers associated with the rationalist movement, which prides itself on weighing evidence for even seemingly far-fetched claims. Their formidable patience, however, has also worn thin. “He never addressed me as an equal, only as a brainwashed person,” Scott Aaronson, an eminent computer scientist, said of their conversations. “He seemed to think that if he just gave me one more reading assignment about happy slaves singing or one more monologue about F.D.R., I’d finally see the light.”

The closest to right-wing politics that I witnessed at LessOnline was a session, with Kelsey Piper and current and former congressional staffers, about the prospects for moderate Democrats to articulate a pro-abundance agenda that would resonate with the public and finally defeat MAGA.

But surely the Rationalists are incels, bitter that they can’t get laid? Again, the closest I saw was a session where Jacob Falkovich helped a standing-room-only crowd of mostly male nerds confront their fears around dating and understand women better, with Rationalist women eagerly volunteering to answer questions about their perspective. Gross, right? (Also, for those already in relationships, Eliezer’s primary consort and former couples therapist Gretta Duleba did a session on relationship conflict.)

So, yes, when it comes to the Rationalists, I’m going to believe my own lying eyes over the charges of the sneerers. The sneerers can even say about me, in their favorite formulation, that I’ve “gone mask off,” confirmed the horrible things they’ve always suspected. Yes, the mask is off—and beneath the mask is the same person I always was, who has an inordinate fondness for the Busy Beaver function and the complexity class BQP/qpoly, and who uses too many filler words and moves his hands too much, and who strongly supports the Enlightenment, and who once feared that his best shot at happiness in life would be to earn women’s pity rather than their contempt. Incorrectly, as I’m glad to report. From my nebbishy nadir to the present, a central thing that’s changed is that, from my family to my academic colleagues to the Rationalist community to my blog readers, I finally found some people who want what I have to sell.


Unrelated Announcements:

My replies to comments on this post might be light, as I’ll be accompanying my daughter on a school trip to the Galapagos Islands!

A few weeks ago, I was “ambushed” into leading a session on philosophy and theoretical computer science at UT Austin. (I.e., asked to show up for the session, but thought I’d just be a participant rather than the main event.) The session was then recorded and placed on YouTube—and surprisingly, given the circumstances, some people seemed to like it!

Friend-of-the-blog Alon Rosen has asked me to announce a call for nominations for a new theoretical computer science prize, in memory of my former professor (and fellow TCS blogger) Luca Trevisan, who was lost to the world too soon.

And one more: Mahdi Cheraghchi has asked me to announce the STOC’2025 online poster session, registration deadline June 12; see here for more. Incidentally, I’ll be at STOC in Prague to give a plenary on quantum algorithms; I look forward to meeting any readers who are there!

June 24, 2025

Clifford JohnsonSuper-Fun!

In January 2024 I wrote a paper showing how to define the Supersymmetric Virasoro Minimal String* (SVMS) as a random matrix model, compute many of its properties, and indeed predict many aspects of its physics. This was the first time the SVMS had been constructed. Despite that, a recent paper found it necessary to specifically single out my paper disparagingly as somehow not being a string theory paper, in service of (of course) their own work trying to formulate it. Odd - and disappointingly unkind - behaviour. But I’m used to it.

Anyway, since it remains the case that there is no other working definition of the SVMS out there, I thought I’d revisit the matter, clean up some unpublished work of mine (defining the 0B version) and develop the whole formalism much more. Might be useful for people pursuing other approaches. What I thought would be at most a 10 page paper turned into a 19 page one, packed with lots of fun results.

In particular it is now clear to me how the type 0A vs 0B choices, usually made at the level of perturbative worldsheet CFT methods, show up fully at the level of matrix model string equation solutions. It is often said that random matrix model methods can rather obscure issues like worldsheet supersymmetry, making it unclear what structures pertain to what features in other approaches. That can be true, but these new observations clearly show that this is not always the case. (This is true quite generally, beyond this particular family of models.)

Also (and this is lots of fun!) I demonstrate that the basic loop observables of the SVMS .... Click to continue reading this post

The post Super-Fun! appeared first on Asymptotia.

John PreskillCongratulations, class of 2025! Words from a new graduate

Editor’s note (Nicole Yunger Halpern): Jade LeSchack, the Quantum Steampunk Laboratory’s first undergraduate, received her bachelor’s degree from the University of Maryland this spring. Kermit the Frog presented the valedictory address, but Jade gave the following speech at the commencement ceremony for the university’s College of Mathematical and Natural Sciences. Jade heads to the University of Southern California for a PhD in physics this fall.

Good afternoon, everyone. My name is Jade, and it is my honor and pleasure to speak before you. 

Today, I’m graduating with my Bachelor of Science, but when I entered UMD, I had no idea what it meant to be a professional scientist or where my passion for quantum science would take me. I want you to picture where you were four years ago. Maybe you were following a long-held passion into college, or maybe you were excited to explore a new technical field. Since then, you’ve spent hours titrating solutions, debugging code, peering through microscopes, working out proofs, and all the other things our disciplines require of us. Now, we’re entering a world of uncertainty, infinite possibility, and lifelong connections. Let me elaborate on each of these.

First, there is uncertainty. Unlike in simplified projectile motion, you can never predict the exact trajectory of your life or career. Plans will change, and unexpected opportunities will arise. Sometimes, the best path forward isn’t the one you first imagined. Our experiences at Maryland have prepared us to respond to the challenges and curveballs that life will throw at us. And, we’re going to get through the rough patches.

Second, let’s embrace the infinite possibilities ahead of us. While the concept of the multiverse is best left to the movies, it’s exciting to think about all the paths before us. We’ve each found our own special interests over the past four years here, but there’s always more to explore. Don’t put yourself in a box. You can be an artist and a scientist, an entrepreneur and a humanitarian, an athlete and a scholar. Continue to redefine yourself and be open to your infinite potential.

Third, as we move forward, we are equipped not only with knowledge but with connections. We’ve made lasting relationships with incredible people here. As we go from place to place, the people who we’re close to will change. But we’re lucky that, these days, people are only an email or phone call away. We’ll always have our UMD communities rooting for us.

Now, the people we met here are certainly not the only important ones. We’ve each had supporters along the various stages of our journeys. These are the people who championed us, made sacrifices for us, and gave us a shoulder to cry on. I’d like to take a moment to thank all my mentors, teachers, and friends for believing in me. To my mom, dad, and sister sitting up there, I couldn’t have done this without you. Thank you for your endless love and support. 

To close, I’d like to consider this age-old question that has always fascinated me: Is mathematics discovered or invented? People have made a strong case for each side. If we think about science in general, and our future contributions to our fields, we might ask ourselves: Are we discoverers or inventors? My answer is both! Everyone here with a cap on their head is going to contribute to both. We’re going to unearth new truths about nature and innovate scientific technologies that better society. This uncertain, multitudinous, and interconnected world is waiting for us, the next generation of scientific thinkers! So let’s be bold and stay fearless. 

Congratulations to the class of 2024 and the class of 2025! We did it!

Author’s note: I was deeply grateful for the opportunity to serve as the student speaker at my commencement ceremony. I hope that the science-y references tickle the layman and SME alike. You can view a recording of the speech here. I can’t wait for my next adventures in quantum physics!

June 23, 2025

John PreskillA (quantum) complex legacy: Part trois

When I worked in Cambridge, Massachusetts, a friend reported that MIT’s postdoc association had asked its members how it could improve their lives. The friend confided his suggestion to me: throw more parties.1 This year grants his wish on a scale grander than any postdoc association could. The United Nations has designated 2025 as the International Year of Quantum Science and Technology (IYQ), as you’ve heard unless you live under a rock (or without media access—which, come to think of it, sounds not unappealing).

A metaphorical party cracker has been cracking since January. Governments, companies, and universities are trumpeting investments in quantum efforts. Institutions pulled out all the stops for World Quantum Day, which happens every April 14 but which scored a Google doodle this year. The American Physical Society (APS) suffused its Global Physics Summit in March with quantum science like a Bath & Body Works shop with the scent of Pink Pineapple Sunrise. At the summit, special symposia showcased quantum research, fellow blogger John Preskill dished about quantum-science history in a dinnertime speech, and a “quantum block party” took place one evening. I still couldn’t tell you what a quantum block party is, but this one involved glow sticks.

Google doodle from April 14, 2025

Attending the summit, I felt a satisfaction—an exultation, even—redolent of twelfth grade, when American teenagers summit the Mont Blanc of high school. It was the feeling that this year is our year. Pardon me while I hum “Time of your life.”2

Speakers and organizer of a Kavli Symposium, a special session dedicated to interdisciplinary quantum science, at the APS Global Physics Summit

Just before the summit, editors of the journal PRX Quantum released a special collection in honor of the IYQ.3 The collection showcases a range of advances, from chemistry to quantum error correction and from atoms to attosecond-length laser pulses. Collaborators and I contributed a paper about quantum complexity, a term that has as many meanings as companies have broadcast quantum news items within the past six months. But I’ve already published two Quantum Frontiers posts about complexity, and you surely study this blog as though it were the Bible, so we’re on the same page, right? 

Just joshing. 

Imagine you have a quantum computer that’s running a circuit. The computer consists of qubits, such as atoms or ions. They begin in a simple, “fresh” state, like a blank notebook. Post-circuit, they store quantum information, such as entanglement, as a notebook stores information post-semester. We say that the qubits are in some quantum state. The state’s quantum complexity is the least number of basic operations, such as quantum logic gates, needed to create that state—via the just-completed circuit or any other circuit.
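
To make the definition concrete, here is a toy sketch (my own illustration, not from the post): for a single qubit and a tiny gate set {H, T}, one can brute-force the fewest gates needed to reach a target state from |0⟩. Computing complexity this way is hopeless beyond a few qubits, which is part of what makes the quantity interesting; the max_depth and tol parameters here are arbitrary choices.

    # Toy brute-force "complexity" of a single-qubit state: the fewest
    # gates from {H, T} mapping |0> to the target, up to global phase.
    # Illustration only -- the real quantity is intractable at scale.
    import numpy as np
    from itertools import product

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)         # Hadamard
    T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])  # T gate
    GATES = {"H": H, "T": T}

    def min_gate_count(target, max_depth=8, tol=1e-6):
        zero = np.array([1, 0], dtype=complex)
        for depth in range(max_depth + 1):   # shortest circuits first
            for word in product(GATES, repeat=depth):
                state = zero
                for name in word:
                    state = GATES[name] @ state
                # equal up to global phase iff |<target|state>| = 1
                if abs(abs(np.vdot(target, state)) - 1) < tol:
                    return depth, "".join(word)
        return None

    plus = np.array([1, 1]) / np.sqrt(2)  # |+> = H|0>
    print(min_gate_count(plus))           # -> (1, 'H')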

Today’s quantum computers can’t create high-complexity states. The reason is, every quantum computer inhabits an environment that disturbs the qubits. Air molecules can bounce off them, for instance. Such disturbances corrupt the information stored in the qubits. Wait too long, and the environment will degrade too much of the information for the quantum computer to work. We call the threshold time the qubits’ lifetime, among more-obscure-sounding phrases. The lifetime limits the number of gates we can run per quantum circuit.

The ability to perform many quantum gates—to perform high-complexity operations—serves as a resource. Other quantities serve as resources, too, as you’ll know if you’re one of the three diehard Quantum Frontiers fans who’ve been reading this blog since 2014 (hi, Mom). Thermodynamic resources include work: coordinated energy that one can harness directly to perform a useful task, such as lifting a notebook or staying up late enough to find out what a quantum block party is. 

My collaborators: Jonas Haferkamp, Philippe Faist, Teja Kothakonda, Jens Eisert, and Anthony Munson (in an order of no significance here)

My collaborators and I showed that work trades off with complexity in information- and energy-processing tasks: the more quantum gates you can perform, the less work you have to spend on a task, and vice versa. Qubit reset exemplifies such tasks. Suppose you’ve filled a notebook with a calculation, you want to begin another calculation, and you have no more paper. You have to erase your notebook. Similarly, suppose you’ve completed a quantum computation and you want to run another quantum circuit. You have to reset your qubits to a fresh, simple state.

Three methods suggest themselves. First, you can “uncompute,” reversing every quantum gate you performed.4 This strategy requires a long lifetime: the information imprinted on the qubits by a gate mustn’t leak into the environment before you’ve undone the gate. 

Second, you can do the quantum equivalent of wielding a Pink Pearl Paper Mate: you can rub the information out of your qubits, regardless of the circuit you just performed. Thermodynamicists inventively call this strategy erasure. It requires thermodynamic work, just as applying a Paper Mate to a notebook does. 
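
Erasure’s work cost has a famous floor, Landauer’s principle: resetting one (qu)bit at temperature T costs at least k_B T ln 2 of work. The paper’s accounting is more refined, but this back-of-the-envelope number sets the scale:

    # Landauer bound on the work to erase one (qu)bit at temperature T.
    import numpy as np

    k_B = 1.380649e-23                 # Boltzmann constant, J/K
    T = 300.0                          # room temperature, K
    work = k_B * T * np.log(2)
    print(f"{work:.3e} J per qubit")   # ~2.87e-21 J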

Third, you can combine the two: uncompute for as long as your qubits survive, then erase whatever remains.

Suppose your qubits have finite lifetimes. You can undo as many gates as you have time to. Then, you can erase the rest of the qubits, spending work. How does complexity—your ability to perform many gates—trade off with work? My collaborators and I quantified the tradeoff in terms of an entropy we invented because the world didn’t have enough types of entropy.5
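
As a deliberately crude cartoon of that tradeoff (my own illustration; the paper’s actual bound involves the entropy mentioned above, not this naive count): every qubit you manage to uncompute within the lifetime is a qubit you needn’t pay Landauer work to erase.

    # Cartoon tradeoff for resetting n qubits: uncompute what the qubit
    # lifetime allows, pay ~k_B*T*ln(2) of work per qubit left "dirty".
    import numpy as np

    k_B, T, n = 1.380649e-23, 300.0, 50

    def reset_work(n_uncomputed):
        dirty = n - n_uncomputed           # qubits still needing erasure
        return dirty * k_B * T * np.log(2)

    for u in (0, 25, 50):                  # none, half, all uncomputed
        print(f"uncomputed {u:2d}: work = {reset_work(u):.2e} J")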

Complexity trades off with work not only in qubit reset, but also in data compression and likely other tasks. Quantum complexity, my collaborators and I showed, deserves a seat at the great soda fountain of quantum thermodynamics.

The great soda fountain of quantum thermodynamics

…as quantum information science deserves a seat at the great soda fountain of physics. When I embarked upon my PhD, faculty members advised me to undertake not only quantum-information research, but also some “real physics,” such as condensed matter. The latter would help convince physics departments that I was worth their money when I applied for faculty positions. By today, the tables have turned. A condensed-matter theorist I know has wound up an electrical-engineering professor because he calculates entanglement entropies.

So enjoy our year, fellow quantum scientists. Party like it’s 1925. Burnish those qubits—I hope they achieve the lifetimes of your life.

1Ten points if you can guess who the friend is.

2Whose official title, I didn’t realize until now, is “Good riddance.” My conception of graduation rituals has just turned a somersault. 

3PR stands for Physical Review, the brand of the journals published by the APS. The APS may have intended for the X to evoke exceptional, but I like to think it stands for something more exotic-sounding, like ex vita discedo, tanquam ex hospitio, non tanquam ex domo.

4Don’t ask me about the notebook analogue of uncomputing a quantum state. Explaining it would require another blog post.

5For more entropies inspired by quantum complexity, see this preprint. You might recognize two of the authors from earlier Quantum Frontiers posts if you’re one of the three…no, not even the three diehard Quantum Frontiers readers will recall; but trust me, two of the authors have received nods on this blog before.


June 08, 2025

Tommaso DorigoWin A MSCA Post-Doctoral Fellowship!

Applications for MSCA Post-doctoral Fellowships are open, and will remain so until September 10 this year. That means that if you have less than 8 years of experience after your Ph.D., you can pair up with a research institute in Europe to present a research plan, and the European Commission may decide to fund it for two years (plus 6 months in industry in some cases).

In order for your application to have a chance to win funding, you need to: 
  1. have a great research topic in mind, 
  2. be ready to invest some time in writing a great application, and 
  3. pair up with an outstanding supervisor at a renowned research institute. 

read more

June 05, 2025