On my right sat a quantum engineer. She was facing a melanoma specialist who works at a medical school. Leftward of us sat a networks expert, a flatworm enthusiast, and a condensed-matter theorist.

Farther down sat a woman who slices up mouse brains.

Welcome to “Coherent Spins in Biology,” a conference that took place at the University of California, Los Angeles (UCLA) this past December. Two southern Californians organized the workshop: Clarice Aiello, who heads UCLA’s Quantum Biology Tech lab, and Thorsten Ritz, of the University of California, Irvine, who cofounded a branch of quantum biology.

Quantum biology served as the conference’s backdrop. According to conventional wisdom, quantum phenomena can’t influence biology significantly: Biological systems have high temperatures, many particles, and fluids. Quantum phenomena, such as entanglement (a relationship that quantum particles can share), die quickly under such conditions.

Yet perhaps some survive. Quantum biologists search for biological systems that might use quantum resources. Then, they model and measure the uses and resources. Three settings (at least) have held out promise during the past few decades: avian navigation, photosynthesis, and olfaction. You can read about them in this book, cowritten by a conference participant for the general public. I’ll give you a taste (or a possibly quantum smell?) by sketching the avian-navigation proposal, developed by Thorsten and colleagues.

Birds migrate southward during the autumn and northward during the spring. How do they know where to fly? At least partially by sensing the Earth’s magnetic field, which leads compass needles to point northward. How do birds sense the field?

Possibly with a protein called “cryptochrome.” A photon (a particle of light) could knock an electron out of one part of the protein and into another part. Each part would have one electron that lacked a partner. The electrons would share entanglement. One electron would interact with the Earth’s magnetic field differently than its partner, because its surroundings would differ. (Experts: The electrons would form a radical pair. One electron would neighbor different atoms than the other, so the electron would experience a different local magnetic field. The discrepancy would change the relative phase between the electrons’ spins.) The discrepancy could affect the rate at which the chemical system could undergo certain reactions. Which reactions occur could snowball into larger and larger effects, eventually signaling the brain about where the bird should fly.
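For the quantitatively inclined, the heart of the proposal fits in a few lines of numerical code. Below is my own toy sketch, not the researchers’ actual model: two entangled electron spins precess at slightly different frequencies (because each feels a different local field), so the pair oscillates between singlet and triplet character at the difference frequency. Reaction rates that depend on singlet vs. triplet character then inherit the field dependence. The frequencies are arbitrary illustrative numbers.

```python
import numpy as np

# Toy radical-pair model: two electron spins, each with its own effective
# Zeeman frequency, starting in an entangled singlet state.
w1, w2 = 1.00, 1.15  # arbitrary units: the two local precession frequencies

# Spin-z operator for one spin; Hamiltonian H = w1*(Sz x I) + w2*(I x Sz)
sz = np.diag([0.5, -0.5])
I2 = np.eye(2)
H = w1 * np.kron(sz, I2) + w2 * np.kron(I2, sz)

# Singlet state (|up,down> - |down,up>)/sqrt(2), basis {uu, ud, du, dd}
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

def singlet_probability(t):
    """Evolve the singlet under H for time t; return |<S|psi(t)>|^2."""
    eigvals, eigvecs = np.linalg.eigh(H)
    U = eigvecs @ np.diag(np.exp(-1j * eigvals * t)) @ eigvecs.conj().T
    psi = U @ singlet
    return abs(np.vdot(singlet, psi)) ** 2

# Analytically, P_S(t) = cos^2((w1 - w2) * t / 2): the singlet-triplet
# beat frequency is set by the *difference* between the local fields,
# which is what makes the chemistry sensitive to the external field.
for t in [0.0, 5.0, 10.0]:
    assert np.isclose(singlet_probability(t),
                      np.cos(0.5 * (w1 - w2) * t) ** 2)
```

The point of the exercise: only the field *difference* enters the beat frequency, which is why the two unpaired electrons must sit in different chemical environments for the compass to work.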

Quantum mechanics and life rank amongst the universe’s mysteries. How could a young researcher resist the combination? A postdoc warned me away, one lunchtime at the start of my PhD. Quantum biology had enjoyed attention several years earlier, he said, but noise had obscured the experimental data. Controversy marred the field.

I ate lunch with that postdoc in 2013. Interest in quantum biology is reviving, as evidenced by the conference. Two reasons suggested themselves: new technologies and new research avenues. For example, Thorsten described the disabling and deletion of genes that code for cryptochrome. Such studies require years’ more work but might illuminate whether cryptochrome affects navigation.

The keynote speaker, Harvard’s Misha Lukin, illustrated new technologies and new research avenues. Misha’s lab has diamonds that contain quantum defects, which serve as artificial atoms. The defects sense tiny magnetic fields and temperatures. Misha’s group applies these quantum sensors to biology problems.

For example, different cells in an embryo divide at different times. Imagine reversing the order in which the cells divide. Would the reversal harm the organism? You could find out by manipulating the temperatures in different parts of the embryo: Temperature controls the rate at which cells divide.

Misha’s team injected nanoscale diamonds into a worm embryo. (See this paper for a related study.) The diamonds reported the temperature at various points in the worm. This information guided experimentalists who heated the embryo with lasers.

The manipulated embryos grew into fairly normal adults. But their cells, and their descendants’ cells, cycled through the stages of life slowly. This study exemplified, to me, one of the most meaningful opportunities for quantum physicists interested in biology: to develop technologies and analyses that can answer biology questions.

I mentioned, in an earlier blog post, another avenue emerging in quantum biology: Physicist Matthew Fisher proposed a mechanism by which entanglement might enhance coordinated neuron firing. My collaborator Elizabeth Crosson and I analyzed how the molecules in Matthew’s proposal—Posner clusters—could process quantum information. The field of Posner quantum biology had a population of about two, when Elizabeth and I entered, and I wondered whether anyone would join us.

The conference helped resolve my uncertainty. Three speakers (including me) presented work based on Matthew’s; two other participants were tilling the Posner soil; and another speaker mentioned Matthew’s proposal. The other two Posner talks related data from three experiments. The experimentalists haven’t finished their papers, so I won’t share details. But stay tuned.

Clarice and Thorsten’s conference reminded me of a conference I’d participated in at the end of my PhD: Last month, I moonlighted as a quantum biologist. In 2017, I moonlighted as a quantum-gravity theorist. Two years earlier, I’d been dreaming about black holes and space-time. At UCLA, I was finishing the first paper I’ve coauthored with biophysicists. What a toolkit quantum information theory and thermodynamics provide, that they can unite such disparate fields.

The contrast—on top of what I learned at UCLA—filled my mind for weeks. And reminded me of the description of asexual reproduction that we heard from the conference’s flatworm enthusiast. According to Western Michigan University’s Wendy Beane, a flatworm “glues its butt down, pops its head off, and grows a new one. Y’know. As one does.”

I hope I never flinch from popping my head off and growing a new one—on my quantum-information-thermodynamics spine—whenever new science calls for figuring out.

With thanks to Clarice, Thorsten, and UCLA for their invitation and hospitality.

One of the most surprising results of the "Machine Learning for Jets" workshop (but really, a workshop for particle physics in general) that I attended in New York City two weeks ago was the outcome of a challenge that the organizers had proposed to the participants: find a hidden signal of some new physics process in a dataset otherwise made up of some physics background, when no information was given on the new physics, nor on the model of the background.

In statistical terms, this is a problem of anomaly detection.
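To make the setup concrete, here is a minimal bump-hunt sketch in that spirit: flag a localized excess over a smoothly falling spectrum without assuming a signal model. Everything below (the exponential "background," the narrow "resonance" near m = 3, the bin counts) is invented for illustration and has nothing to do with the actual challenge dataset.

```python
import numpy as np

rng = np.random.default_rng(1)

# Smoothly falling "background" spectrum in a mass-like variable,
# plus a hidden narrow "signal" resonance near m = 3.0.
bkg = rng.exponential(scale=1.0, size=100_000)
sig = rng.normal(loc=3.0, scale=0.05, size=500)
data = np.concatenate([bkg, sig])

counts, edges = np.histogram(data, bins=60, range=(0.0, 6.0))
centers = 0.5 * (edges[:-1] + edges[1:])

# Model-agnostic step: fit log(counts) with a straight line (i.e., assume
# only that the background falls smoothly/exponentially), then compute a
# naive per-bin significance (observed - expected) / sqrt(expected).
mask = counts > 0
coeffs = np.polyfit(centers[mask], np.log(counts[mask]), deg=1)
expected = np.exp(np.polyval(coeffs, centers))
z = (counts - expected) / np.sqrt(expected)

# The injected bump near m = 3 should stand out as the most significant bin.
print("most significant bin center:", centers[np.argmax(z)])
```

A real analysis would be far more careful (sidebands, trial factors, proper background modeling), but the logic is the same: look for where the data depart from smoothness, with no signal template in hand.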

Today I’m headed to the 50th World Economic Forum in Davos, where on Tuesday I’ll participate in a panel discussion on “The Quantum Potential” with Jeremy O’Brien of the quantum computing startup PsiQuantum, and will also host an ask-me-anything session about quantum computational supremacy and Google’s claim to have achieved it.

I’m well aware that this will be unlike any other conference I’ve ever attended: STOC or FOCS it ain’t. As one example, also speaking on Tuesday—although not conflicting with my QC sessions—will be a real-estate swindler and reality-TV star who’s somehow (alas) the current President of the United States. Yes, even while his impeachment trial in the Senate gets underway. Also speaking on Tuesday, a mere hour and a half after him, will be TIME’s Person of the Year, 17-year-old climate activist Greta Thunberg.

In short, this Davos is shaping up to be an epic showdown between two diametrically opposed visions for the future of life on Earth. And your humble blogger will be right there in the middle of it, to … uhh … explain how quantum computers can sample probability distributions that are classically intractable unless the polynomial hierarchy collapses to the third level. I feel appropriately sheepish.

Since the experience will be so unusual for me, I’m planning to “live-blog Davos”: I’ll be updating this post, all week, with any strange new things that I see or learn. As a sign of my devotion to you, my loyal readers, I’ll even clothespin my nose and attend Trump’s speech so I can write about it.

And Greta: on the off chance that you happen to read Shtetl-Optimized, let me treat you to a vegan lunch or dinner! I’d like to try to persuade you of just how essential nuclear power will be to a carbon-free future. Oh, and if it’s not too much trouble, I’d also like a selfie with you for this blog. (Alas, a friend pointed out to me that it would probably be easier to meet Trump: unlike Greta, he won’t be swarmed with thousands of fans!)

Anyway, check back here throughout the week for updates. And if you’re in Davos and would like to meet, please shoot me an email. And please use the comment section to give me your advice, suggestions, well-wishes, requests, or important messages for me to fail to deliver to the “Davoisie” who run the world.

So I’ve arrived in Klosters, a village in the Swiss Alps close to Davos where I’ll be staying. (All the hotels in Davos itself were booked by the time I checked.)

I’d braced myself for the challenge of navigating three different trains through the Alps not knowing German. In reality, it was like a hundred times easier than public transportation at home. Every train arrived at the exact right second at the exact platform that was listed, bearing the exact right number, and there were clear visible signs strategically placed at exactly the places where anyone could get confused. I’d entered Bizarro Opposite World. I’m surely one of the more absentminded people on earth, as well as one of the more neurotic about being judged by bystanders if I ever admit to being lost, and it was nothing.

Snow! Once a regular part of my life, now the first I’d seen in several years. Partly because I now live in Texas, but also because even when we take the kids back to Pennsylvania for ChanuChrismaNewYears, it no longer snows like it did when I was a kid. If you show my 2-year-old, Daniel, a picture of snow-covered wilderness, he calls it a “beach.” Daniel’s soon-to-be 7-year-old sister still remembers snow from Boston, but the memory is rapidly fading. I wonder for how many children of the 21st century snow will just be a thing from old books and movies, like typewriters or rotary phones.

The World Economic Forum starts tomorrow afternoon. In the meantime, though, I thought I’d give an update not on the WEF itself, but on the inflight movie that I watched on my way here.

I watched Rocketman, the recent biopic/hagiography about Elton John, though as I watched I found that I kept making comparisons between Elton John and Greta Thunberg.

On the surface, these two might not seem to have a great deal of similarity.

But I gathered that they had this in common: while still teenagers, they saw a chance and they seized it. And doing so involved taking inner turmoil and then successfully externalizing it to the whole planet. Making hundreds of millions of people feel the same emotions that they had felt. If I’m being painfully honest (how often am I not?), that’s something I’ve always wanted to achieve and haven’t.

Of course, when some of the most intense and distinctive emotions you’ve ever felt revolved around the discovery of quantum query complexity lower bounds … yeah, it might be tough to find more people than could fill a room to relive those emotional journeys with you. But a child’s joy at discovering numbers like Ackermann(100) (to say nothing of BB(100)), which are so incomprehensibly bigger than \( 9^{9^{9^{9^9}}} \) that I didn’t need to think twice about how many 9’s I put there? Or the exasperation at those who, yeah, totally get that quantum computers aren’t known to give exponential speedups for NP-complete problems, that’s a really important clarification coming from the theory side, but still, let’s continue to base our entire business or talk or article around the presupposition that quantum computers do give exponential speedups for NP-complete problems? Or even just the type of crush that comes with a ceaseless monologue about what an objectifying, misogynist pig you must be to experience it? Maybe I could someday make people vicariously experience and understand those emotions—if I could only find the right words.
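For anyone who wants to relive that childhood joy: the two-argument Ackermann–Péter function is trivial to define and hopeless to evaluate at arguments like 100. A small sketch, just to hint at the blow-up (the recursion limit bump is there because even tiny inputs recurse deeply):

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(100_000)

@lru_cache(maxsize=None)
def ackermann(m, n):
    """The Ackermann-Peter function: computable, but grows faster than
    any primitive-recursive function."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# A(1, n) = n + 2, A(2, n) = 2n + 3, A(3, n) = 2^(n+3) - 3 ... and
# A(4, 2) = 2^65536 - 3 already has 19,729 decimal digits.
print(ackermann(2, 3))   # 9
print(ackermann(3, 3))   # 61
```

Ackermann(100), let alone BB(100), is of course utterly beyond this or any other program.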

My point is, this is precisely what Greta did for the burgeoning emotion of existential terror about the Anthropocene—another emotion that’s characterized my life since childhood. Not that I ever figured out anything to do about it, with the exception of Gore/Nader vote-swapping. By the standards of existential terrors, I consider this terror to be extraordinarily well-grounded. If Steven Weinberg is scared, who among us has the right to be calm?

The obvious objection to Greta—why should anyone care what a histrionic teenager thinks about a complicated scientific field that thousands of people get PhDs in?—calls for a substantive answer. So here’s mine. Like many concerned citizens, I try to absorb some of the research on ocean warming or the collapse of ice sheets and the melting permafrost leading to even more warming or the collapse of ecosystems due to changes in rainfall or bushfires or climate migrations or whatever. And whenever I do, I’m reminded of Richard Feynman’s remark, during the investigation of the Challenger disaster, that maybe it wasn’t all that interesting for the commission to spend its time reconstructing the exact details of which system caused which other system to malfunction at which millisecond, after the Space Shuttle had already started exploding. The thing was hosed at that point.

Still, even after the 80s and 90s, there remained deep open questions about the eventual shape of the climate crisis, and foremost among them was: how do you get people to stop talking about this crisis in the language of intellectual hypotheticals and meaningless virtue-signalling gestures and “those crazy scientists, who knows what they’ll say tomorrow”? How does one get people to revert to a more ancient language, the one that was used to win WWII for example, which speaks of courage and duty and heroism and defiance in the jaws of death?

Greta’s origin story—the one where the autistic girl spends months so depressed over climate inaction that she can’t eat or leave her room, until finally, no longer able to bear the psychic burden, she ditches school and carries a handmade protest sign to the front of the Swedish parliament—is not merely a prerequisite to a real contribution. It is Greta’s real contribution (so far anyway), and by that I don’t mean to diminish it. The idea was “trivial,” yes, but only in the sense that the wheel, Arabic numerals, or “personal computers will be important” were trivial ideas. Greta modeled for the rest of the world how they, too, would probably feel about climate change were they able to sync up their lizard brains with their higher brains … and crucially, a substantial segment of the world was already primed to agree with her. But it needed to see one example of a successful sync between the science and the emotions appropriate to the science, as a crystal needs a seed.

The thesis of Rocketman is that Elton John’s great achievement was not only to invent a new character, but actually to become that character, since only by successfully fusing the two could he touch the emotions of the masses. In a similar way, the great accomplishment of Greta Thunberg’s short life has been to make herself into the human race’s first Greta Thunberg.

Happy 7th birthday to my daughter Lily! (No, I didn’t miss her birthday party. We did it on the 18th, right before I flew out.)

I think my goals for Davos have been downgraded from delivering a message of peace and nerd liberation to the world’s powerful, or even getting a selfie with Greta, to simply taking in a week in an environment that’s so alien to me.

Everything in Davos is based on a tiered system of badges, which determine which buildings you can get into to participate in the sessions. I have a white badge, the highest tier, which would’ve set me back around $71,000 had WEF not thankfully waived its fees for academics. I should mention that I’m also extremely underdressed compared to most of the people here, and that I spent much of my time today looking for free food. It turns out that there’s pretty copious and excellent free food, although the sponsors sometimes ask you to leave your business card before you take any. I don’t have a business card.

The above, for me, represents the true spirit of Davos: a conference at a Swiss ski resort that costs $71,000 to attend, held on behalf of the ideal of human equality.

But maybe I shouldn’t scoff. I learned today about a war between Greece and Turkey that was averted only because the heads of the two countries talked it over at Davos, so that’s cool. At the opening ceremony today, besides a beautiful orchestral rendition of “Ode to Joy,” there were a bunch of speeches about how Davos pioneered the entire concept of corporate social responsibility. I suppose the critics might say instead that Davos pioneered the concept of corporate whitewashing—as with the wall-sized posters that I saw this afternoon, wherein a financial services corporation showcased a diverse cast of people each above their preferred pronouns (he/him, she/her, they/them). Amazing how pronouns make everything woke and social-justicey! I imagine that the truth is somewhere between these visions. Just like the easiest way for NASA to fake a moon landing was actually to send humans to the moon, sometimes the easiest way to virtue-signal is actually to become more virtuous.

Tonight I went to a reception specifically for the academics at Davos. There, for the first time since my arrival, I saw people who I knew (Shafi Goldwasser, Neha Narula…), and met someone who I’d known by reputation (Brian Schmidt, who shared the Nobel Prize in Physics for the discovery of dark energy). But even the people who I didn’t know were clearly “my people,” with familiar nerdy mannerisms and interests, and in some cases even a thorough knowledge of SlateStarCodex references. Imagine visiting a foreign country where no one spoke your language, then suddenly stumbling on the first ones who did. I found it a hundred times easier than at the main conference to strike up conversations.

Oh yeah, quantum computing. This afternoon I hosted three roundtable discussions about quantum computing, which were fun and stress-free — I spent much more of my mental energy today figuring out the shuttle buses. If you’re a regular reader of this blog or my popular articles, or a watcher of my talks on YouTube, etc., then congratulations: you’ve gotten the same explanations of quantum computing for free that others may have paid $71,000 apiece to hear! Tomorrow are my two “real” quantum computing sessions, as well as the speeches by both the Donald and the Greta (the latter being the much hotter ticket). So it’s a big day, which I’ll tell you about after it’s happened. Stay tuned!

PsiQuantum’s Jeremy O’Brien and I did the Davos quantum computing panel this morning (moderated by Jennifer Schenker). You can watch our 45-minute panel here. For regular readers of this blog, the territory will be familiar, but I dunno, I hope someone enjoys it anyway!

I’m now in the Congress Hall, in a seat near the front, waiting for Trump to arrive. I will listen to the President of the United States and not attract the Secret Service’s attention by loudly booing, but I have no intention to stand or applaud either.

Alas, getting a seat at Greta’s talk is looking like it will be difficult or impossible.

I was struck by the long runup to Trump’s address: the President of Switzerland gave a searing speech about the existential threats of climate change and ecosystem destruction, and “the politicians in many nations who appeal to fear and bigotry”—never mentioning Trump by name but making clear that she despised the entire ideology of the man people had come to hear. I thought it was a nice touch. Then some technicians spent 15 minutes adjusting Trump’s podium, then nothing happened for 20 minutes as we all waited for a tardy Trump, then some traditional Swiss singers did a performance on stage (!), and finally Klaus Schwab, director of the WEF, gave Trump a brief and coldly cordial introduction, joking about the weather in Davos.

And … now Trump is finally speaking. Once he starts, I suddenly realize that I have no idea what new insight I expected from this. He’s giving his standard stump speech, America has regained its footing after the disaster of the previous administration, winning like it’s never won before, unemployment is the lowest in recorded history, blah blah blah. I estimate that less than half of the audience applauded Trump’s entrance; the rest sat in stony silence. Meanwhile, some people were passing out flyers to the audience documenting all the egregious errors in Trump’s economic statistics.

Given the small and childish nature of the remarks (“we’re the best! ain’t no one gonna push us around!”), it feels somehow right to be looking down at my phone, blogging, rather than giving my undivided attention to the President of the United States speaking 75 feet in front of me.

Ok, I admit I just looked up, when Trump mentioned America’s commitment to developing new technologies like “5G and quantum computing” (he slowly drew out the word “quantum”).

His whole delivery is strangely lethargic, as if he didn’t sleep well last night (I didn’t either).

Trump announced that the US would be joining the WEF’s “1 trillion trees” environmental initiative, garnering the only applause in his speech. But he then immediately pivoted to a denunciation of the “doomsayers and pessimists and socialists who want to control our lives and take away our liberty” (he presumably meant people worried about climate change).

Now, I kid you not, Trump is expanding on his “optimism” theme by going on and on about the architectural achievements of Renaissance Florence.

While I wasn’t able to get in to see Greta Thunberg in person, you can watch her (along with others) here. I learned that her name is pronounced “toon-berg.”

Having now listened to Greta’s remarks, I confess that I disagree with the content of what she says. She explicitly advocates a sort of purity-based carbon absolutism—demanding that companies and governments immediately implement, not merely net zero emissions (i.e. offsetting their emissions by paying to plant trees and so forth), but zero emissions period. Since she can’t possibly mean literally zero, I’ll interpret her to mean close to zero. Even so, it seems to me that the resulting economic upheavals would provoke a massive backlash against whoever tried to enforce such a policy. Greta also dismisses the idea of technological solutions to climate change, saying that we don’t have time to invent such solutions. But of course, some of the solutions already exist—a prime example being nuclear power. And if we no longer have time to nuclearize the world, then to a great extent, that’s the fault of the antinuclear activists—an unbelievable moral and strategic failure that may have doomed our civilization, and for which there’s never been a reckoning.

Despite all my disagreements, if Greta’s strident, uncompromising rhetoric helps push the world toward cutting emissions, then she’ll have to be counted as one of the greatest people who ever lived. Of course, another possibility is that the world’s leaders will applaud her and celebrate her moral courage, while not taking anything beyond token actions.

Alas, I’ve come down with a nasty cold (is there any other kind?). So I’m paring back my participation in the rest of Davos to the stuff that really interests me. The good news is that my quantum computing sessions are already finished!

This morning, as I sat in the lobby of the Congress Centre checking my email and blowing my nose, I noticed some guy playing a cello nearby. Dozens were gathered around him — so many that I could barely see the guy, only hear the music. After he was finished, I worked up the courage to ask someone what the fuss was about. Turns out that the guy was Yo-Yo Ma.

The Prince Regent of Liechtenstein was explaining to one of my quantum computing colleagues that Liechtenstein does not have much in the way of quantum.

Speaking of princes, I’m now at a cybersecurity session with Shafi Goldwasser and others, at which the attendance might be slightly depressed because it’s up against Prince Charles. That’s right: Davos is the conference where the heir apparent to the British throne speaks in a parallel session.

I’ve realized these past few days that I’m not very good at schmoozing with powerful people. On the other hand, it’s possible that my being bad at it is a sort of mental defense mechanism. The issue is that, the more I became a powerful “thought leader” who unironically used phrases like “Fourth Industrial Revolution” or “disruptive innovation,” the more I used business cards and LinkedIn to expand my network of contacts or checked my social media metrics … well, the less I’d be able to do the research that led to stuff like being invited here in the first place. I imagine that many Davos regulars started out as nerds like me, and that today, coming to Davos to talk about “disruptive innovation” is a fun kind of semi-retirement. If so, though, I’m not ready to retire just yet! I still want to do things that are new enough that they don’t need to be described using multiple synonyms for newness.

Apparently one of the hottest tickets at Davos is a post-Forum Shabbat dinner, which used to be frequented by Shimon Peres, Elie Wiesel, etc. Alas, not having known about it, I already planned my travel in a way that won’t let me attend it. I feel a little like the guy in this Onion article.

I had signed up for a session entitled What’s At Stake: The Arctic, featuring Al Gore. As I waited for them to start letting people in, I suddenly realized that Al Gore was standing right next to me. However, he was engrossed in conversation with a young woman, and even though I assumed she was just some random fan like I was, I didn’t work up the courage to interrupt them. Only once the panel had started, with the woman on it two seats from Gore, did I realize that she was Sanna Marin, the new Prime Minister of Finland (and at 34, the world’s second-youngest head of state).

You can watch the panel here. Briefly, the Arctic has lost about half of its ice cover, not merely since preindustrial times but since a few decades ago. And this is not only a problem for polar bears. It’s increasing the earth’s absorption of sunlight and hence significantly accelerating global warming, and it’s also screwing up weather patterns all across the northern hemisphere. Of course, the Siberian permafrost is also thawing and releasing greenhouse gases that are even worse than CO2, further accelerating the wonderful feedback loop of doom.

I thought that Gore gave a masterful performance. He was in total command of the facts—discoursing clearly and at length on the relative roles of CO2, SO2, and methane in the permafrost as well as the economics of oil extraction, less in the manner of a thundering (or ‘thunberging’?) prophet than in the manner of an academic savoring all the non-obvious twists as he explains something to a colleague—and his every response to the other panelists was completely on point.

In 2000, there was indeed a bifurcation of the universe, and we ended up in a freakishly horrible branch. Instead of something close to the best, most fact-driven US president one could conjure in one’s mind, we got something close to the worst, and then, after an 8-year interregnum just to lull us into complacency, we got something even worse than the worst.

The other panelists were good too. Gail Whiteman (the scientist) had the annoying tic of starting sentence after sentence with “the science says…,” but then did a good job of summarizing what the science does say about the melting of the Arctic and the permafrost.

Alas, rather than trying to talk to Gore, immediately after the session ended, I headed back to my hotel to go to sleep. Why? Partly because of my cold. But partly also because of an incident immediately before the panel. I was sitting in the front row, next to an empty seat, when a woman who wanted to occupy that seat hissed at me that I was “manspreading.”

If, on these narrow seats packed so tightly together that they were basically a bench, my left leg had strayed an inch over the line, I would’ve addressed the situation differently: for example, “oh hello, may I sit here?” (At which point I would’ve immediately squeezed in.) Amazingly, the woman didn’t seem to care that a different woman, the one to my right, kept her pocketbook and other items on the seat next to her throughout the panel, preventing anyone else from using the seat in what was otherwise a packed house. (Is that “womanspreading”?)

Anyway, the effect of her comment was to transform the way I related to the panel. I looked around at the audience and thought: “these activists, who came to hear a panel on climate change, are fighting for a better world. And in their minds, one of the main ways that the world will be better is that it won’t contain sexist, entitled ‘manspreaders’ like me.”

In case any SneerClubbers are reading, I should clarify that I recognize an element of the irrational in these thoughts. I’m simply reporting, truthfully, that they’re what bubbled up outside the arena of conscious control. But furthermore, I feel like the fact that my brain works this way might give me some insight into the psychology of Trump support that few Democrats share—so much so that I wonder if I could provide useful service as a Democratic political consultant!

I understand the mindset that howls: “better that every tree burn to the ground, every fish get trawled from the ocean, every coastal city get flooded out of existence, than that these sanctimonious hypocrites ‘on the right side of history,’ singing of their own universal compassion even as they build a utopia with no place for me in it, should get to enjoy even a second of smug self-satisfaction.” I hasten to add that I’ve learned how to override that mindset with a broader, better mindset: I can jump into the abyss, but I can also climb back out, and I can even look down at the abyss from above and report what’s there. It’s as if I’d captured some virulent strain of Ebola in a microbiology lab of the soul. And if nearly half of American voters (more in crucial swing states) have gotten infected with that Ebola strain, then maybe my lab work could have some broader interest.

I thought about Scott Minerd, the investor on the panel, who became a punching bag for the other panelists (except for Gore, a politician in a good sense, who went out of his way to find points of agreement). In his clumsy way, Minerd was making the same point that climate activists themselves correctly make: namely, that the oil companies need to be incentivized (for example, through a carbon tax) to leave reserves in the ground, that we can’t just trust them to do the noble thing and write off their own assets. But for some reason, Minerd presented himself as a greedy fat-cat, raining on the dreams of the hippies all around him for a carbon-free future, so then that’s how the other panelists duly treated him (except, again, for Gore).

But I looked at the audience, which was cheering attacks on Minerd, and the Ebola in my internal microbiology lab said: “the way these activists see Scott Minerd is not far from how they see Scott Aaronson. You’ll never be good enough for them. The people in this room might or might not succeed at saving the world, but in any case they don’t want your help.”

After all, what was the pinnacle of my contribution to saving the world? It was surely when I was 19, and created a website to defend the practice of NaderTrading (i.e., Ralph Nader supporters in swing states voting for Al Gore, while Gore supporters in safe states pledged to vote Nader on their behalf). Alas, we failed. We did help arrange a few thousand swaps, including a few hundred swaps in Florida, but it was 538 too few. We did too little, too late.

So what would I have talked to Gore about, anyway? Would I have reminded him of the central tragedy of his life, which was also a central tragedy of recent American history, just in order to babble, or brag, about a NaderTrading website that I made half a lifetime ago? Would I have made up a post-hoc rationalization for why I work on quantum computing, like that I hope it will lead to the discovery of new carbon-capture methods? Immediately after Gore’s eloquent brief for the survival of the Arctic and all life on earth, would I have asked him for an autograph or a selfie? No, better to just reflect on his words. At a crucial pivot point in history, Gore failed by a mere 538 votes, and I also failed to prevent the failure. But amazingly, Gore never gave up—he just kept on fighting for what he knew civilization needed to do—and yesterday I sat a few feet away while he explained why the rest of us shouldn’t give up either. And he’s right about this—if not in the sense of the outlook being especially hopeful or encouraging right now, then surely in the sense of which attitude is the useful one to adopt. And my attitude, which you might call “Many-Worlds-inflected despair,” might be epistemically sound but it definitely wasn’t useful. What further clarifications did I need?

I attended a panel discussion on quantum computing hosted by IBM. The participants were Thomas Friedman (the New York Times columnist), Arvind Krishna (a senior Vice President at IBM), Raoul Klingner (director of a European research organization), and Alison Snyder (the managing editor of Axios magazine). There were about 100 people in the audience, more than at all of my Davos quantum computing sessions combined. I sat right in front, although I don’t think anyone on the panel recognized me.

Ginni Rometty, the CEO of IBM, gave an introduction. She said that quantum will change the world by speeding up supply-chain and other optimization problems. I assume she was talking about the Grover speedup? She also said that IBM is committed to delivering value for its customers, rather than “things you can do in two seconds that are not commercially valid” (I assume she meant Google’s supremacy experiment). She asked for a show of hands of who knows absolutely nothing about the science behind quantum computing. She then quipped, “well, that’s all of you!” She may have missed two hands that hadn’t gone up (both belonging to the same person).

I accepted an invitation to this session firstly for the free lunch (which turned out to be delicious), and secondly because I was truly, genuinely curious to hear what Thomas Friedman, many of whose columns I’ve liked, had to teach me about quantum computing. The answer turns out to be this: in his travels around the world over the past 6 years, Friedman has witnessed firsthand how the old dichotomy between right-wing parties and left-wing parties is breaking down everywhere (I assume he means, as both sides get taken over by populist movements?). And this is just like how a qubit breaks down the binary dichotomy between 0’s and 1’s! Also, the way a quantum computer can be in multiple states at once, is like how the US now has to be in multiple states at once in its relationship with China.

Friedman opened his remarks by joking about how he never took a single physics course, and had no idea why he was on a quantum computing panel at all. He quickly added, though, that he toured IBM’s QC labs, where he found IBM’s leaders to be wonderful explainers of what it all means.

I’ll note that Friedman, the politics and Middle East affairs writer — not the two panelists serving the role of quantum experts — was the only one who mentioned, even in passing, the idea that the advantage of QCs depends on something called “constructive interference.”

Krishna, the IBM Vice President, explained why IBM rejects the entire concept of “quantum supremacy”: because it’s an irrelevant curiosity, and creating value for customers in the marketplace (for example by solving their supply-chain optimization problems) is the only test that matters. No one on the panel expressed a contrary view.

Later, Krishna explained why quantum computers will never replace classical computers: because if you stored your bank balance on a quantum computer, one day you’d have $1, the next day $1000, the day after that $1 again, and so forth! He also explained that, whereas current supercomputers use as much energy as is needed to power all of Davos to train machine-learning models, quantum computers would use less than the energy needed to power a single house. New algorithms do need to be designed to run neural networks quantumly, but fortunately that’s all being done as we speak.

I got the feeling that the businesspeople who came to this session felt like they got a lot more out of it than the businesspeople who came to my and Jeremy O’Brien’s session felt like they got out of ours. After all, this session got across some big real-world takeaways—e.g., that if you don’t quantum, your business will be left in the dust, stuck with a single value at a time rather than exploring all values in parallel, and IBM can help you rather than your competitors win the quantum race. It didn’t muddy the message with all the incomprehensible technicalities about how QCs only give exponential speedups for problems with special structure.

Later Update:

Tonight I went to a Davos reception hosted by the government of Canada. I’m not sure why exactly they invited me, although I have of course enjoyed a couple years of life “up north” (well, in Waterloo, so actually further south than a decent chunk of the US … you see that I do have a tiny speck of a Canadian in me?).

I didn’t recognize a single person at the reception. So I just ate the food, drank beer, and answered emails. But then a few people did introduce themselves (two who recognized me, one who didn’t). As they gathered around, they started asking me questions about quantum computing: is it true that QCs could crack the classically impossible Traveling Salesman Problem? That they try all possible answers in parallel? Are they going to go commercial in 2-5 years, or have they already?

It might have been the beer, but for some reason I decided to launch an all-out assault of truth bombs, one after the next, with what they might have considered a somewhat emotional delivery.

OK fine, it wasn’t the beer. That’s just who I am.

And then, improbably, I was a sort of localized “life of the party” — although possibly for the amusement / novelty value of my rant more than for the manifest truth of my assertions. One person afterward told me that it was by far the most useful conversation he’d had at Davos.

And I replied: I’m flattered by your surely inflated praise, but in truth I should also thank you. You caught me at a moment when I’d been thinking to myself that, if only I could make one or two people’s eyes light up with comprehension about the fallacy of a QC simply trying all possible answers in parallel and then magically picking the best one, or about the central role of amplitudes and interference, or about the “merely” quadratic nature of the Grover speedup, or about the specialized nature of the most dramatic known applications for QCs, or about the gap between where the experimentalists are now and what’s needed for error correction and hence true scalability, or about the fact that “quantum supremacy” is obviously not a sufficient condition for a QC to be useful, but it’s equally obviously a necessary condition, or about the fact that doing something “practical” with a QC is of very little interest unless the task in question is actually harder for classical computers, which is a question of great subtlety … I say, if I could make only two or four eyes light up with comprehension of these things, then on that basis alone I could declare that the whole trip to Davos was worth it.

And then one of the people hugged me … and that was the coolest thing that happened to me today.

I attended a second session with Al Gore, about the problem of the world filling up with plastic. I learned that the world’s plastic waste is set to double over the next 15-20 years, and that a superb solution—indeed, it seems like a crime that it hasn’t been implemented already—would be to set up garbage booms at the mouths of the few major rivers through which something like 80% of the ocean’s plastic waste arrives.

Anyway, still didn’t introduce myself.

I wrote before about how surprisingly clear and logical the trains to Davos were, even with multiple changes. Unfortunately God’s mercy on me didn’t last. All week, I kept getting lost in warren-like buildings with dozens of “secret passageways” (often literally behind unmarked doors) and few signs—not even exit signs. In one case I missed a tram that was the only way out from somewhere, because I arrived on the wrong side of the tram—and getting to the right side required entering a building and navigating another unmarked labyrinth, by which point the tram had already left. In another case, I wandered through a Davos hotel for almost an hour trying to find an exit, ricocheting like a pinball off person after person giving me conflicting directions, until I literally started ranting to a crowd: “holy f-ck, is this place some psychological torture labyrinth designed by Franz Kafka? Am I the only one? Is it clear to all of you? Please, WHERE IS THE F-CKING EXIT???”—at which point some local finally took pity and walked me through the maze. As I mentioned earlier, logistical issues like these made me about 5,000 times more anxious on this trip than the prospect of giving quantum computing talks to the world’s captains of industry. I don’t recall having had a nightmare about lecturing even once—but I’ve had never-ending nightmares about failing to show up to give a lecture because I’m wandering endlessly through an airport or a research center or whatever, always the only one who’s lost.

Define the Collatz map \( \mathrm{Col} \) on the natural numbers by setting \( \mathrm{Col}(N) := 3N+1 \) when \( N \) is odd and \( \mathrm{Col}(N) := N/2 \) when \( N \) is even, and let \( \mathrm{Col}_{\min}(N) \) denote the minimal element of the forward Collatz orbit \( N, \mathrm{Col}(N), \mathrm{Col}^2(N), \dots \) of \( N \). The notorious Collatz conjecture asserts that \( \mathrm{Col}_{\min}(N) = 1 \) for all \( N \). Equivalently, if we define the backwards Collatz orbit \( (\mathrm{Col}^{-1})^* \{1\} \) to be all the natural numbers that encounter \( 1 \) in their forward Collatz orbit, then the Collatz conjecture asserts that \( (\mathrm{Col}^{-1})^* \{1\} \) is all of the natural numbers. As a partial result towards this latter statement, Krasikov and Lagarias in 2003 established the bound

\[ \# \{ N \le x : N \in (\mathrm{Col}^{-1})^* \{1\} \} \gg x^\gamma \quad (1) \]

for all \( x \ge 1 \) and \( \gamma = 0.84 \). (This improved upon previous values of \( \gamma \) obtained by Applegate and Lagarias in 1995, by Applegate and Lagarias in 1995 by a different method, by Wirsching in 1993, by Krasikov in 1989, and by Crandall in 1978.) This is still the largest value of \( \gamma \) for which (1) has been established. Of course, the Collatz conjecture would imply that we can take \( \gamma \) equal to \( 1 \), which is the assertion that a positive density set of natural numbers obeys the Collatz conjecture. This is not yet established, although the results in my previous paper do at least imply that a positive density set of natural numbers iterates to an (explicitly computable) bounded set, so in principle the \( \gamma = 1 \) case of (1) could now be verified by an (enormous) finite computation in which one verifies that every number in this explicit bounded set iterates to \( 1 \). In this post I would like to record a possible alternate route to this problem that depends on the distribution of a certain family of random variables that appeared in my previous paper, that I called Syracuse random variables.
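These definitions are easy to play with numerically. Here is a minimal sketch (the function name is mine, and a finite check is of course no substitute for the rigorous argument): it iterates the Collatz map and confirms that every N up to 10^4 lies in the backward Collatz orbit of 1.

```python
def collatz_min(N):
    """Iterate the Collatz map Col(N) = 3N+1 (N odd), N/2 (N even),
    returning the smallest value seen in the forward orbit of N."""
    smallest = N
    while N != 1:
        N = 3 * N + 1 if N % 2 == 1 else N // 2
        smallest = min(smallest, N)
    return smallest

# Count how many N <= x lie in the backward Collatz orbit of 1,
# i.e. eventually reach 1 (all of them, if the conjecture is true).
x = 10_000
hits = sum(1 for N in range(1, x + 1) if collatz_min(N) == 1)
print(hits, x)  # prints: 10000 10000
```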

Definition 1 (Syracuse random variables) For any natural number \( n \), a Syracuse random variable \( \mathrm{Syrac}(\mathbb{Z}/3^n\mathbb{Z}) \) on the cyclic group \( \mathbb{Z}/3^n\mathbb{Z} \) is defined as a random variable of the form

where \( a_1, \dots, a_n \) are independent copies of a geometric random variable of mean two. In (2) the arithmetic is performed in the ring \( \mathbb{Z}/3^n\mathbb{Z} \).

Thus for instance

and so forth. One could also view \( \mathrm{Syrac}(\mathbb{Z}/3^n\mathbb{Z}) \) as the mod \( 3^n \) reduction of a \( 3 \)-adic random variable \( \mathrm{Syrac}(\mathbb{Z}_3) \).

The probability density function of the Syracuse random variable can be explicitly computed by a recursive formula (see Lemma 1.12 of my previous paper). For instance, when \( n = 1 \), the density of \( \mathrm{Syrac}(\mathbb{Z}/3\mathbb{Z}) \) is equal to \( 0, 1/3, 2/3 \) at \( x = 0, 1, 2 \) respectively, and the densities for \( n = 2 \) on \( \mathbb{Z}/9\mathbb{Z} \) can be computed by the same recursion.

The relationship of these random variables to the Collatz problem can be explained as follows. Let \( 2\mathbb{N}+1 \) denote the odd natural numbers, and define the Syracuse map \( \mathrm{Syr} \colon 2\mathbb{N}+1 \to 2\mathbb{N}+1 \) by

\[ \mathrm{Syr}(N) := \frac{3N+1}{2^{\nu_2(3N+1)}}, \]

where the \( 2 \)-valuation \( \nu_2(M) \) is the number of times \( 2 \) divides \( M \). We can define the forward orbit and backward orbit of the Syracuse map as before. It is then not difficult to see that the Collatz conjecture is equivalent to the assertion \( \mathrm{Syr}_{\min}(N) = 1 \) for all \( N \in 2\mathbb{N}+1 \), and that the assertion (1) for a given \( \gamma \) is equivalent to the assertion

I prove this proposition below the fold. A variant of the argument shows that for any value of , (1), (3) holds whenever , where is an explicitly computable function with as . In principle, one could then improve the Krasikov-Lagarias result by getting a sufficiently good upper bound on , which is in principle achievable numerically (note for instance that Lemma 2 implies the bound for any , since for any ).
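As an empirical sanity check on the \( n = 1 \) density, one can apply the Syracuse map to a range of odd numbers and tally the residues mod 3. This is a rough sketch (helper names are mine): the residue 0 should never occur, and residues 1 and 2 should appear with frequencies near 1/3 and 2/3.

```python
def nu2(M):
    """2-valuation: the number of times 2 divides M."""
    k = 0
    while M % 2 == 0:
        M //= 2
        k += 1
    return k

def syr(N):
    """Syracuse map on odd N: divide 3N+1 by its full power of 2."""
    M = 3 * N + 1
    return M >> nu2(M)

counts = {0: 0, 1: 0, 2: 0}
for N in range(1, 200_001, 2):  # the odd numbers up to 2*10^5
    counts[syr(N) % 3] += 1
total = sum(counts.values())
# 3N+1 is congruent to 1 mod 3, so Syr(N) is never divisible by 3;
# residues 1 and 2 occur with frequencies close to 1/3 and 2/3.
print(counts[0], counts[1] / total, counts[2] / total)
```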

— 1. Proof of proposition —

Assume . Let be sufficiently small, and let be sufficiently large depending on . We first establish the following proposition, that shows that elements in a certain residue class have a lot of Syracuse preimages:

Proposition 4 There exists a residue class of with the property that for all integers in this class, and all non-negative integers , there exist natural numbers with

for an absolute constant (where we use the periodicity of in to define for by abuse of notation). Hence by the pigeonhole principle we can find a residue class not divisible by such that

and hence by the triangle inequality we have

for all in this residue class.

Henceforth is assumed to be an element of this residue class. For , we see from (8)

hence by the pigeonhole principle there exists (so in particular ) such that

so the number of summands here is at least . This establishes the base case .

Now suppose inductively that , and that the claim has already been proven for . By induction hypothesis, there exist natural numbers with

for some integers . We claim that these integers lie in distinct residue classes modulo where

Indeed, suppose that for two tuples , of the above form. Then

(where we now invert in the ring ), or equivalently

By (11), (7), all the summands on the left-hand side are natural numbers of size , hence the sum also has this size; similarly for the right-hand side. From the estimates of , we thus see that both sides are natural numbers between and , by hypothesis on . Thus we may remove the modular constraint and conclude that

and then a routine induction (see Lemma 6.2 of my paper) shows that . This establishes the claim.

As a corollary, we see that every residue class modulo contains

of the at most. Since there were at least tuples to begin with, we may therefore forbid up to residue classes modulo , and still have surviving tuples with the property that avoids all the forbidden classes.

Let be one of the tuples (10). By the hypothesis , we have

Let denote the set of tuples with the additional property

for all , then by the Chernoff bound we have

for some absolute constant . Thus, by the Markov inequality, by forbidding up to classes, we may ensure that

and hence

We thus have

where run over all tuples with being one of the previously surviving tuples, and . By (11) we may rearrange this a little as

By construction, we have

for any tuple in the above sum, hence by the pigeonhole principle we may find an integer

It is a routine matter to verify that all tuples in this sum lie in and obey the requirements (6), (7), closing the induction.

Corollary 5 For all in the residue class from the previous proposition, and all , we have

In particular, we have

as .

Proof: For every tuple in the previous proposition, we have

for some integer . As before, all these integers are distinct, and have magnitude

By construction we also have , so that . The number of tuples is at least

which can be computed from the properties of to be of size at least . This gives the first claim, and the second claim follows by taking to be the first integer for which .

To conclude the proof of Proposition 3, it thus suffices to show that

Lemma 6 Every residue class has a non-trivial intersection with .

Indeed, if we let be the residue class from the preceding propositions, and use this lemma to produce an element of that lies in this class, then from the inclusion we obtain (3) with , and then on sending to zero we obtain the claim.

Proof: An easy induction (based on first establishing that for all natural numbers ) shows that the powers of two modulo \( 3^n \) occupy every residue class not divisible by \( 3 \). From this we can locate an integer in of the form . Since , the claim follows.
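The number-theoretic fact used here, that the powers of two occupy every residue class not divisible by 3, is easy to verify directly for small moduli. A quick check (not part of the proof):

```python
def powers_of_two_mod(m):
    """Return the set of all powers of 2 modulo m."""
    seen, p = set(), 1
    while p not in seen:
        seen.add(p)
        p = (2 * p) % m
    return seen

# 2 generates every residue class coprime to 3 modulo 3^n:
for n in range(1, 8):
    m = 3 ** n
    units = {r for r in range(m) if r % 3 != 0}
    assert powers_of_two_mod(m) == units
print("powers of two fill the units mod 3^n for n = 1..7")
```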

We remark that the same argument in fact shows (assuming of course) that

If you click on this picture, you’ll see a zoomable image of the Milky Way with 84 million stars:

But stars contribute only a tiny fraction of the total entropy in the observable Universe. If it’s random information you want, look elsewhere!

First: what’s the ‘observable Universe’, exactly?

The further you look out into the Universe, the further you look back in time. You can’t see through the hot gas from 380,000 years after the Big Bang. That ‘wall of fire’ marks the limits of the observable Universe.

But as the Universe expands, the distant ancient stars and gas we see have moved even farther away, so they’re no longer observable. Thus, the so-called ‘observable Universe’ is really the ‘formerly observable Universe’. Its edge is 46.5 billion light years away now!

This is true even though the Universe is only 13.8 billion years old. A standard challenge in understanding general relativity is to figure out how this is possible, given that nothing can move faster than light.

What’s the total number of stars in the observable Universe? Estimates go up as telescopes improve. Right now people think there are between 100 and 400 billion stars in the Milky Way. They think there are between 170 billion and 2 trillion galaxies in the Universe.

In 2009, Chas Egan and Charles Lineweaver estimated the total entropy of all the stars in the observable Universe at 10^{81} bits. You should think of these as qubits: it’s the amount of information to describe the quantum state of everything in all these stars.

But the entropy of interstellar and intergalactic gas and dust is about ten times the entropy of stars! It’s about 10^{82} bits.

The entropy in all the photons in the Universe is even more! The Universe is full of radiation left over from the Big Bang, called the ‘cosmic microwave background radiation’. The photons in the observable Universe left over from the Big Bang have a total entropy of about 10^{90} bits.

The neutrinos from the Big Bang also carry about 10^{90} bits—a bit less than the photons. The gravitons carry much less, about 10^{88} bits. That’s because gravitons decoupled from other matter and radiation very early, and have been cooling ever since. The photons in the cosmic microwave background radiation, on the other hand, were still being formed by annihilating electron-positron pairs until about 10 seconds after the Big Bang. Thus the graviton radiation is expected to be cooler than the microwave background radiation: about 0.6 kelvin as compared to 2.7 kelvin.

Black holes have immensely more entropy than anything listed so far. Egan and Lineweaver estimate the entropy of stellar-mass black holes in the observable Universe at 10^{98} bits. This is connected to why black holes are so stable: the Second Law says entropy likes to increase.

But the entropy of black holes grows quadratically with mass! So black holes tend to merge and form bigger black holes — ultimately forming the ‘supermassive’ black holes at the centers of most galaxies. These dominate the entropy of the observable Universe: about 10^{104} bits.

Hawking predicted that black holes slowly radiate away their mass when they’re in a cold enough environment. But the Universe is much too hot for supermassive black holes to be losing mass now. Instead, they very slowly grow by eating the cosmic microwave background, even when they’re not eating stars, gas and dust.

So, only in the far future will the Universe cool down enough for large black holes to start slowly decaying via Hawking radiation. Entropy will continue to increase… going mainly into photons and gravitons! This process will take a very long time. Assuming nothing is falling into it and no unknown effects intervene, a solar-mass black hole takes about 10^{67} years to evaporate due to Hawking radiation — while a really big one, comparable to the mass of a galaxy, should take about 10^{99} years.
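For concreteness, here is a back-of-the-envelope version of these figures for a solar-mass black hole, using the standard formulas S/k = 4 pi G M^2 / (hbar c) for the Bekenstein-Hawking entropy and t = 5120 pi G^2 M^3 / (hbar c^4) for the evaporation time (rounded SI constants, order of magnitude only):

```python
import math

# Rounded SI constants (order-of-magnitude bookkeeping only):
G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
hbar  = 1.055e-34   # reduced Planck constant, J s
c     = 2.998e8     # speed of light, m/s
M_sun = 1.989e30    # solar mass, kg
year  = 3.156e7     # seconds per year

def bh_entropy_bits(M):
    """Bekenstein-Hawking entropy S/k = 4*pi*G*M^2/(hbar*c), in bits."""
    return 4 * math.pi * G * M**2 / (hbar * c) / math.log(2)

def bh_evaporation_years(M):
    """Hawking evaporation time t = 5120*pi*G^2*M^3/(hbar*c^4), in years."""
    return 5120 * math.pi * G**2 * M**3 / (hbar * c**4) / year

print(f"solar-mass BH entropy:  {bh_entropy_bits(M_sun):.1e} bits")   # ~1e77
print(f"solar-mass BH lifetime: {bh_evaporation_years(M_sun):.1e} years")  # ~1e67
```

Since the entropy scales as M^2, merging two equal black holes quadruples the entropy; that quadratic growth is why the supermassive holes end up dominating the total.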

If our current most popular ideas on dark energy are correct, the Universe will continue to expand exponentially. Thanks to this, there will be a cosmological event horizon surrounding each observer, which will radiate Hawking radiation at a temperature of roughly 10^{-30} kelvin.

In this scenario the Universe in the very far future will mainly consist of massless particles produced as Hawking radiation at this temperature: photons and gravitons. The entropy within the exponentially expanding ball of space that is today our ‘observable Universe’ will continue to increase exponentially… but more to the point, the entropy density will approach that of a gas of photons and gravitons in thermal equilibrium at 10^{-30} kelvin.

Of course, it’s quite likely that some new physics will turn up, between now and then, that changes the story! I hope so: this would be a rather dull ending to the Universe.

In physics we describe everything, from the smallest of particles to the largest of galaxies, with the language of mathematics. Why should that one field be able to describe so much? And why don’t we use something else?

The truth is, this is a trick question. Mathematics isn’t a language like English or French, where we can choose whichever translation we want. We use mathematics because it is, almost by definition, the best choice. That is because mathematics is the art of stating things clearly.

There’s an old joke: infinitely many mathematicians walk into a bar. The first orders a beer, the second orders half a beer, the third orders a quarter of a beer, and before the fourth can speak the bartender pours two beers and says, “know your limits.” It’s a joke about infinite series of numbers. You probably learned in high school that if you add up one plus a half plus a quarter and so on, you eventually get two. To be a bit more precise:

1 + 1/2 + 1/4 + 1/8 + … = 2

We say that this infinite sum limits to two.

But what does it actually mean for an infinite sum to limit to a number? What does it mean to sum infinitely many numbers, let alone infinitely many beers ordered by infinitely many mathematicians?

You’re asking these questions because I haven’t yet stated the problem clearly. Those of you who’ve learned a bit more mathematics (maybe in high school, maybe in college) will know another way of stating it.

You know how to sum a finite set of beers. You start with one beer, then one and a half, then one and three-quarters. Sum N beers, and you get 2 - 1/2^(N-1) beers.

What does it mean for the sum to limit to two?

Let’s say you just wanted to get close to two. You want to get within ε of two, where epsilon (ε) is the Greek letter we use for really small numbers.

For every ε you choose, no matter how small, I can pick a (finite!) N and get at least that close. That means that, with higher and higher N, I can get as close to two as I want.

As it turns out, that’s what it means for a sum to limit to two. It’s saying the same thing, but more clearly, without sneaking in confusing claims about infinity.
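The epsilon-and-N recipe above can be written as a tiny program (an illustrative sketch; the function names are mine):

```python
def partial_sum(N):
    """1 + 1/2 + ... + 1/2^(N-1): the total after N mathematicians order."""
    return sum(1 / 2**n for n in range(N))

def find_N(epsilon):
    """Smallest N whose partial sum is within epsilon of two."""
    N = 1
    while 2 - partial_sum(N) >= epsilon:
        N += 1
    return N

for epsilon in (0.5, 0.01, 1e-6):
    N = find_N(epsilon)
    print(epsilon, N, partial_sum(N))
```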

These sorts of proofs, with ε (and usually another variable, δ), form what mathematicians view as the foundations of calculus. They’re immortalized in story and song.

And they’re not even the clearest way of stating things! Go down that road, and you find more mathematics: definitions of numbers, foundations of logic, rabbit holes upon rabbit holes, all from the effort to state things clearly.

That’s why I’m not surprised that physicists use mathematics. We have to. We need clarity, if we want to understand the world. And mathematicians, they’re the people who spend their lives trying to state things clearly.

Patrick Meade pointed out some new papers about an experimental anomaly, starting with his own. The KOTO experiment at J-PARC in Japan (where they are also building a \( g-2 \) experiment) has seen 3 events when looking for the rare process \( K_L \rightarrow \pi^0 + \mathrm{invisible} \), when they expect a background of \( 0.05 \pm 0.02 \). Update: it was pointed out to me that the effective background rate is \( 0.1 \pm 0.02 \) as in Meade's paper, because the Standard Model rate is \( 0.049 \pm 0.01 \). For more details see the slides of the talk where the results are reported; there is currently no paper about the excess. This is interesting as the Standard Model process \( K_L \rightarrow \pi^0 \overline{\nu} \nu \) has a tiny branching ratio, two orders of magnitude too small to explain the number of events.

Assuming the anomaly is just statistics, the probability of observing three or more events would be of the order of one chance in \( 10,000 \) if we take the more generous estimate of the background. On the other hand, it is apparently only roughly two-sigma evidence for an anomalous \( K_L \rightarrow \pi^0 + \mathrm{invisible} \) signal. Moreover, the central value of the required signal is just above (but well within errors of) the Grossman-Nir bound, which says that if something generates \( K_L \rightarrow \pi^0 \overline{\nu} \nu \), it should also generate \( K^+ \rightarrow \pi^+ \overline{\nu} \nu\) in the ratio $$ \frac{\mathrm{Br} (K_L \rightarrow \pi^0 \overline{\nu} \nu)}{\mathrm{Br}(K^+ \rightarrow \pi^+ \overline{\nu} \nu)} = \sin^2 \theta_c$$ where \( \theta_c \) is the Cabibbo angle, provided that the interactions respect isospin. Since the charged process is not observed, the observed anomaly might be in slight tension with this bound.
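The quoted odds follow from a one-line Poisson tail computation, sketched here under the simplifying assumption that the background is exactly Poisson with the stated mean (ignoring the uncertainty on that estimate):

```python
import math

def prob_three_or_more(mu):
    """P(X >= 3) for X ~ Poisson(mu): one minus the k = 0, 1, 2 terms."""
    return 1 - math.exp(-mu) * (1 + mu + mu**2 / 2)

p_generous = prob_three_or_more(0.1)   # effective background of 0.1 events
p_nominal  = prob_three_or_more(0.05)  # nominal background of 0.05 events
print(f"{p_generous:.1e}")  # ~1.5e-4: order one chance in 10,000
print(f"{p_nominal:.1e}")
```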

So far I can find three papers seeking to explain this anomaly, through light scalar extensions of the Standard Model (with masses less than 180 MeV) and the inevitable two-Higgs doublet model. Since such scalars must couple to quarks/mesons they look a bit like axion-like particles and there are many astrophysical and beam-dump experiments that exclude large swathes of the potential parameter space, but this is quite exciting as, if the anomaly is confirmed, it should also be possible to easily look for it in (many) other experiments.

Here’s a striking coincidence. Last Friday I was preparing to deliver a lecture on special relativity to my undergrad General Relativity class with this Hobbity thought experiment (which helps one discover Lorentz-Fitzgerald contraction), when I heard that Christopher Tolkien (the boy the Hobbit was originally written for) had died. RIP. …

Café host Emily Riehl has just been awarded a $250,000 prize by her university!! Johns Hopkins gives one President’s Frontier Award every year across the whole university, and the 2020 one has gone to Emily. Up to now it’s usually been given to biological and medical researchers, but when Emily came along they had to make an exception and give it to a mathematician. The award has the goal of “supporting exceptional scholars … who are on the cusp of transforming their fields.”

Congratulations, Emily! Obviously you’re too modest to announce it yourself here, but someone had to.

You can read all about it here, including the delightful description of how the news was sprung on her:

When Riehl arrived at what she thought was a meeting with a department administrator, she says it was “a complete shock” to find JHU President Ronald J. Daniels, Provost Sunil Kumar, other university leaders, and many colleagues poised to surprise her.

One of the great things about a good university is the variety of excellent talks that you can see.

Yesterday we had our annual Chapman Lecture on Nanotechnology, in honor of Rice alum Richard Chapman, who turned down a first-round draft selection to the Detroit Lions to pursue a physics PhD and a career in engineering. This year's speaker was Zhenan Bao from Stanford, whom I know from back in my Bell Labs postdoc days. She spoke about her group's remarkable work on artificial skin: biocompatible, ultraflexible electronics including active matrices of touch sensors, transistors, etc. Here are a few representative papers that give you some idea of the kind of work that goes into this: Engineering semiconducting polymers to have robust elastic properties while retaining high charge mobilities; a way of combining conducting polymers (PEDOT) with hydrogels so that you can pattern them and then hydrate to produce super-soft devices; a full-on demonstration of artificial skin for sensing applications. Very impressive stuff.

Today, we had a colloquium by Gabe Aeppli of ETH and the Paul Scherrer Institute, talking about x-ray ptychographic imaging. Ptychography is a simple enough idea. Use a coherent source of radiation to illuminate some sample at some spot, and with a large-area detector, measure the diffraction pattern. Now scan the spot over the sample (including perhaps rotating the sample) and record all those diffraction patterns as well. With the right approach, you can combine all of those diffraction patterns and invert to get the spatial distribution of the scatterers (that is, the matter in the sample). Sounds reasonable, but these folks have taken it to the next level (pdf here). The video I'm embedding here is the old result from 2017. The 2019 paper I linked here is even more impressive, able to image, nondestructively, in 3D, individual circuit elements within a commercial integrated circuit at nanoscale resolution. It's clear that a long-term goal is to be able to image, non-destructively, the connectome of brains.

Continuing from part I: Well, I set the scene there, and so after that, a number of different ideas come together nicely. Let me list them:

[Figure: What “nearly” AdS_2 looks like via JT gravity. The boundary wiggles, but has fixed length 1/T.]

Exact solution of the SYK model (or dual JT model) in that low temperature limit I mentioned before gave an answer for the partition function $latex Z(\beta)$, by solving the Schwarzian dynamics for the wiggling boundary that I mentioned earlier. (The interior has a model of gravity on $latex AdS_2$, as I mentioned before, but as we're in 2D, there's no local dynamics associated with that part. But we'll see in a moment that there's very interesting stuff to take into account there too.) Anyway, the result for the Schwarzian dynamics can be written (see Stanford and Witten) in a way familiar from standard, say, statistical mechanics: $latex Z_0(\beta)=\int dE \rho_0(E) \exp(-\beta E)$, where $latex \rho_0(E)\sim\sinh(2\pi\sqrt{E})$ is the spectral density of the model. I now need to explain why everything has a subscript 0 in it in the last sentence.
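As a numerical sanity check on the Schwarzian result, one can compare the spectral integral Z_0(beta) = ∫ dE sinh(2π√E) exp(-βE) against its closed form π^{3/2} β^{-3/2} exp(π²/β), obtained by completing the square in a Gaussian integral. (A sketch assuming the normalisation where ρ_0(E) is exactly sinh(2π√E); conventions and prefactors differ between papers.)

```python
import math

def Z0_closed(beta):
    """Closed form: pi^(3/2) * beta^(-3/2) * exp(pi^2 / beta)."""
    return math.pi**1.5 * beta**-1.5 * math.exp(math.pi**2 / beta)

def Z0_numeric(beta, Emax=100.0, steps=200_000):
    """Trapezoidal quadrature of int_0^Emax sinh(2*pi*sqrt(E)) exp(-beta*E) dE."""
    h = Emax / steps
    total = 0.0
    for i in range(steps + 1):
        E = i * h
        f = math.sinh(2 * math.pi * math.sqrt(E)) * math.exp(-beta * E)
        total += f / 2 if i in (0, steps) else f
    return total * h

beta = 2.0
print(Z0_numeric(beta), Z0_closed(beta))  # agree to better than 0.1%
```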

On the other hand, the JT gravity model organises itself as a very interesting topological sum that is important if we are doing quantum gravity. First, recall that we're working in the "Euclidean" manner discussed before (i.e., time is a spatial parameter, and so 2D space can be tessellated in that nice Escher way). The point is that the Einstein-Hilbert action in 2D is a topological counting parameter (as mentioned before, there's no dynamics!). The thing that is being counted is the Euler characteristic of the space: $latex \chi=2-2g-b-c$, where $latex g,b,c$ are the number of handles, boundaries, and crosscaps the surface has, characterising its topology. Forget about crosscaps for now (that has to do with unorientable surfaces like a Möbius strip $latex (g=0,b=1,c=1)$ - we'll stick with orientable surfaces here). The full JT gravity action therefore has just the thing one needs to keep track of the dynamics of the quantum theory, and the partition function (or other quantities that you might wish to compute) can be written as a sum of contributions from every possible topology. So one can write the JT partition function as $latex Z(\beta)=\sum_{g=0}^\infty\hbar^{-(1-2g)}Z_g(\beta)$ where the parameter $latex \hbar$ weights different genus surfaces. In that sum the weight of a surface is $latex \hbar^{-\chi}$ and $latex b=1$ since there's a boundary of length $latex \beta$, you may recall.

The basic Schwarzian computation mentioned above therefore gives the leading piece of the partition function, i.e., $latex g=0$, and so that's why I put the subscript 0 on it at the outset. A big question then is what is the result for JT gravity computed on all those other topologies?!

(A somewhat more technical post follows.) Well, I think I promised to say a bit more about what I’ve been up to in the work that resulted in the paper I talked about in an earlier post. The title of my paper, “Non-perturbative JT gravity”, has JT (Jackiw-Teitelboim) gravity in …

In Part 1, I said how to push sheaves forward along a continuous map. Now let’s see how to pull them back! This will set up a pair of adjoint functors with nice properties, called a ‘geometric morphism’.

First recall how we push sheaves forward. I’ll say it more concisely this time. If you have a continuous map $f \colon X \to Y$ between topological spaces, the inverse image of any open set is open, so we get a map

$f^{-1} \colon \mathcal{O}(Y) \to \mathcal{O}(X)$

A functor between categories gives a functor between the opposite categories. I’ll use the same name for this, if you can stand it:

$f^{-1} \colon \mathcal{O}(Y)^{\mathrm{op}} \to \mathcal{O}(X)^{\mathrm{op}}$

A presheaf on $X$ is a functor

$F \colon \mathcal{O}(X)^{\mathrm{op}} \to \mathsf{Set}$

and we can compose this with $f^{-1}$ to get a presheaf on $Y$.

We call this presheaf on $Y$ the direct image or pushforward of $F$ along $f$, and we write it as $f_* F$. In a nutshell:

$f_* F = F \circ f^{-1}$

Even better, this direct image operation extends to a functor from the category of presheaves on $X$ to the category of presheaves on $Y$:

$f_* \colon \mathsf{PSh}(X) \to \mathsf{PSh}(Y)$

Better still, this functor sends sheaves to sheaves, so it restricts to a functor

$f_* \colon \mathsf{Sh}(X) \to \mathsf{Sh}(Y)$

This is how we push forward sheaves on $X$ to get sheaves on $Y$.

All this seems very natural and nice. But now let’s stop pushing and start pulling! This will give a functor going the other way:

$f^* \colon \mathsf{Sh}(Y) \to \mathsf{Sh}(X)$

The inverse image of a sheaf

At first it’s hard to see how to pull back sheaves, given how natural it was to push them forward. This is where our second picture of sheaves comes in handy!

Remember, a bundle over a topological space $Y$ is a topological space $E$ equipped with a continuous map

$p \colon E \to Y$

We say it’s an etale space over $Y$ if it has a special property: each point $e \in E$ has an open neighborhood such that $p$ restricted to this neighborhood is a homeomorphism from this neighborhood to an open subset of $Y$. In Part 2 we defined the category of bundles over $Y$, which is called $\mathsf{Top}/Y$, and the full subcategory of this whose objects are etale spaces, called $\mathsf{Etale}(Y)$. I also sketched how we get an equivalence of categories

$\mathsf{Sh}(Y) \simeq \mathsf{Etale}(Y)$

So, to pull back sheaves we can just convert them into etale spaces, pull those back, and then convert them back into sheaves!

First I’ll tell you how to pull back a bundle. I’ll assume you know the general concept of ‘pullbacks’, and what they’re like in the category of sets. The category of topological spaces and continuous maps has pullbacks, and they work a lot like they do in the category of sets. Say we’re given a bundle over $Y$, which is really just a continuous map

$p \colon E \to Y$

and a continuous map

$f \colon X \to Y$

Then we can form their pullback and get a bundle over $X$, called

$f^* p \colon f^* E \to X$

In class I’ll draw the pullback diagram, but it’s too much work to do here! As a set,

$f^* E = \{ (x, e) \in X \times E : f(x) = p(e) \}$

It’s a subset of $X \times E$, and we make it into a topological space using the subspace topology. The map

$f^* p \colon f^* E \to X$

does the obvious thing: it sends $(x, e)$ to $x$.
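In the category of sets, which the topological construction mirrors pointwise, the pullback can be sketched in a few lines (a toy illustration of my own; the names are not from the course notes):

```python
def pullback(A, B, f, p):
    """Set-level pullback of f: A -> C along p: B -> C: the subset of
    A x B where the two maps agree. For a bundle p: E -> Y pulled back
    along f: X -> Y, this is exactly {(x, e) : f(x) = p(e)}."""
    return {(a, b) for a in A for b in B if f(a) == p(b)}
```

For example, pulling back along "parity" maps pairs up the elements whose images agree.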

Puzzle. Prove that this construction really obeys the universal property for pullbacks in the category where objects are topological spaces and morphisms are continuous maps.

Puzzle. Show that this construction extends to a functor

$f^* \colon \mathsf{Top}/Y \to \mathsf{Top}/X$

That is, find a natural way to define the pullback of a morphism between bundles, and prove that this makes $f^*$ into a functor.

Puzzle. Prove that if $p \colon E \to Y$ is an etale space over $Y$ and $f \colon X \to Y$ is any continuous map, then $f^* E$ is an etale space over $X$.

Putting these puzzles together, it instantly follows that we can restrict the functor

$f^* \colon \mathsf{Top}/Y \to \mathsf{Top}/X$

to etale spaces and morphisms between those, and get a functor

$f^* \colon \mathsf{Etale}(Y) \to \mathsf{Etale}(X)$

Using the equivalence

$\mathsf{Sh}(Y) \simeq \mathsf{Etale}(Y)$ (and likewise for $X$)

we then get our desired functor

$f^* \colon \mathsf{Sh}(Y) \to \mathsf{Sh}(X)$

called the inverse image or pullback functor.

Slick! But what does the inverse image of a sheaf actually look like?

Suppose we have a sheaf $F$ on $Y$ and a continuous map $f \colon X \to Y$. We get an inverse image sheaf $f^* F$ on $X$. But what is it like, concretely?

That is, suppose we have an open set $U \subseteq X$. What does an element of $(f^* F)(U)$ amount to?

Unraveling the definitions, it must be a section over $U$ of the pullback along $f$ of the etale space corresponding to $F$.

A point in the etale space corresponding to $F$ is the germ at some point $y \in Y$ of some $s \in F(V)$, where $V$ is some open neighborhood of $y$.

Thus, our section is just a continuous function sending each point $x \in U$ to some germ of this sort at $y = f(x)$.

There is more to say: we could try to unravel the definitions a bit more, and describe $f^* F$ directly in terms of the sheaf $F$, without mentioning the corresponding etale space! But maybe one of you reading this can do that more gracefully than I can.

The adjunction between direct and inverse image functors

Once they have direct and inverse images in hand, Mac Lane and Moerdijk prove the following as Theorem 2 in Section II.9:

Theorem. For any continuous map $f \colon X \to Y$, the inverse image functor

$f^* \colon \mathsf{Sh}(Y) \to \mathsf{Sh}(X)$

is left adjoint to the direct image functor:

$f_* \colon \mathsf{Sh}(X) \to \mathsf{Sh}(Y)$

I won’t do it here, so please look at their proof if you’re curious! As you might expect, it involves hopping back and forth between our two pictures of sheaves: as presheaves with an extra property, and as bundles with an extra property — namely, etale spaces.

I don’t think there’s anything especially sneaky about their argument. They do however use this fact: if you take a sheaf, convert it into an etale space, and convert that back into a sheaf, you get back where you started, up to natural isomorphism. This isomorphism is just the unit that I mentioned in Part 3.

Remember, the functor that turns presheaves into bundles

$\Lambda \colon \mathsf{PSh}(X) \to \mathsf{Top}/X$

is left adjoint to the functor that turns bundles into presheaves:

$\Gamma \colon \mathsf{Top}/X \to \mathsf{PSh}(X)$

So, there’s a unit

$\eta \colon 1 \Rightarrow \Gamma \Lambda$

and a counit

$\epsilon \colon \Lambda \Gamma \Rightarrow 1$

The fact we need now is that whenever a presheaf $F$ is a sheaf, its unit

$\eta_F \colon F \to \Gamma \Lambda F$

is an isomorphism. This is part of Theorem 2 in Section II.6 in Mac Lane and Moerdijk.

And by the way, this fact has a partner! Whenever a bundle is an etale space, its counit is an isomorphism. So, converting an etale space into a sheaf and then back into an etale space also gets you back where you started, up to natural isomorphism. Note the favored directions of these morphisms: any sheaf maps to the sheaf of sections of its associated etale space (the unit), while the etale space of any bundle’s sheaf of sections maps to that bundle (the counit).

A probe station for testing silicon sensors. CMS will use silicon sensors in building prototypes of a highly granular sandwich calorimeter, the CMS HGC (High Granularity Calorimeter) upgrade for the High-Luminosity LHC. (Credit: N. Caraban Gonzalez)

A picture is worth a thousand words.

70% of all your sensory receptors are in your eyes. 50% of our brain is involved in visual processing.*

Last time I described two viewpoints on sheaves. In the first, a sheaf on a topological space $X$ is a special sort of presheaf

$F \colon \mathcal{O}(X)^{\mathrm{op}} \to \mathsf{Set}$

Namely, it’s one obeying the ‘sheaf condition’.

I explained this condition in Part 1, but here’s a slicker way to say it. Suppose $U$ is an open set covered by a collection of open sets $U_i \subseteq U$. Then we get this diagram:

$F(U) \to \prod_i F(U_i) \rightrightarrows \prod_{i, j} F(U_i \cap U_j)$

The first arrow comes from restricting elements of $F(U)$ to the smaller sets $U_i$. The other two arrows come from this: we can either restrict from $U_i$ to $U_i \cap U_j$, or restrict from $U_j$ to $U_i \cap U_j$.

The sheaf condition says that this diagram is an equalizer! This is just another way of saying that a family of elements $s_i \in F(U_i)$ are the restrictions of a unique $s \in F(U)$ iff their restrictions to the overlaps $U_i \cap U_j$ are equal.
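To see the sheaf condition in miniature: for the sheaf of arbitrary functions on a set, sections over a cover glue iff they agree on overlaps. Here is a toy sketch of my own (cover sets are finite, and sections are Python dicts), not from the course notes:

```python
def glue(sections):
    """Given sections over the sets of a cover, each a dict {point: value},
    glue them into a single section over the union. Raises ValueError if
    two sections disagree on an overlap -- the failure mode the equalizer
    diagram rules out; agreement on overlaps gives a unique gluing."""
    glued = {}
    for s in sections:
        for x, v in s.items():
            if x in glued and glued[x] != v:
                raise ValueError("sections disagree on an overlap")
            glued[x] = v
    return glued
```

Compatible sections glue to a unique section on the union; incompatible ones fail, exactly as the equalizer condition demands.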

In the second viewpoint, a sheaf is a bundle over $X$

$p \colon E \to X$

with the special property of being ‘etale’. Remember, this means that every point in $E$ has an open neighborhood that’s mapped homeomorphically by $p$ onto an open subset of $X$.

Last time I showed you how to change viewpoints. We got a functor that turns presheaves into bundles

$\Lambda \colon \mathsf{PSh}(X) \to \mathsf{Top}/X$

and a functor that turns bundles into presheaves:

$\Gamma \colon \mathsf{Top}/X \to \mathsf{PSh}(X)$

Moreover, I claimed that $\Lambda$ actually turns presheaves into etale spaces, and that $\Gamma$ actually turns bundles into sheaves. And I claimed that these functors restrict to an equivalence between the category of sheaves and the category of etale spaces:

$\mathsf{Sh}(X) \simeq \mathsf{Etale}(X)$

What can we do with these ideas? Right away we can do two things:

• We can describe ‘sheafification’: the process of improving a presheaf to get a sheaf.

• We can see how to push forward and pull back sheaves along a continuous map between spaces.

I’ll do the first now and the second next time. I’m finding it pleasant to break up these notes into small bite-sized pieces, shorter than my actual lectures.

Sheafification

To turn a presheaf into a sheaf, we just hit it with $\Lambda$ and then with $\Gamma$. In other words, we turn our presheaf into a bundle and then turn it back into a presheaf. It turns out the result is a sheaf!

Why? The reason is this:

Theorem. If we apply the functor

$\Gamma \colon \mathsf{Top}/X \to \mathsf{PSh}(X)$

to any object, the result is a sheaf on $X$.

(The objects of $\mathsf{Top}/X$ are, of course, the bundles over $X$.)

Proving this theorem was a puzzle last time; let me outline the solution. Remember that if we take a bundle

$p \colon E \to X$

and hit it with $\Gamma$, we get a presheaf called $\Gamma(E)$, where $\Gamma(E)(U)$ is the set of sections of $E$ over $U$, and we restrict sections in the usual way, by restricting functions. But you can check that if we have an open set $U$ covered by a bunch of open subsets $U_i$, and a bunch of sections $s_i$ on the $U_i$ that agree on the overlaps $U_i \cap U_j$, these sections piece together to define a unique section on all of $U$ that restricts to each of the $s_i$. So, $\Gamma(E)$ is a sheaf!

It follows that $\Gamma \Lambda$ sends presheaves to sheaves. Since sheaves form a full subcategory of presheaves, $\Gamma \Lambda$ automatically sends any morphism of presheaves to a morphism of sheaves, and we get the sheafification functor

$\Gamma \Lambda \colon \mathsf{PSh}(X) \to \mathsf{Sh}(X)$

To fully understand this, it’s good to actually take a presheaf and sheafify it. So take a presheaf:

$F \colon \mathcal{O}(X)^{\mathrm{op}} \to \mathsf{Set}$

When we hit this with $\Lambda$, we get a bundle

$p \colon \Lambda F \to X$

Remember: any element of $F(U)$, for any open neighborhood $U$ of a point $x$, gives a point over $x$ in $\Lambda F$; all points over $x$ show up this way; and two such elements determine the same point iff they become equal when we restrict them to some sufficiently small open neighborhood of $x$.

When we hit this bundle with $\Gamma$, we get a sheaf

$\Gamma \Lambda F$

where $\Gamma \Lambda F(U)$ is the set of sections of $\Lambda F$ over $U$. This is the sheafification of $F$.

So, if you think about it, you’ll see this: to define a section of the sheafification of $F$ over an open set $U$, you can just take a bunch of sections of $F$ over open sets covering $U$ that agree when restricted to the overlaps.

Puzzle. Prove the above claim. That is, give a procedure for constructing a section of $\Gamma \Lambda F$ over $U$, given open sets $U_i$ covering $U$ and sections $s_i \in F(U_i)$ that obey

$s_i |_{U_i \cap U_j} = s_j |_{U_i \cap U_j}$

The adjunction between presheaves and bundles

Here’s one nice consequence of the last puzzle. We can always use the trivial cover of $U$ by $U$ itself! Thus, any section of $F$ over $U$ gives a section of $\Gamma \Lambda F$ over $U$. This is the key to the following puzzle:

Puzzle. Show that for any presheaf $F$ there is a morphism of presheaves

$\eta_F \colon F \to \Gamma \Lambda F$

Show that these morphisms are natural in $F$, so they define a natural transformation

$\eta \colon 1 \Rightarrow \Gamma \Lambda$

Now, this is just the sort of thing we’d expect if $\Lambda$ were the left adjoint of $\Gamma$. Remember, when you have a left adjoint $\Lambda$ and a right adjoint $\Gamma$, you always have a ‘unit’

$\eta \colon 1 \Rightarrow \Gamma \Lambda$

and a ‘counit’

$\epsilon \colon \Lambda \Gamma \Rightarrow 1$

where the double arrows stand for natural transformations.

And indeed, in Part 2 I claimed that $\Lambda$ is the left adjoint of $\Gamma$. But I didn’t prove it. What we’re doing now could be part of the proof: in fact Mac Lane and Moerdijk prove it this way in Theorem 2 of Section II.6.

Let’s see if we can construct the counit

$\epsilon \colon \Lambda \Gamma \Rightarrow 1$

For this, I hand you a bundle

$p \colon E \to X$

You form its sheaf of sections $\Gamma E$, and then you form the etale space $\Lambda \Gamma E$ of that. Then you want to construct a morphism of bundles from your etale space to my original bundle.

Mac Lane and Moerdijk call the construction ‘inevitable’. Here’s how it works. We get points in $\Lambda \Gamma E$ over $x \in X$ from sections of $E$ over open sets containing $x$. But you can just take one of these sections and evaluate it at $x$, and get a point in $E$.

Puzzle. Show that this procedure gives a well-defined continuous map

$\epsilon_E \colon \Lambda \Gamma E \to E$

and that this is actually a morphism of bundles over $X$. Show that these morphisms define a natural transformation

$\epsilon \colon \Lambda \Gamma \Rightarrow 1$

An attempt to lay out a vision for research in the US beyond Science: The Endless Frontier. The evolving roles of the national academies are interesting, though I found the description of the future of research universities to be rather vague - I'm not sure growing universities to the size of Arizona State is the best way to provide high quality access to knowledge for a large population. It still feels to me like an eventual successful endpoint for online education could be natural language individualized tutoring ("Alexa, teach me multivariable calculus."), but we are still a long way from there.

Dan Ralph at Cornell has done a nice service to the community by making his lecture notes available on the arxiv. The intent is for these to serve as a supplement to a solid state course such as one out of Ashcroft and Mermin, bringing students up to date about Berry curvature and topology at a similar level to that famous text.

This preprint tries to understand an extremely early color photography process developed by Becquerel (the photovoltaic one, who was the father of the radioactivity Becquerel). It turns out that there are systematic changes in reflectivity spectra of the exposed Ag/AgCl films depending on the incident wavelength. Why the reflectivity changes that way remains a mystery to me after reading this.

On a related note, this led me to this PNAS paper about the role of plasmons in the daguerreotype process. Voila, nanophotonics in the 19th century.

This preprint (now out in Nature Nano) demonstrates incredibly sensitive measurements of torques on very rapidly rotating dielectric nanoparticles. This could be used to see vacuum rotational friction.

The inventors of chemically amplified photoresists have been awarded the Charles Stark Draper prize. Without that research, you probably would not have the computing device sitting in front of you....

Asgar Jamneshan and I have just uploaded to the arXiv our paper “An uncountable Moore-Schmidt theorem“. This paper revisits a classical theorem of Moore and Schmidt in measurable cohomology of measure-preserving systems. To state the theorem, let $(X, \mathcal{X}, \mu)$ be a probability space, and let $\mathrm{Aut}(X, \mathcal{X}, \mu)$ be the group of measure-preserving automorphisms of this space, that is to say the invertible bimeasurable maps $T \colon X \to X$ that preserve the measure: $\mu(T^{-1} E) = \mu(E)$ for all $E \in \mathcal{X}$. To avoid some ambiguity later in this post when we introduce abstract analogues of measure theory, we will refer to measurable maps as concrete measurable maps, and measurable spaces as concrete measurable spaces. (One could also call $(X, \mathcal{X}, \mu)$ a concrete probability space, but we will not need to do so here, as we will not be working explicitly with abstract probability spaces.)

Let $\Gamma$ be a discrete group. A (concrete) measure-preserving action of $\Gamma$ on $X$ is a group homomorphism $\gamma \mapsto T^\gamma$ from $\Gamma$ to $\mathrm{Aut}(X, \mathcal{X}, \mu)$; thus $T^1$ is the identity map and $T^{\gamma_1} \circ T^{\gamma_2} = T^{\gamma_1 \gamma_2}$ for all $\gamma_1, \gamma_2 \in \Gamma$. A large portion of ergodic theory is concerned with the study of such measure-preserving actions, especially in the classical case when $\Gamma$ is the integers $\mathbb{Z}$ (with the additive group law).

Let $K = (K, +)$ be a compact Hausdorff abelian group, which we can endow with the Borel $\sigma$-algebra $\mathcal{B}(K)$. A (concrete measurable) $K$-cocycle is a collection $\rho = (\rho_\gamma)_{\gamma \in \Gamma}$ of concrete measurable maps $\rho_\gamma \colon X \to K$ obeying the cocycle equation

$\rho_{\gamma_1 \gamma_2}(x) = \rho_{\gamma_1}(T^{\gamma_2} x) + \rho_{\gamma_2}(x)$ for $\mu$-almost every $x \in X$.

The countable Moore-Schmidt theorem then asserts, under suitable countability hypotheses (i)-(iii) on the data, that a $K$-valued concrete measurable cocycle is a concrete coboundary if and only if for each character $\hat k \in \hat K$, the $\mathbb{T}$-valued cocycles $\hat k \circ \rho = (\hat k \circ \rho_\gamma)_{\gamma \in \Gamma}$ are concrete coboundaries.
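As a finite toy sanity check (entirely my own, not from the paper): take the integers acting on a finite set via iterates of a permutation, with cocycles valued in the cyclic group Z/m. A coboundary built from a function F then automatically satisfies the additively written cocycle equation:

```python
def iterate(T, n, x):
    """Apply the permutation T (a list: T[x] is the image of x) n times."""
    for _ in range(n):
        x = T[x]
    return x

def coboundary(F, T, m):
    """The coboundary cocycle rho_n(x) = F(T^n x) - F(x) mod m, for the
    Z-action generated by the permutation T, valued in Z/m."""
    return lambda n, x: (F[iterate(T, n, x)] - F[x]) % m
```

One can check directly that rho_{n1+n2}(x) = rho_{n1}(T^{n2} x) + rho_{n2}(x) mod m, the cocycle equation, holds identically for such rho.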

The hypotheses (i), (ii), (iii) are saying in some sense that the data $\Gamma, X, K$ are not too “large”; in all three cases they are saying in some sense that the data are only “countably complicated”. For instance, (iii) is equivalent to $K$ being second countable, and (ii) is equivalent to $X$ being modeled by a complete separable metric space. It is because of this restriction that we refer to this result as a “countable” Moore-Schmidt theorem. This theorem is a useful tool in several other applications, such as the Host-Kra structure theorem for ergodic systems; I hope to return to these subsequent applications in a future post.

Let us very briefly sketch the main ideas of the proof of Theorem 1. Ignore for now issues of measurability, and pretend that something that holds almost everywhere in fact holds everywhere. The hard direction is to show that if each $\hat k \circ \rho$ is a coboundary, then so is $\rho$. By hypothesis, we then have an equation of the form

Now, it turns out that one cannot derive the equation (4) directly from the given information (2). However, the left-hand side of (2) is additive in $\hat k$, so the right-hand side must be also. Manipulating this fact, we eventually arrive at

In other words, we don’t get to show that the left-hand side of (4) vanishes, but we do at least get to show that it is $\Gamma$-invariant. Now let us assume for sake of argument that the action of $\Gamma$ is ergodic, which (ignoring issues about sets of measure zero) basically asserts that the only $\Gamma$-invariant functions are constant. So now we get a weaker version of (4), namely

and now the previous strategy works with replaced by . This concludes the sketch of proof of Theorem 1.

In making the above argument rigorous, the hypotheses (i)-(iii) are used in several places. For instance, to reduce to the ergodic case one relies on the ergodic decomposition, which requires the hypothesis (ii). Also, most of the above equations only hold outside of a set of measure zero, and the hypothesis (i) and the hypothesis (iii) (the latter equivalent to $\hat K$ being at most countable) are needed to avoid the problem that an uncountable union of sets of measure zero could have positive measure (or fail to be measurable at all).

My co-author Asgar Jamneshan and I are working on a long-term project to extend many results in ergodic theory (such as the aforementioned Host-Kra structure theorem) to “uncountable” settings in which hypotheses analogous to (i)-(iii) are omitted; thus we wish to consider actions on uncountable groups, on spaces that are not standard Borel, and cocycles taking values in groups that are not metrisable. Such uncountable contexts naturally arise when trying to apply ergodic theory techniques to combinatorial problems (such as the inverse conjecture for the Gowers norms), as one often relies on the ultraproduct construction (or something similar) to generate an ergodic theory translation of these problems, and these constructions usually give “uncountable” objects rather than “countable” ones. (For instance, the ultraproduct of finite groups is a hyperfinite group, which is usually uncountable.). This paper marks the first step in this project by extending the Moore-Schmidt theorem to the uncountable setting.

If one simply drops the hypotheses (i)-(iii) and tries to prove the Moore-Schmidt theorem, several serious difficulties arise. We have already mentioned the loss of the ergodic decomposition and the possibility that one has to control an uncountable union of null sets. But there is in fact a more basic problem when one deletes (iii): the addition operation $+ \colon K \times K \to K$, while still continuous, can fail to be measurable as a map from $(K \times K, \mathcal{B}(K) \otimes \mathcal{B}(K))$ to $(K, \mathcal{B}(K))$! Thus for instance the sum of two measurable functions need not remain measurable, which makes even the very definition of a measurable cocycle or measurable coboundary problematic (or at least unnatural). This phenomenon is known as the Nedoma pathology. A standard example arises when $K$ is the uncountable torus $\mathbb{T}^{\mathbb{R}}$, endowed with the product topology. Crucially, the Borel $\sigma$-algebra $\mathcal{B}(K)$ generated by this uncountable product is not the product of the factor Borel $\sigma$-algebras (the discrepancy ultimately arises from the fact that topologies permit uncountable unions, but $\sigma$-algebras do not); relating to this, the product $\sigma$-algebra $\mathcal{B}(K) \otimes \mathcal{B}(K)$ is not the same as the Borel $\sigma$-algebra $\mathcal{B}(K \times K)$, but is instead a strict sub-algebra. If the group operations on $K$ were measurable, then the diagonal set

$K^\Delta = \{ (k_1, k_2) \in K \times K : k_1 = k_2 \}$

would be measurable in $\mathcal{B}(K) \otimes \mathcal{B}(K)$, being the preimage of the Borel set $\{0\}$ under the difference map $(k_1, k_2) \mapsto k_1 - k_2$. But it is an easy exercise in manipulation of $\sigma$-algebras to show that if $(X, \mathcal{X})$ and $(Y, \mathcal{Y})$ are any two measurable spaces and $E$ is measurable in $\mathcal{X} \otimes \mathcal{Y}$, then the fibres $\{ y \in Y : (x, y) \in E \}$ of $E$ are contained in some countably generated subalgebra of $\mathcal{Y}$. Thus if $K^\Delta$ were $\mathcal{B}(K) \otimes \mathcal{B}(K)$-measurable, then all the points of $K$ would lie in a single countably generated $\sigma$-algebra. But the cardinality of such an algebra is at most $2^{\aleph_0}$, while the cardinality of $K$ is $2^{2^{\aleph_0}}$, and Cantor’s theorem then gives a contradiction.

To resolve this problem, we give $K$ a coarser $\sigma$-algebra than the Borel $\sigma$-algebra, namely the Baire $\sigma$-algebra $\mathcal{B}a(K)$, thus coarsening the measurable space structure on $K$ to a new measurable space $(K, \mathcal{B}a(K))$. In the case of compact Hausdorff abelian groups, $\mathcal{B}a(K)$ can be defined as the $\sigma$-algebra generated by the characters $\hat k \colon K \to \mathbb{T}$; for more general compact groups, one can define it as the $\sigma$-algebra generated by all continuous maps into metric spaces. This $\sigma$-algebra is equal to $\mathcal{B}(K)$ when $K$ is metrisable but can be smaller for other $K$. With this measurable structure, $K$ becomes a measurable group; it seems that once one leaves the metrisable world, $(K, \mathcal{B}a(K))$ is a superior (or at least equally good) space to work with than $(K, \mathcal{B}(K))$ for analysis, as it avoids the Nedoma pathology. (For instance, from Plancherel’s theorem, we see that if $\mu_K$ is the Haar probability measure on $K$, then every $\mathcal{B}(K)$-measurable set is equivalent modulo $\mu_K$-null sets to a $\mathcal{B}a(K)$-measurable set, so there is no damage to Plancherel caused by passing to the Baire $\sigma$-algebra.)

Passing to the Baire -algebra fixes the most severe problems with an uncountable Moore-Schmidt theorem, but one is still faced with an issue of having to potentially take an uncountable union of null sets. To avoid this sort of problem, we pass to the framework of abstract measure theory, in which we remove explicit mention of “points” and can easily delete all null sets at a very early stage of the formalism. In this setup, the category of concrete measurable spaces is replaced with the larger category of abstract measurable spaces, which we formally define as the opposite category of the category of -algebras (with Boolean algebra homomorphisms). Thus, we define an abstract measurable space to be an object of the form , where is an (abstract) -algebra and is a formal placeholder symbol that signifies use of the opposite category, and an abstract measurable map is an object of the form , where is a Boolean algebra homomorphism and is again used as a formal placeholder; we call the pullback map associated to . [UPDATE: It turns out that this definition of a measurable map led to technical issues. In a forthcoming revision of the paper we also impose the requirement that the abstract measurable map be -complete (i.e., it respects countable joins).] The composition of two abstract measurable maps , is defined by the formula , or equivalently .

Every concrete measurable space can be identified with an abstract counterpart , and similarly every concrete measurable map can be identified with an abstract counterpart , where is the pullback map . Thus the category of concrete measurable spaces can be viewed as a subcategory of the category of abstract measurable spaces. The advantage of working in the abstract setting is that it gives us access to more spaces that could not be directly defined in the concrete setting. Most importantly for us, we have a new abstract space, the opposite measure algebra of , defined as where is the ideal of null sets in . Informally, is the space with all the null sets removed; there is a canonical abstract embedding map , which allows one to convert any concrete measurable map into an abstract one . One can then define the notion of an abstract action, abstract cocycle, and abstract coboundary by replacing every occurrence of the category of concrete measurable spaces with their abstract counterparts, and replacing with the opposite measure algebra ; see the paper for details. Our main theorem is then

We feel that it is natural to stay within the abstract measure theory formalism whenever dealing with uncountable situations. However, it is still an interesting question as to when one can guarantee that the abstract objects constructed in this formalism are representable by concrete analogues. The basic questions in this regard are:

(i) Suppose one has an abstract measurable map into a concrete measurable space. Does there exist a representation of by a concrete measurable map ? Is it unique up to almost everywhere equivalence?

(ii) Suppose one has a concrete cocycle that is an abstract coboundary. When can it be represented by a concrete coboundary?

If does not separate points, or is not compact metrisable or Polish, there can be counterexamples to uniqueness. If is not compact or Polish, there can be counterexamples to existence.

If is a compact metric space or a Polish space, then one always has existence and uniqueness.

If is a compact Hausdorff abelian group, one always has existence.

If is a complete measure space, then one always has existence (from a theorem of Maharam).

If is the unit interval with the Borel -algebra and Lebesgue measure, then one has existence for all compact Hausdorff assuming the continuum hypothesis (from a theorem of von Neumann) but existence can fail under other extensions of ZFC (from a theorem of Shelah, using the method of forcing).

For more general , existence for all compact Hausdorff is equivalent to the existence of a lifting from the -algebra to (or, in the language of abstract measurable spaces, the existence of an abstract retraction from to ).

It is a long-standing open question (posed for instance by Fremlin) whether it is relatively consistent with ZFC that existence holds whenever is compact Hausdorff.

Our understanding of (ii) is much less complete:

If is metrisable, the answer is “always” (which among other things establishes the countable Moore-Schmidt theorem as a corollary of the uncountable one).

If is at most countable and is a complete measure space, then the answer is again “always”.

In view of the answers to (i), I would not be surprised if the full answer to (ii) was also sensitive to axioms of set theory. However, such set theoretic issues seem to be almost completely avoided if one sticks with the abstract formalism throughout; they only arise when trying to pass back and forth between the abstract and concrete categories.

Last time I started talking about the groupoid of ‘finite sets equipped with permutation’, $\mathsf{Perm}$. Remember:

an object $(X,\sigma)$ of $\mathsf{Perm}$ is a finite set $X$ with a bijection $\sigma \colon X \to X$;

a morphism $f \colon (X,\sigma) \to (X',\sigma')$ is a bijection $f \colon X \to X'$ such that $\sigma' = f \sigma f^{-1}$.

In other words, $\mathsf{Perm}$ is the groupoid of finite $\mathbb{Z}$-sets. It’s also equivalent to the groupoid of covering spaces of the circle having finitely many sheets!

Today I’d like to talk about another slightly bigger groupoid. It’s very pretty, and I think it will shed light on a puzzle we saw earlier: the mysterious connection between random permutations and Poisson distributions.

I’ll conclude with a question for homotopy theorists.

Last time we saw a formula for our groupoid:

$\displaystyle \mathsf{Perm} \simeq \sum_{y \in Y} \prod_{k \ge 1} \frac{\mathsf{B}(\mathbb{Z}/k)^{y(k)}}{y(k)!}$

where $Y$ is the set of Young diagrams, $y(k)$ is the number of columns of length $k$ in the Young diagram $y$, $\mathsf{B}(G)$ is the one-object groupoid corresponding to the group $G$, and for any category $\mathsf{C}$ I’m using

$\frac{\mathsf{C}^n}{n!}$

to stand for the ‘weak quotient’ of $\mathsf{C}^n$ by the permutation group $S_n$. (That is, instead of just modding out, we throw in isomorphisms coming from permutations. I explained this in more detail last time.)

$\displaystyle \mathsf{S}(\mathsf{C}) = \sum_{n \ge 0} \frac{\mathsf{C}^n}{n!}$

is the free symmetric monoidal category on $\mathsf{C}$. The formula for $\mathsf{Perm}$ looks vaguely similar! Indeed, the free symmetric monoidal category on $\mathsf{B}(\mathbb{Z}/k)$ is

$\displaystyle \mathsf{S}(\mathsf{B}(\mathbb{Z}/k)) = \sum_{n \ge 0} \frac{\mathsf{B}(\mathbb{Z}/k)^n}{n!}$

What’s going on is that $Y$, the set of Young diagrams, is really the set of functions $y \colon \mathbb{N}^+ \to \mathbb{N}$ that vanish except at finitely many points. Suppose we drop that finiteness condition! Then things get nicer.

Remember, in any situation where products distribute over sums, if we have a bunch of things $x_{i j}$ indexed by $i \in I$, $j \in J$, we can write the distributive law as

$\displaystyle \prod_{i \in I} \sum_{j \in J} x_{i j} = \sum_{f \colon I \to J} \prod_{i \in I} x_{i f(i)}$

Applying this to our formula for $\mathsf{Perm}$, with the finiteness condition on Young diagrams dropped, gives the groupoid $\mathsf{Poiss}$. It’s just the product of the free symmetric monoidal categories on all the $\mathsf{B}(\mathbb{Z}/k)$.

What is the category $\mathsf{S}(\mathsf{B}(\mathbb{Z}/k))$ actually like? It’s a groupoid. It has objects $1, x, x^{\otimes 2}, x^{\otimes 3}, \dots$ and so on. There are no morphisms between distinct objects. The automorphism group of $x^{\otimes n}$ is the semidirect product of $S_n$ and $\mathbb{Z}/k \times \cdots \times \mathbb{Z}/k$, where the symmetric group acts to permute the factors.

So, in words, $\mathsf{S}(\mathsf{B}(\mathbb{Z}/k))$ is the ‘free symmetric monoidal category on an object $x$ having $\mathbb{Z}/k$ as its symmetry group’.

This sounds abstract. But it’s equivalent to something concrete: the groupoid of finite sets that are equipped with a permutation all of whose cycles have length $k$. The object $x$ corresponds to a set with a permutation having a single cycle of length $k$. The tensor product corresponds to disjoint union. Thus, $x^{\otimes n}$ corresponds to a set with a permutation having $n$ disjoint cycles of length $k$.

So, we can describe an object of $\mathsf{Poiss}$ as an infinite list of finite sets, one for each $k = 1, 2, 3, \dots$, where the $k$th set is equipped with a permutation having only cycles of length $k$.

Taking the disjoint union of all these sets, we get a single set with a permutation on it. This set may be infinite, but all its cycles have finite length, and it has finitely many cycles of each length $k = 1, 2, 3, \dots$. So:

Theorem. The groupoid
$\mathsf{Poiss} \simeq \prod_{k =1}^\infty \mathsf{S}(\mathsf{B}(\mathbb{Z}/k))$
is equivalent to the groupoid of sets equipped with a permutation having only cycles of finite length, with finitely many cycles of each length.

It’s easy from this description to see the inclusion

$\mathsf{Perm} \hookrightarrow \mathsf{Poiss}$

It’s just the inclusion of the finite sets equipped with permutation!

I claim that the groupoid $\mathsf{Poiss}$ explains why the number of cycles of length $k$ in a randomly chosen permutation of an $n$-element set approaches a Poisson-distributed random variable with mean $1/k$ as $n \to \infty$. The fact that it’s a product also explains why these random variables become independent in the $n \to \infty$ limit.
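That claim is easy to probe numerically. Here is a quick Monte Carlo sketch of my own (note that the exact expected number of $k$-cycles is already $1/k$ for any $n \ge k$; the full Poisson behavior is the $n \to \infty$ limit):

```python
import random

def cycle_count(perm, k):
    """Number of cycles of length k in a permutation, where perm[i] is
    the image of i."""
    seen = [False] * len(perm)
    count = 0
    for start in range(len(perm)):
        length, j = 0, start
        while not seen[j]:
            seen[j] = True
            j = perm[j]
            length += 1
        if length == k:
            count += 1
    return count

def mean_k_cycles(n, k, trials=3000, seed=0):
    """Monte Carlo estimate of the expected number of k-cycles in a
    uniformly random permutation of n elements; should be close to 1/k."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        perm = list(range(n))
        rng.shuffle(perm)
        total += cycle_count(perm, k)
    return total / trials
```

For example, `mean_k_cycles(40, 2)` should hover near $1/2$.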

I’ll talk about this more later. But to get an idea of how it works, let’s compute the groupoid cardinality of $\mathsf{S}(\mathsf{B}(\mathbb{Z}/k))$. It’s

$\displaystyle \sum_{n = 0}^\infty \frac{1}{n! \, k^n} = e^{1/k}$

so we’re seeing that $e^{1/k}$ lurking here. But I need to think about this more before I can give a really convincing explanation.
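Numerically the cardinality is easy to verify (a sketch of my own; it uses the fact from above that the automorphism group of $x^{\otimes n}$, the semidirect product of $S_n$ and $(\mathbb{Z}/k)^n$, has order $n! \, k^n$, so each object contributes $1/(n! \, k^n)$ to the groupoid cardinality):

```python
import math

def groupoid_cardinality_S_BZk(k, terms=60):
    """Groupoid cardinality of S(B(Z/k)): each object x^(tensor n)
    contributes 1/|Aut| = 1/(n! * k**n). The sum over n converges
    rapidly to exp(1/k)."""
    return sum(1.0 / (math.factorial(n) * k ** n) for n in range(terms))
```

For $k = 1$ this recovers the familiar cardinality $e$ of the groupoid of finite sets.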

Let me conclude with a puzzle for homotopy theorists.

First, some background so others can enjoy the puzzle, or at least learn something. Homotopy theorists know how to take any category and turn it into a topological space: the geometric realization of its nerve. If we take a group $G$ and apply this trick to the one-object groupoid $\mathsf{B}(G)$, we get the Eilenberg–Mac Lane space $K(G,1)$. This is the connected space with $G$ as its fundamental group and all higher homotopy groups trivial. As long as we give $G$ the discrete topology, as we’ll do for $G = \mathbb{Z}/k$ here, $K(G,1)$ is also the classifying space for $G$-bundles, denoted $B G$. (This looks a lot like the groupoid $\mathsf{B}(G)$ — but that’s okay, because in homotopy theory a groupoid is considered only slightly different from the geometric realization of its nerve.)

Puzzle. Given a discrete group $G$, what’s a nice description of the geometric realization of the nerve of $\mathsf{S}(\mathsf{B}(G))$, the free symmetric monoidal category on the one-object groupoid corresponding to $G$? I’m especially interested in the case where $G$ is a finite cyclic group.

By the way, the classifying space $B(\mathbb{Z}/k)$ is a ‘lens space’: it’s formed by taking the unit sphere in $\mathbb{C}^\infty$ and quotienting by the action of the $k$th roots of unity. My first guess on the puzzle is to take the disjoint union

A point in here is a finite set of points in the lens space! Note that the construction here is different from the infinite symmetric product used in the Dold–Kan theorem, because we are not identifying an $n$-element set of points with an $(n+1)$-element set whose extra element is the basepoint.

It was a low-research day (job season), but I worked a bit with Lily Zhao (Yale) on interpolation methods and on comparing interpolations. This is for our hierarchical, non-parametric wavelength-calibration method.

On the flight home from #AAS235, I did some writing on a paper by Lily Zhao (Yale) about spectrograph (wavelength) calibration. I'm very excited about this project: we removed all dependence on polynomials and other kinds of strict functional forms. We went non-parametric. But of course this greatly increases the degrees of freedom of the fitting or interpolation of the calibration data. So when we do this, we also have to go hierarchical; we have to restrict the calibration freedom using the data. That is, we don't have any strict functional form for the calibration of the spectrograph, but we require that the calibration solution we find lives in the space of solutions that we have seen before. In short: if you increase the freedom by going non-parametric, you need to restrict that freedom by going hierarchical. (The results look incredible.)
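To illustrate the hierarchy idea, here is a minimal sketch of my own (not the actual method or code from the Zhao et al. project; the archive, detector size, and curves are all invented): an over-flexible, pixel-by-pixel calibration is tamed by requiring it to live in a low-dimensional basis learned from previously seen calibration solutions.

```python
import numpy as np

# Hypothetical sketch, NOT the actual Zhao et al. pipeline: restrict a
# non-parametric wavelength calibration to the span of past solutions.
rng = np.random.default_rng(0)

# Fake archive: 50 past calibration curves on a 200-pixel detector, each a
# smooth perturbation of a common mean wavelength solution (in Angstroms).
pixels = np.linspace(0.0, 1.0, 200)
mean_curve = 5000.0 + 1000.0 * pixels
archive = np.array([
    mean_curve
    + 0.5 * rng.normal() * np.sin(2.0 * np.pi * pixels)
    + 0.3 * rng.normal() * pixels ** 2
    for _ in range(50)
])

# Hierarchical step: learn a low-dimensional basis from the archive via SVD.
archive_mean = archive.mean(axis=0)
_, _, vt = np.linalg.svd(archive - archive_mean, full_matrices=False)
basis = vt[:5]  # keep the 5 leading principal components

def calibrate(noisy_curve):
    """Project a new, noisy calibration onto the space of past solutions."""
    coeffs = basis @ (noisy_curve - archive_mean)
    return archive_mean + basis.T @ coeffs

# A new epoch: true calibration curve plus per-pixel measurement noise.
truth = mean_curve + 0.4 * np.sin(2.0 * np.pi * pixels)
noisy = truth + 2.0 * rng.normal(size=pixels.size)
restored = calibrate(noisy)

# The restricted solution is far closer to the truth than the raw noisy one.
print(np.abs(restored - truth).mean() < np.abs(noisy - truth).mean())
```

The point mirrors the post: the per-pixel model nominally has 200 free parameters, but the hierarchical restriction leaves only 5, all learned from data rather than imposed by a functional form.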

Today was Hack Together Day #hackaas at #AAS235. We computed that this is the eighth winter AAS meeting to have a hack day, making it (AAS Hack Together Day) one of my scientific accomplishments of the decade. At the hack day, the main thing I did was hack on hack day itself, working with Jim Davenport (UW) to brainstorm things we can do to keep the event fresh, and keep us experimenting with it. I also had a great conversation with Brigitta Sipocz, Geert Barentsen, and others about ways we can use our hacking and design thinking to support a reduction in CO2 emissions by astronomers and academics in general. This is related to my conversations of yesterday.

But many great things happened in the Hack Together Day. Too many to list here. Look at the wrap-up slides to get a sense of the range and depth of the projects. So many people learned a lot and did a lot. I'm proud, which is a sin, apparently.

A highlight of today was a long meeting with Chris Lintott (Oxford) covering many subjects. But he told me about dot-dot-astronomy, which is a fully-remote reboot they are working on for the niche but extremely influential dot-astronomy meetings. The idea is to go fully remote—all participants remote—but then change the meeting expectations and structure to respect that. The idea is: Maybe not try to do remote meetings so they are just as good as face-to-face meetings, but to try to do remote meetings so they are something very different from face-to-face meetings. That seems like a great idea. Let's re-frame our goals. We have to do something about what we are doing to this planet.

Picture a scientist at work. You’re probably picturing an experiment, test tubes and beakers bubbling away. But not all scientists do experiments. Theoretical physicists work on the mathematical side of the field, making predictions and trying to understand how to make them better. So what does it look like when a theoretical physicist is working?

The first thing you might imagine is that we just sit and think. While that happens sometimes, we don’t actually do that very often. It’s better, and easier, to think by doing something.

Sometimes, this means working with pen and paper. This should be at least a little familiar to anyone who has done math homework. We’ll do short calculations and draw quick diagrams to test ideas, and do a more detailed, organized, “show your work” calculation if we’re trying to figure out something more complicated. Sometimes very short calculations are done on a blackboard instead; it can help us visualize what we’re doing.

Sometimes, we use computers instead. There are computer algebra packages, like Mathematica, Maple, or Sage, that let us do roughly what we would do on pen and paper, but with the speed and efficiency of a computer. Others program in more normal programming languages: C++, Python, even Fortran, making programs that can calculate whatever they are interested in.
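For a flavor of what that looks like, here's a small example of my own (not tied to any particular physicist's workflow), using the SymPy computer algebra library to do a calculation a student might otherwise do by hand: expanding the relativistic energy for small momentum.

```python
import sympy as sp

# Expand E = sqrt(p^2 c^2 + m^2 c^4) for small momentum p, recovering the
# rest energy plus the Newtonian kinetic energy.
p, m, c = sp.symbols('p m c', positive=True)
E = sp.sqrt(p**2 * c**2 + m**2 * c**4)
expansion = sp.series(E, p, 0, 4).removeO()
print(expansion)  # rest-energy term m*c**2 plus kinetic term p**2/(2*m)
```

A one-liner like this replaces a page of Taylor-expansion bookkeeping, which is exactly why these tools are so common in theory work.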

Sometimes we read. With most of our field’s papers available for free on arXiv.org, we spend time reading up on what our colleagues have done, trying to understand their work and use it to improve ours.

Sometimes we talk. A paper can only communicate so much, and sometimes it’s better to just walk down the hall and ask a question. Conversations are also a good way to quickly rule out bad ideas, and narrow down to the promising ones. Some people find it easier to think clearly about something if they talk to a colleague about it, even (sometimes especially) if the colleague isn’t understanding much.

And sometimes, of course, we do all the other stuff. We write up our papers, making the diagrams nice and the formulas clean. We teach students. We go to meetings. We write grant applications.

It’s been said that a theoretical physicist can work anywhere. That’s kind of true. Some places are more comfortable, and everyone has different preferences: a busy office, a quiet room, a cafe. But with pen and paper, a computer, and people to talk to, we can do quite a lot.

Another Update (Jan. 16): Yet another reason to be excited about this result—one that somehow hadn’t occurred to me—is that, as far as I know, it’s the first-ever fully convincing example of a non-relativizing computability result. See this comment for more.

Update: If you’re interested in the above topic, then you should probably stop reading this post right now, and switch to this better post by Thomas Vidick, one of the authors of the new breakthrough. (Or this by Boaz Barak or this by Lance Fortnow or this by Ken Regan.) (For background, also see Thomas Vidick’s excellent piece for the AMS Notices.)

Still here? Alright, alright…

Here’s the paper, which weighs in at 165 pages. The authors are Zhengfeng Ji, Anand Natarajan, my former postdoc Thomas Vidick, John Wright (who will be joining the CS faculty here at UT Austin this fall), and my wife Dana’s former student Henry Yuen. Rather than pretending that I can provide intelligent commentary on this opus in the space of a day, I’ll basically just open my comment section to discussion and quote the abstract:

We show that the class MIP* of languages that can be decided by a classical verifier interacting with multiple all-powerful quantum provers sharing entanglement is equal to the class RE of recursively enumerable languages. Our proof builds upon the quantum low-degree test of (Natarajan and Vidick, FOCS 2018) by integrating recent developments from (Natarajan and Wright, FOCS 2019) and combining them with the recursive compression framework of (Fitzsimons et al., STOC 2019). An immediate byproduct of our result is that there is an efficient reduction from the Halting Problem to the problem of deciding whether a two-player nonlocal game has entangled value 1 or at most 1/2. Using a known connection, undecidability of the entangled value implies a negative answer to Tsirelson’s problem: we show, by providing an explicit example, that the closure C_{qa} of the set of quantum tensor product correlations is strictly included in the set C_{qc} of quantum commuting correlations. Following work of (Fritz, Rev. Math. Phys. 2012) and (Junge et al., J. Math. Phys. 2011) our results provide a refutation of Connes’ embedding conjecture from the theory of von Neumann algebras.

To say it differently (in response to a commenter’s request), some of the major implications are as follows.

(1) There is a protocol by which two entangled provers can convince a polynomial-time verifier of the answer to any computable problem whatsoever (!!), or indeed that a given Turing machine halts.

(2) There is a two-prover game, analogous to the Bell/CHSH game, for which Alice and Bob can do markedly better with a literally infinite amount of entanglement than they can with any finite amount of entanglement.
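To see the classical baseline behind point (2), here is a small illustrative script (mine, not from the paper): brute-forcing all deterministic strategies in the CHSH game, where Alice and Bob receive uniform bits x and y and win when their outputs satisfy a XOR b = x AND y. Shared classical randomness can't beat the best deterministic pair, while the Tsirelson bound gives the entangled value.

```python
import itertools
import math

# Best classical success probability in the CHSH game, by brute force over the
# 16 deterministic strategy pairs (Alice's outputs for x=0,1; Bob's for y=0,1).
best = 0.0
for a in itertools.product([0, 1], repeat=2):
    for b in itertools.product([0, 1], repeat=2):
        wins = sum((a[x] ^ b[y]) == (x & y) for x in (0, 1) for y in (0, 1))
        best = max(best, wins / 4)

print(best)                        # 0.75, the classical limit
print(math.cos(math.pi / 8) ** 2)  # ~0.8536, the entangled (Tsirelson) value
```

Point (3) says that for general games, even approximating the analogue of that second number is as hard as the halting problem.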

(3) There is no algorithm even to approximate the entangled value of a two-prover game (i.e., the probability that Alice and Bob win the game, if they use the best possible strategy and as much entanglement as they like). Instead, this problem is equivalent to the halting problem.

(4) There are types of correlations between Alice and Bob that can be produced using infinite entanglement, but that can’t even be approximated using any finite amount of entanglement.

(5) The Connes embedding conjecture, a central conjecture from the theory of operator algebras dating back to the 1970s, is false.

Note that all of these implications—including the ones for pure math and the foundations of quantum physics—were obtained using tools that originated in theoretical computer science, specifically the study of interactive proof systems.

I can remember when the class MIP* was first defined and studied, back around 2003, and people made the point that we didn’t know any reasonable upper bound on the class’s power—not NEXP, not NEEEEXP, not even the set of all computable languages. Back then, the joke was how far our proof techniques were from what was self-evidently the truth. I don’t remember a single person who seriously contemplated that two entangled provers could convince a polynomial-time verifier that an arbitrary Turing machine halts.

Still, ever since Natarajan and Wright’s NEEXP in MIP* breakthrough last year, all of us in quantum computing theory knew that MIP*=RE was a live possibility—and all through the summer and fall, I heard many hints that such a breakthrough was imminent.

It’s worth pointing out that, with only classical correlations between the provers, MIP gives “merely” the power of NEXP (Nondeterministic Exponential Time), while with arbitrary non-signalling correlations between the provers, the so-called MIP_{ns} gives the power of EXP (Deterministic Exponential Time). So it’s particularly striking that quantum entanglement, which is “intermediate” between classical correlations and arbitrary non-signalling correlations, yields such wildly greater computational power than either of those two.

The usual proviso applies: when I’ve blogged excitedly about preprints with amazing new results, most have stood, but at least two ended up being retracted. Still, assuming this one stands (as I’m guessing it will), I regard it as easily one of the biggest complexity-theoretic (and indeed computability-theoretic!) surprises so far in this century. Huge congratulations to the authors on what looks to be a historic achievement.

In unrelated news, for anyone for whom the 165-page MIP* paper is too heavy going (really??), please enjoy this CNBC video on quantum computing, which features several clips of yours truly speaking in front of a fake UT tower.

In other unrelated news, I’m also excited about this preprint by Avishay Tal, which sets a new record for the largest known separation between quantum query complexity and classical randomized query complexity, making substantial progress toward proving a conjecture by me and Andris Ambainis from 2015. (Not the “Aaronson-Ambainis Conjecture,” a different conjecture.)

Scott’s preface: Imagine that every time you turned your blog over to a certain topic, you got denounced on Twitter and Reddit as a privileged douchebro, entitled STEMlord, counterrevolutionary bourgeoisie, etc. etc. The sane response would simply be to quit blogging about that topic. But there’s also an insane (or masochistic?) response: the response that says, “but if everyone like me stopped talking, we’d cede the field by default to the loudest, angriest voices on all sides—thereby giving those voices exactly what they wanted. To hell with that!”

A few weeks ago, while I was being attacked for sharing Steven Pinker’s guest post about NIPS vs. NeurIPS, I received a beautiful message of support from a PhD student in physical chemistry and quantum computing named Karen Morenz. Besides her strong words of encouragement, Karen wanted to share with me an essay she had written on Medium about why too many women leave STEM.

Karen’s essay, I found, marshaled data, logic, and her own experience in support of an insight that strikes me as true and important and underappreciated—one that dovetails with what I’ve heard from many other women in STEM fields, including my wife Dana. So I asked Karen for permission to reprint her essay on this blog, and she graciously agreed.

Briefly: anyone with a brain and a soul wants there to be many more women in STEM. Karen outlines a realistic way to achieve this shared goal. Crucially, Karen’s way is not about shaming male STEM nerds for their deep-seated misogyny, their arrogant mansplaining, or their gross, creepy, predatory sexual desires. Yes, you can go the shaming route (God knows it’s being tried). If you do, you’ll probably snare many guys who really do deserve to be shamed as creeps or misogynists, along with many more who don’t. Yet for all your efforts, Karen predicts, you’ll no more solve the original problem of too few women in STEM, than arresting the kulaks solved the problem of lifting the masses out of poverty.

For you still won’t have made a dent in the real issue: namely that, the way we’ve set things up, pursuing an academic STEM career demands fanatical devotion, to the exclusion of nearly everything else in life, between the ages of roughly 18 and 35. And as long as that’s true, Karen says, the majority of talented women are going to look at academic STEM, in light of all the other great options available to them, and say “no thanks.” Solving this problem might look like more money for maternity leave and childcare. It might also look like re-imagining the academic career trajectory itself, to make it easier to rejoin it after five or ten years away. Way back in 2006, I tried to make this point in a blog post called Nerdify the world, and the women will follow. I’m grateful to Karen for making it more cogently than I did.

Without further ado, here’s Karen’s essay. –SA

Is it really just sexism? An alternative argument for why women leave STEM

by Karen Morenz

Everyone knows that you’re not supposed to start your argument with ‘everyone knows,’ but in this case, I think we ought to make an exception:

Everyone knows that STEM (Science, Technology, Engineering and Mathematics) has a problem retaining women (see, for example, Jean, Payne, and Thompson 2015). We pour money into attracting girls and women to STEM fields. We pour money into recruiting women, training women, and addressing sexism, both overt and subconscious. In 2011, the United States spent nearly $3 billion in tax dollars on STEM education, of which roughly one third was spent supporting and encouraging underrepresented groups to enter STEM (including women). And yet, women are still leaving at alarming rates.

Alarming? Isn’t that a little, I don’t know, alarmist? Well, let’s look at some stats.

A recent report by the National Science Foundation (2011) found that women received 20.3% of the bachelor’s degrees and 18.6% of the PhD degrees in physics in 2008. In chemistry, women earned 49.95% of the bachelor’s degrees but only 36.1% of the doctoral degrees. By comparison, in biology women received 59.8% of the bachelor’s degrees and 50.6% of the doctoral degrees. A recent article in Chemical and Engineering News showed a chart based on a survey of life sciences workers by Liftstream and MassBio demonstrating how women are vastly underrepresented in science leadership despite earning degrees at similar rates, which I’ve copied below. The story is the same in academia, as you can see on the second chart — from comparable or even larger numbers of women at the student level, we move towards a significantly larger proportion of men at the more and more advanced stages of an academic career.

Although 74% of women in STEM report “loving their work,” more than half (56%, in fact) leave over the course of their career — largely at the “mid-level” point, when the loss of their talent is most costly, as they have just completed training and begun to contribute maximally to the workforce.

A study by Dr. Flaherty found that women who obtain faculty positions in astronomy spent on average 1 year less than their male counterparts between completing their PhD and obtaining their position — but he concluded that this is because women leave the field at a rate 3 to 4 times greater than men, and in particular, if they do not obtain a faculty position quickly, will simply move to another career. So, women and men are hired at about the same rate during the early years of their postdocs, but women stop applying to academic positions and drop out of the field as time goes on, pulling down the average time to hiring for women.

There are many more studies to this effect. At this point, the assertion that women leave STEM at an alarming rate after obtaining PhDs is nothing short of an established fact. In fact, it’s actually a problem across all academic disciplines, as you can see in this matching chart showing the same phenomenon in humanities, social sciences, and education. The phenomenon has been affectionately dubbed the “leaky pipeline.”

But hang on a second, maybe there just aren’t enough women qualified for the top levels of STEM? Maybe it’ll all get better in a few years if we just wait around doing nothing?

Nope, sorry. This study says that 41% of highly qualified STEM people are female. And also, it’s clear from the previous charts and stats that a significantly larger number of women are getting PhDs than going on to be professors, in comparison to their male counterparts. Dr. Laurie Glimcher, when she started her professorship at Harvard University in the early 1980s, remembers seeing very few women in leadership positions. “I thought, ‘Oh, this is really going to change dramatically,’ ” she says. But 30 years later, “it’s not where I expected it to be.” Her experiences are similar to those of other leading female faculty.

So what gives? Why are all the STEM women leaving?

It is widely believed that sexism is the leading problem. A quick google search of “sexism in STEM” will turn up a veritable cornucopia of articles to that effect. And indeed, around 60% of women report experiencing some form of sexism in the last year (Robnett 2016). So, that’s clearly not good.

And yet, if you ask leading women researchers like the 2018 Nobel Laureate in Physics, Professor Donna Strickland, or the Canada Research Chair in Advanced Functional Materials (Chemistry), Professor Eugenia Kumacheva, they say that sexism was not a barrier in their careers. Moreover, extensive research has shown that sexism has overall decreased since Professors Strickland and Kumacheva (for example) were starting their careers. Even more interestingly, Dr. Rachael Robnett showed that more mathematical fields such as Physics have a greater problem with sexism than less mathematical fields, such as Chemistry, a finding which rings true with the subjective experience of many women I know in Chemistry and Physics. However, as we saw above, women leave the field of Chemistry in greater proportions following their BSc than they leave Physics. On top of that, although 22% of women report experiencing sexual harassment at work, the proportion is the same among STEM and non-STEM careers, and yet women leave STEM careers at a much higher rate than non-STEM careers.

So, it seems that sexism cannot fully explain why women with STEM PhDs are leaving STEM. At the point when women have earned a PhD, for the most part they have already survived the worst of the sexism. They’ve already proven themselves to be generally thick-skinned and, as anyone with a PhD can attest, very stubborn in the face of overwhelming difficulties. Sexism is frustrating, and it can limit advancement, but it doesn’t fully explain why we have so many women obtaining PhDs in STEM, and then leaving. In fact, at least in the U of T chemistry department, faculty hires are directly proportional to the applicant pool — although the exact number of applicants is not made public, from public information we can see that approximately one in four interview invitees are women, and approximately one in four hires are women. Our hiring committees have received bias training, and it seems that it has been largely successful. That’s not to say that we’re done, but it’s time to start looking elsewhere to explain why there are so few women sticking around.

So why don’t more women apply?

Well, one truly brilliant researcher had the groundbreaking idea of asking women why they left the field. When you ask women why they left, the number one reason they cite is balancing work/life responsibilities — which as far as I can tell is a euphemism for family concerns.

The research is in on this. Women who stay in academia expect to marry later, and delay or completely forego having children, and if they do have children, plan to have fewer than their non-STEM counterparts (Sassler et al 2016, Owens 2012). Men in STEM have no such difference compared to their non-STEM counterparts; they marry and have children at about the same ages and rates as their non-STEM counterparts (Sassler et al 2016). Women leave STEM in droves in their early to mid thirties (Funk and Parker 2018) — the time when women’s fertility begins to decrease, and risks of childbirth complications begin to skyrocket for both mother and child. Men don’t see an effect on their fertility until their mid forties. Of the 56% of women who leave STEM, 50% wind up self-employed or using their training in a not-for-profit or in government, 30% leave to a non-STEM, more ‘family friendly’ career, and 20% leave to be stay-at-home moms (Ashcraft and Blithe 2002). Meanwhile, institutions with better childcare and maternity leave policies have twice (!) the number of female faculty in STEM (Troeger 2018). In analogy to the affectionately named “leaky pipeline,” the challenge of balancing motherhood and career has been titled the “maternal wall.”

To understand the so-called maternal wall better, let’s take a quick look at the sketch of a typical academic career.

For the sake of this exercise, let’s all pretend to be me. I’m a talented 25-year-old PhD candidate studying Physical Chemistry — I use laser spectroscopy to try to understand atypical energy transfer processes in innovative materials that I hope will one day be used to make vastly more efficient solar panels. I got my BSc in Chemistry and Mathematics at the age of 22, and have published 4 scientific papers in two different fields already (Astrophysics and Environmental Chemistry). I’ve got a big scholarship, and a lot of people supporting me to give me the best shot at an academic career — a career I dearly want. But I also want a family — maybe two or three kids. Here’s what I can expect if I pursue an academic career:

With any luck, 2–3 years from now I’ll graduate with a PhD, at the age of 27. Academics are expected to travel a lot, and to move a lot, especially in their 20s and early 30s — all of the key childbearing years. I’m planning to go on exchange next year, and then the year after that I’ll need to work hard to wrap up research, write a thesis, and travel to several conferences to showcase my work. After I finish my PhD, I’ll need to undertake one or two post doctoral fellowships, lasting one or two years each, probably in completely different places. During that time, I’ll start to apply for professorships. In order to do this, I’ll travel around to conferences to advertise my work and to meet important leaders in my field, and then, if I am invited for interviews, I’ll travel around to different universities for two or three days at a time to undertake these interviews. This usually occurs in a person’s early 30s — our helpful astronomy guy, Dr. Flaherty, found the average time to hiring was 5 years, so let’s say I’m 32 at this point. If offered a position, I’ll spend the next year or two renovating and building a lab, buying equipment, recruiting talented graduate students, and designing and teaching courses. People work really, really hard during this time and have essentially no leisure time. Now I’m 34. Within usually 5 years I’ll need to apply for tenure. This means that by the time I’m 36, I’ll need to be making significant contributions in my field, and then in the final year before applying for tenure, I will once more need to travel to many conferences to promote my work, in order to secure tenure — if I fail to do so, my position at the university would probably be terminated. Although many universities offer a “tenure extension” in cases where an assistant professor has had a child, this does not solve all of the problems. 
Taking a year off during that critical 5 or 6 year period often means that the research “goes bad” — students flounder, projects that were promising get “scooped” by competitors at other institutions, and sometimes, in biology and chemistry especially, experiments literally go bad. You wind up needing to rebuild much more than just a year’s worth of effort.

At no point during this time do I appear stable enough, career-wise, to take even six months off to be pregnant and care for a newborn. Hypothetical future-me is travelling around, or even moving, conducting and promoting my own independent research and training students. As you’re likely aware, very pregnant people and newborns don’t travel well. And academia has a very individualistic and meritocratic culture. Starting at the graduate level, huge emphasis is placed on independent research and independent contributions, rather than on team efforts. This feature of academia is both a blessing and a curse. The individualistic culture means that people have the independence and the freedom to pursue whatever research interests them — in fact this is the main draw for me personally. But it also means that there is often no one to fall back on when you need extra support, and because of biological constraints, this winds up impacting women more than men.

At this point, I need to make sure that you’re aware of some basics of female reproductive biology. According to Wikipedia, the unquestionable source of all reliable knowledge, at age 25, my risk of conceiving a baby with chromosomal abnormalities (including Down’s Syndrome) is about 1 in 1400. By 35, that risk more than quadruples to 1 in 340. At 30, I have a 75% chance of a successful birth in one year, but by 35 it has dropped to 66%, and by 40 it’s down to 44%. Meanwhile, 87 to 94% of women report at least 1 health problem immediately after birth, and 1.5% of mothers have a severe health problem, while 31% have long-term persistent health problems as a result of pregnancy (defined as lasting more than six months after delivery). Furthermore, mothers over the age of 35 are at higher risk for pregnancy complications like preterm delivery, hypertension, superimposed preeclampsia, and severe preeclampsia (Cavazos-Rehg et al 2016). Because of factors like these, pregnancies in women over 35 are known as “geriatric pregnancies” due to the drastically increased risk of complications. This tight timeline for births is often called the “biological clock” — if women want a family, they basically need to start before 35. Now, that’s not to say it’s impossible to have a child later on, and in fact some studies show that it has positive impacts on the child’s mental health. But it is riskier.

So, women with a PhD in STEM know that they have the capability to make interesting contributions to STEM, and to make plenty of money doing it. They usually marry someone who also has or expects to make a high salary as well. But this isn’t the only consideration. Such highly educated women are usually aware of the biological clock and the risks associated with pregnancy, and are confident in their understanding of statistical risks.

The Irish say, “The common challenge facing young women is achieving a satisfactory work-life balance, especially when children are small. From a career perspective, this period of parenthood (which after all is relatively short compared to an entire working life) tends to coincide exactly with the critical point at which an individual’s career may or may not take off. […] All the evidence shows that it is at this point that women either drop out of the workforce altogether, switch to part-time working or move to more family-friendly jobs, which may be less demanding and which do not always utilise their full skillset.”

And in the Netherlands: “The research project in Tilburg also showed that women academics more often have no children or fewer children than women outside academia.” Meanwhile, in Italy: “On a personal level, the data show that for a significant number of women there is a trade-off between family and work: a large share of female economists in Italy do not live with a partner and do not have children.”

Most jobs available to women with STEM PhDs offer greater stability and a larger salary earlier in the career. Moreover, most non-academic careers have less emphasis on independent research, meaning that employees usually work within the scope of a larger team, and so if a person has to take some time off, there are others who can help cover their workload. By and large, women leave to go to a career where they will be stable, well funded, and well supported, even if it doesn’t fulfill their passion for STEM — or they leave to be stay-at-home moms or self-employed.

I would presume that if we made academia a more feasible place for a woman with a family to work, we could keep almost all of the 20% of leavers who leave to be stay-at-home moms, almost all of the 50% who wind up self-employed or in not-for-profit or government work, and all of the 30% who leave to more family-friendly careers (after all, if academia were made to be as family friendly as other careers, there would be no incentive to leave). Of course, there is nothing wrong with being a stay-at-home parent — it’s an admirable choice and contributes greatly to our society. One estimate valued the equivalent salary benefit of stay-at-home parenthood at about $160,000/year. Moreover, children with a stay-at-home parent show long term benefits such as better school performance — something that most academic women would want for their children. But a lot of people only choose it out of necessity — about half of stay-at-home moms would prefer to be working (Ciciolla, Curlee, & Luthar 2017). When the reality is that your salary is barely more than the cost of daycare, a lot of people wind up giving up and staying home with their kids rather than paying for daycare. In a heterosexual couple it will usually be the woman who winds up staying home, since she is the one who needs to do things like breastfeed anyways. And so we lose these women from the workforce.

And yet, somehow, during this informal research adventure of mine, most scholars and policy makers seem to be advising that we try to encourage young girls to be interested in STEM, and to address sexism in the workplace, with the implication that this will fix the high attrition rate among STEM women. But from what I’ve found, the stats don’t back up sexism as the main reason women leave. There is sexism, and that is a problem, and women do leave STEM because of it — but it’s a problem that we’re already dealing with pretty successfully, and it’s not why the majority of women who have already obtained STEM PhDs opt to leave the field. The whole family planning thing is huge and, for some reason, almost totally swept under the rug — mostly because we’re too shy to talk about it, I think.

In fact, I think that the plethora of articles suggesting that the problem is sexism actually contributes to our unwillingness to talk about the family planning problem, because it reinforces the perception that men in power will not hire a woman for fear that she’ll get pregnant and take time off. Why would anyone talk about how they want to have a family when they keep hearing that even the mere suggestion of such a thing will limit their chances of being hired? I personally know women who have avoided bringing up the topic with colleagues or supervisors for fear of professional repercussions. So we spend all this time and energy talking about how sexism is really bad, and very little time trying to address the family planning challenge, because, I guess, as the stats show, if women are serious enough about science then they just give up on the family (except for the really, really exceptional ones who can handle the stresses of both simultaneously).

To be very clear, I’m not saying that sexism is not a problem. What I am saying is that, thanks to the sustained efforts of many people over a long period of time, we’ve reduced the sexism problem to the point where, at least at the graduate level, it is no longer the main barrier to women’s advancement in STEM. Hurray! That does not mean we should stop paying attention to sexism, but it does mean that it’s time to start paying more attention to other issues, like how to properly support women who want to raise a family while also maintaining a career in STEM.

So what can we do to better support STEM women who want families?

A couple of solutions have been tentatively tested. A study mentioned above makes clear that providing free, conveniently located childcare, along with extended paid maternity leave, makes a colossal difference to women’s choices of whether or not to stay in STEM. Another popular and successful strategy was implemented by a leading woman in STEM, Laurie Glimcher, a past Harvard professor of immunology and now CEO of the Dana-Farber Cancer Institute. While working at the NIH, Dr. Glimcher designed a program to provide primary caregivers (usually women) with an assistant or lab technician to help manage their laboratories while they cared for children. Now, at Dana-Farber, she has created a similar program to pay for a technician or postdoctoral researcher for assistant professors. In the academic setting, Dr. Glimcher’s strategies are key to alleviating the challenges of academia’s individualistic culture without compromising women’s research and leadership potential.

For me personally, I’m in the ideal situation for an academic woman. I graduated from my BSc with high honours in four years, and with many awards. I’ve already had success in research and have published several peer-reviewed papers. I’ve faced some mild sexism from peers and a couple of TAs, but nothing that’s seriously held me back. My supervisors have all been extremely supportive and feminist, and all of the people I work with on a daily basis are equally wonderful. Despite all of this support, I’m looking at the timelines of an academic career and the time constraints of female reproduction, and honestly, I don’t see how I can feasibly expect to stay in academia and have the family life I want. And since I’m in the privileged position of being surrounded by supportive and feminist colleagues, I can say it: I’m considering leaving academia if something doesn’t change, because even though I love it, I don’t see how it can fit into my family plans.

But wait! All of these interventions are really expensive. Money doesn’t just grow on trees, you know!

It doesn’t in general, but in this case it kind of does — well, actually, we already grew it. We spend billions of dollars training women in STEM. By not making full use of their skills, and looking at the American economy alone, we are wasting about $1.5 billion USD per year in economic benefits they would have produced had they stayed in STEM. So here’s a business proposal: let’s spend half of that on better family support and scientific assistants for primary caregivers, and keep the other half as profit. Heck, let’s spend 99% — $1.485 billion (in the States alone) — on better support. That should put a dent in the support bill, and I’d sure pick up the remaining $15 million if I saw it lying around. Wouldn’t you?
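Since money figures invite arithmetic slips, here is a quick Python sanity check of the split above (the $1.5 billion/year figure is the estimate quoted in this post; everything else is plain arithmetic, illustrative only):

```python
# Sanity-checking the proposed split of the article's own $1.5B/year estimate.
annual_loss = 1.5e9            # estimated yearly economic benefit lost (USD)

support_share = 0.99           # fraction redirected to family support
support_budget = support_share * annual_loss
residual = annual_loss - support_budget

print(f"support budget: ${support_budget:,.0f}")
print(f"left over:      ${residual:,.0f}")
```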

By demonstrating that we will support women in STEM who choose to have a family, we will encourage more women with PhDs to apply for the academic positions that they are eminently qualified for. Our institutions will benefit from the wider applicant pool, and our whole society will benefit from having the skills of these highly trained and intelligent women put to use innovating new solutions to our modern day challenges.

The third "Machine Learning for Jets" workshop is under way these days at the Kimmel Center of New York University, a nice venue overlooking Washington Square Park in downtown Manhattan. I came to attend it and stay up to date with the most advanced new algorithms being used for research in collider physics, as I did last year. The workshop is really well organized and all the talks are quite interesting, so this is definitely a good time investment for me.

Yesterday I gave a seminar at the University of California, Riverside, through the magic of Skype. It was the first time I’ve given a talk sitting down, and only the second time I’ve done it in my socks.

The talk was on codensity monads, and that link takes you to the slides. I blogged about this subject lots of times in 2012 (1, 2, 3, 4), and my then-PhD student Tom Avery blogged about it too. In a nutshell, the message is:

Whenever you meet a functor, ask what its codensity monad is.

This should probably be drilled into budding category theorists as much as better-known principles like “whenever you meet a functor, ask what adjoints it has”. But codensity monads took longer to be discovered, and are saddled with a forbidding name — should we just call them “induced monads”?

In any case, following this principle quickly leads to many riches, of which my talk was intended to give a taste.

How to make progress in High Energy Physics

Before I start, and following on from my previous post about B-mesons, today I saw a CERN press release about lepton universality in B-baryons (i.e. particles made of three quarks, at least one of which is a bottom, rather than B-mesons, which consist of a quark and an antiquark, one of which is a bottom). It seems there is a \( 1 \sigma \) deviation in $$ R_{pK}^{-1} \equiv \frac{\mathrm{BR} (\Lambda_b^0 \rightarrow p K^- e^+ e^-)}{\mathrm{BR} (\Lambda_b^0 \rightarrow p K^- J/\psi(\rightarrow e^+ e^-))} \times \frac{\mathrm{BR} (\Lambda_b^0 \rightarrow p K^- J/\psi(\rightarrow \mu^+ \mu^-))}{\mathrm{BR} (\Lambda_b^0 \rightarrow p K^- \mu^+ \mu^-)} $$ While, by itself, it is amazing that this gets a press release heralding a "crack in the Standard Model", it does add some small evidence to the picture of deviations from Standard Model predictions; no doubt an interpretation in terms of a global fit with other observables will soon appear on the arXiv. So it's a positive way to start this entry.

Polemics on foundations of HEP

Recently an article appeared by S. Hossenfelder that again makes the claim that "fundamental physics" is stuck, has failed etc, that theorists are pursuing dead-end theories and "do not think about which hypotheses are promising" and "theoretical physicists have developed a habit of putting forward entirely baseless speculations." It is fairly common and depressing to see this message echoed in a public space. However what riled me enough to write was the article on Not Even Wrong discussing it, in which surprise is expressed that people at elite institutions would still teach courses in Beyond the Standard Model Physics and Supersymmetry. This feels to me like an inadvertent personal attack, since I happen to teach courses on BSM physics and SUSY at an elite French institution (Ecole Polytechnique) ... hence this post.

Physicists are not sheep

Firstly though I'd like to address the idea that physicists are not aware of the state of their field. While "maverick outsiders" might like to believe that HEP theorists live in a bubble, just following what they are told to work on, it upsets me that this message has cut through enough that a lot of students now feel that it is true, that if they do not work on what they perceive to be "hot topics" then they will be censured. The truth is that now, more than perhaps at any time since I entered the field, there is a lack of really "hot topics." In previous decades, there were often papers that would appear with a new idea that would be immediately jumped on by tens or hundreds of people, leading to a large number of more-or-less fast follow-up papers (someone once categorised string theorists as monkeys running from tree to tree eating only the low-hanging fruit). This seems to me to be much less prevalent now. People are really stepping back and thinking about what they do. They are aware that there is a lack of clear evidence for what form new physics should take, and that the previously very popular idea of naturalness is probably not a reliable guide. Some people are embracing new directions in dark matter searches, others are trying to re-interpret experimental data in terms of an effective field theory extension of the Standard Model, others are seeing what we can learn from gravitational waves, still others are looking at developing new searches for axions, some people are looking instead at fundamental Quantum Field Theory problems, cosmology has made huge progress, etc etc (see also this thread by Dan Green). There is a huge diversity of ideas in the field and that is actually very healthy. There is also a very healthy amount of skepticism.

On the other hand, as I mentioned in my previous post, there are some tentative pieces of evidence pointing to new physics at accessible scales; and whatever explains dark matter, it should be possible to probe it with some form of experiment or observation. This is the reason for continued optimism in the field of real breakthroughs. We could be on the verge of overturning the status quo, via the (apparently outdated) method of doing experiments, and then we will race to understand the results and interpret them in terms of our favourite theories -- or maybe genuinely new ones. Of course, maybe these are mirages; as physicists we will continue to look for new and creative ways to search for new phenomena, even if we do not have a new high energy collider -- but if we don't build a new collider we will never know what we might find.

What courses should you take

Coming now to the idea of what students entering the field should learn, in the current negative climate it needs repeating that the Standard Model is incomplete. I'm not just talking about a lack of quantum gravity, but there is a laundry list of problems that I repeat to my students at the beginning of the course:

Quantum gravity.

Dark matter, or something that explains rotation curves, the CMB, etc.

Dark energy -- no, it hasn't been ruled out by one paper on supernovae. It was awarded the Nobel Prize because people already expected it from other observations.

Inflation, or something else that solves the same problems.

The strong CP problem. We have phases in the quark Yukawas, so we should have a neutron electric dipole moment \(10^{10} \) times greater than we observe. Most people believe this should be solved by an axion -- which might also be dark matter -- hence a lot of effort to find it, and ADMX (among other experiments) might be getting close.

Baryogenesis. The Standard Model Higgs is too heavy for the electroweak phase transition to be strongly first order, which electroweak baryogenesis requires. There is apparently not enough CP violation in the Standard Model either.

Neutrino masses. We can't write them into the Standard Model because we don't even know if neutrinos are Majorana or Dirac! Maybe a heavy right-handed neutrino can give us Baryogenesis through leptogenesis. There is a huge amount going on in neutrino physics at the moment, too ...

Most of these topics are not covered in a standard set of graduate courses (at least here). I try to present the evidence and some possible solutions. So the first time many students encounter these issues is through popular press articles and oblique references in "standard" courses. And if we are going to make progress on solving some of these fundamental issues, should students not have some idea of what attempts have been made to solve them?

Turning now to supersymmetry, I would not recommend that a beginning student in particle phenomenology make it the sole focus of their work (unless they really have a good motivation to do so). But there are many reasons to study it still:

It is hugely important in formal applications -- to give us a handle on strongly coupled theories, allowing us to compute things we could never do in non-SUSY theories, as toy models, \( N=4 \) SYM being the "simplest field theory" (as Arkani-Hamed likes to reiterate) etc etc.

It seems to be necessary for the consistency of string theory. I personally prefer string theory as a candidate framework for quantum gravity; if you want to study it, you need to study SUSY.

A lot of the difficulty with the formalism for beginning students is just understanding two-component spinors -- these are actually very useful tools if you want to study amplitudes in general.

It allows us to actually address the Hierarchy problem, and related to this, the idea of the vacuum energy of the theory being related to a cosmological constant. This is a subtle (and maybe heated) discussion for another time.

The gauge couplings apparently unify in the simplest SUSY extensions of the Standard Model. If this is just a coincidence then I feel that nature is playing a cruel joke on us.

The Standard Model appears to be at best metastable (there is some dispute about this). It has been further suggested (e.g. here) that black holes might seed the vacuum decays, so that if it is not absolutely stable then it should decay much quicker than we would otherwise think; and in any case the standard calculation has to assume that there are no quantum gravity contributions (giving higher-order operators). New physics at an intermediate scale (below \( \sim 10^{11} \) GeV) such as supersymmetry would then be necessary to stabilise the vacuum.

It genuinely could still be found at accessible energies; the LHC is actually very poor at finding particles that don't couple to the strong force, and new electroweak states could easily be lurking in plain sight ...

... related to this, it's just about the only "phenomenological framework" for new physics that addresses lots of different problems with the Standard Model.

Of course, nowadays as a community we are trying to hedge our bets: there is much more ambivalence about what theories might be found just around the corner, hence my own work on generic phenomenology, and a lot of interest in the Standard Model EFT.

How we should make progress

Finally we get to the topic of the post. In the original article that I linked to above, Hossenfelder does make (as she has made elsewhere) the positive suggestion that physicists should talk to philosophers. [In France, this is amusing, because there is a fantastic tradition of famous philosophers, and every schoolchild has to study philosophy up to the age of 18.] It is good to make suggestions. In the article, though, is the idea that people cannot recognise promising new ideas amidst a sea of "bad" ones, so people are either following old dead ends or endlessly making ridiculous suggestions. I admit that, superficially, this is the impression people could have got once upon a time, but I would argue it is not the state of the field now. I disagree that the problem is a fundamental one about how people think, or that there is systematic censorship of "good" radical ideas. I don't think there is only one way to make progress: if I had a suggestion for how the scientific creative process should work, I would be applying it like crazy before advertising the benefits publicly! As for censorship of "good" ideas, there are a lot of people willing to take a risk on new concepts. Research is hard, and creativity is not something that is easily taught. But I am constantly amazed by the creativity and ingenuity of my peers, and the diversity of their ideas, and it is heartbreaking to see their effort denigrated in popular articles.

Indeed, repeatedly making the claim in public that one group of scientists are dishonest (or kidding themselves) about progress, that the field has failed etc, helps no-one. It deeply worries me to read that Dominic Cummings has Not Even Wrong on his blog roll; and I have already seen that people in other fields often hold very wrong opinions about the state of fundamental physics due to this filtering through (most people only see wildly speculative and hagiographic articles on one side and hugely negative pessimism on the other). It has been an issue when deciding about grant funding for at least a decade already. And it also filters through to students when they are deciding what to do, who, as I pointed out above, usually haven't really seen enough about the fundamentals of the field before they have to make a decision on what they want to study.

Finally, coming back to the suggestion that physicists do not think about what they are doing or why: there are two very important times when we emphatically do do this: when we are teaching, and when we are writing grant proposals. The preparation of both can be hard work, but it is rewarding, and these are more reasons that I have faith in my fellow physicists’ ability to genuinely try to challenge the big problems in our field.

Last time I defined sheaves on a topological space \(X\); this time I’ll say how to get these sheaves from ‘bundles’ over \(X\). You may or may not have heard of bundles of various kinds, like vector bundles or fiber bundles. If you have, be glad: the bundles I’m talking about now include these as special cases. If not, don’t worry: the bundles I’m talking about now are much simpler!

A bundle over \(X\) is simply a topological space \(E\) equipped with a continuous map to \(X\), say

\[ p \colon E \to X. \]

You should visualize \(E\) as hovering above \(X\), and \(p\) as projecting each point \(e \in E\) down to its shadow \(p(e)\) in \(X\). This explains the word ‘over’, the term ‘projection’ for the map \(p\), and many other things. It’s a powerful metaphor.

Bundles are not only a great source of examples of sheaves; in fact every sheaf comes from a bundle! Conversely, every sheaf—and even every presheaf—gives rise to a bundle.

But these constructions, which I’ll explain, do not give an equivalence of categories. That is, sheaves are not just another way of thinking about bundles, and neither are presheaves. Instead, we’ll get adjoint functors between the category of presheaves on \(X\) and the category of bundles over \(X\), and these will restrict to give an equivalence between the category of ‘nice’ presheaves on \(X\)—namely, the sheaves—and a certain category of ‘nice’ bundles over \(X\), which are called ‘etale spaces’.

Thus, in the end we’ll get two complementary viewpoints on sheaves: the one I discussed last time, and another, where we think of them as especially nice bundles over \(X\). In Sections 2.8 and 2.9, Mac Lane and Moerdijk use these complementary viewpoints to efficiently prove some of the big theorems about sheaves that I stated last time.

Before we get going, a word about a word: ‘etale’. This is really a French word, ‘étalé’, meaning ‘spread out’. We’ll see why Grothendieck chose this word. But for now I mainly just want to apologize for leaving out the accents. I’m going to be typing a lot, it’s a pain to stick in those accents each time, and in English, words with accents feel ‘fancy’.

From bundles to presheaves

Any bundle over \(X\), meaning any continuous map

\[ p \colon E \to X, \]

gives a sheaf over \(X\). Here’s how. Given an open set \(U \subseteq X\), define a section of \(p\) over \(U\) to be a continuous function

\[ s \colon U \to E \]

such that

\[ p \circ s = 1_U. \]

In terms of pictures (which I’m too lazy to draw here) \(s\) maps each point of \(U\) to a point in \(E\) ‘sitting directly over it’. There’s a presheaf \(\Gamma_p\) on \(X\) that assigns to each open set \(U\) the set \(\Gamma_p(U)\) of all sections of \(p\) over \(U\).

Of course, to make this into a presheaf we need to say how to restrict sections over \(U\) to sections over a smaller open set \(U' \subseteq U\), but we do this in the usual way: by restricting a function to a subset of its domain.

Puzzle. Check that with this choice of restriction maps, sections form a presheaf, and in fact a sheaf.
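If you like to experiment, here is a tiny Python sketch of this puzzle’s setup (the names `X`, `opens`, `E`, `p`, `sections_over` are my own, not from the book; I use finite discrete spaces so that every function is continuous and sections are just set-level right inverses of the projection):

```python
from itertools import product

# A toy bundle: X = {1, 2} with a three-open-set topology, E = {a, b, c},
# and projection p sending a, b over 1 and c over 2.
X = {1, 2}
opens = [frozenset(), frozenset({1}), frozenset({1, 2})]   # a topology on X
E = {'a', 'b', 'c'}
p = {'a': 1, 'b': 1, 'c': 2}   # the projection E -> X

def sections_over(U):
    """All s: U -> E with p(s(x)) = x for each x in U, encoded as dicts."""
    fibers = [[e for e in E if p[e] == x] for x in sorted(U)]
    return [dict(zip(sorted(U), choice)) for choice in product(*fibers)]

def restrict(s, V):
    """Restrict a section to a smaller open set V."""
    return {x: s[x] for x in V}

# Two sections over the whole space: the fiber over 1 is {a, b}, over 2 is {c}.
assert len(sections_over(frozenset({1, 2}))) == 2
# Restricting in two steps agrees with restricting directly (presheaf axiom).
for s in sections_over(frozenset({1, 2})):
    assert restrict(restrict(s, frozenset({1, 2})), frozenset({1})) == restrict(s, frozenset({1}))
# Over the empty set there is exactly one section, the empty function --
# matching the sheaf condition for the empty cover.
assert sections_over(frozenset()) == [{}]
```

Of course this finite model sidesteps all the topology; it only illustrates the combinatorics of sections and restriction.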

There’s actually a category of bundles over \(X\). Given bundles

\[ p \colon E \to X \]

and

\[ p' \colon E' \to X, \]

a morphism from the first to the second is a continuous map

\[ f \colon E \to E' \]

making the obvious triangle commute:

\[ p' \circ f = p. \]

I’m too lazy to draw this as a triangle, so if you don’t see it in your mind’s eye you’d better draw it. Draw \(E\) and \(E'\) as two spaces hovering over \(X\), and \(f\) as mapping each point in \(E\) over \(x \in X\) to a point in \(E'\) over the same point \(x\).

We can compose morphisms between bundles over \(X\) in an evident way: a morphism is a continuous map with some property, so we just compose those maps. We thus get a category of bundles over \(X\), which is called \(\mathsf{Top}/X\).

I’ve told you how a bundle over \(X\) gives a presheaf on \(X\). Similarly, a morphism of bundles over \(X\) gives a morphism of presheaves on \(X\). Because this works in a very easy way, it should be no surprise that this gives a functor, which we call

\[ \Gamma \colon \mathsf{Top}/X \to \widehat{\mathcal{O}(X)}. \]

Puzzle. Suppose we have two bundles over \(X\), say \(p \colon E \to X\) and \(p' \colon E' \to X\), and a morphism from the first to the second, say \(f \colon E \to E'\). Suppose \(s\) is a section of the first bundle over the open set \(U \subseteq X\). Show that \(f \circ s\) is a section of the second bundle over \(U\). Use this to describe what the functor \(\Gamma\) does on morphisms, and check functoriality.

From presheaves to bundles

How do we go back from presheaves to bundles? Start with a presheaf

\[ F \colon \mathcal{O}(X)^{\mathrm{op}} \to \mathsf{Set} \]

on \(X\). To build a bundle over \(X\), we’ll start by building a bunch of sets called \(F_x\), one for each point \(x \in X\). Then we’ll take the union of these and put a topology on it, getting a space called \(\Lambda F\). There will be a map

\[ p \colon \Lambda F \to X \]

sending all the points in \(F_x\) to \(x\), and this will be our bundle over \(X\).

How do we build these sets \(F_x\)? Our presheaf

\[ F \colon \mathcal{O}(X)^{\mathrm{op}} \to \mathsf{Set} \]

doesn’t give us sets for points of \(X\), just for open sets. So, we should take some sort of ‘limit’ of the sets \(F(U)\) over smaller and smaller open neighborhoods \(U\) of \(x\). Remember, if \(U' \subseteq U\), our presheaf gives a restriction map

\[ F(U) \to F(U'). \]

So, what we’ll actually do is take the colimit of all these sets \(F(U)\) as \(U\) ranges over all the open neighborhoods of \(x\). That gives us our set

\[ F_x = \mathrm{colim}_{U \ni x} F(U). \]

It’s good to ponder what elements of \(F_x\) are actually like. They’re called germs at \(x\), which is a nice name, because you can only see them under a microscope! For example, suppose \(F\) is the sheaf of continuous real-valued functions, so that \(F(U)\) consists of all continuous functions from \(U\) to \(\mathbb{R}\). By the definition of colimit, for any open neighborhood \(U\) of \(x\) we have a map

\[ F(U) \to F_x. \]

So any continuous real-valued function defined on any open neighborhood of \(x\) gives a ‘germ’ of a function at \(x\). But also by the definition of colimit, any two such functions give the same germ at \(x\) iff they become equal when restricted to some open neighborhood of \(x\). So the germ of a function at \(x\) is what’s left of that function as you zoom in closer and closer to the point \(x\).

(If we were studying analytic functions on the real line, the germ at \(x\) would remember exactly their Taylor series at that point. But smooth functions have more information in their germs, and continuous functions are weirder still. For more on germs, watch this video.)
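Here is a small numerical Python illustration (my own toy choices of the functions and the point): two globally different functions on \(\mathbb{R}\) have the same germ at a point, because they agree on a whole neighborhood of it:

```python
# f(t) = |t| and g(t) = t differ as functions on R, but agree on the
# neighborhood (0, oo) of x0 = 1 -- so they have the same germ at x0.
f = lambda t: abs(t)
g = lambda t: t

x0 = 1.0
near = [x0 + d for d in (-0.3, -0.1, 0.0, 0.1, 0.3)]   # sample points near x0
assert all(f(t) == g(t) for t in near)   # equal near x0: same germ at x0
assert f(-2.0) != g(-2.0)                # yet unequal as functions on all of R
```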

Now that we have the set \(F_x\) of germs for each point \(x \in X\), we define

\[ \Lambda F = \bigcup_{x \in X} F_x. \]

There is then a unique function

\[ p \colon \Lambda F \to X \]

sending everybody in \(F_x\) to \(x\). So we’ve almost gotten our bundle over \(X\). We just need to put a topology on \(\Lambda F\).

We do this as follows. We’ll give a basis for the topology, by describing a bunch of open neighborhoods of each point in \(\Lambda F\). Remember, any point in \(\Lambda F\) is a germ. More specifically, any point in \(\Lambda F\) is in some set \(F_x\), so it’s the germ at \(x\) of some \(s \in F(U)\), where \(U\) is an open neighborhood of \(x\). But this \(s\) has lots of other germs, too, namely its germs at all points \(y \in U\). We take the collection of all these germs to be an open neighborhood of our point. A general open set in \(\Lambda F\) will then be an arbitrary union of sets like this.

Puzzle. Show that with this topology on \(\Lambda F\), the map \(p \colon \Lambda F \to X\) is continuous.

Thus any presheaf on \(X\) gives a bundle over \(X\).

Puzzle. Describe how a morphism of presheaves on \(X\) gives a morphism of bundles over \(X\), and show that your construction defines a functor

\[ \Lambda \colon \widehat{\mathcal{O}(X)} \to \mathsf{Top}/X. \]

Etale spaces

So now we have functors that turn bundles into presheaves:

\[ \Gamma \colon \mathsf{Top}/X \to \widehat{\mathcal{O}(X)}, \]

and presheaves into bundles:

\[ \Lambda \colon \widehat{\mathcal{O}(X)} \to \mathsf{Top}/X. \]

But we have already seen that the presheaves coming from bundles are ‘better than average’: they are sheaves! Similarly, the bundles coming from presheaves are better than average. They are ‘etale spaces’.

What does this mean? Well, if you think back on how we took a presheaf \(F\) and gave \(\Lambda F\) a topology a minute ago, you’ll see something very funny about that topology. Each point in \(\Lambda F\) has a neighborhood such that

\[ p \colon \Lambda F \to X \]

restricted to that neighborhood is a homeomorphism. Indeed, remember that each point in \(\Lambda F\) is a germ of some

\[ s \in F(U) \]

for some open \(U \subseteq X\). We made the set of all germs of \(s\) at points of \(U\) into an open set in \(\Lambda F\). Call that open set \(\tilde{U}\).

Puzzle. Show that \(p\) restricts to a homeomorphism from this open set of germs of \(s\) to \(U\).

In class I’ll draw a picture of what’s going on. \(\Lambda F\) is a space sitting over \(X\) that has lots of open sets that look exactly like open sets down in \(X\). In terms of our visual metaphor, these open sets are ‘horizontal’, which is why we invoke the term ‘etale’:

Definition. A bundle \(p \colon E \to X\) is etale if each point \(e \in E\) has an open neighborhood \(V\) such that \(p\) restricted to \(V\) is a homeomorphism from \(V\) to an open subset of \(X\). We often call such a bundle an etale space over \(X\).

So, if you did the last puzzle, you’ve shown that any presheaf on \(X\) gives an etale space over \(X\).

(By the way, if you know about covering spaces, you should note that every covering space of \(X\) is an etale space over \(X\), but not conversely. In a covering space \(p \colon E \to X\) we demand that each point down below, in \(X\), has a neighborhood \(U\) such that \(p^{-1}(U)\) is a disjoint union of open sets homeomorphic to \(U\), with \(p\) restricting to a homeomorphism on each of these open sets. In an etale space we merely demand that each point up above, in \(E\), has a neighborhood such that \(p\) restricted to that neighborhood is a homeomorphism onto an open set. This is a weaker condition. In general, etale spaces are rather weird if you’re used to spaces like manifolds: for example, \(\Lambda F\) will often not be Hausdorff.)

Sheaves versus etale spaces

Now things are nicely symmetrical! We have a functor that turns bundles into presheaves,

\[ \Gamma \colon \mathsf{Top}/X \to \widehat{\mathcal{O}(X)}, \]

but in fact it turns bundles into sheaves. We have a functor that turns presheaves into bundles,

\[ \Lambda \colon \widehat{\mathcal{O}(X)} \to \mathsf{Top}/X, \]

but in fact it turns presheaves into etale spaces.

Last time we defined \(\mathsf{Sh}(X)\) to be the full subcategory of \(\widehat{\mathcal{O}(X)}\) having sheaves as objects. Now let’s define \(\mathsf{Etale}(X)\) to be the full subcategory of \(\mathsf{Top}/X\) having etale spaces as objects. And here’s the punchline:

Theorem. The functor

\[ \Lambda \colon \widehat{\mathcal{O}(X)} \to \mathsf{Top}/X \]

is left adjoint to the functor

\[ \Gamma \colon \mathsf{Top}/X \to \widehat{\mathcal{O}(X)}. \]

Moreover, if we restrict these functors to the subcategories \(\mathsf{Sh}(X)\) and \(\mathsf{Etale}(X)\), we get an equivalence of categories

\[ \mathsf{Sh}(X) \simeq \mathsf{Etale}(X). \]

The proof involves some work but also some very beautiful abstract nonsense: see Theorem 2, Corollary 3 and Lemma 4 of Section II.6. There’s a lot more to say, but this seems like a good place to stop.

The Wolf Prize in Physics for 2020 was announced yesterday, and it's going to Pablo Jarillo-Herrero, Allan MacDonald, and Rafi Bistritzer, for twisted bilayer graphene. This prize is both well-deserved and a great example of how condensed matter physics works.

MacDonald and Bistritzer did key theory work (for example) highlighting how the band structure of twisted bilayer graphene would become very interesting for certain twist angles - how the moiré pattern from the two layers would produce a lateral periodicity, and that interactions between the layers would lead to very flat bands. Did they predict every exotic thing that has been seen in this system? No, but they had the insight to get key elements, and the knowledge that flat bands would likely lead to many competing energy scales, including electron-electron interactions, the weak kinetic energy of the flat bands, the interlayer coupling, effective magnetic interactions, etc. Jarillo-Herrero was the first to implement this with sufficient control and sample quality to uncover a remarkable phase diagram involving superconductivity and correlated insulating states. Figuring out what is really going on here and looking at all the possibilities in related layered materials will keep people busy for years. (As an added example of how condensed matter works as a field, Bistritzer is in industry working for Applied Materials.)

All of this activity and excitement, thanks to feedback between well-motivated theory and experiment, is how the bulk of physics that isn't "high energy theory" actually works.

I’m teaching an introduction to topos theory this quarter, loosely based on Mac Lane and Moerdijk’s Sheaves in Geometry and Logic.

I’m teaching one and a half hours each week for 10 weeks, so we probably won’t make it very far through this 629-page book. I may continue next quarter, but still, to make good progress I’ll have to do various things.

First, I’ll assume basic knowledge of category theory, a lot of which is explained in the Categorical Preliminaries and Chapter 1 of this book. I’ll start in with Chapter 2. Feel free to ask questions!

Second, I’ll skip a lot of proofs and focus on stating definitions and theorems, and explaining what they mean and why they’re interesting.

These notes to myself will be compressed versions of what I will later write on the whiteboard.

Sheaves

Topos theory emerged from Grothendieck’s work on algebraic geometry; he developed it as part of his plan to prove the Weil Conjectures. It was really just one of many linked innovations in algebraic geometry that emerged from the French school, and it makes the most sense if you examine the whole package. Unfortunately algebraic geometry takes a long time to explain! But later Lawvere and Tierney realized that topos theory could serve as a grand generalization of logic and set theory. This logical approach is more self-contained, and easier to explain, but also a bit more dry—at least to me. I will try to steer a middle course, and the title Sheaves in Geometry and Logic shows that Mac Lane and Moerdijk were trying to do this too.

The basic idea of algebraic geometry is to associate to a space the commutative ring of functions on that space, and to study the geometry and topology of the space using that ring. For example, if \(X\) is a compact Hausdorff space there’s a ring \(C(X)\) consisting of all continuous real-valued functions on \(X\), and you can recover \(X\) from this ring. But algebraic geometers often deal with situations where there aren’t enough everywhere-defined functions (of the sort they want to consider) on a space. For example, the only analytic functions on the Riemann sphere are constant functions. That’s not good enough! Most analytic functions on the Riemann sphere have poles, and are only defined away from these poles. (I’m giving an example from complex analysis, in hopes that more people will get what I’m talking about, but there are plenty of purely algebraic examples.)

This forced algebraic geometers to invent ‘sheaves’, around 1945 or so. The idea of a sheaf is that instead of only considering functions defined everywhere, we look at functions defined on open sets.

So, let \(X\) be a topological space and let \(\mathcal{O}(X)\) be the collection of open subsets of \(X\). This is a poset with inclusion as the partial ordering, and thus it is a category. A presheaf on \(X\) is a functor

\[ F \colon \mathcal{O}(X)^{\mathrm{op}} \to \mathsf{Set}. \]

So, a presheaf \(F\) assigns to each open set \(U \subseteq X\) a set \(F(U)\). It allows us to restrict an element of \(F(U)\) to any smaller open set \(U' \subseteq U\), and a couple of axioms hold, which are encoded in the word ‘functor’. Note the ‘op’: that’s what lets us restrict elements of \(F(U)\) to smaller open sets.

The example to keep in mind is where \(F(U)\) consists of functions on \(U\) (that is, functions of the sort we want to consider, such as continuous or smooth or analytic functions). However, other examples are important too.

In many of these examples something nice happens. First, suppose we have \(s \in F(U)\) and an open cover of \(U\) by open sets \(U_i \subseteq U\). Then we can restrict \(s\) to \(U_i\), getting something we can call \(s|_{U_i}\). We can then further restrict this to \(U_i \cap U_j\). And by the definition of presheaf, we have

\[ (s|_{U_i})|_{U_i \cap U_j} = (s|_{U_j})|_{U_i \cap U_j}. \]

In other words, if we take a guy in \(F(U)\) and restrict it to a bunch of open sets covering \(U\), the resulting guys agree on the overlaps \(U_i \cap U_j\). Check that this follows from the definition of functor and some other facts!

This is true for any presheaf. A presheaf is a sheaf if we can start the other way around, with a bunch of guys that agree on overlaps:

\[ s_i \in F(U_i) \quad \text{with} \quad (s_i)|_{U_i \cap U_j} = (s_j)|_{U_i \cap U_j}, \]

and get a unique \(s \in F(U)\) that restricts to all these guys:

\[ s|_{U_i} = s_i. \]

Note this definition secretly has two clauses: I’m saying that in this situation \(s\) exists and is unique. If we have uniqueness but not necessarily existence, we say our presheaf is a separated presheaf.
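Here is a finite toy model of the gluing clause in Python (the encoding is my own, not the book’s: a ‘section’ over an open set is a dict assigning a value in \(\{0,1\}\) to each point, and \(F(V)\) is the set of all such dicts). Compatible local sections glue to a unique global one; incompatible ones don’t glue at all:

```python
from itertools import product

# Toy model: X = {1, 2, 3} covered by U1 = {1, 2} and U2 = {2, 3}; F is the
# sheaf of all {0,1}-valued functions, with sections encoded as dicts.
U1, U2 = frozenset({1, 2}), frozenset({2, 3})
U = U1 | U2

def F(V):
    """All sections over V: every assignment of 0/1 to the points of V."""
    return [dict(zip(sorted(V), vals)) for vals in product([0, 1], repeat=len(V))]

def glue(s1, s2):
    """Glue sections over U1, U2 that agree on the overlap; None if they clash."""
    if any(s1[x] != s2[x] for x in U1 & U2):
        return None
    return {**s1, **s2}

s1 = {1: 0, 2: 1}
s2 = {2: 1, 3: 0}
s = glue(s1, s2)
assert s in F(U)          # the glued section really is a section over U
assert glue({1: 0, 2: 1}, {2: 0, 3: 0}) is None   # incompatible on the overlap
```

Uniqueness holds here too: a section over \(U\) is determined by its values at every point, hence by its restrictions to \(U_1\) and \(U_2\).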

The point of a sheaf is that you can tell if something is in \(F(U)\) by examining it locally. These examples explain what I mean:

Puzzle. Let \(X\) be a topological space, and for each open set \(U \subseteq X\) take \(F(U)\) to be the set of continuous real-valued functions on \(U\). Show that with the usual concept of restriction of functions, \(F\) is a presheaf and in fact a sheaf.

Puzzle. Let $X = \mathbb{R}$, and for each open set $U \subseteq X$ take $F(U)$ to be the set of bounded continuous real-valued functions on $U$. Show that with the usual concept of restriction of functions, $F$ is a separated presheaf but not a sheaf.

The problem is that a function can be bounded on each open set in an open cover of $U$ yet not bounded on $U$. You can tell if a function is continuous by examining it locally, but you can’t tell if it’s bounded!

So, in a sense that should gradually become clear, sheaves are about ‘local truth’.
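
The gluing condition can be made concrete on a toy example. Here is a minimal Python sketch of the sheaf of functions on a finite set (the encoding and the helper names `restrict` and `glue` are mine, purely for illustration): sections are dicts, restriction is restriction of dicts, and gluing a compatible family produces the unique section restricting to each piece.

```python
# Toy sketch of the sheaf condition: sections of the sheaf of functions on a
# finite set are modeled as dicts; restriction is restriction of dicts.

def restrict(s, V):
    """Restrict a section s (a dict) to the open set V (a set of points)."""
    return {x: v for x, v in s.items() if x in V}

def glue(sections):
    """Given pairs (U_i, s_i) agreeing on overlaps, return the unique
    section on the union that restricts to each s_i."""
    glued = {}
    for U, s in sections:
        for x in U:
            if x in glued and glued[x] != s[x]:
                raise ValueError("sections disagree on an overlap")
            glued[x] = s[x]
    return glued

# Two sections agreeing on the overlap {2} glue to a unique section on {1,2,3}:
s = glue([({1, 2}, {1: 0, 2: 1}), ({2, 3}, {2: 1, 3: 0})])
```

The bounded-functions example has no finite analogue, which is exactly the point: boundedness is not a local condition.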

The category of sheaves on a space

There’s a category of presheaves on any topological space $X$. Since a presheaf on $X$ is a functor

$F \colon \mathcal{O}(X)^{\mathrm{op}} \to \mathsf{Set},$

a morphism between presheaves is a natural transformation between such functors.

Remember, if $\mathsf{C}$ and $\mathsf{D}$ are categories, we use $\mathsf{D}^{\mathsf{C}}$ to stand for the category where the objects are functors from $\mathsf{C}$ to $\mathsf{D}$ and the morphisms are natural transformations. This is called a functor category.

So, a category of presheaves is just an example of a functor category, and the category of presheaves on $X$ is called

$\mathsf{Set}^{\mathcal{O}(X)^{\mathrm{op}}}$

But this name is rather ungainly, so we make an abbreviation:

$\widehat{\mathsf{C}} = \mathsf{Set}^{\mathsf{C}^{\mathrm{op}}}$

Then the category of presheaves on $X$ is called $\widehat{\mathcal{O}(X)}$.

Sheaves are subtler, but we define morphisms of sheaves the exact same way. Every sheaf has an underlying presheaf, so we define a morphism between sheaves to be a morphism between their underlying presheaves. This gives the category of sheaves on $X$, which we call $\mathsf{Sh}(X)$.

By how we’ve set things up, $\mathsf{Sh}(X)$ is a full subcategory of $\widehat{\mathcal{O}(X)}$.

Now, what Grothendieck realized is that $\mathsf{Sh}(X)$ acts a whole lot like the category of sets. For example, in the category of sets we can define ‘commutative rings’, but we can copy the definition in $\mathsf{Sh}(X)$ and get ‘sheaves of commutative rings’, and so on. The point is that we’re copying ordinary math, but doing it locally, in a topological space.

Elementary topoi

Lawvere and Tierney clarified what was going on here by inventing the concept of ‘elementary topos’. I’ll throw the definition at you now and explain all the pieces in future classes:

Definition. An elementary topos, or topos for short, is a category with finite limits and colimits, exponentials and a subobject classifier.

I hope you know limits and colimits, since that’s the kind of basic category theory definition I’m assuming. Given two objects $x$ and $y$ in a category, their exponential $y^x$ is an object that acts like the thing of all maps from $x$ to $y$. I’ll give the actual definition later. A subobject classifier is, roughly, an object $\Omega$ that generalizes the usual set of truth values $\{F, T\}$.

Namely, subobjects of any object $x$ are in one-to-one correspondence with morphisms from $x$ to $\Omega$, which serve as ‘characteristic functions’. Again, this is just a sketch: I’ll give the actual definition later, or you can click on the link and read it now.

The point is that an elementary topos has enough bells and whistles that we can ‘do mathematics inside it’. It’s like an alternative universe, a variant of our usual category of sets and functions, where mathematicians can live. But beware: in general, the kind of mathematics we do in an elementary topos is finitistic mathematics using intuitionistic logic.

You see, the category of finite sets is an elementary topos, so you can’t expect to have ‘infinite objects’ like the set of natural numbers in an elementary topos—unless you decree that you want them (which people often do).

Also, we will see that while $\{F, T\}$ is a Boolean algebra, the subobject classifier $\Omega$ of an elementary topos need only be a ‘Heyting algebra’: a generalization of a Boolean algebra in which the law of excluded middle fails. This is actually not weird: it’s connected to the fact that a category of sheaves lets us reason ‘locally’. For example, we don’t just care if two functions are equal or not, we care if they’re equal or not in each open set. So we need a subtler form of logic than classical Boolean logic.
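
One standard way to see excluded middle fail is in the Heyting algebra of open sets of a space, where ‘not $U$’ is the pseudo-complement: the largest open set disjoint from $U$. Here is a toy Python sketch (the encoding and the name `heyting_not` are mine) using the Sierpinski topology on a two-point space:

```python
# Toy model of the Heyting algebra of open sets of a space.
# "not U" is the pseudo-complement: the union of all opens disjoint from U
# (itself open, since topologies are closed under unions).

X = frozenset({0, 1})
opens = [frozenset(), frozenset({1}), X]  # the Sierpinski topology

def heyting_not(U):
    """Pseudo-complement of U: union of all open sets disjoint from U."""
    result = frozenset()
    for V in opens:
        if not (V & U):
            result |= V
    return result

U = frozenset({1})
# U "or" (not U) is {1} union {} = {1}, which is not all of X:
excluded_middle = (U | heyting_not(U)) == X
```

So in this lattice $U \vee \neg U$ falls short of the top element: the law of excluded middle fails, exactly as it may in the logic of a topos.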

There’s a lot more to say, and I’m just sketching out the territory now, but one of the first big theorems we’re aiming for is this:

Theorem. For any topological space $X$, the category $\mathsf{Sh}(X)$ is an elementary topos.

The topos of sheaves $\mathsf{Sh}(X)$ remembers a lot about the topological space $X$ that it came from… so a topos can also be seen as a way of talking about a space! This is even true for elementary topoi that aren’t topoi of sheaves on an actual space. So, topos theory is more than a generalization of set theory. It’s also, in a different way, a generalization of topology.

Grothendieck topoi

You’ll notice that sheaves on $X$ were defined starting with the poset $\mathcal{O}(X)$ of open sets of $X$. In fact, to define them we never used anything about $X$ except this poset! This suggests that we could define sheaves more generally starting from any poset.

And that’s true—but Grothendieck went further: he defined sheaves starting from any category, as long as that category was equipped with some extra structure saying when a bunch of morphisms $f_i \colon c_i \to c$ serve to ‘cover’ the object $c$. This extra data is called a ‘coverage’ or more often (rather confusingly) a ‘Grothendieck topology’. A category equipped with a Grothendieck topology is called a ‘site’.

So, Grothendieck figured out how to talk about the category of sheaves on any site $\mathsf{C}$. He did this before Lawvere and Tierney came along, and this was his definition of a topos. So, nowadays we say a category of sheaves on a site is a Grothendieck topos. However:

Theorem. Any Grothendieck topos is an elementary topos.

So, Lawvere and Tierney’s approach subsumes Grothendieck’s, in a sense. Not every elementary topos is a Grothendieck topos, though! For example, the category of finite sets is an elementary topos but not a Grothendieck topos. (It’s not big enough: any Grothendieck topos has, not just finite limits and colimits, but all small limits and colimits.) So both concepts of topos are important and still used. But when I say just ‘topos’, I’ll mean ‘elementary topos’.

Why did Grothendieck bother to generalize the concept of sheaves from sheaves on a topological space to sheaves on a site? He wasn’t just doing it for fun: it was a crucial step in his attempt to prove the Weil Conjectures!

Basically, when you’re dealing with spaces that algebraic geometers like—say, algebraic varieties—there aren’t enough open sets to do everything we want, so we need to use covering spaces as a generalization of open covers. So, instead of defining sheaves using the poset of open subsets of our space $X$, Grothendieck needed to use the category of covering spaces of $X$.

That’s the rough idea, anyway.

Geometric morphisms

As you probably know if you’re reading this, category theory is all about the morphisms. This is true not just within a category, but between them. The point of topos theory is not just to study one topos, but many. We don’t want merely to do mathematics in alternative universes: we want to be able to translate mathematics from one alternative universe to another!

So, what are the morphisms between topoi?

First, if you have a continuous map $f \colon X \to Y$ between topological spaces, you can take the ‘direct image’ of a presheaf on $X$ to get a presheaf on $Y$. Here’s how this works.

The inverse image of any open set is open, so we get an inverse image map

$f^{-1} \colon \mathcal{O}(Y) \to \mathcal{O}(X)$

sending each open set $U \subseteq Y$ to the open set $f^{-1}(U) \subseteq X$.

Given a presheaf $F$ on $X$, we define its direct image to be the presheaf $f_* F$ on $Y$ given by

$(f_* F)(U) = F(f^{-1}(U))$

Note the double reversal here: $f$ maps points in $X$ to points in $Y$, but open sets in $Y$ give open sets in $X$, and then presheaves on $X$ give presheaves on $Y$.

Of course we need to check that it works:

Puzzle. Show that $f_* F$ is a presheaf. That is, explain how we can restrict an element of $(f_* F)(U)$ to any open set contained in $U$, and check that we get a presheaf this way.

In fact it works very nicely:

Puzzle. Show that taking direct images gives a functor $f_*$ from the category of presheaves on $X$ to the category of presheaves on $Y$.

Puzzle. Show that if $F$ is a sheaf on $X$, its direct image $f_* F$ is a sheaf on $Y$.
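
The double reversal is easy to make tangible on finite spaces. Here is a toy Python sketch (the encoding is my own illustration, not the course’s): a presheaf on $X$ is modeled as a dict from open sets to sections, and the direct image along $f$ assigns to an open $U \subseteq Y$ the sections over the preimage $f^{-1}(U)$.

```python
# Toy direct image on finite spaces: (f_* F)(U) = F(f^{-1}(U)).

def preimage(f, U):
    """f^{-1}(U), for f given as a dict from points of X to points of Y."""
    return frozenset(x for x, y in f.items() if y in U)

def direct_image(F, f, opens_Y):
    """The presheaf f_* F on Y, as a dict indexed by the open sets of Y."""
    return {U: F[preimage(f, U)] for U in opens_Y}

# X = {1,2,3}, Y = {a,b}; f sends 1,2 to a and 3 to b.
f = {1: "a", 2: "a", 3: "b"}
F = {frozenset(): {},
     frozenset({3}): {3: 7},
     frozenset({1, 2}): {1: 5, 2: 6},
     frozenset({1, 2, 3}): {1: 5, 2: 6, 3: 7}}
opens_Y = [frozenset(), frozenset({"a"}), frozenset({"a", "b"})]
fF = direct_image(F, f, opens_Y)
```

Points go from $X$ to $Y$, but the sections over an open set of $Y$ are read off over its preimage in $X$, exactly as in the formula.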

The upshot of all this is that a continuous map between topological spaces

$f \colon X \to Y$

gives a functor between sheaf categories

$f_* \colon \mathsf{Sh}(X) \to \mathsf{Sh}(Y)$

And this functor turns out to be very nice! This is another big theorem we aim to prove later:

Theorem. If $f \colon X \to Y$ is a continuous map between topological spaces, the functor

$f_* \colon \mathsf{Sh}(X) \to \mathsf{Sh}(Y)$

has a left adjoint

$f^* \colon \mathsf{Sh}(Y) \to \mathsf{Sh}(X)$

that preserves finite limits.

This left adjoint is called the inverse image map. Note that because $f_*$ has a left adjoint, it is a right adjoint, so it preserves limits. Because $f^*$ is a left adjoint, it preserves colimits. The fact that $f^*$ preserves finite limits is extra gravy on top of an already nice situation!

We bundle all this niceness into a definition:

Definition. A functor $f_* \colon \mathsf{T} \to \mathsf{T}'$ between topoi is a geometric morphism if it has a left adjoint that preserves finite limits.

And this is the most important kind of morphism between topoi. It’s not a very obvious definition, but it’s extracted straight from what happens in examples.

To wrap up, I should add that people usually call the pair consisting of $f_*$ and its left adjoint $f^*$ a geometric morphism. A functor has at most one left adjoint, up to natural isomorphism, so my definition is at least tolerable. But I’ll probably switch to the standard one when we get serious about geometric morphisms.

And we will eventually see that geometric morphisms let us translate mathematics from one alternative universe to another!

Conclusion

If this seemed like too much too soon, fear not: I’ll go over it again and actually define a lot of the concepts I merely sketched, like ‘exponentials’, ‘subobject classifier’, ‘Heyting algebra’, ‘Grothendieck topology’, and ‘Grothendieck topos’. I just wanted to get a lot of the main concepts on the table quickly. You should do the puzzles to see if you understand what I wanted you to understand. Unless I made a mistake, all of these are straightforward definition-pushing if you’re comfortable with some basic category theory.

For more background on topos theory I highly recommend this:

Abstract. The view that toposes originated as generalized set theory is a figment of set theoretically educated common sense. This false history obstructs understanding of category theory and especially of categorical foundations for mathematics. Problems in geometry, topology, and related algebra led to categories and toposes. Elementary toposes arose when Lawvere’s interest in the foundations of physics and Tierney’s in the foundations of topology led both to study Grothendieck’s foundations for algebraic geometry. I end with remarks on a categorical view of the history of set theory, including a false history plausible from that point of view that would make it helpful to introduce toposes as a generalization from set theory.

There’s also a lot of background material in the book for this course:

Just a brief post to record some notable papers in my fields of interest that appeared on the arXiv recently.

“A sharp square function estimate for the cone in $\mathbb{R}^3$”, by Larry Guth, Hong Wang, and Ruixiang Zhang. This paper establishes an optimal (up to epsilon losses) square function estimate for the three-dimensional light cone that was essentially conjectured by Mockenhaupt, Seeger, and Sogge, which has a number of other consequences including Sogge’s local smoothing conjecture for the wave equation in two spatial dimensions, which in turn implies the (already known) Bochner-Riesz, restriction, and Kakeya conjectures in two dimensions. Interestingly, modern techniques such as polynomial partitioning and decoupling estimates are not used in this argument; instead, the authors mostly rely on an induction on scales argument and Kakeya type estimates. Many previous authors (including myself) were able to get weaker estimates of this type by an induction on scales method, but there were always significant inefficiencies in doing so; in particular knowing the sharp square function estimate at smaller scales did not imply the sharp square function estimate at the given larger scale. The authors here get around this issue by finding an even stronger estimate that implies the square function estimate, but behaves significantly better with respect to induction on scales.

“On the Chowla and twin primes conjectures over $\mathbb{F}_q[T]$”, by Will Sawin and Mark Shusterman. This paper resolves a number of well known open conjectures in analytic number theory, such as the Chowla conjecture and the twin prime conjecture (in the strong form conjectured by Hardy and Littlewood), in the case of function fields $\mathbb{F}_q[T]$, where the order $q$ of the field is a prime power which is fixed (in contrast to a number of existing results in the “large $q$” limit) but the degree of the polynomials involved is large. The techniques here are orthogonal to those used in recent progress towards the Chowla conjecture over the integers (e.g., in this previous paper of mine); the starting point is an algebraic observation that in certain function fields, the Möbius function behaves like a quadratic Dirichlet character along certain arithmetic progressions. In principle, this reduces problems such as Chowla’s conjecture to problems about estimating sums of Dirichlet characters, for which more is known; but the task is still far from trivial.

“Bounds for sets with no polynomial progressions“, by Sarah Peluse. This paper can be viewed as part of a larger project to obtain quantitative density Ramsey theorems of Szemerédi type. For instance, Gowers famously established a relatively good quantitative bound for Szemerédi’s theorem that all dense subsets of the integers contain arbitrarily long arithmetic progressions $a, a+r, \ldots, a+(k-1)r$. The corresponding question for polynomial progressions $a+P_1(r), \ldots, a+P_k(r)$ is considered more difficult for a number of reasons. One of them is that dilation invariance is lost; a dilation of an arithmetic progression is again an arithmetic progression, but a dilation of a polynomial progression will in general not be a polynomial progression with the same polynomials. Another issue is that the ranges of the two parameters $a$ and $r$ are now at different scales. Peluse gets around these difficulties in the case when all the polynomials $P_1, \ldots, P_k$ have distinct degrees, which is in some sense the opposite case to that considered by Gowers (in particular, she avoids the need to obtain quantitative inverse theorems for high order Gowers norms, which were recently obtained in the integer setting by Manners, but with bounds that are probably not strong enough for the bounds in Peluse’s results, due to a degree lowering argument that is available in this case). To resolve the first difficulty one has to make all the estimates rather uniform in the coefficients of the polynomials $P_1, \ldots, P_k$, so that one can still run a density increment argument efficiently. To resolve the second difficulty one needs to find a quantitative concatenation theorem for Gowers uniformity norms. Many of these ideas were developed in previous papers of Peluse and Peluse-Prendiville in simpler settings.

“On blow up for the energy super critical defocusing non linear Schrödinger equations“, by Frank Merle, Pierre Raphael, Igor Rodnianski, and Jeremie Szeftel. This paper (when combined with two companion papers) resolves a long-standing problem as to whether finite time blowup occurs for the defocusing supercritical nonlinear Schrödinger equation (at least in certain dimensions and nonlinearities). I had a previous paper establishing a result like this if one “cheated” by replacing the nonlinear Schrodinger equation by a system of such equations, but remarkably they are able to tackle the original equation itself without any such cheating. Given the very analogous situation with Navier-Stokes, where again one can create finite time blowup by “cheating” and modifying the equation, it does raise hope that finite time blowup for the incompressible Navier-Stokes and Euler equations can be established… In fact the connection may not just be at the level of analogy; a surprising key ingredient in the proofs here is the observation that a certain blowup ansatz for the nonlinear Schrodinger equation is governed by solutions to the (compressible) Euler equation, and finite time blowup examples for the latter can be used to construct finite time blowup examples for the former.

I'm looking more seriously at trying to do some popularly accessible writing about condensed matter. I have a number of ideas about what should be included in such a work, but I'm always interested in other people's thoughts on this. Suggestions?

This time I’d like to repackage some of the results in Part 11 in a prettier way. I’ll describe the groupoid of ‘finite sets equipped with a permutation’ in terms of Young diagrams and cyclic groups. Taking groupoid cardinalities, this description will give a well-known formula for the probability that a random permutation belongs to any given conjugacy class!

First, $\mathsf{Perm}$ stands for the groupoid of finite sets equipped with a permutation. Explicitly:

an object $(X,\sigma)$ of $\mathsf{Perm}$ is a finite set $X$ with a bijection $\sigma \colon X \to X$;

a morphism $f \colon (X,\sigma) \to (X',\sigma')$ is a bijection $f \colon X \to X'$ such that $\sigma' = f \sigma f^{-1}$.

Second, $Y$ stands for the set of Young diagrams. A Young diagram looks like this:

but we will think of Young diagrams as functions $y \colon \mathbb{N}^+ \to \mathbb{N}$ that vanish at all but finitely many points. The idea is that a Young diagram $y$ has $y(k)$ columns of length $k$ for each $k = 1, 2, 3, \dots$. For example, the Young diagram above has $y(1) = 1, y(2) = 3, y(3) = 1$ and $y(n) = 0$ for all other $n$.
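
In this encoding a Young diagram is a finitely supported function, so its number of boxes is $\sum_k k \, y(k)$; for the example above that is $1 \cdot 1 + 2 \cdot 3 + 3 \cdot 1 = 10$. A tiny Python sketch (the representation as a dict is my own convenience):

```python
# A Young diagram as a finitely supported function y : N+ -> N, stored as a
# dict whose value at k is the number of columns of length k.

def boxes(y):
    """Total number of boxes: sum over k of k * y(k)."""
    return sum(k * yk for k, yk in y.items())

y = {1: 1, 2: 3, 3: 1}   # y(k) = 0 for all other k, as in the example above
n = boxes(y)             # 1*1 + 2*3 + 3*1 = 10
```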

Third, $\mathsf{B}(G)$ stands for the one-object groupoid corresponding to the group $G$.

Fourth, for any category $\mathsf{C}$,

$\frac{\mathsf{C}^k}{k!}$

stands for the $k$th symmetrized power of $\mathsf{C}$. This is easiest to understand if we recall that the free symmetric monoidal category on $\mathsf{C}$, say $\mathsf{S}(\mathsf{C})$, has a description as

$\mathsf{S}(\mathsf{C}) \simeq \sum_{k \ge 0} \frac{\mathsf{C}^k}{k!}$

where an object of $\mathsf{C}^k/k!$ is a $k$-tuple $(c_1, \dots, c_k)$ of objects of $\mathsf{C}$ and a morphism is a $k$-tuple $(f_1, \dots, f_k)$ of morphisms in $\mathsf{C}$ together with a permutation $\sigma \in S_k$. The morphisms are composed in a manner familiar from the ‘wreath product’ of groups. Indeed, if $G$ is a group and $\mathsf{B}(G)$ is the corresponding one-object groupoid, we have

$\frac{\mathsf{B}(G)^k}{k!} \simeq \mathsf{B}(G \wr S_k)$

where $G \wr S_k = G^k \rtimes S_k$ is the wreath product.

Theorem. There is an equivalence of groupoids

$\mathsf{Perm} \simeq \sum_{y \in Y} \prod_{k = 1}^\infty \frac{\mathsf{B}(\mathbb{Z}/k)^{y(k)}}{y(k)!}$

Proof. First note that $\mathsf{Perm}$ is equivalent to its full subcategory where we use one finite set with each cardinality. It is thus equivalent to the groupoid where

an object is a natural number $n$ and an element $\sigma \in S_n$,

a morphism $f \colon (n,\sigma) \to (n, \sigma')$ is a permutation $f \in S_n$ such that $\sigma' = f \sigma f^{-1}$.

Thus, isomorphism classes of objects in $\mathsf{Perm}$ correspond to conjugacy classes of permutations. A conjugacy class of permutations is classified by its number of cycles of each length, and thus by a Young diagram $y \colon \mathbb{N}^+ \to \mathbb{N}$ saying that there are $y(k)$ cycles of length $k$ for each $k = 1, 2, 3, \dots$.

In short, if we use $\pi_0(G)$ to stand for the set of isomorphism classes of objects of the groupoid $G$, we have established an isomorphism

$\pi_0(\mathsf{Perm}) \cong Y$

where $Y$ is the set of Young diagrams. The groupoid $\mathsf{Perm}$ is thus equivalent to a coproduct of connected groupoids, one for each Young diagram:

$\mathsf{Perm} \simeq \sum_{y \in Y} \mathsf{Perm}_y$

By taking a skeleton we can assume each groupoid $\mathsf{Perm}_y$ has one object, namely $(n,\sigma)$ where $\sigma \in S_n$ is a chosen permutation with $y(k)$ cycles of length $k$ for each $k = 1, 2, 3, \dots$. The automorphisms of this object are then permutations $f \in S_n$ with $\sigma = f \sigma f^{-1}$.

In short, $\mathsf{Perm}_y$ is the one-object groupoid corresponding to the centralizer of $\sigma \in S_n$, where $\sigma$ is any permutation with $y(k)$ cycles of length $k$ for all $k$.

We can choose $\sigma$ to act on the boxes of the Young diagram $y$, cyclically permuting the entries in each column in such a way that the first entry in each column is mapped to the second, the second is mapped to the third, and so on, with the last entry being mapped to the first. Any element of the centralizer of $\sigma$ thus consists of a permutation of the columns, mapping each column to some other column of the same height, followed by an arbitrary cyclic permutation of the entries in each column. It follows that the centralizer is isomorphic to

$\prod_{k = 1}^\infty (\mathbb{Z}/k) \wr S_{y(k)}$

Now let’s see how this result lets us compute the probability that a random permutation of an $n$-element set lies in any given conjugacy class. The conjugacy classes in $S_n$ correspond to Young diagrams $y$ with $n$ boxes. For each such $y$ we will compute the probability that a random element of $S_n$ lies in the corresponding conjugacy class. Let’s call this probability $p_y$.

In general, the probability that a randomly chosen element of a finite group $G$ lies in some conjugacy class $K$ is $|K|/|G|$. But $K \cong G/C(k)$ where $C(k)$ is the centralizer of some element $k \in K$. Thus, the probability in question equals $1/|C(k)|$.
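
The identity $|K| = |G|/|C(k)|$ is easy to verify by brute force in a small example. Here is a quick Python sketch in $S_3$ (the tuple-based permutation helpers are mine, purely for illustration):

```python
# Brute-force check of |K| = |G| / |C(k)| for a conjugacy class K of S_3,
# with permutations represented as tuples.

from itertools import permutations

def compose(f, g):
    """(f o g)(i) = f(g(i))."""
    return tuple(f[g[i]] for i in range(len(f)))

def inverse(f):
    inv = [0] * len(f)
    for i, fi in enumerate(f):
        inv[fi] = i
    return tuple(inv)

G = list(permutations(range(3)))
k = (1, 0, 2)                                          # a transposition
K = {compose(compose(g, k), inverse(g)) for g in G}    # conjugacy class of k
C = [g for g in G if compose(g, k) == compose(k, g)]   # centralizer of k
```

Here $K$ is the class of transpositions ($|K| = 3$), $C$ has order $2$, and indeed $3 = 6/2$.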

Recall that $\mathsf{Perm}_y$ is the one-object groupoid corresponding to the centralizer $C(\sigma)$ of some $\sigma \in S_n$ whose conjugacy class corresponds to the Young diagram $y$. The cardinality of a one-object groupoid is the reciprocal of the cardinality of the corresponding group, so

$|\mathsf{Perm}_y| = \frac{1}{|C(\sigma)|}$

It follows that

$p_y = |\mathsf{Perm}_y|$

In other words, the probability we are trying to compute is the cardinality of a groupoid we have already studied! We saw in the proof of the Theorem that

$\mathsf{Perm}_y \simeq \prod_{k = 1}^\infty \frac{\mathsf{B}(\mathbb{Z}/k)^{y(k)}}{y(k)!}$

and taking cardinalities of both sides yields the following.

Theorem. The probability $p_y$ that a random permutation of an $n$-element set has $y(k)$ cycles of length $k$ for all $k = 1, 2, 3, \dots$ is given by

$p_y = \prod_{k = 1}^\infty \frac{1}{k^{y(k)} \, y(k)!}$

The theorem is easy to prove, so the point is just that this probability is the cardinality of a naturally defined groupoid, and a similar formula holds at the level of groupoids:

$\mathsf{Perm}_y \simeq \prod_{k = 1}^\infty \frac{\mathsf{B}(\mathbb{Z}/k)^{y(k)}}{y(k)!}$
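
The probability formula is concrete enough to check by exhaustive enumeration. Here is a Python sketch (helper names are mine) that enumerates all of $S_4$, computes cycle types, and compares the empirical frequency of a class with $\prod_k 1/(k^{y(k)}\, y(k)!)$:

```python
# Brute-force check of p_y = prod_k 1/(k^y(k) * y(k)!) over all of S_4.

from collections import Counter
from itertools import permutations
from math import factorial, prod

def cycle_type(p):
    """Counter sending each cycle length k to the number of k-cycles of p."""
    seen, lengths = set(), []
    for i in range(len(p)):
        if i not in seen:
            j, length = i, 0
            while j not in seen:
                seen.add(j)
                j = p[j]
                length += 1
            lengths.append(length)
    return Counter(lengths)

def p_formula(y):
    """The theorem's formula for p_y, with y a dict from k to y(k)."""
    return 1 / prod(k ** c * factorial(c) for k, c in y.items())

perms = list(permutations(range(4)))
counts = Counter(frozenset(cycle_type(p).items()) for p in perms)

y = {4: 1}                      # the class of a single 4-cycle
empirical = counts[frozenset(y.items())] / len(perms)
```

For the 4-cycles both numbers come out to $6/24 = 1/4 = 1/(4^1 \cdot 1!)$, and the same agreement holds for every cycle type.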

Part 0 — What’s the average length of the longest cycle in a random permutation of an $n$-element set?

Part 1 — What is the probability that a randomly chosen permutation of an $n$-element set has exactly $k$ fixed points?

Part 2 — What is the probability that the shortest cycle in a randomly chosen permutation of an $n$-element set has length greater than $k$?

Part 3 — A large collection of questions about random permutations, with answers.

Part 4 — What is the probability that a randomly chosen permutation of an $n$-element set has a cycle of length greater than $n/2$?

Part 5 — What is the average length of a cycle in a randomly chosen permutation of an $n$-element set?

Part 6 — What is the expected number of cycles of length $k$ in a randomly chosen permutation of an $n$-element set?

Part 7 — How is the distribution of the number of cycles of length $k$ in a random permutation related to a Poisson distribution?

Part 8 — What’s the $n$th moment of a Poisson distribution?

Part 9 — If we treat the number of cycles of length $k$ in a random permutation of an $n$-element set as a random variable, what do the moments of this random variable approach as $n \to \infty$?

Part 10 — How to compute statistics of random permutations using groupoid cardinalities.

Part 11 — How to prove the Cycle Length Lemma, a fundamental result on random permutations, using groupoid cardinalities.

Part 12 — How to write the groupoid of finite sets equipped with a permutation as a sum over Young diagrams, and how to use this to compute the probability that a random permutation has given cycle lengths.

Let's try this again! It's been suggested that if you go to send someone a "Happy New Year" message and notice that the last one you sent was the same thing last year, then maybe you don't need to send the message. Well, I'm going to try to prove that wrong and once again kick-start this blog.

My writing in 2018 was hamstrung partly by illness. I had much better excuses last year, which was actually much worse. But this blog is called "Real Self Energy" and I am always trying to look on the bright side and see the positives, so if we look at what I was looking forward to last year in physics, in fact this year we could say almost exactly the same things:

We're still waiting for the results on muon \( g-2 \). At the French intensity frontier GDR meeting in November we had a great talk by Marc Knecht on this and on the theory challenges ahead, which really convinced me that theorists do have a good handle on the calculation and we are just waiting for the experiments to have their final say.

The B-meson anomalies lost a bit of their lustre with an update that just preserved the status quo: the measurement of \( R(K) \equiv \frac{\mathrm{BR}(B \rightarrow K \mu \mu)}{\mathrm{BR}(B \rightarrow K e e)}\) by LHCb moved closer to the Standard Model value while the uncertainty shrank, keeping the deviation about the same, while a preliminary measurement by Belle of \( R(K^*) \equiv \frac{\mathrm{BR}(B \rightarrow K^* \mu \mu)}{\mathrm{BR}(B \rightarrow K^* e e)} \) was consistent with the SM value, but with much poorer uncertainty than the (anomalous) values from LHCb. Again we discussed this extensively at the GDR in November and there is still a lot of excitement and anticipation that looks set to continue for some time, with many experiments set to report data over the coming years. A good reference of the current status is this paper.

The CMS and ATLAS collaborations seem to be taking their time with analyses of the full dataset of Run 2, so we are still waiting for lots of new results to come out. From the theory perspective, I have recently been involved in putting collider limits on new theory models ("recasting") and every time new experimental results come out, there is a lag before they are implemented in the various theory tools. One of the interesting questions for me will be which theory tool emerges as the winner in the long run from this effort, or if the experiments will first make their analyses completely unreproducible (e.g. by moving from cut-based analyses to neural networks)!

Regarding the Higgs, the mass has already been experimentally determined to an accuracy much better than we "need" (compared e.g. to the top quark), and the accuracy of the coupling measurements will only be incrementally improved with more data. There has been a lot of interest in the production cross-section, for both single and double Higgs events, where I learnt recently that the prediction in the Standard Model is now more accurately known than it can ever be determined at the LHC. This is an interesting effort that one of my LPTHE colleagues got into last year: here and here.

None of this mentions dark matter, neutrinos or axions, where there are lots of interesting things going on. And I was going to mention last year's politics, but I have run out of time for today, and, since that's a rather personal and bleak post, I will leave it for later!

This is a quick, off-the-cuff, conceptual question. Hopefully, it has an
easy answer.

Often in algebra, we want to quotient out by a set of elements that we
regard as trivial or degenerate. That’s almost a tautology: any time we
take a quotient, the elements quotiented out are by definition treated as
negligible. And often the situation is mathematically trivial too, as when
we quotient by the kernel of a homomorphism.

But some examples of quotienting by degenerates are slightly more subtle.
The two I have in mind are:

the definition of exterior power;

the definition of normalized chain complex.

I’d like to know whether there’s a thread connecting the two.

Let me now explain those two examples in a way that makes them look
somewhat similar. I’ll start with exterior powers.

Take a vector space $X$ over some field, and take an integer $r \geq
0$. A multilinear map

$f: X^r \to Y$

to another vector space $Y$ is said to be alternating if $f(x_1, \ldots, x_r) = 0$ whenever $x_1, \ldots, x_r$ are linearly dependent.

The $r$th exterior power $\bigwedge^r X$ is the codomain of the
universal alternating map out of $X^r$.

That’s a characterization of $\bigwedge^r X$ by a universal property, but
it’s not an actual construction. It’s constructed like this: the tensor power $X^{\otimes r}$ has a linear subspace $D_r$ generated by

$\{\, x_1 \otimes \cdots \otimes x_r \ :\ x_1, \ldots, x_r \ \text{linearly dependent} \,\}$

and then we put $\bigwedge^r X = X^{\otimes r}/D_r$. The letter $D$ is chosen to stand for either “degenerate” or “dependent”, linear dependence being a kind of degeneracy condition. So, $\bigwedge^r X$ is $X^{\otimes r}$ quotiented out by its degenerate part.

Incidentally, the universal property and construction of the exterior power
aren’t usually phrased this way. More often, a multilinear map $f: X^r \to
Y$ is defined to be “alternating” if $f(x_1, \ldots, x_r) = 0$ whenever
$x_i = x_j$ for some $i \neq j$. But this is equivalent. Similarly, $D_r$
can equivalently be defined as the subspace of $X^{\otimes r}$ generated
by the elements $x_1 \otimes \cdots \otimes x_r$ where $x_i = x_j$ for some
$i \neq j$.
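
To see why the two formulations agree, here is a sketch of the standard argument (my own summary, not from the post):

```latex
% One direction: a tuple with $x_i = x_j$ for some $i \neq j$ is linearly
% dependent, so if $f$ kills all linearly dependent tuples it kills all
% repeated-entry tuples.
%
% Converse: suppose $f$ kills repeated-entry tuples and $x_1, \ldots, x_r$
% are linearly dependent, with (say) $x_r = \sum_{i < r} \lambda_i x_i$.
% Multilinearity in the last argument gives
\[
  f(x_1, \ldots, x_{r-1}, x_r)
    = \sum_{i < r} \lambda_i \, f(x_1, \ldots, x_{r-1}, x_i)
    = 0,
\]
% since each summand repeats the argument $x_i$.
```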

Personally, my comfort with exterior algebra took a leap forward when I learned
that the definition of exterior power could be expressed in terms of linear
dependence, as opposed to the standard presentation via repeated arguments. I
never felt entirely motivated by that standard approach, despite the volume
interpretation and the nearly-equivalent condition that swapping arguments
changes the sign. But the condition that degenerate terms get sent to
zero feels more natural to me, whatever “natural” means.

Now let’s do normalized chains. I guess I could say this in the context of
an arbitrary abelian category, but I’ll just say it for modules over
some commutative ring.

Let $X$ be a simplicial module. It gives rise to a chain complex $C(X)$ of
modules (the unnormalized complex of $X$). The $r$th module $C_r(X)$ is
just $X_r$, and the boundary maps in $C(X)$ are the alternating sums $\sum (-1)^i d_i$ of the face maps of $X$, as usual.

Now, some of the elements of $X_r$ are “degenerate”, in the sense that they
can be obtained from lower-dimensional elements. Specifically, let $D_r(X)$
be the submodule of $X_r$ generated by the images of the degeneracy maps $s_i \colon X_{r-1} \to X_r$.

A little calculation shows that $D(X) = (D_r(X))_{r \geq 0}$ is a
subcomplex of the unnormalized complex $C(X)$. So we can form the quotient
complex $N(X) = C(X)/D(X)$, which by definition is the normalized
complex of the simplicial module $X$.

Again, this isn’t quite the usual presentation. An important fact is that
$N(X)$ is not just a quotient of $C(X)$, but a direct summand. In
particular, it can also be viewed as a subcomplex of $C(X)$. There are two
dual ways to view it thus: $N_r(X)$ can be seen as the intersection
$\bigcap_{i = 0}^{r - 1} \ker(d_i)$ of the kernels of all but the last face
map, or dually as the intersection $\bigcap_{i = 1}^r \ker(d_i)$ of the kernels
of all but the first face map. Most often, $N(X)$ is defined to be one
of these two subcomplexes. But to me it seems more natural to view it
primarily as $C(X)/D(X)$: the quotient of $C(X)$ by its degenerate part.

So, those are the two situations I wanted to describe. They seem moderately
similar to me: both involve a quotient by a subobject generated by
degenerate elements; in both, the degenerate elements don’t actually form
a subobject themselves (the words “generated by” are crucial); both
involve an indexing over the natural numbers $r$. So I’m wondering whether
the two situations are related. Maybe one is a special case of the other,
or maybe there’s a common generalization. Even if not, perhaps there’s some
good point of view on “quotient out the degenerate” constructions of this
kind, including my two examples and maybe others. Can anyone shed any light?

I do this, in part, because it lets me stay one step ahead. I can do more with those toy models, answer more complicated questions with greater precision, than I can for the real world. I can do more ambitious calculations, and still get an answer. And by doing those calculations, I can start to anticipate problems that will crop up for the real world too. Even if we can’t do a calculation yet for the real world, if it requires too much precision or too many particles, we can still study it in a toy model. Then when we’re ready to do those calculations in the real world, we know better what to expect. The toy model will have shown us some of the key challenges, and how to tackle them.

There’s a risk, working with simpler toy models. The risk is that their simplicity misleads you. When you solve a problem in a toy model, could you solve it only because the toy model is easy? Or would a similar solution work in the real world? What features of the toy model did you need, and which are extra?

The only way around this risk is to be careful. You have to keep track of how your toy model differs from the real world. You must keep in mind difficulties that come up on the road to reality: the twists and turns and potholes that real-world theories will give you. You can’t plan around all of them; that’s why you’re working with a toy model in the first place. But for a few key ones, you should keep your eye on the horizon. You should keep in mind that, eventually, the simplifications of the toy model will go away. And you should have ideas, perhaps not full plans but at least ideas, for how to handle some of those difficulties. If you put the work in, you stand a good chance of building something that’s useful, not just for toy models, but for explaining the real world.

What hurts you is not what you don't know, but those mistaken assumptions which "everyone knows to be true". [The following text is courtesy Andras Kovacs - T.D.]

I’ve been asked, innumerable times, to explain quantum entanglement to some lay audience. Most of the elementary explanations that I have seen (heck, maybe all of them) fail to draw any meaningful distinction between “entanglement” and mere “(classical) correlation.”

This drives me up the wall, so each time I am asked, I strive to come up with an elementary explanation of the difference. Rather than keep reinventing the wheel, let me herewith record my latest attempt.

“Entanglement” is a bit tricky to explain, versus “correlation” — which has a perfectly classical interpretation.

Say I tear a page of paper in two, crumple up the two pieces into balls and (at random) hand one to Adam and the other to Betty. They then go their separate ways and — sometime later — Adam unfolds his piece of paper. There’s a 50% chance that he got the top half, and 50% that he got the bottom half. But if he got the top half, we know for certain that Betty got the bottom half (and vice versa).

That’s correlation.
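A minimal simulation of the torn-page setup (the function and variable names here are mine, purely for illustration):

```python
import random

def tear_and_hand_out(trials=100_000):
    """Simulate the torn page: one half goes to Adam, the other to Betty,
    assigned at random on each trial."""
    adam_top = 0
    matches = 0  # trials where Adam and Betty somehow got the same half
    for _ in range(trials):
        adam = random.choice(["top", "bottom"])
        betty = "bottom" if adam == "top" else "top"
        adam_top += (adam == "top")
        matches += (adam == betty)
    return adam_top / trials, matches

p_top, matches = tear_and_hand_out()
# p_top comes out near 0.5; matches is exactly 0.
```

Each outcome is random on its own, yet the two are perfectly anticorrelated, with nothing quantum involved.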

In this regard, the entangled state behaves exactly the same way. What distinguishes the entangled state from the merely correlated is something that doesn’t have a classical analogue. So let me shift from pieces of paper to photons.

You’re probably familiar with the polaroid filters in good sunglasses. They absorb light polarized along the horizontal axis, but transmit light polarized along the vertical axis.

Say, instead of crumpled pieces of paper, I send Adam and Betty a pair of photons.

In the correlated state, one photon is polarized horizontally, and one photon is polarized vertically, and there’s a 50% chance that Adam got the first while Betty got the second and a 50% chance that it’s the other way around.

Adam and Betty send their photons through polaroid filters, both aligned vertically. If Adam’s photon makes it through the filter, we can be certain that Betty’s gets absorbed and vice versa. Same is true if they both align their filters horizontally.

Say Adam aligns his filter horizontally, while Betty aligns hers vertically. Then either both photons make it through (with 50% probability) or both get absorbed (also with 50% probability).

All of the above statements are also true in the entangled state.

The tricky thing, the thing that makes the entangled state different from the correlated state, is what happens if both Adam and Betty align their filters at a 45° angle. Now there’s a 50% chance that Adam’s photon makes it through his filter, and a 50% chance that Betty’s photon makes it through her filter.

(You can check this yourself, if you’re willing to sacrifice an old pair of sunglasses. Polarize a beam of light with one sunglass lens, and view it through the other sunglass lens. As you rotate the second lens, the intensity varies from 100% (when the lenses are aligned) to 0 (when they are at 90°). The intensity is 50% when the second lens is at 45°.)
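The intensity curve in that sunglasses experiment is Malus’s law, $I = I_0 \cos^2\theta$. A quick numerical check (the function name is my own):

```python
import math

def transmitted_fraction(theta_deg):
    """Malus's law: fraction of already-polarized light transmitted
    through a second polarizer rotated theta degrees from the first."""
    return math.cos(math.radians(theta_deg)) ** 2

for angle in (0, 45, 90):
    print(angle, transmitted_fraction(angle))
# 0° gives 1.0, 45° gives 0.5, 90° gives 0.0 (up to floating-point rounding).
```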

So what is the probability that both Adam and Betty’s photons make it through? Well, if there’s a 50% chance that his made it through and a 50% chance that hers made it through, then you might surmise that there’s a 25% chance that both made it through.

That’s indeed the correct answer in the correlated state.

In fact, in the correlated state, each of the 4 possible outcomes (both photons made it through, Adam’s made it through but Betty’s got absorbed, Adam’s got absorbed but Betty’s made it through or both got absorbed) has a 25% chance of taking place.

But, in the entangled state, things are different.

In the entangled state, the probability that both photons made it through is 50% – the same as the probability that one made it through. In other words, if Adam’s photon made it through the 45° filter, then we can be certain that Betty’s made it through. And if Adam’s was absorbed, so was Betty’s. There’s zero chance that one of their photons made it through while the other got absorbed.
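For readers who want to see the numbers fall out, here is a sketch in NumPy. The encodings are my own choices, consistent with the story above: the correlated state is a 50/50 classical mixture of |HV⟩ and |VH⟩, and the entangled state is (|HH⟩ + |VV⟩)/√2.

```python
import numpy as np

H = np.array([1.0, 0.0])      # horizontal polarization
V = np.array([0.0, 1.0])      # vertical polarization
D = (H + V) / np.sqrt(2)      # +45° polarization ("passes" a 45° filter)
A = (H - V) / np.sqrt(2)      # -45° polarization ("absorbed" by a 45° filter)

def ket(a, b):
    """Two-photon product state |a⟩⊗|b⟩ (Adam's photon, then Betty's)."""
    return np.kron(a, b)

# Correlated state: 50/50 classical mixture of |HV⟩ and |VH⟩.
rho_corr = 0.5 * np.outer(ket(H, V), ket(H, V)) \
         + 0.5 * np.outer(ket(V, H), ket(V, H))

# Entangled state: (|HH⟩ + |VV⟩)/√2.
psi = (ket(H, H) + ket(V, V)) / np.sqrt(2)
rho_ent = np.outer(psi, psi)

def prob(rho, a, b):
    """Probability that Adam's photon is found in state a and Betty's in b."""
    proj = np.outer(ket(a, b), ket(a, b))
    return float(np.trace(rho @ proj))

print("correlated, both pass 45°: ", prob(rho_corr, D, D))  # 0.25
print("entangled,  both pass 45°: ", prob(rho_ent, D, D))   # 0.5
print("entangled,  only one passes:", prob(rho_ent, D, A))  # 0.0
```

At 45° the mixture gives each of the four outcomes probability 1/4, while the entangled state gives “both pass” and “both absorbed” probability 1/2 each, and never splits the pair.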

Unfortunately, while it’s fairly easy to create the correlated state with classical tools (polaroid filters, half-silvered mirrors, …), creating the entangled state requires some quantum mechanical ingredients. So you’ll just have to believe me that quantum mechanics allows for a state of two photons with all of the aforementioned properties.

Sorry if this explanation was a bit convoluted; I told you that entanglement is subtle…

Happy new year. As we head into 2020, here are a few links I've been meaning to point out:

This paper is a topical review of high-throughput (sometimes called combinatorial) approaches to searching for new superconductors. The basic concept is simple enough: co-deposit multiple different elements in a way that deliberately produces compositional gradients across the target substrate. This can be done via geometry of deposition, or with stencils that move during the deposition process. Then characterize the local properties in an efficient way across the various compositional gradients, looking for the target properties you want (e.g., maximum superconducting transition temperature). Ideally, you combine this with high-throughput structural characterization and even annealing or other post-deposition treatment. Doing all of this well in practice is a craft.

Calling back to my post on this topic, Scientific American has an article about wealth distribution based on statistical mechanics-like models of economies. It's hard for me to believe that some of these insights are really "new" - seems like many of these models could have been examined decades ago....

This is impressive. Jason Petta's group at Princeton has demonstrated controlled entanglement between single-electron spins in Si/SiGe gate-defined quantum dots separated by 4 mm. That may not sound all that exciting; one could use photons to entangle atoms separated by km, as has been done with optical fiber. However, doing this on-chip using engineered quantum dots (with gates for tunable control) in an arrangement that is in principle scalable via microfabrication techniques is a major achievement.

Just in case you needed another demonstration that correlated materials like the copper oxide superconductors are complicated, here you go. These investigators use an approach based on density functional theory (see here, here, and here), and end up worrying about energetic competition between 26 different electronic/magnetic phases. Regardless of the robustness of their specific conclusions, just that tells you the inherent challenge of those systems: Many possible ordered states all with very similar energy scales.

That theory, now, that Odysseus never really forgot Circe; and that Esther was in love with Haman, and decoyed him to the banquet with Ahasuerus just for the sake of once having him near her and hearing him speak; and that Dante, perhaps, if he could have been brought to book, would have had to confess to caring a good deal more for the pietosa donna of the window than for a long-dead Beatrice — well, you know, it tallies wonderfully with the inconsequences and surprises that one is always discovering under the superficial fitnesses of life.

There are limits to this. These principles pick out broad traits of theories: gravity versus the strong force versus the Higgs boson. As far as we know they don’t separate more closely related forces, like the strong nuclear force and the weak nuclear force. (Originally, the Quanta article accidentally made it sound like we know why there are four fundamental forces: we don’t, and the article’s phrasing was corrected.) More generally, a bootstrap method isn’t going to tell you which principles are the right ones. For any set of principles, you can always ask “why?”

With that in mind, why would you want to bootstrap?

First, it can make your life simpler. Those simple physical principles may be clear at the end, but they aren’t always obvious at the start of a calculation. If you don’t make good use of them, you might find you’re calculating many things that violate those principles, things that in the end all add up to zero. Bootstrapping can let you skip that part of the calculation, and sometimes go straight to the answer.

Second, it can suggest possibilities you hadn’t considered. Sometimes, your simple physical principles don’t select a unique theory. Some of the options will be theories you’ve heard of, but some might be theories that never would have come up, or even theories that are entirely new. Trying to understand the new theories, to see whether they make sense and are useful, can lead to discovering new principles as well.

Finally, even if you don’t know which principles are the right ones, some principles are better than others. If there is an ultimate theory that describes the real world, it can’t be logically inconsistent. That’s a start, but it’s quite a weak requirement. There are principles that aren’t required by logic itself, but that still seem important in making the world “make sense”. Often, we appreciate these principles only after we’ve seen them at work in the real world. The best example I can think of is relativity: while Newtonian mechanics is logically consistent, it requires a preferred reference frame, a fixed notion for which things are moving and which things are still. This seemed reasonable for a long time, but now that we understand relativity the idea of a preferred reference frame seems like it should have been obviously wrong. It introduces something arbitrary into the laws of the universe, a “why is it that way?” question that doesn’t have an answer. That doesn’t mean it’s logically inconsistent, or impossible, but it does make it suspect in a way other ideas aren’t. Part of the hope of these kinds of bootstrap methods is that they uncover principles like that, principles that aren’t mandatory but that are still in some sense “obvious”. Hopefully, enough principles like that really do specify the laws of physics. And if they don’t, we’ll at least have learned how to calculate better.

Yay! I’m now a Fellow of the ACM. Along with my fellow new inductee Peter Shor, who I hear is a real up-and-comer in the quantum computing field. I will seek to use this awesome responsibility to steer the ACM along the path of good rather than evil.

Also, last week, I attended the Q2B conference in San Jose, where a central theme was the outlook for practical quantum computing in the wake of the first clear demonstration of quantum computational supremacy. Thanks to the folks at QC Ware for organizing a fun conference (full disclosure: I’m QC Ware’s Chief Scientific Advisor). I’ll have more to say about the actual scientific things discussed at Q2B in future posts.

None of that is why you’re here, though. You’re here because of the battle over “quantum supremacy.”

A week ago, my good friend and collaborator Zach Weinersmith, of SMBC Comics, put out a cartoon with a dark-curly-haired scientist named “Dr. Aaronson,” who’s revealed on a hot mic to be an evil “quantum supremacist.” Apparently a rush job, this cartoon is far from Zach’s finest work. For one thing, if the character is supposed to be me, why not draw him as me, and if he isn’t, why call him “Dr. Aaronson”? In any case, I learned from talking to Zach that the cartoon’s timing was purely coincidental: Zach didn’t even realize what a hornet’s-nest he was poking with this.

Ever since John Preskill coined it in 2012, “quantum supremacy” has been an awkward term. Much as I admire John Preskill’s wisdom, brilliance, generosity, and good sense, in physics as in everything else—yeah, “quantum supremacy” is not a term I would’ve coined, and it’s certainly not a hill I’d choose to die on. Once it had gained common currency, though, I sort of took a liking to it, mostly because I realized that I could mine it for dark one-liners in my talks.

The thinking was: even as white supremacy was making its horrific resurgence in the US and around the world, here we were, physicists and computer scientists and mathematicians of varied skin tones and accents and genders, coming together to pursue a different and better kind of supremacy—a small reflection of the better world that we still believed was possible. You might say that we were reclaiming the word “supremacy”—which, after all, just means a state of being supreme—for something non-sexist and non-racist and inclusive and good.

In the world of 2019, alas, perhaps it was inevitable that people wouldn’t leave things there.

My first intimation came a month ago, when Leonie Mueck—someone who I’d gotten to know and like when she was an editor at Nature handling quantum information papers—emailed me about her view that our community should abandon the term “quantum supremacy,” because of its potential to make women and minorities uncomfortable in our field. She advocated using “quantum advantage” instead.

So I sent Leonie back a friendly reply, explaining that, as the father of a math-loving 6-year-old girl, I understood and shared her concerns—but also, that I didn’t know an alternative term that really worked.

See, it’s like this. Preskill meant “quantum supremacy” to refer to a momentous event that seemed likely to arrive in a matter of years: namely, the moment when programmable quantum computers would first outpace the ability of the fastest classical supercomputers on earth, running the fastest algorithms known by humans, to simulate what the quantum computers were doing (at least on special, contrived problems). And … “the historic milestone of quantum advantage”? It just doesn’t sound right. Plus, as many others pointed out, the term “quantum advantage” is already used to refer to … well, quantum advantages, which might fall well short of supremacy.

But one could go further. Suppose we did switch to “quantum advantage.” Couldn’t that term, too, remind vulnerable people about the unfair advantages that some groups have over others? Indeed, while “advantage” is certainly subtler than “supremacy,” couldn’t that make it all the more insidious, and therefore dangerous?

Oblivious though I sometimes am, I realized Leonie would be unhappy if I offered that, because of my wholehearted agreement, I would henceforth never again call it “quantum supremacy,” but only “quantum superiority,” “quantum dominance,” or “quantum hegemony.”

But maybe you now see the problem. What word does the English language provide to describe one thing decisively beating or being better than a different thing for some purpose, and which doesn’t have unsavory connotations?

I’ve heard “quantum ascendancy,” but that makes it sound like we’re a UFO cult—waiting to ascend, like ytterbium ions caught in a laser beam, to a vast quantum computer in the sky.

I’ve heard “quantum inimitability” (that is, inability to imitate using a classical computer), but who can pronounce that?

Yesterday, my brilliant former student Ewin Tang (yes, that one) relayed to me a suggestion by Kevin Tian: “quantum eclipse” (that is, the moment when quantum computers first eclipse classical ones for some task). But would one want to speak of a “quantum eclipse experiment”? And shouldn’t we expect that, the cuter and cleverer the term, the harder it will be to use unironically?

In summary, while someone might think of a term so inspired that it immediately supplants “quantum supremacy” (and while I welcome suggestions), I currently regard it as an open problem.

Anyway, evidently dissatisfied with my response, last week Leonie teamed up with 13 others to publish a letter in Nature, which was originally entitled “Supremacy is for racists—use ‘quantum advantage,'” but whose title I see has now been changed to the less inflammatory “Instead of ‘supremacy’ use ‘quantum advantage.'” Leonie’s co-signatories included four of my good friends and colleagues: Alan Aspuru-Guzik, Helmut Katzgraber, Anne Broadbent, and Chris Granade (the last of whom got started in the field by helping me edit Quantum Computing Since Democritus).

(Update: Leonie pointed me to a longer list of signatories here, at their website called “quantumresponsibility.org.” A few names that might be known to Shtetl-Optimized readers are Andrew White, David Yonge-Mallo, Debbie Leung, Matt Leifer, Matthias Troyer.)

Their letter says:

The community claims that quantum supremacy is a technical term with a specified meaning. However, any technical justification for this descriptor could get swamped as it enters the public arena after the intense media coverage of the past few months.

In our view, ‘supremacy’ has overtones of violence, neocolonialism and racism through its association with ‘white supremacy’. Inherently violent language has crept into other branches of science as well — in human and robotic spaceflight, for example, terms such as ‘conquest’, ‘colonization’ and ‘settlement’ evoke the terra nullius arguments of settler colonialism and must be contextualized against ongoing issues of neocolonialism.

Instead, quantum computing should be an open arena and an inspiration for a new generation of scientists.

When I did an “Ask Me Anything” session, as the closing event at Q2B, Sarah Kaiser asked me to comment on the Nature petition. So I repeated what I’d said in my emailed response to Leonie—running through the problems with each proposed alternative term, talking about the value of reclaiming the word “supremacy,” and mostly just trying to defuse the tension by getting everyone laughing together. Sarah later tweeted that she was “really disappointed” in my response.

Then the Wall Street Journal got in on the action, with a brief editorial (warning: paywalled) mocking the Nature petition:

There it is, folks: Mankind has hit quantum wokeness. Our species, akin to Schrödinger’s cat, is simultaneously brilliant and brain-dead. We built a quantum computer and then argued about whether the write-up was linguistically racist.

Taken seriously, the renaming game will never end. First put a Sharpie to the Supremacy Clause of the U.S. Constitution, which says federal laws trump state laws. Cancel Matt Damon for his 2004 role in “The Bourne Supremacy.” Make the Air Force give up the term “air supremacy.” Tell lovers of supreme pizza to quit being so chauvinistic about their toppings. Please inform Motown legend Diana Ross that the Supremes are problematic.

The quirks of quantum mechanics, some people argue, are explained by the existence of many universes. How did we get stuck in this one?

Steven Pinker also weighed in, with a linguistically-informed tweetstorm:

This sounds like something from The Onion but actually appeared in Nature … It follows the wokified stigmatization of other innocent words, like “House Master” (now, at Harvard, Residential Dean) and “NIPS” (Neural Information Processing Society, now NeurIPS). It’s a familiar linguistic phenomenon, a lexical version of Gresham’s Law: bad meanings drive good ones out of circulation. Examples: the doomed “niggardly” (no relation to the n-word) and the original senses of “cock,” “ass,” “prick,” “pussy,” and “booty.” Still, the prissy banning of words by academics should be resisted. It dumbs down understanding of language: word meanings are conventions, not spells with magical powers, and all words have multiple senses, which are distinguished in context. Also, it makes academia a laughingstock, tars the innocent, and does nothing to combat actual racism & sexism.

Others had a stronger reaction. Curtis Yarvin, better known as Mencius Moldbug, is one of the founders of “neoreaction” (and a significant influence on Steve Bannon, Michael Anton, and other Trumpists). Regulars might remember that Yarvin argued with me in Shtetl-Optimized‘s comment section, under a post in which I denounced Trump’s travel ban and its effects on my Iranian PhD student. Since then, Yarvin has sent me many emails, which have ranged from long to extremely long, and whose message could be summarized as: “[labored breathing] Abandon your liberal Enlightenment pretensions, young Nerdwalker. Come over the Dark Side.”

After the “supremacy is for racists” letter came out in Nature, though, Yarvin sent me his shortest email ever. It was simply a link to the letter, along with the comment “I knew it would come to this.”

He meant: “What more proof do you need, young Nerdawan, that this performative wokeness is a cancer that will eventually infect everything you value—even totally apolitical research in quantum information? And by extension, that my whole worldview, which warned of this, is fundamentally correct, while your faith in liberal academia is naïve, and will be repaid only with backstabbing?”

In a subsequent email, Yarvin predicted that in two years, the whole community will be saying “quantum advantage” instead of “quantum supremacy,” and in five years I’ll be saying “quantum advantage” too. As Yarvin famously wrote: “Cthulhu may swim slowly. But he only swims left.”

So what do I really think about this epic battle for (and against) supremacy?

Truthfully, half of me just wants to switch to “quantum advantage” right now and be done with it. As I said, I know some of the signatories of the Nature letter to be smart and reasonable and kind. They don’t wish to rid the planet of everyone like me. They’re not Amanda Marcottes or Arthur Chus. Furthermore, there’s little I despise more than a meaty scientific debate devolving into a pointless semantic one, with brilliant friend after brilliant friend getting sucked into the vortex (“you too?”). I’m strongly in the Pinkerian camp, which holds that words are just arbitrary designators, devoid of the totemic power to dictate thoughts. So if friends and colleagues—even just a few of them—tell me that they find some word I use to be offensive, why not just be a mensch, apologize for any unintended hurt, switch words midsentence, and continue discussing the matter at hand?

But then the other half of me wonders: once we’ve ceded an open-ended veto over technical terms that remind anyone of anything bad, where does it stop? How do we ever certify a word as kosher? At what point do we all get to stop arguing and laugh together?

To make this worry concrete, look back at Sarah Kaiser’s Twitter thread—the one where she expresses disappointment in me. Below her tweet, someone remarks that, besides “quantum supremacy,” the word “ancilla” (as in ancilla qubit, a qubit used for intermediate computation or other auxiliary purposes) is problematic as well. Here’s Sarah’s response:

I agree, but I wanted to start by focusing on the obvious one, Its harder for them to object to just one to start with, then once they admit the logic, we can expand the list

(What would Curtis Yarvin say about that?)

You’re probably now wondering: what’s wrong with “ancilla”? Apparently, in ancient Rome, an “ancilla” was a female slave, and indeed that’s the Latin root of the English adjective “ancillary” (as in “providing support to”). I confess that I hadn’t known that—had you? Admittedly, once you do know, you might never again look at a Controlled-NOT gate—pitilessly flipping an ancilla qubit, subject only to the whims of a nearby control qubit—in quite the same way.

(Ah, but the ancilla can fight back against her controller! And she does—in the Hadamard basis.)

The thing is, if we’re gonna play this game: what about annihilation operators? Won’t those need to be … annihilated from physics?

And what about unitary matrices? Doesn’t their very name negate the multiplicity of perspectives and cultures?

What about Dirac’s oddly-named bra/ket notation, with its limitless potential for puerile jokes, about the “bra” vectors displaying their contents horizontally and so forth? (Did you smile at that, you hateful pig?)

What about daggers? Don’t we need a less violent conjugate transpose?

Not to beat a dead horse, but once you hunt for examples, you realize that the whole dictionary is shot through with domination and brutality—that you’d have to massacre the English language to take it out. There’s nothing special about math or physics in this respect.

The same half of me also thinks about my friends and colleagues who oppose claims of quantum supremacy, or even the quest for quantum supremacy, on various scientific grounds. I.e., either they don’t think that the Google team achieved what it said, or they think that the task wasn’t hard enough for classical computers, or they think that the entire goal is misguided or irrelevant or uninteresting.

Which is fine—these are precisely the arguments we should be having—except that I’ve personally seen some of my respected colleagues, while arguing for these positions, opportunistically tack on ideological objections to the term “quantum supremacy.” Just to goose up their case, I guess. And I confess that every time they did this, it made me want to keep saying “quantum supremacy” from now till the end of time—solely to deny these colleagues a cheap and unearned “victory,” one they apparently felt they couldn’t obtain on the merits alone. I realize that this is childish and irrational.

Most of all, though, the half of me that I’m talking about thinks about Curtis Yarvin and the Wall Street Journal editorial board, cackling with glee to see their worldview so dramatically confirmed—as theatrical wokeness, that self-parodying modern monstrosity, turns its gaze on (of all things) quantum computing research. More red meat to fire up the base—or at least that sliver of the base nerdy enough to care. And the left, as usual, walks right into the trap, sacrificing its credibility with the outside world to pursue a runaway virtue-signaling spiral.

The same half of me thinks: do we really want to fight racism and sexism? Then let’s work together to assemble a broad coalition that can defeat Trump. And Jair Bolsonaro, and Viktor Orbán, and all the other ghastly manifestations of humanity’s collective lizard-brain. Then, if we’re really fantasizing, we could liberalize the drug laws, and get contraception and loans and education to women in the Third World, and stop the systematic disenfranchisement of black voters, and open up the world’s richer, whiter, and higher-elevation countries to climate refugees, and protect the world’s remaining indigenous lands (those that didn’t burn to the ground this year).

In this context, the trouble with obsessing over terms like “quantum supremacy” is not merely that it diverts attention, while contributing nothing to fighting the world’s actual racism and sexism. The trouble is that the obsessions are actually harmful. For they make academics—along with progressive activists—look silly. They make people think that we must not have meant it when we talked about the existential urgency of climate change and the world’s other crises. They pump oxygen into right-wing echo chambers.

But it’s worse than ridiculous, because of the message that I fear is received by many outside the activists’ bubble. When you say stuff like “[quantum] supremacy is for racists,” what’s heard might be something more like:

“Watch your back, you disgusting supremacist. Yes, you. You claim that you mentor women and minorities, donate to good causes, try hard to confront the demons in your own character? Ha! None of that counts for anything with us. You’ll never be with-it enough to be our ally, so don’t bother trying. We’ll see to it that you’re never safe, not even in the most abstruse and apolitical fields. We’ll comb through your words—even words like ‘ancilla qubit’—looking for any that we can cast as offensive by our opaque and ever-shifting standards. And once we find some, we’ll have it within our power to end your career, and you’ll be reduced to groveling that we don’t. Remember those popular kids who bullied you in second grade, giving you nightmares of social ostracism that persist to this day? We plan to achieve what even those bullies couldn’t: to shame you with the full backing of the modern world’s moral code. See, we’re the good guys of this story. It’s goodness itself that’s branding you as racist scum.”

In short, I claim that the message—not the message intended, of course, by anyone other than a Chu or a Marcotte or a SneerClubber, but the message received—is basically a Trump campaign ad. I claim further that our civilization’s current self-inflicted catastrophe will end—i.e., the believers in science and reason and progress and rule of law will claw their way back to power—when, and only when, a generation of activists emerges that understands these dynamics as well as Barack Obama did.

Wouldn’t it be awesome if, five years from now, I could say to Curtis Yarvin: you were wrong? If I could say to him: my colleagues and I still use the term ‘quantum supremacy’ whenever we care to, and none of us have been cancelled or ostracized for it—so maybe you should revisit your paranoid theories about Cthulhu and the Cathedral and so forth? If I could say: quantum computing researchers now have bigger fish to fry than arguments over words—like moving beyond quantum supremacy to the first useful quantum simulations, as well as the race for scalability and fault-tolerance? And even: progressive activists now have bigger fish to fry too—like retaking actual power all over the world?

Anyway, as I said, that’s how half of me feels. The other half is ready to switch to “quantum advantage” or any other serviceable term and get back to doing science.

Sydney fireworks (2019-2020), taken from Blues Point Tower.

I'm glad Sydney went ahead with this, while giving my absolute respect to those who have perished or lost their homes in the surrounding fires. As I watched in amazement at the display, I chose to dedicate that time to reflecting on my gratitude to the RFS volunteers. A celebration needn’t be disrespectful. It can be used to show gratitude too. Perhaps the City of Sydney should have made such a dedication. Thank you RFS.

On New Year’s Day, social media in New Zealand was flooded with images of eerie orange skies above the South Island as the smoke from a continent-scale fire disaster crossed the Tasman Sea.

The smoke shows equally well in the view from space. To set the scale, it is 2000km or three hours’ flying from New Zealand to the Australian coast; roughly, London to Moscow, or Denver to Washington DC.

This stunning image (and you'll be seeing more of them as the fires burn) was taken by the Japanese Himawari satellite. The spacecraft is in a geostationary orbit above the Pacific, keeping track with the rotating Earth and high enough to take images of the whole hemisphere beneath it.

There is no sensible doubt that the magnitude of this crisis is driven by climate change. Australia has always had fires, but climate change makes the continent hotter and drier, nudging more places above the tipping point on which fire will take hold, on more days of the year.

Climate change is, ultimately, an unintended consequence of human ingenuity. We can summon light at the flick of a switch, cross continents in a day, and the world's knowledge is at our fingertips. Billions of humans have access to miracles that emperors could never command.

In fairy tales and fables, wishes granted by genies always come with an unanticipated price. These real-life wonders are no exception: carbon dioxide is poured into the atmosphere as we forge steel, burn fossil fuels to power cars and planes and generate electricity.

When Pandora’s Box was opened, all manner of harm was unleashed into the world, but the last thing to be found in the box was hope. Climate change is not a myth or a fairy tale, but hope is still to be found in the satellite images of smoke spilling across the Tasman. Our capacity to snap pictures from space is just one of the tools we have to understand the climate, and it underscores our ability to develop technologies that let us walk more lightly on the earth. Ironically, our ability to significantly, albeit unintentionally, change the planet within a few generations reminds us that we have shaped the world that we live in, and can do so both for worse and for better.

If you’ve been reassuring friends and family and yourself that climate change is probably nothing to worry about, look at the news from Australia. Take a moment. Take a deep breath. And think again.

There are people who'll try to minimise and muddy the role of climate change, by telling you that this is all a mistake, or a con; that the fires were lit by arsonists; that forest floors should have been cleared of fuel. But none of this explains the ferocity of the scenes we are seeing on our screens and hearing from friends across the Tasman.

There's no shame in having gotten it wrong when people have profited from telling you not to worry, when glib climate sceptics get air time, and when media sites earn clicks from spreading ignorance as well as elucidation.

In truth, the science is complex. Weather forecasts are in Celsius (or Fahrenheit, for Americans) but the Universe thinks in Kelvins: degrees above absolute zero. Global average temperatures have risen about 1 degree Celsius, or 1 Kelvin. A 25 Celsius day is 298 Kelvins, so the global change in temperature we’ve seen so far is just over 1 part in 300. At the scale of the universe, our current planetary warming is small change.
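The arithmetic above can be checked in a few lines (a sketch; the 1 K figure is the approximate global average rise cited in the text):

```python
def celsius_to_kelvin(c):
    """Convert Celsius to Kelvin (absolute temperature)."""
    return c + 273.15

warm_day = celsius_to_kelvin(25)   # a 25 C day is about 298 K
warming = 1.0                      # ~1 K of global average warming so far

# fractional change relative to absolute temperature: just over 1 part in 300
print(f"{warming / warm_day:.5f}")
```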

But ecosystems are finely balanced, and attuned to differences in temperature, not its absolute value – like passengers in a small boat getting seasick from a gentle swell on top of a deep ocean. For Australia, an extra degree or two means hotter days, less rain, more heatwaves, more fires, which burn more fiercely once they get started. If our carbon emissions do not shift radically from "business as usual" that is only going to get worse.

The real challenge is not scientific: it is social and political. We know what we need to do, and we can figure out how to do it. But we need to make it happen. If you are looking for New Year's resolutions, try some of these. Drive petrol-powered cars less. Ride a bike more (you’ll get fitter). Get solar panels (which will likely save you money). Think carefully about the food you eat and how it’s grown. Purchase thoughtfully. Fly less. And this is the big one: insist that our leaders are serious about climate, and expect them to follow through on their promises.

You should worry. You should not despair.

I’ll close on this. Another New Year’s Day photo from New Zealand, a South Islander whose security lights flicked on at noon, as smoke darkened the skies. I’m a scientist and a rationalist; I don’t believe in fairy tales or fables. But you could take this as a sign.

Well, you know the saying: When life hands you several apples, some blackberries, and a chunk of pastry left over from your last pie-making… you make another apple-blackberry pie, taking the opportunity to make it even better! (No? Never heard that saying? Huh.) -cvj

In the wake of two culture-war posts—the first on the term “quantum supremacy,” the second on the acronym “NIPS”—it’s clear that we all need to cool off with something anodyne and uncontroversial. Fortunately, this holiday season, I know just the thing to bring everyone together: groaning about quantum computing hype!

When I was at the Q2B conference in San Jose, I learned about lots of cool stuff that’s happening in the wake of Google’s quantum supremacy announcement. I heard about the 57-qubit superconducting chip that the Google group is now building, following up on its 53-qubit one; and also about their first small-scale experimental demonstration of my certified randomness protocol. I learned about recent progress on costing out the numbers of qubits and gates needed to do fault-tolerant quantum simulations of useful chemical reactions (IIRC, maybe a hundred thousand qubits and a few hours’ worth of gates—scary, but not Shor’s algorithm scary).

I also learned about two claims about quantum algorithms that startups have made, and which are being wrongly interpreted. The basic pattern is one that I’ve come to know well over the years, and which you could call a science version of the motte-and-bailey. (For those not up on nerd blogosphere terminology: in medieval times, the motte was a dank castle to which you’d retreat while under attack; the bailey was the desirable land that you’d farm once the attackers left.)

To wit:

Startup makes claims that have both a true boring interpretation (e.g., you can do X with a quantum computer), as well as a false exciting interpretation (e.g., you can do X with a quantum computer, and it would actually make sense to do this, because you’ll get an asymptotic speedup over the best known classical algorithm).

Lots of business and government people get all excited, because they assume the false exciting interpretation must be true (or why else would everyone be talking about this?). Some of those people ask me for comment.

I look into it, perhaps by asking the folks at the startup. The startup folks clarify that they meant only the true boring interpretation. To be sure, they’re actively exploring the false exciting interpretation—whether some parts of it might be true after all—but they’re certainly not making any claims about it that would merit, say, a harsh post on Shtetl-Optimized.

I’m satisfied to have gotten to the bottom of things, and I tell the startup folks to go their merry way.

Yet many people continue to seem as excited as if the false exciting interpretation had been shown to be true. They continue asking me questions that presuppose its truth.

Our first instance of this pattern is the recent claim, by Zapata Computing, to have set a world record for integer factoring (1,099,551,473,989 = 1,048,589 × 1,048,601) with a quantum computer, by running a QAOA/variational algorithm on IBM’s superconducting device. Gosh! That sure sounds a lot better than the 21 that’s been factored with Shor’s algorithm, doesn’t it?

I read the Zapata paper that this is based on, entitled “Variational Quantum Factoring,” and I don’t believe that a single word in it is false. My issue is something the paper omits: namely, that once you’ve reduced factoring to a generic optimization problem, you’ve thrown away all the mathematical structure that Shor’s algorithm cleverly exploits, and that makes factoring asymptotically easy for a quantum computer. And hence there’s no reason to expect your quantum algorithm to scale any better than brute-force trial division (or in the most optimistic scenario, trial division enhanced with Grover search). On large numbers, your algorithm will be roundly outperformed even by classical algorithms that do exploit structure, like the Number Field Sieve. Indeed, the quantum computer’s success at factoring the number will have had little or nothing to do with its being quantum at all—a classical optimization algorithm would’ve served as well. And thus, the only reasons to factor a number on a quantum device in this way, would seem to be stuff like calibrating the device.
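To make the point concrete: once the mathematical structure is thrown away, even unoptimized classical trial division handles the record number in about a million loop iterations. This is a sketch for illustration only, not a reproduction of anyone's benchmark:

```python
def trial_division(n):
    """Factor n by brute force: try every candidate divisor up to sqrt(n)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # n is prime

# The "world record" semiprime factors classically in well under a second.
p, q = trial_division(1_099_551_473_989)
print(p, q)  # 1048589 1048601
```

The snippet just shows there is nothing quantum about success at this size; asymptotically, a structure-exploiting classical method like the Number Field Sieve pulls even further ahead.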

Admittedly, to people who work in quantum algorithms, everything above is so obvious that it doesn’t need to be said. But I learned at Q2B that there are interested people for whom this is not obvious, and even comes as a revelation. So that’s why I’m saying it.

Again and again over the past twenty years, I’ve seen people reinvent the notion of a “simpler alternative” to Shor’s algorithm: one that cuts out all the difficulty of building a fault-tolerant quantum computer. In every case, the trouble, typically left unstated, has been that these alternatives also cut out the exponential speedup that’s Shor’s algorithm’s raison d’être.

Our second example today of a quantum computing motte-and-bailey is the claim, by Toronto-based quantum computing startup Xanadu, that Gaussian BosonSampling can be used to solve all sorts of graph problems, like graph isomorphism, graph similarity, and densest subgraph. As the co-inventor of BosonSampling, few things would warm my heart more than finding an actual application for that model (besides quantum supremacy experiments and, perhaps, certified random number generation). But I still regard this as an open problem—if by “application,” we mean outperforming what you could’ve done classically.

In papers (see for example here, here, here), members of the Xanadu team have given all sorts of ways to take a graph, and encode it into an instance of Gaussian BosonSampling, in such a way that the output distribution will then reveal features of the graph, like its isomorphism type or its dense subgraphs. The trouble is that so far, I’ve seen no indications that this will actually lead to quantum algorithms that outperform the best classical algorithms, for any graph problems of practical interest.

In the case of Densest Subgraph, the Xanadu folks use the output of a Gaussian BosonSampler to seed (that is, provide an initial guess for) a classical local search algorithm. They say they observe better results this way than if they seed that classical local search algorithm with completely random initial conditions. But of course, the real question is: could we get equally good results by seeding with the output of some classical heuristic? Or by solving Densest Subgraph with a different approach entirely? Given how hard it’s turned out to be just to verify that the outputs of a BosonSampling device come from such a device at all, it would seem astonishing if the answer to these questions wasn’t “yes.”

In the case of Graph Isomorphism, the situation is even clearer. There, the central claim made by the Xanadu folks is that given a graph G, they can use a Gaussian BosonSampling device to sample a probability distribution that encodes G’s isomorphism type. So, isn’t this “promising” for solving GI with a quantum computer? All you’d need to do now is invent some fast classical algorithm that could look at the samples coming from two graphs G and H, and tell you whether the probability distributions were the same.

Except, not really. While the Xanadu paper never says so, if all you want is to sample a distribution that encodes a graph’s isomorphism type, that’s easy to do classically! (I even put this on the final exam for my undergraduate Quantum Information Science course a couple weeks ago.) Here’s how: given as input a graph G, just output G but with its vertices randomly permuted. Indeed, this will even provide a further property, better than anything the BosonSampling approach has been shown to provide (or than it probably does provide): namely, if G and H are not isomorphic, then the two probability distributions will not only be different but will have disjoint supports. Alas, this still leaves us with the problem of distinguishing which distribution a given sample came from, which is as hard as Graph Isomorphism itself. None of these approaches, classical or quantum, seem to lead to any algorithm that’s subexponential time, let alone competitive with the “Babai approach” of thinking really hard about graphs.
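The classical sampler described above fits in a few lines; `sample_iso_encoding` is my name for it, and the adjacency-matrix representation is just one convenient choice:

```python
import random

def sample_iso_encoding(adj):
    """Sample from a distribution that depends only on the graph's
    isomorphism type: return the graph with its vertices randomly permuted.
    `adj` is an n x n adjacency matrix (list of lists of 0/1 entries)."""
    n = len(adj)
    perm = list(range(n))
    random.shuffle(perm)
    return [[adj[perm[i]][perm[j]] for j in range(n)] for i in range(n)]

# A triangle (complete graph on 3 vertices) is fixed by every relabeling.
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(sample_iso_encoding(triangle))
```

Note that the hard part is untouched: deciding which of two such distributions a given sample came from is as hard as Graph Isomorphism itself.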

All of this stuff falls victim to what I regard as the Fundamental Error of Quantum Algorithms Research: namely, to treat it as “promising” that a quantum algorithm works at all, or works better than some brute-force classical algorithm, without asking yourself whether there are any indications that your approach will ever be able to exploit interference of amplitudes to outperform the best classical algorithm.

Incidentally, I’m not sure exactly why, but in practice, a major red flag that the Fundamental Error is about to be committed is when someone starts talking about “hybrid quantum/classical algorithms.” By this they seem to mean: “outside the domain of traditional quantum algorithms, so don’t judge us by the standards of that domain.” But I liked the way someone at Q2B put it to me: every quantum algorithm is a “hybrid quantum/classical algorithm,” with classical processors used wherever they can be, and qubits used only where they must be.

The other thing people do, when challenged, is to say “well, admittedly we have no rigorous proof of an asymptotic quantum speedup”—thereby brilliantly reframing the whole conversation, to make people like me look like churlish theoreticians insisting on an impossible and perhaps irrelevant standard of rigor, blind to some huge practical quantum speedup that’s about to change the world. The real issue, of course, is not that they haven’t given a proof of a quantum speedup (in either the real world or the black-box world); rather, it’s that they’ve typically given no reasons whatsoever to think that there might be a quantum speedup, compared to the best classical algorithms available.

In the holiday spirit, let me end on a positive note. When I did the Q&A at Q2B—the same one where Sarah Kaiser asked me to comment on the term “quantum supremacy”—one of my answers touched on the most important theoretical open problems about sampling-based quantum supremacy experiments. At the top of the list, I said, was whether there’s some interactive protocol by which a near-term quantum computer can not only exhibit quantum supremacy, but prove it to a polynomial-time-bounded classical skeptic. I mentioned that there was one proposal for how to do this, in the IQP model, due to Bremner and Shepherd, from way back in 2008. I said that their proposal deserved much more attention than it had received, and that trying to break it would be one obvious thing to work on. Little did I know that, literally while I was speaking, a paper was being posted to the arXiv, by Gregory Kahanamoku-Meyer, that claims to break Bremner and Shepherd’s protocol. I haven’t yet studied the paper, but assuming it’s correct, it represents the first clear progress on this problem in years (even though of a negative kind). Cool!!

I realized recently that I’ve forgotten a great deal of my drawing skills, settling back into some clunky habits, due to zero practice. But I’m going to need them back for a project, and so will start teaching myself again. Above is a (digital) chalk doodle I did yesterday. –cvj

You’ll find many scientists working over the holidays this year. Partly that’s because of the competitiveness of academia, with many scientists competing for a few positions, where even those who are “safe” have students who aren’t. But to put a more positive spin on it, it’s also because science is a gift that keeps on giving.

Scientists are driven by curiosity. We want to know more about the world, to find out everything we can. And the great thing about science is that, every time we answer a question, we have another one to ask.

Discover a new particle? You need to measure its properties, understand how it fits into your models and look for alternative explanations. Do a calculation, and in addition to checking it, you can see if the same method works on other cases, or if you can use the result to derive something else.

Down the line, the science that survives leads to further gifts. Good science spreads, with new fields emerging to investigate new phenomena. Eventually, science leads to technology, and our lives are enriched by the gifts of new knowledge.

Science is the gift that keeps on giving. It takes new forms, builds new ideas, it fills our lives and nourishes our minds. It’s a neverending puzzle.

So this Newtonmas, I hope you receive the greatest gift of all: the gift of science.

Let $u: {\bf R}^3 \rightarrow {\bf R}^3$ be a divergence-free vector field, thus $\nabla \cdot u = 0$, which we interpret as a velocity field. In this post we will proceed formally, largely ignoring the analytic issues of whether the fields in question have sufficient regularity and decay to justify the calculations. The vorticity field $\omega$ is then defined as the curl of the velocity: $\omega = \nabla \times u$.

(From a differential geometry viewpoint, it would be more accurate (especially in other dimensions than three) to define the vorticity as the exterior derivative of the musical isomorphism of the Euclidean metric applied to the velocity field ; see these previous lecture notes. However, we will not need this geometric formalism in this post.)

Assuming suitable regularity and decay hypotheses of the velocity field , it is possible to recover the velocity from the vorticity as follows. From the general vector identity applied to the velocity field , we see that

and thus (by the commutativity of all the differential operators involved)

and then by substituting (1) one gets an autonomous equation for the vorticity field . Unfortunately, this equation is non-local, due to the integration present in (1).
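For reference, here is a sketch of the displays the passage relies on, in the conventional notation ($u$ the velocity, $\omega$ the vorticity); this is the standard recovery of the Biot-Savart law from the double-curl identity:

```latex
% Sketch of the standard identities (conventional notation):
\omega := \nabla \times u, \qquad \nabla \cdot u = 0;
\qquad
\nabla \times \omega
  = \nabla \times (\nabla \times u)
  = \nabla (\nabla \cdot u) - \Delta u
  = -\Delta u;
\qquad
u = (-\Delta)^{-1} \nabla \times \omega
  \quad \text{(the Biot--Savart law, the ``(1)'' of the text).}
```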

In a recent work, it was observed by Elgindi that in a certain regime, the Biot-Savart law can be approximated by a more “low rank” law, which makes the non-local effects significantly simpler in nature. This simplification was carried out in spherical coordinates, and hinged on a study of the invertibility properties of a certain second order linear differential operator in the latitude variable ; however in this post I would like to observe that the approximation can also be seen directly in Cartesian coordinates from the classical Biot-Savart law (1). As a consequence one can also initiate the beginning of Elgindi’s analysis in constructing somewhat regular solutions to the Euler equations that exhibit self-similar blowup in finite time, though I have not attempted to execute the entirety of the analysis in this setting.

Elgindi’s approximation applies under the following hypotheses:

(i) (Axial symmetry without swirl) The velocity field is assumed to take the form

Thus under the hypotheses (i), (ii), and assuming that is slowly varying, we expect to behave like the linear vector field modulated by a radial scalar function. In applications one needs to control the error in various function spaces instead of pointwise, and with similarly controlled in other function space norms than the norm, but this proposition already gives a flavour of the approximation. If one uses spherical coordinates

then we have (using the spherical change of variables formula and the odd nature of )

where

is the operator introduced in Elgindi’s paper.

Proof: By a limiting argument we may assume that is non-zero, and we may normalise . From the triangle inequality we have

The even symmetry (8) ensures that the integrand in is odd, so vanishes. The symmetry (6) or (7) similarly ensures that , so vanishes. Since , we conclude that

where . Because of the odd nature of , only those terms with one factor of give a non-vanishing contribution to the integral. Using the rotation symmetry we also see that any term with a factor of also vanishes. We can thus simplify the above expression as

Using the rotation symmetry again, we see that the term in the first component can be replaced by or by , and similarly for the term in the second component. Thus the above expression is

giving the claim.

Example 2 Consider the divergence-free vector field , where the vector potential takes the form

for some bump function supported in . We can then calculate

and

In particular the hypotheses (i), (ii) are satisfied with

One can then calculate

If we take the specific choice

where is a fixed bump function supported some interval and is a small parameter (so that is spread out over the range ), then we see that

(with implied constants allowed to depend on ),

and

which is completely consistent with Proposition 1.

One can use this approximation to extract a plausible ansatz for a self-similar blowup to the Euler equations. We let be a small parameter and let be a time-dependent vorticity field obeying (i), (ii) of the form

where and is a smooth field to be chosen later. Admittedly the signum function is not smooth at , but let us ignore this issue for now (to rigorously make an ansatz one will have to smooth out this function a little bit; Elgindi uses the choice , where ). With this ansatz one may compute

By Proposition 1, we thus expect to have the approximation

We insert this into the vorticity equation (2). The transport term will be expected to be negligible because , and hence , is slowly varying (the discontinuity of will not be encountered because the vector field is parallel to this singularity). The modulating function is similarly slowly varying, so derivatives falling on this function should be lower order. Neglecting such terms, we arrive at the approximation

and so in the limit we expect obtain a simple model equation for the evolution of the vorticity envelope :

If we write for the logarithmic primitive of , then we have and hence

which integrates to the Riccati equation

which can be explicitly solved as

where is any function of that one pleases. (In Elgindi’s work a time dilation is used to remove the unsightly factor of appearing here in the denominator.) If for instance we set , we obtain the self-similar solution

and then on applying

Thus, we expect to be able to construct a self-similar blowup to the Euler equations with a vorticity field approximately behaving like

and velocity field behaving like

In particular, would be expected to be of regularity (and smooth away from the origin), and blows up in (say) norm at time , and one has the self-similarity

and

A self-similar solution of this approximate shape is in fact constructed rigorously in Elgindi’s paper (using spherical coordinates instead of the Cartesian approach adopted here), using a nonlinear stability analysis of the above ansatz. It seems plausible that one could also carry out this stability analysis using this Cartesian coordinate approach, although I have not tried to do this in detail.

Archana Kamal was hunting for an apartment in Cambridge, Massachusetts. She was moving to MIT, to work as a postdoc in physics. The first apartment she toured had housed John Updike, during his undergraduate career at Harvard. No other apartment could compete; Archana signed the lease.

The apartment occupied the basement of a red-brick building covered in vines. The rooms spanned no more than 350 square feet. Yet her window opened onto the neighbors’ garden, whose leaves she tracked across the seasons. And Archana cohabited with history.

She’s now studying the universe’s history, as an assistant professor of physics at the University of Massachusetts Lowell. The cosmic microwave background (CMB) pervades the universe. The CMB consists of electromagnetic radiation, or light. Light has particle-like properties and wavelike properties. The wavelike properties include wavelength, the distance between successive peaks. Long-wavelength light includes red light, infrared light, and radio waves. Short-wavelength light includes blue light, ultraviolet light, and X-rays. Light of one wavelength and light of another wavelength are said to belong to different modes.

Does the CMB have nonclassical properties, impossible to predict with classical physics but (perhaps) predictable with quantum theory? It does, according to the theory of inflation. According to the theory, during a short time interval after the Big Bang, the universe expanded very quickly: Spacetime stretched. Inflation explains features of our universe, though we don’t know what mechanism would have effected the expansion.

According to inflation, around the Big Bang time, all the light in the universe crowded together. The photons (particles of light) interacted, entangling (developing strong quantum correlations). Spacetime then expanded, and the photons separated. But they might retain entanglement.

Detecting that putative entanglement poses challenges. For instance, the particles that you’d need to measure could produce a signal too weak to observe. Cosmologists have been scratching their heads about how to observe nonclassicality in the CMB. One team—Nishant Agarwal at UMass Lowell and Sarah Shandera at Pennsylvania State University—turned to Archana for help.

Archana studies the theory of open quantum systems, quantum systems that interact with their environments. She thinks most about systems such as superconducting qubits, tiny circuits with which labs are building quantum computers. But the visible universe constitutes an open quantum system.

We can see only part of the universe—or, rather, only part of what we believe is the whole universe. Why? We can see only stuff that’s emitted light that has reached us, and light has had only so long to travel. But the visible universe interacts (we believe) with stuff we haven’t seen. For instance, according to the theory of inflation, that rapid expansion stretched some light modes’ wavelengths. Those wavelengths grew longer than the visible universe. We can’t see those modes’ peak-to-peak variations or otherwise observe the modes, often called “frozen.” But the frozen modes act as an environment that exchanges information and energy with the visible universe.

We describe an open quantum system’s evolution with a quantum master equation, which I blogged about four-and-a-half years ago. Archana and collaborators constructed a quantum master equation for the visible universe. The frozen modes, they found, retain memories of the visible universe. (Experts: the bath is non-Markovian.) Next, they need to solve the equation. Then, they’ll try to use their solution to identify quantum observables that could reveal nonclassicality in the CMB.
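For readers new to the term: the textbook (Markovian) form of a quantum master equation is the Lindblad equation below. The equation Archana and collaborators derived is necessarily more general, since their frozen-mode bath retains memory, but this gives the flavor of the object being discussed. Here $\rho$ is the system's state, $H$ its Hamiltonian, and the $L_k$ are jump operators modeling the environment's influence at rates $\gamma_k$:

```latex
\frac{d\rho}{dt}
  = -\frac{i}{\hbar}\,[H, \rho]
  + \sum_k \gamma_k \Big( L_k \rho L_k^\dagger
      - \tfrac{1}{2}\,\{ L_k^\dagger L_k,\ \rho \} \Big)
```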

Frozen modes

Archana’s project caught my fancy for two reasons. First, when I visited her in October, I was collaborating on a related project. My coauthors and I were concocting a scheme for detecting nonclassical correlations in many-particle systems by measuring large-scale properties. Our paper debuted last month. It might—with thought and a dash of craziness—be applied to detect nonclassicality in the CMB. Archana’s explanation improved my understanding of our scheme’s potential.

Second, Archana and collaborators formulated a quantum master equation for the visible universe. A quantum master equation for the visible universe. The phrase sounded romantic to me.^{1} It merited a coauthor who’d seized on an apartment lived in by a Pulitzer Prize-winning novelist.

Archana’s cosmology and Updike stories reminded me of one reason why I appreciate living in the Boston area: History envelops us here. Last month, while walking to a grocery, I found a sign that marks the building in which the poet e. e. cummings was born. My walking partner then generously tolerated a recitation of cummings’s “anyone lived in a pretty how town.” History enriches our lives—and some of it might contain entanglement.

^{1}It might sound like gobbledygook to you, if I’ve botched my explanations of the terminology.

With thanks to Archana and the UMass Lowell Department of Physics and Applied Physics for their hospitality and seminar invitation.

There’s no way to build a program for people in need that can’t be taken advantage of by unscrupulous people who aren’t in need. I have a friend, an attorney, who used to work cases involving people who defrauded the foster care system, taking state money for the care of children who didn’t exist, or who weren’t really in their care. It’s maddening. She eventually quit that job, partially because it was so dispiriting to come in daily contact with people being awful. But what can you do? You can’t build a fence strong enough to keep out all fraud without making the administrative burden impossibly high for the many honest people doing the hard, humane work of raising kids who need parents. There’s some optimal level of vigilance that leads to some optimal level of fraud and that optimal level of fraud isn’t zero.

I thought of my friend when I read this story, about developer Dan Gilbert getting an “opportunity zone” tax break officially intended for spurring development in impoverished areas:

Gilbert’s relationship with the White House helped him win his desired tax break, an email obtained by ProPublica suggests. In February 2018, as the selection process was underway, a top Michigan economic development official asked her colleague to call Quicken’s executive vice president for government affairs about opportunity zones.

“They worked with the White House on it and want to be sure we are coordinated,” wrote the official, Christine Roeder, in an email with the subject line “Quicken.”

The exact role of the White House is not clear. But less than two weeks after the email was written, the Trump administration revised its list of census tracts that were eligible for the tax break. New to the list? One of the downtown Detroit tracts dominated by Gilbert that had not previously been included. And the area made the cut even though it did not meet the poverty requirements of the program. The Gilbert opportunity zone is one of a handful around the country that were included despite not meeting the eligibility criteria, according to an analysis by ProPublica.

Maybe there’s no way to design a program like this without billionaires with phalanxes of lawyers and friends in high places being able to sop up some of the money. Even before the “opportunity zones,” Jared Kushner was able to game a similar program by drawing a gerrymandered “low-income district” that snaked its way through Jersey City to include the site of his luxury skyscraper and also some poor neighborhoods miles away. But I have to believe the optimal enforcement level is higher and the optimal malfeasance level lower than what we have now.

Ward 49 and Ward 65 are two big voting precincts in Madison. Ward 65, where I vote, has 2,819 registered voters. Ward 49, in the campus area with tons of undergrad-heavy high rises, has 3,505.

In the 2018 governor’s race, Ward 65 went for Tony Evers 2190-179. Ward 49 also went big for Evers, though not as dominantly: he won there 1985-591.

Now look at the April 2019 Supreme Court election. Ward 65 went strongly for the more liberal candidate, Lisa Neubauer, voting for her by a 1631-103 margin. Ward 49 also liked Neubauer but the margin was 531-101. 25% more voters but about a third as many votes. Evers narrowly won his election. Neubauer narrowly lost hers. Young voters sitting out downballot elections is pretty important.
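The dropoff is easier to see as a retention rate: votes cast in the Supreme Court race as a fraction of votes cast in the governor's race, computed from the two-candidate totals quoted above (third-party and write-in votes are ignored):

```python
# two-candidate vote totals quoted above: (governor race, supreme court race)
wards = {
    "Ward 65": ((2190, 179), (1631, 103)),
    "Ward 49": ((1985, 591), (531, 101)),
}

for name, (gov, court) in wards.items():
    retention = sum(court) / sum(gov)
    print(f"{name}: {sum(gov)} -> {sum(court)} votes ({retention:.0%} retained)")
```

Ward 65 retained roughly three-quarters of its voters downballot; Ward 49 retained roughly a quarter.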

Let us call an arithmetic function $f: {\bf N} \rightarrow {\bf C}$ $1$-bounded if we have $|f(n)| \leq 1$ for all $n$. In this section we focus on the asymptotic behaviour of $1$-bounded multiplicative functions. Some key examples of such functions include:

“Archimedean” characters $n \mapsto n^{it}$ for a fixed real $t$ (which I call Archimedean because they are pullbacks of a Fourier character $x \mapsto x^{it}$ on the multiplicative group ${\bf R}^+$, which has the Archimedean property);

Dirichlet characters $n \mapsto \chi(n)$ (or “non-Archimedean” characters) (which are essentially pullbacks of Fourier characters on a multiplicative cyclic group with the discrete (non-Archimedean) metric);

Hybrid characters $n \mapsto \chi(n) n^{it}$.

The space of $1$-bounded multiplicative functions is also closed under multiplication and complex conjugation.
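As a concrete instance, the Liouville function λ(n) = (-1)^Ω(n), where Ω(n) counts prime factors with multiplicity, is a 1-bounded and completely multiplicative function. A small sketch verifying both properties on a range of inputs:

```python
def liouville(n):
    """Liouville function: (-1)**Omega(n), with Omega(n) the number of
    prime factors of n counted with multiplicity."""
    omega = 0
    d = 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            omega += 1
        d += 1
    if n > 1:
        omega += 1
    return (-1) ** omega

# 1-bounded, and completely multiplicative: lambda(mn) = lambda(m)lambda(n)
assert all(abs(liouville(n)) <= 1 for n in range(1, 200))
assert all(liouville(m * n) == liouville(m) * liouville(n)
           for m in range(1, 30) for n in range(1, 30))
```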

Given a multiplicative function $f$, we are often interested in the asymptotics of long averages such as

$$\frac{1}{x} \sum_{n \leq x} f(n)$$

for large values of $x$, as well as short sums

$$\frac{1}{H} \sum_{x \leq n \leq x+H} f(n)$$

where $x$ and $H$ are both large, but $H$ is significantly smaller than $x$. (Throughout these notes we will try to normalise most of the sums and integrals appearing here as averages that are trivially bounded by $O(1)$; note that other normalisations are preferred in some of the literature cited here.) For instance, as we established in Theorem 58 of Notes 1, the prime number theorem is equivalent to the assertion that

$$\frac{1}{x} \sum_{n \leq x} \lambda(n) = o(1) \quad \quad (1)$$

as $x \rightarrow \infty$, where $\lambda$ denotes the Liouville function.
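As a concrete numerical illustration, the normalised average of the Liouville function $\lambda(n) = (-1)^{\Omega(n)}$ — one standard form of this equivalence — can be watched tending to zero with a simple sieve:

```python
def liouville_sieve(limit):
    """Return a list lam with lam[n] = (-1)^Omega(n) for 1 <= n <= limit."""
    omega = [0] * (limit + 1)  # Omega(n): prime factors counted with multiplicity
    for p in range(2, limit + 1):
        if omega[p] == 0:  # p has no smaller prime factor, so p is prime
            pk = p
            while pk <= limit:
                for m in range(pk, limit + 1, pk):
                    omega[m] += 1
                pk *= p
    return [0] + [(-1) ** w for w in omega[1:]]

lam = liouville_sieve(10**5)
for x in (10**3, 10**4, 10**5):
    avg = sum(lam[1 : x + 1]) / x
    print(f"x = {x:>6}:  (1/x) sum_(n<=x) lambda(n) = {avg:+.5f}")
```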

The situation is better when one asks to understand the mean value on almost all short intervals, rather than all intervals. There are several equivalent ways to formulate this question:

Exercise 2 Let be a function of such that and as . Let be a $1$-bounded function. Show that the following assertions are equivalent:

(i) One has

as , uniformly for all outside of a set of measure .

whenever and . With this quality of bound (saving arbitrary powers of over the trivial bound of ), this is still the lowest value of $H$ one can reach unconditionally. However, in a striking recent breakthrough, it was shown by Matomaki and Radziwill that as long as one is willing to settle for weaker bounds (saving a small power of or , or just a qualitative decay of ), one can obtain non-trivial estimates on far shorter intervals. For instance, they show

Exercise 4 In this exercise you may use Theorem 3 freely.

(i) Establish the lower bound

for some absolute constant and all sufficiently large . (Hint: if this bound failed, then would hold for almost all ; use this to create many intervals for which is extremely large.)

(ii) Show that Theorem 3 also holds with replaced by , where is the principal character of period . (Use the fact that for all .) Use this to establish the corresponding upper bound

to (i).

(There is a curious asymmetry to the difficulty level of these bounds; the upper bound in (ii) was established much earlier by Harman, Pintz, and Wolke, but the lower bound in (i) was only established in the Matomaki-Radziwill paper.)

The techniques discussed previously were highly complex-analytic in nature, relying in particular on the fact that functions such as $\Lambda$ or $\mu$ have Dirichlet series $-\frac{\zeta'}{\zeta}$, $\frac{1}{\zeta}$ that extend meromorphically into the critical strip. In contrast, the Matomaki-Radziwill theorem does not rely on such meromorphic continuations, and in fact holds for more general classes of $1$-bounded multiplicative functions $f$, for which one typically does not expect any meromorphic continuation into the strip. Instead, one can view the Matomaki-Radziwill theory as following the philosophy of a slightly different approach to multiplicative number theory, namely the pretentious multiplicative number theory of Granville and Soundararajan (as presented for instance in their draft monograph). A basic notion here is the pretentious distance $\mathbb{D}(f,g;x)$ between two $1$-bounded multiplicative functions $f, g$ (at a given scale $x$), which informally measures the extent to which $f$ “pretends” to be like $g$ (or vice versa). The precise definition is

$$\mathbb{D}(f,g;x) := \left( \sum_{p \leq x} \frac{1 - \mathrm{Re}(f(p) \overline{g(p)})}{p} \right)^{1/2}.$$
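Taking the standard Granville-Soundararajan definition $\mathbb{D}(f,g;x)^2 = \sum_{p \leq x} (1 - \mathrm{Re}\, f(p)\overline{g(p)})/p$, this distance is easy to compute numerically. The sketch below (an illustration, with hypothetical helper names) measures how far the constant function $1$ lies from the Archimedean characters $n \mapsto n^{it}$:

```python
import cmath
import math

def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

def pretentious_distance(f, g, x):
    """D(f, g; x) = (sum_{p <= x} (1 - Re f(p) conj(g(p))) / p)^(1/2)."""
    total = sum((1 - (f(p) * g(p).conjugate()).real) / p for p in primes_up_to(x))
    return math.sqrt(max(total, 0.0))

one = lambda p: 1 + 0j
for t in (0.0, 1.0, 10.0):
    n_it = lambda p, t=t: cmath.exp(1j * t * math.log(p))  # the character p^{it}
    print(f"t = {t:5.1f}:  D(1, n^it; 10^5) = {pretentious_distance(one, n_it, 10**5):.3f}")
```

At $t = 0$ the two functions coincide and the distance vanishes; for $t$ of unit size the distance is already bounded well away from zero, in the spirit of the non-pretentiousness of distinct Archimedean characters discussed later.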

In the contrapositive, Halasz's theorem can be formulated as the assertion that if one has a large mean

for some , then one has the pretentious property

for some . This has the flavour of an “inverse theorem”, of the type often found in arithmetic combinatorics.

Among other things, Halasz’s theorem gives yet another proof of the prime number theorem (1); see Section 2.

We now give a version of the Matomaki-Radziwill theorem for general (non-pretentious) multiplicative functions that is formulated in a similar contrapositive (or “inverse theorem”) fashion, though to simplify the presentation we only state a qualitative version that does not give explicit bounds.

Combining Theorem 8 with standard non-pretentiousness facts about the Liouville function (see Exercise 24), we recover Theorem 3 (but with a decay rate of only rather than ). We refer the reader to the original paper of Matomaki-Radziwill (as well as this followup paper with myself) for the quantitative version of Theorem 8 that is strong enough to recover the full version of Theorem 3, and which can also handle real-valued pretentious functions.

With our current state of knowledge, the only arguments that can establish the full strength of the Halasz and Matomaki-Radziwill theorems are Fourier analytic in nature, relating sums involving an arithmetic function $f$ with its Dirichlet series

$$\mathcal{D} f(s) := \sum_{n=1}^\infty \frac{f(n)}{n^s},$$

which one can view as a discrete Fourier transform of $f$ (or more precisely of an associated measure, if one evaluates the Dirichlet series on the right edge of the critical strip). In this aspect, the techniques resemble the complex-analytic methods from Notes 2, but with the key difference that no analytic or meromorphic continuation into the strip is assumed. The key identity that allows us to pass to Dirichlet series is the following variant of Proposition 7 of Notes 2:

In the simplest case of the logarithmically averaged Halasz theorem (Proposition 6), Fourier estimates are already sufficient to obtain decent control on the (weighted) Fourier type expressions that show up. However, these estimates are not enough by themselves to establish the full Halasz theorem or the Matomaki-Radziwill theorem. To pass between the various flavours of Fourier control more efficiently, the key trick is to use Hölder's inequality, which when combined with the basic Dirichlet series identity

$$\mathcal{D}(f * g) = (\mathcal{D} f)(\mathcal{D} g)$$

yields the Hölder-type inequalities (5), (6) used below.

The strategy is then to factor (or approximately factor) the original function as a Dirichlet convolution (or average of convolutions) of various components, each of which enjoys reasonably good Fourier or estimates on various regions , and then combine them using the Hölder inequalities (5), (6) and the triangle inequality. For instance, to prove Halasz’s theorem, we will split into the Dirichlet convolution of three factors, one of which will be estimated in using the non-pretentiousness hypothesis, and the other two being estimated in using Exercise 14. For the Matomaki-Radziwill theorem, one uses a significantly more complicated decomposition of into a variety of Dirichlet convolutions of factors, and also splits up the Fourier domain into several subregions depending on whether the Dirichlet series associated to some of these components are large or small. In each region and for each component of these decompositions, all but one of the factors will be estimated in , and the other in ; but the precise way in which this is done will vary from component to component. For instance, in some regions a key factor will be small in by construction of the region; in other places, the control will come from Exercise 13. Similarly, in some regions, satisfactory control is provided by Exercise 14, but in other regions one must instead use “large value” theorems (in the spirit of Proposition 9 from Notes 6), or amplify the power of the standard mean value theorems by combining the Dirichlet series with other Dirichlet series that are known to be large in this region.

There are several ways to achieve the desired factorisation. In the case of Halasz's theorem, we can simply work with a crude version of the Euler product factorisation, dividing the primes into three categories (“small”, “medium”, and “large” primes) and expressing $f$ as a triple Dirichlet convolution accordingly. For the Matomaki-Radziwill theorem, one instead exploits the Turan-Kubilius phenomenon (Section 5 of Notes 1, or Lemma 2 of Notes 9) that for various moderately wide ranges $[P,Q]$ of primes, the number of prime divisors of a large number $n$ in the range $[P,Q]$ is almost always close to $\log\log Q - \log\log P$. Thus, if we introduce the arithmetic functions
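The Turan-Kubilius phenomenon is easy to observe numerically. The sketch below (an illustration under an arbitrary choice of $P$, $Q$, and sample range, not taken from the notes) counts prime divisors of $n$ lying in $[P,Q]$ and compares the empirical mean and variance to $\sum_{P \leq p \leq Q} 1/p \approx \log\log Q - \log\log P$; concentration corresponds to the variance being of the same order as the mean, hence small compared to its square:

```python
def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

P, Q, N = 10, 1000, 10**5
primes = [p for p in primes_up_to(Q) if p >= P]

# Expected count of prime divisors in [P, Q]: sum of 1/p, which is
# log log Q - log log P + o(1) by Mertens' theorem.
expected = sum(1 / p for p in primes)

# Empirical count of prime divisors in [P, Q] for "large" n.
counts = [sum(1 for p in primes if n % p == 0) for n in range(N // 2, N)]
mean = sum(counts) / len(counts)
variance = sum((c - mean) ** 2 for c in counts) / len(counts)

print(f"sum of 1/p over [P,Q]   : {expected:.3f}")
print(f"empirical mean          : {mean:.3f}")
print(f"empirical variance      : {variance:.3f}")
```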

In this section we explore the notion of pretentious distance. The following Hilbert space lemma will be useful for establishing the triangle inequality for this distance:

Exercise 17 If are Dirichlet characters of periods respectively induced from the same primitive character, and , show that for some absolute constant (the only purpose of which is to keep the triple logarithm positive). (Hint: control the contributions of the primes in each dyadic block separately for .)

Next, we relate pretentious distance to the value of Dirichlet series just to the right of the critical strip. There is an annoying minor technicality that the prime $p=2$ has to be treated separately, but this will not cause too much trouble.

and from Definition 5 and Mertens’ theorem we have

and so it will suffice, on canceling this factor and taking logarithms, to show that

For , the quantity differs from by at most . Also we have

and hence by Taylor expansion

By the triangle inequality, it thus suffices to show that

But the first bound follows from the mean value estimate and Mertens’ theorems, while the second bound follows from summing the bounds

that also arise from Mertens’ theorems.
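For reference, the Mertens-type estimates being invoked here are the standard ones:

```latex
\sum_{p \leq x} \frac{\log p}{p} = \log x + O(1),
\qquad
\sum_{p \leq x} \frac{1}{p} = \log\log x + M + O\left(\frac{1}{\log x}\right),
```

where $M = 0.2614\ldots$ is the Mertens constant.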

The quantity is bounded in magnitude by , giving (9). Under either of the two technical conditions listed, this quantity is equal to either or , and in either case it is comparable in magnitude to , giving (10).

If for and , we may repeat the above arguments with the terms deleted, since we no longer need to control the tail contribution .

Now we explore the geometry of the Archimedean characters with respect to pretentious distance.

whenever ; thus the Archimedean characters do not pretend to be like each other at all once the parameter is changed by at least a unit distance (but not changed by an enormous amount).

We begin with the upper bound. For , the claim follows from Mertens’ theorems and the triangle inequality. For , we bound

and the claim again follows from Mertens’ theorems (note that in this case). For , we bound by for and by for , and the claim once again follows from Mertens’ theorems.

Now we establish the lower bound. We first work in the range . In this case we have a matching lower bound

for and some small absolute constant , and hence

giving the lower bound. Now suppose that . Applying Lemma 18 with and replaced by some , we have

for some . However, from the Vinogradov-Korobov estimates (Exercise 43 of Notes 2) we have

whenever ; since we are assuming , the claim follows.

Exercise 20 Assume the Riemann hypothesis. Establish a bound of the form

for some absolute constant whenever for a sufficiently large absolute constant . (Hint: use Perron’s formula and shift the contour to within of the critical line.) Use this to conclude that the upper bound in Proposition 19 can be relaxed (assuming RH) to .

Exercise 21 Let be a $1$-bounded multiplicative function with for all . For any , show that

Thus some sort of upper bound on in Proposition 19 is necessary.

Exercise 22 Let be a non-principal character of modulus , and let be sufficiently large depending on . Show that

for all . (One will need to adapt the Vinogradov-Korobov theory to Dirichlet $L$-functions.)

Proposition 19 measures how close the function lies to the Archimedean characters . Using the triangle inequality, one can then lower bound the distance of any other $1$-bounded multiplicative function to these characters:

Proposition 23 Let be sufficiently large. Then for any $1$-bounded multiplicative function , there exists a real number with such that

whenever . In particular we have

if and . If is real-valued, one can take .

Proof: For the first claim, choose to minimize among all real numbers with . Then for any other , we see from the triangle inequality that

Proof: (Proof of Proposition 6) By Exercise 16(iv) we may normalise . We may assume that when , since the value of on these primes has no impact on the sum or on . In particular, from Euler products we now have the absolute convergence . Let be a small quantity to be optimized later, and be a smooth compactly supported function on that equals one on with the derivative bounds , on , so on integration by parts we see that the Fourier transform obeys the bounds

for any and . From the triangle inequality we have

Applying Proposition 10 to a finite truncation of , and then using the absolute convergence of and dominated convergence to eliminate the truncation (or by using Proposition 7 of Notes 2 and then shifting the contour), we can write the right-hand side as

Proof: (Proof of Theorem 7) We may assume that , since the claim is trivial otherwise. On the other hand, for for a sufficiently large , the second term on the right-hand side is dominated by the first, so the estimate does not become any stronger as one increases beyond , and hence we may assume without loss of generality that .

From standard sieves (e.g., Theorem 32 from Notes 4), we see that the proportion of numbers in that do not have a “large” prime factor in , or do not have a “medium” prime factor in , is . Thus by paying an error of , we may restrict to numbers that have at least one “large” prime factor in and at least one “medium” prime factor in (and no prime factor larger than ). This is the same as replacing with the Dirichlet convolution

where is the restriction of to numbers with all prime factors in the “small” range , is the restriction of to numbers in with all prime factors in the “medium” range , and is the restriction of to numbers in with all prime factors in the “large” range . We can thus write

This we can write in turn as

where . It is not advantageous to immediately apply Proposition 10 due to the rough nature of (which is not even Schwartz). But if we let be a Schwartz function of total mass whose Fourier transform is supported on , and define the mollified function

then one easily checks that

which from the triangle inequality soon gives the bounds

Hence we may write

Now we apply Proposition 10 and the triangle inequality to bound this by

Setting for some sufficiently small constant (which in particular will ensure (15) since ), we obtain the claim.

One can optimise this argument to make the constant in Theorem 7 arbitrarily close to ; see this previous post. With an even more refined argument, one can prove the sharper estimate

Let us assume Theorem 27 for the moment and see how it implies Theorem 8. In the latter theorem we may assume without loss of generality that is small. We may assume that , since the case follows easily from Theorem 7.

Let be a smooth compactly supported function with on . By hypothesis, we have

Applying Theorem 27 (with a slightly smaller value of and ), we obtain the claim.

Exercise 28 In the converse direction, show that Theorem 27 is a consequence of Theorem 8.

Exercise 29 Let be supported on , and let . Show that

(Hint: use summation by parts to express as a suitable linear combination of sums and , then use the Cauchy-Schwarz inequality and the Fubini-Tonelli theorem.) Conclude in particular that

It remains to establish Theorem 27. As before we may assume that is small. Let us call a finitely supported arithmetic function large on some subset of if

and small on if

Note that a function cannot be simultaneously large and small on the same set ; and if a function is large on some subset , then it remains large on after modifying by any small error (assuming is small enough, and adjusting the implied constants appropriately). From the hypothesis (17) we know that is large on . As discussed in the introduction, the strategy is now to decompose into various regions, and on each of these regions split (up to small errors) as an average of Dirichlet convolutions of other factors which enjoy either good estimates or good estimates on the given region.

We will need the following ranges:

(i) is the interval

(ii) is the interval

(iii) is the interval

We will be able to cover the range just using arguments involving the zeroth interval ; the range can be covered using arguments involving the zeroth interval and the first interval ; and the range can be covered using arguments involving all three intervals . Coverage of the remaining ranges of can be done by an extension of the methods given here and will be left to the exercises at the end of the notes.

We introduce some weight functions and some exceptional sets. For any , let be a bump function on of total mass , and let denote the arithmetic function

supported on the primes . We then define the following subsets of :

(i) is the set of those such that

for some dyadic (i.e., is restricted to be a power of ).

Let . Inserting the bounded factor in the above estimates, and applying Exercise 14, we conclude in particular that the expression

is small on . Since is completely multiplicative, we can write this expression as

We now perform some technical manipulations to move the cutoff to a more convenient location. From (20) we have

We would like to approximate by . A brief triangle inequality calculation using the smoothness of , the -boundedness of , and the narrow support of shows that

where is defined similarly to but with a slightly larger choice of initial cutoff . Integrating this we conclude that

Using Exercise 31 and Exercise 14, the error term is small on . Thus we conclude that

From the fundamental theorem of arithmetic we see that , hence

From the prime number theorem we have . Putting all this together, we conclude that the measure of is at most

Since , we obtain the claim.

Applying this proposition with ranging between and and , and applying the union bound, we see that the measure of is at most . To exploit this, we will need some bounds of Vinogradov-Korobov type:

Now we can prove (iv). By hypothesis, is large on . On the other hand, the function (21) (with ) is small on . By the triangle inequality, we conclude that

is large on , hence by the pigeonhole principle, there exists such that

is large on . On the other hand, from Proposition 32 and Proposition 34 we have

for some interval . The claim (iv) now follows from Exercise 13. Note that this already concludes the argument in the range .

Now we establish (ii). Here the set is not as well controlled in size as , but is still quite small. Indeed, from applying Proposition 32 with ranging between and and , and applying the union bound, we see that the measure of is at most . This is too large of a bound to apply Proposition 34, but we may instead apply a different bound:

The point here is that we can get good bounds even when the function is supported at narrower scales (such as ) than the Fourier interval under consideration (such as or ). In particular, this exercise will serve as a replacement for Exercise 14, which will not give good estimates in this case.

As before, the function (21) is small on , so it will suffice by the triangle inequality to show that

and the claim now follows from (6). Note that this already concludes the argument in the range .

Finally, we establish (iii). The function (21) (with ) is small on , so by the triangle inequality as before it suffices to show that

is small on for all . On the one hand, the definition of gives a good bound on :

To conclude using (6) we now need a good bound for . Unfortunately, the function is now supported on too short of an interval for Exercise 14 to give good estimates, and is too large for Exercise 35 to be applicable either.

where . As is $1$-bounded, and the summand only vanishes when , we can bound the right-hand side by

where denotes the set of primes in the interval .

Suppose has prime factors in this interval (counting multiplicity). Then vanishes unless , in which case we can bound

and

Thus we may bound the above sum by

By the prime number theorem, has elements, so by double counting we have

and thus the previous bound becomes

which sums to

Since

we thus have

and the claim follows.

Combining this proposition with (22) and (6), we conclude part (iii) of Proposition 30. This establishes Theorem 27 up to the range .

Exercise 37 Show that for any fixed , Theorem 27 holds in the range

where denotes the -fold iterated logarithm of . (Hint: this is already accomplished for . For higher , one has to introduce additional exceptional intervals and extend Proposition 30 appropriately.)

Exercise 38 Establish Theorem 27 (and hence Theorem 8) in full generality. (This is the preceding exercise, but now with potentially as large as , where the inverse tower exponential function is defined as the least for which . Now one has to start tracking dependence on of all the arguments in the above analysis; in particular, the convenient notation of arithmetic functions being “large” or “small” needs to be replaced with something more precise.)

Thank you for your invitation to review this manuscript for your journal. Unfortunately, I must decline the invitation given that, as a matter of principle, I do not support or endorse the activities of for-profit scientific journals.

The scientific community has previously offered this industry, free of charge:

Conducting all scientific research.

Writing all scientific manuscripts.

Acting voluntarily in editorial roles.

Performing all refereeing.

(i.e., the entire workload of your organisation, other than hosting the website on which you serve the PDFs).

In exchange, we receive:

Massive journal subscription fees.

Article download fees.

Article publication fees.

Intimidation tactics employed against us when we prefer not to be a part of it.

Anti-competitive and financially predatory distribution tactics.

Institutionalised mandates for the above.

This is not a symbiotic relationship, but a parasitic one, for the larger part financed by the taxpayer, who should rather be financing our research. I can no longer endorse this one-sided relationship, in which for-profit journals effectively tax scientific research, to the tune of billions of dollars annually, often using coercive and intimidatory sales tactics, whilst providing very little or no value in return. This capital is best spent on what it was intended for — scientific research for the benefit of humankind — training students, hiring research staff, financing equipment, travel and infrastructure — to which your organisation contributes nothing whatsoever other than to extort value.

In addition to declining this offer, please for future reference:

Remove my name from your referee database.

Immediately cease and desist from using intimidatory tactics when I decline to volunteer my labour (which is of very high value) to your pursuit of profit (in exchange for nothing).

Hassling me for failing to voluntarily contribute my labour to your revenue-raising is tantamount to harassment and extortion.

Do not request that I voluntarily act as your journal editor.

Do not work in cahoots with national scientific funding agencies to enforce your own vendor lock-in, thereby effectively mandating your own services, which are in fact of very little or no value whatsoever. This is an indirect form of taxation upon scientific research, which I have no interest in paying, and which we should not be expected or forced to pay.

I do not intend personally to submit any further manuscripts to your journal for consideration (if my co-authors do, I won’t stand in their way).

Personal note to the Editor: this should not be construed as a personal attack against you, whom I absolutely respect, but rather against the industry which is exploiting you in a slave-like work relationship, whilst using you as a conduit to engage me for the same purpose. I write this as an act of solidarity with you.

We advance human knowledge for the benefit of humanity, and provide it as a gift for all.

Sincerely,
Referee 2.

(This post may be freely linked to, reused, or modified without acknowledgement)

Note from the editor: During the Summer of 2019, a group of thirteen undergraduate students from Caltech and universities around the world spent 10 weeks on campus performing research in experimental quantum physics. Below, Aiden Cullo, a student from Binghamton University in New York, shares his experience working in Professor Yeh’s lab. The program, termed QuantumSURF, will run again during the Summer of 2020.

This summer, I worked in Nai-Chang Yeh’s experimental condensed matter lab. The aim of my project was to observe the effects of a magnetic field on our topological insulator (TI) sample. The motivation behind this project was to examine more closely the transformation between a topological insulator and a state exhibiting the anomalous Hall effect (AHE).

Both states of matter have garnered a good deal of interest in condensed matter research because of their interesting transport properties, among other things. TIs have gained popularity due to their applications in electronics (spintronics), superconductivity, and quantum computation. TIs are peculiar in that they simultaneously have insulating bulk states and conducting surface states. Due to time-reversal symmetry (TRS) and spin-momentum locking, these surface states have a very symmetric hourglass-like gapless energy band structure (Dirac cone).

The focus of our particular study was the effects of “c-plane” magnetization of our TI’s surface state. Theory predicts TRS and spin-momentum locking will be broken, resulting in a gapped spectrum with a single connection between the valence and conduction bands. This gapping has been theorized and shown experimentally in Chromium (Cr)-doped and numerous other TIs with similar make-up.

In 2014, Nai-Chang Yeh’s group showed that Cr-doped exhibit this gap opening due to the surface state of interacting via the proximity effect with a ferromagnet. Our contention is that a similar material, Cr-doped , exhibits a similar effect, but more homogeneously because of reduced structural strain between atoms. Specifically, at temperatures below the Curie temperature (Tc), we expect to see a gap in the energy band and an overall increase in the gap magnitude. In short, the main goal of my summer project was to observe the gapping of our TI’s energy band.

Overall, my summer project entailed a combination of reading papers/textbooks and hands-on experimental work. It was difficult to understand fully the theory behind my project in such a short amount of time, but even with a cursory knowledge of topological insulators, I was able to provide a meaningful analysis/interpretation of our data.

Additionally, my experiment relied heavily on external factors such as our supplier for liquid helium, argon gas, etc. As a result, our progress was slowed if an order was delayed or not placed far enough in advance. Most of the issues we encountered were not related to the abstract theory of the materials/machinery, but rather problems with less complex mechanisms such as wiring, insulation, and temperature regulation.

While I expected to spend a good deal of time troubleshooting, I severely underestimated the amount of time that would be spent dealing with quotidian problems such as configuring software or etching STM tips. Working on a machine as powerful as an STM was frustrating at times, but also very rewarding as eventually we were able to collect a large amount of data on our samples.

An important (and extremely difficult) part of our analysis of STM data was determining whether patterns/features in our data set were artifacts or genuine phenomena, or a combination. I was fortunate enough to be surrounded by other researchers that helped me sift through the volumes of data and identify traits of our samples. Reflecting on my SURF, I believe it was a positive experience as it not only taught me a great deal about research, but also, more importantly, closely mimicked the experience of graduate school.

We saw the last show of the touring company’s visit to Madison. The kids have played the record hundreds of times so I know the songs very well. But there’s a lot you get from seeing the songs realized by actors in physical space.

I had imagined King George as a character in the plot interacting with the rest of the cast; but in the show, he’s a kind of god/chorus floating above the action, seeing certain things clearly that the people in the thick of it can’t. So his famous line, “I will kill your friends and family to remind you of my love,” comes off in person as less menacing, more cosmic. Neil Haskell played the role very, very, very mincy, which I think was a mistake, but it got laughs.

On the other hand, I hadn’t grasped from the songs how big a role George Washington plays. It’s set up very nicely, with the relation between Hamilton and the two Schuyler sisters presented as a shadow of the much more robust and fully felt love triangle between Hamilton, Burr, and Washington.

The biggest thing I hadn’t understood from the record was the show’s gentle insistence, built up slowly and unavoidably over the whole of the night, that the winner of a duel is the one who gets shot.