*Inspired by the event at the UNESCO headquarters in Paris that celebrated the anniversary of the signing of the CERN convention, Sophie Redford wrote about her impressions on joining CERN as a young researcher. A CERN fellow designing detectors for the future CLIC accelerator, she did her PhD at the University of Oxford, observing rare B decays with the LHCb experiment.*

The “60 years of CERN” celebrations give us all the chance to reflect on the history of our organization. As a young scientist, the early years of CERN might seem remote. However, the continuity of CERN and its values connects this distant past to the present day. At CERN, the past isn’t so far away.

Of course, no matter when you arrive at CERN for the first time, it doesn’t take long to realize that you are in a place with a special history. On the surface, CERN can appear scruffy. Haphazard buildings produce a maze of long corridors, labelled with seemingly random numbers to test the navigation of newcomers. Auditoriums retain original artefacts: ashtrays and blackboards unchanged since the beginning, alongside the modern-day gadgetry of projectors and video-conferencing systems.

The theme of re-use continues underground, where older machines form the injection chain for new. It is here, in the tunnels and caverns buried below the French and Swiss countryside, where CERN spends its money. Accelerators and detectors, their immense size juxtaposed with their minute detail, constitute an unparalleled scientific experiment gone global. As a young scientist this is the stuff of dreams, and you can’t help but feel lucky to be a part of it.

If the physical situation of CERN seems unique, so is the sociological. The row of flags flying outside the main entrance is a colourful red herring, for aside from our diverse allegiances during international sporting events, nationality is meaningless inside CERN. Despite its location straddling international borders, despite our wallets containing two currencies and our heads many languages, scientific excellence is the only thing that matters here. This is a community driven by curiosity, where coffee and cooperation result in particle beams. At CERN we question the laws of our universe. Many answers are as yet unknown but our shared goal of discovery bonds us irrespective of age or nationality.

As a young scientist at CERN I feel welcome and valued; this is an environment where reason and logic rule. I feel privileged to profit from the past endeavour of others, and great pride to contribute to the future of that which others have started. I have learnt that together we can achieve extraordinary things, and that seemingly insurmountable problems can be overcome.

In many ways, the second 60 years of CERN will be nothing like the first. But by continuing to build on our past we can carry the founding values of CERN into the future, allowing the next generation of young scientists to pursue knowledge without borders.

By Sophie Redford


Yeah, but can you? Believe it or not, it’s a question philosophers have plagued themselves with for thousands of years, and it keeps reappearing in my feeds!

My first reaction was of course: It’s nonsense – a superficial play on the words “you” and “touch”. “You touch” whatever triggers the nerves in your skin. There, look, I’ve solved a thousand-year-old problem in a matter of 3 seconds.

Then it occurred to me that with this notion of “touch” my shoes never touch the ground. Maybe I’m not a genius after all. Let me get back to that cartoon then. Certainly deep thoughts went into it that I must unravel.

The average size of an atom is an Angstrom, 10^{-10} m. The typical interatomic distance in molecules is a nanometer, 10^{-9} meter, or let that be a few nanometers if you wish. At room temperature and normal atmospheric pressure, electrostatic repulsion prevents you from pushing atoms much closer together. So the 10^{-8} meter in the cartoon seems about correct.

To begin with, it isn’t just electrostatic repulsion that prevents atoms from getting close; more importantly, it is the Pauli exclusion principle, which forces the electrons and quarks that make up the atom to arrange themselves in shells rather than sit on top of each other.

If you could turn off the Pauli exclusion principle, all electrons from the higher shells would drop into the ground state, releasing energy. The same would happen with the quarks in the nucleus which arrange in similar levels. Since nuclear energy scales are higher than atomic scales by several orders of magnitude, the nuclear collapse causes the bulk of the emitted energy. How much is it?

The typical nuclear level splitting is some 100 keV, that is, a few times 10^{-14} Joule. Most of the Earth is made up of silicon, iron and oxygen, i.e. atomic numbers of the order of 15 or so on average. This gives about 10^{-12} Joule per atom, that is 10^{11} Joule per mol, or about 1 kton of TNT per kg.
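
Plugging these round numbers into a few lines of Python reproduces the estimate. The 100 keV splitting and the average atomic number of ~15 are the rough inputs from the text; the ~30 g/mol molar mass for a silicon/iron/oxygen average is an extra assumption of mine:

```python
# Back-of-the-envelope: energy released if nuclear shells collapsed.
E_LEVEL_J = 100e3 * 1.602e-19   # ~100 keV nuclear level splitting, in joules
Z_AVG = 15                      # average atomic number (Si/Fe/O mix, roughly)
AVOGADRO = 6.022e23             # atoms per mol
MOLAR_MASS_KG = 0.030           # ~30 g/mol for a Si/Fe/O average (assumption)
TNT_KTON_J = 4.184e12           # 1 kiloton of TNT, in joules

per_atom = E_LEVEL_J * Z_AVG              # order 10^{-13}..10^{-12} J per atom
per_mol = per_atom * AVOGADRO             # order 10^{11} J per mol
per_kg_kton = per_mol / MOLAR_MASS_KG / TNT_KTON_J

print(f"{per_atom:.1e} J/atom, {per_mol:.1e} J/mol, ~{per_kg_kton:.1f} kton TNT per kg")
```

The result lands at roughly one kiloton of TNT per kilogram, in line with the estimate above; as with any Fermi estimate, only the order of magnitude is meaningful.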

This back-of-the-envelope estimate gives pretty much exactly the maximal yield of a nuclear weapon. The difference, though, is that turning off the Pauli exclusion principle would convert *every* kg of Earthly matter into a nuclear bomb. Since our home planet has a relatively small gravitational pull, I guess it would just blast apart. I saw everybody die, again – see, that’s how it happens. But I digress; let me get back to the question of touch.

So it’s not just electrostatics but also the Pauli exclusion principle that prevents you from falling through the cracks. Not only do the electrons in your shoes not want to touch the ground, the electrons in your shoes don’t want to touch the other electrons in your shoes either. Electrons, or fermions generally, just don’t like each other.

The 10^{-8} meter actually seems quite optimistic, because surfaces are not perfectly even; they have a roughness to them, which means that the average distance between two solids is typically much larger than the interatomic spacing one has in crystals. Moreover, the human body is not a solid, and the skin is normally covered by a thin layer of fluids. So you never touch anything, if only because you’re separated from the world by a layer of grease.

To be fair, grease isn’t why the Greeks were scratching their heads back then, but a guy called Zeno. Zeno’s most famous paradox divides a distance into halves indefinitely, to conclude that because the journey consists of an infinite number of steps, the full distance can never be crossed. You cannot, thus, touch your nose, spoke Zeno, or ram an arrow into it, respectively. The paradox was resolved once it was established that infinite series can converge to finite values; the nose was back in business, but Zeno would come back to haunt the thinkers of the day centuries later.
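
The resolution can be made concrete in a couple of lines: the partial sums of Zeno’s series 1/2 + 1/4 + 1/8 + … creep up on 1 but never exceed it, so infinitely many steps add up to a finite distance:

```python
# Partial sums of Zeno's halving series: 1/2 + 1/4 + 1/8 + ... -> 1
total = 0.0
for n in range(1, 51):
    total += 0.5 ** n   # the n-th step covers half of what remains

print(total)  # after 50 terms the sum is within 2**-50 of the full distance, 1
```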

The issue reappeared with the advance of the mathematical field of topology in the 19th century. Back then, math, physics, and philosophy had not yet split apart, and the bright minds of the times – Descartes, Euler, Bolzano and the like – wanted to know, using their new methods, what it means for any two objects to touch. And their objects were as abstract as it gets. Any object was supposed to occupy space and cover a topological set in that space. So far so good, but what kind of set?

In the space of the real numbers, sets can be open or closed or a combination thereof. Roughly speaking, if the boundary of the set is part of the set, the set is closed. If the boundary is missing the set is open. Zeno constructed an infinite series of steps that converges to a finite value and we meet these series again in topology. Iff the limiting value (of any such series) is part of the set, the set is closed. (It’s the same as the open and closed intervals you’ve been dealing with in school, just generalized to more dimensions.) The topologists then went on to reason that objects can either occupy open sets or closed sets, and at any point in space there can be only one object.
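
As a small illustration of the distinction, a Zeno-style sequence stays inside the open interval [0, 1) forever, while its limit point belongs only to the closed interval [0, 1]:

```python
# The sequence x_n = 1 - 2**-n lies in the open set [0, 1) for every n,
# but its limit, 1, is contained only in the closed set [0, 1].
limit = 1.0
points = [1 - 2 ** -n for n in range(1, 20)]

in_open = all(0 <= x < 1 for x in points)   # every term sits in the open interval
limit_in_open = 0 <= limit < 1              # the limit does not
limit_in_closed = 0 <= limit <= 1           # the closed interval contains it

print(in_open, limit_in_open, limit_in_closed)  # True False True
```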

Sounds simple enough, but here’s the conundrum. If you have two open sets that do not overlap, they will always be separated by the boundary that isn’t part of either of them. And if you have two closed sets that touch, the boundary is part of both, meaning they also overlap. In neither case can the objects touch without overlapping. Now what? This puzzle was so important to them that Bolzano went on to suggest that objects may occupy sets that are partially open and partially closed. While technically possible, it’s hard to see why objects would, in more than one spatial dimension, always arrange so that one object’s closed surface touches the other’s open patches.

More time went by, and on the stage of science appeared the notion of fields that mediate interactions between things. Now objects could interact without touching – awesome. But if they don’t repel, what happens when they get closer? Do they or don’t they touch eventually? Or does interacting via a field mean they touch already? Before anybody started worrying about this, science moved on, and we learned that the field is quantized and the interaction is really just mediated by the particles that make up the field. So how do we now even phrase the question of whether two objects touch?

We can approach this by specifying that by an “object” we mean a bound state of many atoms. The short-distance interaction of these objects will (at room temperature, normal atmospheric pressure, non-relativistically, etc.) take place primarily by exchanging (virtual) photons. The photons do in no sensible way belong to any one of the objects, so it seems fair to say that the objects don’t touch. They don’t touch, in one sentence, because there is no four-fermion interaction in the standard model of particle physics.

Alas, tying touch to photon exchange doesn’t make much sense when we think about the way we normally use the word; for one thing, it says nothing about distance. A more sensible definition would make use of the probability of an interaction: two objects touch (in some region) if their probability of interaction (in that region) is large, whether or not it was mediated by a messenger particle. This neatly solves the topologists’ problem, because in quantum mechanics two objects can indeed overlap.

What one means with “large probability” of interaction is somewhat arbitrary of course, but quantum mechanics being as awkward as it is there’s always the possibility that your finger tunnels through your brain when you try to hit your nose, so we need a quantifier because nothing is ever absolutely certain. And then, after all, you can touch your nose! You already knew that, right?

But if you think this settles it, let me add...

There is a non-vanishing probability that when you touch (attempt to touch?) something you actually exchange electrons with it. This opens a new can of worms because now we have to ask what is “you”? Are “you” the collection of fermions that you are made up of and do “you” change if I remove one electron and replace it with an identical electron? Or should we in that case better say that you just touched something else? Or are “you” instead the information contained in a certain arrangement of elementary particles, irrespective of the particles themselves? But in this case, “you” can never touch anything just because you are not material to begin with. I will leave that to you to ponder.

And so, after having spent an hour staring at that cartoon in my facebook feed, I came to the conclusion that the question isn’t whether we can touch something, but what we mean with “some thing”. I think I had been looking for some thing else though…

Best source I could find for this image: IFLS.


Hamas is trying to kill as many civilians as it can.

Israel is trying to kill as few civilians as it can.

Neither is succeeding very well.

**Update (July 28):** Please check out a superb essay by Sam Harris on the Israeli/Palestinian conflict. While, as Harris says, the essay contains “something to offend everyone”—even me—it also brilliantly articulates many of the points I’ve been trying to make in this comment thread.

See also a good HuffPost article by Ali A. Rizvi, a “Pakistani-Canadian writer, physician, and musician.”

Sorry for the posting drought. There is a good reason: I'm in the final stages of a textbook based on courses I developed about nanostructures and nanotechnology. It's been an embarrassingly long time in the making, but I'm finally to the index-plus-final-touches stage. I'll say more when it's in to the publisher.

One other thing: I'm going to a 1.5 day workshop at NSF in three weeks about the next steps regarding the NNIN. I've been given copies of the feedback that NSF received in their request for comment period, but if you have additional opinions or information that you'd like aired there, please let me know, either in the comments or via email.


One interesting feature of the heuristics of Garton, Park, Poonen, Wood, Voight, discussed here previously: they predict there are fewer elliptic curves of rank 3 than there are of rank 2. Is this what we believe? On one hand, you might believe that having three independent points should be “harder” than having only two. But there’s the parity issue. All right-thinking people believe that there are equally many rank 0 and rank 1 elliptic curves, because 100% of curves with even parity have rank 0, and 100% of curves with odd parity have rank 1. If a curve has even parity, all that has to happen to force it to have rank 2 is to have a non-torsion point. And if a curve has odd parity, all that has to happen to force it to have rank 3 is to have one more non-torsion point you don’t know about. So in that sense, it seems “equally hard” to have rank 2 or rank 3, given that parity should be even half the time and odd half the time.

So my intuition about this question is very weak. What’s yours? Should rank 3 be less common than rank 2? The same? *More* common?


Hidden in my papers with Chip Sebens on Everettian quantum mechanics is a simple solution to a fun philosophical problem with potential implications for cosmology: the quantum version of the Sleeping Beauty Problem. It’s a classic example of self-locating uncertainty: knowing everything there is to know about the universe except where you are in it. (Skeptic’s Play beat me to the punch here, but here’s my own take.)

The setup for the traditional (non-quantum) problem is the following. Some experimental philosophers enlist the help of a subject, Sleeping Beauty. She will be put to sleep, and a coin is flipped. If it comes up heads, Beauty will be awoken on Monday and interviewed; then she will (voluntarily) have all her memories of being awakened wiped out, and be put to sleep again. Then she will be awakened again on Tuesday, and interviewed once again. If the coin came up tails, on the other hand, Beauty will only be awakened on Monday. Beauty herself is fully aware ahead of time of what the experimental protocol will be.

So in one possible world (heads) Beauty is awakened twice, in identical circumstances; in the other possible world (tails) she is only awakened once. Each time she is asked a question: “What is the probability you would assign that the coin came up tails?”

(Some other discussions switch the roles of heads and tails from my example.)

The Sleeping Beauty puzzle is still quite controversial. There are two answers one could imagine reasonably defending.

- “Halfer” — Before going to sleep, Beauty would have said that the probability of the coin coming up heads or tails would be one-half each. Beauty learns nothing upon waking up. She should assign a probability one-half to it having been tails.
- “Thirder” — If Beauty were told upon waking that the coin had come up heads, she would assign equal credence to it being Monday or Tuesday. But if she were told it was Monday, she would assign equal credence to the coin being heads or tails. The only consistent apportionment of credences is to assign 1/3 to each possibility, treating each possible waking-up event on an equal footing.

The Sleeping Beauty puzzle has generated considerable interest. It’s exactly the kind of wacky thought experiment that philosophers just eat up. But it has also attracted attention from cosmologists of late, because of the measure problem in cosmology. In a multiverse, there are many classical spacetimes (analogous to the coin toss) and many observers in each spacetime (analogous to being awakened on multiple occasions). Really the SB puzzle is a test-bed for cases of “mixed” uncertainties from different sources.

Chip and I argue that if we adopt Everettian quantum mechanics (EQM) and our Epistemic Separability Principle (ESP), everything becomes crystal clear. A rare case where the quantum-mechanical version of a problem is actually easier than the classical version.

In the quantum version, we naturally replace the coin toss by the observation of a spin. If the spin is initially oriented along the *x*-axis, we have a 50/50 chance of observing it to be up or down along the *z*-axis. In EQM that’s because we split into two different branches of the wave function, with equal amplitudes.

Our derivation of the Born Rule is actually based on the idea of self-locating uncertainty, so adding a bit more to it is no problem at all. We show that, if you accept the ESP, you are immediately led to the “thirder” position, as originally advocated by Elga. Roughly speaking, in the quantum wave function Beauty is awakened three times, and all of them are on a completely equal footing, and should be assigned equal credences. The same logic that says that probabilities are proportional to the amplitudes squared also says you should be a thirder.

But! We can put a minor twist on the experiment. What if, instead of waking up Beauty twice when the spin is up, we observe another spin? If that second spin is also up, she is awakened on Monday, while if it is down, she is awakened on Tuesday. Again we ask what probability she would assign that the first spin was down.

This new version has three branches of the wave function instead of two, as illustrated in the figure. And now the three branches don’t have equal amplitudes; the bottom one is (1/√2), while the top two are each (1/√2)^{2} = 1/2. In this case the ESP simply recovers the Born Rule: the bottom branch has probability 1/2, while each of the top two have probability 1/4. And Beauty wakes up precisely once on each branch, so she should assign probability 1/2 to the initial spin being down. This gives some justification for the “halfer” position, at least in this slightly modified setup.
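
For the modified setup, the Born-rule arithmetic is a one-liner; the branch amplitudes below are taken straight from the text:

```python
import math

# Amplitudes for the three branches of the modified experiment:
# spin-down (one waking), and spin-up split by a second spin into
# a Monday waking and a Tuesday waking.
r = 1 / math.sqrt(2)
amplitudes = [r, r * r, r * r]    # 1/sqrt(2), 1/2, 1/2

# Born rule: probability = amplitude squared
probs = [a ** 2 for a in amplitudes]
print(probs)                      # approximately [0.5, 0.25, 0.25]
```

Since Beauty wakes exactly once on each branch, she assigns probability 1/2 to the first spin being down – the “halfer” answer in this variant.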

All very cute, but it does have direct implications for the measure problem in cosmology. Consider a multiverse with many branches of the cosmological wave function, and potentially many identical observers on each branch. Given that you are one of those observers, how do you assign probabilities to the different alternatives?

Simple. Each observer *O _{i}* appears on a branch with amplitude *ψ _{i}*; the ESP instructs us to assign that observer the weight *w _{i}* = |*ψ _{i}*|^{2}, and your credence in being observer *O _{i}* is *w _{i}* divided by the sum of all the weights.

It looks easy, but note that the formula is not trivial: the weights *w _{i}* will not in general add up to one, since they might describe multiple observers on a single branch and perhaps even at different times. This analysis, we claim, defuses the “Born Rule crisis” pointed out by Don Page in the context of these cosmological spacetimes.
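
As a toy illustration of that normalization, take the original experiment and assume, as in the thirder argument above, that each waking event inherits the full Born-rule weight of its branch. The spin-up branch then contributes its weight twice, once per waking, so the raw weights sum to 3/2 rather than 1:

```python
# Raw weights: heads/Monday, heads/Tuesday, tails/Monday.
# The spin-up branch (|1/sqrt(2)|^2 = 1/2) hosts two wakings, so its
# weight appears twice and the weights do not add up to one.
weights = [0.5, 0.5, 0.5]
total = sum(weights)                      # 1.5, not 1
credences = [w / total for w in weights]  # normalize to get probabilities
print(credences)                          # each waking gets credence 1/3
```

After normalization each waking gets credence 1/3, recovering the thirder answer for the original setup.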

Sleeping Beauty, in other words, might turn out to be very useful in helping us understand the origin of the universe. Then again, plenty of people already think that the multiverse is just a fairy tale, so perhaps we shouldn’t be handing them ammunition.

Given the recent Feynman explosion (timeline of events), some people may be casting about looking for an alternative source of colorful-character anecdotes in physics. Fortunately, the search doesn’t need to go all that far– if you flip back a couple of pages in the imaginary alphabetical listing of physicists, you’ll find a guy who fits the bill very well: Enrico Fermi.

Fermi’s contributions to physics are arguably as significant as Feynman’s. He was the first to work out the statistical mechanics of particles obeying the Pauli exclusion principle, now called “fermions” in his honor (Paul Dirac did the same thing independently a short time later, so the mathematical function is the “Fermi-Dirac distribution”). He was also the first to develop a theory for the weak nuclear interaction, placing Wolfgang Pauli’s desperate suggestion of the existence of the neutrino on a sound theoretical footing. Fermi’s theory was remarkably successful, and anticipated or readily incorporated the next thirty-ish years of discoveries.

More than that, he was a successful *experimental* physicist. He did pioneering experiments with neutrons, including demonstrating the fission of heavy elements (though he initially misinterpreted this as the creation of transuranic elements) and was the first to successfully construct a nuclear reactor, as part of the Manhattan Project. The US’s great particle physics lab is named Fermilab in his honor.

One of the difficult things about replacing Feynman is that a lot of the genuinely admirable things about his approach to physics are sort of bound up with his personality. Meaning that it’s easy to slide from approach-to-physics stuff – spinning plates at Cornell, etc. – through relatively wholesome anecdotes – dazzling off-the-cuff calculations, cracking safes at Los Alamos – into the strip clubs and other things that make Feynman a polarizing figure.

Fermi brings a lot of the same positive features without the baggage. He had a similarly playful approach to a lot of physics-related things– the whole notion of “Fermi problems” and back-of-the-envelope calculations is pretty much essential to the physics mindset. Wikipedia has a great secondhand quote:

I can calculate anything in physics within a factor 2 on a few sheets: to get the numerical factor in front of the formula right may well take a physicist a year to calculate, but I am not interested in that.

He was also a charming and witty guy, with a quirky sense of humor (the photo above has a mistake in one of the equations, and people have spent years debating whether that was deliberate, because it’s the kind of thing he might’ve done as a joke on the PR people). He even has his own great Manhattan Project anecdotes– he famously estimated the strength of the blast by dropping pieces of paper and pacing off how far they blew when the shock wave hit, and prior to the Trinity test was reprimanded by Oppenheimer for running a betting pool on whether the test would ignite the atmosphere and obliterate life on Earth.

He also has the wide-ranging interests going for him. His name pops up all over physics, from statistical mechanics to the astrophysics of cosmic rays. And just like Feynman is more likely to be cited in popular writing for inspiring either nanotechnology or quantum computing than for his work on QED, Fermi’s true source of popular immortality is that damn “paradox” about aliens.

While the personal anecdotes may not quite stack up to those about Feynman, there isn’t the same dark edge. Fermi was happily married, and in fact moved to the US because of his wife, who was Jewish and subject to the racist policies put in place by Mussolini (they left directly from the Nobel Prize ceremony in 1938, where Fermi picked up a prize for work done in Rome). I’m not aware of any salacious Fermi stories, so he’s much safer in that regard.

Given all that, why is Fermi so much less well-known than Feynman? Partly because a lot of his contributions to physics were excessively practical– experimentalists tend to be less mythologized than theorists, and his greatest theoretical contributions came through cleaning up ideas proposed by Pauli. Mostly, though, it’s because he was a generation older than Feynman and died younger, in 1954. He never got the chance to be a grand old man, and didn’t live into the era where the sort of colorful anecdotage that so inflates Feynman’s status became popular. Had he lived another twenty years, things might’ve been different.

(It’s also interesting to speculate about what Schrödinger’s reputation would be had he been closer to Feynman’s age than Einstein’s. Schrödinger would’ve loved the Sixties, between his fascination with Vedic philosophy and the whole “free love” thing. But had he been alive through then, the skeevy aspects of his personal life would probably be better known, because most of his more sordid activities took place in an era when people didn’t really talk about that sort of thing in public, let alone write best-selling autobiographies about it.)

Anyway, that’s my plug for Fermi. If you find Feynman too problematic to promote– and that’s an entirely reasonable decision– Fermi gets you a lot of the same good stuff (great physicist, playful approach to science, charming and personable guy), without the darker side. He should get more press.

——

(That said, Fermi is one of the figures I regret not being able to feature more prominently in the forthcoming book. The problem is, the focus of the book is on process, and Fermi’s one of those guys whose process of discovery consisted mostly of “be super smart, and work really hard.” I couldn’t come up with a way to fit him into the framework of the book, other than the notion of back-of-the-envelope estimation. But you can’t do that without bringing math into it, and that would’ve pushed the book into a different category…)