## August 24, 2013

### Zombies

Normally, I wouldn’t touch a paper with the phrase “Boltzmann brains” in the title with a 10-foot pole. And anyone accosting me, intent on discussing the subject, would normally be treated as one of the walking undead.

But Sean Carroll wrote a paper and a blog post and I really feel the need to do something about it.

Sean’s “idea,” in a nutshell, is that the large-field instability of the Standard-Model Higgs potential — if the top mass is a little heavier than current observations tell us that it is — is a “feature”: our (“false”) vacuum will eventually decay (with a mean lifetime somewhere on the order of the age of the universe), saving us from being Boltzmann brains.

This is plainly nuts. How can a phase transition that may or may not take place, billions of years in the future, affect anything that we measure in the here-and-now? And, if it doesn’t affect anything in the present, why do I #%@^} care?

The whole Boltzmann brain “paradox” is a category error, anyway.

The same argument leads us to conclude that human civilization (and perhaps all life on earth) will collapse sometime in the not-too-distant future. If not, then “most” human beings — out of all the humans who have ever lived, or ever will live — live in the future. So, if I am a typical human (and I have no reason to think that I am atypical), then I am overwhelmingly likely to be living in the future. So why don’t I have a rocket car? To avoid this “paradox,” we conclude that human civilization must end before the number of future humans becomes too large.

The trouble is that there is no theory of probability (Bayesian, frequentist, unicorn, …) under which the reasoning of the previous paragraph is valid. In any theory of probability that I know of, it’s either nonsensical or wrong.

Now where’s my shovel … ?

Posted by distler at August 24, 2013 2:10 PM

TrackBack URL for this Entry:   http://golem.ph.utexas.edu/cgi-bin/MT-3.0/dxy-tb.fcgi/2651

### Re: Zombies

Couldn’t the Doomsday Argument be expressed as a type of maximum-likelihood estimation? The bigger the future population, the more unlikely your own position near the beginning of history.

Posted by: Mitchell Porter on August 24, 2013 8:14 PM | Permalink | Reply to this

### Re: Zombies

Couldn’t the Doomsday Argument be expressed as a type of maximum-likelihood estimation? The bigger the future population, the more unlikely your own position near the beginning of history.

Again, you need to specify what theory of probability you are using, before words like “likely” and “unlikely” have meaning. The details of what’s wrong with the argument depend on whether you’re a (non-Hindu) frequentist, or a Bayesian.

(And, no, in neither case do I think it can be phrased as an MLE.)

Posted by: Jacques Distler on August 24, 2013 10:26 PM | Permalink | PGP Sig | Reply to this

### Re: Zombies

This is how I understand Sean’s argument:
1) There are many more insects in this world than human beings.
2) It is thus very unlikely that I experience human consciousness instead of mosquito consciousness.
3) Fortunately we have DDT !

Posted by: wolfgang on August 25, 2013 11:29 AM | Permalink | Reply to this

### Re: Zombies

For us Buddhists, who believe in reincarnation (among various types of life forms), that argument is not crazy at all, and is a real problem facing most of us.

Most of us will reincarnate as insects. Or bacteria.

In that sense we take the Boltzmann brain problem seriously and at face value. There is a famous maxim in Buddhism, saying that “it’s extremely rare to be born with human consciousness so that we can learn something. So, study Buddhism.”

It’s not that I wanted to preach Buddhism; I’m sorry if this post sounded like that. I just wanted to point out that in a different culture the Boltzmann brain “problem” can be a part of an accepted fact, in some sense.

Posted by: YT on August 25, 2013 6:31 PM | Permalink | Reply to this

### Re: Zombies

For us Buddhists, who believe in reincarnation (among various types of life forms), that argument is not crazy at all, and is a real problem facing most of us.

Fair enough.

I should have included Buddhist, along with Hindu, frequentists, as ones who would agree that the Boltzmann brain problem is well-posed.

It’s not clear to me whether being reincarnated as a Boltzmann brain is considered a possibility. Presumably, you would have had to have done something particularly bad in your previous life to deserve that fate.

Posted by: Jacques Distler on August 25, 2013 7:24 PM | Permalink | PGP Sig | Reply to this

### Re: Zombies

I am neither Buddhist nor Hindu, but I assume you would *not* want to use DDT on insects to reduce the probability of being reborn with mosquito consciousness.
I would then assume that you also reject Sean’s solution of using the Higgs to eliminate Boltzmann brains.

Posted by: wolfgang on August 25, 2013 8:48 PM | Permalink | Reply to this

### Re: Zombies

In both the paper and the blog post I explain that our reasoning is quite different from the silly arguments rejected above. Naturally, taking the time to read them, understand the point, and engage constructively is a bit of effort with which not everyone will choose to bother.

Posted by: Sean Carroll on August 25, 2013 12:16 PM | Permalink | Reply to this

### Constructive engagement

Your notion of “cognitive instability” is better understood as the statement that a proper Bayesian, even if his priors strongly favour the hypothesis that he is a Boltzmann brain, will very quickly come to reject that hypothesis.

Call it survivorship bias, if you wish, but Bayesians have no Boltzmann brain problem (and frequentists would reject the “problem” as nonsensical in the first place).

Posted by: Jacques Distler on August 25, 2013 12:37 PM | Permalink | PGP Sig | Reply to this

### Re: Zombies

Bayes’ Theorem is the inverse theorem of probability, but so-called Bayesian probability is a species of Conceptualist philosophy, reducing probability statements to statements of belief. So you can reparse the arguments as Conceptualism tinged with Futurism, passing as topical commentary on high-tech. Whole Empires are seen to go down in this way with somewhat predictable frequency, a pattern which fascinated Arnold Toynbee. This is now called Global Warming, and I still say it’s really (in the sense of Scientific Realism, of course) an issue in academic hot air…

Posted by: Orwin O'Dowd on August 27, 2013 3:28 PM | Permalink | Reply to this

### Re: Zombies

A few years ago, Jim Hartle and I wrote some papers trying to put all possible assumptions into a uniform Bayesian framework; short version (no equations) is http://arxiv.org/abs/1004.3816

As best I can tell, the argument made by Sean (and all those who believe that there is a “BB problem”) is this: If there are a lot of BBs somewhere in spacetime whose apparent observations coincide with ours (as of this moment in time for us), and if we assume that “we” are equally likely to be any one of all the observers with our current data, then this setup is ruled out a microsecond later (because the great majority of the BBs with our data will have fluctuated away). Therefore, Sean et al conclude, we must reject any theory that allows for a lot of BBs that have the same apparent data as we do.

Jim Hartle and I point out that another resolution is to discard the “equally likely” assumption, which (we also point out) cannot be derived from any underlying theory. Instead, it must be independently assumed, and therefore should be treated like any other scientific assumption, rather than adhered to dogmatically (as Sean and friends do).

Posted by: Mark Srednicki on September 5, 2013 3:24 PM | Permalink | Reply to this

### Bayesian

A Bayesian’s inner dialogue:

I think that, perhaps, I am a Boltzmann brain. Could that be true?
Let me think about it for a second.

Nope! I’m definitely not a Boltzmann brain.

In equations:

• Let $\epsilon$ be the probability that the thermal fluctuation constituting a Boltzmann brain is not wiped out in the next microsecond. $\epsilon$ is a ridiculously small number, because such large thermal fluctuations are incredibly ephemeral.
• Let $P_0=1-\delta$ be your prior probability that you are a Boltzmann brain. Sean would like to argue that $\delta$ is an even more ridiculously small (but nonzero) number. It doesn’t matter.
• After $n$ microseconds, assuming you have survived, your posterior probability (which becomes your prior for the $(n+1)^{\text{st}}$ microsecond) is $P_n = \frac{\epsilon^n (1-\delta)}{\delta + \epsilon^n (1-\delta)}$

Even if we grant Sean that $\delta\ll\epsilon$, as soon as $\epsilon^n\sim \delta$ (which happens very quickly), $P_n$ drops, like a stone, to zero. All memory of Sean’s (dumb) prior is erased.
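The collapse of $P_n$ is easy to check numerically. A minimal sketch (the particular values of $\epsilon$ and $\delta$ below are illustrative choices, not taken from the post):

```python
import math

# Illustrative values (chosen here for the demo), with delta << epsilon:
EPSILON = 1e-3   # chance that a BB fluctuation survives one more microsecond
DELTA = 1e-12    # 1 - P_0, where P_0 is the prior probability of being a BB

def posterior(n):
    """P_n = eps^n (1 - delta) / (delta + eps^n (1 - delta))."""
    s = EPSILON ** n * (1 - DELTA)
    return s / (DELTA + s)

# The prior starts essentially at 1 ...
assert posterior(0) > 0.999
# ... and collapses once epsilon^n ~ delta, i.e. n ~ log(delta)/log(epsilon):
n_star = math.log(DELTA) / math.log(EPSILON)   # 4 microseconds, for these values
assert posterior(10) < 1e-6
```

Whatever ridiculously small values one picks, only the ratio of logarithms matters: the drop happens after a handful of microseconds.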

Posted by: Jacques Distler on September 6, 2013 8:32 PM | Permalink | PGP Sig | Reply to this

### Re: Bayesian

Jacques, I believe Sean would agree with everything you say. But you haven’t addressed the crucial next step in his argument: if we agree that we’re not Boltzmann brains (on the basis of the calculation that you just presented), can we tolerate a cosmology that nevertheless predicts the existence of a lot of Boltzmann brains (with precisely our data/memories as of right now)?

Sean says no, those BBs can’t exist, because if they did, we would have been equally likely to have been any one of them, and then it’s overwhelmingly likely that we would no longer exist. But we do still exist. So we’re not BBs, and so therefore we must insist on a cosmology that prevents the existence of a lot of BBs anywhere in spacetime.

Posted by: Mark Srednicki on September 9, 2013 12:05 PM | Permalink | Reply to this

### Re: Bayesian

Sean says no, those BBs can’t exist, because if they did, we would have been equally likely to have been any one of them…

Umh, no. That’s where we part company.

You need to have a definition of what the word “likely” means in that sentence. That is, you need to say what theory of probability you’re working with.

I’ve just presented the argument that a Bayesian would not conclude that we are “equally likely” to be a BB.

A frequentist (who doesn’t believe in reincarnation) would say the sentence is meaningless. Yuji, above, has argued that a frequentist, who does believe in reincarnation, does have a Boltzmann brain problem (but, presumably, only if you can be reincarnated as a BB).

But, since I’m pretty sure that Sean does not believe in reincarnation, I am hard-pressed to understand why (aside from sloppy thinking about the meaning of the word “likely”) he has a BB problem.

Posted by: Jacques Distler on September 9, 2013 12:41 PM | Permalink | PGP Sig | Reply to this

### Re: Bayesian

The theory of probability in use is Bayesian. Jim Hartle and I went through this in some detail in http://arxiv.org/abs/0906.0042.

If you are going to consider a model of cosmology that predicts that there are BBs with precisely our data, then, in order to make predictions in this model of what “we” will see next, the model must be supplemented with a probability distribution over the set of copies. Jim and I call this the “xerographic distribution”. You can think of it as a Bayesian prior that gives the probability that “we” are any particular one of the copies. (For various reasons, Jim and I prefer not to call the xerographic distribution a prior, but that’s just semantics.)

Sean (and many other well known physicists, including Page, Linde, and Vilenkin) firmly believe that the only xerographic distribution that can be considered is a uniform one.

If we insist upon a uniform xerographic distribution, then any model with a large number of BBs with precisely our data is ruled out a microsecond later, by the argument that you have given. (By “ruled out”, I mean that its posterior probability has dropped dramatically after we have acquired this new microsecond’s worth of data, as long as we have assigned some nonzero prior probability to at least one model that predicts no BBs.)

Thus, if we stick dogmatically to a uniform xerographic distribution, I believe that Sean et al are correct: cosmological models that predict a large number of BBs are “ruled out”. Therefore, as theorists, we should concentrate our attention on constructing cosmological models that do not predict a large number of BBs.

My own personal philosophical belief (and Jim Hartle’s) is that we should not insist upon a uniform xerographic distribution.

But I see no other way around Sean et al’s conclusions.

Posted by: Mark Srednicki on September 9, 2013 1:53 PM | Permalink | Reply to this

### Re: Bayesian

You can think of it as a Bayesian prior that gives the probability that “we” are any particular one of the copies. (For various reasons, Jim and I prefer not to call the xerographic distribution a prior, but that’s just semantics.)

The whole point of the Bayesian theory is to provide a degree of robustness against badly-chosen priors. You’re not being Bayesian if you don’t avail yourself of Bayes’ Theorem!

My own personal philosophical belief (and Jim Hartle’s) is that we should not insist upon a uniform xerographic distribution.

You and Jim argue that the xerographic prior is a poor one, and we can quantify that.

A “good” prior is one that allows you to converge as quickly as possible to the “correct” probability distribution, through the application of Bayes’ Theorem. The xerographic distribution yields the prior discussed above, where $P_0 = 1-\delta$ with $\delta$ very tiny. Applying Bayes’ Theorem, we see that $P_n$ stays near 1 until $n\sim \frac{\log\delta}{\log\epsilon}$ microseconds, at which point it abruptly drops to near zero.

That’s very fast, but a prior that converges even faster is to start with $P_0$ near 0. That converges for $n\sim O(1)$ (with disappointing results for the BB observer, but he’s not around to care).

So I agree with your conclusion. I just disagree that it is merely a “philosophical belief.”
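The relative convergence rates can also be checked in a few lines (again with illustrative values of $\epsilon$ and $\delta$, chosen here rather than taken from the thread):

```python
import math

EPSILON = 1e-3   # illustrative per-microsecond survival probability of a BB
EPSILON = EPSILON  # (kept explicit: not a measured quantity)
DELTA = 1e-12    # illustrative: the xerographic prior is P_0 = 1 - DELTA

def p_n(p0, n):
    """Posterior probability of being a BB after surviving n microseconds."""
    s = (EPSILON ** n) * p0
    return s / ((1 - p0) + s)

def microseconds_to_converge(p0, threshold=0.01):
    """First n at which the posterior drops below the threshold."""
    return next(n for n in range(1000) if p_n(p0, n) < threshold)

# The xerographic prior P_0 = 1 - delta takes ~ log(delta)/log(epsilon) steps,
n_xero = microseconds_to_converge(1 - DELTA)
assert n_xero >= round(math.log(DELTA) / math.log(EPSILON))  # = 4 here
# while a prior already near 0 has converged before the first update.
assert microseconds_to_converge(0.001) == 0
```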

Posted by: Jacques Distler on September 9, 2013 2:22 PM | Permalink | PGP Sig | Reply to this

### Re: Bayesian

The whole point of the Bayesian theory is to provide a degree of robustness against badly-chosen priors. You’re not being Bayesian if you don’t avail yourself of Bayes’ Theorem!

Jim and I (and Sean!) are using Bayes’ Theorem. Bayes’ Theorem tells us that we should reject (with high confidence) the joint hypothesis of (1) a cosmological model that predicts lots of BBs, and (2) a uniform prior over the set of all identical copies of us (including the BBs). But Bayes’ Theorem does not tell us which part to reject, (1) or (2) (or both).

You are absolutely sure that (2) should be rejected. Sean (and many others) are absolutely sure that (1) should be rejected.

The main point that Jim and I would like to make is that neither of you should be quite so sure, since there is no way to test anything except the combination of (1) and (2). (Except, of course, by trying to find other consequences, unrelated to BBs, of any competing cosmological models.)
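This degeneracy can be made concrete with toy numbers (all values below are illustrative, chosen for this sketch; the hypothesis labels are mine):

```python
EPS = 1e-3   # illustrative: chance a BB fluctuation survives one microsecond

# Joint hypotheses: (cosmological model, xerographic distribution).
# p_bb is the probability the xerographic distribution assigns to "we are a BB".
hypotheses = {
    ("BB-rich", "uniform xero"):    {"prior": 1/3, "p_bb": 1 - 1e-12},
    ("BB-rich", "nonuniform xero"): {"prior": 1/3, "p_bb": 1e-12},
    ("no-BB",   "any xero"):        {"prior": 1/3, "p_bb": 0.0},
}

def likelihood(p_bb):
    """Probability of the data ("we survived another microsecond")."""
    return p_bb * EPS + (1 - p_bb) * 1.0

evidence = sum(h["prior"] * likelihood(h["p_bb"]) for h in hypotheses.values())
posterior = {k: h["prior"] * likelihood(h["p_bb"]) / evidence
             for k, h in hypotheses.items()}

# The joint hypothesis (BB-rich model, uniform xerographic distribution)
# collapses after one microsecond of data ...
assert posterior[("BB-rich", "uniform xero")] < 0.01
# ... but the data cannot separate the remaining two hypotheses: their
# posterior odds are (essentially) unchanged from the prior odds of 1:1.
ratio = posterior[("BB-rich", "nonuniform xero")] / posterior[("no-BB", "any xero")]
assert abs(ratio - 1.0) < 1e-9
```

Only the combination is tested, exactly as stated above: the data rules out the product, not either factor separately.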

Posted by: Mark Srednicki on September 9, 2013 3:36 PM | Permalink | Reply to this

### Re: Bayesian

You are absolutely sure that (2) should be rejected. Sean (and many others) are absolutely sure that (1) should be rejected.

Not quite. I am arguing (in perfect accord with your and Jim’s analysis) that in any cosmological model in which (1) is true, you should reject (2).

If you have a cosmological model in which (1) is false, then Bayes’ Theorem doesn’t tell you anything that you didn’t know without applying Bayes’ Theorem.

(Except, of course, by trying to find other consequences, unrelated to BBs, of any competing cosmological models.)

Well, exactly. There may be other reasons to prefer one cosmological model over another, but considerations of Boltzmann brains isn’t one of them.

Posted by: Jacques Distler on September 9, 2013 4:13 PM | Permalink | PGP Sig | Reply to this

### Re: Bayesian

I am arguing (in perfect accord with your and Jim’s analysis) that in any cosmological model in which (1) is true, you should reject (2).

Fine, but absolutely no one disagrees with that!

The question is, if we’re not sure which cosmological model is right (one with BBs and one without), to the extent that (before any consideration of BBs) we assign them equal prior probability, should we strongly favor the one without BBs after we perform the calculation that you did above?

My opinion is that there is not a truly compelling argument either way (but I personally lean towards no).

Posted by: Mark Srednicki on September 9, 2013 4:45 PM | Permalink | Reply to this

### Re: Bayesian

My opinion is that there is not a truly compelling argument either way (but I personally lean towards no).

At this point, I think I am just repeating myself, but my conclusion is that a proper application of Bayes’ Theorem tells you that consideration of Boltzmann brains provides no help at all in deciding between those cosmological models.

Posted by: Jacques Distler on September 9, 2013 5:23 PM | Permalink | PGP Sig | Reply to this

### Re: Bayesian

Prof. Jacques,

You have calculated the posterior probability of being a BB given data.

Surely the relevant probability is that of a particular cosmo model, given the data,

$p(\text{model} | \text{data}) \propto p(\text{data} | \text{model}) \times p(\text{model})$

where the model is the cosmo model under investigation and the data is that our world did not suddenly disappear.

Isn’t Prof. Carroll and Srednicki’s argument that $p(\text{data} | \text{model})$, and thus $p(\text{model} | \text{data})$, is small?

I’m not necessarily saying the argument is sound.

Best,

Andrew.

Posted by: andrew on February 17, 2014 11:43 AM | Permalink | Reply to this

### Re: Bayesian

Ah sorry I’ve got it now,

$p(\text{data} | \text{model}) = p(\text{data} | \text{model}, \text{BB})\, p(\text{BB} | \text{model}) + p(\text{data} | \text{model}, \text{NOT BB})\, p(\text{NOT BB} | \text{model})$

The second probability is the Xerox distribution. You are arguing that the Xerox distribution $p(\text{NOT BB} | \text{model})$ should be $\simeq 1$, and that $p(\text{data} | \text{model})$ is therefore $\simeq 1$.

Sean Carroll is unsure whether to reject his model or his Xerox distribution? Either could be responsible for a small $p(\text{data} | \text{model})$.

Have I understood correctly?

Best,

Andrew.

Posted by: andrew on February 17, 2014 12:10 PM | Permalink | Reply to this

### Re: Bayesian

Sean Carroll is unsure whether to reject his model or his Xerox distribution?

If you replace “Sean Carroll” with “Mark Srednicki,” I think you have given an adequate summary of the argument.

My response is that the Xerographic Distribution is a Bayesian prior, and you should use Bayes’ Theorem to update your prior, based on new information.

The whole point of Bayes’ Theorem is to render your conclusions insensitive to the initial choice of prior. Any conclusion that (still) depends strongly on your initial choice of prior is prima facie unreliable.

Whatever model of particle physics you choose, your posterior probability that you are not a BB either is 1 (in a model that does not admit BBs) or rapidly approaches 1 (in a model that does). Ergo, you cannot distinguish between models on that basis.

Posted by: Jacques Distler on February 17, 2014 2:19 PM | Permalink | PGP Sig | Reply to this

### Re: Bayesian

Thanks for taking the time to reply, and sorry if my math was unreadable - writing latex on my tablet was too tricky - let me write it again so we definitely understand each other.

I wrote that the relevant probability was

$p(\text{model} | \text{data}) \propto p(\text{data} | \text{model}) \times p(\text{model})$

I then wrote that

$p(\text{data} | \text{model}) = p(\text{data} | \text{model}, \text{we are BB})\, p(\text{we are BB} | \text{model}) + p(\text{data} | \text{model}, \text{we are NOT BB})\, p(\text{we are NOT BB} | \text{model})$

The second probabilities, $p(\text{we are NOT BB} | \text{model}) = 1 - p(\text{we are BB} | \text{model})$, are the xerographic distributions.

We know that $p(\text{data} | \text{model}, \text{we are BB}) \simeq 0$ and that $p(\text{data} | \text{model}, \text{we are NOT BB}) \simeq 1$.

Sean Carroll’s Xerographic distribution is that $p(\text{we are NOT BB} | \text{model})\simeq 1$. So he should conclude that either his Xerographic distribution was wrong or he should reject his model.

Right now, I find Sean Carroll’s argument convincing/cogent. You say that the Xerographic distribution is wrong and that $p(\text{we are NOT BB} | \text{model})\simeq 0$. The BB arguments therefore have no power to help us pick models.

You say that the Xerographic distribution is wrong because we should pick a prior, $p(\text{we are NOT BB} | \text{model})$, such that our posterior $p(\text{we are NOT BB} | \text{model}, \text{data})$ converges as quickly as possible? I don’t follow that. Shouldn’t we pick priors that best represent our state of knowledge prior to seeing the data?

Maybe I’ve misunderstood. Let me ask it another way. The Xerox distribution appears on its own in my equations, i.e. it is not conditioned on the data. Why should it be?

Best,

Andrew.

PS Prof. Carroll, Prof. Srednicki, sorry if I am misrepresenting your opinions/arguments.

Posted by: andrew on February 18, 2014 8:26 AM | Permalink | Reply to this

### Re: Bayesian

Ah! Typos! Sorry, hopefully you can see that NOT BB ought to be just BB in all the probabilities after I write “Sean Carroll’s Xerographic…”

Posted by: andrew on February 18, 2014 12:28 PM | Permalink | Reply to this

### Re: Bayesian

You say that the Xerographic distribution is wrong because we should pick a prior…

No. I am saying that the choice of the Xerographic distribution is a prior. To quote Mark, above,

…in order to make predictions in this model of what “we” will see next, the model must be supplemented with a probability distribution over the set of copies.

This extra assumption, not part of the theory, is something you should treat with suspicion. Or, better, something to which you should apply Bayes’ Theorem, so as to improve your guess for that (a priori unknown) probability distribution.

Mark’s argument was that there were really two probability distributions. One was the Xerographic distribution; the other was what you call $p(\text{model})$. The data only constrains the product of the two, and so Bayes’ Theorem doesn’t tell you which one to update.

I agree with Mark that there’s a degeneracy that is not lifted by this data. In such a situation, however, the proper thing to do is to shrug your shoulders and admit that you cannot draw a firm conclusion from your observations.

The Xerox distribution appears on its own in my equations, i.e. it is not conditioned on the data. Why should it be?

Say you measured the SM parameters extremely accurately, and hence you knew which model you were in. This would lift Mark’s degeneracy. The data would then constrain the probability distribution over copies. Correct?

Now, since we haven’t measured those parameters to the requisite precision, we do have Mark’s degeneracy.

Mark claims it’s simply a matter of taste how you break that degeneracy (and you and Sean seem to prefer preserving the prior probability distribution over copies). I claim that, if your choice leaves you sensitive (indeed, exquisitely sensitive) to your choice of priors, then you have made a bad choice (one that violates the spirit, if not the letter, of Bayes’ Theorem).

The choice which does that is to use the data to constrain the probability distribution over copies. You then become insensitive both to your prior for that distribution and to your prior for the probability distribution over models.
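That insensitivity is easy to exhibit (a sketch, with an illustrative value of the survival probability $\epsilon$ chosen here, not taken from any model):

```python
EPSILON = 1e-3  # illustrative per-microsecond survival probability of a BB

def p_bb(p0, n):
    """Updated probability of being a BB, after surviving n microseconds."""
    s = (EPSILON ** n) * p0
    return s / ((1 - p0) + s)

def p_survive_next(p0, n):
    """Likelihood of surviving the next microsecond, using the updated copy distribution."""
    q = p_bb(p0, n)
    return q * EPSILON + (1 - q) * 1.0

# Once the distribution over copies is constrained by the data, a BB-rich
# model assigns the next microsecond essentially the same likelihood as a
# no-BB model (which assigns exactly 1) -- whatever prior p0 we started from:
for p0 in (1e-12, 0.5, 1 - 1e-12):
    assert abs(p_survive_next(p0, 30) - 1.0) < 1e-6
```

The conclusion depends on neither the prior over copies nor the prior over models, which is the sense in which Boltzmann brains cannot distinguish between the models.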

But now I am simply repeating myself ….

Posted by: Jacques Distler on February 20, 2014 12:56 AM | Permalink | PGP Sig | Reply to this

### Re: Zombies

Is not all of this just allowing physics to get sucked into solipsistic arguments? E.g., I find after a usec or so that I still exist (adhuc sum ergo non BB????). And yet a joker could suggest that I was a BB who was instantiated with a memory of wondering a usec ago whether I was a BB. Perhaps the joker is Kurt Vonnegut and my reality bounces among discrete such BB instantiations. To paraphrase KV: this topic has come unstuck in science. (Moreover I am chuckling at the button at the bottom of the blog page: “view chronologically”.)

### Re: Zombies

I am afraid that if the LHC (or some other experiment) does not find something really interesting to think about soon, theoretical physicists will finally go completely nuts.
But this means there is a non-zero probability that we are already physicists sitting in a madhouse dreaming bad dreams.
But this cannot be; therefore it follows that the LHC (or some other experiment) will find something really interesting soon!

Posted by: wolfgang on September 11, 2013 2:41 AM | Permalink | Reply to this

### Re: Zombies

What do you think of these two papers?

“Holographic Refrigerator”
http://arxiv.org/abs/1309.4089

“Looking Inside a Black Hole”
http://arxiv.org/abs/1309.4125

Posted by: Hector on September 19, 2013 12:24 PM | Permalink | Reply to this
Read the post The Bus Stop Problems
Weblog: Musings
Excerpt: Evan Soltas poses some puzzles.
Tracked: December 28, 2013 4:42 PM
