

May 27, 2004

High Energy Supersymmetry

I’ve really gotta stop posting about the Landscape. Posts on the subject rot your teeth and attract flies.

Still, it helps to sort out one’s thinking about anthropic ideas, which definitely clash with the sort of “explanation” we have become used to in field theory. In any scientific framework, one needs to understand what’s just given — input data, if you will — what needs to be “explained” and (most importantly) what counts as an explanation. There’s a temptation to mix and match: to invoke the anthropic principle to explain some things, and “technical naturalness” to explain others. But that is simply inconsistent; a statistical distribution in the space of couplings does not favour technically-natural ones over others.

Consider the question: why is the QCD scale so much lower than the Planck scale (or the GUT scale)?

We are accustomed to saying that this large hierarchy is natural because it arises from renormalization-group running. The QCD coupling starts out moderately small at the GUT scale ($\alpha_{\text{GUT}}\sim 1/25$), and increases only logarithmically as we go down in energy.

But, in the Landscape, there’s a probability distribution for values of $\alpha_{\text{GUT}}$, which might just as easily be $1/10$, or $1/150$. What sounded like a virtue now sounds like a vice. The ratio $\Lambda_{\text{QCD}}/M_{\text{GUT}}$ depends exponentially on $\alpha_{\text{GUT}}$, and so is an exquisitely sensitive function of the moduli — exactly the sort of thing about which it is hard to make statistical predictions.
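To make the sensitivity concrete, here is a quick numerical sketch of one-loop dimensional transmutation, $\Lambda_{\text{QCD}}\sim M_{\text{GUT}}\,e^{-2\pi/(b_0\alpha_{\text{GUT}})}$. The fixed coefficient $b_0=7$ and the neglect of thresholds, superpartners and higher loops are crude simplifications on my part; the point is only how violently the ratio moves with $\alpha_{\text{GUT}}$, not the absolute number.

```python
import math

b0 = 7.0  # one-loop QCD coefficient with six flavours (thresholds and two-loop terms ignored)
for alpha_GUT in (1/10, 1/25, 1/150):
    # Lambda_QCD ~ M_GUT * exp(-2*pi / (b0 * alpha_GUT))  (one-loop dimensional transmutation)
    ratio = math.exp(-2 * math.pi / (b0 * alpha_GUT))
    print(f"alpha_GUT = 1/{round(1/alpha_GUT)}  ->  Lambda_QCD / M_GUT ~ {ratio:.1e}")

# 1/10 -> ~1e-4, 1/25 -> ~2e-10, 1/150 -> ~4e-59: an O(10) change in 1/alpha_GUT
# moves the hierarchy by dozens of orders of magnitude.
```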

Instead, there’s an anthropic explanation for the value of $\Lambda_{\text{QCD}}$. Namely, the proton mass (which is essentially determined by the QCD scale) is tightly constrained. Vary $m_p/M_{\text{Pl}}$ by a factor of a few, and stars cease to exist. Hence $\alpha_{\text{GUT}}$ must be pretty close to $1/25$; otherwise, we aren’t here.

Similarly, as Arkani-Hamed and Dimopoulos point out, the electroweak scale cannot be vastly different from $\Lambda_{\text{QCD}}$, for the ratio enters into the neutron-proton mass difference. If the neutron were lighter than the proton, there would be no atoms at all. If it were much heavier, all heavy elements would be unstable to beta decay, and there would be only hydrogen. Either way, we would not exist.

If the electroweak scale is anthropically-determined, is there any reason to expect any beyond-the-Standard-Model particles below the GUT scale? We don’t need low-energy supersymmetry to make $M_{\text{E.W.}}/M_{\text{Pl}}\ll 1$ natural. Arkani-Hamed and Dimopoulos posit a scenario where supersymmetry is broken at a high scale, with squarks and sleptons having masses in the $10^9$ GeV range (more on that below), whereas the “’inos” (the higgsino, the gluino, the wino, zino and photino) survive down to low energies.

Light fermions are, of course, technically natural. But there’s no reason to expect the theory to have approximate chiral symmetries. So technical naturalness is not, in this context, an explanation for the light fermions. Instead, Arkani-Hamed and Dimopoulos argue that low-energy supersymmetry does have one great virtue — it ensures the unification of couplings around $10^{16}$ GeV. The “’inos” contribute to the $\beta$-function at 1-loop, so the 1-loop running in this model is exactly as in the MSSM. The squarks and sleptons contribute at 2-loops (as they come in complete $SU(5)$ multiplets, their 1-loop contribution does not affect the unification of couplings), and removing them from low energies actually improves the fit somewhat.
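As a sanity check on that claim, here is a minimal one-loop sketch with the MSSM coefficients $(b_1,b_2,b_3)=(33/5,1,-3)$; the electroweak-scale inputs are approximate values I’m supplying, not numbers from the paper. As noted above, the squarks and sleptons sit in complete $SU(5)$ multiplets, so at one loop they shift all three couplings equally and the meeting point is unaffected.

```python
import numpy as np

# One-loop running: 1/alpha_i(mu) = 1/alpha_i(M_Z) - b_i/(2*pi) * ln(mu/M_Z),
# with the MSSM one-loop coefficients (b1, b2, b3).
b = np.array([33/5, 1.0, -3.0])

# Rough electroweak-scale inputs (approximate experimental values, my own numbers):
M_Z = 91.19                                                  # GeV
alpha_em_inv, sin2w, alpha_s = 127.9, 0.231, 0.118
alpha_inv = np.array([(3/5) * (1 - sin2w) * alpha_em_inv,    # 1/alpha_1 (GUT-normalized)
                      sin2w * alpha_em_inv,                  # 1/alpha_2
                      1 / alpha_s])                          # 1/alpha_3

# Scale at which alpha_1 and alpha_2 meet:
t = (alpha_inv[0] - alpha_inv[1]) / ((b[0] - b[1]) / (2 * np.pi))
print(f"M_GUT ~ {M_Z * np.exp(t):.1e} GeV")                  # ~2e16 GeV
print("1/alpha_i there:", np.round(alpha_inv - b * t / (2 * np.pi), 1))
# all three come out near 24-25, i.e. alpha_GUT ~ 1/25
```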

Arguing for coupling constant unification sounds equally bogus until you turn the argument on its head (thanks to Aaron Bergman for helping me see the light). Assume that at short distances one has grand unification. Then one needs light “'inos” so that the 3-2-1 couplings flow to their anthropically-allowed values at long distances.

Once we’ve abandoned low-energy SUSY breaking, why not let the SUSY breaking scale be all the way up at the GUT scale? The reason is, again, anthropic. The gluino is a light colour-octet fermion, and hence very long-lived (it decays only via gravitino exchange). If you push the SUSY breaking scale up too high, the long-lived gluino creates problems for cosmology. Arkani-Hamed and Dimopoulos favour a SUSY-breaking scale $M_S\sim 10^9$ GeV.

This gives a big improvement over low-energy SUSY in the context of the landscape. Flavour-changing neutral currents are no longer a problem. And it ameliorates, but does not really solve, the problem of proton decay.

Figure: Proton decay via tree-level squark exchange, with two R-parity violating vertices.

Recall that there’s no reason for the generic vacuum on the Landscape to respect R-parity. R-parity is respected only on very high codimension subvarieties of the moduli space (if it’s present at all). So, generically, one expects R-parity violating terms in the superpotential to be unsuppressed. Since the squarks are much lighter than $M_{\text{GUT}}$, the dominant contribution to proton decay comes from squark exchange and the proton lifetime is roughly
$$T \sim \left(\frac{M_S}{\lambda M_{\text{GUT}}}\right)^4 \times 10^{32}\ \text{years}$$
where $\lambda$ is the strength of the R-parity violating Yukawa couplings.

For TeV-mass squarks, the anthropic bound on the proton lifetime gives $\lambda \lt 10^{-9}$, whereas the observational bound is $\lambda \lt 10^{-13}$. Pushing the squark masses up to $10^9$ GeV, the bound on $\lambda$ is no longer so absurdly small. The anthropic bound is $\lambda \lt 10^{-3}$, and the observational bound is $\lambda \lt 10^{-7}$, but there is still a 4-order-of-magnitude discrepancy which needs explaining.
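Those bounds follow from simply inverting the lifetime estimate above, $\lambda \lt (M_S/M_{\text{GUT}})\,(10^{32}\,\text{yr}/T_{\min})^{1/4}$. In the sketch below I take $M_{\text{GUT}}\sim 10^{16}$ GeV and plug in two lifetime floors of my own choosing (roughly $10^{16}$ years for the anthropic bound, $10^{32}$ years for the observational one); they are placeholders that reproduce the orders of magnitude quoted above, not numbers from the paper.

```python
# Inverting T ~ (M_S / (lambda * M_GUT))**4 * 1e32 yr gives the bound
#   lambda < (M_S / M_GUT) * (1e32 yr / tau_min)**(1/4).
# M_GUT and the two lifetime floors are placeholder assumptions of mine.

M_GUT = 1e16                                          # GeV
floors = {"anthropic": 1e16, "observational": 1e32}   # minimum proton lifetime, years

for M_S in (1e3, 1e9):                                # TeV-mass squarks vs 10^9 GeV squarks
    for name, tau_min in floors.items():
        lam_max = (M_S / M_GUT) * (1e32 / tau_min) ** 0.25
        print(f"M_S = {M_S:.0e} GeV, {name} bound: lambda < {lam_max:.0e}")

# -> lambda < 1e-9 (anthropic) and 1e-13 (observational) for TeV squarks;
#    lambda < 1e-3 and 1e-7 for 1e9 GeV squarks.
```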

I think that’s still a serious challenge for the anthropic principle. Why are the R-parity violating Yukawa couplings 4 orders of magnitude smaller than required by the anthropic bound?

A possible way out was suggested to me by Nima in the course of our email conversation. The lightest superpartner (one of the neutralinos) also decays through R-parity violating interactions (a similar diagram to the one which led to proton decay, but with one R-parity violating and one R-parity preserving vertex, instead of two R-parity violating vertices). If we want the lightest superpartner to furnish a candidate for the dark matter (leading to structure formation and hence to us), we need its lifetime to be at least comparable to the age of the universe. For $M_{\text{lsp}}\sim$ a few hundred GeV, to get a lifetime of $10^{17}$ seconds, one ends up requiring $\lambda\sim 10^{-7}$.
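For what it’s worth, that order of magnitude is easy to check with a muon-decay-style estimate of my own (a back-of-the-envelope sketch, not anything from the paper or from Nima): the decay proceeds through a heavy squark, with one R-parity violating vertex $\lambda$ and one gauge vertex $g$.

```python
import math

# Crude estimate of the LSP lifetime, treating the decay like muon decay through a
# heavy squark:  Gamma ~ (lambda * g)**2 * m_lsp**5 / (192 * pi**3 * M_S**4).
# The 192*pi^3 phase-space factor, g ~ 0.6, and the neglect of mixing angles and
# colour factors are my own rough assumptions; only the scaling is meant seriously.

hbar = 6.58e-25                    # GeV * s
g, lam = 0.6, 1e-7                 # gauge coupling, R-parity violating Yukawa
m_lsp, M_S = 300.0, 1e9            # GeV

G_eff = lam * g / M_S**2           # effective four-fermion coupling, GeV^-2
Gamma = G_eff**2 * m_lsp**5 / (192 * math.pi**3)
print(f"tau_LSP ~ {hbar / Gamma:.0e} s")   # ~4e17 s, comparable to the age of the universe
```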

Perhaps it is the existence of dark matter that “explains” the nearly exact R-parity in our universe. I’m still pretty sceptical, but I’m keeping an open mind. So, there may well be more posts on this subject in the future …

Posted by distler at May 27, 2004 12:59 AM


11 Comments & 1 Trackback

Re: High Energy Supersymmetry

While I think this proposal is extremely interesting, I am concerned about the increasing use of the anthropic principle to solve theoretical problems.

I would point out that at any point in the development of a physical theory, one can stop and declare that anything left unexplained is to be put down to anthropism.

For me, physics is exactly the process of getting rid of anthropism.

– Bob

Posted by: Bob McElrath on June 1, 2004 12:18 AM

Re: High Energy Supersymmetry

Well, as I argued at some length in a previous post, I don’t think it terribly likely that anthropic reasoning will be either needed or helpful in deciding which vacuum of string theory corresponds to our world.

But it is a useful instrument de pensée in thinking about the space of string vacua generally. Because unlike in field theory, we cannot — for instance — simply postulate the existence of a discrete symmetry to forbid some troublesome interactions.

And it is a perfectly scientific — even æsthetically pleasing — procedure to put in certain facts about our world and use them to derive other facts about our world. Despite the infelicitous name, that’s all the Anthropic Principle is doing.

Posted by: Jacques Distler on June 1, 2004 12:42 AM

Re: High Energy Supersymmetry

Unfortunately, people rarely agree on the exact meaning of “anthropic principle” and it is usually misused.

Take, for example, Hoyle’s deduction of a carbon resonance from the observation that we contain a lot of carbon. This might be called anthropic reasoning, but it doesn’t involve any *principle*.

I believe the correct thing to use is the “observer selection principle” explained by Nick Bostrom. This can be summarized as follows:

When trying to explain a fact about the universe in a model which contains many different possible universes, all the universes which do not contain (semi-)intelligent beings able to observe the fact in question should be discarded. Or statistically, we should multiply the a priori probability of any given universe by the probability of its producing such beings.

But - this is the big but - this requires us to be able to calculate the probability of observers arising in any given theory. Not just observers which look like human beings, any observers. This is too difficult for me in most cases.

I suspect it is even too difficult for Arkani-Hamed/Dimopoulos. Sure, you can argue no stars, or no atoms, or no carbon, but does this prove no intelligent observers? I think not.

As far as I can see, no sufficiently large Universe in which complex information processing can take place can be ruled out. For example, the Universe could be a cellular automaton, or an assemblage of long-lived black holes that instantiate a universal Turing machine.

Fred Hoyle’s story “The Black Cloud” reminds us that intelligent beings could look extremely unlike ourselves. Beware organic or atomic chauvinism!

Hence, unlike the cosmological constant, I don’t believe m_H/m_Pl can be dealt with “anthropically”, until I see a proof that intelligent observers are impossible for all Higgs masses above a given scale.

Posted by: Thomas Dent on June 1, 2004 11:06 AM

Re: High Energy Supersymmetry

Hence, unlike the cosmological constant, I don’t believe m_H/m_Pl can be dealt with “anthropically”, until I see a proof that intelligent observers are impossible for all Higgs masses above a given scale.

See Arkani-Hamed/Dimopoulos ref. 22. The result that the Higgs vev must be within a factor of 5 of its measured value is extremely interesting, but it seems they have to make a very large number of difficult-to-justify assumptions.

Of course this does not prove observers can’t exist outside that range, even if all their assumptions are correct.

Posted by: Bob McElrath on June 1, 2004 1:15 PM

Re: High Energy Supersymmetry

Sure, you can argue no stars, or no atoms, or no carbon, but does this prove no intelligent observers? I think not.

OK.

I still think it is interesting that the existence of stars, atoms, elements heavier than hydrogen, etc., puts such tight constraints on the fundamental parameters of the theory.

But I disagree with the notion that the explanation for the cosmological constant could be anthropic, whereas the other constants of nature would be explained by some other mechanism, like technical naturalness.

Once you have an ensemble of theories (or of string vacua), there’s a distribution of values, not just for the cosmological constant, but also for the other couplings in your theory. And there’s no reason to believe that technically natural values should be favoured.

As I’ve been saying, it’s not clear to me at all that there will, in the end, be a role for the anthropic principle in “explaining” the vacuum we live in.

But it’s important to explore the limits of what can and cannot be explained on anthropic grounds.

It is still true, I think, that proton decay is a serious challenge to the anthropic principle. We have no idea whether the Dark matter is axions or the lightest superpartner, or something completely different.

So I claim that there’s still a 1 part in 10^4 fine tuning that cannot be explained within the formalism of the anthropic principle.

Posted by: Jacques Distler on June 1, 2004 1:25 PM

Re: High Energy Supersymmetry

Surely what is technically natural is, almost by definition, more likely than what is not.

To be more precise, if some observable is not protected by a symmetry, then the default distribution of vacua in this observable is a uniform (in some sense) one, therefore the likelihood of finding the observable in a given tiny range is tiny.

But if there is a symmetry, then there may be a special value of the observable with respect to the symmetry, at which the distribution may have a peak. E.g. the theta-angle, if CP is preserved by the underlying theory and only broken “spontaneously”.

(However, the actual distribution seems to depend rather much on the manner of breaking such a symmetry.)

There seem to be an awful lot of things that cannot be anthropic under any assumptions, “atomic principle” or not - proton lifetime, tau mass, CKM angles, superpartner spectrum if any… As far as really making a prediction goes, the distribution of vacua over these observables is the only argument we currently have. And as you say, the proton business is not a success so far.

But, in considering such a distribution, it’s only common sense to discard the vacua that have a value of the vacuum energy that would not allow intelligent observers to exist (assuming that the semiclassical treatment of Lambda is correct).

Similarly, in considering the proton lifetime, one has to discard all the vacua that don’t have a proton!

This is what I mean by observer selection - you discard all vacua except those containing observers who are able to observe the phenomenon in question.

Posted by: Thomas Dent on June 2, 2004 5:17 AM

Counting points

Surely what is technically natural is, almost by definition, more likely than what is not.

No, it’s not a matter of definition. As you realize a paragraph later,

there may be a special value of the observable with respect to the symmetry, at which the distribution may have a peak.

The distribution in question is a discrete one — a set of points — and it is usually assumed that we count each point with equal weight. So your “may” boils down to counting points and asking whether they are clustered around the symmetry point. Generally, they are not.

Of course, it’s not necessarily true that each point should be counted with equal weight. There can be dynamical mechanisms which favour some points over others. But the mechanism of Kofman et al does not apply to R-parity or the QCD θ angle, which are not associated to the existence of extra massless particles.

Posted by: Jacques Distler on June 2, 2004 8:52 AM

Re: Counting points

Now I’m confused. If technical naturalness is useless, why did people want it in the first place?

Let me attempt to dig myself out: the set of points we’re talking about comes from models that *explicitly* break symmetries, by flux values (etc.) that are put in by hand. For such a construction there is no such thing as technical naturalness. Observables are, as Banks et al. say, anthropic or random.

It’s not that “technically natural values” will or will not be favoured, since there are *no* technically natural values!

Only if we have a model where a symmetry survives to the 4d effective theory can one obtain technically natural small numbers.

You’re saying that in the vast majority of the vacua there will be no such symmetries (apart from the surviving gauge group).

Banks/Dine/Gorbatov give a “last ditch” defence of discrete symmetries, supposing that the majority of vacua with u and d quarks of the right mass do in fact have such symmetries (rather than having them by random chance). But they don’t endorse this idea strongly.

(What’s wrong with gauged U(1) flavour symmetry?)

Do you understand Douglas’ argument that ‘low-energy supersymmetry’ might be more likely (top of p.6 in his most recent paper)? Maybe he means the argument presented in Banks/Dine/Gorbatov. The Douglas paper is very ambiguous.

Posted by: Thomas Dent on June 3, 2004 6:00 AM

Re: High Energy Supersymmetry

assuming that the semiclassical treatment of Lambda is correct

Could you expand on that? In calculations such as those in the KKLT paper, which of the results are known to be valid classically, semiclassically, quantum mechanically, or even non-perturbatively?

Posted by: Urs Schreiber on June 3, 2004 4:34 AM

Re: High Energy Supersymmetry

That’s a good question, to which I don’t know the answer.

Tom Banks and Keith Dienes have (for different reasons) doubted that effective supergravity gives the correct answer for the cosmological constant.

Banks’ reason I don’t fully grasp, but may well involve black holes and holography; Dienes, because any string effective theory involves integrating out an infinite number of states, and while this may give a sensible answer for some things (e.g. gauge couplings a la Kaplunovsky-Louis) it may not for the c.c.

For a cryptic reference, see the footnote to p.30 of hep-th/0309170.

Posted by: Thomas Dent on June 3, 2004 6:19 AM

anthropism and empiricism

The anthropic principle is just a cheap trick to pass off experimental data as if it were theoretical principle. The existence of atoms, of the earth, of monkeys, of man: all of these are experimental input. For instance, I could say that there is a “Very Weak Anthropic Principle” consisting in the existence of chemical atoms. This is not a theoretical deduction, it is just experience.

In some sense anthropism is a variant of empiricism, as empiricism already assumes the existence of a researcher taking data.

Posted by: alejandro rivero on June 19, 2004 8:35 PM
Read the post Other People's Stuff
Weblog: Not Even Wrong
Excerpt: It's always a little worrying when this happens, but sometimes I find myself very much agreeing with at least parts of what Lubos Motl has to say. For example see this recent posting to sci.physics.strings. In it Motl argues that...
Tracked: February 6, 2005 4:51 PM
