The Discretium
String Theory, it has been pointed out, does not have any adjustable coupling constants. In Field Theory (say, the Standard Model), one must do a certain number of experiments merely to pin down the values of the couplings in the theory before one can start extracting predictions. These coupling constants are inputs, not predictable within the context of QFT.
String Theory does not have any adjustable couplings, so one might hope to do better. It does, however, have a huge number of vacua. With enough supersymmetry, these vacua come in continuous families. But with less (or no) supersymmetry, they are typically discrete.
I say “typically”, because in the approximations in which we can compute reliably, there are invariably components of the moduli space in which some or all of the flat directions are not lifted. In some cases, that’s simply a result of our inability to reliably compute the effects which lift the degeneracy. In others, there may be stringy reasons for the flat directions to persist.
But let us, for the sake of discussion, assume that what we expect “generically” is, in fact, true, and that the vacua of “interest” are discrete. We’ve gone from having a continuous infinity of such vacua to having a finite number. But this number is, at first blush, frighteningly large, something like $10^N$, where $N$ is itself some moderately large number. (Proponents of the landscape try to outdo each other in making larger and larger estimates for $N$, just as a previous generation of string theorists tried to outdo each other in counting the number of Calabi-Yau manifolds.)
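To get a feel for where such numbers come from, here is the sort of back-of-the-envelope counting (in the spirit of Bousso-Polchinski flux counting; the particular values of $L$ and $K$ below are purely illustrative) that produces them. If a compactification has $K$ independent flux quanta, each of which can take roughly $L$ values consistent with tadpole cancellation, then

$$\#\,\text{vacua} \sim L^K = 10^{K \log_{10} L}, \qquad \text{e.g. } L \sim 10,\; K \sim 500 \;\Rightarrow\; 10^{500}.$$

The exponent $N = K \log_{10} L$ is itself “moderately large” because $K$, the number of independent cycles available to support flux, runs to the hundreds on typical Calabi-Yau manifolds.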
I say “frighteningly,” because these large numbers lead to two distinct, but often conflated anxieties about what it means to do physics in such a situation.
The first anxiety is what I would like to call the “Empirical” question. Which vacuum describes our world? I don’t like to think of myself as being a small fluctuation about the vacuum, but many questions — essentially all of particle physics — can be addressed by studying the physics of small fluctuations about the vacuum.
Rather than doing experiments to determine the values of the coupling constants, we need to do experiments to determine which vacuum to expand about. With a huge number of vacua at our disposal, you might worry that there will still be a large number which are compatible with current observations.
If there are only a small number of such vacua, we might even have predictions for currently-measured quantities. The 19 or so parameters of the Standard Model might not be independently adjustable. Once you pin down some of them, the rest would be determined.
But, if there are enough vacua compatible with current observations, you might worry that we could fit all current observations, and yet have differing predictions for stuff we haven’t measured yet — the mass of the Higgs, the spectrum of superpartners, …
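Here is a toy version of this fit-then-predict logic (every number in it, from the count of vacua to the error bars, is invented for illustration):

```python
import random

random.seed(1)

# Toy discretium: each "vacuum" fixes all 19 Standard-Model-like
# parameters at once. Counts, priors, and error bars are invented.
N_PARAMS = 19
N_VACUA = 100_000
vacua = [[random.uniform(0, 1) for _ in range(N_PARAMS)]
         for _ in range(N_VACUA)]

measured = vacua[0][:10]  # pretend the first 10 parameters are measured...
ERR = 0.01                # ...each to 1% absolute precision (illustrative)

def fits(v):
    """Is vacuum v compatible with all measurements, within errors?"""
    return all(abs(v[i] - m) < ERR for i, m in enumerate(measured))

candidates = [v for v in vacua if fits(v)]
print(f"vacua compatible with the 10 measurements: {len(candidates)}")
```

With these made-up numbers, ten measurements at the percent level single out one vacuum from the hundred thousand, and its remaining nine parameters become predictions. If instead many vacua had survived, with scattered values for the unmeasured parameters, we would be in the depressing scenario described next.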
It would be rather depressing if the LHC and future generations of particle accelerators were simply devoted to pinning down more closely which vacuum we live in, rather than testing predictions.
Fortunately, I don’t think there is a case for the existence of a large number of vacua fitting current observations. Let me pick just two criteria: proton decay and flavour-changing neutral currents. Generic vacua with approximate supersymmetry (eg, the flux vacua in Type IIB orientifold models — the favourites among proponents of the Landscape) have dimension-4 baryon number-violating operators. Indeed, any theory in which extra coloured junk survives below the GUT scale will generically have baryon number-violating interactions whose magnitudes are too large to be compatible with the observed proton lifetime ($\gtrsim 10^{33}$ years).
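To see why dimension-4 operators are so dangerous, here is the standard order-of-magnitude estimate (the superpartner mass $\tilde{m} \sim 1\,\text{TeV}$ is an illustrative choice on my part). With unsuppressed baryon- and lepton-number-violating couplings $\lambda''$ and $\lambda'$, squark exchange mediates proton decay at a rate

$$\Gamma_p \sim \frac{|\lambda' \lambda''|^2\, m_p^5}{\tilde{m}^4},$$

and demanding $\tau_p \gtrsim 10^{33}$ years then forces $|\lambda' \lambda''| \lesssim 10^{-26}$. Couplings that small are not “generically” small; they vanish for a reason, which is what a symmetry can enforce.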
With low-energy supersymmetry, the only way to save the day is to find a discrete symmetry (R-parity, or something similar) and impose it on the theory. Most Calabi-Yau moduli spaces do not have such a discrete symmetry, and those which do have it only on some very high codimension subspace of the moduli space. Pick such a Calabi-Yau. Most of the fluxes you might turn on do not respect the discrete symmetry, so you have to set them to zero (thus cutting down hugely the exponent “$N$” in the above estimate). And, even after restricting ourselves to symmetric fluxes, we need to further restrict ourselves to minima of the resulting superpotential which also respect the symmetry (remember that, just because the scalar potential has a symmetry, its set of minima need not).
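To put some (again, purely illustrative) numbers on how severe that pruning is: if only $K'$ of the $K$ flux directions are invariant under the discrete symmetry, the earlier estimate drops from $L^K$ to at most $L^{K'}$,

$$N = K \log_{10} L \;\longrightarrow\; N' = K' \log_{10} L,$$

so that, e.g., $L \sim 10$, $K = 500$, $K' = 50$ takes you from $10^{500}$ vacua down to $10^{50}$, before we even impose that the minima of the superpotential respect the symmetry.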
If you think avoiding too-fast proton decay is easy (“Hey, we have zillions of vacua to work with!”), then you don’t remember the history of attempts to do String Phenomenology in the late '80s. Back then, people worked in the approximation of vanishing superpotential for the moduli, and simply wished to find a locus on the moduli space of some Calabi-Yau which would yield the correct physics. They allowed themselves to fantasize that nonperturbative effects would later lift the degeneracy and land them precisely where they wanted to be. But, even working with a continuous infinity of vacua (rather than the discretium), finding an acceptable solution proved too hard.
I could repeat similar words about FCNC’s, but you get the idea. Finding vacua which fit our current observations is extremely hard. The worry is not that we have too many, but rather that we have too few (i.e. none).
The other anxiety has to do with what I call the “Historical” question. Given that there exists some appropriate vacuum, how did we end up here, as opposed to in one of the zillions of other, inappropriate ones? One possibility is simply initial conditions: they were such that we ended up where we ended up. Given that there’s only one (observable) universe, there’s no sense in which we could reasonably ask whether this was “likely” or “unlikely.” We can’t do statistics with a sample of 1.
On the other hand, in scenarios like Chaotic Inflation, different parts of the Universe may sample different initial conditions. Some will inflate, and produce an observable Universe that looks like our own. Others may look very different. Even if “most” such universes don’t look anything like ours, one might try to use anthropic arguments to say that it doesn’t matter what the “typical” universe looks like. It only matters what the typical universe capable of sustaining life looks like. In other words, we should study conditional probabilities.
Weinberg argued, for instance, that the cosmological constant could be explained by such reasoning. If it were too large, galaxies would never form, there would be no supernovæ to produce heavy elements, and we wouldn’t be here. The “expected” value of the cosmological constant turns out to be right in the ballpark of what’s observed.
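The logic of Weinberg’s bound fits in a line (the numbers are rounded, and taking $z_{\text{gal}} \sim 4$ as a representative redshift for galaxy formation is my own choice). The vacuum energy must not come to dominate before density perturbations have gone nonlinear:

$$\rho_\Lambda \lesssim \rho_m(z_{\text{gal}}) = (1+z_{\text{gal}})^3\, \rho_{m,0} \sim 10^2\, \rho_{m,0},$$

which permits a cosmological constant at most a couple of orders of magnitude above the observed value, a far cry from the $10^{120}$ mismatch of naive effective field theory estimates.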
Unfortunately, Banks, Dine & Gorbatov have extended this sort of analysis to other quantities. Consider, again, proton decay. The anthropic bound on the proton lifetime (any shorter, and you and I would glow) is some twenty orders of magnitude weaker than the observed bound. There’s no anthropic bound on FCNC’s, no anthropic bound on the electron-muon mass ratio, etc.
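As a sanity check on the “glow” remark, here is a rough dose estimate (all the round numbers below are my own illustrative inputs, not figures from Banks, Dine & Gorbatov):

```python
# Rough order-of-magnitude estimate of the radiation dose from nucleon
# decay in a human body, as a function of the nucleon lifetime.
# All inputs are round, illustrative numbers.

N_NUCLEONS = 2e28        # nucleons in a ~70 kg human body
E_PER_DECAY_J = 1.5e-10  # ~1 GeV deposited per decay, in joules
BODY_MASS_KG = 70.0

def dose_gray_per_year(lifetime_years: float) -> float:
    """Absorbed dose rate (Gy/yr), assuming each decay deposits its energy locally."""
    decays_per_year = N_NUCLEONS / lifetime_years
    return decays_per_year * E_PER_DECAY_J / BODY_MASS_KG

for tau_years in (1e13, 1e16, 1e33):
    print(f"lifetime {tau_years:.0e} yr -> {dose_gray_per_year(tau_years):.1e} Gy/yr")
```

At a lifetime of $10^{13}$ years this gives thousands of gray per year (a few gray is a lethal acute dose); at $10^{16}$ years, a few gray per year; at the observed $10^{33}$ years, utterly negligible. Life stops caring somewhere around $10^{16}$ years, far short of what experiment demands.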
So, merely demanding the existence of life does not explain the universe that we see.
It’s possible that, once you put in the anthropic bound — say, on proton decay — there might not be enough vacua left over to do Bayesian statistics. We might just have ended up in a vacuum which exceeds the anthropic bound on the proton lifetime by 20 orders of magnitude for the silly reason that there is a dearth of vacua which satisfy the bound at all. Alternatively, if there are enough vacua, we might try to get further mileage by including other facts that we know about our universe (aside from the fact that it supports life) in our conditional probabilities. Demanding, say, the absence of FCNC’s might yield “generalized anthropic” predictions about other quantities. At some point, the game breaks down — not enough vacua to do statistics — but it’s perfectly possible that we might be able to say something beyond Weinberg’s statement about the cosmological constant.
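To make the statistics-of-vacua game concrete, here is a toy sketch (the “landscape”, its prior, and both cuts are invented for illustration; no claim that real flux vacua are distributed this way):

```python
import random

random.seed(0)

# Toy landscape: each vacuum is one draw of log10(proton lifetime / years).
# The flat prior over [0, 45] is invented purely for illustration.
N_VACUA = 1_000_000
vacua = [random.uniform(0.0, 45.0) for _ in range(N_VACUA)]

ANTHROPIC_CUT = 16.0  # lifetime needed for life (illustrative)
OBSERVED_CUT = 33.0   # current experimental bound

anthropic = [v for v in vacua if v > ANTHROPIC_CUT]
observed = [v for v in anthropic if v > OBSERVED_CUT]

print(f"survive anthropic cut:     {len(anthropic)}")
print(f"also survive observed cut: {len(observed)}")
print(f"P(observed | anthropic) =  {len(observed) / len(anthropic):.3f}")
```

On this made-up prior the observed bound is unremarkable given life. The interesting failure modes are the two in the text: a prior under which almost nothing survives the anthropic cut, so that our vacuum is where it is for the “silly reason”, or successive cuts that shrink the ensemble until there is nothing left to do statistics with.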
Overall, I’m agnostic about the “Historical” question. If some “anthropic” argument bears fruit, that’s great. If not, well that’s too bad, but I don’t think it impacts our ability to do physics. There are many things in physics which turn out to have no deeper explanation than that they are the result of initial conditions. The choice of vacuum in String Theory might be one of them.
Posted by distler at April 17, 2004 12:44 PM
Re: The Discretium
Didn’t Lubos make some comment on sci.physics.strings that said, essentially, that the flux vacua (or metastable states) are the popular vacua right now, but that they probably aren’t very representative of the totality of solutions, including completely nonperturbative ones which no-one can calculate at the moment?
So we find that, in the vacua we can calculate, a stable proton is very rare. But who’s to say whether this statistical property carries over to those where we can’t, and where the SM states are light for a reason that we can’t now fathom?
Unless you want to make a strong claim that we already know about all or most of the types of solution which can possibly lead to something like the SM.