### CDT

After writing my two previous posts on approaches to quantum gravity, various people asked me to write something about Causal Dynamical Triangulations, a lattice model that has enjoyed a certain amount of favourable ‘buzz’ recently. I’ve been procrastinating about following up on that request because doing a halfway decent job would require at least two posts: one about generalities of lattice models of quantum gravity, and one specifically about CDT. Alas, I’m not *that* interested in the subject, so I’ve always been able to find something *else* I’d rather write about …

Anyway, as a bit of New Year’s resolve, here’s a stab at such a post which will, alas, fall far short of what’s really required.

There are two broad classes of lattice models: those which use a fixed triangulation, and those which use random triangulations. Most lattice field theories fall into the former sort, including the Regge-Calculus approach to quantum gravity (in which the edge-lengths of the bonds are the dynamical variables). But random lattice models have been proposed as a solution to the Fermion Doubling problem. And, of course, they’ve been used in the Dynamical Triangulations approach to quantum gravity, the precursor to CDT.

Whether you’re working with a fixed or a random lattice, the real problem is not formulating a suitable lattice model (lattice models are a dime-a-dozen). The real problem is finding/proving the existence of a continuum limit.

There are many, many more lattice theories than there are continuum field theories. Think about lattice Yang-Mills theory in arbitrary dimension, $d$. Only for $d\leq 4$ does a continuum theory exist. It is only in the continuum limit that the (infinite) details of the latticization drop out^{1}, and the resulting continuum theory depends on only a finite number of parameters (and hence is predictive). Continuum theories are relevant (infrared repulsive) deformations of Renormalization Group fixed points, and not every lattice model has such a suitable fixed point.

Conversely, just because you can write down a latticization of some QFT doesn’t mean that lattice theory has anything to do with the behaviour of the continuum theory you would like to describe. Consider the case of QED. It is absolutely straightforward to write down a lattice version of QED. However, 4d Lattice QED does not have a continuum limit. If you hold the low-energy effective gauge coupling (the fine-structure constant) fixed, then the lattice coupling diverges at a finite value of the lattice spacing. You cannot send the lattice spacing to zero.

This is not a big deal because:

- We know that QED is just a low-energy effective theory, which is embedded in some larger theory at short distances. No one is wedded to the idea of putting QED itself on the lattice, as it’s not really fundamental.
- The lattice spacing at which Lattice QED breaks down is exponentially smaller than the electron Compton wavelength (the length scale at which QED effects become important). This is many, *many* orders of magnitude shorter than the length scale at which QED is subsumed into the aforementioned larger theory.
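The “exponentially smaller” claim can be made concrete with the one-loop running of the QED coupling. Here is a back-of-the-envelope sketch (one-loop, single Dirac fermion; an illustration of the scales involved, not a lattice computation) locating the Landau pole, and hence the minimal lattice spacing, relative to the electron Compton wavelength:

```python
import math

# One-loop running of the QED coupling (single Dirac fermion of mass m_e):
#   1/alpha(mu) = 1/alpha(m_e) - (2 / (3*pi)) * ln(mu / m_e)
# The coupling diverges (the Landau pole) where the right-hand side hits zero.

alpha_ir = 1 / 137.036  # fine-structure constant, held fixed at low energy

# Scale (in units of m_e) at which 1/alpha runs down to zero:
log_pole = 3 * math.pi / (2 * alpha_ir)     # = ln(mu_pole / m_e)

# Minimal lattice spacing, in units of the electron Compton wavelength:
lattice_over_compton = math.exp(-log_pole)  # ~ m_e / mu_pole

print(f"ln(mu_pole/m_e) = {log_pole:.1f}")
print(f"a_min / lambda_Compton = {lattice_over_compton:.3g}")
```

With $\alpha\approx 1/137$, the pole sits at $\ln(\mu/m_e)\approx 646$, i.e. a minimal lattice spacing of very roughly $10^{-280}$ Compton wavelengths: exponentially small indeed.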

In the case of lattice gravity, the challenge of finding a continuum limit is to be able to hold the effective Newton’s constant, $G_N$ and cosmological constant, $\Lambda$, fixed, while sending the lattice spacing to zero. The problem is more acute than in the QED case because, in the absence of a suitable RG fixed point, the length scale at which quantum gravity effects become important is *the same* as the lattice spacing, rather than being exponentially longer.

The precursor to CDT was the Dynamically Triangulated Random Surface model, a model of Euclidean 2d quantum gravity (coupled to matter). This model garnered a lot of interest in the early 1990s, because of its relationship to $c\lt 1$ noncritical string theories which were being actively studied in those days. But attempts to add more matter ($c\gt 1$) or to go to higher dimensions ($d\gt 2$) ran into the basic problem of the unboundedness from below of the Euclidean Einstein action. The symptom, in the lattice model, is that the triangulations which dominate the functional integral represent a branched polymer or a crumpled phase, rather than anything resembling a smooth manifold.

CDT represents an important technical advance. Rather than Euclidean gravity, it is a discretization of Wick-rotated Minkowski gravity, and does not have the wrong-signature mode that leads to trouble in the Euclidean theory. To do this requires a specialization to manifolds which are globally foliated by spacelike hypersurfaces of a fixed topology (usually, $S^{d-1}$). This is a highly problematic restriction and, if I were inclined to write a second post, it would be largely devoted to *why* that restriction is problematic. But, for present purposes, this stratagem does avoid the aforementioned sickness of the Euclidean theory. For $d=3$ (and, possibly, $d=4$), the theory avoids degenerating into a branched polymer or crumpled phase.

That’s rather nice, and an important step towards addressing the real issue, which is whether the theory has a continuum limit. In particular, we would like to send $(G_N)_{\text{eff}}$ and $\Lambda_{\text{eff}}$ to zero in lattice units, as we send the lattice spacing $a\to 0$ (with $\Lambda_{\text{eff}}(G_N)_{\text{eff}}^3 \ll 1$).

In that limit (*if* it exists), you know what configurations should dominate the functional integral: those configurations which are “close” to a classical solution^{2} of the (Wick-rotated) vacuum Einstein equations with cosmological constant. For positive $\Lambda$, these are solutions in which the 3-geometry expands and then recollapses.

So does this Minkowskian theory have a suitable RG fixed point, which would lead to a continuum limit? Reuter *et al* claim that Euclidean Einstein Gravity (with a cosmological constant) has an RG fixed point. But, as we’ve seen, their alleged fixed point corresponds to a solution of an essentially perturbative RG equation lying well beyond the realm of validity of perturbation theory. In other words, it is a mirage.

What one really needs is a *truly* nonperturbative analysis, and that’s what a lattice model affords us the possibility of attempting. No one has actually tackled the question of the existence of a fixed point head-on (through some numerical ‘block-spinning’ of an ensemble of triangulations). But one *very crude* test is to compare the Monte-Carlo simulations that they *have done* with what one might expect from a semiclassical analysis.

Ambjørn *et al* are rather upbeat about the agreement between their simulations and a mini-superspace analysis. But, when I stare at their results, it looks to me as if the effective cosmological constant is quite large (the comparison is insensitive to the effective value of $G_N$), and the departures from semiclassical behaviour (which one expects for small 3-geometries) set in at a scale-factor not all that far removed from the maximal size of the universe. In other words, their results don’t look terribly semiclassical at all.
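For concreteness, the mini-superspace benchmark here is the Wick-rotated de Sitter solution, in which the spatial 3-volume expands and recollapses like $\cos^3$ of the (Euclidean) time. A minimal sketch of that profile (my own parametrization, not Ambjørn *et al*’s fitting code):

```python
import numpy as np

# Mini-superspace (Wick-rotated de Sitter) prediction for the spatial-volume
# profile with Lambda > 0: the 3-volume of the constant-t slices goes like
#     V3(t) = v0 * cos^3(t / s0),   -pi/2 <= t/s0 <= pi/2,
# where s0 is set by the effective cosmological constant, and v0 is the
# maximal 3-volume. (Schematic comparison only.)

def minisuperspace_profile(t, s0, v0):
    """Semiclassical 3-volume: expands from zero, peaks at v0, recollapses."""
    x = np.clip(t / s0, -np.pi / 2, np.pi / 2)
    return v0 * np.cos(x) ** 3

# Sample the profile on a symmetric time interval, chosen so the universe
# is born at t = -2 and recollapses at t = +2 (s0 = 4/pi).
t = np.linspace(-2.0, 2.0, 101)
v = minisuperspace_profile(t, s0=4 / np.pi, v0=1.0)
```

The semiclassical expectation is that Monte-Carlo volume profiles track this curve except for small 3-volumes near the endpoints; the complaint above is that, in the published plots, the departures appear already at scale factors not far below the maximum.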

Perhaps, this is because they are still far from the continuum limit (with their pitifully small average number of simplices), and larger simulations will start looking more semiclassical. That would be evidence (but, still, very *weak* evidence) for the existence of a continuum limit.

More likely, as I’ve argued previously, pure quantum gravity doesn’t have a suitable RG fixed point. Only by adding some as yet unknown (and, for practical purposes, unknowable) set of matter degrees of freedom would such an RG fixed point emerge. In that case, you wouldn’t even *expect* the lattice theory of “pure” gravity to have a continuum limit.

Still, regardless of whether CDT has a continuum limit, it is likely to be an interesting source of insights, just as lattice QED has proven fruitful to think about, despite the fact that *it* doesn’t have a continuum limit, *either*.

#### Update (2/4/2006): Shellings

*Bah! After further private discussions with Greg, I no longer believe this objection is pertinent. A shelling implies an actual ordering of the $d$-simplices. A discrete time function only determines a partial ordering of the $d$-simplices. For triangulations of $X=M\times\mathbb{R}$, that partial ordering both exists and is unique. On the other hand, Ambjørn et al are largely interested in $X=M\times I$ or $X=M\times S^1$, for which existence is not guaranteed.*

I thought I should move one of Greg Kuperberg’s critiques of CDT from the comments below to someplace “above the fold.” As I said above, we are specializing to d-manifolds of a very special sort, $X^{(d)}= M^{(d-1)}\times \mathbb{R}$. In our lattice model, we are going to sum over piece-wise flat, globally time-oriented, Minkowski-signature metrics on $X$, corresponding to triangulations^{3} of $X$.

Every triangulation, $T$, of $X$ gives rise to such a metric, so one might hope for a discrete approximation to “the space of all Minkowskian metrics on $X$, modulo diffeomorphisms” by simply summing over triangulations, $T$.

However, to actually *define* the lattice model, one needs to do a certain “Wick-rotation.” This requires us to choose a (discrete) time-coordinate, $t$, on each triangulation, $T$, that is, an assignment of an integer, $t_i$, to each vertex, $v_i$, of the triangulation. Since we are trying to model a diffeomorphism-invariant theory, we would like this time-coordinate to *be determined by the triangulation itself*, rather than being an arbitrary extra piece of data.

For a single $(k,d+1-k)$-simplex, this is easy: simply assign an integer $t$ to the first $k$ vertices, and the integer $t+1$ to the remaining $d+1-k$ vertices. The question is:

- Can you do this compatibly for all of the $d$-simplices of the triangulation?
- Is the resulting assignment unique?


For $d=2$, the answer to both questions is affirmative. So the triangulation, itself, defines a unique discrete time-coordinate, with respect to which one can do the required Wick-rotation.

For $d\gt 2$, however, the answer to both questions is, in general, “No!”

- There are triangulations, $T$, for which no compatible global choice of discrete time coordinate is possible. Such triangulations are called “non-shellable.”
- There are triangulations for which there is more than one possible assignment of global time-coordinate. That is, the triangulation is “shellable,” but admits more than one shelling.

The first problem is, perhaps, surmountable. Ambjørn *et al* can simply say, “We will *throw away* all the non-shellable triangulations. There may be nothing wrong with the corresponding piecewise-flat, globally time-oriented, Minkowskian metrics. But we’ll just not sample that part of the configuration space.”

The second problem is more serious. Are you supposed to choose a particular shelling (a particular choice of how to Wick-rotate)? That clearly corresponds to introducing some extraneous, non-diffeomorphism-invariant, data into your model. Are you supposed to sum over different choices of shelling (different choices of Wick-rotation)? That, too, would be bizarre, because there would no longer be any sort of 1-1 correspondence between configurations in the “Wick-rotated” lattice model and configurations of the Lorentzian theory we are supposed to be modelling^{4}.

This seems like quite a serious objection to CDT, as currently formulated. If you look at what Ambjørn *et al* actually do, it corresponds to arbitrarily choosing a particular shelling and proceeding. *Even* if the resulting theory has a continuum limit (which, as I said, seems doubtful), the arbitrary choice of shelling means that there is no reason to believe that the resulting continuum theory is diffeomorphism-invariant.

#### Update (1/5/2006):

Ambjørn *et al*, however, are mostly interested in computing transition amplitudes. That is, they are interested in the case $X=M\times I$, where $M\times \{0\}$ and $M\times\{1\}$ are spacelike hypersurfaces^{5}. In this case, a discrete time coordinate, as described above, is unique, *if it exists*. To see that, simply set $t=0$ everywhere on the initial hypersurface, and determine it on all of the other vertices of the triangulation by induction. That yields the desired discrete time coordinate *if and only if* the procedure ends up yielding $t=\text{const}$ on the final hypersurface.
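The induction just described amounts to a simple consistency check on a graph. Here is a toy sketch (a hypothetical encoding of the triangulation’s vertex graph, not Ambjørn *et al*’s actual code): each spacelike edge imposes $t_v = t_u$, each timelike edge, oriented earlier to later, imposes $t_v = t_u + 1$; propagate from the initial slice and look for conflicts.

```python
from collections import deque

def assign_discrete_time(n_vertices, edges, initial):
    """edges: list of (u, v, dt), encoding the constraint t[v] = t[u] + dt,
    with dt = 0 for spacelike edges and dt = +1 for timelike edges oriented
    earlier -> later. Returns the list of vertex times, or None if two paths
    from the initial hypersurface disagree (no global time exists)."""
    t = {v: 0 for v in initial}          # t = 0 on the initial hypersurface
    adj = {v: [] for v in range(n_vertices)}
    for u, v, dt in edges:
        adj[u].append((v, dt))
        adj[v].append((u, -dt))          # traverse edges in both directions
    queue = deque(initial)
    while queue:                         # breadth-first propagation of t
        u = queue.popleft()
        for v, dt in adj[u]:
            if v not in t:
                t[v] = t[u] + dt
                queue.append(v)
            elif t[v] != t[u] + dt:
                return None              # paths disagree: no global time
    return [t.get(v) for v in range(n_vertices)]
```

For a “prism-like” graph with vertices 0, 1 on the initial slice and 2, 3 on the final slice, the check succeeds; adding a vertex reachable by paths with different numbers of timelike links (the situation described in the next paragraph) makes it fail.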

But, rather trivially, “most” triangulations of $X=M\times I$ will not have that property. There will be paths with different numbers of timelike links (counted with sign) between the initial and the final hypersurface.

The restriction to triangulations (equivalently, piecewise-flat metrics) for which the above-described discrete time coordinate *does* exist is a very strange and highly non-local restriction on the metrics you include in the functional integral. Essentially, you are restricting to metrics for which the geodesic distance between the initial and final slices (for all geodesics between the initial and final slice) is fixed. Even if there is a continuum limit, it’s not at all clear that the resulting continuum theory is a local, diffeomorphism-invariant theory.

^{1} In practical applications, the (infinite) freedom to tinker with the details of the latticization (both the choice of lattice, and the choice of the lattice action) can be used to advantage. Rather than work with the simple Wilson action, contemporary lattice gauge theorists prefer to use a more complicated lattice action, tuned so that the convergence to the continuum limit is more rapid.

^{2} The classical solutions themselves (indeed, the set of all smooth manifolds) form a set of measure zero in the functional integral. So, in our simulations, we don’t expect to see the configuration corresponding *precisely* to the classical solution. What we do expect to see is lots of configurations which are small fluctuations about the classical solution. Note that they are not doing the (oscillatory) Minkowskian functional integral, but the “Wick-rotated” one, in which all configurations enter with positive weight.

^{3} That is, each $d$-simplex of the triangulation is flat, and they are glued together along $(d-1)$-simplices, so that the time-orientations are compatible. Each $d$-simplex has a “standard” Minkowski metric on it, induced by embedding in $\mathbb{R}^{d-1,1}$.

Actually, we should talk about $(k,d+1-k)$-simplices, for $k=1,\dots,d$. The $d+1$ vertices of the $d$-simplex are divided into two sets, of $k$ and $d+1-k$ vertices, respectively. Within each set, the vertices are spacelike-separated, with an invariant edge-length-squared $l_s^2 = a^2$. The set of $k$ vertices is “earlier” than the set of $d+1-k$ vertices, and the $k(d+1-k)$ edges that stretch between them are timelike, with invariant length-squared $l_t^2 = -\alpha a^2$, where $\alpha$ is a free parameter of the model.
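As a check on this bookkeeping, here is a small sketch (the encoding and names are my own) that enumerates the edges of a $(k,d+1-k)$-simplex and assigns the invariant lengths-squared just described. The within-set edges number $\binom{k}{2}+\binom{d+1-k}{2}$ and the between-set edges $k(d+1-k)$, for $\binom{d+1}{2}$ edges in all.

```python
from itertools import combinations

def edge_lengths_squared(k, d, a=1.0, alpha=1.0):
    """Invariant length-squared of each edge of a (k, d+1-k)-simplex:
    the first k vertices sit at time t, the remaining d+1-k at time t+1.
    Within a set, edges are spacelike with l^2 = a^2; between the sets,
    edges are timelike with l^2 = -alpha * a^2."""
    early = range(k)                      # vertices at time t
    lengths = {}
    for u, v in combinations(range(d + 1), 2):
        same_slice = (u in early) == (v in early)
        lengths[(u, v)] = a**2 if same_slice else -alpha * a**2
    return lengths

# Example: a (2,2)-simplex in d = 3 has 2 spacelike and 4 timelike edges.
ls = edge_lengths_squared(k=2, d=3)
spacelike = sum(1 for l2 in ls.values() if l2 > 0)
timelike = sum(1 for l2 in ls.values() if l2 < 0)
```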

^{4} ~~This leads to a host of other potential sicknesses, which we might enjoy discussing, once Ambjørn *et al* tell us what prescription for summing over shellings they wish to use.~~

^{5} Identical remarks apply to $X=M\times S^1$, which is what they *actually* use for their simulations.

## Re: CDT

I agree that it could be useful to think about CDT, but only if people have a clear view of its limitations and negative properties. As I understand it, the main claimed victory about CDT is that it is a four-dimensional theory, according to numerical results. But I am not sure that the interpretation of this work is right. I talk about this here.