
January 2, 2006


After writing my two previous posts on approaches to quantum gravity, various people asked me to write something about Causal Dynamical Triangulations, a lattice model that has enjoyed a certain amount of favourable ‘buzz’ recently. I’ve been procrastinating about following up on that request because to do a halfway decent job would require at least two posts: one about generalities of lattice models of quantum gravity, and one specifically about CDT. Alas, I’m not that interested in the subject, so I’ve always been able to find something else I’d rather write about …

Anyway, as a bit of New Year’s resolve, here’s a stab at such a post which will, alas, fall far short of what’s really required.

There are two broad classes of lattice models: those which use a fixed triangulation, and those which use random triangulations. Most lattice field theories fall into the former sort, including the Regge-Calculus approach to quantum gravity (in which the edge-lengths of the bonds are the dynamical variables). But random lattice models have been proposed as a solution to the Fermion Doubling problem. And, of course, they’ve been used in the Dynamical Triangulations approach to quantum gravity, the precursor to CDT.

Whether you’re working with a fixed or a random lattice, the real problem is not formulating a suitable lattice model (lattice models are a dime-a-dozen). The real problem is finding/proving the existence of a continuum limit.

There are many, many more lattice theories than there are continuum field theories. Think about lattice Yang-Mills theory in arbitrary dimension, $d$. Only for $d\leq 4$ does a continuum theory exist. It is only in the continuum limit that the (infinite) details of the latticization drop out¹, and the resulting continuum theory depends on only a finite number of parameters (and hence is predictive). Continuum theories are relevant (infrared repulsive) deformations of Renormalization Group fixed points, and not every lattice model has such a suitable fixed point.

Conversely, just because you can write down a latticization of some QFT doesn’t mean that lattice theory has anything to do with the behaviour of the continuum theory you would like to describe. Consider the case of QED. It is absolutely straightforward to write down a lattice version of QED. However, 4d Lattice QED does not have a continuum limit. If you hold the low-energy effective gauge coupling (the fine-structure constant) fixed, then the lattice coupling diverges at a finite value of the lattice spacing. You cannot send the lattice spacing to zero.

This is not a big deal because:

  1. We know that QED is just a low-energy effective theory, which is embedded in some larger theory at short distances. No one is wedded to the idea of putting QED itself on the lattice, as it’s not really fundamental.
  2. The lattice spacing at which Lattice QED breaks down is exponentially smaller than the electron Compton wavelength (the length scale at which QED effects become important). This is many, many orders of magnitude shorter than the length scale at which QED is subsumed into the aforementioned larger theory.
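To make point 2 concrete, here is a minimal numerical sketch (not from the post; it assumes the standard one-loop running with only the electron in the loop) of where the lattice coupling of QED diverges:

```python
import math

# One-loop running of the QED coupling with a single charged fermion:
#   1/alpha(mu) = 1/alpha(m_e) - (2/(3*pi)) * ln(mu/m_e)
# The coupling diverges (the "Landau pole") where the right-hand side
# vanishes, i.e. at ln(mu_pole/m_e) = 3*pi/(2*alpha).
alpha = 1 / 137.036                    # low-energy fine-structure constant

log_ratio = 3 * math.pi / (2 * alpha)  # ln(mu_pole / m_e)
decades = log_ratio / math.log(10)

print(f"ln(mu_pole/m_e) ~ {log_ratio:.0f}")   # ~ 646
print(f"mu_pole ~ 10^{decades:.0f} m_e")      # ~ 10^280 m_e
```

So, in this crude estimate, the lattice spacing at which lattice QED breaks down is smaller than the electron Compton wavelength by a factor of roughly $10^{-280}$, which is the sense in which the breakdown scale is “exponentially smaller.”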

In the case of lattice gravity, the challenge of finding a continuum limit is to be able to hold the effective Newton’s constant, $G_N$, and cosmological constant, $\Lambda$, fixed, while sending the lattice spacing to zero. The problem is more acute than in the QED case because, in the absence of a suitable RG fixed point, the length scale at which quantum gravity effects become important is the same as the lattice spacing, rather than being exponentially longer.

The precursor to CDT was the Dynamically Triangulated Random Surface model, a model of Euclidean 2d quantum gravity (coupled to matter). This model garnered a lot of interest in the early 1990s, because of its relationship to $c<1$ noncritical string theories which were being actively studied in those days. But attempts to add more matter ($c>1$) or to go to higher dimensions ($d>2$) ran into the basic problem of the unboundedness from below of the Euclidean Einstein action. The symptom, in the lattice model, is that the triangulations which dominate the functional integral represent a branched polymer or a crumpled phase, rather than anything resembling a smooth manifold.

CDT represents an important technical advance. Rather than Euclidean gravity, it is a discretization of Wick-rotated Minkowski gravity, and does not have the wrong-signature mode that leads to trouble in the Euclidean theory. Doing this requires a specialization to manifolds which are globally foliated by spacelike hypersurfaces of a fixed topology (usually $S^{d-1}$). This is a highly problematic restriction and, if I were inclined to write a second post, it would be largely devoted to why that restriction is problematic. But, for present purposes, this stratagem does avoid the aforementioned sickness of the Euclidean theory. For $d=3$ (and, possibly, $d=4$), the theory avoids degenerating into a branched-polymer or crumpled phase.

That’s rather nice, and an important step towards addressing the real issue, which is whether the theory has a continuum limit. In particular, we would want to send $(G_N)_{\text{eff}}$ and $\Lambda_{\text{eff}}$ to zero in lattice units, as we send the lattice spacing $a\to 0$ (with $\Lambda_{\text{eff}}(G_N)_{\text{eff}}^3\ll 1$).

In that limit (if it exists), you know what configurations should dominate the functional integral: those configurations which are “close” to a classical solution² of the (Wick-rotated) vacuum Einstein equations with cosmological constant. For positive $\Lambda$, these are solutions in which the 3-geometry expands and then recollapses.
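For orientation (a standard fact about the classical solution, not specific to the lattice model): in $d=4$ the Wick-rotated solution with positive $\Lambda$ is the round $S^4$, whose spatial 3-volume expands and recollapses as $\cos^3$ of Euclidean proper time. A quick numerical sketch of that profile:

```python
import math

# Euclidean de Sitter space (the round S^4) has the metric
#   ds^2 = dt^2 + B^2 cos^2(t/B) dOmega_3^2,
# so the 3-volume of the slice at time t scales as
#   V_3(t) ∝ cos^3(t/B),   -pi*B/2 <= t <= pi*B/2,
# with B set by the cosmological constant.
B = 1.0

def v3(t):
    # normalized 3-volume profile: expands, peaks at t=0, recollapses
    return math.cos(t / B) ** 3

# Midpoint-rule check of the total normalized 4-volume:
# the integral of cos^3 over a half-period is exactly 4/3.
n = 10000
dt = math.pi * B / n
total = sum(v3(-math.pi * B / 2 + (i + 0.5) * dt) for i in range(n)) * dt
print(round(total, 4))   # 1.3333, i.e. 4/3
```

This $\cos^3$ shape is the semiclassical “volume profile” that the Monte-Carlo results discussed below get compared against.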

So does this Minkowskian theory have a suitable RG fixed point, which would lead to a continuum limit? Reuter et al claim that Euclidean Einstein Gravity (with a cosmological constant) has an RG fixed point. But, as we’ve seen, their alleged fixed point corresponds to a solution of an essentially perturbative RG equation lying well beyond the realm of validity of perturbation theory. In other words, it is a mirage.

What one really needs is a truly nonperturbative analysis, and that’s what a lattice model affords us the possibility of attempting. No one has actually tackled the question of the existence of a fixed point head-on (through some numerical ‘block-spinning’ of an ensemble of triangulations). But one very crude test is to compare the Monte-Carlo simulations that they have done with what one might expect from a semiclassical analysis.

Ambjørn et al are rather upbeat about the agreement between their simulations and a mini-superspace analysis. But, when I stare at their results, it looks to me as if the effective cosmological constant is quite large (the comparison is insensitive to the effective value of $G_N$), and the departures from semiclassical behaviour (which one expects for small 3-geometries) set in at a scale-factor not all that far removed from the maximal size of the universe. In other words, their results don’t look terribly semiclassical at all.

Perhaps, this is because they are still far from the continuum limit (with their pitifully small average number of simplices), and larger simulations will start looking more semiclassical. That would be evidence (but, still, very weak evidence) for the existence of a continuum limit.

More likely, as I’ve argued previously, pure quantum gravity doesn’t have a suitable RG fixed point. Only by adding some as yet unknown (and, for practical purposes, unknowable) set of matter degrees of freedom would such an RG fixed point emerge. In that case, you wouldn’t even expect the lattice theory of “pure” gravity to have a continuum limit.

Still, regardless of whether CDT has a continuum limit, it is likely to be an interesting source of insights, just as lattice QED has proven fruitful to think about, despite the fact that it doesn’t have a continuum limit, either.

Update (2/4/2006): Shellings

Bah! After further private discussions with Greg, I no longer believe this objection is pertinent. A shelling implies an actual ordering of the $d$-simplices. A discrete time function only determines a partial ordering of the $d$-simplices. For triangulations of $X=M\times\mathbb{R}$, that partial ordering both exists and is unique. On the other hand, Ambjørn et al are largely interested in $X=M\times I$ or $X=M\times S^1$, for which existence is not guaranteed.

I thought I should move one of Greg Kuperberg’s critiques of CDT from the comments below to someplace “above the fold.” As I said above, we are specializing to $d$-manifolds of a very special sort, $X^{(d)}=M^{(d-1)}\times\mathbb{R}$. In our lattice model, we are going to sum over piecewise-flat, globally time-oriented, Minkowski-signature metrics on $X$, corresponding to triangulations³ of $X$.

Every triangulation, $T$, of $X$ gives rise to such a metric, so one might hope for a discrete approximation to “the space of all Minkowskian metrics on $X$, modulo diffeomorphisms” by simply summing over triangulations, $T$.

However, to actually define the lattice model, one needs to do a certain “Wick-rotation.” This requires us to choose a (discrete) time-coordinate, $t$, on each triangulation, $T$, that is, an assignment of an integer, $t_i$, to each vertex, $v_i$, of the triangulation. Since we are trying to model a diffeomorphism-invariant theory, we would like this time-coordinate to be determined by the triangulation itself, rather than being an arbitrary extra piece of data.

For a single $(k,d+1-k)$-simplex, this is easy: simply assign an integer $t$ to the first $k$ vertices, and the integer $t+1$ to the remaining $d+1-k$ vertices. The question is:

  • Can you do this compatibly for all of the $d$-simplices of the triangulation?
  • Is the resulting assignment unique?


For $d=2$, the answer to both questions is affirmative. So the triangulation, itself, defines a unique discrete time-coordinate, with respect to which one can do the required Wick-rotation.

For $d>2$, however, the answer to both questions is, in general, “No!”

  • There are triangulations, $T$, for which no compatible global choice of discrete time coordinate is possible. Such triangulations are called “non-shellable.”
  • There are triangulations for which there is more than one possible assignment of global time-coordinate. That is, the triangulation is “shellable,” but admits more than one shelling.

The first problem is, perhaps, surmountable. Ambjørn et al can simply say, “We will throw away all the non-shellable triangulations.” There may be nothing wrong with the corresponding piecewise-flat, globally time-oriented, Minkowskian metrics. But we’ll just not sample that part of the configuration space.

The second problem is more serious. Are you supposed to choose a particular shelling (a particular choice of how to Wick-rotate)? That clearly corresponds to introducing some extraneous, non-diffeomorphism-invariant data into your model. Are you supposed to sum over different choices of shelling (different choices of Wick-rotation)? That, too, would be bizarre, because there would no longer be any sort of 1-1 correspondence between configurations in the “Wick-rotated” lattice model and configurations of the Lorentzian theory we are supposed to be modelling⁴.

This seems like quite a serious objection to CDT, as currently formulated. If you look at what Ambjørn et al actually do, it corresponds to arbitrarily choosing a particular shelling and proceeding. Even if the resulting theory has a continuum limit (which, as I said, seems doubtful), the arbitrary choice of shelling means that there is no reason to believe that the resulting continuum theory is diffeomorphism-invariant.

Update (1/5/2006):

Ambjørn et al, however, are mostly interested in computing transition amplitudes. That is, they are interested in the case $X=M\times I$, where $M\times\{0\}$ and $M\times\{1\}$ are spacelike hypersurfaces⁵. In this case, a discrete time coordinate, as described above, is unique, if it exists. To see that, simply set $t=0$ everywhere on the initial hypersurface, and determine it on all of the other vertices of the triangulation by induction. That yields the desired discrete time coordinate if and only if the procedure ends up yielding $t=\text{const}$ on the final hypersurface.
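The induction just described is easy to mechanize. Here is a hypothetical sketch (the data format and names are mine, not from any CDT code): given the time-oriented timelike edges of a triangulation, propagate $t=0$ outward from the initial hypersurface, and report failure if some closed path forces an inconsistency:

```python
from collections import deque

def assign_discrete_time(initial_vertices, oriented_edges):
    """Extend t=0 on the initial hypersurface to all vertices, if possible.

    oriented_edges contains pairs (u, v) meaning "v is one time-step
    later than u".  Returns the dict {vertex: t}, or None if no
    consistent global time coordinate exists.
    """
    t = {v: 0 for v in initial_vertices}
    # adjacency with signed steps: +1 along an edge, -1 against it
    adj = {}
    for u, v in oriented_edges:
        adj.setdefault(u, []).append((v, +1))
        adj.setdefault(v, []).append((u, -1))
    queue = deque(t)
    while queue:
        u = queue.popleft()
        for v, step in adj.get(u, []):
            if v not in t:
                t[v] = t[u] + step
                queue.append(v)
            elif t[v] != t[u] + step:
                return None   # two paths disagree: no global time
    return t

# A consistent "prism": vertices a, b at t=0, with A, B one step later
ok = assign_discrete_time({"a", "b"}, [("a", "A"), ("b", "B")])
print(sorted(ok.items()))   # [('A', 1), ('B', 1), ('a', 0), ('b', 0)]

# A closed path with unequal numbers of timelike links (counted with
# sign) cannot be assigned a time function
bad = assign_discrete_time({"a"}, [("a", "x"), ("x", "y"), ("a", "y")])
print(bad)   # None
```

The second example is exactly the failure mode described in the next paragraph: paths with different signed counts of timelike links between two vertices.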

But, rather trivially, “most” triangulations of $X=M\times I$ will not have that property. There will be paths with different numbers of timelike links (counted with sign) between the initial and the final hypersurface.

The restriction to triangulations (equivalently, piecewise-flat metrics) for which the above-described discrete time coordinate does exist is a very strange and highly non-local restriction on the metrics you include in the functional integral. Essentially, you are restricting to metrics for which the geodesic distance between the initial and final slices (for all geodesics between the initial and final slice) is fixed. Even if there is a continuum limit, it’s not at all clear that the resulting continuum theory is a local, diffeomorphism-invariant theory.

1 In practical applications, the (infinite) freedom to tinker with the details of the latticization (both the choice of lattice, and the choice of the lattice action) can be used to advantage. Rather than work with the simple Wilson action, contemporary lattice gauge theorists prefer to use a more complicated lattice action, tuned so that the convergence to the continuum limit is more rapid.

2 The classical solutions themselves (indeed, the set of all smooth manifolds) form a set of measure zero in the functional integral. So, in our simulations, we don’t expect to see the configuration corresponding precisely to the classical solution. What we do expect to see is lots of configurations which are small fluctuations about the classical solution. Note that they are not doing the (oscillatory) Minkowskian functional integral, but the “Wick-rotated” one, in which all configurations enter with positive weight.

3 That is, each $d$-simplex of the triangulation is flat, and they are glued together along $(d-1)$-simplices, so that the time-orientations are compatible. Each $d$-simplex has a “standard” Minkowski metric on it, induced by embedding in $\mathbb{R}^{d-1,1}$.

Actually, we should talk about $(k,d+1-k)$-simplices, for $k=1,\dots,d$. The $d+1$ vertices of the $d$-simplex are divided into two sets, of $k$ and $d+1-k$ vertices, respectively. Within each set, the vertices are spacelike-separated, with an invariant edge-length-squared $l_s^2=a^2$. The set of $k$ vertices is “earlier” than the set of $d+1-k$ vertices, and the $k(d+1-k)$ edges that stretch between them are timelike, with invariant length-squared $l_t^2=-\alpha a^2$. Here $\alpha$ is a free parameter of the model.
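The edge bookkeeping in this footnote can be tabulated directly (a trivial sketch; the counts follow from the definitions above):

```python
from math import comb

# A (k, d+1-k)-simplex has k vertices at time t and d+1-k at time t+1.
# Edges within each set are spacelike; the k*(d+1-k) edges between the
# two sets are timelike.  Tabulate the counts for d = 4:
d = 4
types = []
for k in range(1, d + 1):
    spacelike = comb(k, 2) + comb(d + 1 - k, 2)
    timelike = k * (d + 1 - k)
    assert spacelike + timelike == comb(d + 1, 2)  # all C(5,2)=10 edges
    types.append((k, d + 1 - k, spacelike, timelike))
    print(f"({k},{d + 1 - k})-simplex: {spacelike} spacelike, {timelike} timelike edges")
```

So in $d=4$ the ensemble contains $(1,4)$, $(2,3)$, $(3,2)$ and $(4,1)$ simplices, each with its ten edges split between the two signatures.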

4 This leads to a host of other potential sicknesses, which we might enjoy discussing, once Ambjørn et al tell us what prescription for summing over shellings they wish to use.

5 Identical remarks apply to $X=M\times S^1$, which is what they actually use for their simulations.

Posted by distler at January 2, 2006 3:35 AM

14 Comments & 1 Trackback


I agree that it could be useful to think about CDT, but only if people have a clear view of its limitations and negative properties. As I understand it, the main claimed victory about CDT is that it is a four-dimensional theory, according to numerical results. But I am not sure that the interpretation of this work is right. I talk about this here.

Posted by: Greg Kuperberg on January 2, 2006 10:51 AM | Permalink | Reply to this


Yes, there are many technical points that I haven’t touched.

Another one is whether the “moves” described by Ambjørn et al are ergodic in the space of (shelled? shellable?) 4d triangulations. At least in the DT (as opposed to CDT) case, this was true in 2d, but there were serious doubts about whether it was true in 4d.

If the moves are not ergodic, then their Monte Carlo simulations are not, in fact, properly sampling the configuration space.

Posted by: Jacques Distler on January 2, 2006 11:11 AM | Permalink | PGP Sig | Reply to this

Re: Shellable

My view is that the construction is so ad hoc that you can say, tautologically, that they properly sample whatever they sample. They would then need some kind of consistency check to see glimmers of universality in their model. Their dimension result could be, or could have been, an example of such a consistency check.

On second thought, they do seem to claim (in hep-th/0105267) that they ergodically sample something with an independent definition. Aside from whether such a claim is really necessary to understand this model, I agree that it isn’t clear that any such claim is true. Almost certainly they would be looking at shelled rather than shellable triangulations.

Here is a relevant theorem: The number of moves that you need to connect two triangulations with n simplices of the same 4-manifold is known not to be a recursive function of n. (If an oracle provides you with values of that function, then you can solve the halting problem.) So even if you find an ergodic process for this, it is intractably slow for large triangulations. This result is not known for the simplest 4-manifolds such as S^4, but it is natural to conjecture the same bad news.

Since their 4-manifolds are shelled, it is possible that their model is computationally 3-dimensional rather than 4-dimensional. People do not think that connecting 3-dimensional triangulations by local moves is non-recursive. Indeed, the Geometrization Conjecture implies some recursive bound. But even here, no one knows a good upper bound. (Good either in the sense of close to the truth or encouraging for simulations.)

Posted by: Greg Kuperberg on January 2, 2006 3:13 PM | Permalink | Reply to this


What I find convenient in CDT is that it is formulated from the get-go in field theory/renormalization language that makes it understandable in terms of conventional physics.

In this language I am wondering what will clinch the case for having real 4dim gravity in their model. Seems to me much more difficult than just having *some* continuum limit, even a 4dim one.

(and even though you did not touch on it, some of their claim to fame is the odd statement that spacetime is two dimensional on short distances, never was able to understand what this could possibly mean)

Posted by: Moshe Rozali on January 2, 2006 12:12 PM | Permalink | Reply to this


In this language I am wondering what will clinch the case for having real 4dim gravity in their model. Seems to me much more difficult than just having some continuum limit, even a 4dim one.

The restriction on triangulations in CDT is not, strictly, a local one. So it’s not a-priori clear that a continuum limit exists or that, if there is one, that it corresponds to a local diffeomorphism-invariant field theory.

That’s one of the reasons that fans of CDT are so enthusiastic about the work of Reuter et al. If it were correct, it would provide some justification for believing that you will get 4d Einstein gravity in the continuum limit.

and even though you did not touch on it, some of their claim to fame is the odd statement that spacetime is two dimensional on short distances, never was able to understand what this could possibly mean

I have no particular problem with the assertion that the spectral dimension, $d_s$, approaches 2 at short distances. Greg, above, expressed some scepticism that their numerical results actually support the assertion that $d_s\to 4$ at long distances.

For that and other reasons, I tend to think that their simulations are too small to conclude anything about the long-distance behaviour of this theory.
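To illustrate what the spectral dimension measures (a toy sketch of my own, not anything from the CDT papers): one estimates $d_s = -2\,d\ln P(\sigma)/d\ln\sigma$ from the return probability $P(\sigma)$ of a random walk of $\sigma$ steps. On a flat 2d lattice the estimator comes out near 2, with $1/\sigma$ corrections:

```python
import math

# Exact diffusion on an L x L torus: evolve the probability distribution
# of a random walk started at the origin, record the return probability
# P(sigma), and extract d_s = -2 dlnP/dln(sigma).
L = 51

def step(p):
    # one diffusion step: hop to each of the 4 neighbours with prob 1/4
    q = [[0.0] * L for _ in range(L)]
    for i in range(L):
        for j in range(L):
            w = p[i][j] / 4.0
            q[(i + 1) % L][j] += w
            q[(i - 1) % L][j] += w
            q[i][(j + 1) % L] += w
            q[i][(j - 1) % L] += w
    return q

p = [[0.0] * L for _ in range(L)]
p[0][0] = 1.0
returns = {}
for sigma in range(1, 41):
    p = step(p)
    returns[sigma] = p[0][0]

# compare even walk lengths (odd-time returns vanish on a bipartite lattice)
s1, s2 = 20, 40
d_s = -2 * math.log(returns[s2] / returns[s1]) / math.log(s2 / s1)
print(round(d_s, 2))   # close to 2 at these walk lengths
```

In CDT one runs the analogous diffusion on the ensemble of triangulations; the claim is that this same estimator gives $d_s\approx 2$ at short walk times and $d_s\approx 4$ at long ones.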

Posted by: Jacques Distler on January 2, 2006 12:35 PM | Permalink | PGP Sig | Reply to this


The spectral dimension is a lattice construct, which may or may not correspond to some well-defined quantity in the continuum limit. I have no candidate for what that quantity would be in semi-classical gravity.

Posted by: Moshe Rozali on January 2, 2006 12:50 PM | Permalink | Reply to this

Spectral dimension

I certainly didn’t understand Reuter et al’s assertion that it is somehow related to the anomalous dimension of a certain operator in the continuum.

Posted by: Jacques Distler on January 2, 2006 12:58 PM | Permalink | PGP Sig | Reply to this

Re: Spectral dimension

I see, to be slightly more precise I am having trouble finding any well-defined observable that can be meaningfully called the “dimension of spacetime at short distances”, whether it is in this formalism or others. Anomalous dimensions of operators usually don’t measure the dimension of spacetime, quite possibly I am missing something…

Anyways, interesting review, thanks.

Posted by: Moshe Rozali on January 2, 2006 1:29 PM | Permalink | Reply to this

Spectral Dimension

The spectral dimension is a lattice construct, which may or may not correspond to some well-defined quantity in the continuum limit. I have no candidate for what that quantity would be in semi-classical gravity.

Doesn’t it amount to determining the dimension of spacetime by looking at the eigenvalues of some Laplace operator? Isn’t that closely related to determining the number of effective dimensions by looking, for instance, at the exponent which determines the fall-off of the gravitational potential, in accordance with your statement here?

Posted by: Urs on January 2, 2006 1:25 PM | Permalink | Reply to this

Re: Spectral Dimension

Wow, I should be careful what I write here and there, apparently some people are reading…

In QFT in a fixed background I can imagine the static potential between two probes depends on the distance between them in a way that in some sense interpolates between two- and four-dimensional-type behavior. Possibly this flow is encoded in anomalous dimensions of appropriately chosen operators.

But I thought we are doing quantum gravity here, so I am not sure how to repeat all these words… I guess I am stuck on the basic clash between locality and diffeomorphism invariance.

Posted by: Moshe Rozali on January 2, 2006 1:49 PM | Permalink | Reply to this

Re: Spectral Dimension

But I thought we are doing quantum gravity here

So how do you propose to measure the dimension of spacetime in nonperturbative quantum gravity? Or are you saying that you wouldn’t believe any claim in nonperturbative QG concerning the dimension of spacetime?

I am really just asking. I haven’t looked at any technical details of any CDT paper or anything. It’s just that looking at the spectral dimension as measured by some suitable Laplace operator seems to me to be one of the few ‘good’ candidates to define dimension ‘intrinsically’ and ‘quantumly’. Of course, that’s essentially the Alain-Connes-approach to studying quantum gravity.

Of course I agree, one would have to cook up a way to say what it means to measure the spectral dimension at ‘short’ scales in a diffeo invariant way. How do Loll et al. deal with that?

Posted by: Urs on January 2, 2006 2:05 PM | Permalink | Reply to this

Re: Spectral Dimension

Your last statement is precisely what I am confused about, I am not trying to make any statements beyond that.

Posted by: Moshe Rozali on January 2, 2006 2:09 PM | Permalink | Reply to this
Read the post CDT
Weblog: Musings
Excerpt: Some thoughts on Causal Dynamical Triangulation models.
Tracked: September 4, 2006 11:39 AM


>> Even if there is a continuum limit, it is not clear at all that the resulting continuum theory is a local, diffeomorphism-invariant theory

More recent results indicate (according to the authors) a similarity to Horava-Lifshitz gravity. I do not regard this as good news.

Posted by: wolfgang on April 17, 2010 11:44 AM | Permalink | Reply to this


Indeed, you are right.

If CDT has a continuum limit, it is far more likely to be related to Horava-Lifshitz gravity, than to Einstein-Hilbert.

When I wrote this post (long before Horava’s work), it was not known that a suitable fixed-point theory (with spatial diffeomorphism symmetry, and anisotropic scaling) existed. Now that Horava has presented a candidate, it seems very likely that’s what Ambjorn et al are studying on the lattice.

Certainly worthy of study, but probably not meriting the previous hype.

Posted by: Jacques Distler on April 18, 2010 11:01 PM | Permalink | PGP Sig | Reply to this
