## July 30, 2008

### Pre- and Postdictions of the NCG Standard Model

#### Posted by Urs Schreiber

At HIM this week there is a Noncommutative Geometry Conference.

Just heard Thomas Schücker talk about “The noncommutative standard model and its post- and predictions”, which, as it turns out, closely followed his entry for the Encyclopedia of Mathematical Physics, “Noncommutative geometry and the standard model”.

**The setup**

Recall from our discussion here that the “noncommutative standard model”, due to Alain Connes and collaborators, is a Kaluza-Klein model – a model of particle physics where all observed forces on a pseudo-Riemannian spacetime $X$ are derived from pure gravity on a spacetime $X \times Y$ for $Y$ a compact Riemannian space with “essentially vanishing volume” – where now the crucial ingredient is that noncommutative geometry is used to give the idea of “essentially vanishing volume” a precise meaning:

$Y$ is taken to be a noncommutative space which is of dimension 0 as seen by heat diffusing on it. Only its dimension as seen by gauge theory, its KO-dimension, is higher – namely 6 mod 8. So $Y$ is like a manifold shrunk to a point that still remembers some of its inner structure, in particular its Riemannian structure.

Using spectral triples, the Riemannian geometry of the 4+[6]-dimensional spacetime $X \times Y$ is entirely encoded in how it is probed by the dynamics of a spinning quantum particle roaming around in it. Algebraically this is given, essentially, by a Hilbert space $H$ of states of that particle, by an algebra $A$ of position observables of that particle, acting on $H$, and, crucially, by the Dirac operator $D$ of that particle, whose eigenvalues are essentially the possible energies that the particle can acquire while zipping through $X \times Y$. The Riemannian structure of $X \times Y$ is encoded in these energy eigenvalues.
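A remark on how a metric can be encoded in such data at all: in the commutative case, Connes’ distance formula recovers the geodesic distance on a Riemannian spin manifold purely spectrally,

$d(x,y) = \sup \{\, |f(x) - f(y)| \; : \; f \in A, \; \| [D,f] \| \leq 1 \,\} \,,$

so specifying $(A,H,D)$ really is a way of specifying a geometry, and one which still makes sense when $A$ is noncommutative.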

Given such a quantum particle, one would want to see what its second quantization is, which would be a quantum field theory describing many such particles propagating on $X \times Y$ and interacting with each other. Such a quantum field theory would traditionally be given by a functional – the action functional – depending on the Riemannian metric on $X\times Y$ as well as on the “condensate” fields of these particles. All these quantities are supposed to be encoded in a pair consisting of the spectral triple and a vector $\psi$ in the Hilbert space.

Connes gave an argument that there is an essentially unique functional $S_{\mathrm{spec}} : PointedSpectralTriples \to \mathbb{R}$ on such pairs which satisfies the obvious requirement that it be additive under disjoint unions of Riemannian spaces. This he called the spectral action functional.
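Schematically (and up to normalization conventions), the functional singled out by this additivity requirement is

$S_{\mathrm{spec}}(A,H,D;\psi) = Tr_H \, f(D/\Lambda) + \langle \psi, D \psi \rangle \,,$

for $f$ a suitable cutoff function and $\Lambda$ a scale: it depends on the geometry only through the spectrum of $D$, whence the name.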

Now evaluate this functional on spectral triples describing Kaluza-Klein models $X \times Y$ as above. One finds that, as in ordinary commutative Kaluza-Klein theory, the Riemannian structure on such products can be interpreted as a Riemannian structure on $X$ together with a connection on a principal bundle over $X$ – the gauge bundle. Restrict attention to the subset

$Con \subset PointedSpectralTriples$

of all spectral triples which describe $\mathbb{R}^4 \times Y$ with the standard flat metric on $\mathbb{R}^4$, such that the gauge group of the induced gauge bundle is that observed in the standard model, and such that the metric on $Y$ has certain fixed values, which one later identifies with the Yukawa coupling terms. On this subset the spectral action

$S_{\mathrm{spec}} : Con \to \mathbb{R}$

restricts to a functional of the connection on that gauge bundle and of a section of a spinor bundle over $\mathbb{R}^4 \times Y$ (the element in the Hilbert space).

The standard model action functional is precisely a functional of such a kind. See table 2 on page 10. So then the task is to adjust the remaining details of the spectral triple (in particular the metric on the [6]-dimensional compact $Y$) such that $S_{\mathrm{spec}}|_{Con}$ coincides entirely with the standard model action (as far as that is fixed).

When that is achieved, one has found a noncommutative Kaluza-Klein realization of the standard model.

**How to get predictions**

There is a list of axioms about the precise interdependence of the three ingredients in a spectral triple. The statement is that there is a choice $Con$ such that $S_{\mathrm{spec}}|_{Con}$ does yield the standard model. There is a bit of wiggle room then, but not much, due to the various axioms on a spectral triple. Correspondingly, not all parameters of the standard model are entirely known at the moment. Most notably, the mass of the Higgs particle is yet to be measured, hopefully by the LHC.

As a result, after identifying in the landscape of all spectral triples those regions which are compatible with the known parameters of the standard model under the above procedure – see figure 3 on p. 9 for a cartoon of these landscape regions – one can check what the remaining, unknown, parameters of the standard model derived from spectral triples in these regions would be. Doing so yields the desired predictions deriving from the noncommutative approach.

**The concrete predictions**

According to Thomas Schücker’s review, the main post- and predictions are the following (see his review article for more details):

Higgs sector: there is a single Higgs and its mass is $m = 171.6 \pm 5\, \text{GeV} \,.$ The presence of the single Higgs is derived from some representation-theoretic arguments for the spectral triple. I don’t know how that works. The mass of the Higgs is obtained as follows:

The spectral model demands strong relations between the gauge couplings, namely $g_2 = g_3 = 3\lambda$ for the $su(2)$ and $su(3)$ gauge couplings $g_2$ and $g_3$ and the Higgs self-coupling $\lambda$, respectively. Then use the ordinary renormalization flow to run the couplings up in energy until this identification is achieved. See the figure on p. 11.

This assumes the usual “big desert” hypothesis is true, that no new physics appears up to this point. Take the resulting energy scale $\Lambda$ to be the fundamental scale of the NCG model.
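As a rough illustration of where a scale of order $10^{17}\, \text{GeV}$ can come from, here is a back-of-the-envelope version of that running, using the standard one-loop SM beta coefficients and approximate coupling values at $M_Z$ (textbook inputs assumed here for illustration; they are not taken from Schücker’s article):

```python
import math

# One-loop running of a gauge coupling:
#   d(g_i)/d(ln mu) = b_i * g_i^3 / (16 pi^2),
# which integrates in closed form to
#   1/g_i^2(mu) = 1/g_i^2(M_Z) - b_i/(8 pi^2) * ln(mu/M_Z).
# Standard one-loop SM coefficients for SU(2) and SU(3):
b2, b3 = -19.0 / 6.0, -7.0

# Approximate inputs at the Z mass (illustrative textbook values):
# alpha_s(M_Z) ~ 0.118, alpha_2(M_Z) = alpha_em/sin^2(theta_W) ~ 0.034.
M_Z = 91.19  # GeV
g3 = math.sqrt(4 * math.pi * 0.118)
g2 = math.sqrt(4 * math.pi * 0.034)

# Solve 1/g2^2 - 1/g3^2 = (b2 - b3)/(8 pi^2) * ln(mu/M_Z)
# for the scale mu at which g2(mu) = g3(mu):
log_ratio = (1 / g2**2 - 1 / g3**2) * 8 * math.pi**2 / (b2 - b3)
Lambda = M_Z * math.exp(log_ratio)

print(f"g2 meets g3 at mu ~ {Lambda:.1e} GeV")  # of order 10^17 GeV
```

Since the one-loop equations integrate in closed form, no numerical ODE solving is needed; two-loop effects and the $g_1$ story shift the numbers, but the crossing scale stays in this ballpark.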

Fundamental NCG scale: This $\Lambda$ is predicted to be $\Lambda = 10^{17}\, \text{GeV} \,.$

(At this energy scale, so the idea goes, one should also expect the $X$-factor in $X \times Y$ to begin to look noncommutative.) The spectral action then expresses the Higgs mass somehow as a function of the gauge couplings (I am not sure I recall how). So this fixes the Higgs mass at the scale $\Lambda$. Then run the couplings back down to the observed energy scale to obtain the above prediction.

(Notice a couple of crucial assumptions here: the “big desert”, and that the ordinary renormalization flow makes sense up to the scale $\Lambda$ where some more fundamental theory is expected to take over. Also the number of generations enters this computation; it is not predicted by the model but set to 3 by hand.)

Top quark mass. From a similar computation the top quark mass is apparently “postdicted” to be $m_t \lt 186\, \text{GeV} \,.$ The observed value is $m_t = 174.3 \pm 5.1\, \text{GeV}$.

$\rho_0$: I forget the details of this. But there is the parameter $\rho_0$ (which is one over the $\cos^2$ of some angle, which the inclined reader will surely remind me of) and which is measured to be $\rho_0 = 1.0002 \pm something.$ The NCG model predicts exactly $\rho_0 = 1 \,.$
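For reference, the angle in question is presumably the weak mixing angle: at tree level one defines

$\rho = \frac{m_W^2}{m_Z^2 \cos^2 \theta_W} \,,$

which equals 1 in the standard model with a single Higgs doublet, and $\rho_0$ parametrizes possible deviations from this tree-level relation.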

Posted at July 30, 2008 11:29 AM UTC


### Re: Pre- and Postdictions of the NCG Standard Model

If that Higgs prediction were correct, would LHC be expected to see it?

Posted by: David Corfield on July 30, 2008 2:05 PM

### Re: Pre- and Postdictions of the NCG Standard Model

> If that Higgs prediction were correct, would LHC be expected to see it?

Yes. I am hoping somebody more expert than me will chime in, but this is the expectation that one usually hears.

For instance Sabine Hossenfelder mentions it in her blog entry The Higgs mass (last sentence) and gives more literature.

The generally accepted current experimental bounds are apparently

$114\, \text{GeV} \lt m_{Higgs} \lt 182\, \text{GeV} \,.$

Posted by: Urs Schreiber on July 30, 2008 2:28 PM

### Re: Pre- and Postdictions of the NCG Standard Model

Now checked with an expert:

> If that Higgs prediction were correct, would LHC be expected to see it?

Answer: yes, within two years after the beam is set up.

Information about higher mass regions will be available before some of the lower mass regions, due to differences in cross sections.

Posted by: Urs Schreiber on July 30, 2008 2:48 PM

### Re: Pre- and Postdictions of the NCG Standard Model

Urs wrote:

> Answer: yes, within two years after the beam is set up.

I was going to say that any $m_H\gt 160\, \text{GeV}$ will be easy to see. The “difficult range” is something like $125\, \text{GeV}\lesssim m_H\lesssim 160\, \text{GeV}$.

But Urs beat me to it.

1. You say that, at the high scale, one has $g_2 = g_3 = 3\lambda$. What about $g_1$? Why do only two of the three SM gauge couplings unify? (In their model, somehow or other, $\lambda$ is also interpreted as a gauge coupling — which sort of motivates the idea that it might unify with the SM gauge couplings.)
2. Where does their prediction of the top mass come from? The matrix of Yukawa couplings is the greatest mystery of the SM. The top Yukawa coupling is very nearly precisely equal to 1. There are other eigenvalues which are as small as $O(10^{-6})$, and there is a complicated structure of mixing angles. To “predict” the top mass, one has to say something about this matrix of Yukawa couplings. What is it that they are able to say?
3. I assume that the statement that the $\rho$ parameter is 1 (and, presumably that the other Peskin-Takeuchi parameters also vanish) is just the statement that the scale of new physics is $10^{17}\, \text{GeV}$ (the “desert” to which you referred). What about neutrino masses? One usually says that this requires new physics at a lower ($\sim 10^{14}\, \text{GeV}$) scale. That still means vanishing Peskin-Takeuchi parameters, but it does indicate that the “desert” is not completely barren.
Posted by: Jacques Distler on July 30, 2008 3:26 PM

### Re: Pre- and Postdictions of the NCG Standard Model

Part of the motivation for posting this was that it would force me to learn more details about the NCG model in order to be able to answer comments here. :-)

I suppose I should open Chamseddine, Connes, Marcolli at this point, spend some time reading and then get back to you.

But let me see how far I get:

First of all

> You say that, at the high scale, one has $g_2 = g_3 = 3 \lambda$.

Ah, as you noticed, that was a typo. It should be $g_2^2 = g_3^2 = 3\lambda \,.$

What about $g_1$?

Thomas Schücker in his talk emphasized that the $U(1)$ factor enters on a different footing than the $SU(2)$ and $SU(3)$ factors. The gauge group has to arise in this model as the automorphism group of an algebra and we have $Aut(\mathbb{H}) = SU(2)/\mathbb{Z}_2$ and $Aut(M_3(\mathbb{C})) = SU(3)/\mathbb{Z}_3 \,.$ On the other hand, the $U(1)$-part appears later as a central extension somewhere. I think because the representation on the Hilbert space is going to be projective.

Anyway, for reasons of that sort in the talk $g_1$ was suppressed. But on page 52 of “$m c^2$” (Chamseddine, Connes, Marcolli) it has $g_2^2 = g_3^2 = \frac{5}{3}g_1^2 \,.$

> Where does their prediction of the top mass come from?

Apparently there is an estimate for the sum of squares of all the masses. If I read my hastily written notes correctly, the above equation actually extends to

$g_3^2 = g_2^2 = 3\lambda = \frac{1}{4}\sum g_Y^2 \,,$

where $g_Y$ are the Yukawa couplings, I suppose. That sum will be highly dominated by the large top mass, which is where the estimate comes from.

But let me try to check that again.

Concerning the neutrino masses: I am being told that they can be accommodated. But I don’t know any further details at the moment.

Posted by: Urs Schreiber on July 30, 2008 4:34 PM

### Re: Pre- and Postdictions of the NCG Standard Model

> Anyway, for reasons of that sort in the talk $g_1$ was suppressed. But on page 52 of “$m c^2$” (Chamseddine, Connes, Marcolli) it has $g_2^2=g_3^2=\frac{5}{3}g_1^2\, .$

But we already know that relation fails! The couplings don’t actually unify, with just the SM degrees of freedom.

That’s one of the pieces of evidence for supersymmetry: adding the contributions of the superpartners to the RG running does cause the couplings to unify (at around $10^{16}$-$10^{17}$ GeV).

If the prediction is $g_2^2=g_3^2={\color{purple}\frac{5}{3}g_1^2} = {\color{red} 3\lambda}$ then I would say that prediction has already been falsified (even before one gets to the red part of the equation)!

> If I read my hastily written notes correctly, the above equation actually extends to
>
> (1) $g_3^2=g_2^2=3\lambda=\tfrac{1}{4}\sum g_Y^2$

That’s an even more mysterious relation.

I can imagine — if the Higgs is something like a gauge field — that the Yukawa couplings are something like gauge couplings. But why do we get only a constraint on the sum of the squares of the Yukawa couplings, and not on the individual Yukawa couplings themselves?

> I am being told that they can be accommodated.

It’s not that hard to generate the requisite dimension-5 operator. The question is: why does it have such a large coefficient, if it is generated at the scale $\Lambda= 10^{17}\, \text{GeV}$, as opposed to at some lower scale?

Posted by: Jacques Distler on July 30, 2008 5:32 PM

### Re: Pre- and Postdictions of the NCG Standard Model

Hi,

I had posted this yesterday but there was a server error and parts of it were lost. Then I had to run to catch a train. Here is what survived. A part c) on the interpretation of the internal metric as the Yukawa couplings is missing.

Concerning gauge coupling unification:

I am not sure what precisely the attitude towards this is among the NCG model builders. From what I have read I get the impression that there is a vague hope along the following lines: that the couplings experimentally come close to unifying shows that the NCG model is on the right track, while that they fail to unify exactly shows that the big desert hypothesis is an oversimplification. On p. 5 of CCM it says:

> Naturally, one does not really expect the “big desert” hypothesis to be satisfied. The fact that the experimental values show that the coupling constants do not exactly meet at unification scale is an indication of the presence of new physics. A good test for the validity of the [NCG] approach will be whether a fine tuning of the finite geometry can incorporate additional experimental data at higher energies.

You write:

> The couplings don’t actually unify, with just the SM degrees of freedom.
>
> That’s one of the pieces of evidence for supersymmetry:

It is clear from the talks that I have heard that there is a certain inclination to dislike supersymmetric extensions of the SM among the NCG model builders. But that seems to be more a matter of taste than of principle. I don’t see an a priori reason why susy versions of this model should not exist.

Concerning Higgs as a gauge field:

In the general setup of NCG, given a generalized Dirac operator $D$, a gauge field is given by an expression of the form

$\sum_i a_i [D,b_i]$

where $\{a_i,b_i\}$ are elements of the algebra. Clearly, for the special case of $D = \gamma^\mu \partial_\mu$ the standard Dirac operator on flat space, this gives the usual slashed gauge potentials $\gamma^\mu A_\mu \,.$

So Connes builds his Dirac operator which encodes the gravitational and gauge field background by letting $D = D^{(1,0)} \otimes Id + \gamma^5 \otimes D^{(0,1)}$ be the external Dirac operator $D^{(1,0)}$ on flat $\mathbb{R}^4$ and some internal Dirac operator $D^{(0,1)}$ on that $Y$ factor, which eventually encodes the Yukawa couplings.

Then graviton and gauge boson fields are “turned on” by adding “fluctuations” of the above sort, i.e. roughly

$D \mapsto D_{A,\phi} := D + \sum_i a_i [D,b_i] \,.$

That sum decomposes into two sums. The external one which yields the usual gauge bosons $A := \sum_i a_i [D^{(1,0)},b_i]$ and the internal one $H := \sum_i a_i [D^{(0,1)},b_i] \,.$ This internal gauge boson gets identified with the Higgs field.
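A toy example (the classic two-sheeted picture, not the full standard model triple) shows how such an internal fluctuation acts like a Higgs: take $Y$ to be two points, $A = \mathbb{C} \oplus \mathbb{C}$ acting diagonally on $H = \mathbb{C}^2$, and $D^{(0,1)} = \begin{pmatrix} 0 & m \\ \bar m & 0 \end{pmatrix} \,.$ Then for $a = (a_1, a_2)$ and $b = (b_1, b_2)$ one computes

$a [D^{(0,1)}, b] = \begin{pmatrix} 0 & a_1 m (b_2 - b_1) \\ a_2 \bar m (b_1 - b_2) & 0 \end{pmatrix} \,,$

so sums of such terms generate arbitrary off-diagonal perturbations, and the fluctuated operator is $D_\phi = \begin{pmatrix} 0 & m(1 + \phi) \\ \bar m (1 + \bar \phi) & 0 \end{pmatrix}$: the internal “gauge field” $\phi$ shifts the fermion mass term, which is just what a Higgs field does.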

That this $H$ really can be interpreted as the Higgs is proposition 3.5 on p. 23, combined with the way this term shows up in the spectral action, p. 31 and following.

Posted by: Urs Schreiber on July 31, 2008 9:03 AM

### Re: Pre- and Postdictions of the NCG Standard Model

Just heard a talk by Ali Chamseddine on the NCG standard model (or standard model model, rather).

He said that they are working on trying to see if the following can help to deal with the gauge coupling unification issue:

the gauge and gravity part $S_{\text{gauge-gravity}}$ of the spectral action is “unique” only up to a choice of function $f$ which enters as $S_{\text{gauge-gravity}} : SpectralTriple \to \mathbb{R}$, $(A,D,H) \mapsto Tr_H\, f(D/\Lambda)$, for $\Lambda$ the constant later to be identified with the energy scale where the gauge coupling unification is required/predicted/assumed. (p. 3)

There are standard formulas for such traces which say that this depends only on the even moments of $f$ and that the $2 n$th moment controls the coefficient of ${\Lambda}^n$ or the like.

The standard model + Einstein gravity action functional is obtained by truncating this expansion after the first three contributions, i.e. at including $\Lambda^4$, as on the top of page 31.
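For orientation, the expansion in question is of heat-kernel type; schematically, in 4 dimensions and up to conventions (compare CCM around p. 31),

$Tr \, f(D/\Lambda) \;\sim\; f_4 \Lambda^4 \, a_0(D^2) + f_2 \Lambda^2 \, a_2(D^2) + f_0 \, a_4(D^2) + O(\Lambda^{-2}) \,,$

where $f_k = \int_0^\infty f(v)\, v^{k-1}\, d v$ for $k \gt 0$, $f_0 = f(0)$, and the $a_{2k}$ are the Seeley–DeWitt coefficients of $D^2$. The truncation mentioned above keeps the $\Lambda^4$, $\Lambda^2$ and $\Lambda^0$ terms.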

But really one should take all further terms into account, too. That introduces one free parameter per order: the moment of $f$ at the given order. These higher-order corrections would modify the RG flow, I suppose. So maybe there is a choice of the higher moments of $f$ such that with the modified RG flow one gets precise gauge coupling unification.

This is what Ali Chamseddine said they have started to look at.

Posted by: Urs Schreiber on July 31, 2008 12:20 PM

### Re: Pre- and Postdictions of the NCG Standard Model

> I can imagine — if the Higgs is something like a gauge field — that the Yukawa couplings are something like gauge couplings. But why do we get only a constraint on the sum of the squares of the Yukawa couplings, and not on the individual Yukawa couplings themselves?

I have now asked Ali Chamseddine about how this works. Something like this:

when computing the spectral action $Tr\, f(D/\Lambda)$ one finds terms with scalar coefficients given by traces of various parts of $D$. In particular there is the trace $Tr( (D^{0,1})^2 )$ of the square of the Dirac operator on the internal space $Y$, equation 3.14, p. 24. This in turn involves a trace over products of the Yukawa coupling matrices, the quantities denoted $a$ and $c$ in equation 3.16, p. 25.

This factor $a$ then turns out to appear as the coefficient of the Higgs kinetic energy term (in 3.41, p. 31 ) multiplied with the 0th moment $f_0$ of that function used in the definition of the spectral action.

To normalize, one divides the entire spectral action by a corresponding factor to remove these factors in front of the Higgs kinetic action. But since the fermionic part “$\langle \psi , D \psi \rangle$” is part of the spectral action, it too is changed by this global prefactor. And the internal part of the $D$ here encodes the Yukawa couplings. With that prefactor, it turns out that the sum of the squares of the fermion masses is fixed to some value.

Roughly.

Posted by: Urs Schreiber on July 31, 2008 1:24 PM

### Re: Pre- and Postdictions of the NCG Standard Model

> To normalize, one divides the entire spectral action by a corresponding factor to remove these factors in front of the Higgs kinetic action.

Why the heck do you do that? Why not just rescale the Higgs field to absorb this factor?

In fact, if you look at their equation (3.41), none of the bosonic kinetic terms are canonically-normalized. So you have to do a bunch of field redefinitions to get the canonically normalized field.

As you are well-aware, rescaling the whole action does precisely nothing at the classical level. (Such a rescaling can be absorbed in a redefinition of $\hbar$.)

Looking at the fifth line of their (3.41), we should rescale the Higgs field by a factor of $\sqrt{a f_0/2\pi^2}$ to give it a canonical kinetic term. If the original matrix of Yukawa couplings is $\lambda'$ (I refuse to use their deliberately godawful notation), $a = tr ((\lambda')^\dagger\lambda')$ and the Yukawa coupling for the rescaled field is $\lambda = \frac{\lambda'}{\sqrt{a f_0/2\pi^2}}$

Similarly, if the original gauge coupling (for any factor in the gauge group) was $g'$, we need to rescale the corresponding gauge field by $g'\sqrt{2f_0}/\pi$ to give it a canonically-normalized kinetic term. The gauge coupling for the canonically-normalized gauge field is $g = \frac{\pi}{\sqrt{2f_0}}$

So, we have a relation like $g^2 = \tfrac{1}{4} tr(\lambda^\dagger \lambda)$

Carrying through the same rescaling on the quartic Higgs self-coupling, one finds that its coefficient is $4 g^2 tr(\lambda^\dagger\lambda)^2$ which is not quite the result you quoted, but I’m not going to worry about piddly grade-school algebra at this point.

> the gauge and gravity part $S_{\text{gauge-gravity}}$ of the spectral action is “unique” only up to a choice of function $f$ which enters $\begin{gathered} S_{\text{gauge-gravity}}: \text{SpectralTriple}\to\mathbb{R}\\ (A,D,H)\mapsto Tr_H\, f(D/\Lambda) \end{gathered}$ for $\Lambda$ the constant later to be identified with the energy scale where the gauge coupling unification is required/predicted/assumed.

> But really one should take all further terms into account, too.

Egads! Then you have an infinite number of coupling constants to specify (and no a priori prescription for specifying them, as $f$ is an arbitrary function).

Are these really supposed to affect the running at scales well below $\Lambda$ (I don’t really see how)? Or are they simply supposed to introduce large threshold corrections, which explain why the coupling constants don’t actually meet at the unification scale? If the latter, then I don’t see which part of (1) (and the “predictions” that would follow from it) is supposed to survive these threshold corrections.

Of course, staring at their action, (3.14), for more than 30 seconds makes you wonder why the electroweak symmetry-breaking scale isn’t the unification scale.

In a sense, this is just the usual fine-tuning problem, except that, in their prescription, one is not free to fine-tune the coefficient of the quadratic term in the Higgs potential. The only parameters one is allowed to play with are the moments, $f_k$. Having fixed the gauge coupling at the unification scale, and the value of Newton’s constant, there is no further freedom to tune the electroweak symmetry-breaking scale.

This seems like a rather bigger problem than the (lack of) coupling constant unification.

Posted by: Jacques Distler on July 31, 2008 7:34 PM

### Re: Pre- and Postdictions of the NCG Standard Model

> we should rescale the Higgs field

I suppose that’s what I should have said.

> Egads! Then you have an infinite number of coupling constants to specify (and no a priori prescription for specifying them, as $f$ is an arbitrary function).

That’s true. This is swept under the rug a bit in most presentations. I am not sure what the general idea about this is among the practitioners. For instance how $f$ is supposed to be related to renormalization.

> Are these really supposed to affect the running at scales well below $\Lambda$

Well, as I said, this is an idea that I think Ali Chamseddine said they are working on. But since no details have been made public, maybe it is vain to speculate about what exactly he has in mind.

> This seems like a rather bigger problem than the (lack of) coupling constant unification.

Hm, it would be good to get someone more expert than me to reply to this. I’ll see if I can find someone…

Posted by: Urs Schreiber on August 1, 2008 12:25 PM

### Re: Pre- and Postdictions of the NCG Standard Model

Jacques,

by the way, I would enjoy hearing your opinion on the following opinion of mine.

This is how I think about the Connes-NCG standard model model:

Maybe it fails to pass the detailed test in the end. But it comes pretty close with relatively little conceptual input, repackaging an impressive amount of structure into a simpler structure.

Whether or not this particular model will turn out to be phenomenologically viable, I think it shows one thing: the potential importance of “non-geometric phases”.

There is the work by Roggenkamp and Wendland Limits and Degenerations of Unitary Conformal Field Theories and the unpublished work by Soibelman which shows that for any 2d CFT there is a systematic precise way to take a point particle limit and extract a spectral triple describing the effective target space geometry of the CFT regarded as a string background.

So I am thinking of Connes’ NCG as a way to talk about effective target space geometries of point particle limits of general 2dCFTs (“general” meaning that they need not come from $\sigma$-models, i.e. can be non-geometric). Notice the constraint on the dimension to be 4+[6 mod 8] for the NCG model.

From that point of view the main message I am getting here is: non-geometric string backgrounds may have good chances to be phenomenologically viable. And it might be worthwhile to concentrate more effort on understanding these than on understanding geometric flux compactifications. We know that every string background is approximated by a spectral triple, and Connes’ work shows what happens when one searches for the standard model in that space of spectral triples.

I am not sure what the general attitude is towards non-geometric points in the space of string backgrounds. My impression has always been that they are unduly ignored. I can imagine good reasons (namely practical reasons) for ignoring them, but I would like to see a discussion of whether or not this is a good thing. The closest to such a discussion that I ever found (but that need not mean much, I’d be happy to be educated) is

Fernando Marchesano, Progress in D-brane model building, where in section 5.3 on p. 22 it says

> As stated before, the mirror of a type IIB flux background may not have a geometric description. While intuitively less clear, one can still make sense of these backgrounds as string theory compactifications […]. Finally, duality arguments suggest that the different kinds of ‘non-geometric fluxes’ that one may introduce in type II theories form a much larger class than the geometric ones [140]. While our knowledge of these non-geometric constructions is still quite poor, and mainly based on mirrors of toroidal compactifications, the above results have led many to believe that non-geometric backgrounds correspond to the largest fraction of the ‘type II landscape’. A fraction which has so far been unexplored.

(my emphasis).

I suppose this must be clear to anyone who ever thought about it, but it is rarely discussed: concentrating on geometric Calabi-Yau flux compactifications (either individually or as an ensemble) means making a huge restriction on the a priori possibilities without any guarantee that the solution is to be found there. No?

Posted by: Urs Schreiber on July 31, 2008 3:47 PM

### Re: Pre- and Postdictions of the NCG Standard Model

> Maybe it fails to pass the detailed test in the end. But it comes pretty close with relatively little conceptual input, repackaging an impressive amount of structure into a simpler structure.

I completely fail to appreciate the “economy” of their way of rewriting the Standard Model.

What is interesting (perhaps) is that they find relations among the coupling constants over and above those guaranteed by gauge invariance and renormalizability (hence their “predictions”).

But there are serious problems (as noted above).

> I am not sure what the general attitude is towards non-geometric points in the space of string backgrounds. My impression has always been that they are unduly ignored. I can imagine good reasons (namely practical reasons) for ignoring them, but I would like to see a discussion of whether or not this is a good thing.

It has been known forever that “most” string vacua are non-geometrical. What’s not understood in that context is moduli stabilization.

That’s one reason for concentrating on the classes of vacua where we do understand (and can compute in some controllable approximation) the stabilization of the moduli.

Even among those vacua, there’s a yet-smaller subset which is of particular interest. These are the vacua which have “decoupling limits” in which 4D gravity (and most of the moduli) decouple from the “Standard Model” particle physics¹.

These are the vacua which are (at least in principle) predictive about particle physics. That makes them much more attractive objects of study than vacua where the problem of fine-tuning the cosmological constant and extracting particle physics are inextricably intertwined.

• Are there nongeometrical vacua where the moduli are stabilized?
• Among those, are there vacua with decoupling limits?

The answer to both these questions is surely “yes”. (We know, at least, an existence proof, because there are dualities relating some geometrical and nongeometrical vacua.) But the tools for studying these questions are much less developed.

And, frankly, they are insufficiently developed in the geometrical case.

¹ The quotes around “Standard Model” are particularly important here, because none of the vacua found to date have fully realistic particle physics.

Posted by: Jacques Distler on July 31, 2008 8:05 PM

### Re: Pre- and Postdictions of the NCG Standard Model

> It has been known forever that “most” string vacua are non-geometrical.

Ah, good to hear that. I had some trouble a while ago convincing somebody that the space of string vacua should be of considerably larger cardinality than the infamous finite number $\sim 10^{\sim (10^2)}$ of Calabi-Yau flux compactifications.

(By the way, is it even clear that these CY flux vacua all really exist as full CFTs?)

But what exactly does “known” mean here? I certainly find it very plausible that if I pick a random (S)CFT (of given central charge) there are minimal chances that it comes from a $\sigma$-model. But is there a way to make that intuition more precise? Such as, is there maybe some characteristic quantity associated to a CFT which obstructs its realization as a $\sigma$-model or the like?

> What’s not understood in that context is moduli stabilization.

Okay. But I thought there are lots of other models which are considered while ignoring the moduli stabilization problem for the time being. Such as pretty much all the intersecting brane models and the handful of semi-realistic heterotic models. No?

> Are there nongeometrical vacua where the moduli are stabilized?

Is it even clear how precisely moduli for nongeometric vacua behave? Can’t it happen that they become discrete parameters, for instance?

> And, frankly, they are insufficiently developed in the geometrical case.

Good that you say that, thanks.

Posted by: Urs Schreiber on August 1, 2008 11:12 AM

### Re: Pre- and Postdictions of the NCG Standard Model

> I completely fail to appreciate the “economy” of their way of rewriting the Standard Model.

In private discussion I wouldn’t try to convince you, but here in a public forum I want to reply to this in case anyone is eavesdropping on our conversation.

To recap, the claim is that all the structure is encoded in the geometry of a noncommutative space $Y$ fixed by three choices:

a) the algebra of functions on it is $C^\infty(Y) = \mathbb{C} \oplus \mathbb{H}^2 \oplus M_3(\mathbb{C})$;

b) it looks $(\dim(Y) = [6])$-dimensional to fermionic particles zipping through it;

c) the Hilbert space of states of a fermionic particle in this space is the direct sum of all distinct irreducible $C^\infty(Y)$-bimodules.

Take this $Y$ as the internal space of a Kaluza-Klein model, plug the data into the spectral action1, and out comes something pretty close to the standard model action (with the numerical values of the Yukawa couplings then still to be identified with the metric on $Y$). I’d call that economical, even if the result falls short of being a perfect match.

1 To get more than just data-repackaging one will eventually want to understand what passing to the spectral action means. Not much exists on this at the moment, but Chamseddine once checked to first order that forming the spectral action of the string’s Dirac-Ramond operator indeed coincides with forming its effective target space action. That makes me think that in as far as spectral triples are the point particle limit of 2dCFT, the spectral action is the 2dCFT’s effective background action (obtained from the $\beta$-functional, for instance) in that limit.

Posted by: Urs Schreiber on August 1, 2008 11:42 AM | Permalink | Reply to this

### Economy

I don’t see why this description is any more economical than saying that the 4D gauge group is $SU(3)\times SU(2)\times U(1)$, and that the fermions form a certain chiral, but anomaly-free representation of the gauge group.

In fact, even on the level of their story, surely the automorphism group of the algebra they cook up includes $U(1)_{B-L}$. Why isn’t that gauged2?

Moreover, once one gets to the point of writing down an action, the conventional prescription is:

Write down the most general renormalizable, gauge-invariant action with the specified field content.

In their case, the spectral action is neither the most general action compatible with the symmetries, nor is there (so far) a prescription for specifying it, as it involves a choice of an arbitrary function $f(D/\Lambda)$.

Nor, of course, is it clear how one is supposed to quantize the theory in their approach.

Even though it is nonlocal, is the theory, in some sense, renormalizable? That is, can divergences be absorbed in a redefinition of the function, $f$?

Presumably, the answer is “no.” When one does a heat-kernel expansion of the spectral action, and treats the result as a standard effective field theory (at scales well below $\Lambda$), it’s clear that no miracles occur, and the original form of the action is not respected. (Indeed, that’s the whole point of their RG analysis, and is clearly required if one wants to make contact with observed low-energy physics.)

2 $U(1)_{B-L}$ is not gauged in the Standard Model, because it would be anomalous, but:

1. They have a right-handed neutrino.
2. They’re only writing down a classical Lagrangian, so the anomaly should not be an impediment to them.
Posted by: Jacques Distler on August 1, 2008 3:37 PM | Permalink | PGP Sig | Reply to this

### Re: Pre- and Postdictions of the NCG Standard Model

Oh, and one more question about the “prediction” of

(1) $g_2 = g_3 = 3\lambda$

This, supposedly, comes from thinking of the Higgs as another “gauge boson” on the internal space. I don’t follow the logic, but the ensuing relation seems a little surprising, nonetheless.

$\lambda$ is the coefficient of a quartic self-coupling. For the gauge bosons, the coefficient of the quartic self-coupling is $g^2$, not $g$. So I might have expected a relation like (1), but involving the squares of the SM gauge couplings, rather than the gauge couplings themselves.

Posted by: Jacques Distler on July 30, 2008 3:39 PM | Permalink | PGP Sig | Reply to this

### Re: Pre- and Postdictions of the NCG Standard Model

And if the result came in that the mass was close to 171 GeV, how much credence does that give to Connes’ model being onto something? Are there rival theories predicting in that range?

Posted by: David Corfield on July 30, 2008 10:30 PM | Permalink | Reply to this

### Re: Pre- and Postdictions of the NCG Standard Model

And if the result came in that the mass was close to 171 GeV, how much credence does that give to Connes’ model being onto something?

It would mean that the model would still be alive rather than ruled out. More cannot seriously be said, I’d think.

Are there rival theories predicting in that range?

Plenty. Thomas Schücker spent the last ten minutes or so of his talk having fun with slides that summarized a literature search he had made, collecting all existing predictions of the Higgs mass. The bulk of them comes from various susy theories, since there you have many knobs that can be turned to modify the value. But there are plenty of other predictions.

He entertained the audience by telling us in which energy intervals no predictions have so far been made, in case anyone felt like making a new one.

Posted by: Urs Schreiber on July 31, 2008 12:28 PM | Permalink | Reply to this

### Re: Pre- and Postdictions of the NCG Standard Model

He entertained the audience by telling us in which energy intervals no predictions have so far been made, in case anyone felt like making a new one.

This is really tempting. Can you divulge some of these ranges, and are there any gambling sites available?

Posted by: Bruce Bartlett on August 2, 2008 1:29 PM | Permalink | Reply to this

### Re: Pre- and Postdictions of the NCG Standard Model

He entertained the audience by telling us in which energy intervals no predictions have so far been made, in case anyone felt like making a new one.

This is really tempting.

Ah, discovering the phenomenologist inside? :-)

Can you divulge some of these ranges

Sorry, I forget the precise numbers.

and are there any gambling sites available?

Certainly. The main one is called hep-ph. The bet is to be submitted in LaTeX format embedded in a more or less inspired story about how you came up with it. Among the right bets, the prize of eternal fame will be distributed according to how good the surrounding story you told is.

Posted by: Urs Schreiber on August 2, 2008 1:47 PM | Permalink | Reply to this

### Re: Pre- and Postdictions of the NCG Standard Model

I wrote:

The presence of the single Higgs is derived from some representation theoretic arguments for the spectral triple. I don’t know how that works.

I should make that more precise:

as far as I understand, there is a single Higgs particle because the Higgs arises as the internal gauge boson with respect to the compactified factor $Y$ (that’s at least according to my summary of the model here).

But the statement in the talk was stronger: the NCG model requires a single Higgs, which is moreover fixed to live in the correct representation $H_S = (2,-\frac{1}{2},1)$ (equation 6, page 2)

Posted by: Urs Schreiber on July 30, 2008 2:54 PM | Permalink | Reply to this

### Re: Pre- and Postdictions of the NCG Standard Model

Amusingly, the predicted Higgs mass is right in the window where the Tevatron would plausibly rule it out in the near future.

Posted by: Matt Reece on July 30, 2008 3:41 PM | Permalink | Reply to this

### Re: Pre- and Postdictions of the NCG Standard Model

Dorigo described the situation as of March 10 here. My guess is that 4.5 months later, a 160 GeV Higgs is already excluded (at 95% CL), and a 170 GeV Higgs has serious trouble.

### Re: Pre- and Postdictions of the NCG Standard Model

Yes, the winter ‘08 combination was already getting close to excluding 160 GeV. (They’ve been lucky to get an observed limit better than the expected limit.)

I suggest keeping an eye on the ICHEP talks. There were talks on Higgs searches today but apparently they’re saving the CDF/D0 combined plot to be announced at a plenary talk on Sunday by Matthew Herndon. I wouldn’t be surprised if that talk announces the first exclusion of some region of SM Higgs near 160 GeV. (If not, then surely by the winter ‘09 conferences.)

Posted by: Matt Reece on August 1, 2008 3:48 AM | Permalink | Reply to this

### Re: Pre- and Postdictions of the NCG Standard Model

170 GeV Higgs is excluded at 95% confidence, according to Matthew Herndon’s ICHEP talk (PPT file, unfortunately). At 165 GeV, the limit is 1.2 x SM; at 175 GeV, it’s 1.3 x SM. Note that these are direct searches, unlike the electroweak fits that have been discussed around the blogosphere, so they translate much more directly to scenarios where there is new physics beyond the Standard Model. In any case, since the “NCG Standard Model” is just the SM with some high-scale RG boundary condition, it’s doubly in trouble, both from the electroweak fits and from this direct search.

Posted by: Matt Reece on August 4, 2008 5:52 AM | Permalink | Reply to this

### Re: Pre- and Postdictions of the NCG Standard Model

Good point, but saying that the NCG SM is in trouble sort of misses the point that it’s no more in trouble than the SM, strings, and just about everything else. In fact, since NCG offers beautiful new non-stringy techniques, it is probably in better shape than most.

Posted by: Kea on August 4, 2008 6:05 AM | Permalink | Reply to this

### Re: Pre- and Postdictions of the NCG Standard Model

Despite all the fun it is to test specific predictions of a model, one should not forget that most every model ever dreamed up has a bunch of knobs that can be turned to modify the predictions made. No theory in the world makes a prediction of one parameter without first fitting lots of other parameters to known data.

Posted by: Urs Schreiber on August 4, 2008 11:58 AM | Permalink | Reply to this

### Re: Pre- and Postdictions of the NCG Standard Model

No theory in the world makes a prediction of one parameter without first fitting lots of other parameters to known data.

You really should get out more.

Posted by: Kea on August 4, 2008 9:49 PM | Permalink | Reply to this

### Re: Pre- and Postdictions of the NCG Standard Model

Dear Urs,

I do not really understand your comment. In the ncg description of the standard model, this is precisely the striking fact: many parameters (as far as I know: hypercharges, Weinberg angle) fit well with the model. Others are free and are fitted to experimental data (masses of the fermions, CKM matrix, neutrino mixing angles).
Could you explain what you mean?
Thanks

Posted by: pierre on August 5, 2008 7:01 PM | Permalink | Reply to this

### Re: Pre- and Postdictions of the NCG Standard Model

Could you explain what you mean ?

Sure: To make any prediction from this NCG model or pretty much every other model, you first need to fit parts of its structure to known data.

For instance the requirement of KO-dimension 4+[6] arises by fitting to data: only in that dimension do we have the ncg analog of Majorana-Weyl fermions, which guarantees the observed chiral fermionic particle content.

Or the particle content: Connes and Chamseddine showed that given some assumptions the algebraic structure quickly begins to constrain the remaining assumptions, but in general the choice of internal algebra and the choice of representation of it, hence the choice of particle content of the model, are parameters fitted to the data before one starts predicting anything.

And this particle content is apparently, as we see now, a knob that one might have to dial a bit more, should the current hints against the “big desert” hypothesis be further reinforced by coming measurements.

Then there is this “cutoff” function $f$ entering the spectral action: in principle this involves infinitely many parameters to choose.

Alain Connes wrote in his comment # that this function is thought to have to be chosen constant in a neighbourhood of 0. On the other hand, Ali Chamseddine told me a few days ago (before the 170 GeV exclusion result was publicly known, to us at least) that they are trying to see whether the higher moments of this function can be chosen so as to better fit the model, given that the otherwise “predicted” gauge coupling unification is not quite observed.

It could well be that the current NCG model is wrong, but that a supersymmetric extension of it is right. That will introduce many more parameters that will have to be fitted.

Still, the model can make predictions. The point of good models is not that they predict everything in the world uniquely, but that they put constraints on the possible combinations of parameters that are allowed. Such constraints come in the NCG model mainly from the spectral action, which forces a bunch of otherwise independently variable coefficients to be equal.

Posted by: Urs Schreiber on August 5, 2008 7:16 PM | Permalink | Reply to this

### Re: Pre- and Postdictions of the NCG Standard Model

Such constraints come in the NCG model mainly from the spectral action, which forces a bunch of otherwise independently variable coefficients to be equal.

This is a virtue, but the actual predictions that emerge are deeply problematic. I emphasized (but perhaps not sufficiently) in my blog post that the most problematic relation that emerges is the one that governs

• the Planck scale
• the GUT scale
• the mass of the right-handed neutrino
• and the Electroweak scale.

These four parameters are governed by $\Lambda$, $f_2$, and $M$1. Even if you allow for fine-tuning it is impossible to adjust the latter in such a way as to give sensible values for all of the former.

As I explained, I don’t think one should take the “prediction” of the Higgs mass terribly seriously, so I don’t think it’s a big deal that that prediction has been falsified.

This issue seems to me to be the much more serious one, and changing the matter content a bit (say, by supersymmetrizing the construction) doesn’t seem likely to fix it.

1 They also depend on various traces of powers of the Yukawa couplings. In all analyses, including my discussion, of the NCG SM, these are assumed to be O(1). If, instead, they are large, then the theory is not weakly coupled at the GUT scale, which introduces a host of other problems.

Posted by: Jacques Distler on August 5, 2008 7:59 PM | Permalink | PGP Sig | Reply to this

### Re: Pre- and Postdictions of the NCG Standard Model

A report by Tommaso Dorigo.

Posted by: David Corfield on July 31, 2008 5:05 PM | Permalink | Reply to this
Read the post NCG-Standardmodell und Higgs-Teilchen
Weblog: Mathlog
Excerpt: Lately one hears and reads a lot about the LHC: whether it will bring about the end of the world, what purpose it serves, what is measured there, and whether it will lead to a confirmation/refutation of string theory....
Tracked: July 31, 2008 6:39 PM

### dimension 10.

I still believe that the most interesting postdiction of the new model is that the dimension of space time is, mod 8, equal to 10.

More interestingly, I wonder if it could be possible to move the coupling of the Higgs field to reach the extreme limits of the completely broken ($M_W, M_Z = \infty$) and the completely unbroken ($M_W, M_Z = 0$) standard model. Does the dimension jump to 11 in the latter case? And if so, what about chirality?

And, in the opposite limit, does the dimension jump to 9? Remember that 9 is the minimum dimension for a SU(3)xU(1) group to live in, while 11 is the minimum for SU(3)xSU(2)xU(1).

Posted by: Alejandro Rivero on August 1, 2008 1:08 AM | Permalink | Reply to this

### Re: Pre- and Postdictions of the NCG Standard Model

David didn’t actually state it here, so I will: the latest report says that m=171 is RULED OUT.

Posted by: Kea on August 2, 2008 1:38 AM | Permalink | Reply to this

### Re: Pre- and Postdictions of the NCG Standard Model

Hey, good point! This is much clearer from Dorigo’s new post, entitled New bounds for the Higgs: 115-135 GeV!

That’s fairly far from Schucker’s noncommutative geometry prediction, $171.6 \pm 5$ GeV.

Does this mean we don’t need to learn noncommutative geometry?

(Of course we’ve already learned it; indeed I sleep with Connes’ book under my pillow every night.)

Posted by: John Baez on August 2, 2008 7:03 AM | Permalink | Reply to this

### Re: Pre- and Postdictions of the NCG Standard Model

I believe the correct statement of what his post tells us is that if the only corrections to the $\rho$-parameter are SM radiative corrections, then a Higgs heavier than 135 GeV is excluded at the 1$\sigma$ level (and, I’d guess, a 171 GeV Higgs is excluded at the 3$\sigma$ level or better).

Now, of course, if there are other beyond-the-SM corrections to the $\rho$-parameter, then the Higgs could be heavier. But then you’re not in the model of Connes et al.

The bottom line is: the Higgs is very light and/or there is significant beyond-the-SM physics lurking just around the corner.

Far more exciting than spectral triples, if you ask me.

Posted by: Jacques Distler on August 2, 2008 8:54 AM | Permalink | PGP Sig | Reply to this

### Re: Pre- and Postdictions of the NCG Standard Model

Could you explain this $\rho$ parameter? And more generally, the concept of ‘Peskin–Takeuchi parameter’? Sounds like part of some ‘parametrized post-Standard-Model physics’ scheme.

Posted by: John Baez on August 2, 2008 10:20 AM | Permalink | Reply to this

### Re: Pre- and Postdictions of the NCG Standard Model

At tree-level, in the SM, the W and Z masses are related by

(1) $M_W^2 = M_Z^2 \cos^2(\theta_w)$

where $\tan(\theta_w) = \frac{g_1}{g_2}$ is the Weinberg angle.

The $\rho$-parameter is defined as the ratio of the LHS and the RHS of (1). It’s 1 at tree-level, but, of course, even in the SM, it receives radiative corrections.

Among the corrections are Higgs loops, which correct it to

$\rho = 1 - \frac{11 g_2^2}{96\pi^2} \tan^2(\theta_w) \log\left(\frac{m_h}{M_W}\right)\,.$

Top/bottom loops produce a similar correction.
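For orientation, here is a back-of-envelope numerical sketch (my own, not from the thread; the coupling values are rough textbook numbers, assumed for illustration) of the tree-level relation and of the size of the Higgs-loop shift quoted above:

```python
import math

# Rough SM input values (assumed for illustration only):
g2 = 0.65                  # SU(2) gauge coupling
sin2_tw = 0.231            # sin^2 of the Weinberg angle
M_Z, M_W = 91.19, 80.40    # boson masses in GeV

tan2_tw = sin2_tw / (1.0 - sin2_tw)

# Tree-level relation M_W^2 = M_Z^2 cos^2(theta_w), i.e. rho = 1;
# with measured masses and an effective mixing angle it comes out
# close to, but not exactly, 1.
rho_tree = M_W**2 / (M_Z**2 * (1.0 - sin2_tw))

def delta_rho_higgs(m_h):
    # the Higgs-loop shift quoted above; a heavier Higgs gives a larger shift
    return -11.0 * g2**2 / (96.0 * math.pi**2) * tan2_tw * math.log(m_h / M_W)

print(round(rho_tree, 3))       # ~ 1.01 with these rough inputs
print(delta_rho_higgs(115.0))   # small negative shift
print(delta_rho_higgs(170.0))   # roughly twice as large
```

The point of the sketch is only the scaling: the shift grows logarithmically with $m_h$, which is why precision fits constrain the Higgs mass at all.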

I’ll recommend the perfectly fine Wikipedia article for an explanation of the Peskin-Takeuchi parameters.

More fundamentally, I think it’s best to think about them in an effective Lagrangian approach to the SM. Instead of a Higgs, one has a gauged nonlinear $\sigma$-model.

When writing an effective Lagrangian, one should not stop at the 2-derivative terms. One should also write down all possible 4-derivative terms (with arbitrary coefficients). There are 11 or so such terms, and you can find them described in section 9.2 of Sally Dawson’s review.

The Peskin-Takeuchi parameters are particular linear combinations of these 11 coupling constants.

Posted by: Jacques Distler on August 2, 2008 4:36 PM | Permalink | PGP Sig | Reply to this
Weblog: Musings
Excerpt: On the "Noncommutative" Standard Model.
Tracked: August 2, 2008 9:14 AM

### Re: Pre- and Postdictions of the NCG Standard Model

[Alain Connes sent the following reply to the above discussion by email. With his kind explicit permission I forward it here. -Urs ]

Dear Urs,

The only relevant talk on that subject is the talk by Chamseddine #. I had the misfortune to look at the blog discussion and am too busy and sensitive to indulge in that, but I just want to give you the answers to several questions:

1) The choice of the function $f$ is that of a “cutoff” function (flat, constant $=1$ near 0, and then gently decreasing to 0), and thus the coefficients of the higher terms ($a_6$, $a_8$, etc.) all vanish, since they are given by the Taylor expansion of $f$ at $x=0$.

2) The algebra $\mathbb{C} + \mathbb{H} + M_3(\mathbb{C})$ is no longer an “input” but we have shown with Ali in the paper Why the standard model how to derive both the algebra and the representation from basic classification results.

3) The idea about “predictions” is simple: the spectral action is an effective model at a specific scale (of the order of the unification scale), and one runs it down using the Wilsonian approach. Writing precise figures for the Higgs mass as Schücker does is ridiculous; what one has, assuming the “big desert”, is an order of magnitude for the Higgs mass, and it is at the upper side of the allowed interval.

All the comments of Distler are to the point and in particular the relation between the Higgs quartic coupling and the gauge couplings is the equation (5.6) page 53 of the paper with Chamseddine and Marcolli which indeed involves the square of the gauge coupling.

4) There is no “a priori” reason for incompatibility between ncg and susy, all the more since in both cases one makes small corrections to the algebra of functions on space-time. However, one has to wait until there is some real experimental evidence for susy before embarking on what promises to be quite complicated stuff.

Recent precision data fits seem to prefer the lower part of the allowed interval for the Higgs mass; in case this were experimentally confirmed, it would be a clear motivation to go ahead and try to merge susy with the ncg picture.

Hope this helps a bit to clarify the mess.

Posted by: Alain Connes on August 2, 2008 2:02 PM | Permalink | Reply to this

### Re: Pre- and Postdictions of the NCG Standard Model

Writing precise figures for the Higgs mass […] is ridiculous

Maybe for completeness one could add here what kind of precision is meant. I see that on page 55 of CCM it gives

a Higgs mass of order 170 GeV

assuming some numerical input justified by the “big desert” assumption. That is possibly a risky assumption, in light of the recent data, which apparently does not favor a mass even roughly of order 170 GeV.

In his talk here at HIM, Ali Chamseddine said several times (in reply to questions, mostly) that supersymmetric extensions of the NCG model are thinkable, but that he is hesitant to invest work into them as long as experimental indication is missing. Maybe an indication of a light Higgs, as we have now (a few days after his talk), changes the situation?

Posted by: Urs Schreiber on August 4, 2008 3:24 PM | Permalink | Reply to this

### Re: Pre- and Postdictions of the NCG Standard Model

Urs wrote:

Y is taken to be a noncommutative space which is of dimension 0 as seen by heat diffusing on it

meaning ?? heat doesn’t diffuse??

jim

Posted by: jim stasheff on August 5, 2008 2:18 AM | Permalink | Reply to this

### Re: Pre- and Postdictions of the NCG Standard Model

Urs wrote:

$Y$ is taken to be a noncommutative space which is of dimension 0 as seen by heat diffusing on it

meaning ??

The starting point of spectral geometry (often called noncommutative geometry) is that one observes that all the metric geometry of a Riemannian spin manifold, in particular its dimension, can be reconstructed from the algebraic properties of the Dirac operator $D$ on that manifold. (Meaning: one can hear the shape of a drum – if one uses spinors as sensors.)

The idea is then that every operator which satisfies a handful of properties characteristic of Dirac operators on Riemannian spin manifolds can be interpreted as encoding, in this algebraic way, some kind of generalized geometry.

In this context the dimension of a space is encoded in the spectral growth of the Dirac operator: the growth of the number of eigenvalues of the square of the Dirac operator below a given value:

$N_{D^2} : \lambda \in \mathbb{R} \;\mapsto\; \#\{\text{eigenvalues of } D^2 \text{ smaller than } \lambda\}\,.$

For $D$ an ordinary Dirac operator on a Riemannian spin manifold this number asymptotically grows as a power of $\lambda$:

$N_{D^2}(\lambda) \to const \;\lambda^{n/2}$ as $\lambda \to \infty$, where $n$ is the dimension of the underlying manifold.

So given any generalized Dirac operator, we say that it defines a generalized Riemannian geometry whose metric dimension is twice the exponent of the asymptotic spectral growth of the operator.
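As a numerical illustration (my own, not from the post): for the flat 2-torus the eigenvalues of $D^2$ are, up to conventions, the sums of two squares of integers, and counting them recovers the exponent $n/2 = 1$, hence metric dimension $n = 2$:

```python
import math

# Eigenvalues of the Laplacian (= D^2 up to conventions) on the flat
# 2-torus are m^2 + k^2 for integers m, k.  By Weyl's law the counting
# function N(lam) grows like const * lam^(n/2), so the log-log slope
# of N recovers n/2.
M = 200
eigs = sorted(m*m + k*k for m in range(-M, M + 1) for k in range(-M, M + 1))

def count_below(lam):
    # N_{D^2}(lam): number of eigenvalues smaller than lam
    return sum(1 for e in eigs if e < lam)

# sample well below the enumeration cutoff M^2 to avoid truncation bias
lam1, lam2 = 1000.0, 10000.0
slope = (math.log(count_below(lam2)) - math.log(count_below(lam1))) \
        / (math.log(lam2) - math.log(lam1))
print(round(slope, 2))  # ~ 1.0, so the metric dimension is 2*slope = 2
```

The same counting applied to the internal space $Y$ of the NCG model, whose Dirac operator has only finitely many eigenvalues, gives asymptotic growth zero: metric dimension 0.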

For more details see for instance

Joseph Várilly, Dirac operators and spectral geometry, in particular around pages 39-40.

We have to distinguish here “metric dimension” from other kinds of dimension. For ordinary Riemannian manifolds the dimension of the manifold affects not just the spectral growth of its Dirac operator, but also other things, such as the representation theory of the spinors on that manifold. It turns out that this representation theory of spinors is entirely encoded in three choices of signs which control the way the Dirac operator and other operators in the game commute with an operator called a “real structure”. These three choices of signs yield 8 different cases, numbered 0 to 7 and called the KO-dimension encoded by the Dirac operator. For ordinary Riemannian manifolds the KO-dimension is the ordinary dimension modulo 8. But for more general spaces encoded by Dirac operators, KO-dimension and metric dimension need no longer be related this way. In particular, the metric dimension may vanish while the KO-dimension remains different from 0.

A KO-dimension of [6] is important in the discussion we had, because this is the KO-dimension which allows for Majorana-Weyl spinors.

Finally: there is a well-known physical interpretation of the eigenvalues of the square of a Dirac operator, i.e. of the Laplace operator $\Delta$: a heat distribution $T : \Sigma \to \mathbb{R}$ on a Riemannian manifold $\Sigma$ changes in time $t$, due to diffusion, according to the heat equation

$\frac{\partial}{\partial t} T = \Delta T \,.$

Therefore the eigenvalues of $\Delta$ control the way heat diffuses on $\Sigma$.

Of course the heat equation is just the Schrödinger equation in imaginary time. So the eigenvalues of $\Delta$ can also be regarded as controlling the energy and time propagation of quantum particles propagating on $\Sigma$.
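One can watch this happen numerically (a finite-difference sketch of my own; the grid size and time step are arbitrary choices): on the circle, the eigenmode $\cos(n x)$ of $\Delta$ has eigenvalue $-n^2$, and under the heat equation its amplitude decays at exactly that rate, so finer spatial detail dies off faster:

```python
import math

# Explicit finite-difference evolution of the heat equation dT/dt = Lap T
# on the circle, started in a single Laplacian eigenmode cos(n*x).
# The measured decay rate should match the eigenvalue n^2.
N = 400                       # grid points on the circle
dx = 2.0 * math.pi / N
dt = 0.2 * dx * dx            # stable explicit time step

def decay_rate(n, steps=2000):
    T = [math.cos(n * i * dx) for i in range(N)]
    for _ in range(steps):
        T = [T[i] + dt * (T[(i + 1) % N] - 2.0 * T[i] + T[(i - 1) % N]) / dx**2
             for i in range(N)]
    amp = max(abs(v) for v in T)          # surviving amplitude of the mode
    return -math.log(amp) / (steps * dt)  # fitted exponential decay rate

print(round(decay_rate(1), 2))  # ~ 1.0  (eigenvalue 1^2)
print(round(decay_rate(2), 2))  # ~ 4.0  (eigenvalue 2^2)
```

This is the sense in which the spectrum of $\Delta$, and hence of $D^2$, “is” the geometry as seen by diffusing heat.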

And that’s precisely the point of spectral triples. A spectral triple is an axiomatization of the quantum mechanics of spinorial particles (aka superparticles) coupled to gauge forces (= connections) and gravity (= Riemannian curvature). The statement of spectral geometry is that from the dynamics of these particles alone one can read off the properties of the geometry in which they propagate.

Posted by: Urs Schreiber on August 5, 2008 10:27 AM | Permalink | Reply to this

### Re: Pre- and Postdictions of the NCG Standard Model

Another piece of the puzzle is that the heat equation is also the “time” component of the exact 1-form

$dT = dx^\mu (\partial_\mu T) + dt (\partial_t T - g^{\mu\nu} \partial_\mu \partial_\nu T)$

that arises naturally when one introduces noncommutativity between space and time components via

$[dx^\mu,t] = [dt,t] = 0$

and

$[dx^\mu,x^\nu] = g^{\mu\nu} dt$

as outlined in Equation 55 here.

I called this a “right martingale” as opposed to a “left martingale”. The concepts of right and left martingales were introduced because of the relation

\begin{aligned} dT &= dx^\mu (\partial_\mu T) + dt (\partial_t T - g^{\mu\nu} \partial_\mu \partial_\nu T) \\ &= (\partial_\mu T) dx^\mu + (\partial_t T - g^{\mu\nu} \partial_\mu \partial_\nu T) dt, \end{aligned}

where, due to the noncommutativity, it matters on which side of the basis elements you write the components.

Note: I am aware that there is a term involving the connection coefficients missing from the Laplace-Beltrami operator, but I haven’t yet figured out if there is a way to get it in there naturally via the defining commutation relations.

Since this naturally describes stochastic processes on manifolds, I fully expect there to be some beautiful little trick to arrive at something like

$dT = dx^\mu (\partial_\mu T) + dt (\partial_t T - \Delta T).$

If anyone gets bored, it might be a fun little exercise for the back of a napkin or something to work this out.

Posted by: Eric on August 5, 2008 5:04 PM | Permalink | Reply to this

### Re: Pre- and Postdictions of the NCG Standard Model

Oops! Some bad cut and paste there. Swapping from right to left components changes the sign in the heat equation. It should have been

\begin{aligned} dT &= dx^\mu (\partial_\mu T) + dt (\partial_t T - g^{\mu\nu} \partial_\mu\partial_\nu T) \\ &= (\partial_\mu T) dx^\mu + (\partial_t T + g^{\mu\nu} \partial_\mu\partial_\nu T) dt. \end{aligned}

It is kind of neat. Switching coefficients from before the time component to after the time component has the effect of reversing time in the heat equation.

Posted by: Eric on August 5, 2008 5:10 PM | Permalink | Reply to this

### Re: Pre- and Postdictions of the NCG Standard Model

More than six years after writing that paper, I finally got it.

I “guessed” Equation 48

$\stackrel{\leftarrow}{\partial_t} = \partial_t + \frac{1}{2} g^{\mu\nu} \partial_\mu\partial_\nu$

simply because it was fairly obvious that it satisfied the required expression

$\stackrel{\leftarrow}{\partial_t}(fg) = (\stackrel{\leftarrow}{\partial_t} f) g + f (\stackrel{\leftarrow}{\partial_t} g) + g^{\mu\nu} (\partial_\mu f)(\partial_\nu g).$

Checking wikipedia, I see that the Laplace-Beltrami operator satisfies

$\Delta(fg) = (\Delta f)g + f(\Delta g) + 2\, g^{\mu\nu} (\partial_\mu f)(\partial_\nu g).$

Therefore, I could have and should have defined

$\stackrel{\leftarrow}{\partial_t} = \partial_t + \frac{1}{2} \Delta.$

Now it is obvious that the required properties are satisfied and we get the much nicer version (as I guessed we should have)

$dT = dx^\mu (\partial_\mu T) + dt (\partial_t T - \Delta T).$
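The modified Leibniz rule used above can be sanity-checked numerically in the flat 1d case ($g^{xx} = 1$, so the second-order part of $\Delta$ is just $\partial_x^2$; a sketch of mine using central finite differences, with arbitrary test functions):

```python
import math

# Check that L = d/dt + (1/2) d^2/dx^2 satisfies the modified Leibniz rule
#   L(f*g) = (Lf)*g + f*(Lg) + (df/dx)*(dg/dx)
# for smooth f, g (flat 1d case, g^{xx} = 1).
h = 1e-4  # finite-difference step

def d_t(F, x, t):
    return (F(x, t + h) - F(x, t - h)) / (2.0 * h)

def d_x(F, x, t):
    return (F(x + h, t) - F(x - h, t)) / (2.0 * h)

def d_xx(F, x, t):
    return (F(x + h, t) - 2.0 * F(x, t) + F(x - h, t)) / (h * h)

def L(F, x, t):
    return d_t(F, x, t) + 0.5 * d_xx(F, x, t)

# two arbitrary smooth test functions
f = lambda x, t: math.sin(x) * math.exp(t)
g = lambda x, t: x * x + x * math.cos(t)

x0, t0 = 0.7, 0.3
fg = lambda x, t: f(x, t) * g(x, t)
lhs = L(fg, x0, t0)
rhs = L(f, x0, t0) * g(x0, t0) + f(x0, t0) * L(g, x0, t0) \
      + d_x(f, x0, t0) * d_x(g, x0, t0)
print(abs(lhs - rhs))  # tiny: only finite-difference error remains
```

The residual is pure discretization error; the identity itself holds exactly, which is what makes the definition via the Laplace-Beltrami operator work.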

The connection (no pun intended) between stochastic processes on manifolds and noncommutative geometry is established (if it wasn’t already). Woohoo! :)

That was fun for napkin math.

Posted by: Eric on August 5, 2008 7:09 PM | Permalink | Reply to this

### Re: Pre- and Postdictions of the NCG Standard Model

Therefore, I could have and should have defined […]

In local coordinates in which the $g^{\mu\nu}$ are constant, the Laplace operator is of course $g^{\mu\nu}\partial_\mu \partial_\nu$. The equation you cite tells us that even in other cases this is still the symbol of the Laplace operator, i.e. its second-order part as a differential operator.

Posted by: Urs Schreiber on August 5, 2008 7:28 PM | Permalink | Reply to this