### Pre- and Postdictions of the NCG Standard Model

#### Posted by Urs Schreiber

At HIM this week there is a Noncommutative Geometry Conference.

Just heard Thomas Schücker talk about *The noncommutative standard model and its post- and predictions*, which, as it turns out, closely followed his entry for the Encyclopedia of Mathematical Physics: *Noncommutative geometry and the standard model*.

**The setup**

Recall from our discussion here that the “noncommutative standard model”, due to Alain Connes and collaborators, is a Kaluza-Klein model – a model of particle physics where all observed forces on a pseudo-Riemannian spacetime $X$ are derived from pure gravity on a spacetime $X \times Y$ for $Y$ a compact Riemannian space with “essentially vanishing volume” – where now the crucial ingredient is that noncommutative geometry is used to give the idea of “essentially vanishing volume” a precise meaning:

$Y$ is taken to be a noncommutative space which is of dimension 0 as seen by heat diffusing on it. Only its dimension as seen by gauge theory, its KO-dimension, is higher – namely 6 mod 8. So $Y$ is like a manifold shrunk to a point that still remembers some of its inner structure. In particular its Riemannian structure.

Using spectral triples, the Riemannian geometry of the 4+[6]-dimensional spacetime $X \times Y$ is entirely encoded in how it is probed by the dynamics of a spinning quantum particle roaming around in it. Algebraically this is given, essentially, by a Hilbert space $H$ of states of that particle, by an algebra $A$ of position observables of that particle, acting on $H$, and, crucially, by the Dirac operator $D$ of that particle, whose eigenvalues are essentially the possible energies that the particle can acquire while zipping through $X \times Y$. The Riemannian structure of $X \times Y$ is encoded in these energy eigenvalues.
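To get a feeling for how a spectrum can encode geometry, here is a toy computation in the simplest commutative case – the circle – showing how the "dimension as seen by heat diffusing" is read off from the small-time behaviour of the heat trace $\mathrm{Tr}\, e^{-t D^2}$. This is an illustrative sketch only, not part of the NCG standard model; the circle eigenvalues and the spectral-dimension formula are standard facts.

```python
import math

# Dirac operator on the circle of radius R (antiperiodic spin structure):
# eigenvalues (n + 1/2)/R for integer n.  For small t the heat trace
# behaves like Tr exp(-t D^2) ~ R * sqrt(pi/t), so the power of t
# recovers the dimension d = 1.  (R = 1 below; purely illustrative.)

def heat_trace(t, R=1.0, N=500):
    """Tr exp(-t D^2), truncated to |n| < N."""
    return sum(math.exp(-t * ((n + 0.5) / R) ** 2) for n in range(-N, N))

# Spectral dimension: d = -2 * d(log Theta)/d(log t) as t -> 0.
t1, t2 = 0.01, 0.02
d = -2 * (math.log(heat_trace(t2)) - math.log(heat_trace(t1))) \
       / (math.log(t2) - math.log(t1))
print(round(d, 2))  # close to 1, the dimension of the circle
```

For the internal space $Y$ of the NCG model the analogous heat trace sees dimension 0, since the spectrum lives in a finite-dimensional matrix algebra.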

Given such a quantum particle, one would want to see what its *second quantization* is, which would be a quantum field theory describing many such particles propagating on $X \times Y$ and interacting with each other. Such a quantum field theory would traditionally be given by a functional – the *action functional* – depending on the Riemannian metric on $X\times Y$ as well as on the “condensate” fields of these particles. All these quantities are supposed to be encoded in a pair consisting of the spectral triple and a vector $\psi$ in the Hilbert space.

Connes gave an argument that there is an essentially unique functional
$S_{\mathrm{spec}} : PointedSpectralTriples \to \mathbb{R}$
on such pairs which satisfies the obvious requirement that it be additive under disjoint unions of Riemannian spaces. This he called the *spectral action* functional.

Now evaluate this functional on spectral triples describing Kaluza-Klein models $X \times Y$ as above. One finds that, as in ordinary commutative Kaluza-Klein theory, the Riemannian structure on such products can be interpreted as a Riemannian structure on $X$ together with a connection on a principal bundle over $X$ – the gauge bundle. Restrict attention to the subset

$Con \subset PointedSpectralTriples$

of all spectral triples which describe $\mathbb{R}^4 \times Y$ with the standard flat metric on $\mathbb{R}^4$, such that the gauge group of the induced gauge bundle is that observed in the standard model, and such that the metric on $Y$ has certain fixed values, which one later identifies with Yukawa couplings. On this subset the spectral action

$S_{\mathrm{spec}} : Con \to \mathbb{R}$

restricts to a functional of the connection on that gauge bundle and of a section of a spinor bundle over $\mathbb{R}^4 \times Y$ (the element in the Hilbert space).

The standard model action functional is precisely a functional of such a kind. See table 2 on page 10. So then the task is to adjust the remaining details of the spectral triple (in particular the metric on the [6]-dimensional compact $Y$) such that $S_{\mathrm{spec}}|_{Con}$ coincides entirely with the standard model action (as far as that is fixed).

When that is achieved, one has found a noncommutative Kaluza-Klein realization of the standard model.

**How to get predictions**

There is a list of axioms about the precise interdependence of the three ingredients in a spectral triple. The statement is that there is a choice $Con$ such that $S_{\mathrm{spec}}|_{Con}$ does yield the standard model. There is a bit of wiggle room then, but not much, due to the various axioms on a spectral triple. Correspondingly, not all parameters of the standard model are entirely known at the moment. Most notably, the mass of the Higgs particle is yet to be measured, hopefully by LHC.

As a result, after identifying in the landscape of all spectral triples those regions which are compatible with the known parameters of the standard model under the above procedure – see figure 3 on p. 9 for a cartoon of these landscape regions – one can check what the remaining, unknown, parameters of the standard model derived from spectral triples in these regions would be. Doing so yields the desired *predictions* deriving from the noncommutative approach.

**The concrete predictions**

According to Thomas Schücker’s review, the main post- and predictions are the following (see his review article for more details):

**Higgs sector**: there is a single Higgs and its mass is
$m = 171.6 \pm 5\,\mathrm{GeV}
\,.$
The presence of the single Higgs is derived from some representation-theoretic arguments for the spectral triple. I don’t know how that works. The mass of the Higgs is obtained as follows:

the spectral model demands strong relations between the couplings: namely $g_2 = g_3 = 3\lambda$ for the $su(2)$ and $su(3)$ gauge couplings $g_2$ and $g_3$ and the Higgs self-coupling $\lambda$, respectively. Then use the ordinary renormalization flow to run the couplings to higher energy scales until these identifications are achieved. See the figure on p. 11.

This assumes the usual “big desert” hypothesis is true, that no new physics appears up to this point. Take the resulting energy scale $\Lambda$ to be the fundamental scale of the NCG model.

**Fundamental NCG scale:** This $\Lambda$ is predicted to be
$\Lambda = 10^{17}\,\mathrm{GeV}\,.$

(At this energy scale, the idea goes, one should expect the $X$-factor in $X \times Y$ to begin to look noncommutative, too.) The spectral action then expresses the Higgs mass somehow as a function of the gauge couplings (I am not sure I recall how). So this fixes the Higgs mass at the scale $\Lambda$. Then run the couplings back down to observed energy scales to obtain the above prediction.

(Notice a couple of crucial assumptions here: the “big desert” and that ordinary renormalization flow makes sense up to the scale $\Lambda$ where some more fundamental theory is expected to take over. Also the number of generations enters this computation, which is not predicted by the model but set to 3 by hand.)
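The order of magnitude of $\Lambda$ can be checked with a back-of-the-envelope one-loop computation of where $g_2$ and $g_3$ meet under pure standard-model (“big desert”) running. The $M_Z$ inputs and the one-loop coefficients $b_2 = -19/6$, $b_3 = -7$ below are standard textbook values, not taken from the talk:

```python
import math

# One loop: 1/alpha_i(mu) = 1/alpha_i(M_Z) - (b_i / 2 pi) ln(mu / M_Z)
M_Z = 91.19                    # GeV
alpha2 = (1 / 128) / 0.231     # alpha_em / sin^2(theta_W) at M_Z
alpha3 = 0.118                 # strong coupling at M_Z
b2, b3 = -19 / 6, -7.0         # SM one-loop beta coefficients

# Solve 1/alpha2 - (b2/2pi) t = 1/alpha3 - (b3/2pi) t for t = ln(mu/M_Z):
t = 2 * math.pi * (1 / alpha2 - 1 / alpha3) / (b2 - b3)
Lam = M_Z * math.exp(t)
print(f"{Lam:.1e} GeV")  # of order 10^17 GeV
```

This lands at roughly $10^{17}\,\mathrm{GeV}$, consistent with the quoted fundamental scale.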

**Top quark mass**. From a similar computation the top quark mass is apparently “postdicted” to be
$m_t \lt 186\,\mathrm{GeV}
\,.$
The observed value is apparently $m_t = 174.3 \pm 5.1\,\mathrm{GeV}$.

**$\rho_0$**: I forget the details of this. But there is the parameter $\rho_0$ (which is one over the $\cos^2$ of some angle, which the inclined reader will surely remind me of) and which is measured to be
$\rho_0 = 1.0002 \pm something.$
The NCG model predicts exactly
$\rho_0 = 1\,.$

## Re: Pre- and Postdictions of the NCG Standard Model

If that Higgs prediction were correct, would LHC be expected to see it?