## July 22, 2014

### The Ten-Fold Way (Part 2)

#### Posted by John Baez

How can we discuss all the kinds of matter described by the ten-fold way in a single setup?

It’s a bit tough, because 8 of them are fundamentally ‘real’ while the other 2 are fundamentally ‘complex’. Yet they should fit into a single framework, because there are 10 super division algebras over the real numbers, and each kind of matter is described using a super vector space — or really a super Hilbert space — with one of these super division algebras as its ‘ground field’.

Combining physical systems is done by tensoring their Hilbert spaces… and there does seem to be a way to do this even with super Hilbert spaces over different super division algebras. But what sort of mathematical structure can formalize this?

Here’s my current attempt to solve this problem. I’ll start with a warmup case, the threefold way. In fact I’ll spend most of my time on that! Then I’ll sketch how the ideas should extend to the tenfold way.

Fans of lax monoidal functors, Deligne’s tensor product of abelian categories, and the collage of a profunctor will be rewarded for their patience if they read the whole article. But the basic idea is supposed to be simple: it’s about a multiplication table.

### The $\mathbb{3}$-fold way

First of all, notice that the set

$\mathbb{3} = \{1,0,-1\}$

is a commutative monoid under ordinary multiplication:

$\begin{array}{rrrr} \mathbf{\times} & \mathbf{1} & \mathbf{0} & \mathbf{-1} \\ \mathbf{1} & 1 & 0 & -1 \\ \mathbf{0} & 0 & 0 & 0 \\ \mathbf{-1} & -1 & 0 & 1 \end{array}$

Next, note that there are three (associative) division algebras over the reals: $\mathbb{R}$, $\mathbb{C}$ and $\mathbb{H}$. We can equip a real vector space with the structure of a module over any of these algebras. We’ll then call it a real, complex or quaternionic vector space.

For the real case, this is entirely dull. For the complex case, this amounts to giving our real vector space $V$ a complex structure: a linear operator $i: V \to V$ with $i^2 = -1$. For the quaternionic case, it amounts to giving $V$ a quaternionic structure: a pair of linear operators $i, j: V \to V$ with

$i^2 = j^2 = -1, \qquad i j = -j i$

We can then define $k = i j$.

The terminology ‘quaternionic vector space’ is a bit quirky, since the quaternions aren’t a field, but indulge me. $\mathbb{H}^n$ is a quaternionic vector space in an obvious way. $n \times n$ quaternionic matrices act by multiplication on the right as ‘quaternionic linear transformations’ — that is, left module homomorphisms — of $\mathbb{H}^n$. Moreover, every finite-dimensional quaternionic vector space is isomorphic to $\mathbb{H}^n$. So it’s really not so bad! You just need to pay some attention to left versus right.
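The quaternionic relations above are easy to sanity-check numerically. Here is a minimal Python sketch: quaternions are encoded as 4-tuples $(a,b,c,d) = a + bi + cj + dk$, and `qmul` is the usual Hamilton product (the encoding and the function name are mine, not from the text):

```python
# Quaternions as 4-tuples (a, b, c, d) = a + b i + c j + d k.
def qmul(p, q):
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,   # real part
            a*f + b*e + c*h - d*g,   # i part
            a*g - b*h + c*e + d*f,   # j part
            a*h + b*g - c*f + d*e)   # k part

one, i, j = (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0)
k = qmul(i, j)   # define k = i j, as in the text

assert qmul(one, i) == i
assert qmul(i, i) == (-1, 0, 0, 0) and qmul(j, j) == (-1, 0, 0, 0)
assert qmul(i, j) == tuple(-x for x in qmul(j, i))   # i j = -j i
assert qmul(k, k) == (-1, 0, 0, 0)                   # so k also squares to -1
```

So a quaternionic structure on $\mathbb{R}^4$ is exactly a choice of two such anticommuting square roots of $-1$.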

Now: I claim that given two vector spaces of any of these kinds, we can tensor them over the real numbers and get a vector space of another kind. It goes like this:

$\begin{array}{cccc} \mathbf{\otimes} & \mathbf{real} & \mathbf{complex} & \mathbf{quaternionic} \\ \mathbf{real} & real & complex & quaternionic \\ \mathbf{complex} & complex & complex & complex \\ \mathbf{quaternionic} & quaternionic & complex & real \end{array}$

You’ll notice this has the same pattern as the multiplication table we saw before:

$\begin{array}{rrrr} \mathbf{\times} & \mathbf{1} & \mathbf{0} & \mathbf{-1} \\ \mathbf{1} & 1 & 0 & -1 \\ \mathbf{0} & 0 & 0 & 0 \\ \mathbf{-1} & -1 & 0 & 1 \end{array}$

So:

• $\mathbb{R}$ acts like 1.
• $\mathbb{C}$ acts like 0.
• $\mathbb{H}$ acts like -1.
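The correspondence can be checked mechanically. Here is a tiny Python sketch: the dictionary just transcribes the tensor table above, and we verify that it agrees with multiplication in $\{1,0,-1\}$ under the assignment $\mathbb{R} \mapsto 1$, $\mathbb{C} \mapsto 0$, $\mathbb{H} \mapsto -1$:

```python
# The threefold tensor table, transcribed from the text.
tensor = {
    ('R', 'R'): 'R', ('R', 'C'): 'C', ('R', 'H'): 'H',
    ('C', 'R'): 'C', ('C', 'C'): 'C', ('C', 'H'): 'C',
    ('H', 'R'): 'H', ('H', 'C'): 'C', ('H', 'H'): 'R',
}
code = {'R': 1, 'C': 0, 'H': -1}   # R acts like 1, C like 0, H like -1

# The table is exactly multiplication in the monoid {1, 0, -1}:
for (x, y), z in tensor.items():
    assert code[x] * code[y] == code[z]
```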

There are different ways to understand this, but a nice one is to notice that if we have algebras $A$ and $B$ over some field, and we tensor an $A$-module and a $B$-module (over that field), we get an $A \otimes B$-module. So, we should look at this ‘multiplication table’ of real division algebras:

$\begin{array}{lrrr} \mathbf{\otimes} & \mathbf{\mathbb{R}} & \mathbf{\mathbb{C}} & \mathbf{\mathbb{H}} \\ \mathbf{\mathbb{R}} & \mathbb{R} & \mathbb{C} & \mathbb{H} \\ \mathbf{\mathbb{C}} & \mathbb{C} & \mathbb{C} \oplus \mathbb{C} & \mathbb{C}[2] \\ \mathbf{\mathbb{H}} & \mathbb{H} & \mathbb{C}[2] & \mathbb{R}[4] \end{array}$

Here $\mathbb{C}[2]$ means the 2 × 2 complex matrices viewed as an algebra over $\mathbb{R}$, and $\mathbb{R}[4]$ means the 4 × 4 real matrices.

What’s going on here? Naively you might have hoped for a simpler table, which would have instantly explained my earlier claim:

$\begin{array}{lrrr} \mathbf{\otimes} & \mathbf{\mathbb{R}} & \mathbf{\mathbb{C}} & \mathbf{\mathbb{H}} \\ \mathbf{\mathbb{R}} & \mathbb{R} & \mathbb{C} &\mathbb{H} \\ \mathbf{\mathbb{C}} & \mathbb{C} & \mathbb{C} & \mathbb{C} \\ \mathbf{\mathbb{H}} & \mathbb{H} & \mathbb{C} & \mathbb{R} \end{array}$

This isn’t true, but it’s ‘close enough to true’. Why? Because we always have a god-given algebra homomorphism from the naive answer to the real answer! The interesting cases are these:

$\mathbb{C} \to \mathbb{C} \oplus \mathbb{C}, \qquad \mathbb{C} \to \mathbb{C}[2], \qquad \mathbb{R} \to \mathbb{R}[4]$

where the first is the diagonal map $a \mapsto (a,a)$, and the other two send numbers to the corresponding scalar multiples of the identity matrix.

So, for example, if $V$ and $W$ are $\mathbb{C}$-modules, then their tensor product (over the reals! — all tensor products here are over $\mathbb{R}$) is a module over $\mathbb{C} \otimes \mathbb{C} \cong \mathbb{C} \oplus \mathbb{C}$, and we can then pull that back along the diagonal homomorphism $\mathbb{C} \to \mathbb{C} \oplus \mathbb{C}$ to get a $\mathbb{C}$-module.

What’s really going on here?

There’s a monoidal category $Alg_{\mathbb{R}}$ of algebras over the real numbers, where the tensor product is the usual tensor product of algebras. The monoid $\mathbb{3}$ can be seen as a monoidal category with 3 objects and only identity morphisms. And I claim this:

Claim. There is an oplax monoidal functor $F : \mathbb{3} \to Alg_{\mathbb{R}}$ with $\begin{array}{ccl} F(1) &=& \mathbb{R} \\ F(0) &=& \mathbb{C} \\ F(-1) &=& \mathbb{H} \end{array}$

What does ‘oplax’ mean? Some readers of the $n$-Category Café eat oplax monoidal functors for breakfast and are chortling with joy at how I finally summarized everything I’d said so far in a single terse sentence! But others of you see ‘oplax’ and get a queasy feeling.

The key idea is that when we have two monoidal categories $C$ and $D$, a functor $F : C \to D$ is ‘oplax’ if it preserves the tensor product, not up to isomorphism, but up to a specified morphism. More precisely, given objects $x,y \in C$ we have a natural transformation

$F_{x,y} : F(x \otimes y) \to F(x) \otimes F(y)$

If you had a ‘lax’ functor this would point the other way, and they’re a bit more popular… so when it points the opposite way it’s called ‘oplax’.

(In the lax case, $F_{x,y}$ should probably be called the laxative, but we’re not doing that case, so I don’t get to make that joke.)

This morphism $F_{x,y}$ needs to obey some rules, but the most important one is that applying it twice gives two ways to get from $F(x \otimes y \otimes z)$ to $F(x) \otimes F(y) \otimes F(z)$, and these must agree.

Let’s see how this works in our example… at least in one case. I’ll take the trickiest case. Consider

$F_{0,0} : F(0 \cdot 0) \to F(0) \otimes F(0),$

that is:

$F_{0,0} : \mathbb{C} \to \mathbb{C} \otimes \mathbb{C}$

There are, in principle, two ways to use this to get a homomorphism

$F(0 \cdot 0 \cdot 0 ) \to F(0) \otimes F(0) \otimes F(0)$

or in other words, a homomorphism

$\mathbb{C} \to \mathbb{C} \otimes \mathbb{C} \otimes \mathbb{C}$

where remember, all tensor products are taken over the reals. One is

$\mathbb{C} \stackrel{F_{0,0}}{\longrightarrow} \mathbb{C} \otimes \mathbb{C} \stackrel{1 \otimes F_{0,0}}{\longrightarrow} \mathbb{C} \otimes (\mathbb{C} \otimes \mathbb{C})$

and the other is

$\mathbb{C} \stackrel{F_{0,0}}{\longrightarrow} \mathbb{C} \otimes \mathbb{C} \stackrel{F_{0,0} \otimes 1}{\longrightarrow} (\mathbb{C} \otimes \mathbb{C})\otimes \mathbb{C}$

I want to show they agree (after we rebracket the threefold tensor product using the associator).

Unfortunately, so far I have described $F_{0,0}$ in terms of an isomorphism

$\mathbb{C} \otimes \mathbb{C} \cong \mathbb{C} \oplus \mathbb{C}$

Using this isomorphism, $F_{0,0}$ becomes the diagonal map $a \mapsto (a,a)$. But now we need to really understand $F_{0,0}$ a bit better, so I’d better say what isomorphism I have in mind! I’ll use the one that goes like this:

$\begin{array}{ccl} \mathbb{C} \otimes \mathbb{C} &\to& \mathbb{C} \oplus \mathbb{C} \\ 1 \otimes 1 &\mapsto& (1,1) \\ i \otimes 1 &\mapsto &(i,i) \\ 1 \otimes i &\mapsto &(i,-i) \\ i \otimes i &\mapsto & (-1,1) \end{array}$

This may make you nervous, but it truly is an isomorphism of real algebras, and it sends $a \otimes 1$ to $(a,a)$. So, unraveling the web of confusion, we have
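One way to realize this isomorphism concretely is $a \otimes b \mapsto (a b, a \bar{b})$ on simple tensors. Note the multiplicativity forces the basis values: $(i \otimes 1)(1 \otimes i) = i \otimes i$ must land on $(i,i)(i,-i) = (-1,1)$. A quick numerical check in Python (a sketch; `phi` is my name for the map, not from the text):

```python
import random

# The isomorphism C (x)_R C -> C (+) C on simple tensors:
# a (x) b  |->  (a*b, a*conj(b)).
def phi(a, b):
    return (a * b, a * b.conjugate())

i = 1j
assert phi(1+0j, 1+0j) == (1, 1)
assert phi(i, 1+0j) == (i, i)
assert phi(1+0j, i) == (i, -i)
assert phi(i, i) == (-1, 1)

# Multiplicativity on simple tensors: (a (x) b)(c (x) d) = ac (x) bd.
for _ in range(100):
    a, b, c, d = (complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(4))
    lhs = phi(a * c, b * d)
    rhs = tuple(u * v for u, v in zip(phi(a, b), phi(c, d)))
    assert all(abs(x - y) < 1e-9 for x, y in zip(lhs, rhs))
```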

$\begin{array}{rccc} F_{0,0} : & \mathbb{C} &\to& \mathbb{C}\otimes \mathbb{C} \\ & a &\mapsto & a \otimes 1 \end{array}$

Why didn’t I just say that in the first place? Well, I suffered over this a bit, so you should too! You see, there’s an unavoidable arbitrary choice here: I could just as well have used $a \mapsto 1 \otimes a$. $F_{0,0}$ looked perfectly god-given when we thought of it as a homomorphism from $\mathbb{C}$ to $\mathbb{C} \oplus \mathbb{C}$, but that was deceptive, because there’s a choice of isomorphism $\mathbb{C} \otimes \mathbb{C} \to \mathbb{C} \oplus \mathbb{C}$ lurking in this description.

This makes me nervous, since category theory disdains arbitrary choices! But it seems to work. On the one hand we have

$\begin{array}{ccccc} \mathbb{C} &\stackrel{F_{0,0}}{\longrightarrow} &\mathbb{C} \otimes \mathbb{C} &\stackrel{1 \otimes F_{0,0}}{\longrightarrow}& \mathbb{C} \otimes \mathbb{C} \otimes \mathbb{C} \\ a &\mapsto & a \otimes 1 & \mapsto & a \otimes (1 \otimes 1) \end{array}$

On the other hand, we have

$\begin{array}{ccccc} \mathbb{C} &\stackrel{F_{0,0}}{\longrightarrow} & \mathbb{C} \otimes \mathbb{C} &\stackrel{F_{0,0} \otimes 1}{\longrightarrow} & \mathbb{C} \otimes \mathbb{C} \otimes \mathbb{C} \\ a &\mapsto & a \otimes 1 & \mapsto & (a \otimes 1) \otimes 1 \end{array}$

So they agree!

I need to carefully check all the other cases before I dare call my claim a theorem. Indeed, writing up this case has increased my nervousness… before, I’d thought it was obvious.

But let me march on, optimistically!

### Consequences

In quantum physics, what matters is not so much the algebras $\mathbb{R}$, $\mathbb{C}$ and $\mathbb{H}$ themselves as the categories of vector spaces — or indeed, Hilbert spaces — over these algebras. So, we should think about the map sending an algebra to its category of modules.

For any field $k$, there should be a contravariant pseudofunctor

$Rep: Alg_k \to Rex_k$

where $Rex_k$ is the 2-category of

• $k$-linear finitely cocomplete categories,

• $k$-linear functors preserving finite colimits,

• and natural transformations.

The idea is that $Rep$ sends any algebra $A$ over $k$ to its category of modules, and any homomorphism $f : A \to B$ to the pullback functor $f^* : Rep(B) \to Rep(A)$.

(Functors preserving finite colimits are also called right exact; this is the reason for the funny notation $Rex$. It has nothing to do with the dinosaur of that name.)

Moreover, $Rep$ gets along with tensor products. It’s definitely true that given real algebras $A$ and $B$, we have

$Rep(A \otimes B) \simeq Rep(A) \boxtimes Rep(B)$

where $\boxtimes$ is the tensor product of finitely cocomplete $k$-linear categories. But we should be able to go further and prove $Rep$ is monoidal. I don’t know if anyone has bothered yet.

(In case you’re wondering, this $\boxtimes$ thing reduces to Deligne’s tensor product of abelian categories given some ‘niceness assumptions’, but it’s a bit more general. Read the talk by Ignacio López Franco if you care… but I could have used Deligne’s setup if I restricted myself to finite-dimensional algebras, which is probably just fine for what I’m about to do.)

So, if my earlier claim is true, we can take the oplax monoidal functor

$F : \mathbb{3} \to Alg_{\mathbb{R}}$

and compose it with the contravariant monoidal pseudofunctor

$Rep : Alg_{\mathbb{R}} \to Rex_{\mathbb{R}}$

giving a guy which I’ll call

$Vect: \mathbb{3} \to Rex_{\mathbb{R}}$

I guess this guy is a contravariant oplax monoidal pseudofunctor! That doesn’t make it sound very lovable… but I love it. The idea is that:

• $Vect(1)$ is the category of real vector spaces

• $Vect(0)$ is the category of complex vector spaces

• $Vect(-1)$ is the category of quaternionic vector spaces

and the operation of multiplication in $\mathbb{3} = \{1,0,-1\}$ gets sent to the operation of tensoring any one of these three kinds of vector space with any other kind and getting another kind!

So, if this works, we’ll have combined linear algebra over the real numbers, complex numbers and quaternions into a unified thing, $Vect$. This thing deserves to be called a $\mathbb{3}$-graded category. This would be a nice way to understand Dyson’s threefold way.

### What’s really going on?

What’s really going on with this monoid $\mathbb{3}$? It’s a kind of combination or ‘collage’ of two groups:

• The Brauer group of $\mathbb{R}$, namely $\mathbb{Z}_2 \cong \{-1,1\}$. This consists of Morita equivalence classes of central simple algebras over $\mathbb{R}$. One class contains $\mathbb{R}$ and the other contains $\mathbb{H}$. The tensor product of algebras corresponds to multiplication in $\{-1,1\}$.

• The Brauer group of $\mathbb{C}$, namely the trivial group $\{0\}$. This consists of Morita equivalence classes of central simple algebras over $\mathbb{C}$. But $\mathbb{C}$ is algebraically closed, so there’s just one class, containing $\mathbb{C}$ itself!

See, the problem is that while $\mathbb{C}$ is a division algebra over $\mathbb{R}$, it’s not ‘central simple’ over $\mathbb{R}$: its center is not just $\mathbb{R}$, it’s bigger. This turns out to be why $\mathbb{C} \otimes \mathbb{C}$ is so funny compared to the rest of the entries in our division algebra multiplication table.

So, we’ve really got two Brauer groups in play. But we also have a homomorphism from the first to the second, given by ‘tensoring with $\mathbb{C}$’: complexifying any real central simple algebra, we get a complex one.

And whenever we have a group homomorphism $\alpha: G \to H$, we can make their disjoint union $G \sqcup H$ into a monoid, which I’ll call $G \sqcup_\alpha H$.

It works like this. Given $g,g' \in G$, we multiply them the usual way. Given $h, h' \in H$, we multiply them the usual way. But given $g \in G$ and $h \in H$, we define

$g h := \alpha(g) h$

and

$h g := h \alpha(g)$

The multiplication on $G \sqcup_\alpha H$ is associative! For example:

$(g g')h = \alpha(g g') h = \alpha(g) \alpha(g') h = \alpha(g) (g'h) = g(g'h)$

Moreover, the element $1_G \in G$ acts as the identity of $G \sqcup_\alpha H$. For example:

$1_G h = \alpha(1_G) h = 1_H h = h$

But of course $G \sqcup_\alpha H$ isn’t a group, since “once you get inside $H$ you never get out”.
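The construction is easy to test computationally. Here is a Python sketch (the tagged-pair encoding of $G \sqcup_\alpha H$ is mine), applied to the threefold case: $G = \{1,-1\}$ under multiplication, $H = \{0\}$ the trivial group, and $\alpha$ the only possible map:

```python
# Collage of a group homomorphism alpha: G -> H.
# G and H are given by multiplication functions; elements are tagged pairs.
def collage(G_mul, H_mul, alpha):
    def mul(x, y):
        (tx, vx), (ty, vy) = x, y
        if tx == 'G' and ty == 'G':
            return ('G', G_mul(vx, vy))
        a = alpha(vx) if tx == 'G' else vx   # push G-elements into H...
        b = alpha(vy) if ty == 'G' else vy   # ...whenever H is involved
        return ('H', H_mul(a, b))
    return mul

# Threefold way: G = Brauer group of R = {1,-1}, H = Brauer group of C = {0}.
mul = collage(lambda g, h: g * h, lambda a, b: 0, lambda g: 0)
elems = [('G', 1), ('G', -1), ('H', 0)]   # R, H, C respectively

for x in elems:
    assert mul(('G', 1), x) == x          # 1_G is the identity
    for y in elems:
        assert mul(x, y) == mul(y, x)     # commutative
        for z in elems:
            assert mul(mul(x, y), z) == mul(x, mul(y, z))   # associative
```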

This construction could be called the collage of $G$ and $H$ via $\alpha$, since it’s reminiscent of a similar construction of that name in category theory.

Question. What do monoid theorists call this construction?

Question. Can we do a similar trick for any field? Can we always take the Brauer groups of all its finite-dimensional extensions and fit them together into a monoid by taking some sort of collage? If so, I’d call this the Brauer monoid of that field.

### The $\mathbb{10}$-fold way

If you carefully read Part 1, maybe you can guess how I want to proceed. I want to make everything ‘super’.

I’ll replace division algebras over $\mathbb{R}$ by super division algebras over $\mathbb{R}$. Now instead of 3 = 2 + 1 there are 10 = 8 + 2:

• 8 of them are central simple over $\mathbb{R}$, so they give elements of the super Brauer group of $\mathbb{R}$, which is $\mathbb{Z}_8$.

• 2 of them are central simple over $\mathbb{C}$, so they give elements of the super Brauer group of $\mathbb{C}$, which is $\mathbb{Z}_2$.

Complexification gives a homomorphism

$\alpha: \mathbb{Z}_8 \to \mathbb{Z}_2$

namely the obvious nontrivial one. So, we can form the collage

$\mathbb{10} = \mathbb{Z}_8 \sqcup_\alpha \mathbb{Z}_2$

It’s a commutative monoid with 10 elements! Each of these is the equivalence class of one of the 10 real super division algebras.
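Here is a direct check, written additively, that this collage is a commutative monoid with 10 elements (a Python sketch; the tagged-pair encoding is mine):

```python
# The monoid 10 = Z8 u_alpha Z2, written additively, with alpha(n) = n mod 2.
Z8 = [('Z8', n) for n in range(8)]
Z2 = [('Z2', n) for n in range(2)]

def add(x, y):
    (tx, vx), (ty, vy) = x, y
    if tx == 'Z8' and ty == 'Z8':
        return ('Z8', (vx + vy) % 8)
    return ('Z2', (vx + vy) % 2)   # alpha just reduces mod 2

elems = Z8 + Z2
assert len(elems) == 10
for x in elems:
    assert add(('Z8', 0), x) == x                           # identity
    for y in elems:
        assert add(x, y) == add(y, x)                       # commutative
        for z in elems:
            assert add(add(x, y), z) == add(x, add(y, z))   # associative
```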

I’ll then need to check that there’s an oplax monoidal functor

$G : \mathbb{10} \to SuperAlg_{\mathbb{R}}$

sending each element of $\mathbb{10}$ to the corresponding super division algebra.

If $G$ really exists, I can compose it with a thing

$SuperRep : SuperAlg_{\mathbb{R}} \to Rex_{\mathbb{R}}$

sending each super algebra to its category of ‘super representations’ on super vector spaces. This should again be a contravariant monoidal pseudofunctor.

We can call the composite of $G$ with $SuperRep$

$SuperVect: \mathbb{10} \to Rex_{\mathbb{R}}$

If it all works, this thing $SuperVect$ will deserve to be called a $\mathbb{10}$-graded category. It contains super vector spaces over the 10 kinds of super division algebras in a single framework, and says how to tensor them. And when we look at super Hilbert spaces, this setup will be able to talk about all ten kinds of matter I mentioned last time… and how to combine them.

So that’s the plan. If you see problems, or ways to simplify things, please let me know!

Posted at July 22, 2014 11:02 AM UTC


### Re: The Ten-Fold Way (Part 2)

In retrospect I’m making things too hard on myself, and even doing things wrong, in the ‘tricky’ case of tensoring two real vector spaces equipped with a complex structure! I should just treat them as complex vector spaces and tensor them over $\mathbb{C}$. That’s physically right (for combining quantum systems), and it also seems to correspond to what we should do by taking the Brauer group of the complex numbers seriously, as a separate ‘chunk’ of our collage.

I believe that with this fix, the functor $F : \mathbb{3} \to Alg_{\mathbb{R}}$ will still be lax monoidal, and devoid of funny arbitrary choices.

Posted by: John Baez on July 25, 2014 11:40 AM

### Re: The Ten-Fold Way (Part 2)

From this point of view, where’s “periodicity”? Given an object in the category 10, why is there a “next” object?

Posted by: Allen Knutson on July 25, 2014 3:59 PM

### Re: The Ten-Fold Way (Part 2)

Well, super division algebras in the submonoid

$\mathbb{Z}_8 \subset \mathbb{10}$

happen to be Morita equivalent to the super algebras $Cl_0, \dots, Cl_7$ (real Clifford algebras), while those in the submonoid

$\mathbb{Z}_2 \subset \mathbb{10}$

happen to be Morita equivalent to the super algebras $\mathbb{C}\mathrm{l}_0, \mathbb{C}\mathrm{l}_1$ (complex Clifford algebras).

So in the former case $\mathbb{Z}_8$ has a distinguished generator, $Cl_1$ (actually a super division algebra itself), which gives $\mathbb{Z}_8$ a cyclic ordering! But I only know this generator is distinguished using Clifford algebra theory, not from staring at the super Brauer group $\mathbb{Z}_8$ in abstract.

The second case is less interesting: $\mathbb{Z}_2$ has a distinguished generator simply because it has only one possible generator.

Posted by: John Baez on July 25, 2014 4:45 PM

### Re: The Ten-Fold Way (Part 2)

Do you think (or share Greg Moore’s optimism) that this tenfold way is somehow the same as Dyson’s original tenfold way (discussed in his paper on the various threefold ways)?

The 10=8+2 split, as well as the group structure and canonical generators seem to be important aspects of this particular tenfold way. Dyson’s seems to involve 10=3x3+1, and various other “tenfold ways” in the literature don’t obviously have this “next object” that Allen mentioned. For the connection to $K$-theory, this extra structure seems to be crucial. If all these “tenfold ways” are not actually the same in any non-trivial sense, then what they classify must be very different things, which would be quite worrying!

Posted by: Guo Chuan Thiang on July 25, 2014 10:00 PM

### Re: The Ten-Fold Way (Part 2)

Chuan Thiang wrote:

Do you think (or share Greg Moore’s optimism) that this tenfold way is somehow the same as Dyson’s original tenfold way (discussed in his paper on the various threefold ways)?

I haven’t delved into this question enough… thanks for giving me a nudge; it would be fun to investigate this. I’m pretty sure the (8+2)-fold way I’m discussing here is related to a (3×3+1)-fold way based on the study of time reversal and charge conjugation symmetry… but I don’t know if that’s Dyson’s original 10-fold way.

Posted by: John Baez on July 26, 2014 2:51 AM

### Re: The Ten-Fold Way (Part 2)

Is there some way to define the Brauer monoid directly, without performing the collage construction? For example, can it be identified as consisting of Morita equivalence classes of [adjectives] algebras over $\mathbb{R}$ under some natural tensor product?

Posted by: Tim Campion on July 25, 2014 4:44 PM

### Re: The Ten-Fold Way (Part 2)

I’ve tried things like that but haven’t been able to get it to work. The basic problem is that

$\mathbb{C}\otimes_\mathbb{R} \mathbb{C} \cong \mathbb{C} \oplus \mathbb{C}$

whereas the three-fold way and ten-fold way want the tensor product of two $\mathbb{C}$-modules to be another $\mathbb{C}$-module.

Central simple algebras over a field $k$ are closed under tensor product, and every one is Morita equivalent to a division algebra over $k$ whose center is $k$. This gives the usual Brauer group. The problem is that $\mathbb{C}$ is not central simple over $\mathbb{R}$. $\mathbb{C}\otimes_\mathbb{R} \mathbb{C} \cong \mathbb{C} \oplus \mathbb{C}$ is no longer even simple.

If $k$ has characteristic zero, semisimple algebras over $k$ are closed under tensor product (as well as direct sum). So, we can create a rig of Morita equivalence classes of semisimple algebras over $k$. This seems to be the most reasonable thing to do. But this rig consists of finite linear combinations of $\mathbb{R}, \mathbb{C},$ and $\mathbb{H}$ with the following multiplication table:

$\begin{array}{cccc} \mathbf{\otimes_{\mathbb{R}}} & \mathbf{\mathbb{R}} & \mathbf{\mathbb{C}} & \mathbf{\mathbb{H}} \\ \mathbf{\mathbb{R}} & \mathbb{R} & \mathbb{C} & \mathbb{H} \\ \mathbf{\mathbb{C}} & \mathbb{C} & \mathbb{C} \oplus \mathbb{C} & \mathbb{C} \\ \mathbf{\mathbb{H}} & \mathbb{H} & \mathbb{C} & \mathbb{R} \end{array}$

I don’t see a systematic way to chop this rig down to the monoid $\mathbb{3} = \{1, 0, -1\}$. The problem is

$\mathbb{C} \otimes_{\mathbb{R}} \mathbb{C} \cong \mathbb{C} \oplus \mathbb{C}$

again.

So, I think the field extensions of our base field $k$ should be treated with some extra respect; this leads to the idea of taking all the Brauer groups and glomming them together into a monoid.

Posted by: John Baez on July 26, 2014 2:34 AM

### Re: The Ten-Fold Way (Part 2)

What you call “oplax” monoidal functors, I prefer to call colax:

There, got that out of my system.

Posted by: Tom Leinster on July 25, 2014 9:21 PM

### Re: The Ten-Fold Way (Part 2)

I’m thinking my ‘collage’ construction might be a special case of a general classification result for commutative monoids. The way people usually state that result is:

Any commutative semigroup has a grading by a semilattice such that the homogeneous components are Archimedean semigroups.

I don’t understand this result very well yet, but the basic idea seems to be that a commutative semigroup can be broken up into pieces called ‘components’, and the set of components is partially ordered (in fact a semilattice). Adding elements in two components $X$ and $Y$ can only yield an element in a component that’s at least as far up as both $X$ and $Y$.

I believe that the commutative monoid $\mathbb{10}$ will have two ‘components’, which are the groups $\mathbb{Z}_8$ and $\mathbb{Z}_2$. The component $\mathbb{Z}_2$ is ‘further up’, since whenever we add something in $\mathbb{Z}_8$ to something in $\mathbb{Z}_2$ we get something in $\mathbb{Z}_2$, and whenever we add two things in $\mathbb{Z}_2$ we get another thing in $\mathbb{Z}_2$. We can go up, but we can never go back down.

I think Pierre W. Grillet’s book Commutative Semigroups is the place to really learn this stuff, though it was first introduced in

• T. Tamura and N. Kimura, On decompositions of a commutative semigroup, Kodai Math. Sem. Rep. 1954 (1954), 109-112.
Posted by: John Baez on July 26, 2014 3:16 AM

### Re: The Ten-Fold Way (Part 2)

Let me try to explain this structure theorem:

A commutative semigroup has a grading by a semilattice such that the homogeneous components are Archimedean semigroups.

First, a meet-semilattice is a poset $(L,\le)$ where any pair of elements $\alpha, \beta$ has a greatest lower bound, denoted $\alpha \wedge \beta$. Any such semilattice is automatically a commutative semigroup obeying an extra law, the idempotence law:

$\alpha \wedge \alpha = \alpha$

Suppose we have a commutative semigroup $(S, \cdot)$ and a homomorphism $f : S \to L$ where $L$ is a semilattice. Then we can partition $S$ into (possibly empty) subsets, one for each element $\alpha \in L$:

$S_\alpha = f^{-1} \{\alpha\}$

It’s easy to check that thanks to the idempotence law, each $S_\alpha$ is a semigroup!

So, a homomorphism $f : S \to L$ to a semilattice $L$ is a great way to chop up a commutative semigroup into smaller commutative semigroups.

Posted by: John Baez on July 26, 2014 8:28 AM

### Re: The Ten-Fold Way (Part 2)

So, given a commutative semigroup $(S,\cdot)$, how do we get a homomorphism to a semilattice?

We can define a preorder on $S$ by

$a \le b \iff b \cdot c = a^n \text{ for some } c \in S, \; n > 0$

Roughly speaking, $a \le b$ if you can get from $b$ to some power of $a$ by multiplying $b$ with something. I think this definition is ‘upside down’ compared to how I would visualize things, but I will stick with Grillet’s notation for now to keep from getting completely confused!

This is just one of many preorders people put on semigroups, but this one is particularly nice for our purposes.

We can define an equivalence relation on $S$ by

$a \sim b \iff a \le b \text{ and } b \le a$

Then $S/\sim$ becomes a semilattice with the operation $\wedge$ coming from the multiplication $\cdot$ in $S$ and a partial order coming from the preorder $\le$ in $S$. The quotient map

$S \to S/\sim$

is a homomorphism of semigroups, so we’ve got what we want.

Even better, $\sim$ is the finest equivalence relation such that $S /\sim$ becomes a semilattice!

This is part of Theorem 1.2 in Grillet’s book Commutative Semigroups. It looks quite easy to prove.
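For a finite example this quotient can be computed by brute force. Applied to the threefold monoid $\mathbb{3} = \{1,0,-1\}$, the equivalence classes come out to be exactly $\{1,-1\}$ and $\{0\}$ — the two Brauer groups from the collage. A Python sketch (bounding $n$ by $|S|$ suffices for a finite semigroup, since the powers of $a$ eventually cycle):

```python
from itertools import product

# The commutative monoid 3 = {1, 0, -1} under multiplication.
S = [1, 0, -1]
def mul(a, b): return a * b

def power(a, n):
    out = a
    for _ in range(n - 1):
        out = mul(out, a)
    return out

# a <= b  iff  b*c = a^n for some c in S, n > 0.
def leq(a, b):
    return any(mul(b, c) == power(a, n)
               for c, n in product(S, range(1, len(S) + 1)))

# Equivalence classes of (a <= b and b <= a): the 'components'.
classes = []
for a in S:
    cls = frozenset(b for b in S if leq(a, b) and leq(b, a))
    if cls not in classes:
        classes.append(cls)

assert set(classes) == {frozenset({1, -1}), frozenset({0})}
```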

Posted by: John Baez on July 26, 2014 8:47 AM

### Re: The Ten-Fold Way (Part 2)

Let me just check that $S / \sim$ is a semilattice.

I’ll take it for granted that $\le$ as defined above is a preorder on our semigroup $S$. It’s easy to check that this preorder gives one on $S / \sim$.

Note that the preorder $\le$ on $S$ is not in general a partial order: we don’t have

$a \le b \text{ and } b \le a \implies a = b$

since for a group we have $a \sim b$ for all $a,b$. However, to form $S/\sim$ we identify two elements whenever $a \le b$ and $b \le a$. So the preorder on $S / \sim$ is actually a partial order.

Why is it a semilattice? We need to show that given $[a], [b] \in S/\sim$ they have a greatest lower bound. The only possible choice is $[a \cdot b]$, so let’s check that it works.

I would visualize $a \cdot b$ as bigger than $a$ and $b$, but I think my visualization is upside down compared to Grillet’s. Indeed, we have

$a \cdot b \le a$

since $a$ times something equals some power of $a \cdot b$. Similarly

$a \cdot b \le b$

so $a \cdot b$ is a lower bound of $a$ and $b$ in $S$. Thus $[a \cdot b]$ is a lower bound of $[a]$ and $[b]$ in $S / \sim$.

Why is it the greatest lower bound?

Suppose $[c]$ is any lower bound of $[a]$ and $[b]$. Then $c \le a$ and $c \le b$. So,

$c^m = a \cdot x$

for some $m, x$ and

$c^n = b \cdot y$

for some $n, y$. Following my old nose, I get

$c^{m + n} = a \cdot b \cdot x \cdot y$

which implies that $c \le a \cdot b$. So, $[a \cdot b]$ is the greatest lower bound of $[a]$ and $[b]$.

Apart from all the inequalities pointing the opposite direction from how I would visualize them, this was very easy.

Posted by: John Baez on July 26, 2014 9:24 AM

### Re: The Ten-Fold Way (Part 2)

And now for the final piece of this structure theorem! We’ve already seen that starting from a commutative semigroup $S$, we get a semilattice $S/\sim$ and a homomorphism, the quotient map

$f: S \to S/\sim$

It follows that each component

$S_\alpha = f^{-1} \{ \alpha \}$

is a semigroup. Moreover, $S$ becomes graded by the semilattice $S / \sim$, meaning

$S_\alpha \cdot S_\beta \subseteq S_{\alpha \wedge \beta }$

But the final piece is this: each component $S_\alpha$ is an Archimedean semigroup. This is one for which any pair of elements $a, b$ obeys $a \le b$, where $\le$ is the preorder we’ve seen before:

$a \le b \iff b \cdot c = a^n \text{ for some } c \in S, \; n > 0$

This last piece is obvious, since each component is defined to be an equivalence class of elements of $S$, where $a \sim b$ iff $a \le b$ and $b \le a$.

You can see how the Archimedean property is a generalization of the one we know in real analysis.
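Here is a brute-force check of the Archimedean property for the two components of $\mathbb{10} = \mathbb{Z}_8 \sqcup_\alpha \mathbb{Z}_2$, written additively with a tagged-pair encoding of my own (so $a^n$ becomes the $n$-fold sum):

```python
from itertools import product

Z8 = [('Z8', n) for n in range(8)]
Z2 = [('Z2', n) for n in range(2)]
S = Z8 + Z2

def add(x, y):
    (tx, vx), (ty, vy) = x, y
    if tx == 'Z8' and ty == 'Z8':
        return ('Z8', (vx + vy) % 8)
    return ('Z2', (vx + vy) % 2)

def times(n, a):          # a^n in multiplicative notation = n-fold sum here
    out = a
    for _ in range(n - 1):
        out = add(out, a)
    return out

def leq(a, b):            # a <= b  iff  b + c = n*a for some c in S, n > 0
    return any(add(b, c) == times(n, a) for c, n in product(S, range(1, 9)))

# Within each component, every pair satisfies a <= b: Archimedean.
for component in (Z8, Z2):
    assert all(leq(a, b) for a, b in product(component, component))
```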

Posted by: John Baez on July 26, 2014 9:40 AM

### Re: The Ten-Fold Way (Part 2)

Ah, Theorem 2.1 in Grillet’s Commutative Semigroups gives a possible strategy for building the ‘Brauer monoid’ of a field out of the Brauer groups of all its algebraic extensions!

Let me first state the theorem. We say a commutative semigroup $S$ is graded by a semilattice $L$ if we can write $S$ as a disjoint union of ‘components’ $S_\alpha$ for $\alpha \in L$, and

$S_\alpha \cdot S_\beta \subseteq S_{\alpha \wedge \beta}$

for all $\alpha, \beta \in L$. This implies that each component is a semigroup.

We’ve seen that every commutative semigroup has a canonical grading of this sort.

We say $S$ is a Clifford semigroup if it is equipped with a grading by a semilattice $L$ (not necessarily the canonical one) with the additional property that each component is a group. The components will obviously then be abelian groups.

The theorem says that from a Clifford semigroup we can build a functor from $L$ (viewed as a category with a unique morphism from $\alpha$ to $\beta$ when $\beta \le \alpha$) to the category of abelian groups. Conversely, given a functor from a semilattice to the category of abelian groups, we can build a Clifford semigroup.

(I believe these two processes are ‘inverses’: that is, they form an equivalence between the category of $L$-graded semigroups whose components are groups, and the category of functors from $L$ to $AbGp$. But Grillet doesn’t say this.)

The idea seems pretty simple. Given a Clifford semigroup, we define a functor from $L$ to $AbGp$ as follows. To each $\alpha \in L$ we assign the group $S_\alpha$. And whenever $\beta \le \alpha$, we have a homomorphism $S_\alpha \to S_\beta$ sending $x \in S_\alpha$ to $x 1_\beta$, where $1_\beta$ is the identity of $S_\beta$. This indeed maps $S_\alpha$ to $S_\beta$ since

$x \in S_\alpha, 1_\beta \in S_\beta \; \implies \; x 1_\beta \in S_{\alpha \wedge \beta} = S_\beta$

And it’s clearly a homomorphism!

It’s also pretty easy to see that these homomorphisms fit together to define a functor from $L$ to $AbGp$.

(Again Grillet seems to be doing things ‘upside down’ by treating $L$ as a category with a unique morphism from $\alpha$ to $\beta$ when $\beta \le \alpha$. But this is just a matter of convention. I won’t change conventions while I’m still reading his book!)

Posted by: John Baez on July 26, 2014 4:22 PM | Permalink | Reply to this

### Re: The Ten-Fold Way (Part 2)

This reminds me of two things: the Grothendieck construction of a functor, and the spectrum of a ring. Any relation to either one, do you think?

Posted by: Mike Shulman on July 27, 2014 4:29 AM | Permalink | Reply to this

### Re: The Ten-Fold Way (Part 2)

I’ve been thinking of it as very close to the Grothendieck construction of a functor.

This is slightly obscured by the fact that people in semigroup theory don’t like identity morphisms.

But for my application this extra generality is pointless! I like identities. So I would use monoids instead of semigroups, and also work with meet-semilattices that have a top element. (I now see that some reputable people insist that their meet-semilattices have a top element. Good.)

Then we could think of it this way:

We’ve got a functor $F : \Lambda \to AbGp$ where $\Lambda$ is a specially nice symmetric monoidal category (namely a semilattice, viewed as a symmetric monoidal posetal category). And we use some version of the Grothendieck construction to turn this into a specially nice symmetric monoidal category over $\Lambda$ (namely a $\Lambda$-graded commutative monoid).

Perhaps $AbGp$ here is a watered-down version of $SymMonCat$.

So it seems to involve a symmetric monoidal version of the Grothendieck construction. Have you heard about that?

Posted by: John Baez on July 27, 2014 5:55 AM | Permalink | Reply to this

### Re: The Ten-Fold Way (Part 2)

So it seems to involve a symmetric monoidal version of the Grothendieck construction. Have you heard about that?

Yep! (Theorem 12.7)

Posted by: Mike Shulman on July 28, 2014 5:50 AM | Permalink | Reply to this

### Re: The Ten-Fold Way (Part 2)

Although you know you should beef up the assignment $k \mapsto Br(k)$ to a functor assigning to $k$ the symmetric monoidal 2-category of Azumaya algebras over $k$, invertible bimodules and bimodule isos. Although, as Mike’s result stands, isomorphism classes of bimodules would work better.

Also, the domain could be the (opposite of the) topos of finite field extensions, or equivalently the category of continuous $Gal(K/k)$-sets…

Posted by: David Roberts on July 28, 2014 6:42 AM | Permalink | Reply to this

### Re: The Ten-Fold Way (Part 2)

Sorry, that was meant in response to John. Miscounted the quote levels… :-/

Posted by: David Roberts on July 28, 2014 6:44 AM | Permalink | Reply to this

### Re: The Ten-Fold Way (Part 2)

Great! For those too lazy to click and read, this theorem says (among other things) that there’s an equivalence

$SMF_C \simeq [C^{op}, SymMonCat ]$

where $C$ is a cartesian monoidal category, $SMF_C$ is the 2-category of symmetric monoidal fibrations over $C$, some sort of commuting squares of these, and natural transformations, and I’m guessing $[\cdot, \cdot]$ is the internal hom in $Cat$.

A meet-semilattice is indeed a cartesian monoidal category, right? So I think this theorem is indeed a kind of generalization of the result I mentioned. In that result $C$ is replaced by a meet-semilattice, $SymMonCat$ is replaced by $AbGp$, and $SMF_C$ is replaced by the category of $C$-graded commutative monoids with the property that each grade is an abelian group.

Posted by: John Baez on July 28, 2014 6:25 AM | Permalink | Reply to this

### Re: The Ten-Fold Way (Part 2)

Right! Except that $[\cdot,\cdot]$ denotes the category of pseudofunctors, not strict ones — but in your decategorified case there is no difference.

Posted by: Mike Shulman on July 28, 2014 7:08 PM | Permalink | Reply to this

### Re: The Ten-Fold Way (Part 2)

I don’t really see how this construction is like the spectrum of a ring, except in my application where there are bunch of different fields running around.

Posted by: John Baez on July 27, 2014 6:04 AM | Permalink | Reply to this

### Re: The Ten-Fold Way (Part 2)

Well, maybe the analogy to the spectrum of a ring is a bit of a stretch. I’m not thinking only of the ordinary prime spectrum of a commutative ring, but also of various other kinds of “spectra of rings” that I think I read about in Stone Spaces. In general the idea is that given a ring, one constructs from it a posetal sort of gadget (like the frame of opens of its prime ideal spectrum), and then decomposes the ring into a family of “simpler” rings “indexed over” that poset in some way (like the structure sheaf of the prime ideal spectrum). You seem to be doing something similar to a semigroup.

Posted by: Mike Shulman on July 28, 2014 5:55 AM | Permalink | Reply to this

### Re: The Ten-Fold Way (Part 2)

I just got around to reading J.C. Cole’s preprint The bicategory of topoi, and spectra which was discussed on the categories list last week. He presents a general theory of “spectra” in terms of right adjoints to maps of slice 2-categories of $Topos$ over classifying topoi, which includes the usual prime spectrum of a ring, where the classifying toposes are the Zariski topos (for local rings) and the classifying topos of rings. The right adjoint assigns to any ringed topos a locally ringed topos, and I guess when restricted to ordinary rings in $Set$ it gives the usual Zariski spectrum.

Its existence is equivalent to the (constructive) best factorization of a ring homomorphism with local codomain through a local map (the other factor being a localization). I wonder whether this decomposition of a commutative semigroup into an “Archimedean piece” and a “semilattice piece” can be viewed as one of these “spectra”.

Posted by: Mike Shulman on July 29, 2014 6:26 AM | Permalink | Reply to this

### Re: The Ten-Fold Way (Part 2)

A whiff of fracture theorem there (blog post, nLab)?

Posted by: David Corfield on July 29, 2014 6:45 PM | Permalink | Reply to this

### Re: The Ten-Fold Way (Part 2)

Mike wrote:

I wonder whether this decomposition of a commutative semigroup into an “Archimedean piece” and a “semilattice piece” can be viewed as one of these “spectra”.

Just so I don’t forget: someone should try to boost up this decomposition (called Clifford’s theorem) to a decomposition for symmetric monoidal categories!

You can easily form a symmetric monoidal preorder $P$ from a symmetric monoidal category $C$ by saying that for $a,b \in C$ we have $a \le b$ if there exists a morphism from $a$ to $b$. We get a forgetful functor

$F: C \to P$

which is symmetric monoidal, and thus $C$ becomes `$P$-graded’. We can replace $P$ by an equivalent poset if we like.

But Clifford’s clever idea — which he implemented only for commutative monoids — was to work a bit harder and replace $P$ with a semilattice.

Hmm. Ten minutes trying to generalize this to full-fledged symmetric monoidal categories failed to yield a method that works. Maybe we need to categorify the concept of ‘semilattice’ or something.

Getting some sort of ‘structure theorem’ for symmetric monoidal categories would be very nice.

Posted by: John Baez on July 29, 2014 9:39 AM | Permalink | Reply to this

### Re: The Ten-Fold Way (Part 2)

Maybe we need to categorify the concept of ‘semilattice’ or something.

One categorification of ‘meet-semilattice’ is ‘cartesian monoidal category’.

Posted by: Mike Shulman on July 30, 2014 12:43 AM | Permalink | Reply to this

### Re: The Ten-Fold Way (Part 2)

So here’s my latest idea for defining the ‘Brauer monoid’ of a field $k$. We let $K$ be a finite extension of $k$, and let $L$ be the lattice of subfields of $K$ containing $k$.

For each field $F \in L$, let $Br(F)$ be the Brauer group of $F$. If $F$ is contained in a bigger field $F' \in L$, there’s a homomorphism

$Br(F) \to Br(F')$

sending each Morita equivalence class $[A]$ of central simple algebras over $F$ to the class $[F' \otimes_F A]$ of central simple algebras over $F'$. (Apparently people call this homomorphism a ‘restriction map’.)

I believe this should give a functor from the lattice $L$ to $AbGp$ — and thus, by the theorem I mentioned in my last comment, an $L$-graded commutative semigroup!

(Now our conventions have flipped, so we’re thinking of $L$ as a category with a unique morphism from $F$ to $F'$ when $F \subseteq F'$. But that’s okay.)

This commutative semigroup should be a commutative monoid: the Brauer monoid of the extension $k \subseteq K$.

This seems like a more spiffy version of my ‘collage’ idea.

Posted by: John Baez on July 26, 2014 4:39 PM | Permalink | Reply to this

### Re: The Ten-Fold Way (Part 2)

It may seem odd that I described a Brauer monoid not for a field but for a field $k$ and a finite extension $K$. This gives a bit of extra flexibility, but we may not want that flexibility. I restricted myself to finite extensions of the field $k$, that is, those where the larger field $K$ is a finite-dimensional vector space over $k$, because I only want my Brauer monoid to know about finite-dimensional division algebras over $k$.

If the algebraic closure of $k$ is a finite extension of $k$, we can use that as our choice of $K$ and speak simply of the Brauer monoid of $k$.

This condition holds for $k = \mathbb{R}$, but not $k = \mathbb{Q}$.

Now I see that we can easily avoid this condition: let $K$ be the algebraic closure of $k$ but let $L$ be the lattice of fields $k \subseteq F \subseteq K$ that are finite extensions of $k$. Define a functor from $L$ to $AbGp$ sending each such field $F$ to the Brauer group $Br(F)$, and sending each inclusion $F \hookrightarrow F'$ to the canonical map $Br(F) \to Br(F')$. Use Theorem 2.1 in Grillet’s book to create a commutative monoid from this functor. This is the Brauer monoid of $k$.

It agrees with the previously mentioned one when the algebraic closure of $k$ is a finite extension of $k$.

Next I’d like to describe the Brauer monoid a bit more directly, without mentioning that theorem in Grillet’s book.

Posted by: John Baez on July 27, 2014 4:35 AM | Permalink | Reply to this

### Re: The Ten-Fold Way (Part 2)

How do we explicitly describe the commutative semigroup coming from a functor

$F : L \to AbGp$

where $L$ is a lattice? Grillet explains this in the proof of his Theorem 2.1, a theorem that goes back to here:

• A. H. Clifford, Semigroups admitting relative inverses, Annals of Math. 42 (1941) 1037–1049.

To get our commutative semigroup, we start by taking the disjoint union

$S = \coprod_{\alpha \in L} F(\alpha)$

Then we put a multiplication on it. Say we are given $a \in F(\alpha)$ and $b \in F(\beta)$ and we want to multiply them. Since we have

$\alpha, \beta \ge \alpha \wedge \beta$

our functor $F$ gives group homomorphisms from $F(\alpha)$ and $F(\beta)$ to $F(\alpha \wedge \beta)$ (using Grillet’s upside-down conventions). We use these to map $a$ and $b$ into the semigroup $F(\alpha \wedge \beta)$, and then we multiply them in there.

It’s pretty easy to see that this recipe makes $S$ into a commutative semigroup: the associative law is the fun thing to check here.

And if our semilattice $L$ has a top element $\top$, then the identity $1 \in F(\top)$ will serve as an identity for $S$, so $S$ will be a commutative monoid.
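As a sanity check on this recipe, here is a hedged Python sketch for a toy functor of my own choosing (not an example from Clifford or Grillet): take $L$ to be the positive integers ordered by divisibility, with meet given by gcd, and let $F(n) = \mathbb{Z}/n$, where for $n$ dividing $m$ the homomorphism $F(m) \to F(n)$ is reduction mod $n$.

```python
# Sketch of the Clifford/Grillet recipe: to multiply a in F(alpha) and
# b in F(beta), push both down to F(alpha ∧ beta) and multiply there.
# Toy functor (my choice): L = positive integers under divisibility,
# meet = gcd, F(n) = Z/n written additively, with reduction maps.

from math import gcd

def push(x, m, n):
    """The homomorphism F(m) -> F(n), for n dividing m: reduce mod n."""
    return x % n

def multiply(x, m, y, n):
    """Multiply x in F(m) by y in F(n) inside the graded semigroup S:
    the product lives in the component F(gcd(m, n))."""
    k = gcd(m, n)
    return (push(x, m, k) + push(y, n, k)) % k, k

# An element of Z/12 times an element of Z/8 lands in Z/gcd(12, 8) = Z/4:
assert multiply(7, 12, 5, 8) == (0, 4)  # (7 mod 4) + (5 mod 4) = 3 + 1 = 0 mod 4
```

Checking associativity here amounts to the associativity of gcd together with the functoriality of the reduction maps, which is indeed the fun part.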

(There should be some quick name for a meet-semilattice with a top element, just as there’s a quick name for a semigroup with an identity element.)

Posted by: John Baez on July 27, 2014 5:33 AM | Permalink | Reply to this

### Re: The Ten-Fold Way (Part 2)

So, let me unravel all the constructions and describe my ‘Brauer monoid of a field’ more directly.

We start with a field $k$ and pick an algebraic closure of it, say $K$. We let $L$ be the collection of all intermediate fields $k \subseteq F \subseteq K$ that are finite extensions of $k$. We form the disjoint union

$\mathbf{BR}(k) = \coprod_{F \in L} Br(F)$

where $Br(F)$ is the usual Brauer group of $F$. An element of $Br(F)$ is a Morita equivalence class $[A]$ of central simple algebras over $F$.

Now we make $\mathbf{BR}(k)$ into a commutative monoid. Suppose we have two elements of $\mathbf{BR}(k)$:

$[A] \in Br(F), \quad [A'] \in Br(F')$

How do we multiply them? We take the smallest field $E \in L$ that contains both $F$ and $F'$. We extend our algebras $A$ and $A'$ to algebras over $E$, getting

$\tilde{A} = E \otimes_F A , \qquad \tilde{A'} = E \otimes_{F'} A'$

If I know what I’m doing, these are both central simple algebras over $E$. So, we can tensor them over $E$ and get another central simple algebra $\tilde{A} \otimes_E \tilde{A'}$. This gives an element

$[\tilde{A} \otimes_E \tilde{A'}] \in Br(E) \subseteq \mathbf{BR}(k)$

And that’s how we multiply two guys in $\mathbf{BR}(k)$.
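If I'm unraveling this correctly for $k = \mathbb{R}$, the lattice $L$ is just $\{\mathbb{R}, \mathbb{C}\}$, with $Br(\mathbb{R}) = \{[\mathbb{R}], [\mathbb{H}]\}$ a copy of $\mathbb{Z}/2$ and $Br(\mathbb{C})$ trivial, so $\mathbf{BR}(\mathbb{R})$ has three elements and should be the monoid $\mathbb{3} = \{1,0,-1\}$ from the post. A sketch (the numeric encoding is my own shorthand):

```python
# Hedged sketch: encode [R] as 1, [H] as -1, and [C] as 0; then, if the
# identification with the threefold-way monoid is right, multiplication in
# the Brauer monoid BR(R) is just multiplication in {1, 0, -1}.

BR_R = {'[R]': 1, '[H]': -1, '[C]': 0}

def br_mult(a, b):
    """Multiply two classes in BR(R) via the {1, 0, -1} encoding."""
    return BR_R[a] * BR_R[b]

# [H] times [H] gives [R], reflecting H ⊗ H ≅ R[4], which is Morita trivial:
assert br_mult('[H]', '[H]') == BR_R['[R]']
# Anything extended to C dies, since Br(C) is trivial:
assert br_mult('[C]', '[H]') == BR_R['[C]']
```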

It’s pretty simple when all is said and done. The detour through commutative semigroup theory could be avoided, but I needed it to reach this idea… and it was fun to learn that stuff: commutative semigroups don’t get the respect they deserve. (Or at least commutative monoids don’t: semigroups that aren’t monoids deserve to be spat on and kicked, or else mercifully provided with an identity element.)

Now I can just generalize this stuff to the ‘super’ case and define the super Brauer monoid of any field. When that field is $\mathbb{R}$, this should be the commutative monoid $\mathbb{10}$, and we’ll get a ‘$\mathbb{10}$-graded symmetric monoidal category’.

Posted by: John Baez on July 27, 2014 6:20 AM | Permalink | Reply to this

### Re: The Ten-Fold Way (Part 2)

but don’t forget that INVERSE semigroups look like semigroups of partial automorphisms and have lots of identities rather than just one, and they are essentially the same as ordered groupoids, so spitting on the many object case seems unlike you, John, champion of the many object view of mathematics. ;-)

Posted by: Tim Porter on July 27, 2014 10:10 AM | Permalink | Reply to this

### Re: The Ten-Fold Way (Part 2)

Dear John:

You shouldn’t be so harsh to something with an identity crisis, like a semigroup without a “1”.

Although you can just add an identity element, this sometimes masks some important features of the structure. Well, I’ve spent about 40 years thinking about them, so the margins of this response don’t allow me to amplify this remark. Glad to discuss it if you like.

Best, Stuart Margolis

PS There’s also a fairly extensive literature on Commutative Monoids that I could point you to if you’d like.

Posted by: Stuart Margolis on July 27, 2014 12:55 PM | Permalink | Reply to this

### Re: The Ten-Fold Way (Part 2)

I’d hope it’s clear I’m joking when I say a mathematical object deserves to be “spat on and kicked”. However, I’ve rarely needed to think about semigroups that weren’t monoids or conveniently sitting inside monoids. Ditto for ‘semicategories’. But when I need them I will embrace them.

Posted by: John Baez on July 28, 2014 6:34 AM | Permalink | Reply to this

### Re: The Ten-Fold Way (Part 2)

David wrote:

you should beef up the assignment $k \mapsto Br(k)$ to a functor assigning to $k$ the symmetric monoidal 2-category of Azumaya algebras over $k$, invertible bimodules and bimodule isos.

At one point my aesthetic was always to categorify everything as much as possible — but now ‘everyone’ is doing it, and I’m getting old and tired, so I’ve switched to a more minimalist aesthetic. I really just want to understand the $\mathbb{10}$ in the ten-fold way as a unified mathematical entity, instead of a listing of things.

However, if this works, it will be good for someone to milk it for all it’s worth!

Back in 2004 I was pushing people to study a monoidal bicategory $Alg(R)$ associated to any commutative ring $R$. The idea was that $Alg(R)$ has:

• $R$-algebras as objects

• bimodules as morphisms

• bimodule homomorphisms as 2-morphisms

The ‘core’ of this, consisting of the weakly invertible stuff, would be the 3-group with:

• Azumaya algebras as objects

• Morita equivalences as morphisms

• bimodule isomorphisms as 2-morphisms

and the $\pi_1, \pi_2$ and $\pi_3$ of this would be the Brauer group, Picard group and unit group of $R$.

At the time I was unable to rigorously verify that $Alg(R)$ is a monoidal bicategory. In fact, it should be a symmetric monoidal bicategory. I guess Mike’s paper on constructing symmetric monoidal bicategories includes a proof that this is really true. (Talk is cheap; proofs less so.)

But now you’re telling me to go a bit further and think about how all these $Alg(R)$ guys fit together as we vary $R$. I’d thought about that a little bit, before.

The funny thing is that we have homomorphisms between commutative rings and also bimodules as potential morphisms between different $R$s. And something similar happens even if we fix one particular commutative ring $R$: we have homomorphisms as well as bimodules going between different $R$-algebras. Mike deals with that by saying $Alg(R)$ is not merely a symmetric monoidal bicategory, but something better: a fibrant symmetric monoidal double category.

So, something like Mike’s idea should also hold at the level where we allow $R$ to vary. But somehow this doesn’t seem to buy us much more, since all different commutative rings are already sitting there inside $Alg(\mathbb{Z})$.

We can talk about commutative algebras over commutative algebras over …. a commutative ring as long as we want, but is there any reason to care?

Posted by: John Baez on July 29, 2014 7:56 AM | Permalink | Reply to this

### Re: The Ten-Fold Way (Part 2)

I guess Mike’s paper on constructing symmetric monoidal bicategories includes a proof that this is really true.

Yep. You probably also know about Niles Johnson’s work using this bicategory (and related ones) for Morita theory.

So, something like Mike’s idea should also hold at the level where we allow $R$ to vary.

One way to assemble this sort of data is what I called in the paper a “2x1-category”: a (pseudo) 2-category internal to 1-categories, just as a double category (or “1x1-category”) is a 1-category internal to 1-categories. You have a category of objects (commutative rings and ring homomorphisms), a category of 1-cells (two-sided algebras and algebra homomorphisms), and a category of 2-cells (bimodules and bimodule maps). There are also other ways of presenting these data, e.g. you could talk about a functor $CRing \to DblCat$. Neither of these includes bimodules between commutative rings explicitly, but you can see them by regarding a commutative ring as an algebra over itself in the tautological way. I guess you could maybe try to include them explicitly in a more triple-categorical sort of structure.

I can’t say based on my own experience whether or not this buys us anything, but I remember Peter May caring about structures of this sort at some point — I think towards the end of Parametrized homotopy theory they had to look at an analogous thing where for every space $B$ there was a bicategory (actually a fibrant double category) of spaces-over-$B$ and parametrized-spectra-over-spaces-over-$B$, varying as $B$ varies. And IIRC Chris Douglas and his collaborators have used related structures to talk about “conformal nets”, whatever those are.

Posted by: Mike Shulman on July 29, 2014 7:15 PM | Permalink | Reply to this

### Re: The Ten-Fold Way (Part 2)

Mike wrote:

Yep. You probably also know about Niles Johnson’s work using this bicategory (and related ones) for Morita theory.

No, I didn’t! Thanks for pointing it out.

Posted by: John Baez on July 30, 2014 4:41 AM | Permalink | Reply to this

### Re: The Ten-Fold Way (Part 2)

I knew that you were not trying to discover a huge intricate sunken cathedral, but being concrete (and real-world relevant); I’m just in a bimodule/algebra frame of mind at the moment.

One reason that people care about such things is that this iterated algebra construction is one way we can access $n$-vector spaces. Also, Urs is currently thinking about how to globalise the arithmetic part of algebraic topology, instead of working ‘prime-by-prime’, in his cohesive setup, and this feels very much like it. All these things pasted together should say something about twists of differential algebraic K-theory, but perhaps that’s a bit of wishful thinking.

Posted by: David Roberts on July 30, 2014 6:47 AM | Permalink | Reply to this

### Re: The Ten-Fold Way (Part 2)

I am vaguely trying to get a glimpse of what you are talking about here and what motivates it. Warning: since I understand only a tiny bit of the above (and part 1), and probably less than 1% of the category theory jargon, the following question may well appear strange. I also haven’t looked at your division algebra-quantum theory paper.

If I understood you correctly, then with the threefold way you have some way to “cook up” higher-dimensional Hilbert spaces, via some tensor product construction. I have no idea, though, how you came up with the results in the product table (is this in the division algebra paper?). That is, on those product Hilbert spaces one again has one of the two major symmetries acting. Within the “tenfold way” that could be either time reversal or charge conjugation. Since, as said, I do not know how this construction works, I have no idea whether it can be used to construct new Hilbert space operators, and in particular whether it would be partially extendable to the tenfold way, say by assuming some product behaviour for the “missing symmetry”.

For example, could one use your construction by assuming that no symmetry for the missing symmetry (i.e. 0) would still be no symmetry for that missing symmetry after taking the product?

### Re: The Ten-Fold Way (Part 2)

If I understood you correctly, then with the threefold way you have some way to “cook up” higher-dimensional Hilbert spaces, via some tensor product construction.

In quantum mechanics, as you know, when we combine two systems we take the tensor product of their Hilbert spaces. This is fine if we’re working only with complex Hilbert spaces, and it’s also fine if we’re working only with real Hilbert spaces, but it doesn’t work with quaternionic Hilbert spaces!

The reason is that the quaternions are noncommutative. A quaternionic vector space is a left module over the quaternions, but you can’t tensor two left modules over a noncommutative algebra and get another left module. Stephen Adler wrote a book on quaternionic quantum mechanics where he got rather confused by this issue (in my humble opinion).

You might wonder why we should care about real or quaternionic Hilbert spaces. The answer is that even if you take complex Hilbert spaces as fundamental, the study of conjugate-linear symmetries, like time reversal and charge conjugation, gives rise to real and quaternionic Hilbert spaces. I explained this here:

and this paper probably provides some of the motivation — the ‘why should we do this?’ stuff — that you’re missing.

But very roughly:

• quantum systems that don’t have time reversal symmetry are described by complex Hilbert spaces;

• quantum systems that do have time reversal symmetry are described by complex Hilbert spaces with an operator $T$ that is norm-preserving, conjugate-linear ($i T = -T i$) and obeys $T^2 = \pm 1$. When $T^2 = 1$ we can obtain from this data a real Hilbert space; when $T^2 = -1$ we can obtain from this data a quaternionic Hilbert space. Conversely, from a real or quaternionic Hilbert space we can get a complex Hilbert space with this extra data!

We can then think about combining these systems. So, we’re tensoring two complex Hilbert spaces equipped with extra data — or no extra data. But it turns out this problem is the same as the problem of tensoring real, complex and quaternionic Hilbert spaces!

For example, if we have two complex Hilbert spaces $H$ and $H'$ with time reversal operators $T$ and $T'$ obeying $T^2 = -1$, ${T'}^2 = -1$, the Hilbert space $H \otimes H'$ gets an operator $T \otimes T'$ with $(T \otimes T')^2 = 1$. This corresponds to how we can tensor two quaternionic Hilbert spaces and get a real one!

Further analysis (done in this paper) reveals the multiplication table here:

$\begin{array}{cccc} \mathbf{\otimes} & \mathbf{real} & \mathbf{complex} & \mathbf{quaternionic} \\ \mathbf{real} & real & complex & quaternionic \\ \mathbf{complex} & complex & complex & complex \\ \mathbf{quaternionic} & quaternionic & complex & real \end{array}$

This ultimately arises from the fact that if we tensor the algebras $\mathbb{R}, \mathbb{C}, \mathbb{H}$, thinking of them as algebras over the real numbers, we get other algebras as follows:

$\begin{array}{lrrr} \mathbf{\otimes} & \mathbf{\mathbb{R}} & \mathbf{\mathbb{C}} & \mathbf{\mathbb{H}} \\ \mathbf{\mathbb{R}} & \mathbb{R} & \mathbb{C} & \mathbb{H} \\ \mathbf{\mathbb{C}} & \mathbb{C} & \mathbb{C} \oplus \mathbb{C} & \mathbb{C}[2] \\ \mathbf{\mathbb{H}} & \mathbb{H} & \mathbb{C}[2] & \mathbb{R}[4] \end{array}$

But how, exactly, does it arise?

My post here, and especially my comments on it, work out exactly how, in a way that generalizes to the 10-fold way, which is the ‘super’ version of the same story… arising naturally when you consider charge conjugation as well as time reversal.
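In the $\{1,0,-1\}$ encoding from the post, the way the first table falls out of $T^2 = \pm 1$ can be sketched like this (encoding ‘complex’, i.e. no $T$ at all, as $0$ is my own shorthand):

```python
# Sketch: real <-> T^2 = +1, quaternionic <-> T^2 = -1, complex <-> no T
# (encoded as 0). Tensoring two systems multiplies the squares, since
# (T ⊗ T')^2 = T^2 ⊗ T'^2, and if either factor lacks a T, so does the product.

square = {'real': 1, 'quaternionic': -1, 'complex': 0}
kind = {1: 'real', -1: 'quaternionic', 0: 'complex'}

def tensor(a, b):
    """Combine two systems: multiply in {1, 0, -1} and decode."""
    return kind[square[a] * square[b]]

# Two quaternionic systems combine to a real one, as in the table:
assert tensor('quaternionic', 'quaternionic') == 'real'
# Anything tensored with a complex system stays complex:
assert tensor('complex', 'real') == 'complex'
```

So the multiplication table for real, complex and quaternionic systems really is the multiplication table of the monoid $\mathbb{3}$ in disguise.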

Posted by: John Baez on July 30, 2014 5:20 AM | Permalink | Reply to this

### Re: The Ten-Fold Way (Part 2)

thanks for the explanations.

When $T^2 = -1$ we can obtain from this data a quaternionic Hilbert space.

For example, if we have two complex Hilbert spaces H and H’ with time reversal operators T and T’ obeying…

I understand this, but when you construct this “oplax functor” (which is still quite mysterious to me), you intrinsically use some “reductions”, which you can “trivially” “blow up” to the full tensor product.

Furthermore you wrote:

So, we can form the collage…

Do you assume that if you have two operators A and A’ (such as time reversal and charge conjugation), which e.g. square to 1 and -1, and B and B’ which square to 0 and -1, then your “new” time reversal would probably be (?) $A \otimes B$ (squaring to 0) and the new charge conjugation $A' \otimes B'$ (squaring to 1)?

I’ll then need to check that there’s an oplax monoidal functor…

So it seems you want to check whether you can “blow up” the corresponding product table spaces to a full tensor product?

The answer is that even if you take complex Hilbert spaces as fundamental, the study of conjugate-linear symmetries, like time reversal and charge conjugation, gives rise to real and quaternionic Hilbert spaces. I explained this here:

Unfortunately I can’t afford to read your division algebra-quantum theory paper; I just want to get a rough overview of what’s happening here. It may be mathematically interesting to construct quaternionic quantum spaces, but for me it is at the moment sufficient to talk about those operators $T$ with their given properties. In particular it is not clear to me what happens to this topological insulator classification if you perform those tensor products.