

June 1, 2025

Tannaka Reconstruction and the Monoid of Matrices

Posted by John Baez

You can classify representations of simple Lie groups using Dynkin diagrams, but you can also classify representations of ‘classical’ Lie groups using Young diagrams. Hermann Weyl wrote a whole book on this, The Classical Groups.

This approach is often treated as a bit outdated, since it doesn’t apply to all the simple Lie groups: it leaves out the so-called ‘exceptional’ groups. But what makes a group ‘classical’?

There’s no precise definition, but a classical group always has an obvious representation, you can get other representations by doing obvious things to this obvious one, and it turns out you can get all the representations this way.

For a long time I’ve been hoping to bring these ideas up to date using category theory. I had a bunch of conjectures, but I wasn’t able to prove any of them. Now Todd Trimble and I have made progress:

We tackle something even more classical than the classical groups: the monoid of n×n matrices, with matrix multiplication as its monoid operation.

The monoid of n×n matrices has an obvious n-dimensional representation, and you can get all its representations from this one by operations that you can apply to any representation. So its category of representations is generated by this one obvious representation, in some sense. And it’s almost freely generated: there’s just one special relation. What’s that, you ask? It’s a relation saying the obvious representation is n-dimensional!

That’s the basic idea. We need to make it more precise. We do it using the theory of 2-rigs, where for us a 2-rig is a symmetric monoidal linear category that is Cauchy complete. All the operations you can apply to any representation of a monoid are packed into this jargon.
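To make one of these operations concrete, here is a quick sanity check (my own illustration in plain Python, not from the paper) that the exterior square, one of the operations available in any 2-rig, sends the obvious 3-dimensional representation of the matrix monoid to another honest representation: Λ²(AB) = Λ²(A)·Λ²(B).

```python
from itertools import combinations

def matmul(A, B):
    """Multiply two square integer matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def exterior_square(A):
    """Lambda^2(A): rows and columns indexed by pairs i < j,
    entries given by the corresponding 2x2 minors of A."""
    n = len(A)
    pairs = list(combinations(range(n), 2))
    return [[A[i][k] * A[j][l] - A[i][l] * A[j][k] for (k, l) in pairs]
            for (i, j) in pairs]

A = [[1, 2, 0], [0, 1, 3], [4, 0, 1]]
B = [[2, 1, 1], [0, 2, 0], [1, 0, 3]]

# Functoriality (equivalently, the Cauchy-Binet formula): the exterior
# square of a product is the product of exterior squares, so Lambda^2
# turns a representation into a representation.
assert exterior_square(matmul(A, B)) == matmul(exterior_square(A), exterior_square(B))
```

The exact integer arithmetic means the equality is checked on the nose, not up to rounding.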

Let’s write M(n,k) for the monoid of n×n matrices over a field k, and Rep(M(n,k)) for its 2-rig of representations. Then we want to say something like: Rep(M(n,k)) is the free 2-rig on an object of dimension n. That’s the kind of result I’ve been dreaming of.

To get this to be true, though, we need to say what kind of representations we’re talking about! Clearly we want finite-dimensional ones. But we need to be careful: we should only take finite-dimensional algebraic representations. Those are representations ρ: M(n,k) → M(m,k) where the matrix entries of ρ(x) are polynomials in the matrix entries of x. Otherwise, even the monoid of 1×1 matrices gets lots of 1-dimensional representations coming from automorphisms of the field k. Classifying those is a job for Galois theorists, not representation theorists.
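To see the issue in the simplest case, take k = ℂ and n = 1: complex conjugation x ↦ x̄ preserves products and sends 1 to 1, so it is a 1-dimensional representation of the multiplicative monoid of 1×1 matrices, yet x̄ is not a polynomial in x, so it is not algebraic. A tiny check of the multiplicativity (my illustration, using Gaussian integers so all arithmetic is exact):

```python
# Complex conjugation is a monoid homomorphism M(1, C) -> M(1, C):
# it preserves products and the identity. But conj(x) is not a
# polynomial function of x, so this representation is not algebraic.
samples = [1 + 2j, 3 - 1j, 1j, 2 + 0j]

for x in samples:
    for y in samples:
        assert (x * y).conjugate() == x.conjugate() * y.conjugate()
assert complex(1, 0).conjugate() == 1
```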

So, we define Rep(M(n,k)) to be the category of algebraic representations of the monoid M(n,k), and we want to say Rep(M(n,k)) is the free 2-rig on an object of dimension n. But we need to say what it means for an object x of a 2-rig to have dimension n.

The definition that works is to demand that the (n+1)st exterior power of x should vanish:

\Lambda^{n+1}(x) \cong 0 .

But this is true for any vector space of dimension less than or equal to n. So in our paper we say x has subdimension n when this holds. (There’s another, stronger condition for having dimension exactly n, but interestingly this is not what we want here. You’ll see why shortly.)
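Concretely, for x = kᵐ the exterior power Λ^{n+1}(kᵐ) has dimension equal to the binomial coefficient C(m, n+1), which vanishes exactly when m ≤ n. A short check of this count (my sketch, not from the paper):

```python
from math import comb

def has_subdimension(m, n):
    """k^m has subdimension n iff Lambda^{n+1}(k^m) = 0,
    i.e. iff the binomial coefficient C(m, n+1) vanishes."""
    return comb(m, n + 1) == 0

n = 4
# Every vector space of dimension <= n has subdimension n...
assert all(has_subdimension(m, n) for m in range(n + 1))
# ...and no vector space of larger dimension does.
assert not any(has_subdimension(m, n) for m in range(n + 1, 10))
```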

So here’s the theorem we prove, with all the fine print filled in:

Theorem. Suppose k is a field of characteristic zero and let Rep(M(n,k)) be the 2-rig of algebraic representations of the monoid M(n,k). Then the representation of M(n,k) on x = kⁿ by matrix multiplication has subdimension n. Moreover, Rep(M(n,k)) is the free 2-rig on an object of subdimension n. In other words, suppose R is any 2-rig containing an object r of subdimension n. Then there is a map of 2-rigs,

F: \mathsf{Rep}(\mathrm{M}(n,k)) \to \mathsf{R} ,

unique up to natural isomorphism, such that F(x) = r.

Or, in simple catchy terms: M(n,k) is the walking monoid with a representation of subdimension n.

To prove this theorem we need to deploy some concepts.

First, the fact that we’re talking about algebraic representations means that we’re not really treating M(n,k) as a bare monoid (a monoid in the category of sets). Instead, we’re treating it as a monoid in the category of affine schemes. But monoids in affine schemes are equivalent to commutative bialgebras, and this is often a more practical way of working with them.
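Spelled out (a standard fact, included here just for orientation): the coordinate bialgebra of the monoid scheme of n×n matrices is the polynomial algebra on the matrix entries, with comultiplication dual to matrix multiplication and counit dual to the identity matrix:

```latex
\mathcal{O}(\mathrm{M}(n)) = k[x_{11}, \dots, x_{nn}], \qquad
\Delta(x_{ij}) = \sum_{l=1}^{n} x_{il} \otimes x_{lj}, \qquad
\varepsilon(x_{ij}) = \delta_{ij}.
```

This bialgebra is a Hopf algebra only after inverting the determinant, which is the difference between M(n,k) and GL(n,k).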

Second, we need to use Tannaka reconstruction. This tells you how to reconstruct a commutative bialgebra from a 2-rig (which is secretly its 2-rig of representations) together with a faithful 2-rig map to Vect (which secretly sends any representation to its underlying vector space).

We want to apply this to the free 2-rig on an object x of subdimension n. Luckily, because of this universal property, it automatically gets a 2-rig map to Vect sending x to kⁿ. So we just have to show this map is faithful, apply Tannaka reconstruction, and get out the commutative bialgebra corresponding to M(n,k)!

Well, I say ‘just’, but it takes some real work. It turns out to be useful to bring in the free 2-rig on one object. The reason is that we studied the free 2-rig on one object in two previous papers, so we know a lot about it:

We can use this knowledge if we think of the free 2-rig on an object of subdimension nn as a quotient of the free 2-rig on one object by a ‘2-ideal’. To do this, we need to develop the theory of ‘2-ideals’. But that’s good anyway — it will be useful for many other things.

So that’s the basic plan of the paper. It was really great working with Todd on this, taking a rough conjecture and building all the machinery necessary to make it precise and prove it.

What about representations of classical groups like GL(n,k), SL(n,k), the orthogonal and symplectic groups, and so on? At the end of the paper we state a bunch of conjectures about these. Here’s the simplest one:

Conjecture. Suppose k is a field of characteristic zero and let Rep(GL(n,k)) be the 2-rig of algebraic representations of GL(n,k). Then the representation of GL(n,k) on x = kⁿ by matrix multiplication has dimension n, meaning its nth exterior power has an inverse with respect to tensor product. Moreover, Rep(GL(n,k)) is the free 2-rig on an object of dimension n.

This ‘inverse with respect to tensor product’ stuff is an abstract way of saying that the determinant representation det(g) of g ∈ GL(n) has an inverse, namely the representation det(g)⁻¹.
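In down-to-earth terms: the nth exterior power of the defining representation of GL(n,k) is the 1-dimensional determinant representation, so functoriality of Λⁿ is exactly multiplicativity of the determinant, and invertibility of det(g) is what gives Λⁿ(x) a tensor inverse. A small exact-arithmetic check (my own sketch, using the Leibniz formula for the determinant):

```python
from fractions import Fraction
from itertools import permutations

def det(A):
    """Determinant via the Leibniz permutation expansion (exact for Fractions)."""
    n = len(A)
    total = Fraction(0)
    for perm in permutations(range(n)):
        # sign of the permutation via its inversion count
        inversions = sum(perm[i] > perm[j]
                         for i in range(n) for j in range(i + 1, n))
        term = Fraction(1)
        for i in range(n):
            term *= A[i][perm[i]]
        total += (-1) ** inversions * term
    return total

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[Fraction(v) for v in row] for row in [[2, 1, 0], [1, 3, 1], [0, 1, 2]]]
B = [[Fraction(v) for v in row] for row in [[1, 0, 2], [0, 1, 1], [1, 1, 1]]]

# Lambda^n of an n x n matrix is the 1 x 1 matrix [det], so functoriality
# of Lambda^n says exactly that det is multiplicative:
assert det(matmul(A, B)) == det(A) * det(B)

# For invertible g the determinant representation has a tensor inverse:
# det(g) * det(g^{-1}) = 1.
assert det(A) != 0
assert det(A) * (Fraction(1) / det(A)) == 1
```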

It will take new techniques to prove this. I look forward to people tackling this and our other conjectures. Categorified rig theory can shed new light on group representation theory, bringing Weyl’s beautiful ideas forward into the 21st century.

Posted at June 1, 2025 4:25 PM UTC

TrackBack URL for this Entry:   https://golem.ph.utexas.edu/cgi-bin/MT-3.0/dxy-tb.fcgi/3600

5 Comments & 0 Trackbacks

Re: Tannaka Reconstruction and the Monoid of Matrices

I like the theorem!

Here’s a thought. It’s a striking fact that to understand groups, studying their actions on vector spaces is by far the most effective strategy, compared to studying their actions on sets, or rings, or measure spaces, or anything else. That’s not to say that you get no information from studying group actions on other objects, but the success of linear representation theory is the star of the show. So much so that we just say “representation theory” without even specifying that we’re representing groups in the category of vector spaces rather than somewhere else. It’s taken for granted.

I call this a “fact”, but of course it’s just a social observation, and who’s to say that things might not be different in a century’s time if human beings are still around. So I’ve sometimes wondered whether there’s some actual theorem attesting to the special role of the category of vector spaces relative to the category of groups. It might say something like: there is a categorical machine which when fed as input the category of groups produces as output the category of vector spaces, or perhaps the category of pairs consisting of a group and a linear representation of it.

Do you have any thoughts on this? It seems to me that your theorem is somewhat in this direction.

Posted by: Tom Leinster on June 2, 2025 3:46 PM

Re: Tannaka Reconstruction and the Monoid of Matrices

I sometimes like to think that vector spaces are “more quantum” than sets. Whether or not that’s a good notion, certainly it does allow more compression in the representation, which makes them better than permutation representations. There are groups for which the linear representations are not as good as they could be.

Let’s say a permutation group is a group equipped with a small faithful permutation representation. “Small” is not a technical term. Any group can be made into a permutation group if you allow the representation to be big. I don’t want to call those “permutation groups”. I might also say that a linear group is a group equipped with a small linear representation. Perhaps “small” should mean that it is efficient to calculate with that representation?

The symmetric and alternating groups are definitely permutation groups. The Mathieu groups are a nice family of exceptional permutation groups. These permutation representations are very small: the group itself has order the factorial (!) of the representation. In other words, the group is of size comparable to the full symmetric group.

Every Lie group comes with a preferred linear representation, namely its adjoint representation. Every nontrivial representation of a simple group is faithful. So every simple Lie group is a linear group, if you admit the adjoint representation. The classical groups, as mentioned already, have a better, smaller linear representation, whose dimension grows as the square root of the adjoint. In other words, the group is of size comparable to the full matrix group.

I tend to count G₂ also as a linear group. Whether F₄, … count or not is a bit of a grey-area question. Here is another way to measure the size of a (real, say) representation. Look at V : G → O(n), and look at the induced map on, say, π₃, which is a map ℤ → ℤ if G is simple and n ≥ 5. The cokernel of this map measures something about the size. This is called the Dynkin index of the representation. The classical groups and G₂ are precisely the groups with a representation V of Dynkin index 1.

I don’t count E₈ as a linear group. Its smallest vector representation is its adjoint, of Dynkin index 30, which is somewhat large.

The finite simple groups of Lie type have linear representations over finite fields (debatable for exceptional types).

The sporadic finite simple groups come in four families. The Pariahs have no real pattern to them. The First Generation are the Mathieu groups. These, as already mentioned, are permutation groups. The Second Generation are linear groups. A finite linear group always preserves a lattice. The Second Generation preserves the Leech Lattice.

The Third Generation are not linear groups. The Monster group is the most extreme. It is not of particularly large order (in spite of its reputation): it is smaller than GL₂₄(𝔽₂). What makes the Monster large is the relative size of its smallest permutation and linear representations compared to its own order. On the other hand, the Monster does have a “very small” representation: it acts on a VOA of c = 24.

There is a fully faithful inclusion of groupoids {real vector spaces} → {VOAs} that sends a vector space ℝⁿ to a certain VOA of c = n/2 (called the “free fermion model”); there is also a complex version that sends ℂⁿ to a ℤ-graded VOA of c = n. So the central charge is a measure of “dimension”.

But VOAs allow compression already for Lie groups. The smallest VOA representation of a Lie group is of central charge comparable to the rank of the Lie group (with equality in the simply laced case). For the classical groups, the rank is proportional to the dimension of the defining representation, and indeed this smallest VOA representation is essentially the one coming from the linear representation.

Some VOA representations arise from taking a lattice (i.e. linear) representation and “second quantizing” it. I call this “second quantization” because the lattice itself sometimes comes from quantizing a permutation representation. (I am aware that this is not quite the standard usage of the term “second quantization”, but it is of the same flavour.)

Igor Frenkel dreams that every group has a preferred smallest VOA representation, which is always “small”. His dream moreover goes: the Dynkin-Cartan approach to using the adjoint representation to combinatorially classify Lie groups should extend to more general VOA representations.

Posted by: Theo Johnson-Freyd on June 2, 2025 7:24 PM

Re: Tannaka Reconstruction and the Monoid of Matrices

Just a comment on Tom’s comment: another striking fact is how close classical physics comes to quantum physics without actually being the same thing. Mathematically, this is reflected in the fact that studying the action of a group on its coadjoint orbits (which are symplectic manifolds) is in some sense a “near miss” to the idea of studying the action of a group on vector spaces. A challenge for the “categorical machine” you are imagining would be to ask if it can also detect the existence of this “near miss.”

Posted by: Bob on June 4, 2025 5:06 PM

Re: Tannaka Reconstruction and the Monoid of Matrices

Tom wrote:

So I’ve sometimes wondered whether there’s some actual theorem attesting to the special role of the category of vector spaces relative to the category of groups. It might say something like: there is a categorical machine which when fed as input the category of groups produces as output the category of vector spaces, or perhaps the category of pairs consisting of a group and a linear representation of it.

Do you have any thoughts on this? It seems to me that your theorem is somewhat in this direction.

Yes, I have a lot of thoughts on this. As others have pointed out, the fact that the universe is fundamentally quantum-mechanical is a hint that not only human mathematicians, but physical reality itself, is fond of vector spaces. The state of the world is not a point in a mere set: it’s more like a vector in a vector space! There are caveats here, but never mind: I’m talking about the

\frac{1}{\sqrt{2}} \Big(\text{live cat} + \text{dead cat}\Big)

stuff that freaked out Schrödinger and everyone since.

Mathematically, one reason quantum mechanics works well is that groups act better on vector spaces than on sets. There are lots of manifestations of this, but one is that it’s easier to “continuously turn around” a vector space than a set. This is particularly true of complex vector spaces: the circle is also the group U(1) of unitary 1×1 matrices, so you can turn around a 1-dimensional complex vector space with linear transformations. Such rotation symmetries play a fundamental role throughout math and physics.

I spent a lot of time trying to free quantum mechanics from the domination of complex vector spaces, as explained in From finite sets to Feynman diagrams with James Dolan and later HDA7: Groupoidification with Alex Hoffnung and Christopher Walker, but I couldn’t figure out a good substitute for U(1). In Categorified algebra and quantum mechanics, my student Jeffrey Morton bit the bullet and introduced ‘phased sets’ and ‘phased groupoids’ to blend the advantages of U(1) and groupoids.

I’ve continued thinking about this, and most recently I’ve gotten interested in what the Brauer induction theorem implies about it all. I think there’s a lot left to discover about how groups and vector spaces—particularly complex vector spaces—are connected.

Posted by: John Baez on June 4, 2025 11:16 PM

Dumb question

I have a dumb category theory question:

How much of point set topology can one lift from open sets (or closed sets) to comonads (or monads)?

Like it seems an obvious thing to try so probably isn’t useful?

Posted by: Christina upshaw on June 2, 2025 10:24 PM
