## November 4, 2007

### Geometric Representation Theory (Lecture 7)

#### Posted by John Baez

Today in the Geometric Representation Theory seminar, Jim continues to explain Hecke operators.

In his last talk, he proved a wonderful but actually very easy theorem. Whenever a group acts on a set $X$, we get a representation of that group on the vector space $\mathbb{C}^X$ in an obvious way. This is called a permutation representation. And the theorem says: if $G$ is a finite group acting on finite sets $X$ and $Y$, we get a basis of intertwining operators

$\mathbb{C}^X \to \mathbb{C}^Y$

from atomic invariant relations between $X$ and $Y$: that is, relations that are invariant under the action of our group, and can’t be chopped into a logical ‘or’ of smaller invariant relations.

The intertwining operators we get this way are called ‘Hecke operators’. They’re very easy to get ahold of, because we see atomic invariant relations all the time in geometry: a typical example is ‘the point $x \in X$ lies on the line $y \in Y$’.

So: Hecke operators link the world of geometry to the world of group representations.

This time, in lecture 7, Jim explains two nice applications of Hecke operators. In the second one, he starts using Hecke operators to grind down the permutation representations of the group $n!$ into irreducible pieces, one for each $n$-box Young diagram. This process involves thinking of the representations of $n!$ as forming a ‘2-Hilbert space’ — a categorified Hilbert space. In fact, the category of representations of any compact topological group is a 2-Hilbert space. In this talk, Jim keeps the concept of 2-Hilbert space quite loose and informal. Since I’ve written a paper on this subject, I’ll include a link to that if you want a precise definition.

• Lecture 7 (Oct. 18) - James Dolan on applications of Hecke operators. Theorem: if a finite group $G$ acts in a doubly transitive way on a finite set $X$, then the resulting permutation representation of $G$ on $\mathbb{C}^X$ is the direct sum of two irreducible representations, one being the trivial representation. Proof: every permutation representation contains the trivial representation, and there are only two Hecke operators from $\mathbb{C}^X$ to itself. Lemma: if $G$ is a finite group, $Rep(G)$ is a 2-Hilbert space with the irreducible representations of $G$ as an orthonormal basis. (This is a combination of Schur’s Lemma and Maschke’s Theorem.)
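The orbit count at the heart of this proof is easy to verify by brute force. Here is a quick Python sketch (just a sanity check, not part of the lecture) that counts the orbits of a permutation group acting diagonally on $X \times X$; for the symmetric group the count is 2, matching the two Hecke operators used in the proof.

```python
from itertools import permutations

def orbit_count(group, points):
    """Count orbits of a permutation group acting diagonally on X x X."""
    pairs = {(x, y) for x in points for y in points}
    orbits = 0
    while pairs:
        seed = next(iter(pairs))
        # grow the orbit of `seed` under the diagonal action
        orbit, frontier = {seed}, [seed]
        while frontier:
            x, y = frontier.pop()
            for g in group:
                img = (g[x], g[y])
                if img not in orbit:
                    orbit.add(img)
                    frontier.append(img)
        pairs -= orbit
        orbits += 1
    return orbits

n = 5
G = list(permutations(range(n)))  # the full symmetric group "n!"
print(orbit_count(G, range(n)))   # 2: the diagonal and the off-diagonal
```

For contrast, a group that is transitive but not doubly transitive — say the cyclic group rotating $n$ points — gives more than two orbits, hence more than two Hecke operators.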

Another application: using Gram-Schmidt orthonormalization to take the permutation representations of $G = n!$ coming from $n$-box Young diagrams and turn them into an “orthonormal basis” of $Rep(G)$: that is, a complete collection of irreducible representations. Beginning of an explicit calculation for $n = 4$.

Posted at November 4, 2007 6:26 PM UTC

TrackBack URL for this Entry:   https://golem.ph.utexas.edu/cgi-bin/MT-3.0/dxy-tb.fcgi/1477

### Re: Geometric Representation Theory (Lecture 7)

A comment about doubly transitive $G$-sets $X$.

$X \times X$ has two orbits, the diagonal and the undiagonal, as pointed out in the lecture, and these give a basis for $Hom_{G}(\mathbb{C}[X],\mathbb{C}[X])$. But I think the diagonal gives the identity operator rather than the projection onto the trivial representation, as claimed. I’ll explain, and please correct me if I’m wrong.

If $X=\{x_1, ..., x_n\}$, then a basis for $Hom(\mathbb{C}[X],\mathbb{C}[X])$ is given by things of the form $x^{i} \otimes x_{j}$, where $x^{i}(x_{j})=\delta_{ij}$. The diagonal orbit then gives the operator $1=\sum_{i} x^{i}\otimes x_{i}$, right?

On the other hand, the undiagonal gives the operator $\sum_{i \neq j} x^{i} \otimes x_{j}$.

The sum of these two is the projection onto the trivial, I think.

Posted by: Chris Brav on November 9, 2007 7:15 AM | Permalink | Reply to this

### Re: Geometric Representation Theory (Lecture 7)

Accidentally posted the above. Couldn’t get the TeX to parse. Please assist.

Posted by: Chris Brav on November 9, 2007 7:18 AM | Permalink | Reply to this

### Re: Geometric Representation Theory (Lecture 7)

As far as the TeXing is concerned: did you choose a suitable text filter? (To render it you need to use either Markdown with itex to MathML, or Textile with itex to MathML, or itex to MathML with parbreaks.) Most of your TeX grammar looks right for this dialect, but there are discrepancies, e.g., you need to use \mathbb{C} rather than {\mathbb C}. Check here to see what’s supported.

As far as the mathematics goes, let me write down an explicit formula for the basic Hecke operators which would correspond to the double cosets. Suppose $G$ is a finite group which acts transitively on $X$. Let $H$ be the stabilizer of a point of $X$ under the action of $G$, so that we can write $X = G/H$. For the double coset $H g H$, define a $G$-equivariant (Hecke) operator

$J_{H g H}: \mathbb{C}[G/H] \to \mathbb{C}[G/H]$

by the formula

$g' H \mapsto \frac{1}{|H|} \sum_{x \in H g} g' x H.$

(It’s not hard to check that this is well-defined and $G$-equivariant and independent of which $g$ was chosen to represent the double coset $H g H$.) Then, the orbit of the diagonal corresponds to the double coset where $g = 1$, and it becomes apparent from the formula that you were right about that, Chris. Nice spot.

As to your other claim: you were close, but actually it’s a weighted sum. We might try a simple example, where $S_3$ acts tautologically on the 3-element set. I think (unless I made a ridiculous mistake) that the basic Hecke operator $J$ corresponding to off-diagonal satisfies

$J^2 = \frac{1}{2}(J + 1)$

[and actually that makes sense: we can interpret $J$ as a stochastic operator operating on the 3-point set $\{1, 2, 3\}$ which randomly changes the point; apply $J$ twice and there’s a 50 percent chance you wind up back where you started]. The primitive idempotents in this 2-dimensional algebra are $\frac{2}{3}J + \frac{1}{3}$ and $\frac{2}{3} - \frac{2}{3}J$. It’s the first which is the projection down onto the trivial representation (given by the span of the linear combination $(1) + (2) + (3)$).
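One can check the quadratic relation and both idempotents concretely by writing the stochastic jump operator as a $3 \times 3$ matrix. A short NumPy verification (just a sanity check of the formulas above):

```python
import numpy as np

n = 3
I = np.eye(n)
J_comb = np.ones((n, n)) - I   # combinatorial jump: 1's off the diagonal
j = J_comb / (n - 1)           # stochastic jump: each other pad equally likely

# the quadratic relation j^2 = (1/2)(j + 1) for S_3
assert np.allclose(j @ j, 0.5 * (j + I))

# the primitive idempotents (2/3)j + 1/3 and 2/3 - (2/3)j
P = (2/3) * j + (1/3) * I
Q = (2/3) * I - (2/3) * j
assert np.allclose(P @ P, P) and np.allclose(Q @ Q, Q)
assert np.allclose(P + Q, I)
print(P)  # every entry 1/3: the projection onto the trivial representation
```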

Posted by: Todd Trimble on November 9, 2007 7:20 AM | Permalink | Reply to this

### Re: Geometric Representation Theory (Lecture 7)

John has already indicated that the comment to which I’m now replying was a bit obscurantist, so it may be well to add some further glosses in an attempt to clarify.

First of all, I want to explain a meaning of double cosets, which Jim and John have been talking about but in different language.

Suppose $G$ is a group (finite, to make some things go more smoothly) and $X$ and $Y$ are finite permutation representations of $G$. Then Jim stated a theorem saying: the orbit set $(X \times Y)/G$ gives a canonical basis for $hom_G(\mathbb{C}^X, \mathbb{C}^Y)$.

In the language of earlier posts (here and here), these orbits are atomic $G$-invariant relations from $X$ to $Y$. So we have a logical view on these relations (they are atoms in a Boolean algebra; the Boolean algebra consists of 2-linear combinations of these atoms, where 2 is the rig of Boolean truth values). But we also have a representation theory view: by Jim’s theorem, they are a basis of the module $hom_G(\mathbb{C}^X, \mathbb{C}^Y)$, so that this module consists of $\mathbb{C}$-linear combinations of these elements. The change in point of view amounts to a change of base rig.

Let’s specialize to the case where the actions on $X$ and $Y$ are transitive, so that if $H$ and $K$ are stabilizers of points in $X$ and $Y$, then $X \cong G/H$ and $Y \cong G/K$. The theorem above then says that $(G/H \times G/K)/G$ provides a canonical basis for $hom_G(\mathbb{C}[G/H], \mathbb{C}[G/K])$. These are orbits of pairs $(g H, g' K)$ under the diagonal action by $G$.

Now here’s a nice trick: inversion in the group allows us to turn left $G$-modules into right $G$-modules. In this manner, one can sensibly identify left cosets $g H$ with right cosets $H g^{-1}$. This also suggests that to each pair $(g H, g' K)$ we can associate the set $H g^{-1}g' K$ as a subset of $G$. Such sets $H g K$ are called double cosets; they partition $G$ into equivalence classes. Notice that two pairs $(g H, g' K)$ and $(x g H, x g' K)$ that belong to the same $G$-orbit induce the same double coset, because

$H (x g)^{-1} x g' K = H g^{-1} g' K.$

In summary, the set of $G$-orbits of $G/H \times G/K$ is identified with the set of double cosets, denoted $H\backslash G/K$. But as Jim has pointed out to me repeatedly, this is really unevocative terminology which tends to hide what these things are really about: the ways in which a figure in a geometry $X$ can be invariantly related, or “situated”, or “oriented”, with respect to a figure in a geometry $Y$. So in our conversations, Jim and I have usually used the word “orientation” rather than “double coset”.

As long as I’m laying cards on the table, I may as well give another point of view on Jim’s theorem, at least in the case of transitive actions. We have canonical isomorphisms

$\array{ hom_G(\mathbb{C}[G/H], \mathbb{C}[G/K]) & \cong & \mathbb{C}[G/H]^* \otimes_G \mathbb{C}[G/K] \\ & \cong & \mathbb{C}[H\backslash G] \otimes_G \mathbb{C}[G/K] \\ & \cong & \mathbb{C}[H\backslash G \times_G G/K] \\ & \cong & \mathbb{C}[H\backslash G/K] }$

where the identification $\mathbb{C}[G/H]^* \cong \mathbb{C}[H\backslash G]$ comes about by the pairing: $\langle H g, g'H \rangle$ is $1$ if $g g' \in H$, and $0$ otherwise. By carefully working through all these isomorphisms, one eventually arrives at the formula I wrote down for a normalized jump or Hecke ($G$-equivariant) operator:

$J_{H g K}(g' H) = \frac{1}{|H|} \sum_{x \in H g} g' x K.$

Hopefully this makes my previous post a little clearer; I added a little more here in response to John.
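The formula for $J_{H g K}$ is concrete enough to implement directly. Here is a small Python sketch (an illustration with made-up variable names, not code from the discussion) for $G = S_3$ acting on $G/H$ with $H$ the stabilizer of a point: it builds the Hecke operator of a double coset as a matrix over $G/H$, recovering the identity from $g = 1$ and the probabilistic jump operator from any $g$ moving the basepoint.

```python
from itertools import permutations
from fractions import Fraction

# G = S_3 as tuples: p[i] is the image of i; composition is "p after q"
G = list(permutations(range(3)))
compose = lambda p, q: tuple(p[q[i]] for i in range(3))

H = [p for p in G if p[0] == 0]                        # stabilizer of the point 0
coset = lambda g: frozenset(compose(g, h) for h in H)  # left coset gH
X = sorted({coset(g) for g in G}, key=min)             # X = G/H, three cosets

def hecke(g):
    """J_{HgH}(g'H) = (1/|H|) * sum over x in Hg of the coset g'xH,
    returned as a matrix indexed by X (columns = input cosets)."""
    Hg = [compose(h, g) for h in H]
    M = [[Fraction(0)] * len(X) for _ in X]
    for col, C in enumerate(X):
        gp = min(C)                 # any representative of g'H works
        for x in Hg:
            row = X.index(coset(compose(gp, x)))
            M[row][col] += Fraction(1, len(H))
    return M

identity_like = hecke((0, 1, 2))    # the double coset of g = 1
jump = hecke((1, 0, 2))             # a g that moves the basepoint
print(identity_like)                # the identity matrix
print(jump)                         # off-diagonal entries 1/2, diagonal 0
```

So the diagonal orbit really does give the identity, and the off-diagonal orbit gives the stochastic jump — matching the discussion above.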

Posted by: Todd Trimble on November 10, 2007 1:01 AM | Permalink | Reply to this

### Re: Geometric Representation Theory (Lecture 7)

Chris wrote:

A comment about doubly transitive $G$-sets $X$.

$X \times X$ has two orbits, the diagonal and the undiagonal, as pointed out in the lecture, and these give a basis for $Hom_{G}(\mathbb{C}[X],\mathbb{C}[X])$. But I think the diagonal gives the identity operator rather than the projection onto the trivial representation, as claimed. I’ll explain, and please correct me if I’m wrong.

You’re right. Thanks for catching that mistake of mine — a stupid slip I made in the heat of the moment.

The diagonal subset of $X \times X$ gives an $X \times X$ matrix with 1’s on the diagonal and 0’s off. This is just the identity operator,

$I : \mathbb{C}^X \to \mathbb{C}^X$

The off-diagonal subset of $X \times X$ gives a matrix with 1’s off the diagonal and 0’s on. This gives an operator

$J : \mathbb{C}^X \to \mathbb{C}^X$

Todd introduced this notation ‘$J$’, but he didn’t say what it stands for! It stands for jump.

Think of $X$ as the set of lily pads in a pond. Then the operator $J$ describes the jump of a frog who hops from any lily pad to a randomly chosen different lily pad.

$J$ is like a doubly stochastic matrix, except it’s not normalized. The rows and columns don’t add up to $1$: they add up to $n-1$, if there are $n$ lily pads.

Now suppose the frog makes two jumps. He might get back to where he started from… but he probably won’t, at least if $n$ is big.

There are $n-1$ ways for him to get back where he started, and $n-2$ ways for him to get to any other chosen lily pad. So,

$J^2 = (n-1) I + (n-2) J$

It took me a while to figure out the numbers here. My first guess was wrong, so think about it and see if I’m right.
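If you would rather let a computer do the checking, here is a quick NumPy test of the claimed formula for several values of $n$ (a sanity check only, not a proof):

```python
import numpy as np

for n in (3, 4, 10):
    I = np.eye(n)
    J = np.ones((n, n)) - I      # 1's off the diagonal, 0's on it
    # two-jump count: n-1 ways home, n-2 ways to any other pad
    assert np.allclose(J @ J, (n - 1) * I + (n - 2) * J)
print("J^2 = (n-1)I + (n-2)J checks out")
```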

Given this formula, or whatever the correct formula is, you can figure out which linear combinations

$P = a I + b J$

are projection operators ($P^2 = P$). Obviously $a = 1, b = 0$ works, but that’s boring — just the identity operator $I$. Obviously $a = 0, b = 0$ works, but this is even more boring. There will be two other more interesting choices that work. These will give two projections that sum to $I$.

One of these must be the projection onto the constant functions on $X$ — a one-dimensional subspace of $\mathbb{C}^X$. If we think of $\mathbb{C}^X$ as consisting of ‘wavefunctions’ for our probabilistic frog, the constant functions describe a frog whose position is completely randomized. So, this projection must be an operator that completely randomizes the position of our frog.

So, if you think about it, it’s obvious that this projection operator must be

$P = (I + J)/n$

This is a square matrix all of whose entries are $1/n$. It’s a doubly stochastic matrix describing the hop of a frog with an equal chance to land on any lily pad, including the lily pad he started on.

If we have a group acting on $X$, this operator projects onto a copy of the trivial representation sitting inside $\mathbb{C}^X$.

The other interesting projection is $I - P$.
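All of this is easy to confirm numerically. A short NumPy check (again, just a sanity check) that $P = (I+J)/n$ and $I - P$ are complementary projections commuting with $J$:

```python
import numpy as np

n = 4
I = np.eye(n)
J = np.ones((n, n)) - I
P = (I + J) / n                               # the matrix with every entry 1/n

assert np.allclose(P @ P, P)                  # P is a projection...
assert np.allclose((I - P) @ (I - P), I - P)  # ...and so is I - P
assert np.allclose(P @ J, J @ P)              # both commute with J

# P projects onto the constants: it sends any vector to a constant vector
v = np.arange(n, dtype=float)
print(P @ v)                                  # every entry is the mean of v
```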

Posted by: John Baez on November 9, 2007 5:47 PM | Permalink | Reply to this

### Re: Geometric Representation Theory (Lecture 7)

By the way, speaking of this off-diagonal matrix $J$, here’s a little puzzle. Take the $4 \times 4$ case:

$J \;\; =\;\; \array{ 0 & 1 & 1 & 1 \\ 1 & 0 & 1 & 1 \\ 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 }$

Since this matrix is symmetric, we can think of it as a metric on $\mathbb{R}^4$ — that is, an ‘inner product’ that may not be positive definite.

What’s the signature of this metric?
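(For anyone who wants to check their answer by machine rather than have it spoiled here: the signs of the eigenvalues of a symmetric matrix give its signature. A two-line check, not part of the puzzle:)

```python
import numpy as np

J = np.ones((4, 4)) - np.eye(4)   # the off-diagonal matrix above
print(np.linalg.eigvalsh(J))      # the eigenvalue signs give the signature
```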

Posted by: John Baez on November 9, 2007 6:11 PM | Permalink | Reply to this

### Re: Geometric Representation Theory (Lecture 7)

By the way, my ‘jump’ operator $J$ is $n$ times Todd’s, so it’s possible we agree on everything even though our formulas don’t look the same.

It’s also possible one or both of us made some little error; one needs to sit down for ten minutes in a very quiet room to make sure all the normalization factors are correct.

Posted by: John Baez on November 9, 2007 6:24 PM | Permalink | Reply to this

### Re: Geometric Representation Theory (Lecture 7)

Yes, our answers are sufficiently different looking that one should carefully check for little slips; I think however we’re both ‘right’.

One of our primary examples (if this isn’t foreshadowing too much) is where the lily pads are points on the projective line over $F_q$, acted on by $PGL_2(F_q)$. This action is doubly transitive (actually it’s triply transitive, but we don’t need this now).

There are $n = q+1$ lily pads on the projective line, and John’s formula gives

$J^2 = (q-1)J + q I$

for the jump operator. I was using a slightly different, stochastic operator, which Jim and I have been calling a ‘probabilistic jump (or Hecke) operator’: after two jumps, there’s a $\frac{1}{q}$ chance of the frog’s landing back at the lily pad he started from, and a $\frac{q-1}{q}$ chance of landing on a different pad. So the probabilistic operator $j$ would satisfy

$j^2 = (\frac{q-1}{q})j + \frac{1}{q}I$

and the relationship between these two operators is $J = q j$ (where in the example I was looking at, $n = q + 1 = 3$).

The operator John introduced is what we call a ‘combinatorial Hecke operator’: to compute $J^2$, one is summing over all possible combinations or ‘histories’ in a 2-step time process. Which normalization one prefers is a matter of taste or occasion.

Posted by: Todd Trimble on November 9, 2007 10:47 PM | Permalink | Reply to this

### Re: Geometric Representation Theory (Lecture 7)

Chris and Todd have focused attention on a certain important doubly stochastic matrix, which describes the hop of a ‘maximally random’ frog: the $n \times n$ matrix all of whose entries are $1/n$.

As we’ve seen, this matrix gives an operator

$T : \mathbb{C}^n \to \mathbb{C}^n$

which is an intertwining operator for the obvious representation of the group $n!$ on $\mathbb{C}^n$.

So, it’s interesting to note a general relation between doubly stochastic $n \times n$ matrices and this representation of $n!$: the Birkhoff–von Neumann theorem.

This theorem says that the doubly stochastic $n \times n$ matrices (matrices with nonnegative entries where each row and each column sums to 1) are just the convex linear combinations of the permutation matrices (matrices with one 1 in each row and one 1 in each column, the other entries being zero).

This result seems utterly obvious, but the only proof I’ve seen takes a bit of work.

I’m wondering if we can exploit this theorem or reinterpret it in some nice way.

Does it have some generalization to other permutation representations of other groups? If the finite group $G$ acts on the finite set $X$, is there some nice way to think about the convex hull of the image of $\mathbb{C}[G]$ in $End(\mathbb{C}^X)$?
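As a concrete handle on the theorem, here is a brute-force Birkhoff decomposition in Python, workable for small $n$ (an editorial sketch: it searches all of $n!$ for a permutation supported on the positive entries, which the theorem, via Hall's marriage theorem, guarantees exists):

```python
import numpy as np
from itertools import permutations

def birkhoff(D, tol=1e-9):
    """Greedily decompose a doubly stochastic matrix D into a convex
    combination of permutation matrices (brute force over S_n)."""
    n = len(D)
    D = D.astype(float).copy()
    terms = []
    while D.max() > tol:
        # find a permutation with all its entries positive in D
        for p in permutations(range(n)):
            weights = [D[i, p[i]] for i in range(n)]
            if min(weights) > tol:
                lam = min(weights)          # peel off as much as possible
                for i in range(n):
                    D[i, p[i]] -= lam
                terms.append((lam, p))
                break
    return terms

D = np.full((3, 3), 1/3)                    # the maximally random frog
for lam, p in birkhoff(D):
    print(lam, p)
```

On the all-$\frac{1}{3}$ matrix this produces an equal mixture of three disjoint permutations, each with weight $\frac{1}{3}$.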

Posted by: John Baez on November 10, 2007 2:56 AM | Permalink | Reply to this

### Re: Geometric Representation Theory (Lecture 7)

Ross Street once remarked to me that Hall’s marriage theorem (which is the crux of the proof of the Birkhoff-von Neumann theorem that John linked to above) is just begging to be categorified. (I think his actual words were “this is begging for a structural proof”.) Does anyone have an idea how to do this?

Posted by: Todd Trimble on November 11, 2007 11:47 PM | Permalink | Reply to this

### Re: Geometric Representation Theory (Lecture 7)

Todd, in this talk, Robert Borgersen shows that Hall’s marriage theorem is equivalent to six other theorems in combinatorics (including the Birkhoff–von Neumann theorem).

The unifying concept is that of bipartite graph. What is the category of graphs? Hmm, well Vaughan Jones showed that there is a general planar algebra associated with a bipartite graph. Does this mean that one would have to categorify the concept of planar algebra?

Posted by: Charlie Stromeyer Jr on November 12, 2007 2:52 AM | Permalink | Reply to this

### Re: Geometric Representation Theory (Lecture 7)

I fixed Chris Brav’s original post on this thread. It was a tiresome job, since while other changes are easy to make, there’s no way for me to change the ‘text filter’ of a comment after it’s appeared. So, I had to delete his comment, repost it myself while pretending to be him, change the text filter to ‘itex to MathML with parbreaks’, and fix an error which made his comment fail to compile — using {\mathbb C} instead of \mathbb{C}.

If I delete a comment, all the comments that reply to it also disappear. So, I also had to repost the comments by Chris and Todd which replied to Chris’ original comment.

So: if anyone has trouble getting TeX to work on this blog, it’s best if they start by reading the TeXnical Issues thread, which is full of advice. If that doesn’t help, send me questions at

baez@math.removethis.ucr.andthis.edu

(fix this email address in the obvious way).

I know it’s a bit of a nuisance learning how to post in TeX, but I think it’s worth it for people who like discussing math online.

And, if you want to be really nice to me, don’t click on ‘Reply to this’ when discussing comments where the TeX is all screwed up. Either just wait ‘til I fix it, or click on ‘Post a New Comment’.

Posted by: John Baez on November 9, 2007 4:47 PM | Permalink | Reply to this

### Re: Geometric Representation Theory (Lecture 7)

Did someone already post a solution to the homework exercise? I’m working through the lectures slowly, so it might have been dealt with in a later lecture and I’ve just not seen it. If not, here’s what I think is a simple explanation of why that last element of the matrix is 7 (unless I’m seriously misunderstanding the problem):

We want to know how many ways one ordered pair from a 4-element set can relate to another ordered pair from the 4-element set. So pick the first ordered pair, and call its elements A and B. The other two elements aren’t distinguished, so call them * and *.

To make the second ordered pair we first need to pick one of these elements, which could be A, B, or *. Then we need to pick a second element. If we were free to pick any pair, we’d again have 3 choices, for a total of 9 possible ordered pairs. But the elements have to be distinct, so we can’t pick (A,A) or (B,B), although we can pick (*,*), since we have two elements called *. That rules out 2 of the 9 possibilities, leaving 7 that are OK.

So the 7 choices for this ordered pair are:
(A,B), (A,*),
(B,A), (B,*),
(*,A), (*,B), (*,*).
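This count can be confirmed by brute force: enumerate the orbits of $S_4$ acting diagonally on pairs of ordered pairs (a quick check, not from the lecture):

```python
from itertools import permutations

points = range(4)
# X = ordered pairs of distinct elements of the 4-element set
X = [(a, b) for a in points for b in points if a != b]
G = list(permutations(points))

# count orbits of the diagonal S_4 action on X x X
seen, orbits = set(), 0
for pair in ((x, y) for x in X for y in X):
    if pair in seen:
        continue
    orbits += 1
    (a, b), (c, d) = pair
    for g in G:
        seen.add(((g[a], g[b]), (g[c], g[d])))
print(orbits)  # 7 invariant relations between one ordered pair and another
```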

Posted by: stuart on November 19, 2007 1:13 PM | Permalink | Reply to this

### Re: Geometric Representation Theory (Lecture 7)

stuart wrote:

Did someone already post a solution to the homework exercise? I’m working through the lectures slowly, so it might have been dealt with in a later lecture and I’ve just not seen it. If not, here’s what I think is a simple explanation of why that last element of the matrix is 7 (unless I’m seriously misunderstanding the problem):

We want to know how many ways one ordered pair from a 4-element set can relate to another ordered pair from the 4-element set. So pick the first ordered pair, and call its elements A and B. The other two elements aren’t distinguished, so call them * and *.

To make the second ordered pair we first need to pick one of these elements, which could be A, B, or *. Then we need to pick a second element. If we were free to pick any pair, we’d again have 3 choices, for a total of 9 possible ordered pairs. But the elements have to be distinct, so we can’t pick (A,A) or (B,B), although we can pick (*,*), since we have two elements called *. That rules out 2 of the 9 possibilities, leaving 7 that are OK.

So the 7 choices for this ordered pair are:

(A,B), (A,*),

(B,A), (B,*),

(*,A), (*,B), (*,*).

yes, that’s correct. in a later lecture we gave a sort of matrix
notation for these relationships. for example the matrix notation for