

November 4, 2007

Geometric Representation Theory (Lecture 7)

Posted by John Baez

Today in the Geometric Representation Theory seminar, Jim continues to explain Hecke operators.

In his last talk, he proved a wonderful but actually very easy theorem. Whenever a group acts on a set $X$, we get a representation of that group on the vector space $\mathbb{C}^X$ in an obvious way. This is called a permutation representation. And the theorem says: if $G$ is a finite group acting on finite sets $X$ and $Y$, we get a basis of intertwining operators

$$\mathbb{C}^X \to \mathbb{C}^Y$$

from atomic invariant relations between $X$ and $Y$: that is, relations that are invariant under the action of our group, and can’t be chopped into a logical ‘or’ of smaller invariant relations.

The intertwining operators we get this way are called ‘Hecke operators’. They’re very easy to get ahold of, because we see atomic invariant relations all the time in geometry: a typical example is ‘the point $x \in X$ lies on the line $y \in Y$’.

So: Hecke operators link the world of geometry to the world of group representations.
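If you like seeing such things in silico, here is a small computational illustration of the recipe (my own sketch, not part of the lectures; the choice of group, sets, and all the variable names are just for the example): take $S_4$ acting on its 4 points and on the 6 two-element subsets, list the orbits of the product (the atomic invariant relations, which here are just ‘the point lies on the 2-element subset’ and ‘it doesn’t’), turn each orbit into a 0-1 matrix, and check that each matrix intertwines the two permutation representations.

```python
from itertools import combinations, permutations

import numpy as np

n = 4
group = list(permutations(range(n)))            # the group S_4, i.e. 4!
points = list(range(n))                         # the G-set X of "points"
lines = list(combinations(range(n), 2))         # the G-set Y of "lines" (2-element subsets)

def act_point(g, x):
    return g[x]

def act_line(g, line):
    return tuple(sorted(g[x] for x in line))

# Orbits of X x Y under the diagonal G-action: the atomic invariant relations.
pairs = {(x, y) for x in points for y in lines}
orbits = []
while pairs:
    x0, y0 = next(iter(pairs))
    orbit = {(act_point(g, x0), act_line(g, y0)) for g in group}
    orbits.append(orbit)
    pairs -= orbit

def relation_matrix(orbit):
    """The 0-1 matrix of an invariant relation, viewed as an operator C^X -> C^Y."""
    M = np.zeros((len(lines), len(points)))
    for x, y in orbit:
        M[lines.index(y), points.index(x)] = 1
    return M

def perm_matrix(g, basis, act):
    """Matrix of g in the permutation representation on C^basis."""
    P = np.zeros((len(basis), len(basis)))
    for j, b in enumerate(basis):
        P[basis.index(act(g, b)), j] = 1
    return P

for orbit in orbits:
    M = relation_matrix(orbit)
    assert all(np.allclose(perm_matrix(g, lines, act_line) @ M,
                           M @ perm_matrix(g, points, act_point)) for g in group)

print(len(orbits), "atomic invariant relations, hence", len(orbits), "basic Hecke operators")
```

Here the two relations are ‘$x$ lies on $y$’ and ‘$x$ does not lie on $y$’, so the script finds exactly two basic Hecke operators $\mathbb{C}^X \to \mathbb{C}^Y$.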

This time, in lecture 7, Jim explains two nice applications of Hecke operators. In the second one, he starts using Hecke operators to grind down the permutation representations of the group $n!$ into irreducible pieces, one for each $n$-box Young diagram. This process involves thinking of the representations of $n!$ as forming a ‘2-Hilbert space’ — a categorified Hilbert space. In fact, the category of representations of any compact topological group is a 2-Hilbert space. In this talk, Jim keeps the concept of 2-Hilbert space quite loose and informal. Since I’ve written a paper on this subject, I’ll include a link to that if you want a precise definition.

  • Lecture 7 (Oct. 18) - James Dolan on applications of Hecke operators. Theorem: if a finite group $G$ acts in a doubly transitive way on a finite set $X$, then the resulting permutation representation of $G$ on $\mathbb{C}^X$ is the direct sum of two irreducible representations, one being the trivial representation. Proof: every permutation representation contains the trivial representation, and there are only two Hecke operators from $\mathbb{C}^X$ to itself. Lemma: if $G$ is a finite group, $Rep(G)$ is a 2-Hilbert space with the irreducible representations of $G$ as an orthonormal basis. (This is a combination of Schur’s Lemma and Maschke’s Theorem.)

    Another application: using Gram-Schmidt orthonormalization to take the permutation representations of $G = n!$ coming from $n$-box Young diagrams and turn them into an “orthonormal basis” of $Rep(G)$: that is, a complete collection of irreducible representations. Beginning of an explicit calculation for $n = 4$.
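As a quick sanity check of the doubly transitive theorem (my own aside, not from the lecture), one can count orbits by computer: the dimension of $Hom_G(\mathbb{C}^X, \mathbb{C}^X)$ equals the number of $G$-orbits on $X \times X$, which Burnside’s lemma computes as the average of the squared fixed-point counts. For $G = 4!$ acting on 4 points the answer comes out to 2, which forces $\mathbb{C}^X$ to be a sum of exactly two distinct irreducibles.

```python
from itertools import permutations

n = 4
group = list(permutations(range(n)))                      # S_4, acting on {0,...,3}

def fixed_points(g):
    return sum(1 for x in range(n) if g[x] == x)

# Burnside: number of orbits on X x X = average of (number of fixed points)^2.
dim_hom = sum(fixed_points(g) ** 2 for g in group) // len(group)
print(dim_hom)   # 2, so C^X = (trivial) + (one other irreducible)
```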

Posted at November 4, 2007 6:26 PM UTC


14 Comments & 0 Trackbacks

Re: Geometric Representation Theory (Lecture 7)

A comment about doubly transitive $G$-sets $X$.

$X \times X$ has two orbits, the diagonal and the undiagonal, as pointed out in the lecture, and these give a basis for $Hom_{G}(\mathbb{C}[X],\mathbb{C}[X])$. But I think the diagonal gives the identity operator rather than the projection onto the trivial representation, as claimed. I’ll explain, and please correct me if I’m wrong.

If $X=\{x_1, \dots, x_n\}$, then a basis for $Hom(\mathbb{C}[X],\mathbb{C}[X])$ is given by things of the form $x^{i} \otimes x_{j}$, where $x^{i}(x_{j})=\delta_{i j}$. The diagonal orbit then gives the operator $1=\sum_{i} x^{i}\otimes x_{i}$, right?

On the other hand, the undiagonal gives the operator $\sum_{i \neq j} x^{i} \otimes x_{j}$.

The sum of these two is the projection onto the trivial, I think.

Posted by: Chris Brav on November 9, 2007 7:15 AM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 7)

Accidentally posted the above. Couldn’t get the TeX to parse. Please assist.

Posted by: Chris Brav on November 9, 2007 7:18 AM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 7)

As far as the TeXing is concerned: did you choose a suitable text filter? (To render it you need to use either Markdown with itex to MathML, or Textile with itex to MathML, or itex to MathML with parbreaks.) Most of your TeX grammar looks right for this dialect, but there are discrepancies, e.g., you need to use \mathbb{C} rather than {\mathbb C}. Check here to see what’s supported.

As far as the mathematics goes, let me write down an explicit formula for the basic Hecke operators which would correspond to the double cosets. Suppose $G$ is a finite group which acts transitively on $X$. Let $H$ be the stabilizer of a point of $X$ under the action of $G$, so that we can write $X = G/H$. For the double coset $H g H$, define a $G$-equivariant (Hecke) operator

$$J_{H g H}: \mathbb{C}[G/H] \to \mathbb{C}[G/H]$$

by the formula

$$g' H \mapsto \frac{1}{|H|} \sum_{x \in H g} g' x H.$$

(It’s not hard to check that this is well-defined and $G$-equivariant and independent of which $g$ was chosen to represent the double coset $H g H$.) Then, the orbit of the diagonal corresponds to the double coset where $g = 1$, and it becomes apparent from the formula that you were right about that, Chris. Nice spot.
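In case a concrete computation helps, here is a little Python sketch of my own (the group, the subgroup, and the names are only an example) that applies this formula literally for $G = S_3$ acting on a 3-element set, with $H$ the stabilizer of a point. It confirms that the double coset with $g = 1$ gives the identity operator on $\mathbb{C}[G/H]$, while a $g$ outside $H$ gives the normalized off-diagonal jump operator.

```python
from itertools import permutations

import numpy as np

G = list(permutations(range(3)))                   # S_3
def mul(a, b):                                     # composition: (a*b)(i) = a[b[i]]
    return tuple(a[b[i]] for i in range(3))

H = [g for g in G if g[0] == 0]                    # stabilizer of the point 0
cosets = []                                        # the left cosets g H, as frozensets
for g in G:
    c = frozenset(mul(g, h) for h in H)
    if c not in cosets:
        cosets.append(c)                           # |G/H| = 3

def hecke_matrix(g):
    """Matrix of J_{HgH} acting on C[G/H], columns indexed by the cosets g'H."""
    M = np.zeros((len(cosets), len(cosets)))
    for j, c in enumerate(cosets):
        gp = next(iter(c))                         # any representative g' of the coset
        for x in (mul(h, g) for h in H):           # x runs over the right coset H g
            target = frozenset(mul(mul(gp, x), h) for h in H)    # the coset g' x H
            M[cosets.index(target), j] += 1.0 / len(H)
    return M

print(hecke_matrix((0, 1, 2)))   # g = 1: the identity matrix
print(hecke_matrix((1, 0, 2)))   # g a transposition: 1/2 in each off-diagonal entry
```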

As to your other claim: you were close, but actually it’s a weighted sum. We might try a simple example, where $S_3$ acts tautologically on the 3-element set. I think (unless I made a ridiculous mistake) that the basic Hecke operator $J$ corresponding to the off-diagonal satisfies

$$J^2 = \frac{1}{2}(J + 1)$$

[and actually that makes sense: we can interpret $J$ as a stochastic operator operating on the 3-point set $\{1, 2, 3\}$ which randomly changes the point; apply $J$ twice and there’s a 50 percent chance you wind up back where you started from]. The primitive idempotents in this 2-dimensional algebra are $\frac{2}{3}J + \frac{1}{3}$ and $\frac{2}{3} - \frac{2}{3}J$. It’s the first which is the projection down onto the trivial representation (given by the span of the linear combination $(1) + (2) + (3)$).
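(If anyone wants to double-check those numbers numerically, a few lines suffice; this is just my own sketch, writing $J$ as the matrix with $\frac{1}{2}$ in each off-diagonal entry.)

```python
import numpy as np

I = np.eye(3)
J = (np.ones((3, 3)) - I) / 2                    # 1/2 in each off-diagonal entry

assert np.allclose(J @ J, (J + I) / 2)           # J^2 = (J + 1)/2

p = (2/3) * J + (1/3) * I                        # claimed projection onto the trivial rep
q = (2/3) * I - (2/3) * J
assert np.allclose(p @ p, p) and np.allclose(q @ q, q) and np.allclose(p + q, I)

v = np.ones(3)                                   # the vector (1) + (2) + (3)
assert np.allclose(p @ v, v)                     # p fixes it ...
assert np.allclose(p @ np.array([1.0, -1.0, 0.0]), 0)   # ... and kills its complement
```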

Posted by: Todd Trimble on November 9, 2007 7:20 AM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 7)

John has already indicated that the comment to which I’m now replying was a bit obscurantist, so it may be well to add some further glosses in an attempt to clarify.

First of all, I want to explain a meaning of double cosets, which Jim and John have been talking about but in different language.

Suppose $G$ is a group (finite to make some things go more smoothly) and $X$ and $Y$ are finite permutation representations of $G$. Then Jim stated a theorem saying: the set of orbits $(X \times Y)/G$ gives a canonical basis for $hom_G(\mathbb{C}^X, \mathbb{C}^Y)$.

In the language of earlier posts (here and here), these orbits are atomic $G$-invariant relations from $X$ to $Y$. So we have a logical view on these relations (they are atoms in a Boolean algebra; the Boolean algebra consists of 2-linear combinations of these atoms, where 2 is the rig of Boolean truth values). But we also have a representation theory view: by Jim’s theorem, they are a basis of the module $hom_G(\mathbb{C}^X, \mathbb{C}^Y)$, so that this module consists of $\mathbb{C}$-linear combinations of these elements. The change in point of view amounts to a change of base rig.

Let’s specialize to the case where the actions on $X$ and $Y$ are transitive, so that if $H$ and $K$ are stabilizers of points in $X$ and $Y$, then $X \cong G/H$ and $Y \cong G/K$. The theorem above then says that $(G/H \times G/K)/G$ provides a canonical basis for $hom_G(\mathbb{C}[G/H], \mathbb{C}[G/K])$. These are orbits of pairs $(g H, g' K)$ under the diagonal action by $G$.

Now here’s a nice trick: inversion in the group allows us to turn left $G$-modules into right $G$-modules. In this manner, one can sensibly identify left cosets $g H$ with right cosets $H g^{-1}$. This also suggests that to each pair $(g H, g' K)$ we can associate the set $H g^{-1} g' K$ as a subset of $G$. Such sets $H g K$ are called double cosets; they partition $G$ into equivalence classes. Notice that two pairs $(g H, g' K)$ and $(x g H, x g' K)$ that belong to the same $G$-orbit induce the same double coset, because

$$H (x g)^{-1} x g' K = H g^{-1} g' K.$$

In summary, the set of $G$-orbits of $G/H \times G/K$ is identified with the set of double cosets, denoted $H\backslash G/K$. But as Jim has pointed out to me repeatedly, this is really unevocative terminology which tends to hide what these things are really about: the ways in which a figure in a geometry $X$ can be invariantly related, or “situated”, or “oriented”, with respect to a figure in a geometry $Y$. So in our conversations, Jim and I have usually used the word “orientation” rather than “double coset”.

As long as I’m laying cards on the table, I may as well give another point of view on Jim’s theorem, at least in the case of transitive actions. We have canonical isomorphisms

$$\begin{array}{rcl} hom_G(\mathbb{C}[G/H], \mathbb{C}[G/K]) & \cong & \mathbb{C}[G/H]^* \otimes_G \mathbb{C}[G/K] \\ & \cong & \mathbb{C}[H\backslash G] \otimes_G \mathbb{C}[G/K] \\ & \cong & \mathbb{C}[H\backslash G \times_G G/K] \\ & \cong & \mathbb{C}[H\backslash G/K] \end{array}$$

where the identification $\mathbb{C}[G/H]^* \cong \mathbb{C}[H\backslash G]$ comes about by the pairing: $\langle H g, g'H \rangle$ is $1$ if $g g' \in H$, and $0$ otherwise. By carefully working through all these isomorphisms, one eventually arrives at the formula I wrote down for a normalized jump or Hecke ($G$-equivariant) operator:

$$J_{H g K}(g' H) = \frac{1}{|H|} \sum_{x \in H g} g' x K.$$

Hopefully this makes my previous post a little clearer; I added a little more here in response to John.

Posted by: Todd Trimble on November 10, 2007 1:01 AM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 7)

Chris wrote:

A comment about doubly transitive $G$-sets $X$.

$X \times X$ has two orbits, the diagonal and the undiagonal, as pointed out in the lecture, and these give a basis for $Hom_{G}(\mathbb{C}[X],\mathbb{C}[X])$. But I think the diagonal gives the identity operator rather than the projection onto the trivial representation, as claimed. I’ll explain, and please correct me if I’m wrong.

You’re right. Thanks for catching that mistake of mine — a stupid slip I made in the heat of the moment.

The diagonal subset of $X \times X$ gives an $X \times X$ matrix with 1’s on the diagonal and 0’s off. This is just the identity operator,

$$I : \mathbb{C}^X \to \mathbb{C}^X$$

The off-diagonal subset of $X \times X$ gives a matrix with 1’s off the diagonal and 0’s on. This gives an operator

$$J : \mathbb{C}^X \to \mathbb{C}^X$$

Todd introduced this notation ‘$J$’, but he didn’t say what it stands for! It stands for jump.

Think of $X$ as the set of lily pads in a pond. Then the operator $J$ describes the jump of a frog who hops from any lily pad to a randomly chosen different lily pad.

$J$ is like a doubly stochastic matrix, except it’s not normalized. The rows and columns don’t add up to $1$: they add up to $n-1$, if there are $n$ lily pads.

Now suppose the frog makes two jumps. He might get back to where he started from… but he probably won’t, at least if $n$ is big.

There are $n-1$ ways for him to get back where he started, and $n-2$ ways for him to get to any other chosen lily pad. So,

$$J^2 = (n-1) I + (n-2) J$$

It took me a while to figure out the numbers here. My first guess was wrong, so think about it and see if I’m right.

Given this formula, or whatever the correct formula is, you can figure out which linear combinations

$$P = a I + b J$$

are projection operators ($P^2 = P$). Obviously $a = 1, b = 0$ works, but that’s boring — just the identity operator $I$. Obviously $a = 0, b = 0$ works, but this is even more boring. There will be two other more interesting choices that work. These will give two projections that sum to $I$.

One of these must be the projection onto the constant functions on $X$ — a one-dimensional subspace of $\mathbb{C}^X$. If we think of $\mathbb{C}^X$ as consisting of ‘wavefunctions’ for our probabilistic frog, the constant functions describe a frog whose position is completely randomized. So, this projection must be an operator that completely randomizes the position of our frog.

So, if you think about it, it’s obvious that this projection operator must be

$$P = (I + J)/n$$

This is a square matrix all of whose entries are $1/n$. It’s a doubly stochastic matrix describing the hop of a frog with an equal chance to land on any lily pad, including the lily pad he started on.

If we have a group acting on $X$, this operator projects onto a copy of the trivial representation sitting inside $\mathbb{C}^X$.

The other interesting projection is $I - P$.
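If you’d rather let a machine do the checking (this is just a sketch of mine, not a claim about how anyone ‘should’ verify it), you can confirm the relation for $J^2$ numerically and then solve $P^2 = P$ symbolically for the coefficients $a$ and $b$:

```python
import numpy as np
import sympy as sp

# Check J^2 = (n-1) I + (n-2) J for a few small n.
for m in range(2, 7):
    J = np.ones((m, m)) - np.eye(m)
    assert np.allclose(J @ J, (m - 1) * np.eye(m) + (m - 2) * J)

# Look for projections P = a I + b J.  Using the relation above,
# P^2 = (a^2 + (n-1) b^2) I + (2 a b + (n-2) b^2) J, so P^2 = P means
# the coefficients of I and J must come out equal to a and b.
a, b, n = sp.symbols('a b n')
solutions = sp.solve([sp.Eq(a**2 + (n - 1) * b**2, a),
                      sp.Eq(2 * a * b + (n - 2) * b**2, b)], [a, b], dict=True)
print(solutions)
# Four solutions: (a, b) = (0, 0), (1, 0), (1/n, 1/n) and ((n-1)/n, -1/n);
# the last two are P = (I + J)/n and I - (I + J)/n, as above.
```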

Posted by: John Baez on November 9, 2007 5:47 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 7)

By the way, speaking of this off-diagonal matrix $J$, here’s a little puzzle. Take the $4 \times 4$ case:

$$J = \begin{pmatrix} 0 & 1 & 1 & 1 \\ 1 & 0 & 1 & 1 \\ 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 \end{pmatrix}$$

Since this matrix is symmetric, we can think of it as a metric on $\mathbb{R}^4$ — that is, an ‘inner product’ that may not be positive definite.

What’s the signature of this metric?

Explain the deep inner meaning of your answer.
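(If you want to check your guess before peeking at any answers, a two-line computation gives the eigenvalues; their signs give the signature. This is just a hint for checking, the ‘deep inner meaning’ is still up to you.)

```python
import numpy as np

J = np.ones((4, 4)) - np.eye(4)        # the matrix above
print(np.linalg.eigvalsh(J))           # eigenvalues of a symmetric matrix; count the signs
```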

Posted by: John Baez on November 9, 2007 6:11 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 7)

By the way, my ‘jump’ operator $J$ is $n$ times Todd’s, so it’s possible we agree on everything even though our formulas don’t look the same.

It’s also possible one or both of us made some little error; one needs to sit down for ten minutes in a very quiet room to make sure all the normalization factors are correct.

Posted by: John Baez on November 9, 2007 6:24 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 7)

Yes, our answers are sufficiently different looking that one should carefully check for little slips; I think however we’re both ‘right’.

One of our primary examples (if this isn’t foreshadowing too much) is where the lily pads are points on the projective line over $F_q$, acted on by $PGL_2(F_q)$. This action is doubly transitive (actually it’s triply transitive, but we don’t need this now).

There are $n = q+1$ lily pads on the projective line, and John’s formula gives

$$J^2 = (q-1)J + q$$

for the jump operator. I was using a slightly different, stochastic operator, which Jim and I have been calling a ‘probabilistic jump (or Hecke) operator’: after two jumps, there’s a $\frac{1}{q}$ chance of the frog’s landing back at the lily pad he started from, and a $\frac{q-1}{q}$ chance of landing on a different pad. So the probabilistic operator $j$ would satisfy

$$j^2 = \left(\frac{q-1}{q}\right) j + \frac{1}{q} I$$

and the relationship between these two operators is $J = q j$ (where in the example I was looking at, $n = q + 1 = 3$).
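(As a consistency check: substituting $J = q j$ and $n = q + 1$ into John’s relation $J^2 = (n-1) I + (n-2) J$ gives $q^2 j^2 = q I + (q-1) q j$, i.e. $j^2 = \frac{1}{q} I + \frac{q-1}{q} j$, which is the relation above, so the two normalizations really do agree.)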

The operator John introduced is what we call a ‘combinatorial Hecke operator’: to compute $J^2$, one is summing over all possible combinations or ‘histories’ in a 2-step time process. Which normalization one prefers is a matter of taste or occasion.

Posted by: Todd Trimble on November 9, 2007 10:47 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 7)

Chris and Todd have focused attention on a certain important doubly stochastic matrix, which describes the hop of a ‘maximally random’ frog: the $n \times n$ matrix all of whose entries are $1/n$.

As we’ve seen, this matrix gives an operator

$$T : \mathbb{C}^n \to \mathbb{C}^n$$

which is an intertwining operator for the obvious representation of the group $n!$ on $\mathbb{C}^n$.

So, it’s interesting to note a general relation between doubly stochastic $n \times n$ matrices and this representation of $n!$: the Birkhoff–von Neumann theorem.

This theorem says that the doubly stochastic $n \times n$ matrices (matrices with nonnegative entries where each row and each column sums to 1) are just the convex linear combinations of the permutation matrices (matrices with one 1 in each row and one 1 in each column, the other entries being zero).

This result seems utterly obvious, but the only proof I’ve seen takes a bit of work.
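Here, for what it’s worth, is a sketch of the constructive direction in code (my own toy implementation, not taken from any particular published proof): repeatedly find a permutation lying inside the support of the matrix, which Hall’s theorem guarantees exists, peel off as much of that permutation matrix as possible, and repeat; the amounts peeled off are the convex weights.

```python
import numpy as np

def perfect_matching(support):
    """Kuhn's augmenting-path algorithm: find a permutation inside a boolean support matrix."""
    m = len(support)
    match = [-1] * m                        # match[column] = matched row, or -1

    def try_row(r, seen):
        for c in range(m):
            if support[r][c] and not seen[c]:
                seen[c] = True
                if match[c] == -1 or try_row(match[c], seen):
                    match[c] = r
                    return True
        return False

    for r in range(m):
        if not try_row(r, [False] * m):
            return None                     # no perfect matching (can't happen here)
    perm = [0] * m
    for c, r in enumerate(match):
        perm[r] = c                         # row r is matched to column perm[r]
    return perm

def birkhoff(M, tol=1e-9):
    """Decompose a doubly stochastic matrix as a convex combination of permutation matrices."""
    M = M.copy()
    pieces = []
    while M.max() > tol:
        perm = perfect_matching(M > tol)
        theta = min(M[r, perm[r]] for r in range(len(M)))
        P = np.zeros_like(M)
        for r in range(len(M)):
            P[r, perm[r]] = 1.0
        pieces.append((theta, P))
        M = M - theta * P
    return pieces

# Example: the 'maximally random frog' matrix with all entries 1/n.
n = 4
M = np.full((n, n), 1.0 / n)
pieces = birkhoff(M)
print([round(t, 3) for t, _ in pieces])                  # the convex weights
assert np.allclose(sum(t * P for t, P in pieces), M)     # they reassemble M
assert abs(sum(t for t, _ in pieces) - 1.0) < 1e-9       # and they sum to 1
```

On the all-$1/n$ example it writes the matrix as an equal mixture of 4 permutation matrices.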

I’m wondering if we can exploit this theorem or reinterpret it in some nice way.

Does it have some generalization to other permutation representations of other groups? If the finite group $G$ acts on the finite set $X$, is there some nice way to think about the convex hull of the image of $\mathbb{C}[G]$ in $End(\mathbb{C}^X)$?

Posted by: John Baez on November 10, 2007 2:56 AM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 7)

Ross Street once remarked to me that Hall’s marriage theorem (which is the crux of the proof of the Birkhoff-von Neumann theorem that John linked to above) is just begging to be categorified. (I think his actual words were “this is begging for a structural proof”.) Does anyone have an idea how to do this?

Posted by: Todd Trimble on November 11, 2007 11:47 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 7)

Todd, in this talk, Robert Borgersen shows that Hall’s marriage theorem is equivalent to six other theorems in combinatorics (including the Birkhoff-von Neumann theorem).

The unifying concept is that of bipartite graph. What is the category of graphs? Hmm, well Vaughan Jones showed that there is a general planar algebra associated with a bipartite graph. Does this mean that one would have to categorify the concept of planar algebra?

Posted by: Charlie Stromeyer Jr on November 12, 2007 2:52 AM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 7)

I fixed Chris Brav’s original post on this thread. It was a tiresome job, since while other changes are easy to make, there’s no way for me to change the ‘text filter’ of a comment after it’s appeared. So, I had to delete his comment, repost it myself while pretending to be him, change the text filter to ‘itex to MathML with parbreaks’, and fix an error which made his comment fail to compile — using {\mathbb C} instead of \mathbb{C}.

If I delete a comment, all the comments that reply to it also disappear. So, I also had to repost the comments by Chris and Todd which replied to Chris’ original comment.

So: if anyone has trouble getting TeX to work on this blog, it’s best if they start by reading the TeXnical Issues thread, which is full of advice. If that doesn’t help, send me questions at

baez@math.removethis.ucr.andthis.edu

(fix this email address in the obvious way).

I know it’s a bit of a nuisance learning how to post in TeX, but I think it’s worth it for people who like discussing math online.

And, if you want to be really nice to me, don’t click on ‘Reply to this’ when discussing comments where the TeX is all screwed up. Either just wait ‘til I fix it, or click on ‘Post a New Comment’.

Posted by: John Baez on November 9, 2007 4:47 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 7)

Did someone already post a solution to the homework exercise? I’m working through the lectures slowly, so it might have been dealt with in a later lecture and I’ve just not seen it. If not, here’s what I think is a simple explanation of why that last element of the matrix is 7 (unless I’m seriously misunderstanding the problem):

We want to know how many ways one ordered pair from a 4-element set can relate to another ordered pair from the 4-element set. So pick the first ordered pair, and call its elements A and B. The other two elements aren’t distinguished, so call them * and *.

To make the second ordered pair we first need to pick one of these elements, which could be A, B, or *. Then we need to pick a second element. If we were free to pick any pair, we’d again have 3 choices, for a total of 9 possible ordered pairs. But the elements have to be distinct, so we can’t pick (A,A) or (B,B), although we can pick (*,*), since we have two elements called *. That rules out 2 of the 9 possibilities, leaving 7 that are OK.

So the 7 choices for this ordered pair are:
(A,B), (A,*),
(B,A), (B,*),
(*,A), (*,B), (*,*).
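
(And for anyone who prefers brute force to counting by hand, here is a tiny script of mine that gets the same answer by listing all ordered pairs of distinct elements of a 4-element set and counting the orbits of $S_4$ acting on pairs of such pairs.)

```python
from itertools import permutations

elements = range(4)
pairs = [(a, b) for a in elements for b in elements if a != b]   # 12 ordered pairs
group = list(permutations(elements))                              # S_4

def act(g, pair):
    return (g[pair[0]], g[pair[1]])

remaining = {(p, q) for p in pairs for q in pairs}
orbit_count = 0
while remaining:
    p0, q0 = next(iter(remaining))
    remaining -= {(act(g, p0), act(g, q0)) for g in group}
    orbit_count += 1

print(orbit_count)   # 7
```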

Posted by: stuart on November 19, 2007 1:13 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 7)

Did someone already post a solution to the homework exercise? I’m working through the lectures slowly, so it might have been dealt with in a later lecture and I’ve just not seen it. If not, here’s what I think is a simple explanation of why that last element of the matrix is 7 (unless I’m seriously misunderstanding the problem):

We want to know how many ways one ordered pair from a 4-element set can relate to another ordered pair from the 4-element set. So pick the first ordered pair, and call its elements A and B. The other two elements aren’t distinguished, so call them * and *.

To make the second ordered pair we first need to pick one of these elements, which could be A, B, or *. Then we need to pick a second element. If we were free to pick any pair, we’d again have 3 choices, for a total of 9 possible ordered pairs. But the elements have to be distinct, so we can’t pick (A,A) or (B,B), although we can pick (*,*), since we have two elements called *. That rules out 2 of the 9 possibilities, leaving 7 that are OK.

So the 7 choices for this ordered pair are:

(A,B), (A,*),

(B,A), (B,*),

(*,A), (*,B), (*,*).

yes, that’s correct. in a later lecture we gave a sort of matrix notation for these relationships. for example the matrix notation for your item "(B,*)" would be:

A B *
0 1 0   A'
0 0 1   B'
1 0 1   *'


Posted by: james dolan on November 19, 2007 8:56 PM | Permalink | Reply to this
