Geometric Representation Theory (Lecture 11)
Posted by John Baez
This time in the Geometric Representation Theory seminar, Jim Dolan recalls how to describe Hecke operators between flag representations using certain matrices.
Does composition of the Hecke operators correspond to multiplying these matrices? No! — and yet, in a certain limit it does resemble matrix multiplication.
Here’s how these matrices work. Let’s take two uncombed Young diagrams with $n$ boxes, say $D$ and $E$. Let’s do an example we’ve already discussed, and take $n = 4$. Say you choose $D$ to be this:
Then a $D$-flag on a 4-element set is a way of chopping it into two parts — one for each row of your Young diagram — and since each row has two boxes, each part must contain 2 elements. So, to put a $D$-flag on a set of 4 balls, you can just color 2 balls white and 2 balls black.
Say I take $E$ to be this:
Then an $E$-flag on a 4-element set is a way of chopping it into 4 parts, each containing 1 element. So, to put an $E$-flag on a set of 4 balls, I can color them red, yellow, green, and blue — one ball of each color.
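To make the counting concrete, here is a quick enumeration of both kinds of flags on a 4-element set (a sketch in Python; labeling the balls 0 through 3 is just an arbitrary choice):

```python
from itertools import combinations, permutations

balls = range(4)

# A D-flag: choose which 2 of the 4 balls are white; the rest are black.
d_flags = list(combinations(balls, 2))

# An E-flag: assign red, yellow, green, blue to the balls, one ball of
# each color -- equivalently, an ordering of the balls.
e_flags = list(permutations(balls))

print(len(d_flags), len(e_flags))  # 6 24
```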
What are the possible ways that a $D$-flag and an $E$-flag can be related? More precisely, what are the ‘atomic invariant relations’?
Here’s one: the 2 balls that you color white, I color red and green. The 2 balls that you color black, I color yellow and blue.
There are lots of other possibilities. But, we can keep track of all of them using matrices with one column for each row of your Young diagram, and one row for each row of my Young diagram. Here’s how it works in our example:
$\array{ \; & black & white \\ red & 0 & 1 \\ yellow & 1 & 0 \\ green & 0 & 1 \\ blue & 1 & 0 \\ }$
Get it? For example, since there is one ball colored white by you and red by me, we put a 1 in the “white / red” entry of the matrix.
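Here is the same bookkeeping done mechanically (a sketch; the particular assignment of balls to colors is one hypothetical choice realizing the relation above):

```python
from collections import Counter

# One hypothetical coloring of four balls realizing the relation above:
# the balls you color white, I color red and green; the balls you color
# black, I color yellow and blue.
your_coloring = {1: "white", 2: "white", 3: "black", 4: "black"}
my_coloring = {1: "red", 2: "green", 3: "yellow", 4: "blue"}

# The (my color, your color) entry counts the balls carrying both colors.
entries = Counter((my_coloring[b], your_coloring[b]) for b in range(1, 5))

for mine in ["red", "yellow", "green", "blue"]:
    print(mine, [entries[mine, yours] for yours in ["black", "white"]])
# prints the rows of the matrix above: red [0, 1], yellow [1, 0],
# green [0, 1], blue [1, 0]
```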
Note that the entries in the $i$th column of this matrix add up to the length of the $i$th row of your Young diagram, $D$. The entries in the $j$th row of this matrix add up to the length of the $j$th row of my Young diagram, $E$.
If you think about it, any matrix satisfying these conditions gives an atomic invariant relation between $D$-flags and $E$-flags. And if you think harder, you’ll see that every atomic invariant relation between $D$-flags and $E$-flags arises this way!
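If you'd rather have a machine do that thinking, here is a brute-force sketch that enumerates all nonnegative integer matrices with prescribed row and column sums (the function names are my own invention):

```python
from itertools import product

def compositions(total, parts):
    """All ways to write `total` as an ordered sum of `parts` nonnegative integers."""
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def matrices_with_margins(row_sums, col_sums):
    """Nonnegative integer matrices with the given row and column sums."""
    choices = [list(compositions(r, len(col_sums))) for r in row_sums]
    for rows in product(*choices):
        if all(sum(col) == s for col, s in zip(zip(*rows), col_sums)):
            yield rows

# Our example: E has four rows of length 1, D has two rows of length 2.
mats = list(matrices_with_margins([1, 1, 1, 1], [2, 2]))
print(len(mats))  # 6 atomic invariant relations
```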
So, applying the fundamental theorem on Hecke operators, we see that these matrices give a basis for the intertwining operators between $\mathbb{C}^{D(n)}$ and $\mathbb{C}^{E(n)}$. These are called ‘Hecke operators’.
In case you forget our notation: $D(n)$ is the set of $D$-flags on our $n$-element set, while $E(n)$ is the set of $E$-flags. The symmetric group $n!$ acts on these sets, so $\mathbb{C}^{D(n)}$ and $\mathbb{C}^{E(n)}$ become permutation representations of $n!$, called flag representations. So: we’ve just gotten an explicit description of all intertwining operators between flag representations of $n!$.
We’re describing Hecke operators using matrices… but in a funny way: not the obvious way that we can always describe operators using matrices! So, the rule for composing them is very different. In particular, when we compose two of these Hecke operators, we don’t get a single Hecke operator: we get a linear combination of Hecke operators.
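Here is the smallest example of this phenomenon I can think of (a toy sketch with $n = 2$, not taken from the lecture): let $D$ be a single row of 2 boxes, so there is exactly one $D$-flag, and let $E$ be two rows of 1 box, so an $E$-flag is an ordering of the 2-element set.

```python
import numpy as np

# A Hecke operator for an atomic relation R is the 0/1 matrix whose
# (d, e) entry is 1 exactly when the D-flag d and the E-flag e are
# R-related.  With one D-flag and two E-flags, the only atomic relation
# relates the single D-flag to both E-flags:
A = np.ones((1, 2), dtype=int)  # Hecke operator  C^{E(2)} -> C^{D(2)}
B = np.ones((2, 1), dtype=int)  # Hecke operator  C^{D(2)} -> C^{E(2)}

composite = B @ A               # a map C^{E(2)} -> C^{E(2)}
print(composite)                # [[1 1], [1 1]]

# The Hecke operators C^{E(2)} -> C^{E(2)} come from the two 2x2
# matrices with all row and column sums equal to 1: the identity
# relation and the swap relation.  The composite is neither one --
# it is their sum, a linear combination of Hecke operators:
identity_rel = np.eye(2, dtype=int)
swap_rel = np.array([[0, 1], [1, 0]])
assert (composite == identity_rel + swap_rel).all()
```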
But, in today’s lecture, Jim shows that in a certain limit — the limit where we expand the rows of our Young diagrams by an ever-larger scale factor — these funny matrices of ours do compose using a slightly modified version of matrix multiplication!
This has mysterious implications which Jim is still trying to work out. It may have something to do with the ‘classical limit’ of quantum mechanics.

Lecture 11 (Nov. 1) - James Dolan on Hecke operators between flag representations. Describing these Hecke operators using matrices with specified row and column sums. The problem of composing these operators: the composite of two such operators is not a single operator but a “superposition” of many. However, in the limit where we rescale our Young diagrams by making the rows longer and longer, this superposition is sharply peaked at some definite answer. The result looks like an imitation of ordinary matrix multiplication, with a certain “correction factor” thrown in.

- Streaming video in QuickTime format; the URL is http://mainstream.ucr.edu/baez_11_1_stream.mov
- Downloadable video
- Lecture notes by Alex Hoffnung
- Lecture notes by Apoorva Khare

By the way: all this stuff works not just for flags on finite sets, but also for flags on finite-dimensional vector spaces over some field $F$. In our example above, a $D$-flag is then a 2d subspace of the vector space $F^4$, while an $E$-flag is a ‘complete flag’ on $F^4$. An atomic invariant binary relation between $D$-flags and $E$-flags is something we’ve studied before: we called it a Bruhat class in the Grassmannian
$\binom{4}{2}_F \;,$
that is, the space of all 2d subspaces of $F^4$. A Bruhat class is also called a Schubert cell; its closure is called a Schubert variety. So, matrices like this:
$\array{ \; & black & white \\ red & 0 & 1 \\ yellow & 1 & 0 \\ green & 0 & 1 \\ blue & 1 & 0 \\ }$
where each column sums to 2 and each row sums to 1, are a convenient way to label the Schubert cells in $\binom{4}{2}_F$. There should be $\binom{4}{2} = 6$ of them. If you don’t believe me, check!
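Here's that check, done by brute force (a sketch; since each row sums to 1, every row is either (1, 0) or (0, 1)):

```python
from itertools import product

# Rows are indexed by red, yellow, green, blue; columns by black, white.
# Each row sums to 1, so each row is (1, 0) or (0, 1); keep only the
# matrices whose two columns each sum to 2.
count = sum(
    1
    for rows in product([(1, 0), (0, 1)], repeat=4)
    if sum(r[0] for r in rows) == 2 and sum(r[1] for r in rows) == 2
)
print(count)  # 6, matching binom(4, 2)
```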
The same thing works for any Grassmannian
$\binom{n}{k}_F \;.$
More generally, we can use matrices with specified column and row sums to label Schubert cells in any flag variety — that is, any space of $D$flags on $F^n$ — by taking $E$ to be the tall skinny Young diagram with $n$ boxes.
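As a sanity check on this labeling, here is a sketch that counts such matrices for a general $D$ (the helper name is my own); the count comes out to the multinomial coefficient $n!/(\lambda_1! \cdots \lambda_k!)$, where $\lambda_1, \dots, \lambda_k$ are the row lengths of $D$:

```python
from itertools import product
from math import factorial

def cell_count(row_lengths):
    """Count matrices with one row for each of the n rows of E (the
    column of n boxes), each row summing to 1, and column j summing to
    the j-th row length of D.  Brute force: record which column holds
    each row's single 1."""
    n, k = sum(row_lengths), len(row_lengths)
    return sum(
        1
        for cols in product(range(k), repeat=n)
        if all(cols.count(j) == lam for j, lam in enumerate(row_lengths))
    )

# The Grassmannian binom(4,2)_F again: D has two rows of length 2.
print(cell_count([2, 2]))  # 6

# In general the count is the multinomial coefficient n!/(lam_1!...lam_k!):
assert cell_count([1, 1, 1]) == factorial(3)       # complete flags on F^3
assert cell_count([2, 1, 1]) == factorial(4) // 2  # 12 cells
```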
Re: Geometric Representation Theory (Lecture 11)
I just added a longer explanation of what’s going on. So, if you already glanced at this blog entry and didn’t get much out of it, you might want to glance at it again.