Geometric Representation Theory (Lecture 12)
Posted by John Baez
In 1925, Werner Heisenberg came up with a radical new approach to physics in which processes were described using matrices of complex numbers. What makes this especially remarkable is that Heisenberg, like most physicists of his day, had not heard of matrices!
It’s hard to tell what Heisenberg was thinking, but in retrospect we might say his idea was that given a system with some set of states, say $\{1,\dots,n\}$, a process $U$ would be described by a bunch of complex numbers $U^i_j$ specifying the ‘amplitude’ for any state $i$ to turn into any state $j$. He composed processes by summing over all possible intermediate states: $(V U)^i_k = \sum_j V^j_k U^i_j .$ Later he discussed his theory with his thesis advisor, Max Born, who informed him that he had reinvented matrix multiplication!
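In modern terms, Heisenberg's composition rule is just matrix multiplication. Here is a minimal sketch in Python; the function name `compose`, the list-of-lists encoding, and the sample matrices are my own, purely for illustration:

```python
def compose(V, U):
    """Compose two processes by summing amplitudes over all
    intermediate states j:  (V U)^i_k = sum_j V^j_k U^i_j.
    Here U[i][j] is the amplitude for state i to become state j --
    so this is exactly matrix multiplication."""
    return [[sum(V[j][k] * U[i][j] for j in range(len(V)))
             for k in range(len(V[0]))]
            for i in range(len(U))]

# A made-up process with two states, and a 'swap' process:
U = [[1, 2],
     [3, 4]]
swap = [[0, 1],
        [1, 0]]

# Composing with 'swap' permutes the possible final states,
# i.e. it swaps the columns of U.
result = compose(swap, U)
```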
In 1926, Max Born figured out what Heisenberg’s mysterious ‘amplitudes’ actually meant: the absolute value squared $|U^i_j|^2$ gives the probability for the initial state $i$ to become the final state $j$ via the process $U$. This spelled the end of the deterministic worldview built into Newtonian mechanics.
More shockingly still, since amplitudes are complex, a sum of amplitudes can have a smaller absolute value than the absolute values of its individual terms. Thus, quantum mechanics exhibits destructive interference: allowing more ways for something to happen may reduce the chance that it does!
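A quick numerical sketch of destructive interference, with made-up amplitudes (the variable names and the values are mine, purely for illustration):

```python
# Two hypothetical paths for the same process, with opposite phases.
amp1 = 1 + 0j
amp2 = -1 + 0j

# Probability when only the first path is available:
prob_one = abs(amp1) ** 2           # |1|^2 = 1.0

# Probability when both paths are allowed: the amplitudes cancel,
# so opening a second path makes the process *less* likely.
prob_both = abs(amp1 + amp2) ** 2   # |1 + (-1)|^2 = 0.0
```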
Heisenberg never liked the term ‘matrix mechanics’ for his work, because he thought it sounded too abstract. Similarly, when Feynman invented a way of doing physics using what we now call ‘Feynman diagrams’, he didn’t like it when Freeman Dyson called them by their standard mathematical name: ‘graphs’. He thought it sounded too fancy. Can you detect a pattern?
But no matter what we call matrix mechanics, its generalizations are the key to understanding how invariant relations between geometric figures give intertwining operators between group representations. And that’s what I talked about this time in the Geometric Representation Theory seminar.

Lecture 12 (Nov. 6): John Baez on matrix mechanics and its generalizations.
Heisenberg’s original matrix mechanics, where a quantum process from a
set $X$ of states to a set $Y$ of states is described by a matrix of
complex “amplitudes”:
$F: X \times Y \to \mathbb{C}$
We can generalize this
by replacing the complex numbers with any rig $R$, obtaining a category $Mat(R)$
where the objects are finite sets, and the morphisms from $X$ to $Y$ are
$R$-valued matrices
$F: X \times Y \to R$
$Mat(R)$ is equivalent to the category of finitely generated free $R$-modules. For example, $Mat(\mathbb{C})$ is equivalent to the category of finite-dimensional complex vector spaces, $FinVect_{\mathbb{C}}$. If $\{0,1\}$ is the rig of truth values with “or” as addition and “and” as multiplication, $Mat(\{0,1\})$ is equivalent to the category with finite sets as objects and relations as morphisms, $FinRel$.
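To make the generalization concrete, here is a sketch of composition in $Mat(R)$ with the rig operations passed in as parameters; the names `mat_compose` and `bool_compose` are mine, not from the lecture:

```python
def mat_compose(add, mul, zero, V, U):
    """Compose R-valued matrices over a rig (add, mul, zero) by
    'summing' over all intermediate states, as in matrix mechanics."""
    def rig_sum(values):
        total = zero
        for v in values:
            total = add(total, v)
        return total
    return [[rig_sum(mul(V[j][k], U[i][j]) for j in range(len(V)))
             for k in range(len(V[0]))]
            for i in range(len(U))]

# Over the rig of truth values ("or" as addition, "and" as multiplication)
# this is exactly composition of relations, as in FinRel:
def bool_compose(V, U):
    return mat_compose(lambda a, b: a or b, lambda a, b: a and b, False, V, U)
```

The same `mat_compose` with ordinary `+`, `*`, and `0` recovers matrix multiplication over $\mathbb{C}$ or $\mathbb{N}$.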
There’s an obvious map from $Mat(\{0,1\})$ to $Mat(\mathbb{C})$, sending each matrix of truth values to the corresponding matrix of 0s and 1s, which lets us reinterpret invariant relations as Hecke operators. But this map is not a functor, since composing relations uses “or” while composing complex matrices uses addition; so we don’t get a functor $FinRel \to FinVect_{\mathbb{C}}$. To fix this, we can consider $Mat(\mathbb{N})$, where $\mathbb{N}$ is the rig of natural numbers. This is equivalent to $FinSpan$, the category where the objects are finite sets and the morphisms are isomorphism classes of spans between them. This gives a theory of spans as categorified matrix mechanics.
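Here is a tiny made-up example of this failure of functoriality: two relations whose composite has a single “yes” entry, but whose 0/1 matrices multiply to give the entry 2. Over $\mathbb{N}$ that 2 is exactly the count of intermediate witnesses, i.e. the composite span.

```python
# Relation U from {x} to {a, b}: x is related to both intermediate elements.
U = [[1, 1]]
# Relation V from {a, b} to {z}: both intermediate elements are related to z.
V = [[1],
     [1]]

# Composing as relations ("or"/"and") gives entry 1: x IS related to z.
rel_entry = int(any(U[0][j] and V[j][0] for j in range(2)))   # 1

# Composing as matrices over C (or N) counts the witnesses instead:
mat_entry = sum(U[0][j] * V[j][0] for j in range(2))          # 2

# So the obvious inclusion Mat({0,1}) -> Mat(C) fails to preserve
# composition; over N, the 2 is the number of paths from x to z,
# i.e. the data of the composite span.
```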

Streaming video in QuickTime format; the URL is
http://mainstream.ucr.edu/baez_11_6_stream.mov
Downloadable video
Lecture notes by Alex Hoffnung
Lecture notes by Apoorva Khare

Re: Geometric Representation Theory (Lecture 12)
Hmm. I’m a little confused by this. Why aren’t finitely generated free $\{0,1\}$-modules the same as the Boolean algebras that are power sets of finite sets? Then a matrix of 0s and 1s of size $X \times Y$ acts on a column of 0s and 1s representing a subset of $Y$ to give a subset of $X$, just as a matrix of complex numbers acts on an element of $\mathbb{C}^{Y}$.
To get sets as freely generated modules, I thought we needed the generalized ring known as the ‘field with no elements’, i.e., the one associated with the identity functor.
The Kleisli category of the powerset monad is Rel. But then we’re not dealing here with Kleisli categories, are we? They have mappings from $X$ to $P Y$, for some functor $P$. I thought these matrices you’re after are mappings from $P X$ to $P Y$, so that transposes take you in the opposite direction.
On the other hand, a map from $X$ to $P Y$ does generate a map from $P X$ to $P Y$. And as we are dealing with ‘linear’ mappings from $P X$ to $P Y$, a mapping from $X$ to $P Y$ can be recovered.
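A sketch of that extension and recovery in Python, assuming finite sets encoded as `frozenset`s (the names `extend` and `restrict`, and the sample `f`, are mine):

```python
def extend(f):
    """Extend f: X -> P(Y) to the union-preserving ('linear')
    map P(X) -> P(Y), sending S to the union of f(x) over x in S."""
    return lambda S: frozenset().union(*(f(x) for x in S))

def restrict(F):
    """Recover f from its extension by evaluating on singletons."""
    return lambda x: F(frozenset({x}))

# A made-up f: X -> P(Y), and its union-preserving extension:
f = lambda x: frozenset({x, x + 1})
F = extend(f)
```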
OK, so what are ‘linear’ mappings between ‘field with no elements’modules, i.e., between sets? Oh, they’re just functions, aren’t they?
Good, I think my confusion is over.