Who Ordered That?
Posted by Tom Leinster
The prize for the most peculiar theorem of the year must surely go to my colleague Natalia Iyudu and her collaborator Stanislav Shkarin, who recently proved the following conjecture of Kontsevich.
Start with a $3 \times 3$ matrix.
Take its transpose, then take the reciprocal of each entry, then take the inverse of the whole matrix.
Take the transpose of that, then take the reciprocal of each entry, then take the matrix inverse.
Take the transpose of that, then take the reciprocal of each entry, and then, finally, take the matrix inverse.
Theorem: Up to a bit of messing about, you’re back where you started.
What on earth does this mean? It’s not clear that anyone really knows.
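Still, you can at least watch it happen numerically. Here is a minimal sketch in Python with NumPy (the helper name `phi` is my own; I take a random matrix with entries bounded away from zero so that all the reciprocals and inverses exist):

```python
import numpy as np

def phi(M):
    # Kontsevich's operation: transpose, entrywise reciprocal,
    # then the inverse of the whole matrix.
    return np.linalg.inv(1.0 / M.T)

rng = np.random.default_rng(0)
M = rng.uniform(1.0, 2.0, size=(3, 3))  # entries bounded away from zero

P = phi(phi(phi(M)))  # apply the operation three times
print(P / M)          # entrywise ratio: not all ones, so P != M
```

The printed ratio is not the all-ones matrix, so three applications do not literally return the original matrix; but its rows are all proportional to one another, and that is the "bit of messing about", made precise below.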
Apparently this conjecture came out of the Gelfand school in about 1996. Its first appearance in print was in Kontsevich’s paper Noncommutative identities (Section 3). Natalia Iyudu and Stanislav Shkarin proved it in May.
Natalia gave a seminar about it yesterday here in Edinburgh, and although I had to leave before the end, it seems from what I heard and from their paper that the proof is elementary — not to mention ingenious. She didn’t say a lot about why the result was first suspected to be true (only that physics was somehow involved), and I haven’t understood remotely enough of Kontsevich’s paper to glean any motivation from there.
Let me state the theorem more precisely. We start with a $3 \times 3$ matrix $M = (m_{ij})$ whose entries are to be thought of as noncommutative formal variables. (The theorem is nontrivial even for commutative variables — at least, it doesn’t seem obvious to me — but it’s true in this greater generality.) We’re allowed to invert formal expressions in our variables, provided, of course, that they’re not zero.
Thus, we’re working over some ring with nine noncommuting generators in which lots of elements are invertible. I’m being vague here because I don’t fully understand what this ring is, and that in turn is because I know very little about matrix algebra over a noncommutative ring. Anyway, let’s push on.
For such a matrix $M$, let $M^t$ be the transpose of $M$, let $M^r$ be the matrix whose $(i,j)$-entry is $m_{ij}^{-1}$, and let $M^{-1}$ be the inverse of $M$. Write $\Phi(M) = ((M^t)^r)^{-1}$.
The theorem nearly says that $\Phi^3(M) = M$ for all $M$. What it actually says is that this is true modulo left and right action by diagonal matrices, as follows.
Theorem (Iyudu and Shkarin)  For any $3 \times 3$ matrix $M$, there exist diagonal matrices $D$ and $E$ such that $\Phi^3(M) = D M E$.
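In concrete numerical terms, $\Phi^3(M) = D M E$ says that the entrywise ratio $(\Phi^3(M))_{ij}/m_{ij} = d_i e_j$ is an outer product, hence a rank-one matrix. Continuing the NumPy sketch above, this is easy to test on random matrices (a sanity check, of course, not a proof):

```python
# Phi^3(M) = D M E with D, E diagonal means that
# (Phi^3(M))_ij / M_ij = d_i * e_j, an outer product, hence rank one.
R = phi(phi(phi(M))) / M
print(np.linalg.matrix_rank(R))               # expect 1
print(R[0, 0] * R[1, 1] - R[0, 1] * R[1, 0])  # a 2x2 minor; expect ~0
```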
Perhaps your first thought, like mine, was to wonder why this can’t be proved immediately using a computer algebra package. After all, each entry of $\Phi^3(M)$ can be expanded as some enormously complicated function of the original nine variables, which when simplified should give the corresponding entry of $M$.
There are two obstacles. First, the theorem doesn’t say that $\Phi^3(M)$ and $M$ are equal; it merely says they’re the same modulo the diagonal matrix actions, and no algorithm is known for deciding equivalence of this type. Second, noncommutative expressions of the type involved here don’t (as far as anyone knows) have a canonical simplest form, so testing for equality between them is in any case appreciably harder than in the commutative case.
Five minutes with pen and paper will convince you that for $2 \times 2$ matrices $M$ of commuting variables, $\Phi^2(M)$ is a constant multiple of $M$. Also, trivially, $\Phi(M) = M$ for $1 \times 1$ matrices $M$. So in particular, Iyudu and Shkarin’s theorem holds when $3$ is replaced throughout by any $n \leq 3$.
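For the record, here is what that pen-and-paper computation gives (assuming I have not slipped): for $M = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$,

$$\Phi(M) = \frac{1}{bc - ad}\begin{pmatrix} abc & -abd \\ -acd & bcd \end{pmatrix}, \qquad \Phi^2(M) = -\frac{abcd}{(ad - bc)^2}\, M.$$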
The obvious conjecture, then, is that it holds for arbitrary $n$. But as it happens, it’s false even for $n = 4$ — adding further to the theorem’s mystique.
Iyudu and Shkarin’s theorem reminded me slightly of a question I’ve heard asked by both Joachim Kock and Mike Stay (separately), which John wrote about here: since the function $x \mapsto 1/(1 - x)$ is periodic of order $3$, and since $1/(1 - x)$ is the decategorification of the free monoid functor, is there some weird sense in which doing the free monoid construction three times gets you back to where you started? I don’t know the answer to this. Nor does this seem very similar to the Iyudu–Shkarin theorem, although there is the common point that both Kontsevich’s operation $\Phi$ and the operation $x \mapsto 1/(1 - x)$ are the composites of a small number of involutions (three and two, respectively). But it’s the only shred of an idea I’ve had.
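To spell out the periodicity: writing $f(x) = 1/(1 - x)$,

$$f(f(x)) = \frac{1}{1 - \frac{1}{1 - x}} = 1 - \frac{1}{x}, \qquad f(f(f(x))) = \frac{1}{1 - \left(1 - \frac{1}{x}\right)} = x,$$

and the two involutions in question are $x \mapsto 1 - x$ and $x \mapsto 1/x$, whose composite is $f$.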
Anyway, I’d love it if the Iyudu–Shkarin theorem had some appealing and accessible interpretation. I hope someone can think of one!
Re: Who Ordered That?
How badly does the $4 \times 4$ case fail? The reason I ask is that it seems that the equivalence relation gets a little looser as the dimensions progress. In fact, the pattern you state is:
for $1 \times 1$ matrices, $\Phi(M) = M$;
for $2 \times 2$ matrices, $\Phi^2(M)$ is a constant multiple of $M$ (which can be thought of in various ways in terms of multiplication on either side by a diagonal matrix with the same entry in both diagonal positions);
for $3 \times 3$ matrices, $\Phi^3(M)$ is $M$ up to left and right multiplication by diagonal matrices;
so simply following this pattern, for $4 \times 4$ matrices, it seems reasonable to ask if $\Phi^4(M)$ is related to $M$ by some even more general relation within the pattern. I don’t know what that would be, but perhaps there is some hope for a four-dimensional generalization.
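One cheap experiment in this direction, reusing the `phi` helper sketched in the post: test numerically whether the entrywise ratio of $\Phi^4(M)$ to $M$ has rank one for a random $4 \times 4$ matrix. This only probes the commutative, numerical shadow of the question, so whatever it prints cannot settle the noncommutative case.

```python
# For a random 4x4 matrix, test whether Phi^4(M) = D M E numerically:
# diagonal equivalence would make the entrywise ratio rank one.
M4 = rng.uniform(1.0, 2.0, size=(4, 4))
P4 = M4
for _ in range(4):
    P4 = phi(P4)
print(np.linalg.matrix_rank(P4 / M4))  # 1 would mean diagonally equivalent
```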