### Who Ordered That?

#### Posted by Tom Leinster

Prize for the most peculiar theorem of the year must surely go to my colleague Natalia Iyudu and her collaborator Stanislav Shkarin, who recently proved the following conjecture of Kontsevich.

Start with a $3 \times 3$ matrix.

Take its transpose, then take the reciprocal of each entry, then take the inverse of the whole matrix.

Take the transpose of *that*, then take the reciprocal of each entry, then take the matrix inverse.

Take the transpose of **that**, then take the reciprocal of each entry, and then, finally, take the matrix inverse.

**Theorem:** Up to a bit of messing about, you’re back where you started.

What on earth does this *mean*? It’s not clear that anyone really knows.

Apparently this conjecture came out of the Gelfand school in about 1996. Its first appearance in print was in Section 3 of Kontsevich’s paper *Noncommutative identities*. Natalia Iyudu and Stanislav Shkarin proved it in May.

Natalia gave a seminar about it yesterday here in Edinburgh, and although I had to leave before the end, it seems from what I heard and from their paper that the proof is elementary — not to mention ingenious. She didn’t say a lot about why the result was first suspected to be true (only that physics was somehow involved), and I haven’t understood remotely enough of Kontsevich’s paper to glean any motivation from there.

Let me state the theorem more precisely. We start with a $3 \times 3$ matrix whose entries are to be thought of as noncommutative formal variables. (The theorem is nontrivial even for commutative variables — at least, it doesn’t seem obvious to *me* — but it’s true in this greater generality.) We’re allowed to invert formal
expressions in our variables, provided, of course, that they’re not zero.

Thus, we’re working over some ring with nine noncommuting generators in which lots of elements are invertible. I’m being vague here because I don’t fully understand what this ring is, and that in turn is because I know very little about matrix algebra over a noncommutative ring. Anyway, let’s push on.

For such a matrix $M$, let $\phi_1(M)$ be the transpose of $M$, let $\phi_2(M)$ be the matrix whose $(i, j)$-entry is $1/M_{i j}$, and let $\phi_3(M) = M^{-1}$. Write $\Phi = \phi_3 \circ \phi_2 \circ \phi_1$.

The theorem *nearly* says that $\Phi^3(M) = M$ for all $M$. What it actually says is that this is true modulo left and right action by diagonal matrices, as follows.

**Theorem (Iyudu and Shkarin).** For any $3 \times 3$ matrix $M$, there exist diagonal matrices $D$ and $E$ such that $\Phi^3(M) = D \cdot M \cdot E$.

Perhaps your first thought, like mine, was to wonder why this can’t be proved immediately using a computer algebra package. After all, each entry of $\Phi^3(M)$ can be expanded as some enormously complicated function of the original nine variables, which when simplified should give the corresponding entry of $M$.

There are two obstacles. First, the theorem doesn’t say that $\Phi^3(M)$ and $M$ are equal; it merely says they’re the same modulo the diagonal matrix actions, and no algorithm is known for deciding equivalence of this type. Second, noncommutative expressions of the type involved here don’t (as far as anyone knows) have a canonical simplest form, so testing for equality between them is in any case appreciably harder than in the commutative case.
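In the commutative case, though, a numerical sanity check costs nothing. Here is a sketch using SymPy’s exact rational arithmetic; the random entries and the rank-one test are my own framing, not anything from Iyudu and Shkarin’s paper. The point is that $\Phi^3(M) = D \cdot M \cdot E$ for diagonal $D$ and $E$ says exactly that the entrywise ratio $\Phi^3(M)_{ij}/M_{ij}$ has the product form $d_i e_j$, which we can test by checking that all its $2 \times 2$ “multiplicative minors” vanish.

```python
import random

import sympy as sp

def Phi(X):
    """One application of Kontsevich's operation: transpose, entrywise
    reciprocal, then matrix inverse.  Returns None when a step is
    undefined (a zero entry, or a singular matrix)."""
    if any(e == 0 for e in X):
        return None
    R = X.T.applyfunc(lambda e: 1 / e)
    return R.inv() if R.det() != 0 else None

random.seed(0)  # reproducible; any sufficiently generic matrix should do
while True:
    M = sp.Matrix(3, 3, lambda i, j: sp.Rational(random.randint(1, 50)))
    N = M
    for _ in range(3):
        if N is not None:
            N = Phi(N)
    if N is not None:
        break  # all three applications of Phi were defined

# The theorem predicts N = D*M*E with D, E diagonal, i.e. the
# entrywise ratio r[i,j] = N[i,j]/M[i,j] factors as d_i * e_j,
# so all 2x2 multiplicative minors of the ratio matrix vanish.
R = sp.Matrix(3, 3, lambda i, j: N[i, j] / M[i, j])
assert all(R[i, j] * R[k, l] == R[i, l] * R[k, j]
           for i in range(3) for j in range(3)
           for k in range(3) for l in range(3))
print("Phi^3(M) = D*M*E pattern confirmed for a random rational matrix")
```

Of course this only exercises the commutative special case, which (as noted above) is already nontrivial but far short of the theorem.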

Five minutes with pen and paper will convince you that for $2 \times 2$ matrices of commuting variables, $\Phi^2(M)$ is a constant multiple of $M$. Also, trivially, $\Phi^1(M) = M$ for $1 \times 1$ matrices. So in particular, Iyudu and Shkarin’s theorem holds when $3$ is replaced throughout by any $n \leq 3$.
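If you would rather delegate the five minutes of pen and paper to a computer, the $2 \times 2$ commutative case is a one-screen SymPy computation. By my reckoning the scalar works out to $-abcd/(ad - bc)^2$, and the sketch below checks that:

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d', nonzero=True)
M = sp.Matrix([[a, b], [c, d]])

def Phi(X):
    # transpose, entrywise reciprocal, matrix inverse
    return X.T.applyfunc(lambda e: 1 / e).inv()

N = sp.simplify(Phi(Phi(M)))
ratio = sp.cancel(N[0, 0] / M[0, 0])

# Phi^2(M) is a scalar multiple of M ...
assert (N - ratio * M).applyfunc(sp.simplify) == sp.zeros(2, 2)
# ... and the scalar is -a*b*c*d / (a*d - b*c)**2
assert sp.simplify(ratio + a * b * c * d / (a * d - b * c) ** 2) == 0
```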

The obvious conjecture, then, is that it holds for arbitrary $n$. But as it happens, it’s false even for $n = 4$ — adding further to the theorem’s mystique.

Iyudu and Shkarin’s theorem reminded me slightly of a question I’ve heard asked by both Joachim Kock and Mike Stay (separately), which John wrote about here: since the function
$x \mapsto \frac{1}{1 - x}$
is periodic of order $3$, and since
$\frac{1}{1 - x} = 1 + x + x^2 + \cdots$
is the decategorification of the free monoid functor, is there some weird sense in which doing the free monoid construction three times gets you back to where you started? I don’t know the answer to this. Nor does this seem *very* similar to the Iyudu–Shkarin theorem, although there is the common point that both Kontsevich’s operation and the operation $x \mapsto 1/(1 - x)$ are the composites of a small number of involutions (three and two, respectively). But it’s the only shred of an idea I’ve had.
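For what it’s worth, both the order-$3$ periodicity of $x \mapsto 1/(1 - x)$ and its factorization into the two involutions $x \mapsto 1/x$ and $x \mapsto 1 - x$ take one line each to verify in SymPy — a toy check, nothing more:

```python
import sympy as sp

x = sp.symbols('x')
f = lambda t: 1 / (1 - t)   # the order-3 map
r = lambda t: 1 / t         # involution: reciprocal
s = lambda t: 1 - t         # involution: reflection

assert sp.simplify(f(f(f(x))) - x) == 0   # f has order 3
assert sp.simplify(f(x) - r(s(x))) == 0   # f is the composite of the two involutions
assert sp.simplify(r(r(x)) - x) == 0 and sp.simplify(s(s(x)) - x) == 0
```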

Anyway, I’d love it if the Iyudu–Shkarin theorem had some appealing and accessible interpretation. I hope someone can think of one!

## Re: Who Ordered That?

How badly does the $4 \times 4$ case fail? I ask because the equivalence relation seems to get a little looser as the dimension grows. The pattern you state is:

for $1 \times 1$ matrices, $\Phi^1(M) = M$;

for $2 \times 2$ matrices, $\Phi^2(M)$ is a constant multiple of $M$ (which can be viewed in various ways as multiplication on either side by a diagonal matrix whose diagonal entries are all equal);

for $3 \times 3$ matrices, $\Phi^3(M)$ is $M$ up to left and right multiplication by diagonal matrices;

so, simply following this pattern, it seems reasonable to ask whether for $4 \times 4$ matrices $\Phi^4(M)$ is related to $M$ by some still more general relation within the pattern. I don’t know what that relation would be, but perhaps there is some hope for a four-dimensional generalization.