

October 2, 2013

Who Ordered That?

Posted by Tom Leinster

Prize for the most peculiar theorem of the year must surely go to my colleague Natalia Iyudu and her collaborator Stanislav Shkarin, who recently proved the following conjecture of Kontsevich.

Start with a $3 \times 3$ matrix.

Take its transpose, then take the reciprocal of each entry, then take the inverse of the whole matrix.

Take the transpose of that, then take the reciprocal of each entry, then take the matrix inverse.

Take the transpose of that, then take the reciprocal of each entry, and then, finally, take the matrix inverse.

Theorem: Up to a bit of messing about, you’re back where you started.

What on earth does this mean? It’s not clear that anyone really knows.

Apparently this conjecture came out of the Gelfand school in about 1996. Its first appearance in print was in Kontsevich’s paper Noncommutative identities (Section 3). Natalia Iyudu and Stanislav Shkarin proved it in May.

Natalia gave a seminar about it yesterday here in Edinburgh, and although I had to leave before the end, it seems from what I heard and from their paper that the proof is elementary — not to mention ingenious. She didn’t say a lot about why the result was first suspected to be true (only that physics was somehow involved), and I haven’t understood remotely enough of Kontsevich’s paper to glean any motivation from there.

Let me state the theorem more precisely. We start with a $3 \times 3$ matrix whose entries are to be thought of as noncommutative formal variables. (The theorem is nontrivial even for commutative variables — at least, it doesn’t seem obvious to me — but it’s true in this greater generality.) We’re allowed to invert formal expressions in our variables, provided, of course, that they’re not zero.

Thus, we’re working over some ring with nine noncommuting generators in which lots of elements are invertible. I’m being vague here because I don’t fully understand what this ring is, and that in turn is because I know very little about matrix algebra over a noncommutative ring. Anyway, let’s push on.

For such a matrix $M$, let $\phi_1(M)$ be the transpose of $M$, let $\phi_2(M)$ be the matrix whose $(i, j)$-entry is $1/M_{i j}$, and let $\phi_3(M) = M^{-1}$. Write $\Phi = \phi_3 \circ \phi_2 \circ \phi_1$.
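To make this concrete, here is a minimal Maxima sketch of $\Phi$, for commuting variables only (so just a shadow of the real noncommutative setting); in Maxima, 1/M takes entrywise reciprocals and invert is the matrix inverse:

/* A minimal sketch of Phi, commuting variables only */
phi1(M) := transpose(M);
phi2(M) := 1/M;                 /* entrywise reciprocal */
phi3(M) := invert(M);           /* matrix inverse */
Phi(M)  := phi3(phi2(phi1(M)));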

The theorem nearly says that $\Phi^3(M) = M$ for all $M$. What it actually says is that this is true modulo left and right action by diagonal matrices, as follows.

Theorem (Iyudu and Shkarin)  For any $3 \times 3$ matrix $M$, there exist diagonal matrices $D$ and $E$ such that $\Phi^3(M) = D \cdot M \cdot E$.

Perhaps your first thought, like mine, was to wonder why this can’t be proved immediately using a computer algebra package. After all, each entry of $\Phi^3(M)$ can be expanded as some enormously complicated function of the original nine variables, which when simplified should give the corresponding entry of $M$.

There are two obstacles. First, the theorem doesn’t say that $\Phi^3(M)$ and $M$ are equal; it merely says they’re the same modulo the diagonal matrix actions, and no algorithm is known for deciding equivalence of this type. Second, noncommutative expressions of the type involved here don’t (as far as anyone knows) have a canonical simplest form, so testing for equality between them is in any case appreciably harder than in the commutative case.

Five minutes with pen and paper will convince you that for $2 \times 2$ matrices of commuting variables, $\Phi^2(M)$ is a constant multiple of $M$. Also, trivially, $\Phi^1(M) = M$ for $1 \times 1$ matrices. So in particular, Iyudu and Shkarin’s theorem holds when $3$ is replaced throughout by any $n \leq 3$.
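If you’d rather let Maxima spend the five minutes, here is a sketch of the $2 \times 2$ commuting-variables check (it assumes, as everywhere in this story, that the entries are nonzero):

/* Check that Phi^2(M) is a scalar multiple of M, 2x2 commuting case */
Phi(M) := invert(1/transpose(M));
M : matrix([a, b], [c, d]);
P : ratsimp(Phi(Phi(M)));
lam : ratsimp(P[1,1]/a);        /* the candidate scalar */
ratsimp(P - lam * M);           /* should print the zero matrix */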

The obvious conjecture, then, is that it holds for arbitrary $n$. But as it happens, it’s false even for $n = 4$ — adding further to the theorem’s mystique.

Iyudu and Shkarin’s theorem reminded me slightly of a question I’ve heard asked by both Joachim Kock and Mike Stay (separately), which John wrote about here: since the function $x \mapsto \frac{1}{1 - x}$ is periodic of order $3$, and since $\frac{1}{1 - x} = 1 + x + x^2 + \cdots$ is the decategorification of the free monoid functor, is there some weird sense in which doing the free monoid construction three times gets you back to where you started? I don’t know the answer to this. Nor does this seem very similar to the Iyudu–Shkarin theorem, although there is the common point that both Kontsevich’s operation and the operation $x \mapsto 1/(1 - x)$ are the composites of a small number of involutions (three and two, respectively). But it’s the only shred of an idea I’ve had.
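(For what it’s worth, the order-$3$ periodicity of $x \mapsto 1/(1 - x)$ itself is a one-line Maxima check:

f(x) := 1/(1 - x);
ratsimp(f(f(f(x))));   /* returns x */

though of course the question is what, if anything, this means at the categorified level.)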

Anyway, I’d love it if the Iyudu–Shkarin theorem had some appealing and accessible interpretation. I hope someone can think of one!

Posted at October 2, 2013 9:53 PM UTC



Re: Who Ordered That?

How badly does the 4x4 case fail? The reason I ask is that it seems that the equivalence relation gets a little looser as the dimensions progress. In fact, the pattern you state is:

for 1x1 matrices, $\Phi^1(M) = M$

for 2x2 matrices, $\Phi^2(M)$ is a constant multiple of $M$ (which can be thought of in various ways in terms of multiplication on either side by a diagonal matrix with the same entry on both diagonals)

for 3x3 matrices, $\Phi^3(M)$ is $M$ up to left and right multiplication by diagonal matrices

so simply following this pattern, for 4x4 matrices, it seems reasonable to ask if $\Phi^4(M)$ is related to $M$ by some even more general relation within the pattern. I don’t know what that would be, but perhaps there is some hope for a four-dimensional generalization.

Posted by: Greg Friedman on October 3, 2013 3:55 AM

Re: Who Ordered That?

I don’t know; that’s something I wondered myself. In fact, I haven’t seen anything about the failure of the $4 \times 4$ case in either of the papers I cited (not that I looked very hard). It’s something I learned from Natalia’s talk, and it’s possible that I misunderstood.

The other thing is that my $2 \times 2$ calculation was only for commuting variables. I don’t know whether $\Phi^2(M)$ is a constant multiple of $M$ when they don’t commute — do you?

Posted by: Tom Leinster on October 3, 2013 8:08 AM

Re: Who Ordered That?

Well, the dimension-reduction used in the paper would reduce the 2-by-2 case to a single “non”-commuting variable, which ends up commuting with everything relevant by default, so you’ve already got the calculation you need.

Posted by: Jesse C. McKeown on October 3, 2013 3:39 PM

Re: Who Ordered That?

Not sure what you mean, Jesse — can you explain a bit more?

Posted by: Tom Leinster on October 3, 2013 3:46 PM

Re: Who Ordered That?

Maybe I’m missing what you were meaning…

Here’s what I’ve got for 2-by-2 now:

$$\Phi^2 \left( \begin{pmatrix} u & 0 \\ 0 & v \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 1 & a \end{pmatrix} \begin{pmatrix} w & 0 \\ 0 & x \end{pmatrix} \right) = \begin{pmatrix} u & 0 \\ 0 & v \end{pmatrix} \bigl( a (1-a)^{-1} I \bigr) \begin{pmatrix} 1 & 1 \\ 1 & a \end{pmatrix} \bigl( (a-1)^{-1} I \bigr) \begin{pmatrix} w & 0 \\ 0 & x \end{pmatrix} \qquad (1)$$
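A quick Maxima check of (1), treating $a$, $u$, $v$, $w$, $x$ as commuting scalars (which, per the dimension-reduction above, is all that matters here):

/* Commuting-variables check of (1) */
Phi(M) := invert(1/transpose(M));
D : matrix([u, 0], [0, v]);
A : matrix([1, 1], [1, a]);
E : matrix([w, 0], [0, x]);
ratsimp(Phi(Phi(D . A . E)) - D . (a/(1-a) * A) . (1/(a-1) * E));   /* zero matrix */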
Posted by: Jesse C. McKeown on October 3, 2013 4:34 PM

Re: Who Ordered That?

John Baez wrote:

First, the theorem doesn’t say that $\Phi^3(M)$ and $M$ are equal; it merely says they’re the same modulo the diagonal matrix actions, and no algorithm is known for deciding equivalence of this type.

I’m probably missing something obvious, but it seems that if matrices A and B are related this way, we should be able to more or less write down the diagonal matrices D and E such that A = DBE. If we assume that A and B have no zero entries, it’s easy. And it seems that for the case of this conjecture, since we’re taking reciprocals of the entries, we are implicitly assuming they are all nonzero, at least in the input matrix. (Also I’m assuming that none of the matrices are degenerate, so all the diagonal entries in D and E are nonzero.)

The left multiplication of B by diagonal D just rescales each row of B (by a different scalar for each row), so we can choose D in a way that makes the first column of DB come out to what we want (that is, matching the first column of A). Similarly, the right multiplication of DB by E just rescales columns, but the first column is already OK, so the 1,1 entry of E is 1. (There’s an overall scalar degree of freedom in the choice of D and E, which we are using up by specifying E[1,1]=1.) The other two entries of E are now determined by the need to make the second and third columns come out right. (If the second and third columns of DB are not scalar multiples of the respective columns in A, then the relation A=DBE does not hold.)

I guess this is all only applicable in the commuting variable case (?), not the full generality of the Kontsevich conjecture, but at least for 3x3 matrices and commuting variables, you can work this all out in a computer algebra package. I don’t drive a Rolls-Royce (Mathematica), but the little Maxima (VW) script below produces (after about 2 million lines of hideous algebra) the original matrix.

########maxima script##########

M: matrix([a,b,c],[d,e,f],[g,h,i]);
M1: invert(transpose(1/M));
M2: invert(transpose(1/M1));
M3: invert(transpose(1/M2));

D1: M[1,1]/M3[1,1];
D2: M[2,1]/M3[2,1];
D3: M[3,1]/M3[3,1];
D: matrix([D1,0,0],[0,D2,0],[0,0,D3]);

DM3: D . M3;

E1: 1;          /* M[1,1] = DM3[1,1] by choice of D1 */
E2: M[1,2]/DM3[1,2];
E3: M[1,3]/DM3[1,3];
E: matrix([E1,0,0],[0,E2,0],[0,0,E3]);

DM3E: DM3 . E;

ratsimp(DM3E);  /* Simplify rational expression */


##### output ######

(%i16) ratsimp(DM3E)
                                  [ a  b  c ]
                                  [         ]
(%o16)                            [ d  e  f ]
                                  [         ]
                                  [ g  h  i ]

Posted by: Charles G Waldman on October 11, 2013 5:59 PM

Re: Who Ordered That?

Thanks. By the way, John Baez didn’t write that: I did.

After I wrote my post, I asked Natalia Iyudu whether I’d got everything right. The only reservation she expressed was that I’d said that even the commutative case was non-trivial, which she disagreed with. I changed the wording a bit, but decided not to change it too much — an expert like Natalia might consider it trivial, but I might not.

I think what you’ve done is to confirm her view.

Posted by: Tom Leinster on October 11, 2013 6:10 PM

Re: Who Ordered That?

John Cook mentioned doing numerical tests, so: some Maxima for the 3-by-3 case, using 3-by-3 matrices as a model of noncommutativity.

deblock (M) := block(  /* forgive me ... */
 [ size : length(M[1,1]) ],
  genmatrix (
    lambda([i,j], 
       M[1+(i-1 - mod(i-1,size))/size,1+(j-1-mod(j-1,size))/size]
        [mod (i-1,size)+1,mod(j-1,size)+1]),
    size * length(M),size * length(M)));

reblock ( M , size ) := /* ... this sort of hackery really shouldn't be allowed... */
 genmatrix (
   lambda(
      [k,l], 
      genmatrix (
        lambda ([i,j] , M[i + (k-1) * size][j+(l-1)*size]),
        size,size)),
   (length(M) - mod (length(M), size))/size,(length(M) - mod (length(M), size))/size);

entrInvert( M ) := genmatrix( lambda([i,j], invert (M[i,j])), length(M), length(M));
/* entrInvert, inverts each entry, is the first of two
 reasons I'm preferring matrices of matrices to 3n-by-3n
 matrices; I suppose one _could_ do things the other way
 around... */

coldiagBlocks (M) :=
   block (
     [l: length (M[1,1])],
     genmatrix(
       lambda([i,j], 
          if is( i = j ) then M[i,1] else zeromatrix(l,l)),
       length(M),length(M)) );

rowdiagBlocks(M) := 
    block (
      [l: length (M[1,1])],
      genmatrix(
        lambda([i,j],
          if is( i = j ) then M[1,i] else zeromatrix(l,l)),
        length(M),length(M)) );

OpInvert(M) := block( /* Mat ( Mat (R,m), n) ~~ Mat (R, m n)  */
  [ size : length ( M [ 1 , 1 ] )],
  reblock( invert ( deblock (M) ), size ) );

OpMul ( M , N ) :=block(  /* similarly */
  [ size : length ( M [ 1 , 1 ] )],
  reblock( deblock(M) . deblock(N) , size ) );

Phi ( M ) := OpInvert ( transpose ( entrInvert (M)));

lReduce ( M ) := OpMul ( OpInvert ( coldiagBlocks (M)) , M ) ;
rReduce ( M ) := OpMul ( M , OpInvert ( rowdiagBlocks (M)));

randmatmat( insize, outsize) := 
  genmatrix(
    lambda([i,j],
      genmatrix (lambda([i,j],random(10.0)),insize,insize)),
    outsize,outsize);

exampl : randmatmat(3,3);

lrexmpl : lReduce(rReduce(exampl));
lrPhi3 : phi3Ex : lReduce (rReduce (Phi(Phi(Phi(exampl)))));

/* it's much easier to get the relevant eigenvalues this way, too */

bfloat(eigenvalues ( lrexmpl[2,2] ));
bfloat(eigenvalues ( phi3Ex[2,2] ));

If the random number generator is any good, one should get the same list of numerical eigenvalues as the result of the last two commands.

The full weight of the theorem is that the same change of basis relating lrexmpl[2,2] to phi3Ex[2,2] should also relate lrexmpl[i,j] and phi3Ex[i,j]; but I’ll leave the pleasure of writing that script to someone else.

Posted by: Jesse C. McKeown on October 15, 2013 11:05 PM

Re: Who Ordered That?

The text description says to take the transpose, then the reciprocal. The Maxima script does these in the opposite order. Do both work or is only the script correct?

I’ve been experimenting with this numerically, and I don’t get anywhere near the same matrix out as I started with. It would not be surprising if the process were numerically ill-conditioned.

Posted by: John Cook on October 15, 2013 1:01 PM

Re: Who Ordered That?

My first point was silly: Obviously it doesn’t matter whether you take the reciprocals or transpose first. But the numerical problems persist. I’m using uniform random entries, and perhaps such matrices exacerbate the numerical difficulties.

Posted by: John Cook on October 15, 2013 1:07 PM

Re: Who Ordered That?

Taking the elementwise inverse is the only nonlinear operation in the block cipher AES. A paper by Murphy and Robshaw shows that the cipher can be written in the form

$$c_{i+1} = M (c_i + k_i)^{-1}$$

where $c_0$ is the plaintext, $c_i$ is the ciphertext after round $i$, $k_i$ is the $i$th round key, $M$ is a fixed matrix, and inversion is elementwise. ($1/0$ is defined to be $0$.)
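For concreteness, here is a toy Maxima sketch of that round shape, over GF(7) rather than GF(2^8), with a made-up matrix, key, and plaintext; it shows nothing about real AES beyond the general form:

/* Toy model of c -> M.(c+k)^(-1) with elementwise inversion over GF(7),
   1/0 := 0; M, the round key and the plaintext are made-up stand-ins */
p : 7;
inv0(z) := if mod(z, p) = 0 then 0 else inv_mod(z, p);
M : matrix([1, 2], [3, 1]);
aesround(c, k) := matrixmap(lambda([z], mod(z, p)),
                            M . matrixmap(inv0, c + k));
c0 : matrix([2], [5]);   /* "plaintext" */
k0 : matrix([1], [6]);   /* one round key */
c1 : aesround(c0, k0);   /* one round of the toy cipher */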

What motivated the elementwise inversion in Kontsevich’s conjecture?

Posted by: Mike Stay on October 15, 2013 8:49 PM
