## November 29, 2016

### Quarter-Turns

#### Posted by Tom Leinster

Teaching linear algebra this semester has made me face up to the fact that for a linear operator $T$ on a real inner product space, $\langle T x, x \rangle = 0 \,\, \forall x \,\, \iff \,\, T^\ast = -T$ whereas for an operator on a complex inner product space, $\langle T x, x \rangle = 0 \,\, \forall x \,\, \iff \,\, T = 0.$ In other words, call an operator $T$ a quarter-turn if $\langle T x, x \rangle = 0$ for all $x$. Then the real quarter-turns correspond to the skew-symmetric matrices — but apart from the zero operator, there are no complex quarter-turns at all.

Where in my mental landscape should I place these facts?

The proofs of both facts are easy enough. Everyone who’s met an inner product space knows the real polarization identity: in a real inner product space $X$,

$\langle x, y \rangle = \frac{1}{4} \bigl( \| x + y \|^2 - \| x - y \|^2 \bigr).$

All we used about $\langle -, - \rangle$ here is that it’s a symmetric bilinear form (and that $\|w\|^2 = \langle w, w \rangle$). In other words, for a symmetric bilinear form $\beta$ on $X$, writing $Q(w) = \beta(w, w)$, we have

$\beta(x, y) = \frac{1}{4} \bigl( Q(x + y) - Q(x - y) \bigr)$

for all $x, y \in X$.

The crucial point is that we really did need the symmetry: the right-hand side is plainly symmetric in $x$ and $y$, whether or not $\beta$ is. For a not-necessarily-symmetric bilinear form $\beta$, all we can say is

$\frac{1}{2} \bigl( \beta(x, y) + \beta(y, x) \bigr) = \frac{1}{4} \bigl( Q(x + y) - Q(x - y) \bigr)$

or, multiplying through by $2$,

$\beta(x, y) + \beta(y, x) = \frac{1}{2} \bigl( Q(x + y) - Q(x - y) \bigr).$
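The gap between the symmetric and general cases is easy to see numerically. Here is a small sketch in Python with NumPy; the particular matrix $A$ and vectors are arbitrary illustrative choices, not anything from the post:

```python
import numpy as np

# A deliberately non-symmetric bilinear form on R^2:
# beta(x, y) = x^T A y with A strictly upper triangular.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])

def beta(x, y):
    return x @ A @ y

def Q(w):
    return beta(w, w)

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])

# Without symmetry, polarization only recovers the symmetrized form:
assert beta(x, y) + beta(y, x) == (Q(x + y) - Q(x - y)) / 2  # both sides are 1.0
# ...and beta(x, y) itself is NOT given by the symmetric-case formula:
assert beta(x, y) != (Q(x + y) - Q(x - y)) / 4  # 1.0 vs 0.5

# For a symmetric form, the original identity does hold:
S = A + A.T

def betaS(u, v):
    return u @ S @ v

def QS(w):
    return betaS(w, w)

assert betaS(x, y) == (QS(x + y) - QS(x - y)) / 4  # both sides are 1.0
```

All the arithmetic here is exact in floating point, so plain `==` comparisons are safe.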

Now let $T$ be a linear operator on $X$. There is a bilinear form $\beta$ defined by $\beta(x, y) = \langle T x, y \rangle$. It’s not symmetric unless $T$ is self-adjoint; nevertheless, the polarization identity just stated tells us that

$\langle T x, y \rangle + \langle T y, x \rangle = \frac{1}{2} \bigl( \langle T(x + y), x + y \rangle - \langle T(x - y), x - y \rangle \bigr).$

It follows that $T$ is a quarter-turn if and only if

$\langle T x, y \rangle + \langle T y , x \rangle = 0$

for all $x, y \in X$. After some elementary rearrangement, this in turn is equivalent to

$\langle (T + T^\ast)x, y \rangle = 0$

for all $x, y$, where $T^\ast$ is the adjoint of $T$. But that just means that $T + T^\ast = 0$. So, $T$ is a quarter-turn if and only if $T^\ast = -T$.

The complex case involves a more complicated polarization identity, but is ultimately simpler. To be clear, when I say “complex inner product” I mean something that’s linear in the first argument and conjugate-linear in the second.

In a complex inner product space, the polarization formula is

$\langle x , y \rangle = \frac{1}{4} \sum_{p = 0}^3 i^p \| x + i^p y \|^2.$

This can be compared with the real version, which (in unusually heavy notation) says that

$\langle x , y \rangle = \frac{1}{4} \sum_{p = 0}^1 (-1)^p \| x + (-1)^p y \|^2.$

And the crucial point in the complex case is that this time, we don’t need any symmetry. In other words, for any sesquilinear form $\beta$ on $X$ (linear in the first argument, conjugate-linear in the second), writing $Q(x) = \beta(x, x)$, we have

$\beta(x, y) = \frac{1}{4} \sum_{p = 0}^3 i^p Q(x + i^p y).$

So given a quarter-turn $T$ on $X$, we can define a sesquilinear form $\beta$ by $\beta(x, y) = \langle T x, y \rangle$, and it follows immediately from this polarization identity that $\langle T x, y \rangle = 0$ for all $x, y$ — that is, $T = 0$.
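Here is a sketch verifying the complex polarization identity numerically, together with the observation that even multiplication by $i$ fails to be a complex quarter-turn. Note that `np.vdot` conjugates its *first* argument, so `np.vdot(y, x)` implements the post's convention of linearity in the first slot; the seed and dimension are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

def ip(x, y):
    # Inner product linear in the first argument,
    # conjugate-linear in the second (the post's convention).
    return np.vdot(y, x)

T = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def Q(w):
    return ip(T @ w, w)

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Complex polarization, valid with no symmetry assumption on T:
# <Tx, y> = (1/4) * sum_p i^p Q(x + i^p y)
rhs = sum(1j**p * Q(x + 1j**p * y) for p in range(4)) / 4
assert np.isclose(ip(T @ x, y), rhs)

# Even "multiplication by i" is not a complex quarter-turn:
# <(iI)x, x> = i * ||x||^2, nonzero whenever x is.
assert not np.isclose(ip(1j * x, x), 0)
```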

So we’ve now shown that over $\mathbb{R}$,

$T \,\,\text{is a quarter-turn}\,\, \iff T^\ast = - T$

but over $\mathbb{C}$,

$T \,\,\text{is a quarter-turn}\,\, \iff T = 0.$

Obviously everything I’ve said is very well-known to those who know it. (For instance, most of it’s in Axler’s *Linear Algebra Done Right*.) But how should I think about these results? How can I train my intuition so that the real and complex results seem simultaneously obvious?

Whatever the intuitive picture, here’s a nice consequence, also in Axler’s book.

This pair of results immediately implies that whether we’re over $\mathbb{R}$ or $\mathbb{C}$, the only self-adjoint quarter-turn is zero. Now let $T$ be any operator on a real or complex inner product space, and recall that $T$ is said to be normal if it commutes with $T^\ast$.

Equivalently, $T$ is normal if the operator $T^\ast T - T T^\ast$ is zero.

But $T^\ast T - T T^\ast$ is always self-adjoint, so $T$ is normal if and only if $T^\ast T - T T^\ast$ is a quarter-turn.

Finally, a bit of routine messing around with inner products shows that this is in turn equivalent to

$\| T^\ast x \| = \| T x \| \,\,\text{for all}\,\, x \in X.$

So a real or complex operator $T$ is normal if and only if $T^\ast x$ and $T x$ have the same length for all $x$.
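This last equivalence is also easy to sanity-check numerically. In the sketch below, a normal matrix is built as $U D U^\ast$ with $U$ unitary and $D$ diagonal; the seed, dimension, and the nilpotent counterexample are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4

# Build a normal matrix T = U D U* with U unitary and D diagonal.
D = np.diag(rng.standard_normal(n) + 1j * rng.standard_normal(n))
# QR of a random complex matrix gives a unitary Q.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
T = Q @ D @ Q.conj().T

Tstar = T.conj().T
assert np.allclose(Tstar @ T, T @ Tstar)  # T is normal

# Normality is equivalent to ||T* x|| = ||T x|| for all x:
for _ in range(20):
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    assert np.isclose(np.linalg.norm(Tstar @ x), np.linalg.norm(T @ x))

# A non-normal operator fails the test:
N = np.array([[0.0, 1.0], [0.0, 0.0]])  # nilpotent shift, not normal
x = np.array([1.0, 0.0])
assert np.linalg.norm(N.T @ x) == 1.0   # ||N* x|| = 1
assert np.linalg.norm(N @ x) == 0.0     # ||N x|| = 0
```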

Posted at November 29, 2016 11:14 PM UTC


## 24 Comments & 0 Trackbacks

### Re: Quarter-Turns

The first thing that jumps out at me is that $\mathbb{C}$ has a scalar quarter-turn already. Such a thing is famously absent over the reals.

Posted by: Jesse C. McKeown on November 30, 2016 1:42 AM

### Re: Quarter-Turns

Indeed, I don’t know why I don’t see more often an informal layman’s explanation of what $i$ “is”: it’s a $90^\circ$ turn (in the same way that $-1$ is a $180^\circ$ turn). That ought to remove a lot of the mystery surrounding what “imaginary numbers” are.

Posted by: Todd Trimble on November 30, 2016 11:25 AM

### Re: Quarter-Turns

That’s an interesting point, Jesse. Of course, $\langle i, 1 \rangle$ is not zero in the one-dimensional complex inner product space $\mathbb{C}$. Nevertheless, $\langle i, 1 \rangle = 0$ in the two-dimensional real vector space $\mathbb{C}$ with its obvious real inner product.

Posted by: Tom Leinster on November 30, 2016 12:22 PM

### Re: Quarter-Turns

There’s also a natural sense in which the “obvious” real inner product on a complex inner product space is the only natural one: it’s the only one that induces the same norm as the complex inner product.

Posted by: Mark Meckes on December 1, 2016 3:49 PM

### Re: Quarter-Turns

Here’s one way to think about it (in finite dimensions anyway, or with sufficient additional hypotheses in infinite dimensions). The inner product of vectors $\langle T x, x \rangle$ is equal to a Hilbert–Schmidt inner product of operators: $\langle T x, x \rangle = \langle T, x \otimes x \rangle_{HS}.$ Here $x \otimes x$ is the self-adjoint rank-one operator given by $x\otimes x (y) = \langle y, x \rangle x.$ So $T$ is a quarter-turn if and only if it is in the orthogonal complement of the span of self-adjoint rank-one operators.

Now if your scalar field is $\mathbb{R}$, then by the spectral theorem that span consists precisely of the self-adjoint operators. So quarter-turns are the operators which are (Hilbert–Schmidt-)orthogonal to self-adjoint operators; it’s not hard to show this is equivalent to $T^* = -T$.

On the other hand, if the scalar field is $\mathbb{C}$, then it’s still true that the real linear span of self-adjoint rank-one operators is the space of self-adjoint operators, but $S = i x \otimes x$ is not self-adjoint — it satisfies $S^* = -S$. So the complex linear span of self-adjoint rank-one operators is the space of all operators. In particular, it contains $T$ itself. So a quarter-turn on a complex inner product space is Hilbert–Schmidt-orthogonal to itself, hence $0$.
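The key identity $\langle T x, x \rangle = \langle T, x \otimes x \rangle_{HS}$ driving this argument is easy to check numerically. In the sketch below, $T$ and $x$ are arbitrary random choices, and `np.vdot` supplies the conjugation in the post's inner-product convention:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3

def ip(u, v):
    # linear in the first argument, conjugate-linear in the second
    return np.vdot(v, u)

def hs(A, B):
    # Hilbert-Schmidt inner product <A, B>_HS = tr(B* A)
    return np.trace(B.conj().T @ A)

T = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# The rank-one operator x (x) x : y |-> <y, x> x, as a matrix:
P = np.outer(x, x.conj())
assert np.allclose(P.conj().T, P)  # it is self-adjoint

# <Tx, x> = <T, x (x) x>_HS
assert np.isclose(ip(T @ x, x), hs(T, P))
```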

Posted by: Mark Meckes on November 30, 2016 2:38 PM

### Re: Quarter-Turns

Thanks, Mark. After nearly a semester of teaching, my thinking has slowed to a crawl, so I’m going to have to take this bit by bit.

You mentioned the Hilbert-Schmidt inner product of operators. Let me try to explain what this is. In brief, I claim that it’s the only reasonable inner product on $Hom(X, Y)$, for inner product spaces $X$ and $Y$.

Start with the category of inner product spaces over $\mathbb{F}$ (either $\mathbb{R}$ or $\mathbb{C}$) and all linear maps between them. To make things easy, let’s stick to finite-dimensional spaces, although I know there are wider settings where you can make this work.

For any two inner product spaces $X$ and $Y$, we have the vector space $Hom(X, Y)$ of linear maps. Can we make it into an inner product space in a sensible way?

Surely any sensible method for doing this has these two properties:

• The isomorphism of vector spaces $Hom(\mathbb{F}, Y) \cong Y$ preserves inner products. Here $Y$ is any inner product space, and $Hom(\mathbb{F}, Y)$ has the as-yet-unknown inner product on it.

• The isomorphism of vector spaces $Hom(X_1 \oplus X_2, Y) \cong Hom(X_1, Y) \oplus Hom(X_2, Y)$ also preserves inner products. Here each of the three Hom-spaces has the as-yet-unknown inner product on it. On the right-hand side, we’re using the fact that when $V$ and $W$ are inner product spaces, $V \oplus W$ naturally acquires an inner product: $\langle (v, w), (v', w') \rangle = \langle v, v' \rangle + \langle w, w' \rangle.$

Putting these two requirements together says that for all $n \geq 0$ and inner product spaces $X_1, \ldots, X_n, Y$, the canonical isomorphism

$Hom(X_1 \oplus \cdots \oplus X_n, Y) \cong Hom(X_1, Y) \oplus \cdots \oplus Hom(X_n, Y)$

should preserve inner products.

And in fact, this requirement completely determines what the inner product must be. Taking $X_1, \ldots, X_n$ all to be the base field $\mathbb{F}$, it gives us an isomorphism of inner product spaces

$Hom(\mathbb{F}^n, Y) \cong Y \oplus \cdots \oplus Y.$

And since every finite-dimensional inner product space is isomorphic to one of the form $\mathbb{F}^n$, this determines the inner product on $Hom(X, Y)$ for all inner product spaces $X$.

When you work out what this says explicitly, you find that this inner product is as follows. Given linear maps $\alpha, \beta : X \to Y$ and an orthonormal basis $x_1, \ldots, x_n$ of $X$,

$\langle \alpha, \beta \rangle = \sum_{i = 1}^n \langle \alpha(x_i), \beta(x_i) \rangle$

where the inner products on the right-hand side take place in $Y$. A neater way to write this is

$\langle \alpha, \beta \rangle = tr(\beta^\ast \circ \alpha)$

where $\beta^\ast : Y \to X$ is the adjoint of $\beta$. And that’s the definition of the Hilbert-Schmidt inner product on $Hom(X, Y)$.

The trace formulation makes clear that the definition does not depend on the choice of basis. It also suggests that the definition should work in any context where trace makes sense, beyond the finite-dimensional setting.
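For the record, here is the basis formula and the trace formula agreeing on a random real example (dimensions and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 4, 3

# Two linear maps alpha, beta : R^n -> R^m, as matrices.
alpha = rng.standard_normal((m, n))
beta = rng.standard_normal((m, n))

# An orthonormal basis of X = R^n, e.g. from QR of a random matrix:
E, _ = np.linalg.qr(rng.standard_normal((n, n)))
basis = E.T  # rows are an orthonormal basis

# sum_i <alpha(x_i), beta(x_i)>  ==  tr(beta^T alpha)
lhs = sum((alpha @ x) @ (beta @ x) for x in basis)
assert np.isclose(lhs, np.trace(beta.T @ alpha))
```

Rerunning with a different orthonormal basis gives the same value, which is the basis-independence the trace formulation makes obvious.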

The Hilbert-Schmidt inner product has other good categorical properties. First note that the tensor product $V \otimes W$ of two inner product spaces $V$, $W$ naturally carries an inner product:

$\langle v \otimes w, v' \otimes w' \rangle = \langle v, v' \rangle \, \langle w, w' \rangle.$

Now it’s not hard to show that for any inner product spaces $X$, $Y$ and $Z$, the canonical isomorphism of vector spaces

$Hom(X \otimes Y, Z) \cong Hom(X, Hom(Y, Z))$

is in fact an isomorphism of inner product spaces. Here we’re using the natural inner product on $X \otimes Y$ and the Hilbert-Schmidt inner product on the Hom-spaces.
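In coordinates (with standard orthonormal bases throughout), this isometry amounts to the statement that currying a map out of a tensor product is just a reshape, which one can check numerically. The dimensions below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(6)
dx, dy, dz = 2, 3, 4

# T, S : X (x) Y -> Z as (dz, dx*dy) matrices, standard orthonormal bases.
T = rng.standard_normal((dz, dx * dy))
S = rng.standard_normal((dz, dx * dy))

def hs(A, B):
    # real Hilbert-Schmidt inner product
    return np.trace(B.T @ A)

# Currying Hom(X (x) Y, Z) -> Hom(X, Hom(Y, Z)):
# T becomes the list of maps T_i : Y -> Z, one per basis vector of X.
def curry(M):
    return [M.reshape(dz, dx, dy)[:, i, :] for i in range(dx)]

# HS inner product on Hom(X, Hom(Y, Z)): sum, over an orthonormal
# basis of X, of HS inner products taken in Hom(Y, Z).
lhs = sum(hs(Ti, Si) for Ti, Si in zip(curry(T), curry(S)))
assert np.isclose(lhs, hs(T, S))
```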

Now that I’ve figured all this out, I want to not call it the “Hilbert-Schmidt inner product”: it’s just the inner product on hom-spaces, the only one that’s sensible. Would you agree, Mark?

Right, now onto the rest of your comment!

Posted by: Tom Leinster on December 1, 2016 1:52 PM

### Re: Quarter-Turns

I agree that the Hilbert–Schmidt inner product is the only reasonable inner product on $Hom(X,Y)$ when $X$ and $Y$ are inner product spaces (at least, when there is no other additional structure assumed). But I would justify it in a different way.

As befits a category theorist, for you “an inner product on $Hom(X,Y)$” means “an assignment, for each pair of inner product spaces $X, Y$, of an inner product to $Hom(X,Y)$”, and you then proceed to show that there is a unique such assignment which interacts nicely with the structure of the category of inner product spaces.

But the Hilbert–Schmidt inner product on a single $Hom(X,Y)$ is also uniquely determined (up to normalization) by an invariance property that makes no mention of any other spaces. Namely, it is the unique (up to normalization) inner product on $Hom(X,Y)$ which induces a unitarily invariant norm: $\| T \|_{HS} = \| U T V \|_{HS}$ for any unitary maps (i.e., isomorphisms of inner product spaces) $U: Y \to Y$ and $V: X \to X$. (I bet there’s actually a nice categorical way to state this property, but I haven’t ever bothered to figure out what it is.)

Now you may ask, how would I choose to justify the normalization of the Hilbert–Schmidt inner product? One answer is that I might not choose to: different normalizations turn out to be more convenient in different situations. Another answer is given by a property you alluded to above, but in slightly different form: with the usual normalization, $\langle v \otimes w, v' \otimes w' \rangle_{HS} = \langle v, v' \rangle \langle w, w' \rangle,$ where I’m using $\otimes$ as in my comment above to denote rank-one operators, making this agree with the natural inner product on $V \otimes W$.

Posted by: Mark Meckes on December 1, 2016 3:23 PM

### Re: Quarter-Turns

In my first, unsatisfactory, attempt to figure out the “meaning” of the Hilbert-Schmidt inner product, what I actually ended up with was not $tr(\beta^\ast \alpha)$ but $\frac{1}{n} tr(\beta^\ast \alpha),$ where $n = \dim X = tr(1_X)$. I seem to remember that cropping up in free probability theory, in keeping with the analogy between trace and expected value. So I’m guessing that’s the kind of alternative normalization that a random matrix theorist might have in mind.

Posted by: Tom Leinster on December 1, 2016 3:49 PM

### Re: Quarter-Turns

You’ve got it. More generally, in the context of operator algebras people often like to work with functionals $\varphi$ on an algebra normalized so that $\varphi(1) = 1$. Of course in infinite dimensions the identity map on a Hilbert space doesn’t have a trace, but when one specializes the operator-algebraic machinery to finite dimensions, a natural special case is $\varphi = \frac{1}{n} tr$.

Posted by: Mark Meckes on December 1, 2016 4:00 PM

### Re: Quarter-Turns

A colleague pointed out yet another way of seeing that the Hilbert-Schmidt inner product is a natural thing. Let $X$ and $Y$ be inner product spaces (finite-dimensional, say). We have $Hom(X, Y) \cong X^\ast \otimes Y.$ As already mentioned, the tensor product of inner product spaces gets an inner product in a natural way. The dual $X^\ast$ of an inner product space $X$ also has a natural inner product. Indeed, we can just transport the inner product of $X$ across the isomorphism $X \cong X^\ast$ defined by $x \mapsto \langle -, x \rangle$. Putting this together gives an inner product on $Hom(X, Y)$, and it’s exactly Hilbert and Schmidt’s.

Posted by: Tom Leinster on December 5, 2016 2:17 PM

### Re: Quarter-Turns

Right. I meant to be alluding to that above, when pointing out that the Hilbert–Schmidt inner product, applied to rank-one operators, agrees with the natural inner product on a tensor product. What’s lurking in the background there are the isomorphisms $Hom(X,Y) \cong X^\ast \otimes Y$ and $X^{\ast} \cong X$, the latter natural for a finite-dimensional inner product space.

(Okay, since the whole point of this post is about things working out differently depending on whether the scalar field is $\mathbb{R}$ or $\mathbb{C}$ I should own up to the fact that the latter natural isomorphism is only conjugate-linear over $\mathbb{C}$. But for the application to characterizing quarter-turns, what matters is just orthogonality, which is unaffected by this particular subtlety.)

Posted by: Mark Meckes on December 5, 2016 2:50 PM

### Re: Quarter-Turns

Right, got it. So when you use “$x \otimes y$” to denote a linear map $X \to Y$, you’re implicitly moving across the isomorphisms

$X \otimes Y \cong X^\ast \otimes Y \cong Hom(X, Y)$

(give or take a conjugate). But here’s something I got a bit stuck on: is it obvious that

$\langle T x, x \rangle = \langle T, x \otimes x \rangle?$

I can calculate that it’s true… and indeed that

$\langle T x, y \rangle = \langle T, x \otimes y \rangle$

for arbitrary $x \in X$, $y \in Y$ and $T: X \to Y$. But how should I think about this?

Posted by: Tom Leinster on December 13, 2016 10:33 AM

### Re: Quarter-Turns

I have to say I find this description of the Hilbert-Schmidt inner product much more transparent than any of the others. Apparently it’s something I’m perfectly familiar with even though I didn’t know it had a fancy name. (-:

I think one way to think about $\langle T x,x\rangle = \langle T, x\otimes x\rangle$ is with string diagrams: there are just some strings being turned around. I don’t have the time right now to remember how to create such diagrams as images and post them here, but maybe you can recreate them yourself.

Another way to say the same thing is with abstract index notation for tensors: $x^a$ for a vector (a $(1,0)$-tensor), $T^a_b$ for a transformation (a $(1,1)$-tensor), $g_{a b}$ for the inner product and $g^{a b}$ for its inverse, hence $g^{a c} T^b_c$ for the transformation with one index raised to become a $(2,0)$-tensor, and $g_{a c} g_{b d}$ for the induced metric on $(2,0)$-tensors, so we have

$\langle T x, x\rangle = g_{a b} (T^a_c x^c) x^b$

$\langle T, x\otimes x \rangle = (g_{a c} g_{b d}) (g^{a e} T^b_e) (x^c x^d)$

which are equal since $g_{a c} g^{a e} = \delta_c^e$ (which in string diagrams is “pulling a bent string straight”). Although now that I’ve written it out, I suppose that’s probably essentially the same calculation you already did.
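This index computation can be transcribed almost verbatim into `np.einsum`, with a random positive-definite matrix standing in for the metric $g$ (all the particular choices below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(10)
n = 3

# A symmetric positive-definite metric g_{ab} and its inverse g^{ab}.
M = rng.standard_normal((n, n))
g = M.T @ M + n * np.eye(n)
ginv = np.linalg.inv(g)

T = rng.standard_normal((n, n))  # T^a_b
x = rng.standard_normal(n)       # x^a

# <Tx, x> = g_{ab} (T^a_c x^c) x^b
lhs = np.einsum('ab,ac,c,b->', g, T, x, x)

# <T, x (x) x> = (g_{ac} g_{bd}) (g^{ae} T^b_e) (x^c x^d)
rhs = np.einsum('ac,bd,ae,be,c,d->', g, g, ginv, T, x, x)

# Equal because g_{ac} g^{ae} = delta_c^e ("pulling a bent string straight").
assert np.isclose(lhs, rhs)
```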

Posted by: Mike Shulman on December 13, 2016 10:53 AM

### Re: Quarter-Turns

Tom asked:

is it obvious that $\langle T x, x \rangle = \langle T, x \otimes x \rangle$ ?

What’s obvious is very much a function of what you take as your definitions (among other things). Here it’s better to think of the Hilbert–Schmidt inner product in terms of the tensor product, instead of in terms of the trace. If $T = u \otimes v$ is any rank-one map, then $\langle (u \otimes v) (x), y \rangle = \langle \langle u, x \rangle v, y \rangle = \langle u, x \rangle \langle v, y \rangle = \langle u \otimes v, x \otimes y \rangle.$ (Please excuse sloppiness about conjugates or conventions about order.)

Since an arbitrary $T$ is a linear combination of rank-one $T$s, the identity follows.

That’s still a calculation, but a pretty simple one.
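The rank-one computation above is easy to confirm numerically over $\mathbb{R}$ (so there are no conjugates to worry about); here `np.outer(v, u)` is the matrix of the map $z \mapsto \langle u, z \rangle v$, and the vectors are arbitrary random choices:

```python
import numpy as np

rng = np.random.default_rng(7)
dx, dy = 3, 4

u, x = rng.standard_normal(dx), rng.standard_normal(dx)
v, y = rng.standard_normal(dy), rng.standard_normal(dy)

# The rank-one map u (x) v : X -> Y, z |-> <u, z> v, as a matrix:
UV = np.outer(v, u)
XY = np.outer(y, x)  # likewise x (x) y, for the HS pairing below

# <(u (x) v)(x), y> = <u, x><v, y>:
assert np.isclose((UV @ x) @ y, (u @ x) * (v @ y))

# ...which also equals <u (x) v, x (x) y>_HS = tr((x (x) y)^T (u (x) v)):
assert np.isclose(np.trace(XY.T @ UV), (u @ x) * (v @ y))
```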

Posted by: Mark Meckes on December 13, 2016 8:18 PM

### Re: Quarter-Turns

I see. Thanks. Taking apart what you said, I see the three points:

• The identity $\langle u, x \rangle \langle v, y \rangle = \langle u \otimes v, x \otimes y \rangle$ (for $u, x \in X$ and $v, y \in Y$).

This is an immediate consequence of the definition of the tensor product of inner product spaces.

• The fact that every rank-one map $X \to Y$ is of the form $u \otimes v$ for some $u \in X$ and $v \in Y$.

To prove this, take any rank-one map $T : X \to Y$, let $u$ be any nonzero element of the orthogonal complement of $\ker(T)$, and let $v$ be a suitably-scaled element of the one-dimensional space $\mathrm{im}(T)$.

• The fact that every linear map $T: X \to Y$ is a linear combination of rank-one maps.

To prove this, just observe that the identity on $Y$ is a sum of rank-one maps, then compose.

Alternatively, one could just argue that $Hom(X, Y) \cong X^\ast \otimes Y$, so $T$ is a linear combination of maps of the form $u \otimes v$, which is what you wanted anyway. (No need to mention rank at all.)

To round out the picture, under the isomorphism $X \otimes Y \cong Hom(X, Y)$, the adjoint of $x \otimes y$ is $y \otimes x$. Moreover, in the case $X = Y$, we can only have $x \otimes y = y \otimes x$ if one of $x$ and $y$ is a scalar multiple of the other. So the rank-one self-adjoint operators on $X$ are exactly the operators of the form $x \otimes x$ — as you said at the start of this thread!
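The last point, that the adjoint of $x \otimes y$ is $y \otimes x$, is transparent in matrix form over $\mathbb{R}$, where $x \otimes y : z \mapsto \langle z, y \rangle x$ is the matrix $x y^T$. A tiny deterministic check:

```python
import numpy as np

x = np.array([1.0, 2.0, 0.0])
y = np.array([0.0, 1.0, 3.0])

# Over R, the operator x (x) y : z |-> <z, y> x is the matrix x y^T.
XY = np.outer(x, y)

# Its adjoint (here: the transpose) is y (x) x = y x^T:
assert np.array_equal(XY.T, np.outer(y, x))

# So x (x) x is always self-adjoint...
assert np.array_equal(np.outer(x, x).T, np.outer(x, x))
# ...while x (x) y with x, y not proportional is not:
assert not np.array_equal(XY, XY.T)
```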

Posted by: Tom Leinster on December 14, 2016 1:22 PM

### Re: Quarter-Turns

It’s worth mentioning the second most famous quarter-turn after the number $i$: the Fourier transform!

$F^4 = 1$

The Fourier transform is just the quantization of multiplication by $i$, so it’s not really a different example. Nonetheless, learning how they’re the same is quite a fascinating adventure.

Posted by: John Baez on December 5, 2016 6:16 AM

### Re: Quarter-Turns

That’s quite a striking statement! Pray tell, where may one set off on such an adventure?

Posted by: qf on December 6, 2016 11:47 PM

### Re: Quarter-Turns

Take a look at John’s lecture notes *Quantization and Categorification - Fall 03*. The quarter-turn is explicitly mentioned on pp. 62–63 and p. 68.

Did it crop up elsewhere in the seminars?

Posted by: David Corfield on December 13, 2016 9:24 AM

### Re: Quarter-Turns

I think it’s common knowledge that second quantization functorially gives a Hilbert space $K(H)$ for any Hilbert space $H$ and a unitary operator $K(U): K(H) \to K(H)$ for any unitary $U: H \to H$; this is what Edward Nelson meant when he said

Quantization is a mystery, but second quantization is a functor.

When we take $H = \mathbb{C}$ there’s a famous isomorphism between $K(H)$ and $L^2(\mathbb{R})$. Using this, and taking the unitary $U$ to be multiplication by $i$, the unitary $K(U) : K(H) \to K(H)$ corresponds to the Fourier transform $F: L^2(\mathbb{R}) \to L^2(\mathbb{R})$. Since second quantization is a functor, $i^4 = 1$ implies $F^4 = 1$.
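A finite-dimensional shadow of $F^4 = 1$ is easy to verify directly: the unitary discrete Fourier transform matrix on $\mathbb{C}^n$ has fourth power equal to the identity, with $F^2$ the flip $x_k \mapsto x_{-k \bmod n}$. (This is the DFT analogue of the statement about $L^2(\mathbb{R})$, not anything from the seminar notes.)

```python
import numpy as np

n = 8
# The unitary discrete Fourier transform matrix on C^n.
j, k = np.meshgrid(np.arange(n), np.arange(n))
F = np.exp(-2j * np.pi * j * k / n) / np.sqrt(n)

assert np.allclose(F.conj().T @ F, np.eye(n))                # unitary
assert np.allclose(np.linalg.matrix_power(F, 4), np.eye(n))  # F^4 = 1

# F^2 is the order-two flip x |-> (x_0, x_{n-1}, ..., x_1):
F2 = np.linalg.matrix_power(F, 2)
x = np.arange(n, dtype=float)
assert np.allclose(F2 @ x, np.concatenate(([x[0]], x[:0:-1])))
```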

What’s a good place to learn this stuff? I gave a pretty thorough introduction to second quantization in my Fall 2003 seminar and David spotted the place where I explained this. I learned a lot of this material from my thesis advisor Irving Segal, and one can see it spelled out in a painfully general and rigorous way in our book:

But it should be in some easier books. Perhaps

• Gerald Folland, Harmonic Analysis on Phase Space.

has it — I forget! It deserves to be on Wikipedia but I don’t see it.

Posted by: John Baez on December 16, 2016 6:04 PM

### Re: Quarter-Turns

Sorry to be late to the party. In a jet-lag fug I thought I had posted a comment last week, but apparently not. Anyway, is the terminology standard? Two things stand out that seem to run against the impression you might get from the name. Firstly, four quarter-turns do not necessarily make a whole turn and, secondly, a rotation by 90 degrees about an axis in 3-space is not a quarter-turn!

Posted by: Simon Willerton on December 16, 2016 10:41 AM

### Re: Quarter-Turns

No, I just made it up. I agree that it has the disadvantages that you mention. Any better ideas?

Posted by: Tom Leinster on December 16, 2016 1:03 PM

### Re: Quarter-Turns

Here’s some standard stuff, which overlaps in an interesting way with Tom’s discoveries.

The usual term for an operator $T$ on either a real or complex Hilbert space obeying $T^\ast = -T$ is skew-adjoint, and we should definitely keep that. An operator with $T^\ast = T^{-1}$ is called orthogonal or unitary depending on whether it’s acting on a real or complex Hilbert space; sometimes people want a word that covers both cases, but none has caught on. Let me say ‘unitary’. If $T$ is unitary and skew-adjoint it clearly obeys $T^4 = 1$. If you have a unitary skew-adjoint operator on a real Hilbert space, you can use it as ‘multiplication by $i$’ to get a complex Hilbert space, so it’s often called a complex structure.
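The standard example of such a complex structure is rotation by a quarter turn in the plane, which satisfies every identity above, as a quick check confirms:

```python
import numpy as np

# The standard complex structure on R^2: rotation by 90 degrees.
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])

assert np.array_equal(J.T, -J)              # skew-adjoint
assert np.array_equal(J.T @ J, np.eye(2))   # orthogonal ("unitary")
assert np.array_equal(J @ J, -np.eye(2))    # J^2 = -1, so J acts like i
assert np.array_equal(np.linalg.matrix_power(J, 4), np.eye(2))  # J^4 = 1
```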

Posted by: John Baez on December 16, 2016 6:14 PM

### Re: Quarter-Turns

Posted by: Layra Idarani on December 17, 2016 12:56 AM

### Re: Quarter-Turns

Posted by: Jesse C. McKeown on December 17, 2016 5:16 AM
