## August 22, 2013

### Linear Operators Done Right

#### Posted by Tom Leinster

A conversation prompted by Simon’s last post reminded me of an analogy that’s too excellent to be buried in a comments thread. It must be very well-known, but I’ll go ahead and describe it anyway.

The analogy is between complex numbers and linear operators on an inner product space. Its best feature is that it makes important properties of complex numbers correspond to important properties of operators: conjugation corresponds to taking adjoints, the real numbers correspond to the self-adjoint operators, the nonnegative reals to the positive operators, the complex numbers of unit modulus to the isometries, and the modulus and polar decomposition of a complex number to their operator analogues.

The title of this post refers to Sheldon Axler’s beautiful book Linear Algebra Done Right, which I’ve written about before. Most of what I’ll say can be found in Chapter 7. It’s one of those texts that feels like a piece of category theory even though it’s not actually about categories.

Today, all vector spaces are over $\mathbb{C}$ and finite-dimensional. Most (all?) of what I’ll say can be done in more sophisticated functional-analytic settings, but I’ll stick to this most basic of situations.

Fix a vector space $X$ equipped with an inner product. By an operator on $X$, I mean a linear map $X \to X$.

Here’s how the analogy goes.

**Complex numbers are like operators**  This is the basis of everything that follows.

There’s not much substance to this statement yet. For now, let’s just observe that both the complex numbers and the operators on $X$ form rings. I’ll write $End(X)$ for the ring of operators on $X$, following the usual categorical custom. (“End” stands for “endomorphisms”.)

The two rings $\mathbb{C}$ and $End(X)$ don’t seem very similar. Unlike $\mathbb{C}$, the ring $End(X)$ isn’t commutative and usually has nontrivial zero-divisors. (Indeed, as long as $dim(X) \geq 2$, there is some $T \in End(X)$ with $T \neq 0$ but $T^2 = 0$.) Perhaps surprisingly, these differences don’t prevent the development of this useful analogy.

In some loose sense, we can pass back and forth between $\mathbb{C}$ and $End(X)$. In one direction, starting with a complex number $\lambda$, we get the operator $x \mapsto \lambda x$. In elementary texts, this operator is often written as $\lambda I$, but I’ll almost always write it as just $\lambda$.

In the opposite direction, starting with an operator on $X$, we get not just a single complex number but a collection of them — namely, its eigenvalues.

**Complex conjugates are like adjoints**  Every complex number $z$ has a complex conjugate $z^\ast$. Taking complex conjugates defines a self-inverse automorphism of the ring $\mathbb{C}$.

Every linear map $T: X \to Y$ of inner product spaces has an adjoint $T^\ast: Y \to X$, characterized by the equation $\langle T x, y \rangle = \langle x, T^\ast y \rangle$. In particular, every operator $T$ on $X$ has an adjoint $T^\ast$, also an operator on $X$.

It’s almost true that taking adjoints defines a self-inverse automorphism of $End(X)$. The only obstruction is that taking adjoints reverses the order of composition: $(T S)^\ast = S^\ast T^\ast$. So actually, taking adjoints defines a pair of mutually inverse ring isomorphisms

$End(X)^{op} \rightleftarrows End(X)$

where $End(X)^{op}$ is the ring $End(X)$ with its order of multiplication reversed.
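As a concrete sanity check: with the standard inner product on $\mathbb{C}^n$, the adjoint of a matrix is its conjugate transpose, and the order-reversal $(T S)^\ast = S^\ast T^\ast$ is easy to verify numerically. A small NumPy sketch (the random matrices and helper names are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two arbitrary operators on X = C^3, as complex matrices.
T = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
S = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

def adjoint(A):
    # With the standard inner product, the adjoint is the conjugate transpose.
    return A.conj().T

def inner(x, y):
    # <x, y> = sum_i x_i conj(y_i); np.vdot conjugates its first argument.
    return np.vdot(y, x)

x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
y = rng.standard_normal(3) + 1j * rng.standard_normal(3)

# The defining property of the adjoint: <Tx, y> = <x, T* y>.
assert np.isclose(inner(T @ x, y), inner(x, adjoint(T) @ y))

# Taking adjoints reverses composition: (TS)* = S* T*.
assert np.allclose(adjoint(T @ S), adjoint(S) @ adjoint(T))
```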

What about those back-and-forth passages between complex numbers and operators?

First, start with a complex number $\lambda$; then the adjoint of the operator $\lambda I$ is $\lambda^\ast I$. That is, $(\lambda I)^\ast = \lambda^\ast I$. This is why I’m writing $z^\ast$ for the complex conjugate of $z$, rather than the more common $\bar{z}$.

Second, start with an operator $T$. Then the eigenvalues of $T^\ast$ are exactly the conjugates of the eigenvalues of $T$. Why? Because taking the adjoint defines an isomorphism of rings, so $T - \lambda$ is invertible iff $(T - \lambda)^\ast = T^\ast - \lambda^\ast$ is.
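Here's a quick numerical check of this eigenvalue fact, again a NumPy sketch with an arbitrary random matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

eig_T = np.linalg.eigvals(T)
eig_Tstar = np.linalg.eigvals(T.conj().T)

# The eigenvalues of T* are exactly the conjugates of those of T.
assert np.allclose(np.sort_complex(eig_Tstar), np.sort_complex(eig_T.conj()))
```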

**Real numbers are like self-adjoint operators**  A complex number $z$ is real if and only if $z = z^\ast$. By definition, an operator $T$ is self-adjoint if and only if $T = T^\ast$.

Again, let’s look at the passages back and forth between $\mathbb{C}$ and $End(X)$. First, let $\lambda \in \mathbb{C}$. As long as $X$ is nontrivial, the operator $\lambda$ is self-adjoint iff $\lambda$ is real.

Second, if $T$ is a self-adjoint operator then all its eigenvalues are real. The converse isn’t true: an operator can have all real eigenvalues without being self-adjoint. We’ll come back to that.

Any even half-serious endeavour involving self-adjoint operators makes use of the theorem that classifies them, the spectral theorem. Loosely put, this states that every self-adjoint operator is an orthogonal sum of self-adjoint operators of the simplest kind: scalar multiplication by a real number.

Precisely: given any self-adjoint operator $T$, there is a unique orthogonal decomposition $X = \bigoplus_{\lambda \in \mathbb{R}} X_\lambda$ such that for each $\lambda$, the restriction of $T$ to $X_\lambda$ is multiplication by $\lambda$. Of course, all but finitely many of these subspaces $X_\lambda$ are trivial; the nontrivial ones are those for which $\lambda$ is an eigenvalue, and in that case $X_\lambda$ is the eigenspace $ker(T - \lambda)$.
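Numerically, this is exactly what `numpy.linalg.eigh` computes for a Hermitian matrix: real eigenvalues together with an orthonormal basis of eigenvectors. A sketch (the random matrix is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
T = A + A.conj().T          # a self-adjoint operator

# eigh diagonalizes a Hermitian matrix: real eigenvalues,
# orthonormal eigenvectors (the columns of V).
lam, V = np.linalg.eigh(T)

assert np.all(np.isreal(lam))                          # eigenvalues are real
assert np.allclose(V.conj().T @ V, np.eye(4))          # eigenvectors orthonormal
assert np.allclose(V @ np.diag(lam) @ V.conj().T, T)   # T acts as lambda on each eigenspace
```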

**Nonnegative real numbers are like positive operators**  For a complex number $z$, the following are equivalent:

• (1) $z$ is nonnegative, i.e. real and $\geq 0$
• (2) $z = w^\ast w$ for some complex $w$
• (3) $z = w^\ast w$ ($= w^2$) for some real $w$
• (4) $z = w^\ast w$ ($= w^2$) for some nonnegative $w$
• (5) $z = w^\ast w$ ($= w^2$) for a unique nonnegative $w$.

I’ll follow custom and say that an operator $T$ is positive if it is self-adjoint and each eigenvalue is $\geq 0$. (Other names are “positive semidefinite” and “nonnegative definite”. As we were recently discussing, the terminology around positive/nonnegative is a bit of a mess.) Note that by definition, “positive” includes “self-adjoint”. This is just like the convention that when we call a complex number “nonnegative”, we tacitly include the condition “real”.

For an operator $T$ on $X$, the following are equivalent:

• (1) $T$ is positive, i.e. self-adjoint and each eigenvalue is $\geq 0$
• (1.5) $T$ is self-adjoint and $\langle T x, x \rangle \geq 0$ for all $x \in X$
• (2) $T = S^\ast S$ for some inner product space $Y$ and linear map $S: X \to Y$
• (2.5) $T = S^\ast S$ for some operator $S$ on $X$
• (3) $T = S^\ast S$ ($= S^2$) for some self-adjoint operator $S$
• (4) $T = S^\ast S$ ($= S^2$) for some positive operator $S$
• (5) $T = S^\ast S$ ($= S^2$) for a unique positive operator $S$.

The implications $5 \Rightarrow 4 \Rightarrow \cdots \Rightarrow 1$ are all either trivial or easy. The remaining implication, $1 \Rightarrow 5$, follows from the spectral theorem, using $1 \Rightarrow 5$ of the result on nonnegativity of numbers.

In particular, given $\lambda \in \mathbb{C}$, the operator $\lambda$ is positive iff the number $\lambda$ is nonnegative (assuming that $X$ is nontrivial). And given an operator $T$, if $T$ is positive then each eigenvalue of $T$ is nonnegative (but not conversely).
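The construction behind $1 \Rightarrow 5$ can be carried out numerically: diagonalize the positive operator and take the nonnegative square root of each eigenvalue. A NumPy sketch (the random matrix is arbitrary; the small tolerance absorbs rounding error):

```python
import numpy as np

rng = np.random.default_rng(3)
S = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
T = S.conj().T @ S                # T = S*S is positive

lam, V = np.linalg.eigh(T)
assert np.all(lam >= -1e-12)      # eigenvalues nonnegative, up to rounding

# The unique positive square root, built from the spectral decomposition:
# take the nonnegative square root of each eigenvalue.
R = V @ np.diag(np.sqrt(np.clip(lam, 0, None))) @ V.conj().T

assert np.allclose(R, R.conj().T)   # R is self-adjoint
assert np.allclose(R @ R, T)        # R^2 = T
```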

**The modulus of a complex number is like… the modulus of an operator?**  What is the modulus of a complex number? Let’s answer this carefully, using the theorem above on nonnegativity of complex numbers. Let $z \in \mathbb{C}$. By the theorem, $z^\ast z$ is nonnegative, so by the theorem again, there is a unique nonnegative $m$ such that $z^\ast z = m^\ast m$ ($= m^2$). This $m$ is, of course, $\left|z\right|$, the modulus of $z$.

What is the analogue for operators? Let’s use the theorem above on positivity of operators. Let $T \in End(X)$. By the theorem, $T^\ast T$ is positive, so by the theorem again, there is a unique positive $M$ such that $T^\ast T = M^\ast M$ ($= M^2$). I’ll call $M$ the modulus of $T$ and write it as $\left|T\right|$. I don’t know whether the term “modulus” is standard here, and I’m pretty sure the notation $\left|T\right|$ isn’t — it’s risky, given the potential for confusion with a norm. But I’ll use it anyway, to emphasize the analogy.

**Complex numbers of unit modulus are like isometries**  A complex number $z$ has unit modulus if and only if $z^\ast z = 1$, if and only if $z z^\ast = 1$. An operator $T$ is an isometry if and only if $T^\ast T = 1$, if and only if $T T^\ast = 1$ (if and only if $T$ preserves inner products, if and only if $T$ preserves distances). Isometries are more often called unitary operators, but I find the term “isometry” more vivid.

Now that we have a definition of “modulus” for operators, we can ask: which operators are literally “of unit modulus”? In other words, which operators $T$ satisfy $\left|T\right| = 1$? Here $1$ is the identity operator. Certainly $1$ is positive, so $\left|T\right| = 1$ if and only if $T^\ast T = 1^\ast 1$, if and only if $T$ is an isometry. So the different parts of the analogy hang together nicely.
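Putting the last two sections together in code: here's a NumPy sketch of the modulus $\left|T\right|$, checking that an isometry (a unitary matrix from a QR factorization) has modulus $1$, while a generic operator does not. The function name `modulus` is my own, matching the post's terminology:

```python
import numpy as np

def modulus(T):
    """|T|: the unique positive square root of T*T."""
    lam, V = np.linalg.eigh(T.conj().T @ T)
    return V @ np.diag(np.sqrt(np.clip(lam, 0, None))) @ V.conj().T

rng = np.random.default_rng(4)

# An isometry (unitary matrix), from the QR factorization of a random matrix.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))
assert np.allclose(modulus(Q), np.eye(3))    # |Q| = 1: Q is "of unit modulus"

# A generic operator has modulus != 1.
T = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
assert not np.allclose(modulus(T), np.eye(3))
```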

Once again, let’s go back and forth between complex numbers and operators. Given $\lambda \in \mathbb{C}$, the operator $\lambda$ is an isometry iff the number $\lambda$ is of unit modulus (again, assuming that $X$ is nontrivial). Given an operator $T$, if $T$ is an isometry then all its eigenvalues are of unit modulus. Again, the converse is false, and again, we’ll come back to that.

**Polar decomposition of complex numbers and operators**  Any complex number $z$ can be expressed as a product

$z = u p$

where $u$ is of unit modulus and $p$ is nonnegative. Moreover, this $p$ is uniquely determined as $\left|z\right|$, and if $z \neq 0$ then $u$ is uniquely determined by $z$ too. (If $z = 0$ then many choices of $u$ are possible.)

Similarly, it’s a theorem that any operator $T$ can be expressed as a composite

$T = U P$

where $U$ is an isometry and $P$ is positive. Moreover, this $P$ is uniquely determined as $\left| T \right|$, and if $T$ is invertible then $U$ is uniquely determined by $T$ too. (If $T$ is not invertible then many choices of $U$ are possible.)

In the case where $T$ is just multiplication by a scalar $z$, the second theorem (polar decomposition of operators) reduces to the first (polar decomposition of complex numbers).

If you prefer, you can decompose an operator in the other order too: an isometry followed by a positive operator. To see this, decompose $T^\ast$ as $U P$; then $T = P^\ast U^\ast = P U^\ast$. But $U^\ast$ is an isometry, since the adjoint of an isometry is again an isometry — just as the conjugate of a complex number of unit modulus is again of unit modulus.
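Polar decomposition can be read off from the singular value decomposition $T = U \Sigma V^\ast$: take the isometry $W = U V^\ast$ and the positive part $P = V \Sigma V^\ast$ ($= \left|T\right|$). A NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(5)
T = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# From the SVD T = U diag(sigma) V*, form the isometry W = U V*
# and the positive operator P = V diag(sigma) V*.
U, sigma, Vh = np.linalg.svd(T)
W = U @ Vh                               # isometry (unitary)
P = Vh.conj().T @ np.diag(sigma) @ Vh    # positive: this is |T|

assert np.allclose(W.conj().T @ W, np.eye(3))    # W is an isometry
assert np.allclose(P, P.conj().T)                # P is self-adjoint...
assert np.all(np.linalg.eigvalsh(P) >= -1e-12)   # ...with nonnegative eigenvalues
assert np.allclose(W @ P, T)                     # T = W P
```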

And that’s the analogy.

### Normal operators, and the fraying of the analogy

Like all analogies, this one eventually frays. Right at the start, we noted a big difference between complex numbers and operators: multiplying complex numbers is commutative, but composing operators isn’t. And another one: there are no nonzero nilpotent complex numbers, but there are nonzero nilpotent operators.

I’ll explain the trouble this causes by talking about operators $T$ that satisfy the equation $T^\ast T = T T^\ast$. In a fit of no inspiration, someone once called such operators normal, and the name stuck.

Now, all complex numbers $z$ are “normal”, in the sense that $z^\ast z = z z^\ast$, but not all operators $T$ are normal — for example, any nonzero nilpotent is “abnormal”. So this is a wrinkle in the analogy. You might conclude from this that the correct analogue for the complex numbers is not the set of all operators, but just the normal ones. This idea has in its favour that all self-adjoint operators and isometries (“real numbers” and “numbers of unit modulus”) are normal — because an operator commutes with both itself and its inverse.

However, the normal operators don’t form a ring, at least, not under the usual operations. The class of normal operators is closed under taking polynomials in one variable, but not under composition. Indeed, the polar decomposition theorem implies that by composing two normal operators, we can obtain any operator we like.
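To see this failure of closure concretely, run the polar decomposition backwards on a non-normal operator such as a nilpotent shift: both polar factors are normal, but their composite is not. A NumPy sketch (the helper `is_normal` and its tolerance are my own):

```python
import numpy as np

def is_normal(T, tol=1e-10):
    return np.allclose(T @ T.conj().T, T.conj().T @ T, atol=tol)

# A nilpotent shift: not normal.
T = np.array([[0.0, 1.0],
              [0.0, 0.0]])

# Split it as T = W P with W unitary and P positive, via the SVD.
U, sigma, Vh = np.linalg.svd(T)
W = U @ Vh
P = Vh.conj().T @ np.diag(sigma) @ Vh

assert is_normal(W) and is_normal(P)   # each factor is normal
assert np.allclose(W @ P, T)           # their composite is T...
assert not is_normal(W @ P)            # ...which is not normal
```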

The normal operators are nevertheless a useful class, giving further depth to the analogy. I clearly remember the first time I saw the definition of normal operator: I was overwhelmed by the feeling that it was an awful hack. “Someone,” I thought to myself, “simply wants a definition that includes both self-adjoint operators and isometries, and they’ve written down the first thing that came into their head.” Oh young, foolish self; I was wrong. Here’s why:

Normal operators are exactly the right context for the spectral theorem.

Recall that for an operator $T$, the spectral theorem says that $X$ is the orthogonal sum of the eigenspaces of $T$. This statement isn’t true for all operators. Earlier on, I stated that it was true for all self-adjoint operators, and that in that case, all the eigenvalues are real. But there are certainly non-self-adjoint operators such that $X$ is the orthogonal sum of the eigenspaces — multiplication by any non-real scalar is an example.

So which operators is the spectral theorem true for? Exactly the normal ones. In other words:

**Spectral theorem**  Let $T$ be an operator on $X$. Then $X$ is the orthogonal sum of the eigenspaces of $T$ if and only if $T$ is normal.

This says that multiplication by a scalar is a normal operator, that the class of normal operators is closed under orthogonal sums, and that combining these two constructions generates all possible normal operators. ‘Only if’ is easy; it’s ‘if’ that takes work. You can find a proof in Linear Algebra Done Right.
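A numerical illustration: a rotation of the plane is normal but not self-adjoint, its eigenvalues are non-real, and it still has an orthonormal basis of eigenvectors over $\mathbb{C}$, while a nonzero nilpotent does not. A NumPy sketch:

```python
import numpy as np

def is_normal(T, tol=1e-10):
    return np.allclose(T @ T.conj().T, T.conj().T @ T, atol=tol)

# A rotation: normal, not self-adjoint, eigenvalues e^{±i theta}.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert is_normal(R)

lam, V = np.linalg.eig(R)
assert np.allclose(V.conj().T @ V, np.eye(2))        # orthonormal eigenvectors
assert np.allclose(V @ np.diag(lam) @ V.conj().T, R)

# A nonzero nilpotent is not normal (and has no such basis).
N = np.array([[0.0, 1.0], [0.0, 0.0]])
assert not is_normal(N)
```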

We can read off two corollaries, both supporting the claim that “complex numbers are like normal operators” is a better analogy than “complex numbers are like operators”.

**Corollary**  Let $T$ be a normal operator. Then (i) $T$ is self-adjoint if and only if all eigenvalues of $T$ are real, and (ii) $T$ is an isometry if and only if all eigenvalues of $T$ are of unit modulus.

We saw earlier that without the normality, the “only if” parts are true but the “if” parts fail.

**Fundamental theorem of algebra for normal operators**  Let $p$ be a nonconstant polynomial over $\mathbb{C}$, and let $T$ be a normal operator. Then there exists a normal operator $S$ such that $p(S) = T$.

For both proofs, all we have to do is observe that the class of operators $T$ for which the result holds contains all operators of the form “multiply by a scalar” and is closed under orthogonal sums. That’s all there is to it!
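For instance, solving $p(S) = T$ with $p(z) = z^2$ for a normal $T$ amounts to diagonalizing $T$ and taking a complex square root of each eigenvalue. A NumPy sketch (the rotation matrix is an arbitrary normal example):

```python
import numpy as np

# A normal operator: a rotation, with eigenvalues e^{±i theta}.
theta = 1.1
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# T = V diag(lam) V* with V unitary; apply a complex square root
# to each eigenvalue and reassemble.
lam, V = np.linalg.eig(T)
S = V @ np.diag(np.sqrt(lam.astype(complex))) @ V.conj().T

assert np.allclose(S @ S.conj().T, S.conj().T @ S)   # S is normal
assert np.allclose(S @ S, T)                         # p(S) = S^2 = T
```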

Posted at August 22, 2013 3:07 AM UTC


### Re: Linear Operators Done Right

Two quick comments: first, the complex numbers and the algebra of operators on a Hilbert space are both $\ast$-algebras and that seems like the main point here. Second, presumably you already know this, but normality is precisely the condition that the $\ast$-subalgebra generated by an element is commutative, from which a version of the spectral theorem follows by Gelfand-Naimark.

Posted by: Qiaochu Yuan on August 22, 2013 5:12 AM | Permalink | Reply to this

### Re: Linear Operators Done Right

presumably you already know this, but normality is precisely the condition that the $\ast$-subalgebra generated by an element is commutative, from which a version of the spectral theorem follows by Gelfand-Naimark.

I didn’t know this, and I don’t think I’d heard of $\ast$-algebras before. Thanks.

Can you expand on that last bit, “a version of the spectral theorem follows by Gelfand–Naimark”?

Posted by: Tom Leinster on August 22, 2013 12:07 PM | Permalink | Reply to this

### Re: Linear Operators Done Right

Sorry for the late reply. In the case of finite-dimensional matrices, the observation is that if an operator is normal then the $\ast$-subalgebra it generates is a commutative $C^\ast$-algebra, hence by Gelfand-Naimark is the algebra of functions on some compact Hausdorff space (its Gelfand spectrum). But since it’s finite-dimensional, the compact Hausdorff space must be finite, and from here it’s straightforward to show that the Gelfand spectrum must be the spectrum of the operator in the usual sense. (But of course classifying finite-dimensional $C^\ast$-algebras is easier than this.)

Posted by: Qiaochu Yuan on September 9, 2013 7:49 PM | Permalink | Reply to this

### Re: Linear Operators Done Right

Thanks, and thanks to John, who wrote something similar (which I meant to reply to) below.

Most of what I didn’t understand in your comment was down to different uses of the phrase “spectral theorem”. As a beginning undergraduate, I was taught that the spectral theorem for such-and-such operators on a finite-dimensional vector space is the statement that every such-and-such operator has an orthonormal basis of eigenvectors. This looks pretty different from what you and John are calling the spectral theorem, though I can see that with some work, the undergraduate one can be deduced from the more abstract one.

Posted by: Tom Leinster on September 10, 2013 5:23 PM | Permalink | Reply to this

### Re: Linear Operators Done Right

One less fuzzy connection between numbers and operators is Morita’s equivalence: the two algebras have equivalent categories of modules. Can you work that into the nice story you told?

Posted by: James Borger on August 22, 2013 8:02 AM | Permalink | Reply to this

### Re: Linear Operators Done Right

That sounds like an excellent question. I don’t know. Maybe Qiaochu does…?

Posted by: Tom Leinster on August 22, 2013 12:26 PM | Permalink | Reply to this

### Re: Linear Operators Done Right

Morita equivalence goes beyond anything that is happening here. Most of the story so far applies without change to arbitrary $C^*$-algebras, which are most certainly not all Morita equivalent.

Posted by: Erik Crevier on August 23, 2013 12:02 AM | Permalink | Reply to this

### Re: Linear Operators Done Right

I see. So is the analogy described in my post the bread and butter of the theory of $C^*$-algebras?

Posted by: Tom Leinster on August 23, 2013 2:22 AM | Permalink | Reply to this

### Re: Linear Operators Done Right

Yep. The appropriate generalization to a $C^\star$-algebra is “a normal element is like a complex number”. Normality in this context is still the same algebraic condition as for operators. The fact that makes the analogy work is called the commutative Gelfand-Naimark theorem.

One form of this theorem says that there is a contravariant equivalence of categories between commutative unital $C^\star$-algebras and compact Hausdorff spaces.

To go from spaces to $C^\star$-algebras, you map $X$ to the algebra of continuous complex-valued functions $C(X)$. To go the other way, you send $A$ to the space $Spec(A)$ of homomorphisms $A\to\mathbb{C}$ equipped with the product topology.

Now if $x$ an element of some $C^\star$-algebra $A$, then we can look at the smallest closed $\star$-subalgebra $C^\star(x)$ of $A$ containing $x$ and $1$. This subalgebra is commutative exactly when $x$ is normal. It also turns out that the spectrum of $C^\star(x)$ can be identified with the spectrum of $x$ ie the set of all complex numbers $\lambda$ such that $x-\lambda$ is not invertible.

So if $x$ is normal, you can use Gelfand-Naimark to pretend that $x$ is a continuous function $\hat{x}$ on its spectrum. In particular, if you have any continuous function $f:Spec(x)\to\mathbb{C}$, then you can compose $f$ with $\hat{x}$ to get another element of $C(Spec(x))$. This corresponds to another element of $C^\star(x)$ which is denoted by $f(x)$.

This whole yoga gives a nice way of applying complex functions to normal elements which has very nice formal properties. The resulting machine is called the “continuous functional calculus”. It often allows you to get away with treating normal elements as if they are complex numbers.

If you work out what all of this means for a finite dimensional normal operator, you’ll find that the definition of $f(T)$ amounts to taking $T$, diagonalizing it, and then applying $f$ to all its eigenvalues. In finite dimensions, you can actually recover the spectral theorem directly from the continuous functional calculus. It takes more work in infinite dimensions, but morally speaking it’s fair to say that Gelfand-Naimark is responsible for the spectral theorem for bounded normal operators.
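For a finite-dimensional illustration of this functional calculus: computing $f(T)$ by applying $f$ to the eigenvalues of a normal $T$ agrees, for $f = \exp$, with the usual power series $\sum_k T^k / k!$. A NumPy sketch (the 20-term truncation of the series is arbitrary):

```python
import math
import numpy as np

# A normal operator: a rotation of the plane.
theta = 0.3
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Functional calculus: f(T) = V diag(f(lam)) V*, i.e. diagonalize
# and apply f to the eigenvalues.
lam, V = np.linalg.eig(T)
exp_T = V @ np.diag(np.exp(lam)) @ V.conj().T

# Compare with a truncation of the power series sum_k T^k / k!.
series = sum(np.linalg.matrix_power(T, k) / math.factorial(k) for k in range(20))
assert np.allclose(exp_T, series)
```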

Posted by: Erik Crevier on August 23, 2013 3:40 AM | Permalink | Reply to this

### Re: Linear Operators Done Right

Thanks very much for the nice explanation. It’s a bit embarrassing not to know this stuff already, but better late than never. Actually, I did know the commutative Gelfand-Naimark theorem, but not the proof, and I hadn’t seen how it was connected to what I was writing about.

A couple of questions:

So if $x$ is normal, you can use Gelfand-Naimark to pretend that $x$ is a continuous function $\hat{x}$ on its spectrum.

Let me check I understand this. By definition, $Spec(C^\star(x))$ is the set of homomorphisms $C^\star(x) \to \mathbb{C}$, with the product topology. So there’s a continuous map $ev_x: Spec(C^\star(x)) \to \mathbb{C}$ given by evaluation at $x$.

On the other hand, you mentioned that $Spec(C^\star(x)) \cong Spec(x)$. Starting with $\lambda \in Spec(x)$, I guess the corresponding element of $Spec(C^\star(x))$ is the unique homomorphism $C^\star(x) \to \mathbb{C}$ that sends $x$ to $\lambda$.

So we have $Spec(x) \cong Spec(C^\star(x)) \stackrel{ev_x}{\to} \mathbb{C}$ and I guess $\hat{x}$ is the composite $Spec(x) \to \mathbb{C}$. But that means $\hat{x}$ is just the inclusion $Spec(x) \hookrightarrow \mathbb{C}$! Is that right?

In particular, if you have any continuous function $f: Spec(x) \to \mathbb{C}$, then you can compose $f$ with $\hat{x}$ to get another element of $C(Spec(x))$.

I don’t understand this. Both $f$ and $\hat{x}$ are functions $Spec(x) \to \mathbb{C}$, so they can’t be composed: the domain of one is not the codomain of the other. Well, maybe I’m interpreting things too strictly. I suppose we could say that since $\hat{x}$ is an inclusion, it’s essentially an identity (on $Spec(x)$). But then $f(x)$ is simply $f$. That doesn’t seem right.

Wikipedia says the same thing about composing, and I’m not getting any joy from the various books I’ve looked in, either.

Posted by: Tom Leinster on August 23, 2013 6:03 AM | Permalink | Reply to this

### Re: Linear Operators Done Right

Starting with $\lambda\in Spec(x)$, I guess the corresponding element of $Spec(C^\star(x))$ is the unique homomorphism $C^\star(x)\to\mathbb{C}$ that sends $x$ to $\lambda$.

This is correct. Maybe I should explain a little bit on why this homomorphism always exists, because it isn’t exactly obvious. The point is that if $\lambda$ is an element of $Spec(x)$, then $x-\lambda$ is non-invertible and thus contained in some maximal ideal. It’s a nice theorem that whenever you quotient a Banach algebra by a maximal ideal, you get a copy of $\mathbb{C}$, and this gives a homomorphism to $\mathbb{C}$ which maps $x-\lambda$ to $0$.

But this means that $\hat{x}$ is just the inclusion $Spec(x)\hookrightarrow\mathbb{C}$!

I sort of wish that I had defined $\hat{x}$ to be a continuous function on $Spec(C^\star(x))$ instead of on $Spec(x)$. Of course you can switch back and forth between the two easily enough, but the former is really more proper. That’s what you need to define $\hat{x}$ as if $x$ is just given as some element of an arbitrary commutative $C^\star$-algebra. The fact that we are playing with the subalgebra generated by $x$ in some larger $C^\star$-algebra makes things a bit more confusing.

If you do decide to treat $\hat{x}$ as a function on $Spec(x)$, then you’re right that it is just the inclusion into $\mathbb{C}$. To help see why this is what you should expect, try playing around with a diagonal matrix $D$. Then you can get any other diagonal matrix by applying some function $f:Spec(D)\to\mathbb{C}$ to $D$, and you get back $D$ by taking $f$ to be the inclusion.

Both $f$ and $\hat{x}$ are functions $Spec(x)\to\mathbb{C}$, so they can’t be composed.

The trick is that the range of $\hat{x}$ is exactly $Spec(x)$. This is obvious when you are viewing $\hat{x}$ as the inclusion map $Spec(x)\to\mathbb{C}$. Because of this, you actually can form the composition $f\circ\hat{x}$, and it gives a function $Spec(C^*(x))\to\mathbb{C}$.

But then $f(x)$ is simply $f$.

Not quite. Remember that $x$ isn’t really a function on $Spec(x)\cong Spec(C^\star(x))$. It’s just that we have an isomorphism

$C^\star(x)\cong C(Spec(C^\star(x)))\cong C(Spec(x))$

The image of $x$ under this isomorphism is $\hat{x}$, and we can compose $f$ with $\hat{x}$ to get another element of $C(Spec(C^\star(x)))$. When you turn this into a function on $Spec(x)$ using $Spec(x)\cong Spec(C^\star(x))$, then you do indeed get back $f$. What’s really interesting though, is to take

$f\circ\hat{x}\in C(Spec(C^\star(x)))$

and send it over to an element of $C^\star(x)$. That’s the definition of $f(x)$.

Posted by: Erik Crevier on August 23, 2013 8:52 AM | Permalink | Reply to this

### Re: Linear Operators Done Right

Thanks for the further patient explanation.

It’s a nice theorem that whenever you quotient a Banach algebra by a maximal ideal, you get a copy of $\mathbb{C}$

Yes! I just read the proof of that in Rudin’s book. I guess this theorem is part of what gives Gelfand duality a different flavour from other similar kinds of duality.

For instance, suppose we’re instead considering the prime spectrum of a commutative ring $A$. Then the isomorphism type of $A/p$ varies as $p$ varies within $Spec(A)$. This means we need to use sheaves rather than $\mathbb{C}$-valued functions. E.g. given $x \in A$, the thing $\hat{x}$ that we get is a global section of the structure sheaf on $Spec(A)$, rather than simply a ‘function on $Spec(A)$’ (which is meaningless anyway — where would such ‘functions’ take their value?).

Posted by: Tom Leinster on August 25, 2013 12:06 AM | Permalink | Reply to this

### Re: Linear Operators Done Right

I guess if you’re used to thinking about affine schemes, then Gelfand duality looks more like the baby case of affine varieties over algebraically closed fields. In that setting, we have an easy result roughly analogous to the Gelfand-Mazur theorem: If $A$ is a finitely generated $k$-algebra which is also a field, then $A\cong k$. So in both cases there’s only one kind of residue field to worry about, or dually there’s only one kind of point we need to probe our spaces with.

One thing I’m not clear on is if there’s a good high-level reason why it is enough to only look at maximal ideals/closed points in these two cases, while for affine schemes we need to look at all prime ideals. The best unifying explanation I can give of this is that MaxSpec is not a functor on CRing (because the pullback of a maximal ideal need not be maximal), but it is a functor on affine $k$-algebras and Banach algebras. I’m not really sure what this means geometrically though.

Posted by: Erik Crevier on August 25, 2013 4:23 AM | Permalink | Reply to this

### Re: Linear Operators Done Right

There is at least one thing in the main post which fails in infinite dimensions: the equivalence between $T^*T=1$ and $TT^*=1$. Typically, operators which satisfy $T^*T=1$ are called isometries, which is equivalent to preservation of the norm. Only operators satisfying both equations are called unitary, and then an operator is unitary if and only if it is a surjective isometry.

An example showing that these two things are not the same is this: take a separable infinite-dimensional Hilbert space with orthonormal basis indexed by the natural numbers. Then the operator which maps the $i$th basis vector $e_i$ to $e_{i+1}$ is an isometry; its adjoint is the operator which takes $e_i$ to $e_{i-1}$ for $i\geq 1$ and annihilates $e_0$. But clearly the shift itself is not unitary, since it is not surjective.

A categorification of this notion of isometry might be the inclusion functor of a (co-)reflective subcategory: composing such a functor with its adjoint yields the identity functor (up to natural iso). Or is this too naive?

Now I wonder: is there a category which is a (co-)reflective subcategory of itself in a non-trivial way? If so, which “finiteness” conditions on a category guarantee that such an inclusion functor is an equivalence, just like finite-dimensionality of a Hilbert space guarantees that every isometry is unitary?

Posted by: Tobias Fritz on August 22, 2013 3:00 PM | Permalink | Reply to this

### Re: Linear Operators Done Right

Just on the first point, every time I wrote “$T^\ast T = 1$”, that should really be understood as “$T^\ast T = T T^\ast = 1$”. Of course, in finite dimensions they’re equivalent.

In a similar way, “isometry” is really shorthand for “isometric isomorphism”. In all sorts of contexts, there’s a conflict between those who insist that isometries are by definition bijective and those who don’t. Again, in finite dimensions it makes no difference, since an isometry $X \to X$ is automatically injective and so surjective too.

I know you know all this. I just wanted to point out that I was being a little bit lazy in these ways. Were I to want to generalize to higher dimensions, I’d be more careful.

Now I wonder: is there a category which is a (co-)reflective subcategory of itself in a non-trivial way?

Yes, and there’s an example that’s more or less the same as your Hilbert space example. Take the poset $A = (\mathbb{N}, \leq)$ regarded as a category, and the full subcategory $B$ of strictly positive integers. The inclusion $B \hookrightarrow A$ has a left adjoint, $max\{1, -\}$, so $B$ is reflective in $A$. And, of course, $B \cong A$.

Posted by: Tom Leinster on August 22, 2013 3:19 PM | Permalink | Reply to this

### Re: Linear Operators Done Right

In fact, normal operators are not even closed under addition…

Posted by: Alexander Shamov on August 22, 2013 3:35 PM | Permalink | Reply to this

### Re: Linear Operators Done Right

I tried to think of an example of this that wouldn’t require me to write down any matrices. But I couldn’t. Do you know a nice one?

Posted by: Tom Leinster on August 22, 2013 5:10 PM | Permalink | Reply to this

### Re: Linear Operators Done Right

Every operator is the sum of its real part and $i$ times its imaginary part, both of which are normal. Pick a nilpotent shift and observe it can’t be normal.

(I also have comments regarding positive and non-negative, along the lines that an operator which is not negative might not be “non-negative” in the sense of $\geq 0$, but these have to wait till I’ve got to a proper keyboard and disposed of some irksome chores)

As Q-C Y has commented, it seems that your post is related to/commenting on the (C-)star or von Neumann algebraic structure present in both cases. I prefer to see the B(H) case as an extension of what we know about complex numbers rather than a generalization, but I admit I haven’t reflected properly on why I feel this way.

Posted by: Yemon Choi on August 22, 2013 6:08 PM | Permalink | Reply to this

### Re: Linear Operators Done Right

Every operator is the sum of its real part and $i$ times its imaginary part, both of which are normal. Pick a nilpotent shift and observe it can’t be normal.

Ah, very good. Thanks.

For the record, here’s a proof that a nonzero nilpotent operator can’t be normal — or, to put it another way, the only normal nilpotent operator is $0$.

First consider a self-adjoint nilpotent operator $S$. Then $S^n = 0$ for some $n$ that’s a power of $2$. If $n \geq 2$ then for all $x \in X$,

$\| S^{n/2} x \|^2 = \langle S^{n/2} x, S^{n/2} x \rangle = \langle x, S^n x \rangle = 0,$

so $S^{n/2} = 0$. Hence, by induction, $S = 0$.

Now take a normal nilpotent operator $T$. The operator $T^\ast T$ is self-adjoint and nilpotent, so $T^\ast T = 0$. Hence for all $x \in X$,

$\| T x \|^2 = \langle T x, T x \rangle = \langle x, T^\ast T x \rangle = 0,$

giving $T = 0$, as required.

Posted by: Tom Leinster on August 23, 2013 12:27 AM | Permalink | Reply to this

### Re: Linear Operators Done Right

I don’t think we should be surprised that complex numbers act like a class of operators. After all, they are the operators on $\mathbb{C}^1$. All operations on operators will carry over to operations on complex numbers. What I really like is the comparison to the normal operators, and the “Fundamental theorem of algebra for normal operators” is fun!

Posted by: Tom Ellis on August 22, 2013 3:37 PM | Permalink | Reply to this

### Re: Linear Operators Done Right

I agree, but what I think is remarkable is that so many familiar aspects of $\mathbb{C} = End(\mathbb{C}^1)$ carry over to $End(X)$ for an arbitrary finite-dimensional inner product space $X$. For instance, it doesn’t seem obvious in advance that the notions of real line, positive real line, unit circle and modulus in $\mathbb{C}$ all have useful analogues in $End(X)$, nor that they will behave in very much the same way.
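One place where all these analogues meet is the polar decomposition $T = U|T|$, the operator version of $z = e^{i\theta}|z|$. Here is a small NumPy sketch (the matrix is a random example of mine; I compute the factors from the SVD):

```python
import numpy as np

# Polar decomposition T = U |T|: U is unitary ("unit circle") and |T| is
# positive ("non-negative reals").  From the SVD T = W diag(s) V*, take
# U = W V* and |T| = V diag(s) V*.
rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

W, s, Vh = np.linalg.svd(T)
U = W @ Vh                                  # unitary factor
P = Vh.conj().T @ np.diag(s) @ Vh           # |T|, positive and self-adjoint

assert np.allclose(U.conj().T @ U, np.eye(3))    # U on the "unit circle"
assert np.allclose(P, P.conj().T)                # |T| is "real"
assert np.all(np.linalg.eigvalsh(P) >= -1e-9)    # |T| is "non-negative"
assert np.allclose(T, U @ P)                     # T = U |T|
```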

Posted by: Tom Leinster on August 22, 2013 3:43 PM | Permalink | Reply to this

### Re: Linear Operators Done Right

C*-algebras rock!

Posted by: John Baez on August 23, 2013 5:46 PM | Permalink | Reply to this

### Re: Linear Operators Done Right

I only discovered yesterday that it was your supervisor who gave them their name.

Posted by: Tom Leinster on August 23, 2013 5:53 PM | Permalink | Reply to this

### Re: Linear Operators Done Right

It took a strenuous act of modesty not to point that out—thanks, now I can exhale. I worked with Segal because I was really interested in the foundations of quantum theory, where C*-algebras are important.

As far as I can tell, all the results you stated for operators in your blog post hold for elements of arbitrary C*-algebras, except for

$T^* T = 1 \iff T T^* = 1$

which holds only for finite-dimensional ones. So, in the infinite-dimensional case, we must distinguish between isometries, which obey $T^* T = 1$, and unitaries, which obey $T^* T = T T^* = 1$.
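The finite-dimensional fact, and a hint at how it fails in infinite dimensions, can both be seen numerically (the matrices are my own examples):

```python
import numpy as np

# In finite dimensions T*T = I forces TT* = I: a left inverse is also a
# right inverse.  Build a unitary via QR and check both identities.
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Q, _ = np.linalg.qr(M)                     # Q is unitary
assert np.allclose(Q.conj().T @ Q, np.eye(4))
assert np.allclose(Q @ Q.conj().T, np.eye(4))

# A truncated shift hints at what goes wrong in infinite dimensions: the
# unilateral shift on l^2 satisfies S*S = I but SS* != I.  In the n x n
# truncation the two products differ in a corner that never goes away.
S = np.diag(np.ones(3), k=-1)              # 4x4 truncated shift
print(np.diag(S.conj().T @ S))             # [1. 1. 1. 0.]
print(np.diag(S @ S.conj().T))             # [0. 1. 1. 1.]
```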

Hmm, now I see you wrote:

> I’ll call $M$ the modulus of $T$ and write it as $|T|$. I don’t know whether the term “modulus” is standard here, and I’m pretty sure the notation $|T|$ isn’t—it’s risky, given the potential for confusion with a norm.

Actually in C*-algebra theory $|T|$ is the standard notation for the modulus (the positive square root of $T^* T$). I guess folks in that subject are a pack of risk-loving daredevils! Either that, or few of them have double vision, so they can safely get away with writing $\|T\|$ for the norm of $T$, and $|T|$ for its modulus.

And hmm, I see you also wrote:

> Can you expand on that last bit, “a version of the spectral theorem follows by Gelfand–Naimark”?

The Gelfand–Naimark theorem says any commutative C*-algebra $A$ (with multiplicative unit) is isomorphic to the algebra of continuous functions on a compact Hausdorff space. This space is unique up to canonical isomorphism: you can think of its points as homomorphisms $A \to \mathbb{C}$. It’s called the spectrum of $A$.

Even better, the map sending $A$ to its spectrum extends to an equivalence between the category of commutative C*-algebras and the opposite of the category of compact Hausdorff spaces!

If we apply this result to the C*-algebra generated by a normal operator $T$ on a Hilbert space $H$, we can prove the ‘spectral theorem’ for normal operators.

There are tons of ways to state this theorem, but since I’m feeling lazy, I’ll pick the easiest one to prove: the C*-algebra generated by $T$ is isomorphic to the algebra of continuous functions on a compact Hausdorff space called the spectrum of $T$.

This version is an instant consequence of the Gelfand-Naimark theorem! Other versions go further in describing what the spectrum of $T$ actually looks like. For example, when $H$ is finite-dimensional, the spectrum of $T$ is just its set of eigenvalues.
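In that finite-dimensional case, the spectral theorem and the resulting “continuous functions of $T$” are easy to check numerically. A sketch in NumPy, with a small self-adjoint (hence normal) matrix of my own choosing:

```python
import numpy as np

# A normal T is unitarily diagonalizable, so continuous functions of T are
# computed pointwise on its spectrum, which here is its set of eigenvalues.
T = np.array([[2.0, 1j], [-1j, 2.0]])      # self-adjoint, hence normal
assert np.allclose(T @ T.conj().T, T.conj().T @ T)

evals, U = np.linalg.eigh(T)               # T = U diag(evals) U*
print(evals)                               # spectrum: [1. 3.]

# Functional calculus: f(T) = U diag(f(evals)) U*, here with f = sqrt.
sqrtT = U @ np.diag(np.sqrt(evals)) @ U.conj().T
assert np.allclose(sqrtT @ sqrtT, T)
```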

Posted by: John Baez on August 25, 2013 3:12 PM | Permalink | Reply to this

### Re: Linear Operators Done Right

Small nitpick regarding John’s choice of words “which holds only for finite-dimensional ones” (I think I know what he means, though).

While it’s true that for infinite-dimensional $H$ there exist $T\in B(H)$ satisfying $T^\ast T= I \neq TT^\ast$, I’d like to point out that there are interesting infinite-dimensional von Neumann algebras in which the equation $ab=1$ always implies $ba=1$.

These are the so-called finite von Neumann algebras, and a good supply of mysterious examples is given by taking a discrete group $G$, considering the left regular representation of $G$ on $H=\ell^2(G)$, and taking the WOT-closed algebra it generates inside $B(H)$.

Posted by: Yemon Choi on August 26, 2013 1:27 AM | Permalink | Reply to this

### Re: Linear Operators Done Right

Yemon wrote:

> Small nitpick regarding John’s choice of words “which holds only for finite-dimensional ones” (I think I know what he means, though).

Thanks for the interesting counterexample! I should have said “which holds for finite-dimensional ones, but not all infinite-dimensional ones.”

In case anyone reads WOT and thinks “WOT?”, I’ll add that this stands for “weak operator topology”.

Posted by: John Baez on August 27, 2013 1:59 AM | Permalink | Reply to this

### Re: Linear Operators Done Right

One helpful result is that the norm of the modulus of $T$ equals the norm of $T$ (in any $C^*$-algebra). That is, ${\|{|T|}\|} = {\|T\|}$, so there is a limit to how many lines anybody needs.
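In the matrix case this is easy to verify: both norms are the largest singular value of $T$. A quick NumPy check (the matrix is a random example of mine):

```python
import numpy as np

# The operator norm of |T| = (T*T)^{1/2} equals the operator norm of T:
# both are the largest singular value of T.
rng = np.random.default_rng(2)
T = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

evals, U = np.linalg.eigh(T.conj().T @ T)  # T*T is positive
modT = U @ np.diag(np.sqrt(evals)) @ U.conj().T   # |T|, the positive sqrt
assert np.allclose(modT @ modT, T.conj().T @ T)

norm_T = np.linalg.norm(T, ord=2)          # operator norm = top singular value
assert np.isclose(np.linalg.norm(modT, ord=2), norm_T)
```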

Posted by: Toby Bartels on September 15, 2013 8:37 PM | Permalink | Reply to this

### Re: Linear Operators Done Right

What also rocks is taking a serious, structural approach to the humble subject of finite-dimensional vector spaces. This is what Axler does. I’m not sure everyone realizes how much there is to the subject.

I was lucky enough, in my second year as an undergraduate, to have a (compulsory) second course in linear algebra given by Roger Heath-Brown. It was one of the highlights of my undergraduate years. He covered things like dual vector spaces (up to results such as $ker(T^\ast) = (im T)^\circ$), quotient spaces, finite-dimensional inner product spaces, and so on.
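In the inner-product setting, the result mentioned above takes the form $ker(T^\ast) = (im\, T)^\perp$, which is easy to check numerically (random rank-deficient example of my own):

```python
import numpy as np

# Check ker(T*) = (im T)^perp on a random 4x4 map of rank at most 2.
rng = np.random.default_rng(4)
T = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 4))

# The left singular vectors beyond the rank span ker(T*).
W, s, Vh = np.linalg.svd(T)
ker_adjoint = W[:, 2:]                     # columns spanning ker(T*)

assert np.allclose(T.conj().T @ ker_adjoint, 0)   # they lie in ker(T*)
assert np.allclose(ker_adjoint.conj().T @ T, 0)   # orthogonal to im(T)
```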

Current undergraduates at Glasgow, and I think Edinburgh too, only see this material if they choose to do a course in functional analysis. I suspect that’s true at other British universities too, and it’s an enormous shame. We can expose undergraduates to so many beautiful results in the basic finite-dimensional context, without the need to introduce any analytic structure. Why don’t we?

Of course, this also acts as excellent preparation (and an excellent trailer) for functional analysis. But it always seems to me to be a missed opportunity that we don’t teach more about plain old finite-dimensional vector spaces.

Posted by: Tom Leinster on August 25, 2013 12:22 AM | Permalink | Reply to this

### Re: Linear Operators Done Right

Is there a standard name for the purely imaginary operators, i.e. the operators for which $A^\ast = -A$? For example, on the vector space of square-integrable functions on $\mathbb{R}$ which vanish at positive and negative infinity, differentiation is a “purely imaginary” operator, by integration by parts.

Posted by: Steven Gubkin on August 29, 2013 10:53 PM | Permalink | Reply to this

### Re: Linear Operators Done Right

I think operators satisfying $A^*=-A$ are often called ‘anti-selfadjoint’, in analogy with antisymmetric vs symmetric matrices.

However, if you work over the complex numbers, like e.g. in a $C^*$-algebra, then you can also consider $i A$ instead of $A$, and $i A$ is selfadjoint by antilinearity of $*$. So, there is a linear isomorphism between selfadjoint and anti-selfadjoint operators.

This multiplication by $i$ is very commonly used in connection with the differentiation operator: $-i\tfrac{d}{dx}$ is the momentum operator, one of the most fundamental objects in quantum mechanics! More generally, quantum physicists like to write everything in terms of selfadjoint operators, and call a selfadjoint operator $A$ the “generator” of the $1$-parameter family of unitaries $e^{i t A}$. With $A=-i\tfrac{d}{dx}$, such a unitary translates your function $f$ by $t$ in the horizontal direction. Intuitively, this is because formally expanding $e^{t\tfrac{d}{dx}}f$ into a power series results in the Taylor expansion of $f$. This is maybe the coolest trick that one can learn from physicists!
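On polynomials of degree less than $n$, the operator $\tfrac{d}{dx}$ is nilpotent, so the exponential series terminates and the translation identity $\bigl(e^{t\frac{d}{dx}}p\bigr)(x) = p(x+t)$ can be checked exactly. A sketch in NumPy (the cubic and the value of $t$ are my own choices):

```python
import numpy as np
from math import factorial

# e^{t d/dx} applied to a cubic: the series stops after the third derivative,
# and the result is the translate p(x + t).
p = np.polynomial.Polynomial([1.0, -2.0, 0.0, 3.0])   # 1 - 2x + 3x^3
t = 0.7

shifted = p
for k in range(1, 4):                      # terminating exponential series
    shifted = shifted + (t**k / factorial(k)) * p.deriv(k)

x = np.linspace(-1, 1, 7)
assert np.allclose(shifted(x), p(x + t))   # e^{t d/dx} p = p(. + t)
```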

Posted by: Tobias Fritz on August 29, 2013 11:23 PM | Permalink | Reply to this

### Re: Linear Operators Done Right

“Skew-hermitian” is the terminology I absorbed somewhere along the way, though I don’t know if it is now standard.

Posted by: Yemon Choi on August 30, 2013 12:01 AM | Permalink | Reply to this

### Re: Linear Operators Done Right

Steve wrote:

> Is there a standard name for the purely imaginary operators, i.e. the operators for which $A^*=-A$?

I’ve always heard them called ‘skew-adjoint’, but Wikipedia says that ‘skew-hermitian’ and ‘anti-hermitian’ are also used.

I dislike this use of ‘anti-hermitian’ because these operators are not antilinear, the way antiunitary operators are!

So, I recommend using either ‘skew-adjoint’ or ‘skew-hermitian’, depending on whether you call an operator with $A^* = A$ ‘self-adjoint’ or ‘hermitian’.

I hereby deprecate the use of ‘anti-hermitian’ for linear operators with $A^* = -A$.

Posted by: John Baez on August 30, 2013 6:43 AM | Permalink | Reply to this
