## September 14, 2015

### Where Does The Spectrum Come From?

#### Posted by Tom Leinster

Perhaps you, like me, are going to spend some of this semester teaching students about eigenvalues. At some point in our lives, we absorbed the lesson that eigenvalues are important, and we came to appreciate that the invariant par excellence of a linear operator on a finite-dimensional vector space is its spectrum: the set-with-multiplicities of eigenvalues. We duly transmit this to our students.

There are lots of good ways to motivate the concept of eigenvalue, from lots of points of view (geometric, algebraic, etc.). But one might also seek a categorical explanation. In this post, I’ll address the following two related questions:

1. If you’d never heard of eigenvalues and knew no linear algebra, and someone handed you the category $\mathbf{FDVect}$ of finite-dimensional vector spaces, what would lead you to identify the spectrum as an interesting invariant of endomorphisms in $\mathbf{FDVect}$?

2. What is the analogue of the spectrum in other categories?

I’ll give a fairly complete answer to question 1, and, with the help of that answer, speculate on question 2.

(New, simplified version posted at 22:55 UTC, 2015-09-14.)

Famously, trace has a kind of cyclicity property: given maps

$X \stackrel{f}{\to} Y \stackrel{g}{\to} X$

in $\mathbf{FDVect}$, we have

$tr(g \circ f) = tr(f \circ g).$

I call this “cyclicity” because it implies the more general property that for any cycle

$X_0 \stackrel{f_1}{\to} X_1 \stackrel{f_2}{\to} \,\, \cdots\,\, \stackrel{f_{n-1}}{\to} X_{n - 1} \stackrel{f_n}{\to} X_0$

of linear maps, the scalar

$tr(f_i \circ \cdots \circ f_1 \circ f_n \circ \cdots \circ f_{i + 1})$

is independent of $i$.

A slightly less famous fact is that the same cyclicity property is enjoyed by a finer invariant than trace: the set-with-multiplicities of nonzero eigenvalues. In other words, the operators $g\circ f$ and $f\circ g$ have the same nonzero eigenvalues, with the same (algebraic) multiplicities. Zero has to be excluded to make this true: for instance, if we take $f$ and $g$ to be the projection and inclusion associated with a direct sum decomposition, then one composite operator has $0$ as an eigenvalue and the other does not.
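This fact is easy to check numerically. Here’s a quick sanity check with numpy (the matrices and dimensions are arbitrary choices of mine, not anything from the post): the composite on the larger space picks up extra zero eigenvalues, but the nonzero ones match.

```python
import numpy as np

rng = np.random.default_rng(0)

# f: X -> Y and g: Y -> X, with dim X = 5 and dim Y = 3 (arbitrary choices)
f = rng.standard_normal((3, 5))
g = rng.standard_normal((5, 3))

gf = g @ f   # operator on X = R^5
fg = f @ g   # operator on Y = R^3

def nonzero_eigs(T, tol=1e-9):
    """Eigenvalues of T with |eigenvalue| > tol, sorted, with multiplicity."""
    return np.sort_complex([z for z in np.linalg.eigvals(T) if abs(z) > tol])

# g∘f and f∘g have the same nonzero eigenvalues, with multiplicity;
# the remaining dim(X) - dim(Y) = 2 eigenvalues of g∘f are (numerically) zero
assert np.allclose(nonzero_eigs(gf), nonzero_eigs(fg))
```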

I’ll write $Spec(T)$ for the set-with-multiplicities of eigenvalues of a linear operator $T$, and $Spec'(T)$ for the set-with-multiplicities of nonzero eigenvalues. Everything we’ll do is on finite-dimensional vector spaces over an algebraically closed field $k$. Thus, $Spec(T)$ is a finite subset-with-multiplicity of $k$ and $Spec'(T)$ is a finite subset-with-multiplicity of $k^\times = k \setminus \{0\}$.

I’ll call $Spec'(T)$ the invertible spectrum of $T$. Why? Because every operator $T$ decomposes uniquely as a direct sum of operators $T_{nil} \oplus T_{inv}$, where every eigenvalue of $T_{nil}$ is $0$ (or equivalently, $T_{nil}$ is nilpotent) and no eigenvalue of $T_{inv}$ is $0$ (or equivalently, $T_{inv}$ is invertible). Then the invertible spectrum of $T$ is the spectrum of its invertible part $T_{inv}$.

If excluding zero seems forced or unnatural, perhaps it helps to consider the “reciprocal spectrum”

$RecSpec(T) = \{\lambda \in k : ker(\lambda T - I) \,\,\text{ is nontrivial} \}.$

There’s a canonical bijection between $Spec'(T)$ and $RecSpec(T)$ given by $\lambda \leftrightarrow 1/\lambda$. So the invariants $Spec'$ and $RecSpec$ carry the same information, and if $RecSpec$ seems natural to you then $Spec'$ should too.

Moreover, if you know the space $X$ that your operator $T$ is acting on, then to know the invertible spectrum $Spec'(T)$ is to know the full spectrum $Spec(T)$. That’s because the multiplicities of the eigenvalues of $T$ sum to $dim(X)$, and so the multiplicity of $0$ in $Spec(T)$ is $dim(X)$ minus the sum of the multiplicities of the nonzero eigenvalues.

The cyclicity equation

$Spec'(g\circ f) = Spec'(f\circ g)$

is a very strong property of $Spec'$. A second, seemingly more mundane, property is that for any operators $T_1$ and $T_2$ on the same space, and any scalar $\lambda$,

$Spec'(T_1) = Spec'(T_2) \implies Spec'(T_1 + \lambda I) = Spec'(T_2 + \lambda I).$

In other words, for an operator $T$, if you know $Spec'(T)$ and you know the space that $T$ acts on, then you know $Spec'(T + \lambda I)$ for each scalar $\lambda$. Why? Well, we noted above that if you know the invertible spectrum of an operator and you know the space it acts on, then you know the full spectrum. So $Spec'(T)$ determines $Spec(T)$, which determines $Spec(T + \lambda I)$ (as $Spec(T) + \lambda$), which in turn determines $Spec'(T + \lambda I)$.

I claim that the invariant $Spec'$ is universal with these two properties, in the following sense.

Theorem   Let $\Omega$ be a set and let $\Phi : \{ \text{linear operators} \} \to \Omega$ be a function satisfying:

1. $\Phi(g \circ f) = \Phi(f \circ g)$ for all $X \stackrel{f}{\to} Y \stackrel{g}{\to} X$

2. $\Phi(T_1) = \Phi(T_2)$ $\implies$ $\Phi(T_1 + \lambda I) = \Phi(T_2 + \lambda I)$ for all operators $T_1, T_2$ on the same space, and all scalars $\lambda$.

Then $\Phi$ is a specialization of $Spec'$, that is, $Spec'(T_1) = Spec'(T_2) \implies \Phi(T_1) = \Phi(T_2)$ for all $T_1, T_2$. Equivalently, there is a unique function $\bar{\Phi} : \{ \text{finite subsets-with-multiplicity of }\,\, k^\times\} \to \Omega$ such that $\Phi(T) = \bar{\Phi}(Spec'(T))$ for all operators $T$.

For example, take $\Phi$ to be trace. Then conditions 1 and 2 are satisfied, so the theorem implies that trace is a specialization of $Spec'$. That’s clear anyway, since the trace of an operator is the sum-with-multiplicities of the nonzero eigenvalues.

I’ll say just a little about the proof.

The invertible spectrum of a nilpotent operator is empty. Now, the Jordan normal form theorem invites us to pay special attention to the special nilpotent operators $P_n$ on $k^n$ defined as follows: writing $e_1, \ldots, e_n$ for the standard basis of $k^n$, the operator $P_n$ is given by

$e_n \mapsto e_{n - 1} \mapsto \cdots \mapsto e_1 \mapsto 0.$

So if the theorem is to be true then, in particular, $\Phi(P_n)$ must be independent of $n$.

But it’s not hard to cook up maps $f: k^n \to k^{n - 1}$ and $g: k^{n - 1} \to k^n$ such that $g\circ f = P_n$ and $f \circ g = P_{n - 1}$. Thus, condition 1 implies that $\Phi(P_n) = \Phi(P_{n - 1})$. It follows that $\Phi(P_n)$ is independent of $n$, as claimed.
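Concretely (my choice of maps, since the post doesn’t specify them), one can take $f$ to be the shift $e_j \mapsto e_{j-1}$ that kills $e_1$, and $g$ the inclusion of the first $n-1$ coordinates. A small numpy check of this construction:

```python
import numpy as np

def P(n):
    """Nilpotent operator e_n -> e_{n-1} -> ... -> e_1 -> 0 on k^n."""
    M = np.zeros((n, n))
    for j in range(1, n):
        M[j - 1, j] = 1.0   # column j+1 (i.e. e_{j+1}) maps to e_j
    return M

n = 4
# f: k^n -> k^{n-1}, sending e_j -> e_{j-1} (with e_1 -> 0)
f = np.zeros((n - 1, n))
for j in range(1, n):
    f[j - 1, j] = 1.0
# g: k^{n-1} -> k^n, the inclusion e_j -> e_j
g = np.vstack([np.eye(n - 1), np.zeros((1, n - 1))])

assert np.array_equal(g @ f, P(n))       # g∘f = P_n
assert np.array_equal(f @ g, P(n - 1))   # f∘g = P_{n-1}
```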

Of course, that doesn’t prove the theorem. But the rest of the proof is straightforward, given the Jordan normal form theorem and condition 2, and in this way, we arrive at the conclusion of the theorem:

$Spec'(T_1) = Spec'(T_2) \implies \Phi(T_1) = \Phi(T_2)$

for any operators $T_1$ and $T_2$.

One way to interpret the theorem is as follows. Let $\sim$ be the smallest equivalence relation on $\{\text{linear operators}\}$ such that:

1. $g\circ f \sim f \circ g$

2. $T_1 \sim T_2$ $\implies$ $T_1 + \lambda I \sim T_2 + \lambda I$

(where $f$, $g$, etc. are quantified as in the theorem). Then the natural surjection

$\{ \text{linear operators} \} \longrightarrow \{ \text{linear operators} \}/\sim$

is isomorphic to

$Spec': \{ \text{linear operators} \} \longrightarrow \{ \text{finite subsets-with-multiplicity of }\,\, k^\times\}.$

That is, there is a bijection between $\{ \text{linear operators} \}/\sim$ and $\{ \text{finite subsets-with-multiplicity of }\,\, k^\times\}$ making the evident triangle commute.

So, we’ve characterized the invariant $Spec'$ in terms of conditions 1 and 2. These conditions seem reasonably natural, and don’t depend on any prior concepts such as “eigenvalue”.

Condition 2 does appear to refer to some special features of the category $\mathbf{FDVect}$ of finite-dimensional vector spaces. But let’s now think about how it could be interpreted in other categories. That is, for a category $\mathcal{E}$ (in place of $\mathbf{FDVect}$) and a function

$\Phi: \{ \text{endomorphisms in }\,\, \mathcal{E} \} \to \Omega$

into some set $\Omega$, how can we make sense of condition 2?

Write $\mathbf{Endo}(\mathcal{E})$ for the category of endomorphisms in $\mathcal{E}$, with maps preserving those endomorphisms in the sense that the evident square commutes. (It’s the category of functors from the additive monoid $\mathbb{N}$, seen as a one-object category, into $\mathcal{E}$.)

For any scalars $\kappa \neq 0$ and $\lambda$, there’s an automorphism $F_{\kappa, \lambda}$ of the category $\mathbf{Endo}(\mathbf{FDVect})$ given by

$F_{\kappa, \lambda}(T) = \kappa T + \lambda I.$

I guess, but haven’t proved, that these are the only automorphisms of $\mathbf{Endo}(\mathbf{FDVect})$ that leave the underlying vector space unchanged. In what follows, I’ll assume this guess is right.

Now, condition 2 says that $\Phi(T)$ determines $\Phi(T + \lambda I)$ for each $\lambda$, for operators $T$ on a known space. That’s weaker than the statement that $\Phi(T)$ determines $\Phi(\kappa T + \lambda I)$ for each $\kappa \neq 0$ and $\lambda$ — but $Spec'(T)$ does determine $Spec'(\kappa T + \lambda I)$. So the theorem remains true if we replace condition 2 with the statement that $\Phi(T)$ determines $\Phi(F(T))$ for each automorphism $F$ of $\mathbf{Endo}(\mathbf{FDVect})$ “over $\mathbf{FDVect}$” (that is, leaving the underlying vector space unchanged).
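Numerically, the action of $F_{\kappa, \lambda}$ on spectra is just the affine map $z \mapsto \kappa z + \lambda$ applied eigenvalue by eigenvalue. A sketch with numpy (random operator, arbitrary choices of $\kappa \neq 0$ and $\lambda$):

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((4, 4))
kappa, lam = 2.5, -1.0   # arbitrary, with kappa nonzero

# Spec(kappa*T + lam*I) = kappa*Spec(T) + lam, eigenvalue by eigenvalue,
# so Spec'(T) together with the dimension of the space determines Spec'(F(T))
eig_T = np.sort_complex(np.linalg.eigvals(T))
eig_F = np.sort_complex(np.linalg.eigvals(kappa * T + lam * np.eye(4)))
assert np.allclose(eig_F, np.sort_complex(kappa * eig_T + lam))
```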

This suggests the following definition:

Definition   Let $\mathcal{E}$ be a category. Let $\sim$ be the equivalence relation on $\{ \text{endomorphisms in }\,\, \mathcal{E} \}$ generated by:

1. $g\circ f \sim f \circ g$ for all $X \stackrel{f}{\to} Y \stackrel{g}{\to} X$ in $\mathcal{E}$

2. $T_1 \sim T_2$ $\implies$ $F(T_1) \sim F(T_2)$ for all endomorphisms $T_1, T_2$ on the same object of $\mathcal{E}$ and all automorphisms $F$ of $\mathbf{Endo}(\mathcal{E})$ over $\mathcal{E}$.

Call $\{ \text{endomorphisms in }\,\, \mathcal{E}\}/\sim$ the set of invertible spectral values of $\mathcal{E}$. Write $Spec': \{ \text{endomorphisms in }\,\, \mathcal{E} \} \to \{ \text{invertible spectral values of}\,\, \mathcal{E} \}$ for the natural surjection. The invertible spectrum of an endomorphism $T$ in $\mathcal{E}$ is $Spec'(T)$.

In the case $\mathcal{E} = \mathbf{FDVect}$, the invertible spectral values are the finite subsets-with-multiplicity of $k^\times$, and the invertible spectrum $Spec'(T)$ is as defined at the start of this post — namely, the set of nonzero eigenvalues with their algebraic multiplicities.

Aside   At least, that’s the case up to isomorphism. You might feel that we’ve lost something, though. After all, the spectrum of a linear operator is a subset-with-multiplicities of the base field, not just an element of some abstract set.

But the theorem does give us some structure on the set of invertible spectral values. This remark of mine below (written after I wrote a first version of this post, but before I wrote the revised version you’re now reading) shows that if $\mathcal{E}$ has finite coproducts then $\sim$ is a congruence for them; that is, if $S_1 \sim S_2$ and $T_1 \sim T_2$ then $S_1 + T_1 \sim S_2 + T_2$. (Here $+$ is the coproduct in $\mathbf{Endo}(\mathcal{E})$, which comes from the coproduct in $\mathcal{E}$ in the obvious way.) So the coproduct structure on endomorphisms induces a binary operation $\vee$ on the set of invertible spectral values, satisfying

$Spec'(S \oplus T) = Spec'(S) \vee Spec'(T).$

In the case $\mathcal{E} = \mathbf{FDVect}$, this is the union of finite subsets-with-multiplicity of $k^\times$ (adding multiplicities). And in general, the algebraic properties of coproduct imply that $\vee$ gives the set of invertible spectral values the structure of a commutative monoid.

Similarly, condition 2 implies that the automorphism group of $\mathbf{Endo}(\mathcal{E})$ acts on the set of invertible spectral values; and since automorphisms preserve coproducts (if they exist), it acts by monoid homomorphisms.

We can now ask what this general definition produces for other categories. I’ve only just begun to think about this, and only in one particular case: when $\mathcal{E}$ is $\mathbf{FinSet}$, the category of finite sets.

I believe the category of endomorphisms in $\mathbf{FinSet}$ has no nontrivial automorphisms over $\mathbf{FinSet}$. After all, given an endomorphism $T$ of a finite set $X$, what natural ways are there of producing another endomorphism of $X$? There are only the powers $T^n$, I think, and the process $T \mapsto T^n$ is only invertible when $n = 1$.

So, condition 2 is trivial. We’re therefore looking for the smallest equivalence relation on $\{ \text{endomorphisms of finite sets} \}$ such that $g \circ f \sim f \circ g$ for all maps $f$ and $g$ pointing in opposite directions. I believe, but haven’t proved, that $T_1 \sim T_2$ if and only if $T_1$ and $T_2$ have the same number of cycles

$x_1 \mapsto x_2 \mapsto \cdots \mapsto x_p \mapsto x_1$

of each period $p$. Thus, the invertible spectral values of $\mathbf{FinSet}$ are the finite sets-with-multiplicity of positive integers, and if $T$ is an endomorphism of a finite set then $Spec'(T)$ is the set-with-multiplicities of periods of cycles of $T$.
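This conjectured invariant is easy to experiment with. Below is a sketch in plain Python (the two endofunctions are arbitrary examples of mine) that extracts the multiset of cycle periods of an endomorphism of a finite set and checks that it is cyclicity-invariant, i.e. that $g \circ f$ and $f \circ g$ have the same periods.

```python
from collections import Counter

def cycle_periods(T):
    """Multiset of cycle periods of an endomorphism T of {0,...,n-1},
    given as a list with T[x] the image of x."""
    n = len(T)
    # iterating n times lands every point in the eventual image,
    # since every transient has length < n
    y = list(range(n))
    for _ in range(n):
        y = [T[x] for x in y]          # now y[x] = T^n(x), a recurrent point
    recurrent = set(y)
    periods, seen = Counter(), set()
    for x in recurrent:
        if x in seen:
            continue
        p, z, cycle = 1, T[x], {x}     # walk around the cycle through x
        while z != x:
            cycle.add(z); z = T[z]; p += 1
        seen |= cycle
        periods[p] += 1
    return periods

# f: X -> Y and g: Y -> X with X = {0,...,5}, Y = {0,...,3}
f = [0, 1, 3, 3, 2, 0]   # values in Y
g = [5, 2, 4, 1]         # values in X

gf = [g[f[x]] for x in range(6)]   # g∘f : X -> X
fg = [f[g[y]] for y in range(4)]   # f∘g : Y -> Y

assert cycle_periods(gf) == cycle_periods(fg)   # here: two 1-cycles, one 2-cycle
```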

All of the above is a record of thoughts I had in spare moments at this workshop I just attended in Louvain-la-Neuve, so I haven’t had much time to reflect. I’ve noted where I’m not sure of the facts, but I’m also not sure of the aesthetics:

In other words, do the theorem and definition above represent the best approach? Here are two quite specific reservations:

1. I’m not altogether satisfied with the fact that it’s the invertible spectrum, rather than the full spectrum, that comes out. Perhaps there’s something to be done with the observation that if you know the invertible spectrum, then knowing the full spectrum is equivalent to knowing (the dimension of) the space that your operator acts on.

2. Condition 2 of the theorem states that $Spec'(T)$ determines $Spec'(T + \lambda I)$ for an operator $T$ on a known space (and, of course, for known $\lambda)$. That was enough to prove the theorem. But there’s also a much stronger true statement: $Spec'(T)$ determines $Spec'(p(T))$ for any polynomial $p$ over $k$ (again, for an operator $T$ on a known space). Any polynomial $p$ gives an endomorphism $T \mapsto p(T)$ of $\mathbf{Endo}(\mathbf{FDVect})$ over $\mathbf{FDVect}$, and I guess these are the only endomorphisms. So, we could generalize condition 2 by using endomorphisms rather than automorphisms of $\mathbf{Endo}(\mathcal{E})$. Should we?

Posted at September 14, 2015 1:06 AM UTC


### Re: Where Does The Spectrum Come From?

The “slightly less famous fact” is essentially the Sylvester determinant theorem: https://en.wikipedia.org/wiki/Sylvester%27s_determinant_theorem . (Sorry, I can’t figure out how to make HTML links with this parser.)

Posted by: Terence Tao on September 14, 2015 2:00 AM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

Thanks, I didn’t know that name for it.

My favourite proof is this. Write

$ker^\infty(T) = \bigcup_{n=0}^\infty ker(T^n).$

The algebraic multiplicity of $\lambda$ with respect to $T$ is $\dim(\ker^\infty(T - \lambda I))$. (In my preferred approach to linear algebra, that’s the definition of algebraic multiplicity.) Thus, the “slightly less famous fact” states that

$\ker^\infty(g f - \lambda I) \cong \ker^\infty(f g - \lambda I)$

for all $\lambda \neq 0$.

To prove this, note that $f$ restricts to a map from the left-hand side to the right-hand side of this supposed isomorphism, and $g$ restricts to a map from right to left. So it’s enough to prove that $g f$, as an operator on the left-hand side, is invertible (and similarly $f g$ on the right-hand side). Now $g f - \lambda I$ acts nilpotently on the left-hand side, so its only eigenvalue is $0$, so the only eigenvalue of $g f$ (as an operator on the left-hand side) is $\lambda$. But $\lambda \neq 0$, so we’re done.
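In code, the definition of algebraic multiplicity as $\dim(\ker^\infty(T - \lambda I))$ is directly computable: on an $n$-dimensional space, $\ker^\infty(S) = \ker(S^n)$. A sketch with sympy (the maps $f$ and $g$ below are arbitrary illustrative choices of mine: an inclusion, and a map with nonzero eigenvalues $2$ and $3$):

```python
import sympy as sp

def alg_mult(T, lam):
    """dim ker^infty(T - lam*I): on an n-dimensional space this is
    the nullity of (T - lam*I)^n."""
    n = T.rows
    M = (T - lam * sp.eye(n)) ** n
    return n - M.rank()

f = sp.Matrix([[2, 0, 1], [0, 3, 1]])     # f: k^3 -> k^2
g = sp.Matrix([[1, 0], [0, 1], [0, 0]])   # g: k^2 -> k^3 (an inclusion)
gf, fg = g * f, f * g

# nonzero eigenvalues of g∘f and f∘g have equal algebraic multiplicities;
# the multiplicity of 0 differs by dim k^3 - dim k^2 = 1
for lam in set((gf.eigenvals() | fg.eigenvals()).keys()):
    if lam != 0:
        assert alg_mult(gf, lam) == alg_mult(fg, lam)
assert alg_mult(gf, 0) == alg_mult(fg, 0) + 1
```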

Posted by: Tom Leinster on September 14, 2015 2:19 AM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

Tom,

It is well known in semigroup theory that the smallest equivalence relation on the monoid of all self-maps of a finite set generated by $f g \sim g f$ identifies precisely those maps whose eventual ranges (or sets of recurrent points) have the same cardinality and which have the same cycle type as permutations of their eventual ranges, as you suggest.

This is also equivalent to all complex characters agreeing on the two elements.

Posted by: Benjamin Steinberg on September 14, 2015 3:01 AM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

* — This comment and those replying to it refer to an old version of the post. — *

Excellent; thanks.

You didn’t mention condition 2, that the equivalence relation is a congruence with respect to disjoint union. Maybe that’s not needed in the context of finite sets — and maybe not for finite-dimensional vector spaces either. It would be nice if it could be dropped. I’ll have to think that over.

Posted by: Tom Leinster on September 14, 2015 3:08 AM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

I didn’t think about it but it seems clear. If you put together mappings of disjoint sets, then the cycle structure of the eventual range depends only on the two mappings since they don’t interact.

Posted by: Benjamin Steinberg on September 14, 2015 3:22 AM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

Condition 2 (on coproducts) is redundant.

That is, the theorem in my post remains true if we drop that condition, and dropping it also leaves unchanged the definitions of $\sim$, invertible spectral value and invertible spectrum. (This also means that in the definition, we no longer need to assume the existence of coproducts.)

When I get a moment, I’ll edit the post to delete all mention of condition 2 (on coproducts). The old condition 3 (on automorphisms) will get renumbered as condition 2. What the old condition 2 of the theorem said was that if $\Phi(S_1) = \Phi(S_2)$ and $\Phi(T_1) = \Phi(T_2)$ then $\Phi(S_1 \oplus T_1) = \Phi(S_2 \oplus T_2)$, where $\oplus$ is direct sum and these four operators can be on any spaces (potentially all different). Similarly, the old condition 2 of the definition said that if $S_1 \sim S_2$ and $T_1 \sim T_2$ then $S_1 + T_1 \sim S_2 + T_2$.

To see that the old condition 2 of the definition is redundant, let $\sim$ be the equivalence relation on $\{ \text{endomorphisms in}\,\,\mathcal{E}\}$ generated by the old conditions 1 and 3, that is, the new conditions 1 and 2. I claim that if $S_1 \sim S_2$ and $T_1 \sim T_2$ then $S_1 + T_1 \sim S_2 + T_2$.

It’s enough to prove that if $T_1 \sim T_2$ then $S + T_1 \sim S + T_2$ for all $S$. So, define another equivalence relation $\approx$ by

$T_1 \approx T_2 \iff S + T_1 \sim S + T_2 \,\,\text{for all}\,\, S.$

Then $\approx$ satisfies the other two conditions:

• Let $X \stackrel{f}{\to} Y \stackrel{g}{\to} X$, and let $S$ be an endomorphism of $A$, say. Then we get maps $A + X \stackrel{S + f}{\to} A + Y \stackrel{1_A + g}{\to} A + X,$ so $S + g \circ f \sim S + f \circ g$, as required.

• Let $T_1$ and $T_2$ be operators on $X$ with $T_1 \approx T_2$, let $F$ be an automorphism of $\mathbf{Endo}(\mathcal{E})$ over $\mathcal{E}$, and let $S$ be an endomorphism of $A$ in $\mathcal{E}$. We have an endomorphism $F^{-1}(S)$ of $F^{-1}(A)$, and $F^{-1}(S) + T_1 \sim F^{-1}(S) + T_2$ by definition of $\approx$. The old condition 3 (i.e. the new condition 2) allows us to apply $F$ to each side, giving $S + F(T_1) \sim S + F(T_2)$, as required.

But $\sim$ is the smallest equivalence relation satisfying those two conditions, so $\sim \subseteq \approx$ — which was exactly what had to be proved.

It follows that the theorem also remains true if we drop the old condition 2.

Thanks to Ben for prompting these thoughts.

Posted by: Tom Leinster on September 14, 2015 1:22 PM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

Another approach: the spectrum contains the same information as the trace of all powers, so it suffices to motivate the trace categorically. But this is standard: traces canonically make sense in any symmetric monoidal category with duals. One reason to prefer this story is that cyclicity comes for free from the category theory, rather than having to be put in by hand.

Posted by: Qiaochu Yuan on September 14, 2015 4:22 AM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

That’s a really nice observation.

Can you make it apply to the category of finite sets?

Posted by: Tom Leinster on September 14, 2015 8:21 AM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

A standard way to apply symmetric-monoidal-trace-theory to a category without duals is to map it into one that has them, such as by the free vector space functor.

Posted by: Mike Shulman on September 14, 2015 5:17 PM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

In fact we can apply it to the category of sets! To do that we need to turn it somehow into a symmetric monoidal category with duals, which we do by passing to the category of sets and (isomorphism classes of) spans. Every object is now self-dual, and you can compute that the trace of an endomorphism is its set of fixed points.

Posted by: Qiaochu Yuan on September 15, 2015 4:17 PM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

Is the first bit true in positive characteristic? I don’t think one can get $e_2\pmod 2$ from $p_1$ and $p_2 \pmod 2$.

Posted by: Jesse C. McKeown on September 14, 2015 2:42 PM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

Posted by: Tom Leinster on September 14, 2015 6:25 PM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

$e$ for elementary symmetric polynomials, $p$ for power sums; over the rationals, they convey the same information, but not over a ring with torsion, such as the finite fields. Specifically, they’re related thus: $p_n - e_1 p_{n-1} + e_2 p_{n-2} - \cdots + (-1)^{n-1} e_{n-1} p_1 + (-1)^n n e_n = 0,$ which means you can’t get $e_n$ from the $p$’s when $n$ is zero in the coefficient field. Qiaochu’s starting-point seems to be that, when the $e$’s are the elementary symmetric polynomials of a spectrum, i.e. the coefficients of the characteristic polynomial, the $p$’s are the power traces.

If we don’t have to worry about torsion, of course, it’s a non-issue.
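Newton’s identity quoted above is easy to check symbolically; here’s a quick sketch with sympy for $n = 4$, where the variables stand for the eigenvalues:

```python
from itertools import combinations
import sympy as sp

xs = sp.symbols('x1:5')   # four "eigenvalues" x1, ..., x4
n = len(xs)

def e(k):
    """Elementary symmetric polynomial e_k in xs."""
    return sum(sp.Mul(*c) for c in combinations(xs, k))

def p(k):
    """Power sum p_k = sum of k-th powers."""
    return sum(x**k for x in xs)

# Newton's identity: p_n - e_1 p_{n-1} + e_2 p_{n-2} - ... + (-1)^n n e_n = 0
lhs = p(n) + sum((-1)**k * e(k) * p(n - k) for k in range(1, n)) \
    + (-1)**n * n * e(n)
assert sp.expand(lhs) == 0
```

The factor $n$ in front of $e_n$ is where torsion bites: over a field of characteristic dividing $n$, the identity no longer lets you solve for $e_n$ in terms of the power sums.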

Posted by: Jesse C. McKeown on September 14, 2015 7:35 PM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

I get your point. So Qiaochu’s statement that

the spectrum contains the same information as the trace of all powers

is true in characteristic 0 but false in all positive characteristics. A counterexample for any field $k$ of characteristic $p \gt 0$: if we denote by $0_p$ and $I_p$ the zero operator and identity operator on $k^p$, then

$tr(0_p^r) = 0 = tr(I_p^r)$

for all integers $r \geq 0$. But, of course, $0_p$ and $I_p$ have different spectra.

Posted by: Tom Leinster on September 14, 2015 8:24 PM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

I agree that the preceding does put trace and spectrum in a nice categorical context, and confirms their utility; I’m not sure how students will find it before they’re convinced the spectrum is as worthwhile (or “natural”) as it is. In particular, the invocation of Jordan Canonical Form puts the argument well after the spectrum has been introduced and put to a lot of work.

The perspective that makes most sense to me is: a linear operator $T$ on $V$ turns $V$, a (finite-dimensional) $k$-module, into a (finitely generated) $k[x]$-module. While $k[x]$ isn’t a field, it is a Euclidean domain, so $(V,T)$ is a (finite) direct sum of quotients $k[x]/(p(x))$; more: working in the coordinate vector spaces $k^m$, one can analyze this direct sum by applying Smith’s algorithm to any presentation of $(V,T)$ as a $k[x]$-module, and it so happens that we have one in the matrix $T - x I$. It’s as easy${}^{*}$ as factoring polynomials!

One thing that strikes me about it now is how it gives a good starting-place to build up finite group characters. In that context one is led (for example) to study not only single operators on/in the group ring $k[G]$, but the whole center of $k[G]$; the punch-line in that direction is: the conjugacy classes in $G$ give a nice basis for the center, and that in turn makes the group ring a module over $k[x_{[g]} \mid ...]$, a polynomial ring on several generators; these rings aren’t Euclidean anymore, but there is still lots to say!

Posted by: Jesse C. McKeown on September 15, 2015 2:32 AM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

I think the point of asking these questions is not for the benefit of students. Rather, it’s for our benefit: suppose that in the future we find ourselves in a category and wanting to know some interesting invariants of endomorphisms in it. Where should we look? Since the spectrum has been such a fruitful invariant for endomorphisms of vector spaces, we might want categorical machinery that spits it out so we can then apply it to new settings and see what we get.

The commutative algebra story is nice so far as it goes but it doesn’t generalize well: you really need to know that you’re dealing with vector spaces. The story about symmetric monoidal categories with duals, on the other hand, is extremely general, and has a rich continuation involving topological field theory, the Lefschetz fixed point theorem, etc. etc.

Posted by: Qiaochu Yuan on September 15, 2015 4:43 PM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

Asking for ourselves, and pondering, of course are fine things to do; I don’t think I denied that.

I’m not convinced that the commutative localization of a category is so far from commutative algebra as all that! One definitely ends up with a category whose endomorphism monoids are abelian monoids, and the monoid rings of such things are Natural things to consider, etc. …

Posted by: Jesse C. McKeown on September 16, 2015 5:21 AM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

I don’t know what you mean by “commutative localization” here. If you’re referring to the dimension / trace construction described in my second top-level comment involving quotienting by $f g \sim g f$, the result is not a category, it’s just a set.

This is directly analogous to the zeroth Hochschild homology of an algebra, which is a vector space (not an algebra) given by quotienting by the subspace generated by commutators. In particular it’s not the abelianization, which is an algebra given by quotienting by the ideal generated by commutators.

Posted by: Qiaochu Yuan on September 16, 2015 7:11 AM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

Oh dear… yes, I did go a bit silly there…

There… I’m sure there’s a category somewhere under the thing I’m thinking of… and you just might tell me it’s the thing that everyone else is already talking about.

Well, I don’t feel like a wheelwright today, not for this stuff.

Posted by: Jesse C. McKeown on September 18, 2015 1:18 AM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

Here is another approach which has the benefit of making sense in any category, no symmetric monoidal structure needed, and which also has the property that cyclicity appears rather than being put in by hand.

There is a 2-category whose objects are (small, for simplicity) categories, whose 1-morphisms $C \to D$ are bimodules

$F : D^{op} \times C \to \text{Set}$

(where composition is given by tensor product), and whose 2-morphisms are morphisms of bimodules. This 2-category is symmetric monoidal, and every category is dualizable with dual the opposite category, so you can take the trace of any endomorphism $F : C^{op} \otimes C \to \text{Set}$, and you get the coend

$\int^{c \in C} F(c, c).$

In particular you can define the “dimension” (or “Hochschild homology”) of a category to be the trace of the identity endomorphism, which turns out to be the coend of the Hom bifunctor $\text{Hom}(-, -) : C^{op} \times C \to \text{Set}$.

When you compute this coend, you get precisely the quotient of the set of endomorphisms in the category by the cyclicity relation $f g \sim g f$ (where $f : c \to d, g : d \to c$ are morphisms, not necessarily endo). There is now a “universal trace” of any endomorphism in the category which takes values in this set. If you apply this to $\text{FinVect}$ while remembering that it’s enriched over $\text{Vect}$ (so everything is enriched over $\text{Vect}$ in the previous discussion), I think you just get the trace back. If you apply this to $\text{FinSet}$ you get something more complicated that Todd Trimble explained on MO once; I don’t remember the details.

Incidentally, I think it’s a feature and not a bug that you only get the invertible spectrum and not the full spectrum. Knowing only the traces of the powers of a linear operator, and not the dimension of the vector space it acts on, also only recovers the invertible spectrum at best.

Posted by: Qiaochu Yuan on September 15, 2015 4:39 PM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

Just one little comment for now:

Knowing only the traces of the powers of a linear operator, and not the dimension of the vector space it acts on, also only recovers the invertible spectrum at best.

In characteristic zero, the trace of the zeroth power of the operator is the dimension of the space. So you do get the full spectrum.

Posted by: Tom Leinster on September 15, 2015 4:58 PM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

The coend $\int^c \hom(c, c)$ that Qiaochu is describing has a name: it’s called the trace of a category. Here I’ve linked to the calculation I once described here at the Café involving finite sets. The calculation is closely related to eventual images, and shows that the trace of finite sets and functions is the same as the trace of finite sets and bijections, and is summarized as the set of conjugacy classes of permutations.

Posted by: Todd Trimble on September 15, 2015 11:11 PM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

Thanks Todd; that’s neat.

Do you have any thoughts about the role in my post of the automorphisms of the category of endomorphisms? E.g. if you quotient out the set of endomorphisms in $\mathbf{FDVect}$ by $g f \sim f g$ then you don’t, as far as I’m aware, get the invertible spectrum. All the same, it would be nice to get rid of condition 2.

Posted by: Tom Leinster on September 16, 2015 12:06 AM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

This is exactly the equivalence relation under which all complex characters of the monoid of self-maps of a finite set agree on the two elements. So this means, in some sense, that allowing linear representations and then taking the trace gives the same answer as the categorical notion of trace.

Posted by: Benjamin Steinberg on September 16, 2015 3:46 PM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

I think following Todd’s argument for $\mathsf{FinSet}$ can show that the categorical trace in $\mathsf{FinVect}$ is indeed the invertible spectrum – at least when the field is algebraically closed. That is, I think that condition (2) is unnecessary to characterize the invertible spectrum when the field is algebraically closed.

It’s clear, as before, that when we restrict to the groupoid $\mathsf{FinVect}_{\mathrm{iso}}$, the categorical trace becomes a complete conjugation invariant, i.e. the categorical trace of $\mathsf{FinVect}_{\mathrm{iso}}$ is the Jordan normal form (in a groupoid, the cyclicity relation is just another way to express conjugation invariance).

The same argument as Todd makes in $\mathsf{FinSet}$ also shows that the categorical trace of $\mathsf{FinVect}$ forgets the generalized eigenspace of generalized eigenvalue 0. Putting these together, the categorical trace of $\mathsf{FinVect}$ can only depend on the part of the Jordan form associated to nonzero spectrum.

Unlike in the case of $\mathsf{FinSet}$ (though I admit I don’t follow the logic in that case), there are further identifications. For example, setting

$A = \left[\begin{array}{ccc} \lambda & 0 & a \\ 0 & \lambda & b \end{array} \right]\qquad B = \left[ \begin{array}{cc} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{array} \right]$

we get

$A B = \left[ \begin{array}{cc} \lambda & 0 \\ 0 & \lambda \end{array} \right] \qquad B A = \left[ \begin{array}{ccc} \lambda & 0 & a \\ 0 & \lambda & b \\ 0 & 0 & 0 \end{array} \right]$

The two have the same invertible spectrum (two copies of $\lambda$, assuming $\lambda$ is nonzero), but $B A$ has a nilpotent part while $A B$ doesn’t (as long as $a$ or $b$ is nonzero).
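A quick numerical sanity check of the shared invertible spectrum, in NumPy, with the hypothetical sample values $\lambda = 2$, $a = 1$, $b = 3$:

```python
import numpy as np

lam, a, b = 2.0, 1.0, 3.0  # hypothetical sample values; lam must be nonzero

A = np.array([[lam, 0.0, a],
              [0.0, lam, b]])      # 2x3
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])         # 3x2

def nonzero_spectrum(M, tol=1e-9):
    """Sorted nonzero eigenvalues (real parts, rounded for comparison)."""
    return sorted(round(float(np.real(v)), 6)
                  for v in np.linalg.eigvals(M) if abs(v) > tol)

print(nonzero_spectrum(A @ B))  # [2.0, 2.0]
print(nonzero_spectrum(B @ A))  # [2.0, 2.0] -- the extra eigenvalue 0 is discarded
```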

I think iteratively applying maps analogous to $A$ and $B$ ought to allow us to replace the nilpotent part of any endomorphism at will. This leaves the invertible spectrum as the only remaining invariant after applying the cyclicity condition, and Tom’s “slightly less well-known fact” tells us that the nonzero spectrum is exactly the categorical trace.

I’m inclined to think that for non-algebraically-closed fields, the categorical trace is something like the invertible spectrum of the operator after tensoring with the algebraic closure of the ground field?

Posted by: Tim Campion on September 17, 2015 3:45 AM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

A calculation shows however that the Jordan normal form of your $B A$ is diagonalizable, so I’m not yet seeing “unexpected reductions” (such as two non-conjugate Jordan normal form matrices of the same dimension mapping to the same element in the trace of the category).

But I haven’t yet ruled out this possibility.

Posted by: Todd Trimble on September 17, 2015 9:36 PM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

Wires crossed, I guess we realized just how wrong I was at the same time!

Actually, now I believe that the Jordan form associated to the invertible spectrum is the categorical trace. Tom’s proof above that the invertible spectrum is cyclic-invariant can be refined to show that $\operatorname{ker} (gf - \lambda I)^n \cong \operatorname{ker}(fg-\lambda I)^n$ for all $n$ when $\lambda \neq 0$. From the dimensions of these spaces, one can recover the Jordan form associated to the invertible spectrum.
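That refinement is easy to probe numerically. A small NumPy sketch (my own illustration, not from the thread): take random $f : X \to Y$ and $g : Y \to X$ and compare $\dim \ker(gf - \lambda I)^n$ with $\dim \ker(fg - \lambda I)^n$ for each nonzero eigenvalue $\lambda$ of $gf$:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal((4, 3))   # f : X -> Y with dim X = 3, dim Y = 4
g = rng.standard_normal((3, 4))   # g : Y -> X

gf, fg = g @ f, f @ g             # operators on X and on Y respectively

def ker_dim(M, lam, n):
    """Numerical dim ker (M - lam*I)^n via a rank computation."""
    P = np.linalg.matrix_power(M - lam * np.eye(M.shape[0]), n)
    tol = 1e-6 * max(1.0, np.linalg.norm(P))
    return M.shape[0] - np.linalg.matrix_rank(P, tol=tol)

for lam in np.linalg.eigvals(gf):
    if abs(lam) > 1e-8:           # only the invertible spectrum transfers
        assert all(ker_dim(gf, lam, n) == ker_dim(fg, lam, n) for n in (1, 2, 3))
print("kernel dimensions agree for every nonzero eigenvalue of gf")
```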

Posted by: Tim Campion on September 17, 2015 10:12 PM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

Well, my claim that those matrices are in different conjugacy classes is horribly wrong. One point remains: the categorical trace of $\mathsf{FinVect}_k$ when $k$ is algebraically closed is somewhere between the Jordan normal form corresponding to the invertible spectrum and the invertible spectrum itself, which is a razor-thin distinction. It should be doable to resolve exactly what it is.

Posted by: Tim Campion on September 17, 2015 9:43 PM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

Tim, I haven’t had time to go through this thread properly, but are you assuming we’re over a field of characteristic zero? As Jesse and I were discussing earlier, in positive characteristic, two operators $S$ and $T$ with different invertible spectra can still satisfy $tr(S^r) = tr(T^r)$ for all natural numbers $r$.
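One standard example of this (my own, for illustration): over $\mathbb{F}_2$, the $2 \times 2$ identity and the zero operator have different invertible spectra ($\{1, 1\}$ versus empty), yet all their power traces agree. A quick check in Python, doing the arithmetic mod 2:

```python
import numpy as np

p = 2
S = np.eye(2, dtype=int)          # invertible spectrum {1, 1}
T = np.zeros((2, 2), dtype=int)   # invertible spectrum empty

for r in range(1, 8):
    tr_S = int(np.trace(np.linalg.matrix_power(S, r))) % p  # trace 2 = 0 mod 2
    tr_T = int(np.trace(np.linalg.matrix_power(T, r))) % p  # trace 0
    assert tr_S == tr_T
print("tr(S^r) = tr(T^r) mod 2 for all r checked")
```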

Posted by: Tom Leinster on September 17, 2015 11:37 PM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

All I need is that every linear endomorphism has a Jordan normal form, which I think (though I haven’t really checked) only requires that the field be algebraically closed. The actual trace of linear algebra doesn’t appear explicitly in anything I’m doing; I’m just computing the categorical trace, i.e. the universal extranatural transformation $\eta: \mathrm{Hom}_{\mathsf{FinVect}} \Rightarrow ?$ which mods out by the cyclic relation $fg \equiv gf$.

I argue that when $k$ is algebraically closed, the vertex, which I’ve simply labeled “$?\,\,$”, is the set of all Jordan normal forms of invertible endomorphisms, and the categorical trace $\eta$ assigns to a linear endomorphism its Jordan normal form when restricted to its eventual image, or equivalently its Jordan normal form with the block of generalized eigenvalue 0 deleted.

The argument (spread out over several comments, with false claims to the contrary mixed in!) boils down to this:

• The cyclic relation implies that $\eta$ is conjugation-invariant, so it can only depend on the Jordan normal form.

• The cyclic relation also implies that the categorical trace $\eta(T)$ only depends on the restriction of $T$ to its eventual image (this is Todd’s argument on the nLab, written for the case of $\mathsf{FinSet}$, but it still works here). The restriction of $T$ to its eventual image is obtained by deleting the Jordan block for eigenvalue $0$, so $\eta$ depends only on this part of the Jordan normal form.

• Conversely, I claim that the part of the Jordan normal form associated to the nonzero eigenvalues is invariant under the cyclic relation. This amounts to observing that in your (Tom’s) argument that $\ker^\infty(gf - \lambda I) \cong \ker^\infty(fg - \lambda I)$ (when $\lambda \neq 0$), you really establish that $\ker(gf-\lambda I)^n \cong \ker(fg - \lambda I)^n$ for any $n \gt 0$ (when $\lambda \neq 0$). But the dimensions of these spaces are enough to recover the Jordan blocks for nonzero eigenvalues.

I’m not sure what the story is in the non-algebraically-closed case. And I don’t know what happens when we treat everything as $\mathsf{Ab}$-enriched when taking this coend.

Posted by: Tim Campion on September 18, 2015 12:58 AM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

Back in 1969/70, Anders Kock had a student called Maan Justersen. Her speciale (master’s thesis) was on the categorical notion of trace. I do not know whether the Matematisk Institut in Aarhus still keeps copies of it.

Posted by: Gavin Wraith on September 17, 2015 10:32 PM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

This comment is in response to Tom’s conjecture on the form of automorphisms of $\mathbf{Endo(FDVect)}$.

If I understand it correctly, we are trying to understand automorphisms $F: \mathbf{Endo(FDVect)} \to \mathbf{Endo(FDVect)}$ as a category over $\mathbf{Vect}$, i.e., if $U: \mathbf{Endo(FDVect)} \to \mathbf{Vect}$ is the forgetful functor, then we are interested in describing automorphisms $F$ such that $U \circ F = U$.

At first I thought Tom’s conjecture would follow by applying some form of Tannaka reconstruction. We can think of $\mathbf{Endo(FDVect)}$ as the $\mathbf{Vect}$-enriched functor category $\mathbf{Vect}_{fd}^{k[x]}$ where the polynomial algebra $k[x]$ is regarded as a one-object $\mathbf{Vect}$-enriched category, and hope that endomorphisms $F: \mathbf{Vect}_{fd}^{k[x]} \to \mathbf{Vect}_{fd}^{k[x]}$ over $U$ would correspond to enriched functors = algebra maps $k[x] \to k[x]$. The idea here is that endomorphisms $F$, or at least automorphisms $F$ on the functor category, would be obtained by pulling back along algebra endomorphisms or automorphisms of $k[x]$. And note that algebra automorphisms are given by maps $x \mapsto a x + b$ where $a \in k^\times$, as Tom was suggesting.

Something like this might be the case if we weren’t just dealing with finite-dimensional modules over $k[x]$. An automorphism of $\mathbf{Vect}^{k[x]}$ over the underlying functor $U$ would be in particular part of an adjoint equivalence, and so we’d be getting into a Morita context where I think the Cauchy completion of $k[x]$ is a tractable object (it might not be so tractable if it were polynomials in several variables), and there I think something like Tom’s conjecture would likely be true. But in the finite-dimensional context, I think it’s the spectrum itself, or rather the spectral decomposition of a pair $(V, T: V \to V)$, that suggests more possibilities.

To each $(V, T)$ and each $\lambda \in k$, there is a corresponding generalized eigenspace of $T$ attached to $\lambda$ (which is obviously zero for all but finitely many $\lambda$), which I will denote as $V_\lambda$. Thus we have a direct sum or product decomposition $V \cong \prod_{\lambda \in k} V_\lambda$. It is important to note for our purposes that this is a canonical decomposition: there is a canonical idempotent operator $\pi_\lambda: V \to V$ whose image is the subspace $V_\lambda \hookrightarrow V$. Perhaps that is an obvious point, but in case not, here’s the way I look at it. If $p(x)$ is the characteristic polynomial of $T: V \to V$, then the module structure $k[x] \to \hom(V, V)$, i.e., the algebra map sending $x$ to $T$, factors through the quotient $k[x] \to k[x]/(p)$ by the Cayley-Hamilton theorem. Working over an algebraically closed field $k$ (and let’s go ahead and assume $char(k) = 0$, in view of other comments in this discussion), the polynomial $p$ splits completely, say as $p(x) = x^r \prod_i (x - \lambda_i)^{e_i}$. Then by the Chinese remainder theorem we have a canonical product decomposition

$k[x]/(p) \cong k[x]/(x^r) \times \prod_i k[x]/((x - \lambda_i)^{e_i})$

induced by uniquely determined primitive idempotents in the ring $k[x]/(p)$. If $e_\lambda: k[x]/(p) \to k[x]/(p)$ denotes multiplication by a typical such idempotent, then pushing out $e_\lambda \otimes 1_V: k[x]/(p) \otimes_k V \to k[x]/(p) \otimes_k V$ along the structure map $k[x]/(p) \otimes_k V \to V$ produces an idempotent operator $\pi_\lambda: V \to V$ whose image is the eigenspace attached to $\lambda$.

That out of the way, we have for each $(V, T)$ a canonical decomposition $V \cong \prod_{\lambda \in k} V_\lambda$, with $1_V = \sum_\lambda \pi_\lambda$. Now suppose $\rho: k \to k^\times$ is any function taking values $\rho(\lambda)$ that are invertible in $k$. Define an automorphism

$F_\rho: \mathbf{Vect}_{fd}^{k[x]} \to \mathbf{Vect}_{fd}^{k[x]}$

by taking $(V, T)$ to $(V, \sum_\lambda \rho(\lambda) T \pi_\lambda)$. I think it’s pretty clear that if $f: V \to W$ is a map $(V, T) \to (W, S)$, i.e., if $S \circ f = f \circ T$, then also $\pi_{\lambda, W} \circ f = f \circ \pi_{\lambda, V}$, since $(T - \lambda_i)^{e_i}(v) = 0$ implies $(S - \lambda_i)^{e_i}(f v) = 0$. And so each $f: (V, T) \to (W, S)$ gives a map $f: (V, F_\rho(T)) \to (W, F_{\rho}(S))$. The inverse of $F_\rho$ is, naturally enough, $F_{\rho^{-1}}$ where $\rho^{-1}(\lambda) \coloneqq \rho(\lambda)^{-1}$.

These $F_\rho$ are generally not of the form $T \mapsto a T + b$.

I should mention how I came to all this, because it wasn’t immediately apparent to me. As I said, I thought at first maybe Tom’s conjecture would be true by some sort of Tannaka reconstruction argument. At some point I had to remind myself that Tannaka reconstruction ought really to be considered in terms of finite-dimensional comodules over coalgebras (or, passing to a richer monoidal setting, comodules over bialgebras such as group bialgebras). Going with this, at some point it occurred to me that a finite-dimensional vector space $V$ equipped with an endomorphism $T: V \to V$ could be considered a module over the free $k$-algebra on a 1-dimensional vector space (also called the tensor algebra; we have $Tens(k) \coloneqq \sum_{n \geq 0} k^{\otimes n} \cong k[x]$), or – it could also be considered as a comodule over the cofree $k$-coalgebra over a 1-dimensional space, $Cof(k)$. That is to say, we could consider it in terms of an algebra map $k[x] \to \hom(V, V)$, or – we could consider it dually in terms of a coalgebra map $V^\ast \otimes_k V \to Cof(k)$, the unique coalgebra map induced by the linear map given by the composite

$V^\ast \otimes_k V \stackrel{1 \otimes T}{\to} V^\ast \otimes_k V \stackrel{eval}{\to} k.$

(Note that we use finite-dimensionality of $V$ in order to define a coalgebra structure on $V^\ast \otimes_k V$!) This coalgebra map transforms to a comodule structure $V \to V \otimes_k Cof(k)$.

One virtue of the comodule picture is that the canonical spectral decompositions already reside within $Cof(k)$. What turns out to be true is that there is a coalgebra decomposition

$Cof(k) \cong \sum_{\lambda \in k} k[x]$

where each summand $k[x]$ has the usual bialgebra comultiplication given by $\delta(x) = x \otimes 1 + 1 \otimes x$, or $\delta(x^n) = \sum_{i + j = n} x^i \otimes x^j$. More explicitly, $Cof(k)$ is the union of coalgebra duals of finite-dimensional algebra quotients $q: k[x] \to k[x]/(p)$, seen in terms of linear embeddings $q^\ast (k[x]/(p))^\ast \to k[x]^\ast \cong k[[x]]$, and $Cof(k)$ is in fact the space of rational functions in the localization $k[x]_{(x)}$ which via their Maclaurin expansions form a subspace of the space of power series $k[[x]]$. This involves partial fraction decompositions of rational functions and Maclaurin expansions of $\frac1{(x - \lambda_i)^{e_i}}$. I’m actually rushing through a story told in more detail here, which should make the spectral decomposition story more apparent.

Anyway, the comodule picture might be worth keeping in mind in this discussion.

(I almost forgot to mention that $Cof(k)$ can also be regarded as a predual of the profinite completion of $k[x]$, although that might have been obvious already.)

Posted by: Todd Trimble on September 18, 2015 3:45 PM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

Thanks for this excellent food for thought. This is (the end of) one of the busiest weeks of the academic year for me, so I’m struggling to keep up with what everyone’s written; in particular, I’ve only had time to skim your comment. So please forgive me if the following questions betray some superficial misunderstanding.

1. You emphasize that the decomposition $V \cong \prod_{\lambda \in k} V_\lambda$ is canonical. By “canonical”, do you mean to say that it, and the operators $\pi_\lambda$, are independent of $T$? (It doesn’t seem so to me, but this is related to question 2.)

2. If I understand your comment correctly, we get for each function $\rho: k \to k^\times$ an automorphism of $\mathbf{Endo}(\mathbf{FDVect})$ over $\mathbf{FDVect}$. Suppose we define $\rho(\lambda) = \begin{cases} 2 &\text{if}\,\, \lambda = 1,\\ 1 &\text{otherwise}. \end{cases}$ This gives rise to an automorphism $(V, T) \mapsto (V, \sum_\lambda \rho(\lambda) T \pi_\lambda)$. What is the effect of this automorphism on the diagonal matrices $T = diag(1, 2)$ and $T = diag(2, 2)$, respectively?

Posted by: Tom Leinster on September 18, 2015 6:52 PM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

(1) No, it should depend on $T$. When I wrote that, I had in mind the fact that subspace inclusions (in this case of generalized eigenspaces) generally admit many retractions; in this case I wanted to say that the retraction or projection onto the eigenspace is not something arbitrary, but is determined from the structure of $T$.

(2) Well, it does look like the effect is to produce $diag(2, 2)$ in both cases, which is to say: something looks wrong. Offhand I would guess that I either mistranslated or misinterpreted something from the comodule picture, or overlooked something else from what I thought it was telling me.
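For the diagonalizable case, Tom’s example can be checked directly. A NumPy sketch (the helper `F_rho` is my own, handles diagonalizable operators only, and is built from the eigendecomposition rather than from the idempotents $\pi_\lambda$):

```python
import numpy as np

def F_rho(T, rho):
    """(V, T) |-> (V, sum over lambda of rho(lambda) * T * pi_lambda),
    assuming T is diagonalizable: each eigenline for lambda is
    rescaled by rho(lambda)."""
    w, V = np.linalg.eig(T)
    D = np.diag([rho(lam) * lam for lam in w])
    return V @ D @ np.linalg.inv(V)

# Tom's rho: 2 at lambda = 1, and 1 everywhere else
rho = lambda lam: 2.0 if abs(lam - 1.0) < 1e-9 else 1.0

print(F_rho(np.diag([1.0, 2.0]), rho))  # diag(2, 2)
print(F_rho(np.diag([2.0, 2.0]), rho))  # diag(2, 2) again -- not injective
```

Both inputs land on $diag(2, 2)$, which is exactly the problem: no single $F_{\rho^{-1}}$ can send that common output back to both inputs.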

It’s probably up to me to figure out the root of my mistake. But my thought process behind it was that each coalgebra automorphism of $Cof(k)$ would induce an automorphism functor on $Comod_{fd}(Cof(k))$; I had convinced myself that this comodule category is your $\mathbf{Endo(FDVect)}$ in different language. Coalgebra endomorphisms $Cof(k) \to Cof(k)$ are quite plentiful; they correspond to linear functions $Cof(k) \to k$ which I still think correspond to the elements in the profinite completion of $k[x]$ (here when I say “profinite completion”, I mean the inverse limit with respect to the system of finite-dimensional quotients of $k[x]$ and quotient maps between them). I also thought the coalgebra automorphisms would be plentiful, but I may have misidentified them or something.

I’m hoping it won’t take me too long to find time to sort through this (and sorry for the noise, and thanks for your thoughts).

Posted by: Todd Trimble on September 18, 2015 11:16 PM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

I think the statement about $F_{\rho^{-1}}$ being the inverse of $F_\rho$ is incorrect; at least, I don’t see why it’s true. At first I thought it was true, but then I realized that my calculation assumed that $\pi_\lambda$ was independent of the operator involved. That was the connection between my two questions.

Posted by: Tom Leinster on September 19, 2015 4:44 PM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

Long comment coming up; here’s the summary.

1. Although there may have been some things wrong in Todd’s comment, he’s substantially right: there are indeed non-obvious automorphisms of the category of linear operators on finite-dimensional vector spaces.

2. However, this doesn’t appear to affect the substance of my post. Although the automorphism group of this category is larger than I thought, the newly-discovered elements don’t falsify anything I wrote about the (invertible) spectrum.

3. If we work with all vector spaces rather than just the finite-dimensional ones, then (as we both expected) the category of linear operators has no automorphisms or endomorphisms other than the obvious ones.

In what follows, everything is over an algebraically closed field $k$. There’s no need to make any assumption about its characteristic.

We’re going to be looking for ways (especially reversible ways) of turning a linear operator on a finite-dimensional vector space into another operator on the same space. Formally, this means considering the category $\mathbf{Endo}(\mathbf{FDVect})$ whose objects are operators on finite-dimensional vector spaces and whose maps are linear maps making the evident square commute. Then we consider endomorphisms (especially automorphisms) of $\mathbf{Endo}(\mathbf{FDVect})$ over $\mathbf{FDVect}$, that is, commuting with the forgetful functor to $\mathbf{FDVect}$.

(1)   I’ll begin by restating some of what Todd wrote. For an operator $T$ on a vector space $X$ (always finite-dimensional for now), the eventual kernel is the subspace

$ker^\infty(T) = \bigcup_{n \geq 0} ker^n(T).$

It’s a theorem that

$X = \bigoplus_{\lambda \in k} ker^\infty(T - \lambda)$

where, of course, $ker^\infty(T - \lambda)$ is trivial for all but finitely many values of $\lambda$ (the eigenvalues). Moreover, $T$ restricts to an operator $T_\lambda$ on $ker^\infty(T - \lambda)$, for each $\lambda$, so

$T = \bigoplus_{\lambda \in k} T_\lambda.$

The only eigenvalue of $T_\lambda$ is $\lambda$; equivalently, $T_\lambda - \lambda$ is nilpotent. All these facts are closely related to Jordan normal form. Another fact that may or may not be relevant here: the projections associated with this direct sum decomposition of $X$ are all polynomials in $T$.
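To illustrate that last fact with a small made-up example: for an operator with generalized eigenvalues $2$ (one $2 \times 2$ Jordan block) and $3$, the polynomial $p(t) = -t^2 + 4t - 3$ satisfies $p \equiv 1 \bmod (t-2)^2$ and $p \equiv 0 \bmod (t-3)$ (hand-computed), so $p(T)$ is the projection onto $ker^\infty(T - 2)$:

```python
import numpy as np

T = np.array([[2.0, 1.0, 0.0],    # Jordan block for eigenvalue 2
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])   # eigenvalue 3

# p(t) = -t^2 + 4t - 3: p - 1 = -(t - 2)^2 and p = -(t - 1)(t - 3)
P = -T @ T + 4.0 * T - 3.0 * np.eye(3)

print(P)                          # diag(1, 1, 0): projection onto ker^inf(T - 2)
assert np.allclose(P @ P, P)      # idempotent
assert np.allclose(P @ T, T @ P)  # commutes with T, being a polynomial in T
```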

This decomposition is functorial. That is, if $f: (X, T) \to (Y, S)$ is a map in $\mathbf{Endo}(\mathbf{FDVect})$, then $f$ restricts to a map

$f: ker^\infty(T - \lambda) \to ker^\infty(S - \lambda)$

for each $\lambda \in k$, and therefore defines a map

$f: (ker^\infty(T - \lambda), T_\lambda) \to (ker^\infty(S - \lambda), S_\lambda)$

for each $\lambda \in k$.

Now take any map of sets $k \to k[t]$, which I’ll write as $\lambda \mapsto p_\lambda$. Then we obtain an endomorphism $F$ of $\mathbf{Endo}(\mathbf{FDVect})$, defined by

$(X, T) \mapsto \Bigl(X, \bigoplus_{\lambda \in k} p_\lambda(T_\lambda)\Bigr).$

We have to check functoriality. In other words, we have to check that if $f: (X, T) \to (Y, S)$ is a map in $\mathbf{Endo}(\mathbf{FDVect})$ then

$f \circ \bigoplus_{\lambda \in k} p_\lambda(T_\lambda) = \bigoplus_{\lambda \in k} p_\lambda(S_\lambda) \circ f.$

And that’s true by the previous paragraph.

Usually, such an $F$ won’t be an automorphism of $\mathbf{Endo}(\mathbf{FDVect})$. But it sometimes is. Let’s start with a bijection $\sigma: k \to k$ that fixes $0$. Then, for $\lambda \in k$, define $p_\lambda$ to be the polynomial $(\sigma(\lambda)/\lambda)\, t$, where the scalar $\sigma(\lambda)/\lambda$ is to be interpreted as $1$ when $\lambda = 0$. The resulting endomorphism of $\mathbf{Endo}(\mathbf{FDVect})$, which I’ll call $G_\sigma$, is

$G_\sigma: (X, T) \mapsto \Bigl( X, \bigoplus_{\lambda \in k} \frac{\sigma(\lambda)}{\lambda} T_\lambda \Bigr).$

The only eigenvalue of $T_\lambda$ is $\lambda$, so the only eigenvalue of $\frac{\sigma(\lambda)}{\lambda} T_\lambda$ is $\sigma(\lambda)$. Thus, $G_\sigma$ has the effect of permuting the generalized/eventual eigenspaces $ker^\infty(T - \lambda)$ according to $\sigma$. In particular, $G_\sigma$ is invertible, with inverse $G_{\sigma^{-1}}$.

For example, let $\sigma$ be the permutation of $\mathbb{C}$ that interchanges $1$ and $2$ but fixes everything else. Then $G_\sigma$ has the following effect:

$\begin{pmatrix} 1& & & & \\ &2&1& & \\ & &2& & \\ & & &3& \\ & & & &4 \end{pmatrix} \qquad \mapsto \qquad \begin{pmatrix} 2& & & & \\ &1&1/2& & \\ & &1& & \\ & & &3& \\ & & & &4 \end{pmatrix}$

Here all unlabelled entries are zero, and both sides are operators on $\mathbb{C}^5$. This automorphism is not one of the “obvious” ones, i.e. not of the form $T \mapsto a T + b$ for scalars $a \neq 0$ and $b$.
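A numerical rendering of this example (Python/NumPy; the block-diagonal bookkeeping is my own):

```python
import numpy as np

sigma = {1.0: 2.0, 2.0: 1.0}       # swap 1 and 2, fix every other scalar

blocks = [np.array([[1.0]]),
          np.array([[2.0, 1.0],
                    [0.0, 2.0]]),  # Jordan block for eigenvalue 2
          np.array([[3.0]]),
          np.array([[4.0]])]

def block_diag(bs):
    """Assemble square blocks into one block-diagonal matrix."""
    n = sum(b.shape[0] for b in bs)
    M = np.zeros((n, n)); i = 0
    for b in bs:
        k = b.shape[0]; M[i:i+k, i:i+k] = b; i += k
    return M

# G_sigma rescales each generalized eigenblock T_lambda by sigma(lambda)/lambda
scaled = [(sigma.get(b[0, 0], b[0, 0]) / b[0, 0]) * b for b in blocks]

T, GT = block_diag(blocks), block_diag(scaled)

def spec(M):
    """Sorted eigenvalues, rounded for display."""
    return sorted(round(float(x), 6) for x in np.linalg.eigvals(M).real)

print(spec(T))   # [1.0, 2.0, 2.0, 3.0, 4.0]
print(spec(GT))  # [1.0, 1.0, 2.0, 3.0, 4.0] -- the 1s and 2s have traded places
```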

I don’t know whether there are endomorphisms or automorphisms of $\mathbf{Endo}(\mathbf{FDVect})$ other than those just described.

(2)   In my original post, I used the “fact” that if we have an automorphism $F$ of the category $\mathbf{Endo}(\mathbf{FDVect})$ over $\mathbf{FDVect}$, and a vector space $X$, then $Spec'(T)$ determines $Spec'(F(T))$ for operators $T$ on $X$. Is this true?

When I wrote it, I believed that $F$ had to be of the form $T \mapsto a T + b$ for some scalars $a \neq 0$ and $b$. It’s certainly true then. (As I said in my post, when the space $X$ is known, knowing $Spec'(T)$ is equivalent to knowing $Spec(T)$; and $Spec(a T + b) = a Spec(T) + b$.)

But it’s also true for the automorphisms $G_\sigma$ defined in (1). Indeed, the multiplicity of a scalar $\lambda$ in $Spec(G_\sigma(T))$ is the multiplicity of $\sigma^{-1}(\lambda)$ in $Spec(T)$.

(3)   Now dropping the assumption of finite-dimensionality, there are no more endomorphisms of $\mathbf{Endo}(\mathbf{Vect})$ over $\mathbf{Vect}$ than you think there are. That is, they’re all of the form $T \mapsto p(T)$ for some polynomial $p$. This is true over any field $k$ whatsoever, not necessarily algebraically closed.

To see this, let $F$ be an endomorphism of $\mathbf{Endo}(\mathbf{Vect})$ over $\mathbf{Vect}$. We have the vector space $k[t]$ of polynomials over $k$ and the operator $t\cdot -$ on it, hence also the operator $F(t\cdot -)$ on it. Let

$p = (F(t\cdot -))(1) \in k[t].$

I claim that $F(T) = p(T)$ for all operators $T$. Indeed, let $T$ be an operator on a space $X$, and let $x \in X$. There’s a linear map $f: k[t] \to X$ defined by $f(q) = q(T)(x)$, for polynomials $q$. This defines a map

$f: (k[t], t\cdot -) \to (X, T)$

in the category $\mathbf{Endo}(\mathbf{Vect})$ of operators. Hence we get a map

$f: (k[t], F(t\cdot -)) \to (X, F(T))$

in the same category. It follows that $f \circ F(t \cdot -) = F(T) \circ f$. Evaluating both sides at $1 \in k[t]$ gives $p(T)(x) = F(T)(x)$, as required.
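The extraction of $p$ can be mimicked in finite dimensions by truncating $k[t]$ at some degree. A sketch, where the endomorphism is taken to be $F(T) = T^2 + 1$ (a made-up instance of the allowed form $T \mapsto p(T)$), and the recipe recovers its polynomial:

```python
import numpy as np

N = 8                            # polynomials of degree < N (enough for this demo)
S = np.eye(N, k=-1)              # multiplication by t on coefficient vectors e_0..e_{N-1}

F = lambda T: T @ T + np.eye(T.shape[0])   # made-up endomorphism: F(T) = T^2 + 1

e0 = np.zeros(N); e0[0] = 1.0    # the polynomial 1
p = F(S) @ e0                    # coefficients of p = (F(t.-))(1)
print(p[:4])                     # [1. 0. 1. 0.] i.e. p(t) = 1 + t^2, recovering F
```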

It’s clear, I guess, that $T \mapsto p(T)$ is only an automorphism when $p(t)$ is of the form $a t + b$ for some $a, b \in k$ with $a \neq 0$.

Posted by: Tom Leinster on September 19, 2015 6:14 PM | Permalink | Reply to this

### Re: Where Does The Spectrum Come From?

Just a random thought for you. This post got me thinking about something I never did quite understand. I spent a lot of time in school studying “the method of moments” or “boundary element method” (https://en.wikipedia.org/wiki/Boundary_element_method). The reference I used in college was “Field Computation by Moment Methods” by Harrington. In that book, section 1-3, he wrote: “The classical eigenfunction method leads to a diagonal matrix, and can be thought of as a special case of the method of moments”.

That never made sense to me (like most things!)

Might be worth looking at.

Posted by: Rob MacDonald on September 25, 2015 8:16 PM | Permalink | Reply to this
