
September 14, 2015

Where Does The Spectrum Come From?

Posted by Tom Leinster

Perhaps you, like me, are going to spend some of this semester teaching students about eigenvalues. At some point in our lives, we absorbed the lesson that eigenvalues are important, and we came to appreciate that the invariant par excellence of a linear operator on a finite-dimensional vector space is its spectrum: the set-with-multiplicities of eigenvalues. We duly transmit this to our students.

There are lots of good ways to motivate the concept of eigenvalue, from lots of points of view (geometric, algebraic, etc). But one might also seek a categorical explanation. In this post, I’ll address the following two related questions:

  1. If you’d never heard of eigenvalues and knew no linear algebra, and someone handed you the category $\mathbf{FDVect}$ of finite-dimensional vector spaces, what would lead you to identify the spectrum as an interesting invariant of endomorphisms in $\mathbf{FDVect}$?

  2. What is the analogue of the spectrum in other categories?

I’ll give a fairly complete answer to question 1, and, with the help of that answer, speculate on question 2.

(New, simplified version posted at 22:55 UTC, 2015-09-14.)

Famously, trace has a kind of cyclicity property: given maps

$$X \stackrel{f}{\to} Y \stackrel{g}{\to} X$$

in $\mathbf{FDVect}$, we have

$$tr(g \circ f) = tr(f \circ g).$$

I call this “cyclicity” because it implies the more general property that for any cycle

$$X_0 \stackrel{f_1}{\to} X_1 \stackrel{f_2}{\to} \cdots \stackrel{f_{n-1}}{\to} X_{n-1} \stackrel{f_n}{\to} X_0$$

of linear maps, the scalar

$$tr(f_i \circ \cdots \circ f_1 \circ f_n \circ \cdots \circ f_{i+1})$$

is independent of $i$.

A slightly less famous fact is that the same cyclicity property is enjoyed by a finer invariant than trace: the set-with-multiplicities of nonzero eigenvalues. In other words, the operators $g \circ f$ and $f \circ g$ have the same nonzero eigenvalues, with the same (algebraic) multiplicities. Zero has to be excluded to make this true: for instance, if we take $f$ and $g$ to be the projection and inclusion associated with a direct sum decomposition, then one composite operator has $0$ as an eigenvalue and the other does not.
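
Both facts are easy to check numerically. Here is a minimal sketch in NumPy (the shapes 3 and 5 and the random seed are arbitrary test choices, not anything from the post); for robustness it compares sorted eigenvalue magnitudes rather than the complex eigenvalues themselves:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal((3, 5))   # represents f: k^5 -> k^3
g = rng.standard_normal((5, 3))   # represents g: k^3 -> k^5

gf = g @ f   # an operator on k^5
fg = f @ g   # an operator on k^3

# Cyclicity of trace: tr(g f) = tr(f g).
assert np.isclose(np.trace(gf), np.trace(fg))

# The finer fact: g f and f g have the same nonzero eigenvalues.
# gf has two extra eigenvalues, both zero (its rank is at most 3).
mags_gf = np.sort(np.abs(np.linalg.eigvals(gf)))
mags_fg = np.sort(np.abs(np.linalg.eigvals(fg)))
assert np.allclose(mags_gf[:2], 0, atol=1e-8)
assert np.allclose(mags_gf[2:], mags_fg, atol=1e-8)
```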

I’ll write $Spec(T)$ for the set-with-multiplicities of eigenvalues of a linear operator $T$, and $Spec'(T)$ for the set-with-multiplicities of nonzero eigenvalues. Everything we’ll do is on finite-dimensional vector spaces over an algebraically closed field $k$. Thus, $Spec(T)$ is a finite subset-with-multiplicity of $k$ and $Spec'(T)$ is a finite subset-with-multiplicity of $k^\times = k \setminus \{0\}$.

I’ll call $Spec'(T)$ the invertible spectrum of $T$. Why? Because every operator $T$ decomposes uniquely as a direct sum of operators $T_{nil} \oplus T_{inv}$, where every eigenvalue of $T_{nil}$ is $0$ (or equivalently, $T_{nil}$ is nilpotent) and no eigenvalue of $T_{inv}$ is $0$ (or equivalently, $T_{inv}$ is invertible). Then the invertible spectrum of $T$ is the spectrum of its invertible part $T_{inv}$.

If excluding zero seems forced or unnatural, perhaps it helps to consider the “reciprocal spectrum”

$$RecSpec(T) = \{\lambda \in k : \ker(\lambda T - I) \text{ is nontrivial}\}.$$

There’s a canonical bijection between $Spec'(T)$ and $RecSpec(T)$ given by $\lambda \leftrightarrow 1/\lambda$. So the invariants $Spec'$ and $RecSpec$ carry the same information, and if $RecSpec$ seems natural to you then $Spec'$ should too.

Moreover, if you know the space $X$ that your operator $T$ is acting on, then to know the invertible spectrum $Spec'(T)$ is to know the full spectrum $Spec(T)$. That’s because the multiplicities of the eigenvalues of $T$ sum to $\dim(X)$, and so the multiplicity of $0$ in $Spec(T)$ is $\dim(X)$ minus the sum of the multiplicities of the nonzero eigenvalues.

The cyclicity equation

$$Spec'(g \circ f) = Spec'(f \circ g)$$

is a very strong property of $Spec'$. A second, seemingly more mundane, property is that for any operators $T_1$ and $T_2$ on the same space, and any scalar $\lambda$,

$$Spec'(T_1) = Spec'(T_2) \implies Spec'(T_1 + \lambda I) = Spec'(T_2 + \lambda I).$$

In other words, for an operator $T$, if you know $Spec'(T)$ and you know the space that $T$ acts on, then you know $Spec'(T + \lambda I)$ for each scalar $\lambda$. Why? Well, we noted above that if you know the invertible spectrum of an operator and you know the space it acts on, then you know the full spectrum. So $Spec'(T)$ determines $Spec(T)$, which determines $Spec(T + \lambda I)$ (as $Spec(T) + \lambda$), which in turn determines $Spec'(T + \lambda I)$.

I claim that the invariant $Spec'$ is universal with these two properties, in the following sense.

Theorem   Let $\Omega$ be a set and let $\Phi : \{\text{linear operators}\} \to \Omega$ be a function satisfying:

  1. $\Phi(g \circ f) = \Phi(f \circ g)$ for all $X \stackrel{f}{\to} Y \stackrel{g}{\to} X$

  2. $\Phi(T_1) = \Phi(T_2) \implies \Phi(T_1 + \lambda I) = \Phi(T_2 + \lambda I)$ for all operators $T_1, T_2$ on the same space, and all scalars $\lambda$.

Then $\Phi$ is a specialization of $Spec'$; that is, $Spec'(T_1) = Spec'(T_2) \implies \Phi(T_1) = \Phi(T_2)$ for all $T_1, T_2$. Equivalently, there is a unique function $\bar{\Phi} : \{\text{finite subsets-with-multiplicity of } k^\times\} \to \Omega$ such that $\Phi(T) = \bar{\Phi}(Spec'(T))$ for all operators $T$.

For example, take $\Phi$ to be trace. Then conditions 1 and 2 are satisfied, so the theorem implies that trace is a specialization of $Spec'$. That’s clear anyway, since the trace of an operator is the sum-with-multiplicities of the nonzero eigenvalues.

I’ll say just a little about the proof.

The invertible spectrum of a nilpotent operator is empty. Now, the Jordan normal form theorem invites us to pay special attention to the special nilpotent operators $P_n$ on $k^n$ defined as follows: writing $e_1, \ldots, e_n$ for the standard basis of $k^n$, the operator $P_n$ is given by

$$e_n \mapsto e_{n-1} \mapsto \cdots \mapsto e_1 \mapsto 0.$$

So if the theorem is to be true then, in particular, $\Phi(P_n)$ must be independent of $n$.

But it’s not hard to cook up maps $f : k^n \to k^{n-1}$ and $g : k^{n-1} \to k^n$ such that $g \circ f = P_n$ and $f \circ g = P_{n-1}$. Thus, condition 1 implies that $\Phi(P_n) = \Phi(P_{n-1})$. It follows that $\Phi(P_n)$ is independent of $n$, as claimed.
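
One concrete choice of such maps, sketched in NumPy (the value $n = 4$ is an arbitrary test case, and the matrices are my own illustration, not taken from the post): take $f$ to be “shift down a coordinate”, sending $e_j \mapsto e_{j-1}$ with $e_1 \mapsto 0$, and take $g$ to be the standard inclusion of $k^{n-1}$ into $k^n$.

```python
import numpy as np

def P(n):
    """The shift operator P_n on k^n: e_n -> e_{n-1} -> ... -> e_1 -> 0
    (ones on the superdiagonal)."""
    return np.eye(n, k=1)

n = 4
f = np.eye(n - 1, n, k=1)   # f: k^n -> k^{n-1}, e_j -> e_{j-1}, e_1 -> 0
g = np.eye(n, n - 1)        # g: k^{n-1} -> k^n, the standard inclusion

assert np.array_equal(g @ f, P(n))        # g∘f = P_n
assert np.array_equal(f @ g, P(n - 1))    # f∘g = P_{n-1}
```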

Of course, that doesn’t prove the theorem. But the rest of the proof is straightforward, given the Jordan normal form theorem and condition 2, and in this way, we arrive at the conclusion of the theorem:

$$Spec'(T_1) = Spec'(T_2) \implies \Phi(T_1) = \Phi(T_2)$$

for any operators $T_1$ and $T_2$.

One way to interpret the theorem is as follows. Let $\sim$ be the smallest equivalence relation on $\{\text{linear operators}\}$ such that:

  1. $g \circ f \sim f \circ g$

  2. $T_1 \sim T_2 \implies T_1 + \lambda I \sim T_2 + \lambda I$

(where $f$, $g$, etc. are quantified as in the theorem). Then the natural surjection

$$\{\text{linear operators}\} \longrightarrow \{\text{linear operators}\}/\sim$$

is isomorphic to

$$Spec' : \{\text{linear operators}\} \longrightarrow \{\text{finite subsets-with-multiplicity of } k^\times\}.$$

That is, there is a bijection between $\{\text{linear operators}\}/\sim$ and $\{\text{finite subsets-with-multiplicity of } k^\times\}$ making the evident triangle commute.

So, we’ve characterized the invariant $Spec'$ in terms of conditions 1 and 2. These conditions seem reasonably natural, and don’t depend on any prior concepts such as “eigenvalue”.

Condition 2 does appear to refer to some special features of the category $\mathbf{FDVect}$ of finite-dimensional vector spaces. But let’s now think about how it could be interpreted in other categories. That is, for a category $\mathcal{E}$ (in place of $\mathbf{FDVect}$) and a function

$$\Phi : \{\text{endomorphisms in } \mathcal{E}\} \to \Omega$$

into some set $\Omega$, how can we make sense of condition 2?

Write $\mathbf{Endo}(\mathcal{E})$ for the category of endomorphisms in $\mathcal{E}$, with maps preserving those endomorphisms in the sense that the evident square commutes. (It’s the category of functors from the additive monoid $\mathbb{N}$, seen as a one-object category, into $\mathcal{E}$.)

For any scalars $\kappa \neq 0$ and $\lambda$, there’s an automorphism $F_{\kappa, \lambda}$ of the category $\mathbf{Endo}(\mathbf{FDVect})$ given by

$$F_{\kappa, \lambda}(T) = \kappa T + \lambda I.$$

I guess, but haven’t proved, that these are the only automorphisms of $\mathbf{Endo}(\mathbf{FDVect})$ that leave the underlying vector space unchanged. In what follows, I’ll assume this guess is right.

Now, condition 2 says that $\Phi(T)$ determines $\Phi(T + \lambda I)$ for each $\lambda$, for operators $T$ on a known space. That’s weaker than the statement that $\Phi(T)$ determines $\Phi(\kappa T + \lambda I)$ for each $\kappa \neq 0$ and $\lambda$; but $Spec'(T)$ does determine $Spec'(\kappa T + \lambda I)$. So the theorem remains true if we replace condition 2 with the statement that $\Phi(T)$ determines $\Phi(F(T))$ for each automorphism $F$ of $\mathbf{Endo}(\mathbf{FDVect})$ “over $\mathbf{FDVect}$” (that is, leaving the underlying vector space unchanged).

This suggests the following definition:

Definition   Let $\mathcal{E}$ be a category. Let $\sim$ be the equivalence relation on $\{\text{endomorphisms in } \mathcal{E}\}$ generated by:

  1. $g \circ f \sim f \circ g$ for all $X \stackrel{f}{\to} Y \stackrel{g}{\to} X$ in $\mathcal{E}$

  2. $T_1 \sim T_2 \implies F(T_1) \sim F(T_2)$ for all endomorphisms $T_1, T_2$ on the same object of $\mathcal{E}$ and all automorphisms $F$ of $\mathbf{Endo}(\mathcal{E})$ over $\mathcal{E}$.

Call $\{\text{endomorphisms in } \mathcal{E}\}/\sim$ the set of invertible spectral values of $\mathcal{E}$. Write

$$Spec' : \{\text{endomorphisms in } \mathcal{E}\} \to \{\text{invertible spectral values of } \mathcal{E}\}$$

for the natural surjection. The invertible spectrum of an endomorphism $T$ in $\mathcal{E}$ is $Spec'(T)$.

In the case $\mathcal{E} = \mathbf{FDVect}$, the invertible spectral values are the finite subsets-with-multiplicity of $k^\times$, and the invertible spectrum $Spec'(T)$ is as defined at the start of this post, namely, the set of nonzero eigenvalues with their algebraic multiplicities.

Aside   At least, that’s the case up to isomorphism. You might feel that we’ve lost something, though. After all, the spectrum of a linear operator is a subset-with-multiplicities of the base field, not just an element of some abstract set.

But the theorem does give us some structure on the set of invertible spectral values. This remark of mine below (written after I wrote a first version of this post, but before I wrote the revised version you’re now reading) shows that if $\mathcal{E}$ has finite coproducts then $\sim$ is a congruence for them; that is, if $S_1 \sim S_2$ and $T_1 \sim T_2$ then $S_1 + T_1 \sim S_2 + T_2$. (Here $+$ is the coproduct in $\mathbf{Endo}(\mathcal{E})$, which comes from the coproduct in $\mathcal{E}$ in the obvious way.) So the coproduct structure on endomorphisms induces a binary operation $\vee$ on the set of invertible spectral values, satisfying

$$Spec'(S \oplus T) = Spec'(S) \vee Spec'(T).$$

In the case $\mathcal{E} = \mathbf{FDVect}$, this is the union of finite subsets-with-multiplicity of $k^\times$ (adding multiplicities). And in general, the algebraic properties of coproduct imply that $\vee$ gives the set of invertible spectral values the structure of a commutative monoid.

Similarly, condition 2 implies that the automorphism group of $\mathbf{Endo}(\mathcal{E})$ acts on the set of invertible spectral values; and since automorphisms preserve coproducts (if they exist), it acts by monoid homomorphisms.

We can now ask what this general definition produces for other categories. I’ve only just begun to think about this, and only in one particular case: when $\mathcal{E}$ is $\mathbf{FinSet}$, the category of finite sets.

I believe the category of endomorphisms in $\mathbf{FinSet}$ has no nontrivial automorphisms over $\mathbf{FinSet}$. After all, given an endomorphism $T$ of a finite set $X$, what natural ways are there of producing another endomorphism of $X$? There are only the powers $T^n$, I think, and the process $T \mapsto T^n$ is only invertible when $n = 1$.

So, condition 2 is trivial. We’re therefore looking for the smallest equivalence relation on $\{\text{endomorphisms of finite sets}\}$ such that $g \circ f \sim f \circ g$ for all maps $f$ and $g$ pointing in opposite directions. I believe, but haven’t proved, that $T_1 \sim T_2$ if and only if $T_1$ and $T_2$ have the same number of cycles

$$x_1 \mapsto x_2 \mapsto \cdots \mapsto x_p \mapsto x_1$$

of each period $p$. Thus, the invertible spectral values of $\mathbf{FinSet}$ are the finite sets-with-multiplicity of positive integers, and if $T$ is an endomorphism of a finite set then $Spec'(T)$ is the set-with-multiplicities of periods of cycles of $T$.
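
As a small sanity check of the $g \circ f \sim f \circ g$ part of this, here is a sketch in Python; the function `cycle_periods` and the particular maps `f` and `g` are my own made-up illustrations, not anything from the post:

```python
def cycle_periods(T):
    """Sorted list (multiset) of periods of the cycles of an endomorphism T
    of {0, ..., n-1}, given as a list with T[x] the image of x."""
    n = len(T)
    periods = []
    seen = set()
    for x in range(n):
        y = x
        for _ in range(n):   # iterating n times from any point lands on a cycle
            y = T[y]
        if y in seen:
            continue
        cycle = {y}          # walk the cycle through y to measure its period
        p, z = 1, T[y]
        while z != y:
            cycle.add(z)
            z = T[z]
            p += 1
        seen |= cycle
        periods.append(p)
    return sorted(periods)

# A 3-cycle 0 -> 1 -> 2 -> 0 with a tail 3 -> 0 has one cycle, of period 3.
assert cycle_periods([1, 2, 0, 0]) == [3]

# g∘f and f∘g, for maps f and g between *different* finite sets,
# have the same cycle data.
f = [0, 0, 1, 2]                    # f: {0,1,2,3} -> {0,1,2}
g = [1, 2, 3]                       # g: {0,1,2} -> {0,1,2,3}
gf = [g[f[x]] for x in range(4)]    # endomorphism of a 4-element set
fg = [f[g[x]] for x in range(3)]    # endomorphism of a 3-element set
assert cycle_periods(gf) == cycle_periods(fg)
```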

All of the above is a record of thoughts I had in spare moments at this workshop I just attended in Louvain-la-Neuve, so I haven’t had much time to reflect. I’ve noted where I’m not sure of the facts, but I’m also not sure of the aesthetics:

[Cartoon: a blackboard full of mathematics, two professors standing in front, one saying to the other: “The math is right. It’s just in poor taste.”]

In other words, do the theorem and definition above represent the best approach? Here are three quite specific reservations:

  1. I’m not altogether satisfied with the fact that it’s the invertible spectrum, rather than the full spectrum, that comes out. Perhaps there’s something to be done with the observation that if you know the invertible spectrum, then knowing the full spectrum is equivalent to knowing (the dimension of) the space that your operator acts on.

  2. Condition 2 of the theorem states that $Spec'(T)$ determines $Spec'(T + \lambda I)$ for an operator $T$ on a known space (and, of course, for known $\lambda$). That was enough to prove the theorem. But there’s also a much stronger true statement: $Spec'(T)$ determines $Spec'(p(T))$ for any polynomial $p$ over $k$ (again, for an operator $T$ on a known space). Any polynomial $p$ gives an endomorphism $T \mapsto p(T)$ of $\mathbf{Endo}(\mathbf{FDVect})$ over $\mathbf{FDVect}$, and I guess these are the only endomorphisms. So, we could generalize condition 2 by using endomorphisms rather than automorphisms of $\mathbf{Endo}(\mathcal{E})$. Should we?

Posted at September 14, 2015 1:06 AM UTC


37 Comments & 0 Trackbacks

Re: Where Does The Spectrum Come From?

The “slightly less famous fact” is essentially the Sylvester determinant theorem: https://en.wikipedia.org/wiki/Sylvester%27s_determinant_theorem . (Sorry, I can’t figure out how to make HTML links with this parser.)

Posted by: Terence Tao on September 14, 2015 2:00 AM

Re: Where Does The Spectrum Come From?

Thanks, I didn’t know that name for it.

My favourite proof is this. Write

$$\ker^\infty(T) = \bigcup_{n=0}^\infty \ker(T^n).$$

The algebraic multiplicity of $\lambda$ with respect to $T$ is $\dim(\ker^\infty(T - \lambda I))$. (In my preferred approach to linear algebra, that’s the definition of algebraic multiplicity.) Thus, the “slightly less famous fact” states that

$$\ker^\infty(g f - \lambda I) \cong \ker^\infty(f g - \lambda I)$$

for all $\lambda \neq 0$.

To prove this, note that $f$ restricts to a map from the left-hand side to the right-hand side of this supposed isomorphism, and $g$ restricts to a map from right to left. So it’s enough to prove that $g f$, as an operator on the left-hand side, is invertible (and similarly $f g$ on the right-hand side). Now $g f - \lambda I$ acts nilpotently on the left-hand side, so its only eigenvalue is $0$, so the only eigenvalue of $g f$ (as an operator on the left-hand side) is $\lambda$. But $\lambda \neq 0$, so we’re done.
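
This description of algebraic multiplicity as $\dim \ker^\infty(T - \lambda I)$ is easy to compute with directly. A sketch in NumPy (the helper `alg_mult` and the test matrix are my own illustration; it uses the fact that the chain $\ker(A) \subseteq \ker(A^2) \subseteq \cdots$ stabilizes within $n$ steps on $k^n$, so $\ker^\infty(A) = \ker(A^n)$):

```python
import numpy as np

def alg_mult(T, lam, tol=1e-8):
    """Algebraic multiplicity of lam: dim ker^infty(T - lam*I) = n - rank((T - lam*I)^n)."""
    n = T.shape[0]
    A = T - lam * np.eye(n)
    return n - np.linalg.matrix_rank(np.linalg.matrix_power(A, n), tol=tol)

# A Jordan block J_2(3) direct-summed with a zero block: eigenvalue 3 has
# algebraic multiplicity 2, eigenvalue 0 has multiplicity 1.
T = np.array([[3.0, 1.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 0.0]])
assert alg_mult(T, 3.0) == 2
assert alg_mult(T, 0.0) == 1
assert alg_mult(T, 5.0) == 0   # 5 is not an eigenvalue
```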

Posted by: Tom Leinster on September 14, 2015 2:19 AM

Re: Where Does The Spectrum Come From?

Tom,

It is well known in semigroup theory that the smallest equivalence relation on the monoid of all self-maps of a finite set generated by $f g \sim g f$ is precisely: having eventual ranges (or sets of recurrent points) of the same cardinality, and the same cycle type as permutations of their eventual ranges, as you suggest.

This is also equivalent to all complex characters agreeing on the two elements.

Posted by: Benjamin Steinberg on September 14, 2015 3:01 AM

Re: Where Does The Spectrum Come From?

*This comment and those replying to it refer to an old version of the post.*

Excellent; thanks.

You didn’t mention condition 2, that the equivalence relation is a congruence with respect to disjoint union. Maybe that’s not needed in the context of finite sets — and maybe not for finite-dimensional vector spaces either. It would be nice if it could be dropped. I’ll have to think that over.

Posted by: Tom Leinster on September 14, 2015 3:08 AM

Re: Where Does The Spectrum Come From?

I didn’t think about it but it seems clear. If you put together mappings of disjoint sets, then the cycle structure of the eventual range depends only on the two mappings since they don’t interact.

Posted by: Benjamin Steinberg on September 14, 2015 3:22 AM

Re: Where Does The Spectrum Come From?

Condition 2 (on coproducts) is redundant.

That is, the theorem in my post remains true if we drop that condition, and dropping it also leaves unchanged the definitions of \sim, invertible spectral value and invertible spectrum. (This also means that in the definition, we no longer need to assume the existence of coproducts.)

When I get a moment, I’ll edit the post to delete all mention of condition 2 (on coproducts). The old condition 3 (on automorphisms) will get renumbered as condition 2. What the old condition 2 of the theorem said was that if Φ(S 1)=Φ(S 2)\Phi(S_1) = \Phi(S_2) and Φ(T 1)=Φ(T 2)\Phi(T_1) = \Phi(T_2) then Φ(S 1T 1)=Φ(S 2T 2)\Phi(S_1 \oplus T_1) = \Phi(S_2 \oplus T_2), where \oplus is direct sum and these four operators can be on any spaces (potentially all different). Similarly, the old condition 2 of the definition said that if S 1S 2S_1 \sim S_2 and T 1T 2T_1 \sim T_2 then S 1+T 1S 2+T 2S_1 + T_1 \sim S_2 + T_2.

To see that the old condition 2 of the definition is redundant, let \sim be the equivalence relation on {endomorphisms in}\{ \text{endomorphisms in}\,\,\mathcal{E}\} generated by the old conditions 1 and 3, that is, the new conditions 1 and 2. I claim that if S 1S 2S_1 \sim S_2 and T 1T 2T_1 \sim T_2 then S 1+T 1S 2+T 2S_1 + T_1 \sim S_2 + T_2.

It’s enough to prove that if T 1T 2T_1 \sim T_2 then S+T 1S+T 2S + T_1 \sim S + T_2 for all SS. So, define another equivalence relation \approx by

T 1T 2S+T 1S+T 2for allS. T_1 \approx T_2 \iff S + T_1 \sim S + T_2 \,\,\text{for all}\,\, S.

Then \approx satisfies the other two conditions:

  • Let XfYgXX \stackrel{f}{\to} Y \stackrel{g}{\to} X, and let SS be an endomorphism of AA, say. Then we get maps A+XS+fA+Y1 A+gA+X, A + X \stackrel{S + f}{\to} A + Y \stackrel{1_A + g}{\to} A + X, so S+gfS+fgS + g \circ f \sim S + f \circ g, as required.

  • Let T 1T_1 and T 2T_2 be operators on XX with T 1T 2T_1 \approx T_2, let FF be an automorphism of Endo()\mathbf{Endo}(\mathcal{E}) over \mathcal{E}, and let SS be an endomorphism of AA in \mathcal{E}. We have an endomorphism F 1(S)F^{-1}(S) of F 1(A)F^{-1}(A), and F 1(S)+T 1F 1(S)+T 2F^{-1}(S) + T_1 \sim F^{-1}(S) + T_2 by definition of \approx. The old condition 3 (i.e. the new condition 2) allows us to apply FF to each side, giving S+F(T 1)S+F(T 2)S + F(T_1) \sim S + F(T_2), as required.

But \sim is the smallest equivalence relation satisfying those two conditions, so \sim \subseteq \approx — which was exactly what had to be proved.

It follows that the theorem also remains true if we drop the old condition 2.

Thanks to Ben for prompting these thoughts.

Posted by: Tom Leinster on September 14, 2015 1:22 PM

Re: Where Does The Spectrum Come From?

Another approach: the spectrum contains the same information as the trace of all powers, so it suffices to motivate the trace categorically. But this is standard: traces canonically make sense in any symmetric monoidal category with duals. One reason to prefer this story is that cyclicity comes for free from the category theory, rather than having to be put in by hand.

Posted by: Qiaochu Yuan on September 14, 2015 4:22 AM

Re: Where Does The Spectrum Come From?

That’s a really nice observation.

Can you make it apply to the category of finite sets?

Posted by: Tom Leinster on September 14, 2015 8:21 AM

Re: Where Does The Spectrum Come From?

A standard way to apply symmetric-monoidal-trace-theory to a category without duals is to map it into one that does, such as by the free vector space functor.

Posted by: Mike Shulman on September 14, 2015 5:17 PM

Re: Where Does The Spectrum Come From?

In fact we can apply it to the category of sets! To do that we need to turn it somehow into a symmetric monoidal category with duals, which we do by passing to the category of sets and (isomorphism classes of) spans. Every object is now self-dual, and you can compute that the trace of an endomorphism is its set of fixed points.

Posted by: Qiaochu Yuan on September 15, 2015 4:17 PM

Re: Where Does The Spectrum Come From?

Is the first bit true in positive characteristic? I don’t think one can get $e_2 \pmod 2$ from $p_1$ and $p_2 \pmod 2$.

Posted by: Jesse C. McKeown on September 14, 2015 2:42 PM

Re: Where Does The Spectrum Come From?

Could you explain your notation?

Posted by: Tom Leinster on September 14, 2015 6:25 PM

Re: Where Does The Spectrum Come From?

$e$ for elementary symmetric polynomials, $p$ for power sums; over the rationals, they convey the same information, but not over a ring with torsion, such as the finite fields. Specifically, they’re related thus:

$$p_n - e_1 p_{n-1} + \cdots + (-1)^{n-1} e_{n-1} p_1 + (-1)^n n e_n = 0,$$

which means you can’t get $e_n$ from the $p$ when $n$ is zero in the coefficient field. Qiaochu’s starting point seems to be that, when the $e$ are the elementary symmetric polynomials of a spectrum, i.e. the coefficients of the characteristic polynomial, the $p$ are the power traces.

If we don’t have to worry about torsion, of course, it’s a non-issue.

Posted by: Jesse C. McKeown on September 14, 2015 7:35 PM

Re: Where Does The Spectrum Come From?

I get your point. So Qiaochu’s statement that

the spectrum contains the same information as the trace of all powers

is true in characteristic $0$ but false in all positive characteristics. A counterexample for any field $k$ of characteristic $p > 0$: if we denote by $0_p$ and $I_p$ the zero operator and identity operator on $k^p$, then

$$tr(0_p^r) = 0 = tr(I_p^r)$$

for all integers $r \geq 0$. But, of course, $0_p$ and $I_p$ have different spectra.
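
A quick numerical check of this counterexample, as a sketch in Python: compute with integer matrices and reduce the traces mod $p$ (the choice $p = 5$ is arbitrary).

```python
import numpy as np

p = 5                              # any prime characteristic
Z = np.zeros((p, p), dtype=int)    # the zero operator 0_p on k^p
I = np.eye(p, dtype=int)           # the identity operator I_p on k^p

# All power traces agree mod p: tr(Z^r) = 0 and tr(I^r) = p ≡ 0,
# including r = 0, where both powers are the identity.
for r in range(6):
    assert np.trace(np.linalg.matrix_power(Z, r)) % p == 0
    assert np.trace(np.linalg.matrix_power(I, r)) % p == 0
# Yet Z and I have different spectra: {0,...,0} versus {1,...,1}.
```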

Posted by: Tom Leinster on September 14, 2015 8:24 PM

Re: Where Does The Spectrum Come From?

I agree that the preceding does put trace and spectrum in a nice categorical context, and confirms their utility; I’m not sure how students will find it, before they’re convinced the spectrum is as worth-while (or “natural”) as it is. Particularly, the invocation of Jordan Canonical Form puts the argument well after the spectrum has been introduced and put to a lot of work.

The perspective that makes most sense to me is: a linear operator $T$ on $V$ turns $V$, a (finite-dimensional) $k$-module, into a (finitely-generated) $k[x]$-module. While $k[x]$ isn’t a field, it is a Euclidean domain, so $(V, T)$ is a (finite) direct sum of quotients $k[x]/(p(x))$; more: working in the coordinate vector spaces $k^m$, one can analyze this direct sum by applying Smith’s algorithm to any presentation of $(V, T)$ as a $k[x]$-module, and it so happens that we have one in the matrix $T - x I$. It’s as easy${}^{*}$ as factoring polynomials!


It took me a long time after learning about this picture to really appreciate it.

One thing that strikes me about it now is how it gives a good starting place to build up finite group characters. In that context one is led (for example) to study not only single operators on/in the group ring $k[G]$, but the whole center of $k[G]$; the punch-line in that direction is: the conjugacy classes in $G$ give a nice basis for the center, and that in turn makes the group ring a module for $k[x_{[g]} \mid ...]$, polynomials in several generators; these rings aren’t Euclidean anymore, but there is still lots to say!

Posted by: Jesse C. McKeown on September 15, 2015 2:32 AM

Re: Where Does The Spectrum Come From?

I think the point of asking these questions is not for the benefit of students. Rather, it’s for our benefit: suppose that in the future we find ourselves in a category and wanting to know some interesting invariants of endomorphisms in it. Where should we look? Since the spectrum has been such a fruitful invariant for endomorphisms of vector spaces, we might want categorical machinery that spits it out so we can then apply it to new settings and see what we get.

The commutative algebra story is nice so far as it goes but it doesn’t generalize well: you really need to know that you’re dealing with vector spaces. The story about symmetric monoidal categories with duals, on the other hand, is extremely general, and has a rich continuation involving topological field theory, the Lefschetz fixed point theorem, etc. etc.

Posted by: Qiaochu Yuan on September 15, 2015 4:43 PM

Re: Where Does The Spectrum Come From?

Asking for ourselves, and pondering, of course are fine things to do; I don’t think I denied that.

I’m not convinced that the commutative localization of a category is so far from commutative algebra as all that! One definitely ends up with a category whose endomorphism monoids are abelian monoids, and the monoid rings of such things are natural things to consider, etc. …

But I will come back and read your comment again.

Posted by: Jesse C. McKeown on September 16, 2015 5:21 AM

Re: Where Does The Spectrum Come From?

I don’t know what you mean by “commutative localization” here. If you’re referring to the dimension / trace construction described in my second top-level comment involving quotienting by $f g \sim g f$, the result is not a category, it’s just a set.

This is directly analogous to the zeroth Hochschild homology of an algebra, which is a vector space (not an algebra) given by quotienting by the subspace generated by commutators. In particular it’s not the abelianization, which is an algebra given by quotienting by the ideal generated by commutators.

Posted by: Qiaochu Yuan on September 16, 2015 7:11 AM

Re: Where Does The Spectrum Come From?

Oh dear… yes, I did go a bit silly there…

There… I’m sure there’s a category somewhere under the thing I’m thinking of… and you just might tell me it’s the thing that everyone else is already talking about.

Well, I don’t feel like a wheelwright today, not for this stuff.

Posted by: Jesse C. McKeown on September 18, 2015 1:18 AM

Re: Where Does The Spectrum Come From?

Here is another approach which has the benefit of making sense in any category, no symmetric monoidal structure needed, and which also has the property that cyclicity appears rather than being put in by hand.

There is a 2-category whose objects are (small, for simplicity) categories, whose 1-morphisms $C \to D$ are bimodules

$$F : D^{op} \times C \to \text{Set}$$

(where composition is given by tensor product), and whose 2-morphisms are morphisms of bimodules. This 2-category is symmetric monoidal, and every category is dualizable with dual the opposite category, so you can take the trace of any endomorphism $F : C^{op} \otimes C \to \text{Set}$, and you get the coend

$$\int^{c \in C} F(c, c).$$

In particular you can define the “dimension” (or “Hochschild homology”) of a category to be the trace of the identity endomorphism, which turns out to be the coend of the Hom bifunctor $\text{Hom}(-, -) : C^{op} \times C \to \text{Set}$.

When you compute this coend, you get precisely the quotient of the set of endomorphisms in the category by the cyclicity relation $f g \sim g f$ (where $f : c \to d$, $g : d \to c$ are morphisms, not necessarily endo). There is now a “universal trace” of any endomorphism in the category which takes values in this set. If you apply this to $\text{FinVect}$ while remembering that it’s enriched over $\text{Vect}$ (so everything is enriched over $\text{Vect}$ in the previous discussion), I think you just get the trace back. If you apply this to $\text{FinSet}$ you get something more complicated that Todd Trimble explained on MO once; I don’t remember the details.

Incidentally, I think it’s a feature and not a bug that you only get the invertible spectrum and not the full spectrum. Knowing only the traces of the powers of a linear operator, and not the dimension of the vector space it acts on, also only recovers the invertible spectrum at best.

Posted by: Qiaochu Yuan on September 15, 2015 4:39 PM

Re: Where Does The Spectrum Come From?

Just one little comment for now:

Knowing only the traces of the powers of a linear operator, and not the dimension of the vector space it acts on, also only recovers the invertible spectrum at best.

In characteristic zero, the trace of the zeroth power of the operator is the dimension of the space. So you do get the full spectrum.
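Tom’s point can be made computational via Newton’s identities (a sketch of my own, assuming characteristic zero): the power traces tr(T^r) for r = 0, \dots, n determine the elementary symmetric functions of the eigenvalues, hence the characteristic polynomial and the full spectrum.

```python
import numpy as np

def elementary_from_power_traces(p):
    """Newton's identities: recover e_1, ..., e_n from p_r = tr(T^r), r = 1..n."""
    n = len(p)
    e = [1.0]  # e_0 = 1
    for k in range(1, n + 1):
        # k e_k = sum_{i=1}^{k} (-1)^{i-1} e_{k-i} p_i
        e.append(sum((-1) ** (i - 1) * e[k - i] * p[i - 1]
                     for i in range(1, k + 1)) / k)
    return e[1:]

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 4))
n = T.shape[0]  # in characteristic zero, n = tr(T^0) is itself one of the traces

p = [np.trace(np.linalg.matrix_power(T, r)) for r in range(1, n + 1)]
e = elementary_from_power_traces(p)

# char poly of T is x^n - e_1 x^{n-1} + e_2 x^{n-2} - ...; its roots are Spec(T)
coeffs = [1.0] + [(-1) ** k * e[k - 1] for k in range(1, n + 1)]
print(np.allclose(coeffs, np.poly(T)))  # traces of powers recover the spectrum
```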

Posted by: Tom Leinster on September 15, 2015 4:58 PM | Permalink | Reply to this

Re: Where Does The Spectrum Come From?

The coend \int^c \hom(c, c) that Qiaochu is describing has a name: it’s called the trace of a category. Here I’ve linked to the calculation I once described here at the Café involving finite sets. The calculation is closely related to eventual images, and shows that the trace of finite sets and functions is the same as the trace of finite sets and bijections, and is summarized as the set of conjugacy classes of permutations.

Posted by: Todd Trimble on September 15, 2015 11:11 PM | Permalink | Reply to this

Re: Where Does The Spectrum Come From?

Thanks Todd; that’s neat.

Do you have any thoughts about the role in my post of the automorphisms of the category of endomorphisms? E.g. if you quotient out the set of endomorphisms in \mathbf{FDVect} by g f \sim f g then you don’t, as far as I’m aware, get the invertible spectrum. All the same, it would be nice to get rid of condition 2.

Posted by: Tom Leinster on September 16, 2015 12:06 AM | Permalink | Reply to this

Re: Where Does The Spectrum Come From?

This is exactly the equivalence relation under which two self-maps of a finite set are identified just when all complex characters of the monoid of self-maps agree on them. So this means in some sense that allowing linear representations and then taking the trace gives the same answer as the categorical notion of trace.

Posted by: Benjamin Steinberg on September 16, 2015 3:46 PM | Permalink | Reply to this

Re: Where Does The Spectrum Come From?

I think following Todd’s argument for \mathsf{FinSet} can show that the categorical trace in \mathsf{FinVect} is indeed the invertible spectrum – at least when the field is algebraically closed. That is, I think that condition (2) is unnecessary to characterize the invertible spectrum when the field is algebraically closed.

It’s clear, as before, that when we restrict to the groupoid \mathsf{FinVect}_{\mathrm{iso}}, the categorical trace becomes a complete conjugation invariant, i.e. the categorical trace of \mathsf{FinVect}_{\mathrm{iso}} is the Jordan normal form (in a groupoid, the cyclicity relation is just another way to express conjugation invariance).

The same argument as Todd makes in \mathsf{FinSet} also shows that the categorical trace of \mathsf{FinVect} forgets the generalized eigenspace of generalized eigenvalue 0. Putting these together, the categorical trace of \mathsf{FinVect} can only depend on the part of the Jordan form associated to the nonzero spectrum.

Unlike in the case of \mathsf{FinSet} (although I admit, I don’t follow the logic in this case), there are further identifications. For example, setting

A = \left[\begin{array}{ccc} \lambda & 0 & a \\ 0 & \lambda & b \end{array} \right] \qquad B = \left[ \begin{array}{cc} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{array} \right]

we get

A B = \left[ \begin{array}{cc} \lambda & 0 \\ 0 & \lambda \end{array} \right] \qquad B A = \left[ \begin{array}{ccc} \lambda & 0 & a \\ 0 & \lambda & b \\ 0 & 0 & 0 \end{array} \right]

The two have the same invertible spectrum (two copies of \lambda, assuming \lambda is nonzero), but B A has a nilpotent part while A B doesn’t (as long as a or b is nonzero).
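A quick numeric check of the shared invertible spectrum (my own illustration, with the sample values \lambda = 2, a = 1, b = 0):

```python
import numpy as np

lam, a, b = 2.0, 1.0, 0.0  # sample values; lam nonzero
A = np.array([[lam, 0., a],
              [0., lam, b]])    # 2x3
B = np.array([[1., 0.],
              [0., 1.],
              [0., 0.]])        # 3x2

AB = A @ B   # 2x2
BA = B @ A   # 3x3: same invertible spectrum, plus an extra eigenvalue 0

spec_AB = sorted(np.linalg.eigvals(AB).real)
spec_BA = sorted(np.linalg.eigvals(BA).real)
print(spec_AB)  # two copies of lam
print(spec_BA)  # two copies of lam, and a 0
```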

I think iteratively applying maps analogous to A and B ought to allow us to replace the nilpotent part of any endomorphism at will. This leaves the invertible spectrum as the only remaining invariant after applying the cyclicity condition, and Tom’s “slightly less well-known fact” tells us that the nonzero spectrum is exactly the categorical trace.

I’m inclined to think that for non-algebraically-closed fields, the categorical trace is something like the invertible spectrum of the operator after tensoring with the algebraic closure of the ground field?

Posted by: Tim Campion on September 17, 2015 3:45 AM | Permalink | Reply to this

Re: Where Does The Spectrum Come From?

A calculation shows, however, that your B A is in fact diagonalizable, so I’m not yet seeing “unexpected reductions” (such as two non-conjugate Jordan normal form matrices of the same dimension mapping to the same element in the trace of the category).

But I haven’t yet ruled out this possibility.

Posted by: Todd Trimble on September 17, 2015 9:36 PM | Permalink | Reply to this

Re: Where Does The Spectrum Come From?

Wires crossed, I guess we realized just how wrong I was at the same time!

Actually, now I believe that the Jordan form associated to the invertible spectrum is the categorical trace. Tom’s proof above that the invertible spectrum is cyclic-invariant can be refined to show that \ker(gf - \lambda I)^n \cong \ker(fg - \lambda I)^n for all n when \lambda \neq 0. From the dimensions of these spaces, one can recover the Jordan form associated to the invertible spectrum.

Posted by: Tim Campion on September 17, 2015 10:12 PM | Permalink | Reply to this

Re: Where Does The Spectrum Come From?

Well, my claim that those matrices are in different conjugacy classes is horribly wrong. One point remains: the categorical trace of \mathsf{FinVect}_k when k is algebraically closed is somewhere between the Jordan normal form corresponding to the invertible spectrum and the invertible spectrum itself, which is a razor-thin distinction. It should be doable to resolve exactly what it is.

Posted by: Tim Campion on September 17, 2015 9:43 PM | Permalink | Reply to this

Re: Where Does The Spectrum Come From?

Tim, I haven’t had time to go through this thread properly, but are you assuming we’re over a field of characteristic zero? As Jesse and I were discussing earlier, in positive characteristic, two operators S and T with different invertible spectra can still satisfy tr(S^r) = tr(T^r) for all natural numbers r.

Posted by: Tom Leinster on September 17, 2015 11:37 PM | Permalink | Reply to this

Re: Where Does The Spectrum Come From?

All I need is that every linear endomorphism has a Jordan normal form, which I think (though I haven’t really checked) only requires that the field be algebraically closed. The actual trace of linear algebra doesn’t appear explicitly in anything I’m doing; I’m just computing the categorical trace, i.e. the universal extranatural transformation \eta: \mathrm{Hom}_{\mathsf{FinVect}} \Rightarrow {?} which mods out by the cyclic relation fg \equiv gf.

I argue that when k is algebraically closed, the vertex, which I’ve simply labeled “?”, is the set of all Jordan normal forms of invertible endomorphisms, and the categorical trace \eta assigns to a linear endomorphism its Jordan normal form when restricted to its eventual image, or equivalently its Jordan normal form with the block of generalized eigenvalue 0 deleted.

The argument (spread out over several comments, with false claims to the contrary mixed in!) boils down to this:

  • The cyclic relation implies that \eta is conjugation-invariant, so it can only depend on the Jordan normal form.

  • The cyclic relation also implies that the categorical trace \eta(T) only depends on the restriction of T to its eventual image (this is Todd’s argument on the nLab, written for the case of \mathsf{FinSet}, but it still works here). The restriction of T to its eventual image is obtained by deleting the Jordan block for the eigenvalue 0, so \eta depends only on this part of the Jordan normal form.

  • Conversely, I claim that the part of the Jordan normal form associated to the nonzero eigenvalues is invariant under the cyclic relation. This amounts to observing that in your (Tom’s) argument that \ker^\infty(gf - \lambda I) \cong \ker^\infty(fg - \lambda I) (when \lambda \neq 0), you really establish that \ker(gf - \lambda I)^n \cong \ker(fg - \lambda I)^n for any n \gt 0 (when \lambda \neq 0). But the dimensions of these spaces are enough to recover the Jordan blocks for nonzero eigenvalues.
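This refinement can be sanity-checked numerically (my own sketch, reusing the A and B from earlier in the thread with \lambda = 2, a = b = 1): the kernel dimensions agree for every n at \lambda = 2, but not at \lambda = 0.

```python
import numpy as np

def ker_dim(M, lam, n):
    """dim ker (M - lam*I)^n, computed via rank."""
    P = np.linalg.matrix_power(M - lam * np.eye(M.shape[0]), n)
    return M.shape[0] - np.linalg.matrix_rank(P)

# f : X -> Y (a 2x3 matrix, dim X = 3, dim Y = 2) and g : Y -> X,
# as in the earlier example with lam = 2, a = b = 1.
f = np.array([[2., 0., 1.],
              [0., 2., 1.]])
g = np.array([[1., 0.],
              [0., 1.],
              [0., 0.]])

gf = g @ f   # operator on X (3x3)
fg = f @ g   # operator on Y (2x2)

for n in (1, 2, 3):
    print(ker_dim(gf, 2.0, n), ker_dim(fg, 2.0, n))  # agree: lambda = 2 != 0
print(ker_dim(gf, 0.0, 1), ker_dim(fg, 0.0, 1))      # differ: lambda = 0
```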

I’m not sure what the story is in the non-algebraically-closed case. And I don’t know what happens when we treat everything as \mathsf{Ab}-enriched when taking this coend.

Posted by: Tim Campion on September 18, 2015 12:58 AM | Permalink | Reply to this

Re: Where Does The Spectrum Come From?

Back in 1969/70 Anders Kock had a student called Maan Justersen. Her speciale (master’s thesis) was on the categorical notion of trace. I do not know whether the Matematisk Institut in Aarhus still keeps copies of it.

Posted by: Gavin Wraith on September 17, 2015 10:32 PM | Permalink | Reply to this

Re: Where Does The Spectrum Come From?

This comment is in response to Tom’s conjecture on the form of automorphisms of \mathbf{Endo}(\mathbf{FDVect}).

If I understand it correctly, we are trying to understand automorphisms F: \mathbf{Endo}(\mathbf{FDVect}) \to \mathbf{Endo}(\mathbf{FDVect}) as a category over \mathbf{Vect}, i.e., if U: \mathbf{Endo}(\mathbf{FDVect}) \to \mathbf{Vect} is the forgetful functor, then we are interested in describing automorphisms F such that U \circ F = U.

At first I thought Tom’s conjecture would follow by applying some form of Tannaka reconstruction. We can think of \mathbf{Endo}(\mathbf{FDVect}) as the \mathbf{Vect}-enriched functor category \mathbf{Vect}_{fd}^{k[x]}, where the polynomial algebra k[x] is regarded as a one-object \mathbf{Vect}-enriched category, and hope that endomorphisms F: \mathbf{Vect}_{fd}^{k[x]} \to \mathbf{Vect}_{fd}^{k[x]} over U would correspond to enriched functors = algebra maps k[x] \to k[x]. The idea here is that endomorphisms F, or at least automorphisms F on the functor category, would be obtained by pulling back along an algebra endomorphism or algebra automorphism of k[x]. And note that algebra automorphisms are given by maps x \mapsto a x + b where a \in k^\times, as Tom was suggesting.

Something like this might be the case if we weren’t just dealing with finite-dimensional modules over k[x]. An automorphism of \mathbf{Vect}^{k[x]} over the underlying functor U would in particular be part of an adjoint equivalence, and so we’d be getting into a Morita context where I think the Cauchy completion of k[x] is a tractable object (it might not be so tractable if it were polynomials in several variables), and there I think something like Tom’s conjecture would likely be true. But in the finite-dimensional context, I think it’s the spectrum itself, or rather the spectral decomposition of a pair (V, T: V \to V), that suggests more possibilities.

To each (V, T) and each \lambda \in k, there is a corresponding generalized eigenspace of T attached to \lambda (which is obviously zero for all but finitely many \lambda), which I will denote as V_\lambda. Thus we have a direct sum or product decomposition V \cong \prod_{\lambda \in k} V_\lambda. It is important to note for our purposes that this is a canonical decomposition: there is a canonical idempotent operator \pi_\lambda: V \to V whose image is the subspace V_\lambda \hookrightarrow V. Perhaps that is an obvious point, but in case not, here’s the way I look at it. If p(x) is the characteristic polynomial of T: V \to V, then the module structure k[x] \to \hom(V, V), i.e., the algebra map sending x to T, factors through the quotient k[x] \to k[x]/(p) by the Cayley–Hamilton theorem. Working over an algebraically closed field k (and let’s go ahead and assume char(k) \neq 0, in view of other comments in this discussion), the polynomial p splits completely, say as p(x) = x^r \prod_i (x - \lambda_i)^{e_i}. Then by the Chinese remainder theorem we have a canonical product decomposition

k[x]/(p) \cong k[x]/(x^r) \times \prod_i k[x]/((x - \lambda_i)^{e_i})

induced by uniquely determined primitive idempotents in the ring k[x]/(p). If e_\lambda: k[x]/(p) \to k[x]/(p) denotes multiplication by a typical such idempotent, then pushing out e_\lambda \otimes 1_V: k[x]/(p) \otimes_k V \to k[x]/(p) \otimes_k V along the structure map k[x]/(p) \otimes_k V \to V produces an idempotent operator \pi_\lambda: V \to V whose image is the eigenspace attached to \lambda.

That out of the way, we have for each (V, T) a canonical decomposition V \cong \prod_{\lambda \in k} V_\lambda, with 1_V = \sum_\lambda \pi_\lambda. Now suppose \rho: k \to k^\times is any function taking values \rho(\lambda) that are invertible in k. Define an automorphism

F_\rho: \mathbf{Vect}_{fd}^{k[x]} \to \mathbf{Vect}_{fd}^{k[x]}

by taking (V, T) to (V, \sum_\lambda \rho(\lambda) T \pi_\lambda). I think it’s pretty clear that if f: V \to W is a map (V, T) \to (W, S), i.e., if S \circ f = f \circ T, then also \pi_{\lambda, W} \circ f = f \circ \pi_{\lambda, V}, since (T - \lambda_i)^{e_i}(v) = 0 implies (S - \lambda_i)^{e_i}(f v) = 0. And so each f: (V, T) \to (W, S) gives a map f: (V, F_\rho(T)) \to (W, F_\rho(S)). The inverse of F_\rho is, naturally enough, F_{\rho^{-1}} where \rho^{-1}(\lambda) \coloneqq \rho(\lambda)^{-1}.

These F_\rho are generally not of the form T \mapsto a T + b.

I should mention how I came to all this, because it wasn’t immediately apparent to me. As I said, I thought at first maybe Tom’s conjecture would be true by some sort of Tannaka reconstruction argument. At some point I had to remind myself that Tannaka reconstruction ought really to be considered in terms of finite-dimensional comodules over coalgebras (or, passing to a richer monoidal setting, comodules over bialgebras such as group bialgebras). Going with this, at some point it occurred to me that a finite-dimensional vector space V equipped with an endomorphism T: V \to V could be considered a module over the free k-algebra on a 1-dimensional vector space (also called the tensor algebra; we have Tens(k) \coloneqq \sum_{n \geq 0} k^{\otimes n} \cong k[x]) – or it could also be considered as a comodule over the cofree k-coalgebra over a 1-dimensional space, Cof(k). That is to say, we could consider it in terms of an algebra map k[x] \to \hom(V, V), or we could consider it dually in terms of a coalgebra map V^\ast \otimes_k V \to Cof(k), the unique coalgebra map induced by the linear map given by the composite

V^\ast \otimes_k V \stackrel{1 \otimes T}{\to} V^\ast \otimes_k V \stackrel{eval}{\to} k.

(Note that we use finite-dimensionality of V in order to define a coalgebra structure on V^\ast \otimes_k V!) This coalgebra map transforms to a comodule structure V \to V \otimes_k Cof(k).

One virtue of the comodule picture is that the canonical spectral decompositions already reside within Cof(k). What turns out to be true is that there is a coalgebra decomposition

Cof(k) \cong \sum_{\lambda \in k} k[x]

where each summand k[x] has the usual bialgebra comultiplication given by \delta(x) = x \otimes 1 + 1 \otimes x, or \delta(x^n) = \sum_{i + j = n} x^i \otimes x^j. More explicitly, Cof(k) is the union of coalgebra duals of finite-dimensional algebra quotients q: k[x] \to k[x]/(p), seen in terms of linear embeddings q^\ast: (k[x]/(p))^\ast \to k[x]^\ast \cong k[[x]], and Cof(k) is in fact the space of rational functions in the localization k[x]_{(x)} which via their Maclaurin expansions form a subspace of the space of power series k[[x]]. This involves partial fraction decompositions of rational functions and Maclaurin expansions of \frac{1}{(x - \lambda_i)^{e_i}}. I’m actually rushing through a story told in more detail here, which should make the spectral decomposition story more apparent.

Anyway, the comodule picture might be worth keeping in mind in this discussion.

(I almost forgot to mention that Cof(k)Cof(k) can also be regarded as a predual of the profinite completion of k[x]k[x], although that might have been obvious already.)

Posted by: Todd Trimble on September 18, 2015 3:45 PM | Permalink | Reply to this

Re: Where Does The Spectrum Come From?

Thanks for this excellent food for thought. This is (the end of) one of the busiest weeks of the academic year for me, so I’m struggling to keep up with what everyone’s written; in particular, I’ve only had time to skim your comment. So please forgive me if the following questions betray some superficial misunderstanding.

  1. You emphasize that the decomposition V \cong \prod_{\lambda \in k} V_\lambda is canonical. By “canonical”, do you mean to say that it, and the operators \pi_\lambda, are independent of T? (It doesn’t seem so to me, but this is related to question 2.)

  2. If I understand your comment correctly, we get for each function \rho: k \to k^\times an automorphism of \mathbf{Endo}(\mathbf{FDVect}) over \mathbf{FDVect}. Suppose we define \rho(\lambda) = \begin{cases} 2 & \text{if}\ \lambda = 1,\\ 1 & \text{otherwise}. \end{cases} This gives rise to an automorphism (V, T) \mapsto (V, \sum_\lambda \rho(\lambda) T \pi_\lambda). What is the effect of this automorphism on the diagonal matrices T = diag(1, 2) and T = diag(2, 2), respectively?

Off to catch a train now and think about this more, well, thoughtfully; again, sorry about the rushed response to your carefully thought-out comment.

Posted by: Tom Leinster on September 18, 2015 6:52 PM | Permalink | Reply to this

Re: Where Does The Spectrum Come From?

(1) No, it should depend on T. When I wrote that, I had in mind the fact that subspace inclusions (in this case of generalized eigenspaces) generally admit many retractions; in this case I wanted to say that the retraction or projection onto the eigenspace is not something arbitrary, but is determined from the structure of T.

(2) Well, it does look like the effect is to produce diag(2, 2) in both cases, which is to say: something looks wrong. Offhand I would guess that I either mistranslated or misinterpreted something from the comodule picture, or overlooked something else from what I thought it was telling me.

It’s probably up to me to figure out the root of my mistake. But my thought process behind it was that each coalgebra automorphism of Cof(k) would induce an automorphism functor on Comod_{fd}(Cof(k)); I had convinced myself that this comodule category is your \mathbf{Endo}(\mathbf{FDVect}) in different language. Coalgebra endomorphisms Cof(k) \to Cof(k) are quite plentiful; they correspond to linear functions Cof(k) \to k, which I still think correspond to the elements in the profinite completion of k[x] (here when I say “profinite completion”, I mean the inverse limit with respect to the system of finite-dimensional quotients of k[x] and quotient maps between them). I also thought the coalgebra automorphisms would be plentiful, but I may have misidentified them or something.

I’m hoping it won’t take me too long to find time to sort through this (and sorry for the noise, and thanks for your thoughts).

Posted by: Todd Trimble on September 18, 2015 11:16 PM | Permalink | Reply to this

Re: Where Does The Spectrum Come From?

I think the statement about F_{\rho^{-1}} being the inverse of F_\rho is incorrect; at least, I don’t see why it’s true. At first I thought it was true, but then I realized that my calculation assumed that \pi_\lambda was independent of the operator involved. That was the connection between my two questions.

Posted by: Tom Leinster on September 19, 2015 4:44 PM | Permalink | Reply to this

Re: Where Does The Spectrum Come From?

Long comment coming up; here’s the summary.

  1. Although there may have been some things wrong in Todd’s comment, he’s substantially right: there are indeed non-obvious automorphisms of the category of linear operators on finite-dimensional vector spaces.

  2. However, this doesn’t appear to affect the substance of my post. Although the automorphism group of this category is larger than I thought, the newly-discovered elements don’t falsify anything I wrote about the (invertible) spectrum.

  3. If we work with all vector spaces rather than just the finite-dimensional ones, then (as we both expected) the category of linear operators has no automorphisms or endomorphisms other than the obvious ones.

In what follows, everything is over an algebraically closed field k. There’s no need to make any assumption about its characteristic.

We’re going to be looking for ways (especially reversible ways) of turning a linear operator on a finite-dimensional vector space into another operator on the same space. Formally, this means considering the category \mathbf{Endo}(\mathbf{FDVect}) whose objects are operators on finite-dimensional vector spaces and whose maps are linear maps making the evident square commute. Then we consider endomorphisms (especially automorphisms) of \mathbf{Endo}(\mathbf{FDVect}) over \mathbf{FDVect}, that is, commuting with the forgetful functor to \mathbf{FDVect}.

(1)   I’ll begin by restating some of what Todd wrote. For an operator T on a vector space X (always finite-dimensional for now), the eventual kernel is the subspace

ker^\infty(T) = \bigcup_{n \geq 0} ker^n(T).

It’s a theorem that

X = \bigoplus_{\lambda \in k} ker^\infty(T - \lambda)

where, of course, ker^\infty(T - \lambda) is trivial for all but finitely many values of \lambda (the eigenvalues). Moreover, T restricts to an operator T_\lambda on ker^\infty(T - \lambda), for each \lambda, so

T = \bigoplus_{\lambda \in k} T_\lambda.

The only eigenvalue of T_\lambda is \lambda; equivalently, T_\lambda - \lambda is nilpotent. All these facts are closely related to Jordan normal form. Another fact that may or may not be relevant here: the projections associated with this direct sum decomposition of X are all polynomials in T.
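That last fact can be seen concretely (a worked sketch of my own): for T with characteristic polynomial (x - 2)^2 (x - 3), solving p \equiv 1 mod (x - 2)^2 and p \equiv 0 mod (x - 3) by the Chinese remainder theorem gives p(x) = -x^2 + 4x - 3, and p(T) is exactly the projection onto the generalized 2-eigenspace.

```python
import numpy as np

# T has Jordan blocks [[2, 1], [0, 2]] and [3]; char poly (x - 2)^2 (x - 3).
T = np.array([[2., 1., 0.],
              [0., 2., 0.],
              [0., 0., 3.]])
I = np.eye(3)

# CRT: p = 1 mod (x-2)^2 and p = 0 mod (x-3) gives p(x) = -(x-1)(x-3).
P = -T @ T + 4 * T - 3 * I

print(np.allclose(P, np.diag([1., 1., 0.])))   # projects onto the 2-eigenspace
print(np.allclose(P @ P, P))                   # idempotent
print(np.allclose(P @ T, T @ P))               # commutes with T, being p(T)
```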

This decomposition is functorial. That is, if f: (X, T) \to (Y, S) is a map in \mathbf{Endo}(\mathbf{FDVect}), then f restricts to a map

f: ker^\infty(T - \lambda) \to ker^\infty(S - \lambda)

for each \lambda \in k, and therefore defines a map

f: (ker^\infty(T - \lambda), T_\lambda) \to (ker^\infty(S - \lambda), S_\lambda)

for each \lambda \in k.

Now take any map of sets k \to k[t], which I’ll write as \lambda \mapsto p_\lambda. Then we obtain an endomorphism F of \mathbf{Endo}(\mathbf{FDVect}), defined by

(X, T) \mapsto \Bigl(X, \bigoplus_{\lambda \in k} p_\lambda(T_\lambda)\Bigr).

We have to check functoriality. In other words, we have to check that if f: (X, T) \to (Y, S) is a map in \mathbf{Endo}(\mathbf{FDVect}) then

f \circ \bigoplus_{\lambda \in k} p_\lambda(T_\lambda) = \bigoplus_{\lambda \in k} p_\lambda(S_\lambda) \circ f.

And that’s true by the previous paragraph.

Usually, such an F won’t be an automorphism of \mathbf{Endo}(\mathbf{FDVect}). But it sometimes is. Let’s start with a bijection \sigma: k \to k that fixes 0. Then, for \lambda \in k, define p_\lambda to be the polynomial (\sigma(\lambda)/\lambda)\, t (where the coefficient \sigma(\lambda)/\lambda is to be interpreted as 1 when \lambda = 0). The resulting endomorphism of \mathbf{Endo}(\mathbf{FDVect}), which I’ll call G_\sigma, is

G_\sigma: (X, T) \mapsto \Bigl( X, \bigoplus_{\lambda \in k} \frac{\sigma(\lambda)}{\lambda} T_\lambda \Bigr).

The only eigenvalue of T_\lambda is \lambda, so the only eigenvalue of \frac{\sigma(\lambda)}{\lambda} T_\lambda is \sigma(\lambda). Thus, G_\sigma has the effect of permuting the generalized/eventual eigenspaces ker^\infty(T - \lambda) according to \sigma. In particular, G_\sigma is invertible, with inverse G_{\sigma^{-1}}.

For example, let \sigma be the permutation of \mathbb{C} that interchanges 1 and 2 but fixes everything else. Then G_\sigma has the following effect:

\begin{pmatrix} 1 & & & & \\ & 2 & 1 & & \\ & & 2 & & \\ & & & 3 & \\ & & & & 4 \end{pmatrix} \qquad \mapsto \qquad \begin{pmatrix} 2 & & & & \\ & 1 & 1/2 & & \\ & & 1 & & \\ & & & 3 & \\ & & & & 4 \end{pmatrix}

where all unlabelled entries are zero, and both sides are operators on \mathbb{C}^5. This automorphism is not one of the “obvious” ones, i.e. not of the form T \mapsto a T + b for scalars a \neq 0 and b.
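A numeric check of this example (my own sketch): build G_\sigma(T) blockwise from the formula above and confirm that the spectrum is permuted by \sigma, i.e. the multiplicity of \lambda in Spec(G_\sigma(T)) equals the multiplicity of \sigma^{-1}(\lambda) in Spec(T).

```python
import numpy as np

# T from the example: Jordan blocks [1], [[2, 1], [0, 2]], [3], [4].
T = np.diag([1., 2., 2., 3., 4.])
T[1, 2] = 1.0

sigma = {1.0: 2.0, 2.0: 1.0, 3.0: 3.0, 4.0: 4.0}  # swap 1 and 2, fix the rest

# G_sigma(T): scale the block for each eigenvalue lam by sigma(lam)/lam.
blocks = [(0, 1, 1.0), (1, 3, 2.0), (3, 4, 3.0), (4, 5, 4.0)]  # (start, stop, lam)
S = np.zeros_like(T)
for i, j, lam in blocks:
    S[i:j, i:j] = (sigma[lam] / lam) * T[i:j, i:j]

print(S[1, 2])  # 0.5, the 1/2 entry of the right-hand matrix above
spec_T = np.linalg.eigvals(T).real
spec_S = np.linalg.eigvals(S).real
mult = lambda spec, lam: int(np.sum(np.isclose(spec, lam)))
print(all(mult(spec_S, sigma[lam]) == mult(spec_T, lam) for lam in sigma))
```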

I don’t know whether there are endomorphisms or automorphisms of \mathbf{Endo}(\mathbf{FDVect}) other than those just described.

(2)   In my original post, I used the “fact” that if we have an automorphism F of the category \mathbf{Endo}(\mathbf{FDVect}) over \mathbf{FDVect}, and a vector space X, then Spec'(T) determines Spec'(F(T)) for operators T on X. Is this true?

When I wrote it, I believed that F had to be of the form T \mapsto a T + b for some scalars a \neq 0 and b. It’s certainly true then. (As I said in my post, when the space X is known, knowing Spec'(T) is equivalent to knowing Spec(T); and Spec(a T + b) = a Spec(T) + b.)

But it’s also true for the automorphisms G_\sigma defined in (1). Indeed, the multiplicity of a scalar \lambda in Spec(G_\sigma(T)) is the multiplicity of \sigma^{-1}(\lambda) in Spec(T).

(3)   Now dropping the assumption of finite-dimensionality, there are no more endomorphisms of \mathbf{Endo}(\mathbf{Vect}) over \mathbf{Vect} than you think there are. That is, they’re all of the form T \mapsto p(T) for some polynomial p. This is true over any field k whatsoever, not necessarily algebraically closed.

To see this, let F be an endomorphism of \mathbf{Endo}(\mathbf{Vect}) over \mathbf{Vect}. We have the vector space k[t] of polynomials over k and the operator t \cdot - on it, hence also the operator F(t \cdot -) on it. Let

p = (F(t \cdot -))(1) \in k[t].

I claim that F(T) = p(T) for all operators T. Indeed, let T be an operator on a space X, and let x \in X. There’s a linear map f: k[t] \to X defined by f(q) = q(T)(x), for polynomials q. This defines a map

f: (k[t], t \cdot -) \to (X, T)

in the category \mathbf{Endo}(\mathbf{Vect}) of operators. Hence we get a map

f: (k[t], F(t \cdot -)) \to (X, F(T))

in the same category. It follows that f \circ F(t \cdot -) = F(T) \circ f. Evaluating both sides at 1 \in k[t] gives p(T)(x) = F(T)(x), as required.

It’s clear, I guess, that T \mapsto p(T) is only an automorphism when p(t) is of the form a t + b for some a, b \in k with a \neq 0.

Posted by: Tom Leinster on September 19, 2015 6:14 PM | Permalink | Reply to this

Re: Where Does The Spectrum Come From?

Just a random thought for you. This post got me thinking about something I never did quite understand. I spent a lot of time in school studying “the method of moments” or “boundary element method” (https://en.wikipedia.org/wiki/Boundary_element_method). The reference I used in college was “Field Computation by Moment Methods” by Harrington. In that book, section 1-3, he wrote, “The classical eigenfunction method leads to a diagonal matrix, and can be thought of as a special case of the method of moments”.

That never made sense to me (like most things!)

Might be worth looking at.

Posted by: Rob MacDonald on September 25, 2015 8:16 PM | Permalink | Reply to this

Post a New Comment