## May 18, 2007

### Linear Algebra Done Right

#### Posted by David Corfield

– guest post by Tom Leinster

One of the best things about the n-Category Café is the willingness of its clientele to re-examine notions that are usually thought of as too basic to bother with — things that you might think were beneath your dignity.

Another great thing about the n-Café is the generosity of its proprietors in letting me post here! Thanks to them.

I’m not going to talk about fancy linear algebra: just the good old theory of finite-dimensional vector spaces, like you probably met when you were an undergraduate. Or rather, not quite like what you met…

The inspiration is Sheldon Axler’s textbook Linear Algebra Done Right. Unfortunately you can’t download it, but you can download an earlier article making the same main points, Down with determinants!.

I’ve just given you a dirty great hint as to what’s behind that provocative first title. For Axler, ‘done right’ means ‘done without determinants’. He claims that determinants should be delayed until as late as possible in a linear algebra course - and he shows that ‘as late as possible’ is a lot later than most people think. He doesn’t introduce them himself until Chapter 10 of his 10-chapter book. In fact, he goes as far as to claim that the only part of undergraduate mathematics where determinants are needed is the change of variables formula for multi-variable integrals.

He makes his point in the Introduction:

The audacious title of this book deserves an explanation. Almost all linear algebra books use determinants to prove that every linear map from a finite-dimensional complex vector space to itself has an eigenvalue. Determinants are difficult, nonintuitive, and often defined without motivation. To prove the theorem about existence of eigenvalues on complex vector spaces, most books must define determinants, prove that a linear map is not invertible if and only if its determinant equals 0, and then define the characteristic polynomial. This tortuous (torturous?) path gives students little feeling for why eigenvalues must exist. In contrast, the determinant-free proof presented here […] is simpler and clearer and offers more insight.

I’ll give you the proof below.

I think most people would agree that Axler’s line of development is very nice and represents a good way of teaching. Nevertheless, perhaps rather fewer would join in wholeheartedly with the cry of ‘Down with determinants!’ After all, determinants are polynomials, and once you have a polynomial you can engage the power of algebraic geometry. For example, it’s useful to know that the invertible $n\times n$ matrices are Zariski-dense in the space of all $n\times n$ matrices, which follows from the fact that the invertible matrices are those whose determinant is nonzero. Determinants are also re-emerging in all sorts of fancy settings, such as quasideterminants, hyperdeterminants, and multidimensional determinants.

I’m interested to know what other readers think. What are determinants good for?

This is a long post, so I’ll split the rest into two sections. First I’ll tell you more about what Axler does. Then I’ll tell you why his book could also have been called Dynamics In A Finite-Dimensional Vector Space.

**Axler’s Linear Algebra**

First, here’s the proof that Axler mentioned. Our task is to show that if $V$ is a nontrivial, finite-dimensional vector space, then every endomorphism $T$ of $V$ has an eigenvalue. Choose a nonzero vector $v$. The vectors $v, T(v), T^2(v), T^3(v), \ldots$ can’t all be linearly independent. So there’s a dependence relation, which is of the form $p(T)(v) = 0$ for some nonzero polynomial $p$. Since we’re over the complex numbers, $p$ splits into linear factors, and $(T - \lambda_1) \cdots (T - \lambda_k) (v) = 0.$ Since $v \neq 0$, not all the maps $(T - \lambda_i)$ can be injective: in other words, some $\lambda_i$ is an eigenvalue.
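Here, for what it's worth, is a numerical sketch of that proof in Python with NumPy. The matrix and starting vector are illustrative choices of mine, not anything from Axler:

```python
import numpy as np

# An endomorphism T of C^3 and a starting vector (illustrative choices).
T = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
v = np.array([1.0, 1.0, 1.0])

# The four iterates v, Tv, T^2 v, T^3 v cannot be independent in 3 dimensions.
K = np.column_stack([np.linalg.matrix_power(T, k) @ v for k in range(4)])

# A null vector of K gives coefficients of a nonzero polynomial p with p(T)v = 0.
_, _, Vh = np.linalg.svd(K)
coeffs = Vh[-1]                       # c_0, c_1, c_2, c_3
assert np.allclose(K @ coeffs, 0, atol=1e-8)

# Over C the polynomial splits into linear factors; since v is nonzero,
# some root of p must be an eigenvalue of T.
roots = np.roots(coeffs[::-1])        # np.roots wants the highest degree first
eigs = np.linalg.eigvals(T)
best = min(abs(r - e) for r in roots for e in eigs)
assert best < 1e-6                    # some root really is an eigenvalue
```

For this particular $T$ and $v$ the dependence relation turns out to be $(T-2)^2(T-3)v = 0$, so the recovered roots are exactly the eigenvalues $2, 2, 3$.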

This is a ‘dynamical’ proof: it works by considering the trajectory of $v$ under iterated application of $T$. It’s similar to the proof that every element of a finite group has finite order.

If you like that, you’ll probably like Axler’s whole approach. Here’s some more. Given an endomorphism $S$ of a vector space $V$, its eventual kernel $evKer(S)$ is the union of the subspaces $Ker(S) \subseteq Ker(S^2) \subseteq Ker(S^3) \subseteq \cdots,$ and its eventual image $evIm(S)$ is the intersection of the subspaces $Im(S) \supseteq Im(S^2) \supseteq Im(S^3) \supseteq \cdots.$ You can show that if $V$ is finite-dimensional then $V = evKer(S) \oplus evIm(S).$ You might think this doesn’t say anything about the familiar concerns of linear algebra - but wait a moment.
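One can watch the two chains stabilise numerically. A sketch, with a toy matrix of my own; the fact being used is that both chains have stabilised by the $n$th power, where $n = \dim V$:

```python
import numpy as np

# S has a nilpotent part (the upper-left 2x2 block) and an invertible part.
S = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 5.0]])
n = S.shape[0]

# Both chains Ker(S) ⊆ Ker(S^2) ⊆ ... and Im(S) ⊇ Im(S^2) ⊇ ... have
# stabilised by the n-th power, so S^n already computes evKer and evIm.
Sn = np.linalg.matrix_power(S, n)
rank = np.linalg.matrix_rank(Sn)
dim_evKer = n - rank                  # dim evKer(S) = dim Ker(S^n)
dim_evIm = rank                       # dim evIm(S)  = dim Im(S^n)
assert dim_evKer == 2 and dim_evIm == 1
assert dim_evKer + dim_evIm == n      # V = evKer(S) ⊕ evIm(S), so dims add up
```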

Fix an endomorphism $T$ of $V$, a finite-dimensional complex vector space. For $\lambda \in \mathbb{C}$, write $E_\lambda = evKer(T - \lambda)$, and call its elements eventual $\lambda$-eigenvectors. (The ‘eventual’ terminology is mine, not Axler’s; he calls them ‘generalized eigenvectors’, which I don’t find very evocative.) Using the results above, you can show that $V = \oplus_{\lambda \in \mathbb{C}} E_\lambda.$ In particular, this implies that the dimensions of the $E_\lambda$’s add up to $\dim V$: writing $m(\lambda) = \dim(E_\lambda)$, $\dim V = \sum_{\lambda \in \mathbb{C}} m(\lambda).$ In particular, $m(\lambda)$ is nonzero for only finitely many values of $\lambda$. But $m(\lambda)$ is nonzero if and only if $evKer(T - \lambda)$ is nontrivial, if and only if $\lambda$ is an eigenvalue. So $T$ has only finitely many eigenvalues!
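Numerically, $m(\lambda) = \dim evKer(T - \lambda)$ can be read off as $\dim V - rank((T - \lambda)^{\dim V})$. A sketch with an illustrative matrix of my own (a Jordan-type block for $2$ and a simple eigenvalue $3$):

```python
import numpy as np

T = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
n = T.shape[0]

def m(lam):
    # dim evKer(T - lam): the chain of kernels stabilises by the n-th power
    P = np.linalg.matrix_power(T - lam * np.eye(n), n)
    return n - np.linalg.matrix_rank(P)

assert m(2.0) == 2            # eventual 2-eigenvectors form a plane...
assert np.linalg.matrix_rank(T - 2.0 * np.eye(n)) == n - 1  # ...ordinary ones only a line
assert m(3.0) == 1
assert m(2.0) + m(3.0) == n   # the dimensions of the E_lambda add up to dim V
```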

I’m omitting the proofs, but it’s as easy to find them yourself as to look them up.

In fact, $m(\lambda)$ is the mysterious quantity known as the algebraic multiplicity of $\lambda$. It’s usually defined as the power of $(x - \lambda)$ appearing in the characteristic polynomial $\chi_T(x)$, which doesn’t give much of an idea of what it really is.

Well, it’s not mysterious any more! The algebraic multiplicity of $\lambda$ is simply the dimension of $evKer(T - \lambda)$, the space of eventual $\lambda$-eigenvectors. Clearly it’s at least as big as the dimension of $Ker(T - \lambda)$, the space of ordinary $\lambda$-eigenvectors. This inequality is usually phrased as $\text{algebraic multiplicity} \geq \text{geometric multiplicity}.$

If you already know about Jordan normal form, you can see that the two descriptions of algebraic multiplicity are equivalent. In fact, you come naturally to Jordan normal form if you walk just a little further along the Axlerian path.

Despite Axler’s anti-determinant rhetoric, this approach does in fact give a very nice way into determinants. You simply define $\det T = \prod_{\lambda \in \mathbb{C}} \lambda^{m(\lambda)}.$ (Look, no matrices!) Similarly, the characteristic polynomial is $\chi_T(x) = \prod_{\lambda \in \mathbb{C}} (x - \lambda)^{m(\lambda)},$ and the Cayley-Hamilton Theorem becomes a pushover.

Easy exercise: find a similar formula for trace.
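For concreteness (and at the cost of giving the exercise away), here's a numerical check of both formulas on a toy matrix of my own choosing:

```python
import numpy as np

T = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
n = T.shape[0]

def m(lam):
    # algebraic multiplicity = dim evKer(T - lam) = n - rank((T - lam)^n)
    P = np.linalg.matrix_power(T - lam * np.eye(n), n)
    return n - np.linalg.matrix_rank(P)

lams = [2.0, 3.0]                                 # the distinct eigenvalues of T
det_T = np.prod([lam ** m(lam) for lam in lams])  # product of lambda^{m(lambda)}
tr_T = sum(lam * m(lam) for lam in lams)          # sum of lambda * m(lambda)
assert np.isclose(det_T, np.linalg.det(T))        # both give 12
assert np.isclose(tr_T, np.trace(T))              # both give 7
```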

**Dynamics in a Finite-Dimensional Vector Space**

A dynamical system is a system that changes through time. The central questions of dynamics are things like: what do the trajectories/orbits look like? Are there fixed points? Are there cycles? Do all trajectories ‘settle down’ or can there be chaos? If you start with points close together, do they necessarily exhibit similar long-term behaviour?

I want to talk about dynamical systems in some generality - so much generality that it won’t be clear what all of these questions mean. This will take us back to Linear Algebra Done Right! I’ll stick to discrete-time dynamical systems, where time is modelled on the integers, not the real numbers.

Sloganeering,

A dynamical system is an endomorphism.

Thus, we take a point in whatever space the endomorphism is defined on, repeatedly apply the endomorphism to it, and ask what the trajectory might look like.

For example, I have before me a paper that defines an ‘abstract dynamical system’ as a continuous map from a metric space to itself. (Here at the $n$-Category Café, we hardly notice the word ‘abstract’ tacked onto the front.)

For another example, complex dynamics is about what happens when you iterate a rational function (over $\mathbb{C}$), or more generally, what happens when you iterate a holomorphic self-map of a Riemann surface. This is ‘more generally’ because a rational function is the same thing as a holomorphic self-map of the Riemann sphere $\mathbb{C} \cup \{\infty\}$. Here we meet Julia sets, the Mandelbrot set, etc.

For another example, you might call combinatorial dynamics the study of iterating a function from a finite set to itself. There’s a surprising amount to be said here, and it’s curiously absent from common knowledge. I first noticed this when reading Milnor’s book Dynamics in One Complex Variable, where now and again he needs some chewy little result in combinatorial dynamics. For instance, did you know that for any endomorphism $f$ of a finite set, the set $\{ f, f^2, f^3, \ldots \}$ contains precisely one idempotent? It’s not especially hard to prove - in fact, it’s among the most basic results in combinatorial dynamics. But what’s interesting is that so few people just know it off the top of their heads. Contrast the vast amount we all know about permutations of finite sets.
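That chewy little result is easy to check exhaustively for small sets. A brute-force sketch (my own, not Milnor's):

```python
import itertools

def iterates(f):
    """The set {f, f^2, f^3, ...} for a map f : {0,...,n-1} -> {0,...,n-1},
    encoded as a tuple; there are finitely many maps, so the set is finite."""
    seen, g = set(), tuple(f)
    while g not in seen:
        seen.add(g)
        g = tuple(f[x] for x in g)    # compose with f once more
    return seen

# Every endomorphism f of a 4-element set has exactly one idempotent iterate.
n = 4
violations = 0
for f in itertools.product(range(n), repeat=n):
    idempotents = [g for g in iterates(f)
                   if all(g[g[x]] == g[x] for x in range(n))]
    if len(idempotents) != 1:
        violations += 1
assert violations == 0   # all 4^4 = 256 endomorphisms pass
```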

And for another example, take dynamics of a vector space! This is what Axler is really doing. You can see from the stuff above that we’re concerned with iterated applications of the endomorphism $T$, and eventual behaviour of a vector $v$. That’s the changed point of view that allows him to do without determinants.

For instance, take the proof that every linear endomorphism has an eigenvalue. The first step is to show that every trajectory $v, T(v), T^2(v), T^3(v), \ldots$ eventually ‘repeats itself’, in the weak sense that after a finite number of steps, all the points we obtain are in the linear span of those listed previously. This is a rough analogue of the topological statement ‘every trajectory is attracted to a cycle’.

I’d like to know whether it’s fruitful to take other standard statements in dynamics and formulate linear-algebraic analogues. In other words, can the analogy between dynamical systems and Axlerian linear algebra be pushed further?

Posted at May 18, 2007 6:27 AM UTC


### Re: Linear Algebra Done Right

I’m really torn about Axler. My first course at the University of Maryland was the upper-level linear algebra, and we used Axler. Overall, I’d say that the theme is to resist using a basis, which I wholly support.

The problem then is how to do determinants without a basis, and I really dislike how Axler ends up handling them. The “right” way (in my opinion) is to look at the top degree of the exterior algebra rather than to introduce the characteristic polynomial and then back into determinants, but this is pretty sophisticated even for senior math majors. In fact, Dr. Adams tacked on a section from Lang’s Algebra at the end of the course to cover this approach and promptly lost all but three of us.

Posted by: John Armstrong on May 18, 2007 6:58 AM

### Re: Linear Algebra Done Right

Thanks for making it clear that Axler is not intended for all undergrads who need linear algebra. I heartily approve of introducing the vector product basis-free (right-hand rule), but to calculate without 3 × 3 determinants?

By the way, I am a firm believer in Rainich’s maxim to the effect: The last thing you want to do is write the problem in coordinates - his double entendre was deliberate.

Posted by: jim stasheff on May 20, 2007 1:16 AM

### Re: Linear Algebra Done Right

Does Axler’s approach bring any insight into the topic discussed here, namely Gelfand’s approach to quasideterminants? In other words, when looking for a generalisation of determinants to matrices with entries from noncommutative rings, could thinking of these matrices as endomorphisms, or dynamical systems, help?

Posted by: David Corfield on May 18, 2007 7:28 AM

### Re: Linear Algebra Done Right

Given Axler’s approach to eigenvalues via the factorisation of $P(T)$, perhaps in the noncommutative case you’d need Gelfand et al.’s pseudoroots (section 7).

Can you square Axler’s cry “Down with determinants!” with Gelfand’s claim:

Classical (commutative) determinants play a key role in representation theory. Frobenius developed his theory of group characters by studying factorizations of group determinants … Therefore, one cannot start a noncommutative representation theory without looking at possible definition of noncommutative determinants and traces. (p. 66)?

Posted by: David Corfield on May 18, 2007 10:55 AM

### Re: Linear Algebra Done Right

Haven’t given this a lot of thinking time yet but am I right to believe that while trace and many other linear algebraic concepts have a natural abstract counterpart in terms of compact closed categories, determinants don’t seem to fit in that realm?

Posted by: bob on May 18, 2007 9:31 AM

### Re: Linear Algebra Done Right

If so, might that be something to do with:

det(exp($A$)) = exp(tr($A$))?

How does the matrix exponential sit with compact closed categories?
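For whatever it's worth, the identity is easy to spot-check numerically. The sketch below uses a throwaway truncated Taylor series for the matrix exponential (in practice one would reach for a library routine such as scipy.linalg.expm):

```python
import numpy as np

def expm_taylor(A, terms=40):
    # crude matrix exponential via truncated Taylor series (illustration only)
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        E = E + term
    return E

A = np.array([[0.3, 1.2],
              [-0.5, 0.4]])
lhs = np.linalg.det(expm_taylor(A))
rhs = np.exp(np.trace(A))
assert np.isclose(lhs, rhs)    # det(exp(A)) = exp(tr(A))
```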

Posted by: David Corfield on May 18, 2007 10:28 AM

### Re: Linear Algebra Done Right

In linear logic, i.e. for star-autonomous categories (cf. Seely), for exponentials you typically would like some (cofree) comonoid structure:

* d : A -> A (x) A

* e : A -> I

which allows you to build canonical morphisms such as:

* A -> I + A + A^2 + A^3 + …

What bugs me a bit is that this requires coproducts, so, as Robin Houston showed, in the case that your star-autonomous category is also compact, this will automatically imply biproducts. And then of course you basically are dealing with a matrix calculus over the semiring C(I,I) with I -> I (+) … (+) I as basis vectors for your finitary fragment, i.e. the full subcategory of objects of the form I (+) … (+) I. (Although, maybe, that’s exactly what one desires.)

Posted by: bob on May 18, 2007 11:05 AM

### Re: Linear Algebra Done Right

Bob writes:

Haven’t given this a lot of thinking time yet but am I right to believe that while trace and many other linear algebraic concepts have a natural abstract counterpart in terms of compact closed categories, determinants don’t seem to fit in that realm?

I think you’re right.

As John Armstrong notes, determinants are probably best understood using a subject called ‘exterior algebra’, or ‘Grassmann algebra’.

The idea is that if you have a vector space $V$, you can take the $p$th tensor power $V^{\otimes p}$ and mod out by the action of the symmetric group $S_p$ where a permutation $\sigma$ acts on a tensor $v_1 \otimes \cdots \otimes v_p$ by permuting the factors and throwing in a sign $sign(\sigma)$. The resulting quotient space is called the $p$th exterior power of $V$, or $\Lambda^p V$.

This $\Lambda^p$ operation is a functor, so any linear operator

$f: V \to W$

gives a linear operator

$\Lambda^p f : \Lambda^p V \to \Lambda^p W .$

If $V$ is $n$-dimensional then $\Lambda^n V$ is 1-dimensional. In this case, given

$f : V \to V$

the operator

$\Lambda^n f : \Lambda^n V \to \Lambda^n V$

is just multiplication by a number! And, this number is none other than our friend $det(f)$. The properties

$det(1_V) = 1$

$det(f g) = det(f) det(g)$

come from the fact that $\Lambda^n$ is a functor.
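Concretely, expanding $\Lambda^n f(e_1 \wedge \cdots \wedge e_n)$ in the one-dimensional basis $e_1 \wedge \cdots \wedge e_n$ produces the familiar signed sum over permutations (the Leibniz formula). A sketch of that computation in Python (my own illustration, not from John's comment):

```python
import itertools
import numpy as np

def det_via_top_wedge(A):
    """The scalar by which Λ^n A acts on e_1 ∧ ... ∧ e_n: expanding
    A e_1 ∧ ... ∧ A e_n and collecting signs gives the Leibniz formula."""
    n = A.shape[0]
    total = 0.0
    for perm in itertools.permutations(range(n)):
        # sign of the permutation, via counting inversions
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if perm[i] > perm[j])
        sign = -1.0 if inversions % 2 else 1.0
        prod = 1.0
        for col, row in enumerate(perm):
            prod *= A[row, col]
        total += sign * prod
    return total

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 1.0]])
assert np.isclose(det_via_top_wedge(A), np.linalg.det(A))
# functoriality of Λ^n is exactly multiplicativity of det:
assert np.isclose(det_via_top_wedge(A @ B),
                  det_via_top_wedge(A) * det_via_top_wedge(B))
```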

If one knows some category theory, it’s not hard to figure out a general context in which we can do all this stuff.

If we have a symmetric monoidal abelian $k$-linear category $C$, we can define the exterior powers $\Lambda^p V$ of any object $V$, and any morphism

$f: V \to W$

gives a morphism

$\Lambda^p f : \Lambda^p V \to \Lambda^p W$

Indeed, we have a functor

$\Lambda^p : C \to C$

This is an example of a ‘Schur functor’. We talked about Schur functors a while back here in the café, and we came up with a nice conjecture about the deep category-theoretic meaning of Schur functors. But, we don’t need this conjecture for what we’re doing now.

To get a ‘determinant’ for the endomorphisms

$f: V \to V$

of some object $V \in C$, what we really need is for $V$ to be $n$-dimensional in a certain sense. Namely, we need $End(\Lambda^n V)$ to be a 1-dimensional vector space. Then we can define the determinant by

$\Lambda^n f = det(f) 1_{\Lambda^n V},$

and a bunch of the usual properties of determinant will hold.

You may wonder how the above definition of ‘$n$-dimensional’ objects is related to the definition in terms of a trace:

$\dim(V) = tr(1_V).$

This was first studied by Doplicher and Roberts in their famous paper on reconstructing a group from its category of representations.

Posted by: John Baez on May 18, 2007 6:20 PM

### Re: Linear Algebra Done Right

I was never taught exterior algebra, nor have I ever befriended it in private. Consequently, it makes me feel a bit insecure. But I’ve just worked out what’s going on here.

The exterior algebra approach is a nice way of repackaging something that’s more likely to be familiar from undergraduate linear algebra: namely, the axiomatic characterization of determinant.

I claim that the following two statements are equivalent:

1. if $V$ is an $n$-dimensional vector space then $\Lambda^n V$ is one-dimensional
2. there is a unique multilinear map $d: k^n \times \cdots \times k^n \to k$ ($n$ copies of $k^n$) that is alternating and satisfies $d(e_1, \ldots, e_n) = 1$, where $(e_i)$ is the standard basis of $k^n$.

Since both statements are true, of course they’re equivalent! But you know what I mean. And of course, ‘$d$’ is the determinant.

Statement 1 is equivalent to the dual space $(\Lambda^n V)^*$ being one-dimensional. But a linear functional on $\Lambda^n V$ is an alternating multilinear map $V \times \cdots \times V \to k$ ($n$ copies of $V$), so (1) asserts that there is only one of these, up to scalar multiplication. And that’s statement (2).

Posted by: Tom Leinster on May 22, 2007 2:51 PM

### Re: Linear Algebra Done Right

You’re right. Actually it was statement (2) that Dr. Adams pulled in from Lang’s book to supplement Axler’s treatment (and which thoroughly confused all but a handful of the students).

Posted by: John Armstrong on May 22, 2007 5:05 PM

### Re: Linear Algebra Done Right

Could anyone give me references on abstract treatments of exterior algebra? E.g. are *abelian* and *k-linear* truly essential? Someone must have figured out a way around that. No?

Posted by: bob on June 7, 2007 10:54 AM

### Re: Linear Algebra Done Right

Whilst I agree that determinants are often not motivated, I find it hard to believe that they are non-intuitive. Isn’t the determinant a measure of the increase in volume effected by a transformation? Whilst I realise that making that into a precise statement is somewhat difficult, surely that is the intuition.

I don’t see how the definition of the determinant in terms of algebraic multiplicities of eigenvalues is giving much in the way of intuition. Maybe that’s just me.

However, the intuition behind the trace is something that I have a problem with – and no, being the infinitesimal version of the determinant is not something I find useful for an arbitrary linear map.

Posted by: Simon Willerton on May 18, 2007 10:59 AM

### Re: Linear Algebra Done Right

I recall that when I was first taught linear algebra I did think: “Given enough time, I can imagine eventually having come up with much of this stuff myself. But no matter what, never ever would I have found the formula (in terms of minors, the way I was taught) for the determinant, nor even why it is useful!”

It was only when I learned about differential forms that, as a student, I fully appreciated the naturalness of the determinant.

So I fully agree with John Armstrong and Simon Willerton here: the determinant is about volume factors, made precise by the transformation property of top degree exterior powers.

One of the very first hints that superizing can lead to crucial simplifications…

Posted by: urs on May 18, 2007 11:12 AM

### Re: Linear Algebra Done Right

Simon wrote:

However, the intuition behind the trace is something that I have a problem with

Here’s the way I like to think about trace. I don’t know whether you’d call it “intuition”.

We have an isomorphism $Hom(V, W) \cong V^* \otimes W$, for finite-dimensional vector spaces $V$ and $W$ over a field $k$. We also have the evaluation map $V^* \otimes V \to k$. Thus, we have a canonical map $Hom(V, V) \to k$. And that’s the trace!
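In coordinates, the isomorphism sends $A$ to $\sum_i e_i^* \otimes A e_i$, and the evaluation map then pairs each $e_i^*$ with $A e_i$, picking out the diagonal. A two-line check (my own illustration):

```python
import numpy as np

A = np.array([[1.0, 5.0],
              [2.0, -3.0]])
n = A.shape[0]
e = np.eye(n)   # rows are the standard basis vectors e_i (and the e_i^*)

# Apply evaluation to sum_i e_i^* (x) A e_i: pair e_i^* with A e_i.
canonical = sum(e[i] @ (A @ e[i]) for i in range(n))
assert np.isclose(canonical, np.trace(A))   # the canonical map is the trace
```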

(If you got as far as the ‘easy exercise’ above, you probably realize that the trace is also the sum of the eigenvalues, counting by algebraic multiplicity. Incidentally, I’d prefer to call it dynamic multiplicity.)

When Axler finally introduces determinants, he does in fact stick closely to the idea of determinant as volume scale factor. This intuition very strongly suggests the formula $\det(S T) = \det(S) \det(T)$, which as far as I can see is difficult to prove if you define determinant as the product of the eigenvalues.

Posted by: Tom Leinster on May 18, 2007 11:48 AM

### Re: Linear Algebra Done Right

In the graphical language for monoidal categories, due to Joyal and Street, the (partial) trace can be interpreted as a feedback loop. This view is very intuitive and extremely useful in computer science, for example to describe interaction between systems (e.g. Abramsky, Retracing some paths in process algebra, Concurrency Theory ’96).

Also, in basic quantum mechanics, it is the trace which matters, determinants are nowhere; all relevant inverses are adjoints by unitarity.

Posted by: bob on May 18, 2007 12:44 PM

### Re: Linear Algebra Done Right

Also, in basic quantum mechanics, it is the trace which matters, determinants are nowhere;

That’s not quite true (though it is right that in most QM textbooks that don’t treat field theory there may not appear many applications of determinants):

The path integral, in as far as it is Gaussian, computes the determinant of the kinetic energy operator (which needs lots of regularization in real-world setups, of course).

Especially the determinants of Dirac operators appearing this way play a crucial role in quantum field theory.

Posted by: urs on May 18, 2007 2:23 PM

### Re: Linear Algebra Done Right

Sure. By *basic* I was referring to von Neumann axiomatics, and indeed not field theory.

Posted by: bob on May 18, 2007 2:58 PM

### Re: Linear Algebra Done Right

I’m quite happy with the graphical calculus of traces in a monoidal category (and will soon be foisting it upon some unsuspecting algebraic geometers); however, that doesn’t seem to be telling me much about linear maps.

When I’m teaching undergraduates about linear algebra, I can draw a figure in the plane, apply a linear transformation and then point to the determinant of the transformation: “See how the size of the figure has changed.” How does one point to the trace of a linear map?

I guess when I say “intuition” I mean “geometric intuition”.

Posted by: Simon Willerton on May 20, 2007 10:07 PM

### Re: Linear Algebra Done Right

In the graphical language for monoidal categories, due to Joyal and Street, the (partial) trace can be interpreted as a feedback loop.

Just wanted to note that this is also evident in Penrose’s notation.

Posted by: porges on July 4, 2009 4:55 AM

### Re: Linear Algebra Done Right

“Isn’t the determinant a measure of the increase in volume effected by a transformation?”

If you find this intuitive, then how about saying that $Tr(A)$ is

$\frac{d}{d t}\Big|_{t=0} \mathrm{Vol}\big((1+t A)\cdot \text{unit box}\big)?$

I can visualize this by imagining a unit cube, with one vertex at $0$, and nudging the vector in the $e_i$ direction by a small multiple of $A e_i$ (the $i$th column of $A$).
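That derivative is easy to test numerically: the volume of the image of the unit box under $1+tA$ is $\det(I + tA)$, and a central difference at $t = 0$ recovers the trace. A quick sketch with a matrix of my own choosing:

```python
import numpy as np

A = np.array([[0.7, -1.3],
              [2.0, 0.5]])
t = 1e-6

# Volume of the image of the unit box under 1 + tA is det(I + tA).
vol = lambda s: np.linalg.det(np.eye(2) + s * A)

# Central difference approximates d/dt vol at t = 0.
deriv = (vol(t) - vol(-t)) / (2 * t)
assert abs(deriv - np.trace(A)) < 1e-5   # the derivative is Tr(A) = 1.2
```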

Posted by: David Speyer on May 18, 2007 8:15 PM

### Re: Linear Algebra Done Right

To see that the vector space multiplicative trace can be interpreted in terms of a feedback loop, consider the identity and the additive swap on K(+)K. In the case of the identity, the trace connects the input with the output, so it sends every *traveling particle* to itself and produces a maximal inner product. In the case of the swap, every particle is sent to its orthogonal complement, which yields zero as inner product. The sum you calculate when evaluating the trace in the usual way is the net effect of combining a number of such inner products. In this view the full trace is only a special case of the partial trace; and the inner product is an instance of the trace, namely Tr(f^dagger o g) where f, g : K -> K(+)K.

Posted by: bob on May 22, 2007 9:47 AM

### Re: Linear Algebra Done Right

I just cannot agree with Axler when he says “Determinants are difficult, nonintuitive, and often defined without motivation”: there is a perfectly intuitive definition of the determinant of a family $(v_1,\dots,v_n)$ of $n$ vectors in a real $n$-dimensional vector space $V$ as the (obviously $n$-multilinear) map which returns the volume of the parallelepiped spanned by those vectors. This does require a choice of basis, but since the determinant of a linear map $T:V\rightarrow V$ is then the ratio $\det(T):=\det(Tv_1,\dots,Tv_n)/\det(v_1,\dots,v_n)$, we clearly have an intrinsic notion here (namely a number which measures how $T$ modifies such volumes, whatever basis one may choose in practice).

Now as a fan of dynamical systems, the question of extending Axler’s approach to more linear algebraic notions, and David’s question on quasideterminants are certainly very interesting!

Posted by: thomas1111 on May 18, 2007 11:06 AM

### Re: Linear Algebra Done Right

I tend to agree that Axler goes too far in arguing against determinants. As I see it, there are several different things going on, and it’s not helpful to simplify it into a ‘pro’ vs. ‘anti’ debate:

1. The simple fact that it is possible to do all the standard, basic linear algebra without determinants. Axler deserves credit for pointing this out so vividly.
2. How should we teach linear algebra? Here I think I’m with Axler: his overall development is cleaner and simpler than a determinant-based one.
3. Do we think determinants are ‘good’? Obviously, it’s an ill-posed question. There do seem to be good things we can do with determinants, and the interpretation of determinant as volume gives an excellent intuition. At the end of Down with determinants!, Axler says that there are many proofs outside linear algebra that use determinants and that could be improved by finding a determinant-free alternative. I’m interested to try this idea out - and challenged by the fact that the paper I’m writing at the moment is full of the things!
4. Dynamics of a vector space. I’m eager to run with this idea too.

So the way I see it, you can be enthusiastic about Axler’s approach without being anti-determinant.

Posted by: Tom Leinster on May 18, 2007 12:01 PM

### Re: Linear Algebra Done Right

I’d like to know whether it’s fruitful to take other standard statements in dynamics and formulate linear-algebraic analogues. In other words, can the analogy between dynamical systems and Axlerian linear algebra be pushed further?

In “real” dynamical systems, namely in physics, there is a big deal about “conserved quantities”, related to unitarity of certain endomorphisms.

While trivial as a technical statement, it is important and does mesh well with your philosophy here, it seems.

For instance, we would demand that for any “fundamental” evolution endomorphism $T$ the discrete-time trajectory $v, T(v), T^2(v), \ldots$ must never contain any multiples of $v$ that have different norm than $v$.

Posted by: urs on May 18, 2007 11:40 AM

### Re: Linear Algebra Done Right

That’s interesting. Is the condition you gave in the last paragraph (that the trajectory must never contain any multiples of $v$ that have different norm than $v$) equivalent to the ‘unitarity of certain endomorphisms’, or is it just a consequence? In other words, is there a stronger condition that you really want the trajectory to satisfy? On its own, the condition seems a little bizarre.

Posted by: Tom Leinster on May 18, 2007 12:08 PM

### Re: Linear Algebra Done Right

Oh, I just meant to express “unitary” in a way more closely exhibiting the “dynamical” aspect which you stressed. It’s just supposed to say that $|T(v)|^2 = |v|^2$.

Posted by: urs on May 18, 2007 12:31 PM

### Re: Linear Algebra Done Right

My immediate reaction to the discussion here is - well what about the INTERESTING cases?

By which I mean: what happens over fields that aren’t algebraically closed? It’s all fine and well to go over the Jordan normal form, characteristic polynomials and suchlike, but what about when you cannot necessarily rely on having the characteristic polynomial factor into linear factors?

I’m not saying that this is an argument for the “Old State”, rather I am truly curious if a modernized approach deals with these cases too, and if so, then how.

Posted by: Mikael Johansson on May 18, 2007 1:47 PM

### Re: Linear Algebra Done Right

For vector spaces over fields that aren’t algebraically closed, doesn’t it work in a similar way?

We pick our vector $v \in V$, and we form the iterates $v, Tv, T^2v, \ldots$. At some point we get a linear dependency,

(1)$a_n T^n v + \ldots + a_1 Tv + a_0 v = 0.$

We think of the expression above as a polynomial

(2)$a_n X^n + \ldots + a_1 X + a_0$

which we factor into its irreducible components

(3)$(b_{1,p} X^p + \cdots +b_{1,1} X + b_{1,0})\cdots (b_{m,q}X^q + \cdots + b_{m, 1} X + b_{m, 0}) = a_n X^n + \ldots + a_1 X + a_0$

where the $m$ expressions in brackets are irreducible. We now need to translate this back into the vector space picture…which we do according to the algorithm below. In other words, we have the equation

(4)$(b_{1,p} T^p + \cdots +b_{1,1} T + b_{1,0})\cdots (b_{m,q}T^q + \cdots + b_{m, 1} T + b_{m, 0})v = 0$

and we work from right to left. Though… I’m a bit confused, since it appears to depend upon the order in which we wrote down the factors: mmm.

Anyhow, it seems - at least to someone like me with a poor understanding of linear algebra over fields that aren’t algebraically closed - that this is close to the kind of thing you would expect to get out.
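The simplest illustration of the obstruction is a quarter-turn rotation of $\mathbb{Q}^2$ (or $\mathbb{R}^2$): the dependence relation produces the polynomial $X^2 + 1$, which is irreducible over $\mathbb{Q}$, so the procedure stops at an irreducible quadratic factor rather than at an eigenvalue. A tiny check (my own example):

```python
import numpy as np

# Rotation of the plane by a quarter turn: no eigenvalues over Q (or R).
T = np.array([[0, -1],
              [1, 0]])
v = np.array([1, 0])

# The iterates v, Tv, T^2 v satisfy T^2 v + v = 0, i.e. p(X) = X^2 + 1,
# which does not split into linear factors over Q.
assert np.array_equal(T @ (T @ v) + v, np.zeros(2, dtype=int))

# Indeed X^2 + 1 annihilates the whole space: T^2 + I = 0.
assert np.array_equal(np.linalg.matrix_power(T, 2) + np.eye(2, dtype=int),
                      np.zeros((2, 2), dtype=int))
```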

Posted by: Bruce Bartlett on May 18, 2007 3:53 PM

### Axler’s approach also gives eigenvectors

I find Axler’s approach quite interesting. One thing you’ll notice is that it not only gives you an eigenvalue, it also gives you the corresponding eigenvector. Though I’m a little bit confused.

We start with an arbitrary vector $v \in V$, and we form the iterates

(1)$v, Tv, T^2v, T^3v, \ldots$

They can’t all be linearly independent, so at some point, as Tom explained, we get an equation of the form

(2)$(T - \lambda_k)(T - \lambda_{k-1})\cdots (T-\lambda_2)(T-\lambda_1)v = 0.$

Now we start from the right hand side, working according to the following algorithm:

Is $(T-\lambda_1)v = 0$? If so, $v$ is an eigenvector of $T$ with eigenvalue $\lambda_1$, and we are done.

If not, we move left one step. Set $v' = (T-\lambda_1)v$. Is $(T-\lambda_2)v' = 0$? If so, $v'$ is an eigenvector of $T$ with eigenvalue $\lambda_2$, and we are done. And we carry on in this way.

At the end, we’ll get an eigenvalue $\lambda_k$, together with an explicit basis-free formula for its associated eigenvector

(3)$\tilde{v} = (T-\lambda_{k-1})\cdots (T-\lambda_1)v.$
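For what it’s worth, the recipe above is easy to try numerically. Here is a small sketch of my own (not from Axler’s book — the function name and the rank/least-squares dependency test are my choices) using NumPy: find the first linear dependency among the iterates, read off the polynomial, split it into linear factors over $\mathbb{C}$, and peel factors off from the right until some vector is annihilated.

```python
import numpy as np

def eigenpair_from_vector(T, v, tol=1e-10):
    """Extract one (eigenvalue, eigenvector) pair of T from a starting
    vector v: find the first linear dependency among v, Tv, T^2 v, ...,
    read off the polynomial, split it into linear factors over C, and
    apply the factors (T - lambda_i) right to left until some vector
    is annihilated."""
    n = T.shape[0]
    iterates = [v]
    for _ in range(n):
        iterates.append(T @ iterates[-1])
        if np.linalg.matrix_rank(np.column_stack(iterates), tol=tol) < len(iterates):
            break
    # Express the last iterate in terms of the earlier ones:
    # T^k v = c_{k-1} T^{k-1} v + ... + c_0 v.
    A = np.column_stack(iterates[:-1])
    c, *_ = np.linalg.lstsq(A, iterates[-1], rcond=None)
    # Monic polynomial x^k - c_{k-1} x^{k-1} - ... - c_0, highest degree first.
    lambdas = np.roots(np.concatenate(([1.0], -c[::-1])))
    w = v
    for lam in lambdas:
        w_next = (T - lam * np.eye(n)) @ w
        if np.linalg.norm(w_next) < tol * max(1.0, np.linalg.norm(w)):
            return lam, w  # w is an eigenvector of T with eigenvalue lam
        w = w_next
    raise RuntimeError("no factor annihilated the vector (numerical trouble)")

# Starting from v = (1, 1) with T = diag(2, 3) this returns one of the
# two eigenpairs; which one depends on the order in which the factors
# happen to be applied.
lam, vec = eigenpair_from_vector(np.diag([2.0, 3.0]), np.array([1.0, 1.0]))
```

Note that no linear system in the entries of an unknown eigenvector is ever solved: the eigenvector simply falls out as a product of factors applied to $v$.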

At first glance, that seems a slight advantage over the old determinant-based approach. We could get the eigenvalues out by solving the polynomial equation

(4)$\det(\lambda I - T) = 0,$

but then to get the corresponding eigenvectors we had to (I think?):

(a) choose a basis for $V$,

(b) choose a root $\lambda_i$ and express the equation $Tv = \lambda_i v$ in this basis

(c) solve these linear equations for $v$.

The advantage of the old approach is that it systematically gives you all the eigenvectors.

The advantage of Axler’s approach is that you can get a single (eigenvalue, eigenvector) pair without solving any linear equations at all. Moreover, you get an explicit basis-free formula for the eigenvector,

(5)$\tilde{v} = (T-\lambda_{k-1})\cdots (T-\lambda_1)v.$

Mmm.

Posted by: Bruce Bartlett on May 18, 2007 3:18 PM | Permalink | Reply to this

### Re: Axler’s approach also gives eigenvectors

Shouldn’t you be able to get all the eigenvectors (not just one of them) this way, by permuting the factors?

Posted by: Scott McKuen on May 18, 2007 8:03 PM | Permalink | Reply to this

### Re: Axler’s approach also gives eigenvectors

Shouldn’t you be able to get all the eigenvectors (not just one of them) this way, by permuting the factors?

Well no, it depends what vector you start with. For instance, if $v$ already was an eigenvector of $T$, then we’d get linear dependence after just one step. So our polynomial equation would be

(1)$X - \lambda = 0$

and the only (eigenvalue, eigenvector) pair you can extract from that is just $(\lambda, v)$. But perhaps you’re right in some sense… it seems a generic $v$ would be expected to produce quite a few eigenvalues (and their corresponding eigenvectors).

I guess I should read Axler’s paper :-) I’m curious to know how the polynomial $P(v)$ extracted from a vector $v$,

(2)$P(v) = a_n X^n + \cdots + a_1 X + a_0$

relates to the honest characteristic polynomial,

(3)$P = \det(\lambda \, \mathrm{id} - T).$

Posted by: Bruce Bartlett on May 19, 2007 1:51 AM | Permalink | Reply to this

### Re: Axler’s approach also gives eigenvectors

The story goes something like this. Take, as usual, a finite-dimensional complex vector space $V$ and an endomorphism $T$ of it.

Let $v \in V$. The polynomials $p$ for which $p(T)(v) = 0$ form an ideal of $\mathbb{C}[x]$. But $\mathbb{C}[x]$ is a principal ideal domain, so this ideal is of the form $(m_v)$ for some polynomial $m_v$, unique if we insist that it’s monic or $0$. This $m_v$ is Bruce’s ‘$P$’.

$m_v$ deserves to be called the minimal polynomial of $v$ with respect to $T$. It’s the same as the minimal polynomial of $T_v$, where $T_v$ is the restriction of $T$ to the smallest $T$-invariant subspace containing $v$.

To answer Bruce’s question, $m_v$ divides the minimal polynomial $m$ of $T$, since $m$ is in the ideal mentioned above. Hence $m_v$ also divides the characteristic polynomial of $T$.

I just said that the minimal polynomial of any vector divides the ‘overall’ minimal polynomial. But you can prove that there’s some vector $v$ whose minimal polynomial is actually equal to the overall minimal polynomial: $m_v = m$. The proof is very like the standard (?) proof that every finite abelian group achieves its exponent (i.e. there exists an element whose order is the least common multiple of the orders of all the elements).
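Tom’s divisibility claim is easy to check in an example. Below is a small numerical sketch of my own (the helper name `vector_min_poly` is made up): compute $m_v$ from the first dependency among the iterates $v, Tv, T^2v, \ldots$, then verify with `np.polydiv` that it divides the characteristic polynomial.

```python
import numpy as np

def vector_min_poly(T, v, tol=1e-10):
    """Monic minimal polynomial m_v of v with respect to T
    (coefficients listed highest degree first), read off from the
    first linear dependency among v, Tv, T^2 v, ...."""
    iterates = [v]
    while True:
        iterates.append(T @ iterates[-1])
        if np.linalg.matrix_rank(np.column_stack(iterates), tol=tol) < len(iterates):
            break
    c, *_ = np.linalg.lstsq(np.column_stack(iterates[:-1]), iterates[-1], rcond=None)
    return np.concatenate(([1.0], -c[::-1]))

T = np.diag([2.0, 2.0, 5.0])
v = np.array([1.0, 0.0, 1.0])
m_v = vector_min_poly(T, v)   # here m_v = (x-2)(x-5), degree 2
char = np.poly(T)             # characteristic polynomial (x-2)^2 (x-5), degree 3
_, remainder = np.polydiv(char, m_v)
# m_v divides the characteristic polynomial: the remainder vanishes.
assert np.allclose(remainder, 0.0, atol=1e-8)
```

In this example $m_v$ has degree $2$ while the characteristic polynomial has degree $3$, so the divisibility is proper: $v$ doesn’t “see” the repeated eigenvalue.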

Posted by: Tom Leinster on May 19, 2007 3:01 PM | Permalink | Reply to this

### Re: Linear Algebra Done Right

I see the “Down with Determinants!” battle cry as a bit reactionary, a throwback to some old Bourbaki (and other) prejudices. Gian-Carlo Rota has some amusing observations on this. Speaking of his undergraduate teacher Emil Artin (Indiscrete Thoughts, p. 14), he writes

Even the teaching of undergraduate linear algebra carried the imprint of Emil Artin’s very visible hand: we were to stay away from any mention of bases and determinants (a strange injunction, considering how much he liked to compute). The alliance of Emil Artin, Claude Chevalley, and Andre Weil was out to expunge all traces of determinants and resultants from algebra. Two of them are now probably turning in their graves.

Rota goes much further in detailing the Algebra wars which have been going on now for over a hundred years, between what he calls the Algebra One camp (Hilbert, Emmy Noether, and van der Waerden would be representative), which emphasizes most of the algebra underlying algebraic geometry and algebraic number theory, commutative algebra, homological algebra, etc., as opposed to the Algebra Two camp, whose exemplars would include Boole, Grassmann, Sylvester, Paul Gordan, Alfred Young, and others, emphasizing things like Boolean algebra, operator calculus, exterior algebra, invariant theory, symmetric functions, … Rota sums it up nicely as ‘algebraic combinatorics’. Algebra One is better developed and is politically stronger, but perhaps with the advent of computers which can do symbolic algebra, Algebra Two will be in the ascendant.

In a discussion of Dunford-Schwartz’s massive treatise Linear Operators, Rota mentions some determinantal identities, which were used to give an alternative proof of the Brouwer fixed-point theorem (chapter 5, section 12). This proof was submitted for publication in 1954, but rejected by a referee who was incensed that the proof didn’t use any homology theory. [I’m reminded too of a conversation I had once with John Baez, where he showed me a very clever proof due to John Milnor of the ‘hairy ball theorem’, that a vector field on a sphere of even dimension must vanish somewhere. My memory of the details is fuzzy, but homology was never mentioned, and the punch line was an elementary observation about polynomials which John told so well that I nearly fell out of my seat with surprise!]

See Indiscrete Thoughts for more great anecdotes about the Algebra Wars.

Posted by: Todd Trimble on May 18, 2007 4:15 PM | Permalink | Reply to this

### Re: Linear Algebra Done Right

As an amusing footnote to Rota’s account, you should check out the short paper “A topologist’s view of the Dunford-Schwartz proof of the Brouwer fixed point theorem” by N. Ivanov (available on his webpage here [link fixed - DC]). He shows that the proof is really the standard proof using de Rham cohomology, with the complicated determinant manipulations masking the use of differential forms.

Posted by: Andy Putman on May 18, 2007 7:32 PM | Permalink | Reply to this

### Re: Linear Algebra Done Right

Thanks, Andy; I’ll try to have a look. (I can’t handle dvi files where I am right now. By the way, it looks like the link you gave is broken.)

Even without looking at the paper, I’m sure the point is right, that the determinant manipulations could be elegantly recast in terms of differential forms. But “is really the standard proof” sounds like an exaggeration, since before you can begin the standard proof, you need to get the machinery of homology theory up and running, which takes a certain amount of work. I’m guessing what is meant is something like, “the Dunford-Schwartz proof is really a ‘machine-level’ proof whose underlying idea is presentable in a conceptually clearer form, if you already know De Rham cohomology.”

I think I can buy that. I’m not sure what the motivations of Dunford & Schwartz were in giving their proof; maybe during the 50’s, homology theory seemed more formidable than it does to us today, and they wanted a self-contained proof which would fit on 2 pages.

Posted by: Todd Trimble on May 18, 2007 11:28 PM | Permalink | Reply to this

### determinants in the Clifford algebra

In the Clifford Algebra approach that likes to call itself Geometric Algebra, the concept of eigenvectors is extended to eigenblades, and the determinant is just the n-dimensional (pseudoscalar) eigenblade (in n-dimensional space). You can turn that into the scalar determinant by dividing out by the unit pseudoscalar. This is a rather nasty example of forgetting units, or forgetting type – which is (if I understand it) a simple sort of decategorification.

It seems that this sort of determinant is meaningful in a linear space which doesn’t have an absolute metric - like, for example, spacetime. Dividing out by the pseudoscalar is detorsoring – something we should presumably be avoiding if we can.

Posted by: Robert Smart on May 18, 2007 4:31 PM | Permalink | Reply to this

### Re: determinants in the Clifford algebra

In geometric algebra the determinant of a matrix (not immediately viewed as representing a linear transformation) also arises in the (possibly semi-symbolic) evaluation of the dot product between two blades. Likewise, there’s the Cayley-Menger determinant relating n-dimensional simplex hyper-volumes to a determinant involving squared side lengths (I’ve never come across an online proof, so again I don’t know if this determinant arises naturally as a linear map). On the other hand the Wronskian determinant is naturally viewable as using the linear map view of a determinant but in a different context.

So the “matrix determinant” seems to crop up in some evaluational processes. I tend to agree with earlier posters: it’s the initial presentation in terms of expansion into minors that makes determinants seem incredibly abstruse.

Posted by: dave tweed on May 18, 2007 9:00 PM | Permalink | Reply to this

### Re: Linear Algebra Done Right

I am a bit confused about Axler’s aim, because much as I like a basis-free approach and its results, a first course of linear algebra that doesn’t support applications must be a real stumbling block for other courses.

My first course was explicitly matrix-based, and avoided determinants where it could, since they are no good for numerical calculations. As I remember it, other courses (mainly physics) had to work around our lack of basic linear algebra until we had caught up.

Btw, could there be a connection between determinants being algorithmically harder and bob’s observation that

trace and many other linear algebraic concepts have a natural abstract counterpart in terms of compact closed categories, [but] determinants don’t seem to fit in that realm?

### Re: Linear Algebra Done Right

The course I took that tried using Axler as a text was not a sophomore-level linear algebra following on after the calculus sequence, but an upper-level course at the same level as a first course in abstract algebra. I think it might have even required abstract algebra as a co- or pre-requisite.

Posted by: John Armstrong on May 18, 2007 7:23 PM | Permalink | Reply to this

### Re: Linear Algebra Done Right

Yes, the course I took was at least partly for physicists and engineers; the mathematicians tended to drift in and out on their mysterious ways.

For some reason groups were not a big thing. I even had to take an abstract algebra course later to begin to understand what the fuss was really about. :-)

### Re: Linear Algebra Done Right

Btw, could there be a connection between determinants being algorithmically harder and Bob’s observation that

trace and many other linear algebraic concepts have a natural abstract counterpart in terms of compact closed categories, [but] determinants don’t seem to fit in that realm?

Quite possibly! Different portions of linear algebra work in contexts of differing complexity. You give me a basic concept from linear algebra, I’ll tell you a general context where that idea makes sense:

• The basic idea of ‘composing operators’ and ‘identity operators’ is very simple: it makes sense in any category.
• The idea of ‘adding operators’ makes sense in any additive category.
• The idea of ‘multiplying operators by scalars’ makes sense in any $k$-linear category.
• The idea of ‘kernels’ and ‘cokernels’ makes sense in any abelian category.
• The idea of ‘trace’ makes sense in any compact closed category.
• The idea of ‘determinant’ makes sense in any symmetric monoidal abelian category, at least for objects of well-defined dimension.

Stuff like this is one thing that category theorists love, but normal mathematicians seem to hate. A normal mathematician learns a concept in some context and is happy to let matters rest there until generalization is necessary. A category theorist always wants to find the appropriate context for a given concept. Given a context, what concepts does it support? Given a concept, what contexts can it live in? These are things category theorists want to know!

(By the way, in my list above I haven’t tried to give maximally general contexts where these various linear algebra ideas make sense — just fairly general, fairly obvious nice contexts.)

Anyway: if you want to use all the ideas you know and love from linear algebra, you probably want to be in a symmetric monoidal compact closed $k$-linear abelian category!

That sounds like a mouthful, but there are a lot of examples. The obvious one is $Vect_k$, the category of finite-dimensional vector spaces over the field $k$. But, you could also try $Rep(G)_k$, the category of finite-dimensional representations of a group $G$ on vector spaces over $k$. Or, the category of finite-dimensional vector bundles over your favorite base space! Or, a category of sheaves of finite-dimensional vector spaces over some base space.

I don’t know general results saying ‘concepts that can only be defined in sophisticated contexts tend to lead to algorithmically complicated problems’. But, it would be fun to try to prove something like this.

Finally, a digression:

When I said that category theorists always want to know which concepts live in a given context, and which contexts support a given concept, you have to realize this is even true of this concept itself: concept/context duality!

In fact, concept/context duality is one thing Lawvere is famous for having formalized, using the concept of a doctrine.

And now, if you immediately read about doctrines, get the hang of them, and then ask “So, in what contexts does the concept of ‘doctrine’ make sense?”…

…then you’re starting to get the hang of it all.

But, we shouldn’t spend all our time climbing this particular bean-stalk.

But, it would be rather interesting to know what’s up there….

Posted by: John Baez on May 24, 2007 9:47 PM | Permalink | Reply to this

### Re: Linear Algebra Done Right

John wrote:

* The idea of ‘multiplying operators by scalars’ makes sense in any k-linear category.

This is something you can do in any monoidal category, I think, no? Given a morphism $f : A \to B$ and a scalar $s : I \to I$ you can set

$s \cdot f := \lambda'_B \circ (s \otimes f) \circ \lambda_A$

where $\lambda_A : A \cong I \otimes A$ and $\lambda'_B : I \otimes B \cong B$. By naturality you get the expected behaviour of scalar multiples with respect to multiplying and tensoring matrices, and this carries over to addition when there is one. This is what we use in our analysis of quantum protocols in A categorical semantics of quantum protocols, Quantum measurements without sums, POVMs and Naimark’s theorem without sums, and in some more sophisticated protocols which we are currently writing down. In all of these, scalar multiples emerge as probabilistic weights, due to branching in measurements, and this seems to be all you need. (You did of course mention that you hadn’t tried to give maximally general contexts where these various linear algebra ideas make sense — just fairly general, fairly obvious nice contexts.)

Posted by: bob on May 25, 2007 12:21 PM | Permalink | Reply to this

### Re: Linear Algebra Done Right

Thanks for the thoughtful reply and the digression!

I don’t know general results saying ‘concepts that can only be defined in sophisticated contexts tend to lead to algorithmically complicated problems’. But, it would be fun to try to prove something like this.

My speculation was based on the correspondence between cartesian closed categories and lambda calculus.

The connection between cartesian closed and compact closed categories is roughly that between lambda logic and quantum lambda logic, if I understand it correctly.

And it seems that lambda logic is a powerful basis for algorithmic constructions. Something requiring a more elaborated basis could possibly be algorithmically complicated.

It was of course terribly naive, as your list of explicit contexts shows. The basis is not the same as the terrain.

On the digression, believe it or not but it seems I am in-doctrinated by hanging out on the Café - I remember that thread.

Hanging out has unfortunately not yet transformed into getting the hang of it. But you are kind to show me where to find the bean-stalks among the confusing lushness of your mathematical garden.

### Re: Linear Algebra Done Right

Many thanks to Tom Leinster for starting this discussion about my book Linear Algebra Done Right and for suggesting that I might like to add a few comments.

Let me begin by noting that although the entire book “Linear Algebra Done Right” is not freely available on-line, I posted two important chapters on the book’s web site: see http://www.axler.net/LADR.html and then click on “sample chapters”. The two chapters available there deal with inner product spaces (Chapter 6) and linear maps on them (Chapter 7), culminating in the finite-dimensional spectral theorem and its easy corollaries such as the polar decomposition and the singular-value decomposition of a linear operator (all without determinants, of course).

Now for some comments about the comments on this blog. My article “Down with Determinants!” was intentionally polemical because I was trying to counter what I believe is an overemphasis on determinants in elementary linear algebra.

For example, stop a random mathematics graduate student on the street and ask why an n-by-n matrix has at most n distinct eigenvalues. Almost surely you will get an answer that says (1) eigenvalues are precisely the zeros of the characteristic polynomial (which is usually defined via determinants), (2) the characteristic polynomial has degree n, (3) a polynomial of degree n has at most n distinct zeros, and thus (4) an n-by-n matrix has at most n distinct eigenvalues.

The proof sketched above is mathematically correct, but it gives little insight and it is much more complicated than necessary. If not for the fixation on determinants, most people would come up with the following much simpler proof: (1) A collection of nonzero eigenvectors corresponding to distinct eigenvalues is linearly independent (as can be proved in a few lines using only the definitions of eigenvalue and linearly independent); (2) in an n-dimensional vector space no linearly independent set can have more than n elements; thus (3) an n-by-n matrix has at most n distinct eigenvalues.
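For the record, step (1) of the simpler proof can indeed be done in a few lines. Here is one standard way the argument goes (the write-up is mine, not quoted from Axler):

```latex
\textbf{Claim.} Eigenvectors $v_1, \dots, v_m$ of $T$ with distinct eigenvalues
$\lambda_1, \dots, \lambda_m$ are linearly independent.

\textbf{Sketch.} Suppose not, and pick a nontrivial dependency of minimal length,
\[
  a_1 v_1 + \cdots + a_m v_m = 0, \qquad \text{all } a_i \neq 0 .
\]
Applying $T - \lambda_m I$ kills the last term:
\[
  a_1(\lambda_1 - \lambda_m)\, v_1 + \cdots
    + a_{m-1}(\lambda_{m-1} - \lambda_m)\, v_{m-1} = 0 .
\]
Since the $\lambda_i$ are distinct, every coefficient is nonzero, giving a
strictly shorter nontrivial dependency: a contradiction.
```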

The advantage of the proof sketched above is that it relies only on simple consequences of the definition of linear independence and eigenvalues, which is where we want to focus the attention of students.

Although my article “Down with Determinants!” was intentionally polemical, my book “Linear Algebra Done Right” is much less so (except for the title and a few stray sentences). The book is intended to be used as a textbook for a second course in linear algebra, so it is still at a fairly elementary level. The first course in linear algebra (at least in the United States) is usually matrix- and computationally-oriented, and a majority of students in the class are not mathematics majors. In contrast, in the second course, which is the intended audience for my book, the focus is much more on linear maps and vector spaces, the course is proof-oriented, and most of the students are mathematics majors.

Although my field of research is analysis, some of my best friends are algebraists who frequently use determinants in their research. So of course I know that determinants have many important uses in advanced mathematics such as some of the topics discussed on this blog. I do believe, however, that determinants have little utility for understanding elementary linear algebra (say at the level of my book).

In his initial comment, Tom used the term “eventual eigenvector” instead of “generalized eigenvector”, which I used both in the paper and in the book. I agree with Tom that “generalized eigenvector” is a lousy term (actually, so is “eigenvector”) and that his suggestion is a nice improvement. In fact, in the first draft of the paper I instead used the term “power vector”. But since “generalized eigenvector” is the standard term and I was challenging several other ideas, I decided not to fight on that issue. Perhaps if I had thought of Tom’s term “eventual eigenvector” I would have found it sufficiently appealing to try changing away from “generalized eigenvector”. Perhaps I may even do that in the next edition of the book.

I do like the interpretation of determinants as volume, as I stated in the paper and the book. The application of determinants to volume seems to me to be the only reasonable use of determinants at the undergraduate level.

Another of the comments on this blog asks how to deal with linear operators on vector spaces over non-complete fields. There are two ways to do this: (1) embed the non-complete field in its algebraic closure, as I do in the paper, or (2) give up on factoring a polynomial into linear factors and work directly with the non-complete field; the techniques for doing this are illustrated in Chapter 9 (titled “Operators on Real Vector Spaces”) of my book.

Many thanks to everyone who has taken the time to comment on the ideas raised in my paper and book.

Posted by: Sheldon Axler on May 21, 2007 8:20 AM | Permalink | Reply to this

### Re: Linear Algebra Done Right

“For example, I have before me a paper that defines an abstract dynamical system as a continuous map from a metric space to itself.”

What paper is that? Within abstract topological dynamics, metric phase spaces and phase groups based on a single generator would both be considered to be special cases. Or maybe I’m not reading “to itself” correctly in regard to assuming that there is a group involved here. At any rate, I’ve been a little uneasy with inconsistency that I see in the naming of the various varieties of group actions.

Off topic - Why have I had so much trouble getting a block quote to work on this site?

Posted by: Richard on May 22, 2007 3:39 AM | Permalink | Reply to this

### Re: Linear Algebra Done Right

Why have I had so much trouble getting a block quote to work on this site?

The XML checker is really strict. You have to wrap the quote in paragraph tags too. For instance, the above is:
<blockquote><p>Why have I had so much trouble getting a block quote to work on this site?</p></blockquote>

Posted by: John Armstrong on May 22, 2007 3:57 AM | Permalink | Reply to this

### Re: Linear Algebra Done Right

John wrote:

You have to wrap the quote in paragraph tags too.

Not if you use the text filter ‘itex to MathML with parbreaks’, as I always do. Then you just need to wrap the quote with a blank line before and after. That’s what ‘parbreaks’ accomplishes: a blank line obviates the need for that <p> stuff.

Posted by: John Baez on May 23, 2007 7:14 AM | Permalink | Reply to this

### Re: Linear Algebra Done Right

Richard wrote:

What paper is that?

Roy L. Adler, ‘Symbolic dynamics and Markov partitions’, Bulletin of the AMS 35 (1998), no. 1, 1-56. But I’ve seen that definition in several other papers/books too.

‘To itself’ means that the map is an endomorphism. Thus, an abstract dynamical system is defined as a pair $(X, \phi)$ where $X$ is a metric space and $\phi: X \to X$ is a continuous map.

(Actually, Adler insists that the space is compact. I guess that’s analogous to sticking to finite-dimensional vector spaces.)

Posted by: Tom Leinster on May 22, 2007 2:29 PM | Permalink | Reply to this

### Re: Linear Algebra Done Right

A propos of determinants being better done with exterior algebra, I’ve had success teaching linear algebra and introducing exterior algebra very early on. The point is that bivectors, trivectors, 1-forms, 2-forms, and their twisted versions all have very natural geometric interpretations. Introducing them all geometrically at the beginning, before even talking about bases and coordinates, is a good way to motivate an abstract approach to vector spaces, since you have all these different objects which all have addition and scalar multiplication. I learned about the beautiful geometric pictures of exterior algebra from these books:

William L. Burke, “Applied Differential Geometry”

Gabriel Weinreich, “Geometrical Vectors”

Posted by: Mike Shulman on May 22, 2007 9:02 PM | Permalink | Reply to this

### Re: Linear Algebra Done Right

What’s a bi/trivector?

Posted by: Tom Leinster on May 22, 2007 10:55 PM | Permalink | Reply to this

### Re: Linear Algebra Done Right

Tom wrote:

What’s a bi/trivector?

Like determinants, bivectors and trivectors are creatures from the world of exterior algebra.

If $V$ is a vector space, $p$-vectors are elements of the $p$th exterior power $\Lambda^p(V)$. For $p = 2$ or $3$ we call these bivectors or trivectors. I’ve never heard people mention ‘tetravectors’ — but people do often talk about ‘multivectors’, or sometimes ‘polyvectors’.

Just as you can visualize a vector as an arrow, you can visualize a bivector as an ‘oriented area element’. But, it takes some practice to visualize oriented area elements! In the simplest case, you can imagine

$v \wedge w \in \Lambda^2(V)$

as a little parallelogram with sides $v$ and $w$, perhaps marked with a little spiralling arrow to distinguish it from

$w \wedge v = - v \wedge w$

The parallelogram is an ‘area element’ — a little piece of area. The spiralling arrow is the ‘orientation’.

One must be careful, though, since we have equations like

$v \wedge w = (v + w) \wedge v$

and

$v \wedge w = {1\over \sqrt{2}}(v + w) \wedge {1\over \sqrt{2}}(w - v)$

So, the detailed shape of the parallelogram doesn’t matter — we can shear it or rotate it, for example, without changing the oriented area element!

It gets even harder to visualize linear combinations

$v \wedge w + v' \wedge w'$

and at this point we usually resort to algebra. But the visual intuition is nonetheless important.

Similarly, trivectors are ‘oriented volume elements’. They’re equivalence classes of linear combinations of oriented parallelipipeds.

It’s hard to visualize these equivalence classes of linear combinations — but not as hard as it is to spell ‘parallellipiped’!

If $V$ is a vector space, a linear map

$f: V \to V$

gives rise to a linear map

$\Lambda^p(f) : \Lambda^p(V) \to \Lambda^p(V)$

on $p$-vectors. If $V$ is $p$-dimensional, $\Lambda^p(V)$ is one-dimensional, so this linear map is just multiplication by a number — the determinant of $f$!

In short, the determinant says how ‘oriented $p$-dimensional volume elements’ transform.
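This last remark can be made concrete: the matrix of $\Lambda^p(f)$ in the standard basis of $\Lambda^p(\mathbb{R}^n)$ is the $p$th compound matrix of $f$, whose entries are the $p \times p$ minors; for $p = n$ it is $1 \times 1$, and its single entry is $\det f$. A quick NumPy sketch of my own (the name `exterior_power` is made up):

```python
import numpy as np
from itertools import combinations

def exterior_power(f, p):
    """Matrix of Λ^p(f) on Λ^p(R^n), in the basis of wedges
    e_{i_1} ∧ ... ∧ e_{i_p} with i_1 < ... < i_p: its entries
    are the p×p minors of f (the p-th compound matrix)."""
    n = f.shape[0]
    index_sets = list(combinations(range(n), p))
    return np.array([[np.linalg.det(f[np.ix_(I, J)]) for J in index_sets]
                     for I in index_sets])

f = np.array([[1.0, 2.0],
              [3.0, 4.0]])
top = exterior_power(f, 2)   # Λ^2 of a 2×2 map: a 1×1 matrix
assert np.isclose(top[0, 0], np.linalg.det(f))   # its entry is det f
```

Functoriality of $\Lambda^p$ says $\Lambda^p(f g) = \Lambda^p(f)\,\Lambda^p(g)$, which in coordinates is exactly the Cauchy-Binet formula for minors of a product.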

All this stuff goes back to Grassmann’s famous, and famously unreadable, Ausdehnungslehre. Anyone who wants (multi)linear algebra done right has got to absorb the insights of Grassmann.

(But I agree, this determinant stuff is fancier and should be put off ‘til later, if possible.)

Posted by: John Baez on May 23, 2007 4:49 AM | Permalink | Reply to this

### Re: Linear Algebra Done Right

In short, the determinant says how ‘oriented p-dimensional volume elements’ transform.

Even better, exterior algebra tells us how k-dimensional volume elements transform for all k<p. If you insist on picking a basis (ugh) you just get back matrices of minors. But writing it out in exterior algebra shows how they all fit together into one seamless whole!

Posted by: John Armstrong on May 23, 2007 4:59 AM | Permalink | Reply to this

### Re: Linear Algebra Done Right

I’ve never heard people mention ‘tetravectors’.

Thank goodness for that! After bivectors and trivectors we should expect a Latin-based continuation to quadrivectors.

Posted by: David Corfield on May 23, 2007 11:04 AM | Permalink | Reply to this

### Re: Linear Algebra Done Right

David wrote:

Thank goodness for that! After bivectors and trivectors we should expect a Latin-based continuation to quadrivectors.

Hmm… good point! Then why does Todd talk about tetracategories instead of quadricategories — and why didn’t you scold him for it?

Posted by: John Baez on May 24, 2007 11:50 PM | Permalink | Reply to this

### Re: Linear Algebra Done Right

Thanks for the explanation, John.

So to check my understanding: suppose we have linearly independent elements $v_1, \ldots, v_p$ of a vector space. Then the $p$-tuples $(w_1, \ldots, w_p)$ for which $w_1 \wedge \cdots \wedge w_p = v_1 \wedge \cdots \wedge v_p$ are exactly those that can be expressed as \begin{aligned} w_1 &= a_{11} v_1 + \cdots + a_{1p} v_p,\\ &\;\;\vdots\\ w_p &= a_{p1} v_1 + \cdots + a_{p p} v_p \end{aligned} where $(a_{i j})$ is a matrix of scalars with determinant $1$… right?
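This rule can at least be sanity-checked numerically in the simplest interesting case (a small illustration of mine, not a proof): in $\mathbb{R}^3$ a bivector $v \wedge w$ can be represented by the cross product $v \times w$, and changing the pair by a determinant-$1$ matrix leaves it unchanged.

```python
import numpy as np

# In R^3 the bivector v ∧ w corresponds to the cross product v × w.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, -1.0])

a = np.array([[1.0, 2.0],
              [1.0, 3.0]])        # det(a) = 1
w1 = a[0, 0] * v1 + a[0, 1] * v2
w2 = a[1, 0] * v1 + a[1, 1] * v2

# Same bivector: w1 ∧ w2 = det(a) · (v1 ∧ v2) = v1 ∧ v2.
assert np.allclose(np.cross(w1, w2), np.cross(v1, v2))
```

More generally $w_1 \wedge w_2 = \det(a)\,(v_1 \wedge v_2)$, so determinant $1$ is exactly the condition for the wedge to be preserved.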

(It’s hard to spell parellellepiped - but not as hard as it is to remember to type ‘$p$’ instead of ‘$n$’!)

Now I have another question… Sheldon Axler wrote:

I do like the interpretation of determinants as volume, as I stated in the paper and the book.

I like it too. But one thing I’ve never properly understood is: why should volume be multilinear? In two dimensions, I can draw a picture and see that area should be multilinear, but why should it be true in higher dimensions?

Posted by: Tom Leinster on May 23, 2007 5:30 PM | Permalink | Reply to this

### Inductive Intuition; Re: Linear Algebra Done Right

By your admitted intuition, it is obvious that volume is multilinear in 2-D. Hence, in an n-dimensional vector space, it is multilinear on each PAIR of arguments, hence on all n(n-1)/2 such pairs. Once you see that the volume does not depend on the sequence of axes chosen, you next see that pairwise multilinearity forces n-multilinearity.

Determinants not needed!

Another approach: convince yourself that it’s true for an n-simplex. Warning: not every polytope can be simplexified. The volume of an n-simplex is often given in terms of a determinant, but that is not necessary for the proof.

Posted by: Jonathan Vos Post on May 24, 2007 6:19 PM | Permalink | Reply to this

### Re: Linear Algebra Done Right

Tom wrote:

So to check my understanding […]

[…] right?

Exactly! And that funky formula we all learn for the determinant is easily derived this way!

(It’s hard to spell parellellepiped - but not as hard as it is to remember to type ‘$p$’ instead of ‘$n$’!)

True. By the way, I’d never name something ‘$a_{p p}$’ like you just did, especially when teaching an undergraduate math class. I even try to avoid the infinite-dimensional projective unitary group, $PU$.

But one thing I’ve never properly understood is: why should volume be multilinear?

Length times width times breadth times…

Posted by: John Baez on May 24, 2007 10:10 PM | Permalink | Reply to this

### Re: Linear Algebra Done Right

By the way, I’d never name something app like you just did, especially when teaching an undergraduate math class.

This is why it’s also not a good idea to name a projection operator ‘P’ when teaching undergraduate math: inevitably one has to teach them things like PP = P.

Posted by: Todd Trimble on May 24, 2007 11:13 PM | Permalink | Reply to this

### Re: Linear Algebra Done Right

Is it really so hard to defuse this? A couple of years ago I was teaching cryptology to 14- to 16-year-olds at a summer camp. Since I had the oldest group I wanted to throw some basic quantum mechanics at them so I wouldn’t lift off the ground as I waved my hands past quantum computation.

Anyhow, I came up to one pair of terms…

“So physicists like to split up this ‘bracket’ into the part that tells us a state and the part that measures it. We call that part a ‘ket’ and that one – go ahead and get it out of your system – a ‘bra’.”

So of course there was a round of giggling through the classroom. And then it stopped. And we kept saying “bra” with no ill effects for another week.

Posted by: John Armstrong on May 24, 2007 11:53 PM | Permalink | Reply to this

### Linear Algebra Done Silly

Is it really so hard to defuse this?

I wasn’t serious of course. Just a little attempt at some light-hearted banter… well, at least I was smiling. :-P

Posted by: Todd Trimble on May 25, 2007 1:25 AM | Permalink | Reply to this

### Re: Linear Algebra Done Silly

Ah, well, just sharing an anecdote that might be useful in teaching undergraduates (or lower). Little communication tricks like that prove more and more useful the more teaching I do.

Posted by: John Armstrong on May 25, 2007 3:35 PM | Permalink | Reply to this

### Re: Linear Algebra Done Right

So physicists like to split up this ‘bracket’ into the part that tells us a state and the part that measures it.

By the way: my functional analysis teacher, back in the day, tried to convince us that writing $\langle \psi |$ and $|\phi \rangle$ was a most stupid idea of those physicists. But I don’t think so (and didn’t back then). It is in fact one step closer to the right diagrammatic reasoning.

Posted by: urs on May 25, 2007 3:42 PM | Permalink | Reply to this

### Re: Linear Algebra Done Right

I remember one lecture of Kauffman’s at the short course in quantum computation back at the 2000 AMS meeting where he played very fast and loose with the diagrammatics. Write an operator as a ket-bra and draw lines connecting operators. Then use knot theory on the diagrams to do your calculations. At first it’s just silly playing around with notation, but after a while you realize that it really works, and then it’s hard to see it any other way.
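The basic move behind that diagrammatics is that composing two ket-bra (outer-product) operators just contracts the inner “line”: $(|a\rangle\langle b|)(|c\rangle\langle d|) = \langle b|c\rangle\, |a\rangle\langle d|$. A small numpy sanity check of that rule (random vectors, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# Four random complex vectors playing the roles of |a>, |b>, |c>, |d>.
a, b, c, d = (rng.standard_normal(2) + 1j * rng.standard_normal(2)
              for _ in range(4))

def ketbra(x, y):
    # The rank-one operator |x><y| as a matrix: outer product with y conjugated.
    return np.outer(x, y.conj())

# (|a><b|)(|c><d|) = <b|c> |a><d| : composition contracts the inner line.
lhs = ketbra(a, b) @ ketbra(c, d)
rhs = np.vdot(b, c) * ketbra(a, d)
assert np.allclose(lhs, rhs)
```

Drawing each ket-bra as a box with one line in and one line out, the scalar $\langle b|c\rangle$ is exactly the closed-up wire joining the two boxes.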

Posted by: John Armstrong on May 25, 2007 4:26 PM | Permalink | Reply to this

### Re: Linear Algebra Done Right

Over at the Noncommutative Geometry Blog Masoud Khalkhali has posted an interesting and very detailed followup to our discussion here:

Determinant, Trace, and Noncommutative Geometry

His main point is the one also made above, that determinants are best thought of in terms of exterior algebra. But he has a couple of interesting add-ons to that observation.

Posted by: urs on June 14, 2007 10:10 AM | Permalink | Reply to this

### Re: Linear Algebra Done Right

A stroke of genius: Sergei Treil has written a book called Linear Algebra Done Wrong.

Posted by: Tom Leinster on January 23, 2008 12:11 PM | Permalink | Reply to this

### Re: Linear Algebra Done Right

Speaking of the dynamics of square matrices, here is a result one can prove if one is familiar with linear algebra:

We may identify an $n \times n$ matrix $A$ with the linear map sending the column vector $v$ to $A v$. Let $A$ and $B$ be $n \times n$ and $m \times m$ matrices with complex entries. The following are equivalent.

1. The dynamical system $(\mathbb{C}^m, B)$ is a factor of the system $(\mathbb{C}^n, A)$, i.e., there is a surjective linear map $T \colon \mathbb{C}^n \to \mathbb{C}^m$ such that $T A = B T$.

2. The system $(\mathbb{C}^m, B)$ is a subsystem of $(\mathbb{C}^n, A)$, i.e., there is an injective linear map $T \colon \mathbb{C}^m \to \mathbb{C}^n$ such that $A T = T B$.

3. The list of Jordan blocks of $B$ can be obtained from the list of Jordan blocks of $A$ by deleting some Jordan blocks and reducing the sizes of some of the remaining ones.

See the book “An Introduction to Symbolic Dynamics and Coding”, Chapter 12, where this is related to right closing factor maps between shift spaces (of equal entropy), or see this mathlinks thread.
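The simplest instance of the factor direction can be checked by hand. Take $A$ to be a single Jordan block of size 2 and $B$ the size-1 block obtained by shrinking it; a minimal numpy sketch (the choice of $T$ here is just one example of a surjective intertwiner):

```python
import numpy as np

lam = 3.0
A = np.array([[lam, 1.0],
              [0.0, lam]])   # one Jordan block of size 2, eigenvalue lam
B = np.array([[lam]])        # the same block with its size reduced to 1

# T : C^2 -> C^1 is surjective and intertwines the systems: T A = B T,
# so (C^1, B) is a factor of (C^2, A), as item 3 predicts.
T = np.array([[0.0, 1.0]])
assert np.allclose(T @ A, B @ T)
```

Note that projecting onto the *first* coordinate instead, $T = (1\ 0)$, fails to intertwine: the off-diagonal 1 of the Jordan block obstructs it, which is why which blocks can shrink is controlled by the Jordan structure.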

Posted by: Jisang on August 3, 2009 5:45 PM | Permalink | Reply to this

### Re: Linear Algebra Done Right

In his Bonn thesis, Nikolai Durov introduced, among many other things, a notion of alternating generalized ring (Sec. 5.5), which is a commutative finitary monad on Set with a certain alternativity property. For such monads he showed how to do exterior algebra, including determinants.

Posted by: Zoran Skoda on August 13, 2009 10:07 PM | Permalink | Reply to this
Read the post The 1000th Post on the n-Category Café
Weblog: The n-Category Café
Excerpt: Meet our new hosts: Alex Hoffnung, Tom Leinster, Mike Shulman and Simon Willerton!
Tracked: November 13, 2009 3:29 AM
Weblog: The n-Category Café
Excerpt: Remarks on the Mandelbrot set, on the occasion of his death.
Tracked: October 18, 2010 9:40 AM

### Re: Linear Algebra Done Right

In fact, he goes as far as to claim that the only part of undergraduate mathematics where determinants are needed is the change of variables formula for multi-variable integrals.

It's not needed there either. You can work out any specific integral just using the anticommutativity of the wedge product of differentials.
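Toby’s point can be illustrated with polar coordinates: expand $dx \wedge dy$ using only bilinearity and the rules $dr \wedge dr = d\theta \wedge d\theta = 0$, $d\theta \wedge dr = -\,dr \wedge d\theta$, and the factor $r$ appears with no determinant in sight. A sympy sketch, doing the expansion by hand rather than calling any built-in wedge routine:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x, y = r * sp.cos(th), r * sp.sin(th)

# dx = x_r dr + x_th dth  and  dy = y_r dr + y_th dth.
# Expanding dx ^ dy, the dr^dr and dth^dth terms die, and
# dth^dr = -dr^dth collects everything into one coefficient:
#   dx ^ dy = (x_r * y_th - x_th * y_r) dr ^ dth.
coeff = sp.simplify(x.diff(r) * y.diff(th) - x.diff(th) * y.diff(r))
assert coeff == r   # dx ^ dy = r dr ^ dtheta
```

The familiar area element $r\, dr\, d\theta$ drops out of pure anticommutativity; one never needs to mention that the coefficient “is” a Jacobian determinant.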

Posted by: Toby Bartels on August 30, 2013 4:03 AM | Permalink | Reply to this

### Re: Linear Algebra Done Right

Well, everyone is different. Determinants (as volumes) were the only part of linear algebra I even partly understood (although why alternating? I never got that), so knowing Cramer’s rule let me extend that a bit, and invertibility obviously fails if you squash one whole dimension down to 0.

That said, I was someone who understood zero, or close to zero, of the proofs I was shown in school. Now, many years later, I get the point of doing things cleanly and basis-free, and something like this is perfect for an improved, logical understanding. I think Axler is totally right to call it a second course in linear algebra.

But what’s better in terms of mathematical purity is probably not better in terms of pedagogy. Most students will want to do computations in $\mathbb{R}^n$ with an orthonormal basis, then get on to engineering things.

Posted by: Isomorphismes on September 15, 2015 11:06 AM | Permalink | Reply to this

Post a New Comment