

November 22, 2007

Geometric Representation Theory (Lecture 13)

Posted by John Baez

Happy Thanksgiving! Some blogs may quiet down during the holidays, but not this one. We know your desire for math, physics and philosophy doesn’t slack off just because some Americans are snoozing as they recover from gorging on turkey.

In this episode of the Geometric Representation Theory seminar, James Dolan begins to explain how the story so far is connected to knot theory — or for now, the theory of braids. Namely, he shows how decorated braids can be used to describe Hecke operators between flag representations of $GL(n,\mathbb{F}_q)$ — or its $q = 1$ version, the symmetric group $n!$.

He’s already described these operators using certain matrices. The braid picture is equivalent. But, I think it’s the tip of a bigger iceberg: a categorified version of the theory of quantum groups and their associated braid group representations! We’ll take a serious stab at that next quarter, once we get some more machinery in place.

It’s not the most important part, but the coolest part is near the end. We’ve already seen how Bruhat classes in the Grassmannian of $k$-planes in $n$-space correspond to Young diagrams with $\le k$ columns and $\le n-k$ rows. And, we’ve already seen how these Bruhat classes correspond to certain Hecke operators between flag representations. Today, Jim shows us how to draw Hecke operators between flag representations as decorated braids. Putting these three facts together, we should be able to see Young diagrams sitting inside the decorated braids that correspond to Bruhat classes in Grassmannians!

And we can… in fact, it’s very easy.
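For readers who like to count along: the correspondence above can be checked numerically. Here is a little Python sketch (my own illustration, not from the lecture; the function name is made up) that enumerates Young diagrams with at most $k$ columns and at most $n-k$ rows and compares their number with the size of the "Grassmannian over $\mathbb{F}_1$", i.e. the number of $k$-element subsets of an $n$-element set:

```python
from math import comb

def young_diagrams_in_box(cols, rows):
    """Enumerate Young diagrams with at most `cols` columns and at most
    `rows` rows, as weakly decreasing tuples of row lengths (no trailing zeros)."""
    def gen(rows_left, max_part):
        yield ()
        if rows_left == 0:
            return
        for part in range(1, max_part + 1):
            for rest in gen(rows_left - 1, part):
                yield (part,) + rest
    return set(gen(rows, cols))

n, k = 5, 2
diagrams = young_diagrams_in_box(k, n - k)
# Over F_1 ("q = 1") the Grassmannian of k-planes in n-space is just the set
# of k-element subsets of an n-element set, so the Bruhat classes, one per
# Young diagram in the k x (n-k) box, should number C(n, k).
print(len(diagrams), comb(n, k))  # 10 10
```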

  • Lecture 13 (Nov. 8) - James Dolan on Hecke operators between flag representations of n!. Comparing two notations for such Hecke operators: crackpot matrices and braid diagrams. Preview of the q-deformed case, where the braid diagrams will allow us to categorify the Jones polynomial (thought of as an invariant of positive braids). Seeing a Young diagram in the braid describing a Hecke operator coming from a Schubert cell of a Grassmannian.

Posted at November 22, 2007 4:32 PM UTC


34 Comments & 0 Trackbacks

Re: Geometric Representation Theory (Lecture 13)

Rather than say anything wise or witty about Hecke operators (on which my contribution would be likely to run along the lines of “Wow. Uh. Cool.”), I thought I’d devote a little space to rambling about finite sets and the “field with one element”. I can’t remember having seen this material collected together before but, even if it’s out there somewhere, it might be interesting to someone to see some thoughts, or thought-like activity, on the subject here.

This rambling was set off by a remark that jim dolan made in passing in one of the geometric representation theory lectures (though unfortunately I can’t remember which one) comparing finite sets to finite vector spaces, and my having the immediate reaction that, if I understood anything at all about the whole $\mathbb{F}_1$ business, it’s that finite sets correspond not to vector (nor yet affine) spaces, but to projective spaces. (I assume jd knows this.) So I thought I’d start with that. The justification goes as follows:

The “field $\mathbb{F}_1$” should have one member, $0$. We start by constructing some $n$-dimensional vector spaces over this in exactly the usual way, by taking the space of $n$-tuples over the set of its members. Of course, the “set of ways of choosing $n$ members of $\{0\}$” is fairly trivial. We get a $0$-dimensional vector space consisting of all ordered, um, niltuples, i.e. $\{()\}$, a $1$-dimensional vector space consisting of all ordered, er, simples (or perhaps I mean singletons), i.e. $\{(0)\}$, a $2$-dimensional vector space consisting of all ordered pairs, i.e. $\{(0, 0)\}$, a $3$-dimensional vector space consisting of all ordered triples, i.e. $\{(0, 0, 0)\}$, etc. Obviously each of these spaces has just one point.

The affine spaces can be, as usual, considered to consist of the same underlying sets but forgetting the addition on the vectors. (“Forgetting which point is the origin”, as people usually put it, would be quite hard, seeing as how the origin is the only point there is).

We then follow a standard procedure for constructing projective spaces out of affine spaces, thus:

The projective point is the same as the affine point, $\mathbb{F}_1P^0 = \{()\}$.

The projective line is the affine line, $\{(0)\}$, together with a projective point “at infinity”, giving $\mathbb{F}_1P^1 = \{(0), ()\}$.

The projective plane is the affine plane, $\{(0, 0)\}$, together with a projective line “at infinity”, giving $\mathbb{F}_1P^2 = \{(0, 0), (0), ()\}$.

The projective $3$-space is the affine $3$-space, $\{(0, 0, 0)\}$, together with a projective plane “at infinity”, giving $\mathbb{F}_1P^3 = \{(0, 0, 0), (0, 0), (0), ()\}$.

And so forth. Thus a “projective $n$-space over $\mathbb{F}_1$” is a set with $n+1$ elements.
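A quick sanity check of this count, in the usual "counting points over $\mathbb{F}_q$" spirit: projective $n$-space over $\mathbb{F}_q$ has $1 + q + \cdots + q^n$ points, and setting $q = 1$ in this polynomial gives $n+1$. A small Python sketch (my own, purely illustrative):

```python
def projective_space_size(n, q):
    """Points of projective n-space over a 'field with q elements':
    the q-integer [n+1]_q = 1 + q + ... + q^n."""
    return sum(q**i for i in range(n + 1))

# Over a genuine finite field, e.g. the projective plane over F_3:
print(projective_space_size(2, 3))  # 13
# Setting q = 1 recovers the claim that P^n over F_1 is an (n+1)-element set:
print([projective_space_size(n, 1) for n in range(4)])  # [1, 2, 3, 4]
```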

Then other stuff works the way we expect in projective spaces. For instance, any set of two different points determines a line—in fact, any set of two different points is a line. Likewise, in a projective plane (i.e. a $3$-element set), any two different lines (i.e. any two different pairs, each consisting of two different points) intersect in a single point. Poincaré duality goes over to set complementation. Etc.

Some weirdness arises when we try to think of a projective $n$-space as the set of lines through the origin in an $(n+1)$-dimensional vector space. The weirdness lies in the fact that it seems very odd to talk about $n+1$ lines going through the origin when not only does each line “consist of” a single point, but it’s the same point for all the lines!

But even this can be made to make sense—of a sort. In general, we can specify a line through the origin in a vector space by specifying a vector in terms of its components in some basis, and then forgetting about the absolute values of the components and considering only their ratios. So we are concerned with the scalars $a_{ij}$ such that, for components $x_i$ and $x_j$, we have $x_i = a_{ij} x_j$.

But the scalars $a_{ij}$ are drawn from the field over which the vector space is defined, and in the case of “$\mathbb{F}_1$”, there is only one such scalar, namely $0$. So for any two components $x_i$ and $x_j$, we must have either $x_i = 0 \cdot x_j$ or $x_j = 0 \cdot x_i$, i.e. either $x_i$ or $x_j$ must be zero. If this is to be true for any pair of components in the vector, it follows that at most one component can be “non-zero”—whatever the heck that means. Since there are $n+1$ components, any one of which can be “non-zero”, while the rest must be zero, there are $n+1$ possible lines! They are “simply” the “coordinate axes”. Once we’ve got this far, it then becomes obvious that “planes through the origin” must be in bijection with pairs of different lines (so we have, e.g., the “xy-plane”, the “yz-plane”, the “xz-plane”, etc), and similarly, in general, “$p$-spaces through the origin” are in bijection with $p$-element sets of different lines.
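This bijection is consistent with $q$-counting: the number of $p$-dimensional subspaces of an $(n+1)$-dimensional space over $\mathbb{F}_q$ is the Gaussian binomial coefficient, and evaluating it at $q = 1$ gives the ordinary binomial coefficient, i.e. the number of $p$-element subsets of the $n+1$ coordinate axes. A quick check in Python (the helper names are my own):

```python
from fractions import Fraction
from math import comb

def q_int(m, q):
    """The q-integer [m]_q = 1 + q + ... + q^(m-1); at q = 1 this is just m."""
    return sum(q**i for i in range(m))

def gauss_binom(n, k, q):
    """Gaussian binomial [n choose k]_q: for prime powers q, the number of
    k-dimensional subspaces of an n-dimensional vector space over F_q."""
    result = Fraction(1)
    for i in range(1, k + 1):
        result *= Fraction(q_int(n - k + i, q), q_int(i, q))
    return int(result)

# p-dimensional subspaces of F_1^(n+1) <-> p-element subsets of the n+1 axes:
print(gauss_binom(4, 2, 1), comb(4, 2))  # 6 6
# For comparison, a real field: planes through the origin in F_2^4:
print(gauss_binom(4, 2, 2))  # 35
```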

At this point I feel I ought to be explicit about the strange fact that although, in “$\mathbb{F}_1$”, it is surely the case that $0 \cdot 0 = 0$, nonetheless zero is being considered not to have a multiplicative inverse, any more than it does in a genuine field: this is what prevents $x_i = 0 \cdot x_j$ from implying that $x_j = 0 \cdot x_i$, and thereby leaves us free to imagine that one component, at any rate, really is “non-zero”. So the “field” genuinely is missing the element $1$ (which is why I called its sole element $0$, you see … ). (Which means, among other things, that the identity law is not being treated as a property but, at least, as structure, so there isn’t necessarily an “official” multiplicative identity, i.e. $1$, even when there happens to be, in actual fact, an unofficial one, i.e. $0$.) (Insofar as any of this makes sense at all, of course ….)

In the context of projective spaces, we can imagine the “non-zero” component taking the value $\infty$, I guess; or maybe, in a “vector space” over “$\mathbb{F}_1$”, we can imagine it to be the “ghost” of the missing element $1$.

At any rate, it does make a sort of bizarre, crippled sense. And it gives the answer we want, which is the main thing.

Something else to consider is how linear maps work. On the face of it, these ought to be adequately described by matrices all of whose entries are drawn from “$\mathbb{F}_1$”, i.e. are equal to $0$. For some purposes, this is clearly right, e.g. it acts correctly on the points (i.e. the point) of the vector space, and these matrices form a “vector space” of the correct dimension and with just one point, as expected, etc, etc.

On the other hand, while these matrices act correctly on points, they aren’t good enough to handle lines (i.e. projective points) which is where the action is (so to speak).

Under normal circumstances, the action on the lines could be specified by giving a basis for the source vector space, and the destination of each basis element would be given as a vector in the target vector space, i.e. a linear combination of basis elements. The problem over “$\mathbb{F}_1$” is that we don’t have a good definition of “linear combination”; this is because we don’t have a good concept of addition, except for the fact that the zero vector is a good additive identity, which we can add to anything to give back that same thing. The trouble is that for two vectors with one non-zero component each (but a different component in each case), adding them together would give an illegitimate vector with two non-zero components. It doesn’t help, of course, that the actual value of the non-zero component is ill-defined, though I’ve been intermittently calling it $\infty$.

I think we pretty much have two choices. We can either force each well-defined line to go to one other well-defined line, in which case linear maps must act as functions between projective spaces (i.e. functions between finite sets—and hence linear automorphisms are, as expected, permutations), or we can allow addition to translate into some kind of logical exclusive or, in which case, by a process too ill-defined for me to describe, we end up with linear maps being relations. Of course, we can do the latter in a well-defined way, too, by defining morphisms as spans: perhaps the lesson is that, compared to trying to work over “$\mathbb{F}_1$”, the difference between rigs like Bool (I mean the rig of truth values—not the category of boolean algebras!) and rigs like $\mathbb{C}$ is not as big as all that, since $\mathbb{F}_1$ can be seen as a degenerate and mutilated version of both.
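To make the second option concrete: relations between finite sets compose exactly like matrices over the rig of truth values, with "or" as addition and "and" as multiplication. A toy Python sketch (my own encoding of relations as sets of pairs):

```python
def compose_relations(R, S):
    """Compose relations R (between A and B) and S (between B and C), given as
    sets of pairs. This is boolean matrix multiplication: 'or' as addition,
    'and' as multiplication."""
    return {(a, c) for (a, b1) in R for (b2, c) in S if b1 == b2}

# The two options in the text, on the 3-element set {0, 1, 2}:
f = {(0, 1), (1, 0), (2, 2)}   # a function (indeed a permutation):
                               # each line goes to exactly one line
r = {(0, 0), (0, 1), (1, 2)}   # a general relation: 0 goes to "0 or 1"

print(compose_relations(f, r))  # f followed by r, again a relation
```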

In the same spirit, we might want to consider “projective varieties over $\mathbb{F}_1$”. All constant terms are zero, so can be ignored, and higher powers than the first of any variable don’t add anything, so we get (depending on how we want to play it) either purely linear expressions of the form $x + y + z = 0$, or expressions containing terms like $x y z$ (where the variables are all different).

An expression like $x = 0$ simply selects a subset of the set (that is, of the projective space), namely the set excluding the element corresponding to the coordinate $x$. The sum of two terms is only zero if both terms are, so addition corresponds to intersection of sets. The case of products is more delicate, but the most natural (or, at least, the most amusing) interpretation is that $x y = 0$ is true iff either $x = 0$ or $y = 0$, in which case multiplication corresponds to set union. The alternative is to reject instances of $0 \cdot \infty$ and force both $x$ and $y$ to $0$, meaning that multiplication is the same as addition (which makes some sense too, when we consider that this holds in “$\mathbb{F}_1$” itself (insofar as anything holds in this peculiar entity)).
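Here is a toy Python model of this reading (entirely my own encoding, using the "amusing" interpretation of products as unions): a projective space over $\mathbb{F}_1$ is a finite set of coordinate names, the equation $x = 0$ cuts out the complement of the point labelled $x$, sums intersect, and products union:

```python
def variety(space, equation):
    """Solution set of an 'equation over F_1'.

    `equation` is a sum of terms, each term a product of coordinate names:
    [['x'], ['y', 'z']] means x + yz = 0. Following the text: x = 0 cuts out
    the complement of the point labelled x, sums intersect, products union."""
    result = set(space)                  # the empty sum holds everywhere
    for term in equation:
        zero_set = set()                 # multiplication -> union
        for coord in term:
            zero_set |= space - {coord}
        result &= zero_set               # addition -> intersection
    return result

P2 = frozenset({'x', 'y', 'z'})          # the projective plane over F_1
print(sorted(variety(P2, [['x']])))            # the "line" x = 0
print(sorted(variety(P2, [['x'], ['y']])))     # x + y = 0: both terms vanish
print(sorted(variety(P2, [['x', 'y']])))       # xy = 0: all of P2, since at
                                               # most one coordinate is nonzero
```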

At any rate, it looks as though algebraic geometry over “$\mathbb{F}_1$” is the elementary “algebra of sets”. (I shan’t attempt to construct étale cohomology over “$\mathbb{F}_1$” :-D)

Algebraic extensions of “$\mathbb{F}_1$” don’t seem to make any sense; perhaps it’s best to consider it algebraically closed. We’d expect its extensions of degree $n$ to have cardinality $1^n$, which is perhaps suggestive.

So those are my rambling thoughts on this subject.

Posted by: Tim Silverman on November 28, 2007 7:56 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 13)

Thanks for the remarks! A huge amount of this class is about the analogy between finite sets and projective spaces, especially projective spaces over finite fields. So, all the issues you raise are important.

It’s interesting that you say this:

Algebraic extensions of “$\mathbb{F}_1$” don’t seem to make any sense; perhaps it’s best to consider it algebraically closed. We’d expect its extensions of degree $n$ to have cardinality $1^n$, which is perhaps suggestive.

You’d have had me convinced, except that Kapranov and Smirnov have written about linear algebra over $\mathbb{F}_{1^n}$ in these scanned-in notes of their unpublished paper “Cohomology determinants and reciprocity laws: number field case”.

Posted by: John Baez on November 29, 2007 7:53 AM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 13)

Interesting. Thanks! I was wondering if an approach along these lines might be a way to delve deeper into the heart of the matter, but I just don’t know enough to follow through with this myself.

However, I did notice something interesting concerning their approach to algebraic extensions. They say that their field $\mathbb{F}_{1^n}$ consists of $0$ together with the $n$th roots of unity. But notice that for $n = 1$, this includes two elements—both $0$ and $1$! So it looks rather as though they are, in effect, defining $\mathbb{F}_1$ not by taking the $q \rightarrow 1$ limit of $\mathbb{F}_q$, but rather by taking the $n \rightarrow 0$ limit of $\mathbb{F}_{p^n}$ and only specialising to $p = 1$ afterwards.

The other thing to notice, if we follow through the addition and multiplication tables of the $n = 1$ version of $\mathbb{F}_{1^n}$, is that it’s nothing but the rig of truth values!

Their approach does seem to end up with finite sets really being vector spaces rather than projective spaces. So maybe I was wrong, or perhaps we are interpreting the same thing differently.

Posted by: Tim Silverman on November 29, 2007 8:30 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 13)

From Durov’s paper, p. 35:

Modules over $\mathbb{F}_{1^n}$ are sets $X$ with a marked element $0_X$ and a permutation $\zeta_X : X \to X$, such that $\zeta_X^n = id_X$ and $\zeta_X(0_X) = 0_X$.
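In concrete terms, such a module is easy to encode and check. A minimal Python sketch (my own encoding, with the permutation stored as a dictionary; the names are made up):

```python
def is_f1n_module(X, zero, zeta, n):
    """Check Durov's conditions: zeta is a permutation of X with zeta^n = id,
    fixing the marked element `zero`."""
    if zero not in X or zeta.get(zero) != zero:
        return False
    if set(zeta) != set(X) or set(zeta.values()) != set(X):
        return False                      # not a permutation of X
    for x in X:
        y = x
        for _ in range(n):
            y = zeta[y]
        if y != x:
            return False                  # zeta^n is not the identity
    return True

X = {'0', 'a', 'b', 'c'}
zeta = {'0': '0', 'a': 'b', 'b': 'c', 'c': 'a'}   # a 3-cycle fixing '0'
print(is_f1n_module(X, '0', zeta, 3))  # True:  an F_{1^3}-module
print(is_f1n_module(X, '0', zeta, 2))  # False: zeta^2 is not the identity
```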

Can we have a generalized ring whose modules are similar but lacking the marked point?

Posted by: David Corfield on November 30, 2007 9:54 AM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 13)

Presumably there’s also an $\mathbb{F}_{1^G}$, whose modules are $G$-sets with a fixed marked point.

Posted by: David Corfield on November 30, 2007 12:10 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 13)

What about $\mathbb{F}_{q^G}$ … ?

Posted by: Tim Silverman on November 30, 2007 8:11 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 13)

Good question. I guess I was hoping that $F_{1^n}$ was $F_{1^{C_n}}$. But I wonder whether it makes sense to think of $F_{q^n}$ as $F_{q^{C_n}}$, with modules something like a $C_n$-set which is also a vector space.

Then again, there is the cyclic group of order $n$ generated by the Frobenius automorphism on $F_{p^n}$. Maybe this is suggesting that the $G$ equal to $C_n$ are special.

Posted by: David Corfield on December 1, 2007 9:55 AM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 13)

…and $GL(n, F_{1^G})$ would be the wreath product $G \wr_n S_n$.
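If so, its order should be $|G|^n \cdot n!$, since an element of the wreath product is an $n$-tuple of elements of $G$ together with a permutation. A quick Python count (my own sketch, treating the group just as a finite set, since only cardinalities matter here):

```python
from itertools import permutations, product
from math import factorial

def wreath_product_elements(G, n):
    """Elements of the wreath product of G with S_n, encoded as pairs:
    (an n-tuple of group elements, a permutation of {0,...,n-1})."""
    return [(gs, sigma)
            for gs in product(G, repeat=n)
            for sigma in permutations(range(n))]

G = range(3)   # Z/3, viewed just as a 3-element set (only its size matters here)
n = 4
elems = wreath_product_elements(G, n)
print(len(elems), len(G)**n * factorial(n))  # 1944 1944
```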

Posted by: David Corfield on November 30, 2007 2:01 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 13)

David wrote:

Can we have a generalized ring whose modules are similar but lacking the marked point?

Yeah!

Let me warm up slowly, so everyone in the universe can follow.

A generalized ring $R$ is a thing that has ‘modules’, and these modules are simply things with a bunch of operations on them.

So, a generalized ring is basically just a collection of abstract $n$-ary operations for each $n \ge 0$, closed under composition, and containing an identity operation. A few other laws should hold, too, but this gives you the basic idea.

All that matters about a generalized ring is its modules! An $R$-module is a set $M$ where the abstract $n$-ary operations in our generalized ring $R$ get represented as actual $n$-ary functions $M^n \to M$.

An ordinary ring $R$ is a special case of a generalized ring, where the $n$-ary operations in any module $M$ come from taking linear combinations:

$(m_1, \dots, m_n) \mapsto r_1 m_1 + \cdots + r_n m_n$

with coefficients $r_i \in R$.

In particular — and here’s where that ‘marked point’ business comes in — the element $0$ in a ring $R$ gives us a unique 0-ary operation

$(\;) \mapsto 0$

in any module $M$. A 0-ary operation is also known as a constant. This one is usually called $0 \in M$. This is why any module of an ordinary ring (or rig) is blessed with a special ‘marked point’, namely $0$.

But, generalized rings can lack 0-ary operations!

In particular, Durov considers two generalized rings closely related to the mythical ‘field with one element’.

One of these is the generalized ring with no $n$-ary operations except the identity operation (which is a 1-ary operation). Modules of this are just sets.

Another is the generalized ring with no $n$-ary operations except the identity operation and a single 0-ary operation. Modules of this are just pointed sets.

I have a feeling Tim Silverman would like to call pointed sets ‘vector spaces over the field with 1 element’, and call sets ‘projective spaces over the field with one element’. I’m often tempted in this direction myself.

Anyway, they’re just modules over two closely related generalized rings.

A generalized ring with only 1-ary operations is the same as a monoid. We think of the operations as ‘multiplication by elements of the monoid’.

An example would be the generalized ring coming from the group $\mathbb{Z}/n$. Modules of this are just $\mathbb{Z}/n$-sets. And these are just

“sets $X$ with a permutation $\zeta_X : X \to X$ such that $\zeta_X^n = id_X$.”

So the answer to your question is yes.

And this is true too:

Presumably there’s also an $F_{1^G}$, whose modules are $G$-sets with a fixed marked point.

This would be a generalized ring… but it would be a commutative generalized ring only when $G$ is commutative. A commutative generalized ring is one where all the operations commute in a suitable sense — which reduces to the usual obvious sense for 1-ary operations.

It would be interesting to see when $F_{1^G}$ is actually a generalized field. $G$ needs to be commutative, but what else?

Offhand it seems $G$ needs to be simple — i.e., not have any nontrivial quotients — for $F_{1^G}$ to be a generalized field. But this would rule out $F_{1^n}$ except when $n$ is prime, right?

Posted by: John Baez on December 1, 2007 7:59 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 13)

By the way: if all $n$-ary operations commute, as in a ‘commutative’ generalized ring, there can be at most one 0-ary operation.

Posted by: John Baez on December 1, 2007 8:50 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 13)

So can we understand your $FinRel^G$ and $FinSpan^G$ in terms of generalized rings? I mean FinRel arose from ‘linear’ maps between free $\{0, 1\}$-modules, i.e., powerset algebras.

The $G$-invariance means that there is an action on these modules, but then the $G$-sets in $FinRel^G$ don’t have to be free.

In fact (not necessarily free) $G$-sets are just Eilenberg–Moore algebras for the free $G$-set construction, right?

Posted by: David Corfield on December 2, 2007 3:45 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 13)

David wrote:

So can we understand your $FinRel^G$ and $FinSpan^G$ in terms of generalized rings?

The simplest sort of generalized ring is a rig.

Maybe these guys are just certain categories of modules of rigs.

For starters, FinRel is the category of finitely generated free modules of the Boolean rig $B = \{0,1\}$. Similarly, FinSpan (the category version, not the 2-category!) is the category of finitely generated free modules of the rig $\mathbb{N}$.
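Concretely, morphisms in both categories can be represented as matrices and composed by matrix multiplication over the appropriate rig: "or"/"and" for $B$ (relations), $+$/$\times$ for $\mathbb{N}$ (spans up to iso, where entries count paths). A small Python sketch (my own, generic over the rig operations):

```python
def matmul(A, B, add, mul, zero):
    """Matrix product over an arbitrary rig, specified by its operations."""
    rows, inner, cols = len(A), len(B), len(B[0])
    C = [[zero] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            acc = zero
            for k in range(inner):
                acc = add(acc, mul(A[i][k], B[k][j]))
            C[i][j] = acc
    return C

R = [[1, 1], [0, 1]]
S = [[1, 0], [1, 1]]
# Over the Boolean rig B (FinRel): does a path exist?
print(matmul(R, S, lambda a, b: a | b, lambda a, b: a & b, 0))  # [[1, 1], [1, 1]]
# Over the rig N (FinSpan): how many paths are there?
print(matmul(R, S, lambda a, b: a + b, lambda a, b: a * b, 0))  # [[2, 1], [1, 1]]
```

Note how the two composites differ in the top-left entry: the Boolean rig only remembers that a path exists, while $\mathbb{N}$ counts the two distinct paths.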

If you give me a rig $R$ and a group $G$, I can form a new rig, the group rig $R[G]$, whose modules are the same as $R$-modules equipped with a representation of $G$.

So, it’s worth looking at modules of $B[G]$ and $\mathbb{N}[G]$. As you note, we don’t want the free modules, since we don’t want to force the action of $G$ to be free. But, I guess we want the underlying action of $B$ or $\mathbb{N}$ to be free. After all, a non-free $\mathbb{N}$-module can look like all sorts of things we don’t care about here, e.g. $\mathbb{Z}/8 \oplus \mathbb{N}$. (What’s a non-free $B$-module like? Are there any?)

So, it would be good to look at the category of $B[G]$-modules whose underlying $B$-modules are free. And, the category of $\mathbb{N}[G]$-modules whose underlying $\mathbb{N}$-modules are free.

By the way, the category theorists will be less puzzled if instead of calling them ‘generalized rings’ we follow the normal vocabulary and call them ‘algebraic theories’, or equivalently, ‘algebraic monads’.

Mini-review for lurkers: an algebraic theory is a category $Th$ with finite products whose objects are all of the form $X^n$ for some object $X$. A model of $Th$ is a finite-product-preserving functor $F: Th \to Set$. We can think of this as a set $F(X)$ equipped with a bunch of $n$-ary operations coming from the morphisms $X^n \to X$ in $Th$. For any algebraic theory we get a monad $T: Set \to Set$ assigning to each set the free model of $Th$ on this set. The models of $Th$ are then precisely the algebras of $T$. We don’t get all monads on $Set$ from algebraic theories in this way; we just get a certain class, the algebraic monads.

In fact (not necessarily free) $G$-sets are just Eilenberg–Moore algebras for the free $G$-set construction, right?

Yup. The category of $G$-sets with $G$-equivariant functions as morphisms is the Eilenberg–Moore category for the ‘free $G$-set’ monad $T: Set \to Set$. And this is an algebraic monad. I hope you see that this is an incredibly erudite way of saying practically nothing.
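For lurkers who like things very concrete, here is the "free $G$-set" monad for $G = \mathbb{Z}/3$ in Python (my own toy encoding): $T(X) = G \times X$, and checking the two algebra laws for a candidate action verifies that an Eilenberg–Moore algebra is exactly a $G$-action:

```python
n = 3   # work with G = Z/3

def unit(x):
    """The monad unit X -> T(X) = G x X: tag x with the identity of G."""
    return (0, x)

def action(g, x):
    """A candidate Z/3-action on X = {0,...,5}: two 3-cycles, {0,1,2} and {3,4,5}."""
    return (x + g) % 3 + 3 * (x // 3)

X = range(6)
# The Eilenberg-Moore algebra laws for a: T(X) -> X, a(g, x) = action(g, x):
# (1) a(unit(x)) = x, and (2) a(g, a(h, x)) = a(g + h, x).
law1 = all(action(*unit(x)) == x for x in X)
law2 = all(action(g, action(h, x)) == action((g + h) % n, x)
           for g in range(n) for h in range(n) for x in X)
print(law1, law2)  # True True
```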

But, the whole point of Hecke theory is to introduce more general morphisms between $G$-sets: at the very least $G$-equivariant relations — but even better, maybe, spans of $G$-sets.

Posted by: John Baez on December 3, 2007 9:33 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 13)

I have a feeling Tim Silverman would like to call pointed sets ‘vector spaces over the field with 1 element’, and call sets ‘projective spaces over the field with one element’. I’m often tempted in this direction myself.

The question is How could you not be so tempted? That’s the ‘standard’ picture.

Sets, as modules of the trivial generalized ring, can then also be thought of, as Durov does, as vector spaces over the field without elements.

Posted by: David Corfield on December 2, 2007 3:52 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 13)

David wrote:

The question is How could you not be so tempted? That’s the ‘standard’ picture.

Right. I guess the point is that I didn’t learn about this ‘standard picture’ from reading it somewhere; Jim and I sort of fumbled our way to it on the basis of a few hints here and there.

So, I’ve suffered through a lengthy process of sorting out the difference between ‘vector spaces over the field with 1 element’ and ‘projective spaces over the field with 1 element’. And, it isn’t completely done yet, since structures like flags can be thought of as structures on either sort of entity.

Back in week185 I boldly stated that finite sets were like projective spaces over the field with one element, but I wasn’t anywhere near as certain as I may have sounded.

Posted by: John Baez on December 3, 2007 7:43 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 13)

David apostrophised:

How could you not be so tempted? That’s the ‘standard’ picture?

As you can see, I manfully resisted this temptation by the ingenious expedient of not noticing it, and only stumbled into this path by a roundabout route. I started with a vector space over $\mathbb{F}_1$ being only a single point, went from here to constructing projective spaces as finite sets, and only at that point came back to vector spaces as pointed sets, with the original single point remaining as the special point, and the other elements of the set being lines. So I wasn’t adhering very closely to the standard!

That’s what comes of making intuitive guesses instead of approaching a puzzle systematically.

Posted by: Tim Silverman on December 3, 2007 9:14 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 13)

According to Durov’s definition (p. 267), $F_{1^n}$ is not a generalized field when $n \gt 1$ (p. 268).

Posted by: David Corfield on December 3, 2007 9:22 AM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 13)

I see why $F_{1^n}$ isn’t a generalized field when $n$ isn’t prime, since then the group $\mathbb{Z}/n$ has nontrivial quotients, and these give nontrivial quotients of the generalized ring $F_{1^n}$. But when $n$ is prime, the group doesn’t have nontrivial quotients… but the generalized ring still does! How does that work?

(I know, I should read the paper.)

Posted by: John Baez on December 3, 2007 7:55 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 13)

Durov says we might be tempted to opt for generalized fields as being generalized rings whose modules are all free. However, he wants what he calls $\mathbb{F}_{\infty}$ to be one, and this has non-free modules.

The underlying set of $\mathbb{F}_{\infty}$ is the three-point set resulting from collapsing the interior of the unit interval, $\mathbb{Z}_{\infty}(1)$, to zero (4.8.13). The $\mathbb{Z}_{\infty}(n)$ are $n$-dimensional octahedra.

So then he tries:

Definition 5.7.7. We say that a generalized ring $K$ is subfinal or subtrivial if it is isomorphic to a submonad of the final monad $\mathbf{1}$, i.e. to $\mathbf{1}_+$ or $\mathbf{1}$.

The monad $\mathbf{1}$ sends each set to the singleton set. Not sure what $\mathbf{1}_+$ is.

We say that a generalized ring $K$ is a generalized field if it is not subtrivial, but all its strict quotients different from $K$ are subtrivial.

Now, for $n \gt 1$, $\mathbb{F}_{1^n}$ admits $\mathbb{F}_1$ as a strict quotient.

In fact, $\mathbb{F}_{1^n}$ is something like a local artinian ring with residue field $\mathbb{F}_1$.

Hmmm.

Posted by: David Corfield on December 4, 2007 6:38 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 13)

Thanks a lot! I couldn’t find that definition when I looked for it last time. I’ll need to think about this a bit before giving a good reply.

But, basic training in category theory says it’s wise to avoid definitions of the form “We say … is a … if not … but…” Such “escape clauses” push one off the high road of ‘positive logic’ into the swamps of nitpicking, and they suggest one hasn’t really nailed the concept.

I need to work out the subobjects of the terminal monad on $Set$. A monad $T$ needs a unit $x \to T x$, so $T x$ can’t be empty unless $x$ is. So, shooting from the hip, I’d guess that the terminal monad is the one for which $T x$ is the one-element set for all $x$, while its only interesting submonad has $T x$ being the one-element set for all nonempty $x$, but the empty set when $x$ is empty. Maybe these are what Durov is calling $\mathbf{1}$ and $\mathbf{1}_+$.

Posted by: John Baez on December 4, 2007 7:11 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 13)

We say that a generalized ring K is a generalized field if it is not subtrivial,
but all its strict quotients different from K are subtrivial.

But, basic training in category theory says it’s wise to avoid definitions of the form “We say … is a … if not … but…” Such “escape clauses” push one off the high road of ‘positive logic’ into the swamps of nitpicking, and they suggest one hasn’t really nailed the concept.

It might be worth reminding everyone that fields fall short of this standard: a field is a ring which is nonzero but all of whose strict quotients are zero.

Are fields a good class of objects to study? I would say No, for exactly this reason. Commutative rings on the other hand are divine.

Posted by: James on December 4, 2007 10:27 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 13)

James wrote (more or less):

a field is a ring which is nonzero but all of whose strict quotients are zero.

Interesting point. Similarly, a natural number is prime iff it is not 1 but all of its strict divisors are 1. A topological space is connected iff it is not empty but all of its proper clopen subsets are empty.

Well, obviously we do care about fields, primes and connected spaces. But you ask: ‘are fields a good class of objects to study?’ And I guess I’d say ‘no’, not because I know much about algebra, but because the answer is ‘no’ for primes and connected spaces. I mean, can you imagine studying only prime numbers, without ever allowing yourself to contemplate a composite? It’s a ludicrous idea.

(There must be some really famous sense in which fields are like prime numbers and rings are like arbitrary integers. Could someone put me out of my misery?)

Posted by: Tom Leinster on December 5, 2007 2:00 AM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 13)

Tom wrote:

There must be some really famous sense in which fields are like prime numbers and rings are like arbitrary integers. Could someone put me out of my misery?

Yes, this is a pretty famous analogy. It arises naturally when we try to think of any commutative ring as a ring of functions on some space, and apply this idea to $\mathbb{Z}$.

Any commutative ring $R$ has a ‘spectrum’, $Spec(R)$ — a space whose points are the prime ideals of $R$. Each point in $Spec(R)$ gives a homomorphism from $R$ to some field. So, we can think of an element of $R$ as giving a function on $Spec(R)$ — but a function taking values not in a specific field, but in a field that varies from point to point!

When $R$ is the integers, the points of $Spec(R)$ are the prime numbers.

So, very roughly: if we think of a general commutative ring as ‘built from fields’, and apply this idea to the integers, we see the integers are ‘built from primes’.

It takes some work to really flesh out this idea. You could say this is what modern ‘arithmetic geometry’ is all about.

But here’s a little thing taken from week205, which explains a nuance in what I just said:

In the geometry we learned in high school, once we see one point, we’ve seen ‘em all: all one-point spaces are isomorphic. But not all fields are isomorphic! So, if we’re trying to think of algebra as geometry, it’s a funny sort of geometry where points come in different flavors!

Moreover, there are homomorphisms between different fields. These act like “flavor changing” maps - maps from a point of one flavor to a point of some other flavor.

If we have a homomorphism f: R \to k and a homomorphism from k to some other field k', we can compose them to get a homomorphism f': R \to k'. So, we’re doing some funny sort of geometry where if we have a point mapped into our space, we can convert it into a point of some other flavor, using a “flavor changing” map.

Now let’s take this strange sort of geometry really seriously, and figure out how to actually turn a commutative ring into a space! First I’ll describe what people usually do. Eventually I’ll describe what perhaps they really should do - but maybe you can guess before I even tell you.

People usually cook up a space called the “spectrum” of the commutative ring R, or Spec(R) for short. What are the points of Spec(R)? They’re not just all possible homomorphisms from R to all possible fields. Instead, we count two such homomorphisms as the same point of Spec(R) if they’re related by a “flavor changing process”. In other words, f': R \to k' gives the same point as f: R \to k if you can get f' by composing f with a homomorphism from k to k'.

This is a bit ungainly, but luckily there’s a quick and easy way to tell when f: R \to k and f': R \to k' are related by such a flavor changing process, or a sequence of such processes. You just see if they have the same kernel!

The kernel of a homomorphism to a field is a “prime ideal”, and two homomorphisms are related by a sequence of flavor changing processes iff they have the same kernel. Furthermore, every prime ideal is the kernel of a homomorphism to some field. So, we can save time by defining Spec(R) to be the set of prime ideals in R.

For completeness I should remind you what a prime ideal is! An “ideal” in a ring R is a set closed under addition and closed under multiplication by anything in R. It’s “prime” if it’s not all of R, and whenever the product of two elements of R lies in the ideal, at least one of them lies in the ideal.
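To make the definition concrete in the simplest case: every ideal of the integers is nZ, and one can spot-check the two conditions directly. Here’s a small Python sketch (the finite search window makes this a sanity check, not a proof; the function names are mine):

```python
def in_ideal(x, n):
    """Membership in the ideal nZ of the integers."""
    return x == 0 if n == 0 else x % n == 0

def looks_prime_ideal(n, bound=30):
    """Check the two conditions for nZ to be a prime ideal, on a finite
    window: nZ is not all of Z, and ab in nZ forces a in nZ or b in nZ."""
    if in_ideal(1, n):                      # nZ = Z iff 1 lies in it
        return False
    return all(in_ideal(a, n) or in_ideal(b, n)
               for a in range(-bound, bound)
               for b in range(-bound, bound)
               if in_ideal(a * b, n))
```

Here looks_prime_ideal(5) and looks_prime_ideal(0) both succeed (the zero ideal is prime, since the integers form an integral domain), while 6Z fails at 2 · 3.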

[…stuff about how Grothendieck developed this idea further…]

Addendum: Here’s something a friend of mine wrote, and an expanded version of my reply.

By the way, I very much liked your explanation of points and prime ideals. Up until now I haven’t seen a satisfactory explanation of why points correspond to *prime* rather than *maximal* ideals, and although I haven’t completely digested what you wrote, it looks like it might do the job…

Both here and in my discussion of spectra in “week199”, I’ve been avoiding saying the things people usually say. People usually note that a maximal ideal is the same as the kernel of a homomorphism ONTO a field, while a prime ideal is the same as the kernel of a homomorphism ONTO an integral domain. (Recall that an integral domain is a commutative ring where xy = 0 implies that x or y is zero.) If we define the “points” of a commutative ring R to be its maximal or prime ideals, we can therefore think of these as the kernels of homomorphisms from R onto fields or integral domains.

However, defining points in terms of homomorphisms ONTO a given sort of commutative ring is rather irksome, because it doesn’t tell us how points transform under homomorphisms of commutative rings, nor how they transform under the “flavor-changing operations” I was describing. The problem is that the composite of a homomorphism with an onto homomorphism needn’t be onto!

So, what really matters is that a prime ideal is the same as the kernel of a homomorphism TO a field. To see how this follows from the usual story, note that any integral domain is contained in a field called its “field of fractions” - just as \mathbb{Z} is contained in \mathbb{Q}. Any homomorphism ONTO the integral domain thus becomes a homomorphism TO this field, with the same kernel. Conversely, any homomorphism TO a field becomes a homomorphism ONTO its image, with the same kernel - and this image is always an integral domain.

Posted by: John Baez on December 5, 2007 6:37 AM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 13)

Thanks, John.

(As you may remember, I was that nameless friend.)

Posted by: Tom Leinster on December 5, 2007 9:22 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 13)

Actually I’d completely forgotten! I left you nameless in case you didn’t want to be known as someone who hadn’t completely digested what I wrote… but now you’ve blown your own cover.

Posted by: John Baez on December 6, 2007 2:48 AM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 13)

I think it is fair to say that the weirdness in scheme theory around closed points vs all points and maximal ideals vs prime ideals is really an artifact of scheme theory relying on point-set topology rather than topos theory. And things are this way because Grothendieck invented toposes after he invented schemes, and no one since then has bothered/wanted to rewrite the foundations in terms of topos theory.

The point is that a scheme is a kind of locally ringed space, that is, a topological space with a certain kind of sheaf on it. But back in 1958 or whenever, a sheaf had to live on a topological space, which had to have an underlying “substrate of points” (Mazur’s words). If you want to be able to think about any commutative ring as some kind of space (and this is the whole point, so you definitely do), and if every space has to have a substrate of points, then you’re forced to consider ridiculous non-closed points. If you set things up topos-theoretically, you have no points and hence you don’t have to stress about non-closed points and prime ideals and so on. (Or rather you do have them if you want them, but you can ignore them, just like how the number 239835 exists in Peano arithmetic, but you don’t have to talk about it unless you want to.) In particular, you don’t need to invoke the theology of the axiom of choice. (You have to in usual scheme theory because you might have a nonzero ring with no prime ideals, and then its spectrum would be the empty set, which is the same as the spectrum of the zero ring, so your algebraic geometry wouldn’t faithfully capture all of commutative ring theory.)

Soon after I first learned the basics of scheme theory and was all proud of myself, I read a book on algebraic geometry in my library’s new book shelf. In the very beginning, the author said he would use the word “point” to mean closed point. I thought that was treason at the time, but now I see that there’s really nothing wrong with it.

Just thought some people might find that point of view a little interesting. Got a plane to catch. Apologies for any typos etc.

Posted by: James on December 6, 2007 6:27 AM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 13)

James wrote:

things are this way because Grothendieck invented toposes after he invented schemes, and no one since then has bothered/wanted to rewrite the foundations in terms of topos theory.

That’s really bizarre! You’d think that algebraic geometry would be important enough that someone would have done it. Maybe it’s one of those things where people are afraid to rewrite foundations because they think people are too set in their ways. Or maybe people have tried to do it, but got the reaction ‘why are you telling me things I already know, just in some obscure language?’

Anyway, I’m all for revisionism. It reminds me of a conversation I had with a certain James, who told me in vivid terms that algebraic spaces are much more satisfactory than schemes, and that everyone who’s given serious thought to the matter agrees on this. And that this is ironic, given that schemes are the most famous among Grothendieck’s many inventions.

So do we need someone to rewrite the foundations of algebraic geometry in terms of topos theory and algebraic spaces?

Posted by: Tom Leinster on December 6, 2007 5:49 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 13)

Yeah, algebraic geometry reminds me of US politics—the system you’re working in was way ahead of its time back when it was revealed (in 1958 or 1789), and then because of this there has grown a bit too much reverence for the system itself, and hence a lot of inertia to changing it.

But writing foundations is a *lot* of work, and when it’s done, you want people to do it really well. So there are some good reasons for conservative approaches to foundations, or at least understandable ones.

By the way, I mistakenly said that Grothendieck invented schemes. While he definitely made them into what they are, it was Cartier who invented them. I read this in Grothendieck (somewhere) and Cartier himself told me, so the two most relevant people agree. On the other hand, Serre wrote somewhere that no one invented them—they were just in the air.

Posted by: James on December 18, 2007 8:21 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 13)

Tom Leinster on why mathematicians never went back and rewrote algebraic geometry in terms of topos theory:

they think people are too set in their ways.

Great pun, Tom!

Posted by: John Baez on December 7, 2007 7:09 AM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 13)

John wrote:

But, basic training in category theory says it’s wise to avoid definitions of the form “We say … is a … if not … but…” Such ‘escape clauses’ push one off the high road of ‘positive logic’ into the swamps of nitpicking, and they suggest one hasn’t really nailed the concept.

James wrote:

It might be worth reminding everyone that fields fall short of this standard: a field is a ring which is nonzero but all of whose strict quotients are zero.

Right! That’s bugged me for years. And perhaps in some bizarre way it’s related to the mystery of the 1-element field. But, I don’t really see how.

If we defined a field merely as a ring where nonzero elements were invertible (already a somewhat ‘negative’ definition, alas), we’d have a 1-element field, namely the 1-element ring, which has 0 = 1.

This is ruled out by the standard definition. However, including it doesn’t seem to solve the mystery of how ‘vector spaces over the 1-element field are finite sets’. After all, modules over this ‘1-element field’ — the 1-element ring — are just 1-element sets.

Hmm! From what Tom just said, the 1-element ring is a very important generalized ring in Durov’s sense: namely, the terminal monad, whose only algebras are 1-element sets!

So, it does play a role in Durov’s theory… but a rather dull role, I guess.

One more comment. I mentioned ‘positive logic’ above. There’s something vaguely similar, but different, called ‘geometric logic’, important in topos theory. In Johnstone’s Elephant, he says that the concept of field cannot be defined in geometric logic… but something related can. I forget, but I think this related concept was the concept of ‘local ring’.

If I understood this, I’d probably know more about the relation between topos theory and algebraic geometry.

Posted by: John Baez on December 5, 2007 7:05 AM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 13)

John, yes, your analysis of the subobjects of the terminal monad on Set is right.

(I say that with confidence because years ago I gave category theory tutorials for a course that Peter Johnstone was lecturing. Of course, his problem sheets were ferocious, and I had to put so much effort into solving them that the answers were seared permanently into my brain tissue. I am grateful for the experience.)

You can present the two subtheories of the terminal theory as follows.

The nontrivial subtheory is the theory generated by no operations and the single equation x = y (for all x, y). So a model is a set in which all elements are equal. Hence there are two models: \emptyset and 1.

The terminal theory itself is generated by a single nullary operation (constant) c and the single equation x = c (for all x). So a model is a set equipped with an element to which all other elements are equal. Hence there is just one model: 1.

(I suspect the fact that there are two of them is somehow a consequence of there being two classical truth values, or subsets of the one-point set, or (-1)-categories.)

Incidentally, as Peter taught me, our two theories are the only theories T for which the unit map X \to T(X) fails to be always injective. In other words, if you take any theory T other than these two, and any set X, then the canonical map from X to the free algebra on X is an injection.
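A toy Set-level illustration of that dividing line, with made-up encodings of the free algebras (so take the names with a grain of salt): for the theory of monoids the unit sends x to the one-letter word, which is injective, while the theory with the equation x = y collapses everything.

```python
def monoid_unit(X):
    """Unit X -> T(X) for the theory of monoids: the free monoid on X
    consists of words in X, and x maps to the one-letter word (x,)."""
    return {x: (x,) for x in X}

def collapse_unit(X):
    """Unit for the theory generated by the single equation x = y:
    the free algebra on a nonempty X is a single point."""
    point = frozenset(X)   # one canonical value standing for "the" point
    return {x: point for x in X}

def unit_is_injective(unit):
    """Does the unit map send distinct elements to distinct elements?"""
    values = list(unit.values())
    return len(values) == len(set(values))
```

So the unit for the collapsing theory fails to be injective as soon as X has two elements, matching the claim above.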

Posted by: Tom Leinster on December 5, 2007 2:21 AM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 13)

Thanks, Tom! Interesting to hear some of the wisdom you imbibed in Cambridge.

I wonder if someone ever worked out the subobjects of the terminal monad on a general topos, and found that they correspond to points of the subobject classifier \Omega. That would nicely substantiate your idea here:

(I suspect the fact that there are two [subobjects of the terminal monad on Set] is somehow a consequence of there being two classical truth values, or subsets of the one-point set, or (−1)-categories.)

Posted by: John Baez on December 5, 2007 7:38 AM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 13)

Here’s an idea: in a topos E, let supp(X) denote the support of an object X, i.e., the domain of the image U \hookrightarrow 1 of the unique map X \to 1. Then

supp: E \to E

is a monad (the unit X \to supp(X) is just the epi X \to U of the image factorization, and there is a unique map supp(supp(X)) \to supp(X) since supp(X) is subterminal).

The monad supp is the smallest submonad of the terminal monad, because the image of a map is the smallest subobject through which the map factors.

Now let V \hookrightarrow 1 be any subobject of 1 (corresponding to a point 1 \to \Omega). We may define a monad

supp_V(X) \stackrel{def}{=} supp(X) \vee V

which is again clearly a submonad of the terminal monad. Notice that supp_0 = supp, and supp_1 = 1.

I don’t clearly see that the supp_V are all the submonads of the terminal, but maybe someone else does.

Posted by: Todd Trimble on December 5, 2007 8:17 AM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 13)

Todd wrote:

I don’t clearly see that the supp_V are all the submonads of the terminal, but maybe someone else does.

Nice idea! Here’s an attempt to get a bunch more submonads of the terminal monad. The subobjects of the terminal object form a poset whose top element is the terminal object, 1. Let F be any order-preserving map from this poset to itself with

F(1) = 1

x \le F(x)

F(F(x)) \le F(x)

Maybe we can define a monad T as follows: if our object x is a subobject of the terminal object, let T(x) = F(x). Otherwise, let T(x) = 1.

I don’t know how to make T into a functor; maybe we need some more conditions on F for that?

But, I think there’s an obvious unit x \hookrightarrow T x, since

x \le F(x)

And, I think there’s an obvious multiplication T T x \to T x, since

F(F(x)) \le F(x)

As you point out, any subobject s of the terminal object gives such an F, via

F(x) = x \vee s

And, in the case where our topos is just Set, all such F arise this way, so there are just two. But for a general topos there could be a bunch more.

I seem to only be getting idempotent monads, since my assumptions imply

F(F(x)) = F(x)
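To see how much room there is, one can brute-force these conditions when the subobjects of 1 form a finite chain. A Python sketch (the setup and names are mine; in Set the chain has two elements):

```python
from itertools import product

def chain_closure_ops(n):
    """All maps F on the chain 0 < 1 < ... < n-1 satisfying the conditions
    above: F(top) = top, x <= F(x), F order-preserving, F(F(x)) <= F(x).
    (The inflationary condition plus the last one force F(F(x)) = F(x).)"""
    top = n - 1
    ops = []
    for values in product(range(n), repeat=n):
        F = list(values)
        if F[top] != top:
            continue
        if any(F[x] < x for x in range(n)):                           # x <= F(x)
            continue
        if any(F[x] > F[y] for x in range(n) for y in range(x, n)):   # monotone
            continue
        if any(F[F[x]] > F[x] for x in range(n)):                     # F(F(x)) <= F(x)
            continue
        ops.append(values)
    return ops
```

On the 2-chain this finds exactly the two operators of the form F(x) = x \vee s, as expected. On a 3-chain (as in sheaves on the Sierpinski space, say) it finds four, of which only three have the form x \vee s; whether the extra one actually extends to a submonad of the terminal monad runs into exactly the functoriality question above.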

Posted by: John Baez on December 5, 2007 5:11 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 13)

Yeah, what you said. :-)

It’s pretty simple, really: the image factorization provides an adjunction

(supp: E \to Sub(1)) \dashv (i: Sub(1) \hookrightarrow E)

where Sub(1) is the category of subobjects of 1 and (necessarily mono) maps between them. Hence if you have any monad (= closure operator)

M: Sub(1) \to Sub(1),

which we can factor as a left adjoint followed by a right adjoint

Sub(1) \stackrel{F}{\to} M-Alg \stackrel{U}{\to} Sub(1),

it becomes apparent that we get a monad on E formed as the composite

E \stackrel{supp}{\to} Sub(1) \stackrel{F}{\to} M-Alg \stackrel{U}{\to} Sub(1) \stackrel{i}{\to} E.

And yes, you’re right, this monad is (clearly) idempotent.

No doubt there are suitable exactness assumptions that one can impose on subterminal monads which would force them to be of the form supp_V as in my prior comment, but working this out doesn’t particularly do anything for me :-).

Posted by: Todd Trimble on December 5, 2007 6:22 PM | Permalink | Reply to this
