Rather than say anything wise or witty about Hecke operators (on which my contribution would be likely to run along the lines of “Wow. Uh. Cool.”), I thought I’d devote a little space to rambling about finite sets and the “field with one element”. I can’t remember having seen this material collected together before but, even if it’s out there somewhere, it might be interesting to someone to see some thoughts, or thought-like activity, on the subject here.
This rambling was set off by a remark that jim dolan made in passing in one of the geometric representation lectures (though unfortunately I can’t remember which one) comparing finite sets to finite vector spaces, and my having the immediate reaction that, if I understood anything at all about the whole business, it’s that finite sets correspond not to vector (nor yet affine) spaces, but to projective spaces. (I assume jd knows this.) So I thought I’d start with that. The justification goes as follows:
The “field $\mathbf{F}_1$” should have one member, $0$. We start by constructing some $n$-dimensional vector spaces over this in exactly the usual way, by taking the space of $n$-tuples over the set of its members. Of course, the “set of ways of choosing $n$ members of $\{0\}$” is fairly trivial. We get a $0$-dimensional vector space consisting of all ordered, um, niltuples, i.e. $\{()\}$, a $1$-dimensional vector space consisting of all ordered, er, simples (or perhaps I mean singletons), i.e. $\{(0)\}$, a $2$-dimensional vector space consisting of all ordered pairs, i.e. $\{(0,0)\}$, a $3$-dimensional vector space consisting of all ordered triples, i.e. $\{(0,0,0)\}$, etc. Obviously each of these spaces has just one point.
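Just to watch the counting come out right, here's a trivial sketch in Python (the names are my own, purely illustrative):

```python
from itertools import product

F1 = [0]  # the sole member of the would-be field

# The n-dimensional "vector space" over F1: all n-tuples of members of F1.
for n in range(4):
    space = list(product(F1, repeat=n))
    print(n, space)
# 0 [()]
# 1 [(0,)]
# 2 [(0, 0)]
# 3 [(0, 0, 0)]
```

Each space really does consist of a single point, as claimed.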
The affine spaces can be, as usual, considered to consist of the same underlying sets but forgetting the addition on the vectors. (“Forgetting which point is the origin”, as people usually put it, would be quite hard, seeing as how the origin is the only point there is).
We then follow a standard procedure for constructing projective spaces out of affine spaces, thus:
The projective point is the same as the affine point, a $1$-element set.
The projective line is the affine line ($1$ point) together with a projective point “at infinity”, giving $1 + 1 = 2$ elements.
The projective plane is the affine plane ($1$ point) together with a projective line “at infinity” ($2$ points), giving $1 + 2 = 3$ elements.
The projective $3$-space is the affine $3$-space ($1$ point) together with a projective plane “at infinity” ($3$ points), giving $1 + 3 = 4$ elements.
And so forth. Thus a “projective $n$-space over $\mathbf{F}_1$” is a set with $n + 1$ elements.
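For what it's worth, the counting can be written out as a little Python recursion (the function names are just my own illustrative labels):

```python
def affine_size(n):
    """Size of affine n-space over "F1": always a single point."""
    return 1

def projective_size(n):
    """Projective n-space = affine n-space plus a projective (n-1)-space at infinity."""
    if n == 0:
        return 1
    return affine_size(n) + projective_size(n - 1)

print([projective_size(n) for n in range(5)])  # [1, 2, 3, 4, 5]
```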
Then other stuff works the way we expect in projective spaces. For instance, any set of two different points determines a line—in fact, any set of two different points is a line. Likewise, in a projective plane (i.e. a $3$-element set), any two different lines (i.e. any two different pairs, each consisting of two different points) intersect in a single point. Poincaré duality goes over to set complementation. Etc.
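Here's a quick sanity check of the projective-plane claim, again in illustrative Python:

```python
from itertools import combinations

plane = {0, 1, 2}                      # projective plane over "F1": a 3-element set
lines = list(combinations(plane, 2))   # lines are exactly the 2-element subsets

for l1, l2 in combinations(lines, 2):
    meet = set(l1) & set(l2)
    assert len(meet) == 1              # two distinct lines meet in a single point
print("every pair of distinct lines meets in exactly one point")
```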
Some weirdness arises when we try to think of a projective $(n-1)$-space as the set of lines through the origin in an $n$-dimensional vector space. The weirdness lies in the fact that it seems very odd to talk about lines going through the origin when not only does each line “consist of” a single point, but it’s the same point for all the lines!
But even this can be made to make sense—of a sort. In general, we can specify a line through the origin in a vector space by specifying a vector in terms of its components in some basis, and then forgetting about the absolute values of the components and considering only their ratios. So we are concerned with the scalars $\lambda$ such that, for components $a$ and $b$, we have $a = \lambda b$.
But the scalars are drawn from the field over which the vector space is defined, and in the case of “$\mathbf{F}_1$”, there is only one such scalar, namely $0$. So for any two components, $a$ and $b$, we must have either $a = 0 \cdot b = 0$ or $b = 0 \cdot a = 0$, i.e. either $a$ or $b$ must be zero. If this is to be true for any pair of components in the vector, it follows that at most one component can be “non-zero”—whatever the heck that means. Since there are $n$ components, any one of which can be “non-zero”, while the rest must be zero, there are $n$ possible lines! They are “simply” the “coordinate axes”. Once we’ve got this far, it then becomes obvious that “planes through the origin” must be in bijection with pairs of different lines (so we have, e.g., the “$xy$-plane”, the “$yz$-plane”, the “$xz$-plane”, etc.), and similarly, in general, “$k$-spaces through the origin” are in bijection with sets of $k$ different lines.
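This fits the familiar $q \to 1$ story: the Gaussian binomial coefficient, which counts the $k$-dimensional subspaces of an $n$-dimensional space over $\mathbf{F}_q$, degenerates at $q = 1$ to the ordinary binomial coefficient, which counts the $k$-element subsets of an $n$-element set. A quick check (my own throwaway code, nothing canonical):

```python
from math import comb

def q_int(m, q):
    """The q-integer [m]_q = 1 + q + ... + q^(m-1)."""
    return sum(q**i for i in range(m))

def q_binomial(n, k, q):
    """Gaussian binomial: counts k-dim subspaces of an n-dim space over F_q."""
    num = den = 1
    for i in range(k):
        num *= q_int(n - i, q)
        den *= q_int(i + 1, q)
    return num // den

n, k = 5, 2
print(q_binomial(n, k, 2))               # 155 subspaces over F_2
print(q_binomial(n, k, 1), comb(n, k))   # 10 10: at q = 1, just k-element subsets
```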
At this point I feel I ought to be explicit about the strange fact that although, in “$\mathbf{F}_1$”, it is surely the case that $0 \cdot 0 = 0$, nonetheless zero is being considered not to have a multiplicative inverse, any more than it does in a genuine field: this is what prevents $0 \cdot 0 = 0$ from implying that $0 = 1$, and thereby leaves us free to imagine that one component, at any rate, really is “non-zero”. So the “field” genuinely is missing the element $1$ (which is why I called its sole element $0$, you see … ). (Which means, among other things, that the identity law is not being treated as a property but, at least, as structure, so there isn’t necessarily an “official” multiplicative identity, i.e. $1$, even when there happens to be, in actual fact, an unofficial one, i.e. $0$.) (Insofar as any of this makes sense at all, of course ….)
In the context of projective spaces, we can imagine the “non-zero” component taking the value $\infty$, I guess; or maybe, in a “vector space” over “$\mathbf{F}_1$”, we can imagine it to be the “ghost” of the missing element $1$.
At any rate, it does make a sort of bizarre, crippled sense. And it gives the answer we want, which is the main thing.
Something else to consider is how linear maps work. On the face of it, these ought to be adequately described by matrices all of whose entries are drawn from “$\mathbf{F}_1$”, i.e. are equal to $0$. For some purposes, this is clearly right, e.g. they act correctly on the points (i.e. the point) of the vector space, and these matrices form a “vector space” of the correct dimension and with just one point, as expected, etc, etc.
On the other hand, while these matrices act correctly on points, they aren’t good enough to handle lines (i.e. projective points), which is where the action is (so to speak).
Under normal circumstances, the action on the lines could be specified by giving a basis for the source vector space, and the destination of each basis element would be given as a vector in the target vector space, i.e. a linear combination of basis elements. The problem over “$\mathbf{F}_1$” is that we don’t have a good definition of “linear combination”; this is because we don’t have a good concept of addition, except for the fact that the zero vector is a good additive identity, which we can add to anything to give back that same thing. The trouble is that for two vectors with one non-zero component each (but a different component in each case), adding them together would give an illegitimate vector with two non-zero components. It doesn’t help, of course, that the actual value of the non-zero component is ill-defined, though I’ve been intermittently calling it $1$.
I think we pretty much have two choices. We can either force each well-defined line to go to one other well-defined line, in which case linear maps must act as functions between projective spaces (i.e. functions between finite sets—and hence linear automorphisms are, as expected, permutations), or we can allow addition to translate into some kind of logical exclusive or, in which case, by a process too ill-defined for me to describe, we end up with linear maps being relations. Of course, we can do the latter in a well-defined way, too, by defining morphisms as spans: perhaps the lesson is that, compared to trying to work over “$\mathbf{F}_1$”, the difference between rigs like Bool (I mean the rig of truth values—not the category of boolean algebras!) and rigs like $\mathbb{N}$ is not as big as all that, since “$\mathbf{F}_1$” can be seen as a degenerate and mutilated version of both.
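To illustrate the relational option: relations between finite sets compose exactly like matrices over the rig Bool, with “or” playing the role of addition and “and” the role of multiplication. A minimal sketch (names mine):

```python
# Relations between finite sets as matrices over the rig Bool.
def compose(R, S):
    """Boolean matrix product: (R;S)[i][k] = OR_j (R[i][j] AND S[j][k])."""
    return [[any(R[i][j] and S[j][k] for j in range(len(S)))
             for k in range(len(S[0]))]
            for i in range(len(R))]

R = [[True, False, True],
     [False, True, False]]   # a relation from a 2-element set to a 3-element set
S = [[True, False],
     [False, False],
     [False, True]]          # a relation from the 3-element set to a 2-element set
print(compose(R, S))         # [[True, True], [False, False]]
```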
In the same spirit, we might want to consider “projective varieties over $\mathbf{F}_1$”. All constant terms are zero, so can be ignored, and higher powers than the first of any variable don’t add anything, so we get (depending on how we want to play it) either purely linear expressions of the form $x + y + \cdots + z$, or expressions containing terms like $x y \cdots z$ (where the variables are all different).
An expression like $x = 0$ simply selects a subset of the set (that is, of the projective space), namely the set excluding the element corresponding to the coordinate $x$. The sum of two terms is only zero if both terms are, so addition corresponds to intersection of sets. The case of products is more delicate, but the most natural (or, at least, the most amusing) interpretation is that $x y = 0$ is true iff either $x = 0$ or $y = 0$, in which case multiplication corresponds to set union. The alternative is to reject instances of $x y$ and force both $x$ and $y$ to $0$, meaning that multiplication is the same as addition (which makes some sense too, when we consider that this holds in “$\mathbf{F}_1$” itself (insofar as anything holds in this peculiar entity)).
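As a toy illustration of this dictionary (entirely my own labels, under the “amusing” reading of products):

```python
# Points of the projective space over "F1" are named by the coordinates;
# a variable x stands for the equation "x = 0", which excludes the point
# whose x-component is the "non-zero" one.
points = {"x", "y", "z"}

def vanish(var):
    """The 'variety' of the equation var = 0: every point except var's own."""
    return points - {var}

# A sum vanishes iff every term does, so addition gives intersection.
sum_xy = vanish("x") & vanish("y")
# A product vanishes iff some factor does, so multiplication gives union.
prod_xy = vanish("x") | vanish("y")

print(sorted(sum_xy))   # ['z']
print(sorted(prod_xy))  # ['x', 'y', 'z'] (the whole space)
```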
At any rate, it looks as though algebraic geometry over “$\mathbf{F}_1$” is the elementary “algebra of sets”. (I shan’t attempt to construct étale cohomology over “$\mathbf{F}_1$” :-D)
Algebraic extensions of “$\mathbf{F}_1$” don’t seem to make any sense; perhaps it’s best to consider it algebraically closed. We’d expect its extensions of degree $n$ to have cardinality $1^n = 1$, which is perhaps suggestive.
So those are my rambling thoughts on this subject.