
October 1, 2006

Klein 2-Geometry VI

Posted by David Corfield

It’s time to convene the sixth monthly session of the longest running Pro-Am math event in the blogosphere. The September session won’t go down in history as one of the most productive of the series. The Professional was justifiably distracted. To recap on the last steps taken: John asked us to

guess the precise description of the projective space associated to $k^{p,q}$.

The answer to this was: $kP^{p-1}$ worth of objects, each with $k^q$ worth of automorphisms.

John then asked for a description of all categorified Grassmannians. I attempted an answer, raised a concern, and posed a question.

Posted at October 1, 2006 10:22 AM UTC

TrackBack URL for this Entry:   https://golem.ph.utexas.edu/cgi-bin/MT-3.0/dxy-tb.fcgi/955

72 Comments & 7 Trackbacks

Re: Klein 2-Geometry VI

I’m afraid I’m still too distracted to push this Klein 2-geometry project forwards very quickly. But let me give it a nudge.

David wrote:

So the 2-space of $(n,0)$ sub-2-spaces of $k^{p,q}$ has the Grassmannian $G_{n,p}$ as objects and linear maps from $k^n$ to $k^q$ as automorphisms.

That sounds right to me. We’re going by the seat of our pants here, but I think it all works out quite simply.

In fact my main worry now is that it’s too simple to be very interesting! We’ll need to ask some tougher questions about these Grassmannians to see if they have hidden depths. For example, let’s study the “incidence relations” between the $(n,0)$ sub-2-spaces and $(m,0)$ sub-2-spaces of $k^{p,q}$. Then we’ll finally be doing categorified incidence geometry.

Of course we’re using enough jargon and symbols now that nobody except us can possibly understand what we’re talking about. So, my claim that it’s “too simple” may strike them as unconvincing.

But maybe that’s how math advances: when you’re completely stuck and a problem seems incomprehensible, lots of people can understand what you’re whining about. But when you finally crack the problem and it seems completely lucid, nobody can understand you anymore!

To reach understanding it seems one must suffer confusion, even when there’s somebody standing there telling you anything you want to know. The neurons must be reconfigured.

Posted by: John Baez on October 2, 2006 7:28 PM | Permalink | Reply to this

Re: Klein 2-Geometry VI

If you had a moment, could you check on my worry? A subspace is surely a class of inclusions.

Thinking in concrete terms, we may as well look at the 2-spaces of $(1,0)$ and $(2,0)$ subspaces of $k^{3,1}$. If we’re right, then in the second case we have $G_{2,3}$ worth of objects, with linear maps from $k^2$ to $k^1$ worth of automorphisms. That’s $k^2$ worth of automorphisms for each object. How should we picture chain maps?

Posted by: David Corfield on October 3, 2006 8:26 PM | Permalink | Reply to this

Re: Klein 2-Geometry VI

I think perhaps this worry you had is just part of the loosening of thinghood that goes with categorification. A “point” is no longer a thing, but instead a set or possibly a groupoid of ways of being that thing (up to isomorphism).

Does this make sense? Is this what you meant by a “class of inclusions”?

Posted by: Tim Silverman on October 6, 2006 12:17 AM | Permalink | Reply to this

Re: Klein 2-Geometry VI

I’ve been running hard to try and catch up with this discussion. Gosh, there’s a lot of material! But some of it is a record of confusion and clarification rather than finished new stuff, and it might confuse me to spend too long on it. Therefore, in an effort to avoid absorbing the whole lot, I want to try and take a shortcut, by taking some big intuitive leaps to get quickly to the point where I really need to start to actually think.

I think the trick with this sort of learning is to start with what one already understands and expand from there. With this in mind, and with your permission, I want to take a brief diversion, and momentarily go back to the last thing I think I thoroughly understand, namely the automorphism 2-group of the groupoid $S_3//A_3$ (the pair-of-triangles). And I want to think about spaces and their figures in the way I’m most familiar with, namely as collections of points, and generalise from there. Since emphasising points as concrete fundamental entities is kind of contrary to the spirit of Klein, I’ll call this exercise “semi-Klein geometry”. Craving your lordships’ indulgence … hopefully this won’t take too long … ulp! … here goes …

So I see the pair-of-triangles as a space of two points, each of which is internally a triangle. Symmetries let us permute the points arbitrarily and map their internal structures isomorphically or (in the weak case) by any equivalence. 2-symmetries relate symmetries which permute the points the same way but differ by internal automorphisms or (in the weak case) internal auto-equivalences.

So generalising this as much as possible without actually thinking, we can add more points, we can cut down on the number of ways we’re allowed to permute them, we can give them more complicated internal structures (i.e. bigger/smaller/different internal automorphism/auto-equivalence groups/2-groups), and we can give different points within the space different internal structures from each other.

To express this more categorificationally:

With semi-Klein 1-geometry, we have a set of points and a group of symmetries on them. With semi-Klein 2-geometry, we get a category of points or (in the weak case) a 2-category of points, and a 2-group of symmetries and 2-symmetries on them. A symmetry of the space can map a given point to another given point only if the two points are isomorphic to each other or (in the weak case) equivalent; and the mapping must be an isomorphism or (in the weak case) an equivalence. I.e. we not only care which points go where but also how they get there (or at least what they look like when they arrive). We also have 2-symmetries which relate certain symmetries to each other, namely those symmetries that agree on which points get mapped to which, but differ by internal automorphisms/equivalences of the points.

I guess there are various uses we could put this concept of 2-symmetry to, but one way, perhaps, is to think of the 2-symmetries as gauge transformations. Then symmetries are only defined as fully distinct entities “up to gauge transformations”.

Does any of this make any sense so far? I’m kind of worried I don’t really understand how the strict/weak distinction affects things, or not as well as I should. Plus no doubt lots of other things.

But assuming we can still keep going down this path, we can think about cutting down the symmetries and 2-symmetries to a sub-2-group of the full 2-group of the space. In a spirit of wild reckless abandon, i.e. continuing to avoid actual thinking, I’ll try to make this as intuitively “injective” as possible by simply throwing away morphisms and 2-morphisms and hoping we can tidy it all up later.

So we can

a) Cut down on which points can go to which points, possibly foliating the space into subspaces of some kind (?)
b) Cut down on the internal symmetries of (some of) the points, thereby
   i) Reducing the number of ways to get from one point to another, and/or
   ii) Making the space of points of different kinds fall apart (or further apart) into more isomorphism/equivalence classes of different kinds of points, and of course
   iii) Adding extra structure to the points themselves

and also

c) by cutting down on the 2-symmetries, we can subdivide the internal symmetries into more kinds.
   This will also affect whether and how we’re allowed to map between points.

I’m pretty sure I’m being somewhat “evil” here, at least in spirit, but I’m still hoping for “redemption” later, in the tidying-up phase. So to press on…

Now, since I’m thinking of the whole space as made of points, I want to think of figures in a pointy way too.

Figures in this semi-Klein 1-geometry I’ve been talking about are not the same as subgroups of the space’s symmetry group, but are made of points, namely the points in the orbit of a given point, under the appropriate subgroup. So in semi-Klein 2-geometry, we want to have figures which are “2-orbits” of points under sub-2-groups. In this case, a figure will be not a set of points, but a category of points or (in the weak case) a 2-category of points. This is because we care not only which points are in the figure, but also how we are allowed to get from one to another (via symmetries in the sub-2-group) and also which internal symmetries are isomorphic under (the remaining) 2-symmetries in the sub-2-group.

Then an incidence relation between figures becomes, depending on how virtuous we want to be, either a functor, or a set of functors, or a category of “functors together with natural transformations (or natural isomorphisms?) between them”.

And this makes sense because, in the full Klein geometry approach, the figures are sub-2-groups, and incidence relations are (closely related to?) inclusion relations among sub-2-groups, which we already know are functors, sets of functors or categories of functors. We are now in a position to analyse what we mean by “2-orbits” given the various kinds of “injectivity” and “surjectivity” conditions we want to impose on the functors, and given the ways we want functors to be naturally isomorphic or not. However, this requires me to think, so I will stop here.

Does any of this make any sense at all? (Or is it too mind-numbingly obvious? That would also be bad.)

The 2-Euclidean spaces that David was thinking about right at the beginning, where we make the base space $\mathbb{R}^2$ and the internal symmetries O(2) (or, more generally, the base space $\mathbb{R}^n$ and the internal symmetries O(n)), seem quite restrictive when viewed through this lens. The reason is that the internal space of a point contains all the information about the global geometry that isn’t coded in the location of the point. That is, to specify a symmetry of the space, we simply need one point (any point) and a morphism from that point. Is this true? Maybe the 2-symmetries allow a bit more wiggle-room. I guess it’s not surprising that a space specified as the quotient of two low-dimensional groups should be quite restrictive, and then mechanical categorification in the manner of going to the weak quotient doesn’t loosen things up much. Actually, that’s really mere oidisation rather than categorification in its full power and glory, so maybe there’s more to be done with these spaces (or spaceoids!).

Hmm, well, quickly looking at the posts on 2-vector spaces in the light of what I’ve just said, I think one way I may have been “evil” is by talking about individual points in the space as though they were well-defined entities, when maybe they should only be defined up to isomorphism or something. I suppose wondering if this is so might indicate progress of some sort in my understanding.

OK. I think I’ve now caught up and am ready to begin thinking.

We return you to your usual service.
Apologies for the interruption.

Posted by: Tim Silverman on October 5, 2006 10:50 PM | Permalink | Reply to this

Re: Klein 2-Geometry VI

Gosh, there’s a lot of material! But some of it is a record of confusion and clarification rather than finished new stuff, and it might confuse me to spend too long on it

It’s clearly not designed for pedagogy. There’s something I like, though, about leaving a trace of mathematical thinking as it happened rather than all tidied up.

John’s attention and mine seem to have moved elsewhere. It’s hard to predict when and in which direction enthusiasm will propel you. That’s not to say we didn’t find out some good things. There must be treasures to be had in forming weak quotients of 2-groups by their sub-2-groups. Perhaps when Derek Wise publishes his work on Cartan geometry, it’ll be an opportunity to take things up again.

Posted by: David Corfield on October 7, 2006 12:03 PM | Permalink | Reply to this

Re: Klein 2-Geometry VI

David said

John’s attention and mine seem to have moved elsewhere. It’s hard to predict when and in which direction enthusiasm will propel you.

It’s generally a safe bet that my enthusiasm will take me in the direction that everybody else has just abandoned. :-(

Since I’m there, however, I suppose I might as well put down my thoughts on 2-Grassmannians, which might be of some use to someone. Well, I think there is something of interest here, even though only half-digested.

In an effort to understand what we are doing here, I want to pick something even more concrete to work with. Since we are dealing with abstract chain complexes, why not draw pictures of some geometry giving rise to concrete chain complexes? For instance, consider the 1-dimensional cell complex below, let’s call it C:

$a \rightarrow b$
$\uparrow \qquad \downarrow$
$d \leftarrow c$

The 0-chains have an obvious basis of vertices $a$, $b$, $c$ and $d$, and the 1-chains have a basis of oriented edges $(a\rightarrow b)$, $(b\rightarrow c)$, $(c\rightarrow d)$ and $(d\rightarrow a)$. We’ll use the boundary operator to get from 1-chains to 0-chains. Let’s call the resulting chain complex C (with the bolding removed). Then both $C_0$ and $C_1$ are 4-dimensional. The image of the boundary operator is 3-dimensional, so its kernel is 1-dimensional. We’ll call the underlying field $k$.
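Since the boundary operator here is just a 4-by-4 matrix, its rank can be checked directly. This is a minimal sketch: the `rank` helper is my own naive row-reduction over the rationals (not from any library), and the vertex/edge ordering $a, b, c, d$ is an arbitrary choice.

```python
from fractions import Fraction

def rank(mat):
    """Row-reduce an integer matrix over the rationals and return its rank."""
    m = [[Fraction(x) for x in row] for row in mat]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][col] for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col]
                m[i] = [x - f * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

# Boundary matrix of the 4-cycle a->b->c->d->a: columns are the edges
# (a->b), (b->c), (c->d), (d->a), rows the vertices; d(y->z) = z - y.
D = [[-1,  0,  0,  1],
     [ 1, -1,  0,  0],
     [ 0,  1, -1,  0],
     [ 0,  0,  1, -1]]

r = rank(D)
dim_ker = 4 - r     # cycles: the loop itself, 1-dimensional
dim_coker = 4 - r   # components: 1-dimensional
print(r, dim_ker, dim_coker)  # 3 1 1
```

So the rank is 3, and both the kernel and cokernel are 1-dimensional, which matches the dimension (1, 1) claimed for the resulting 2-vector space below.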

To get a 2-vector space V, we take $V_0$ to be $C_0$, and $V_1$ to be $C_0 + C_1$, i.e. the space with a nice basis of ordered pairs $(x, y\rightarrow z)$, $x$ being a vertex and $y\rightarrow z$ being an edge.

Then the identity map sends $x$ to $(x, 0)$.
The source map sends $(x, y\rightarrow z)$ to $x$.
The target map sends $(x, y\rightarrow z)$ to $x - y + z$.

If we ask what the morphism $(x, y\rightarrow z)$ “does”, we find that it sends

$x \rightarrow x + \partial(y\rightarrow z) = x - y + z$.

In particular, the target of $(x, x\rightarrow y)$ is $y$, so the edge ‘acts’ as a morphism in the obvious way in this case.
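The source and target maps are easy to check concretely. In this sketch chains are coordinate lists in the basis order $a, b, c, d$ (an arbitrary choice of mine), and the computation confirms that the target of $(a, a\rightarrow b)$ is $b$:

```python
# Basis bookkeeping for the 4-cycle: vertices and oriented edges.
V = {'a': 0, 'b': 1, 'c': 2, 'd': 3}
E = [('a', 'b'), ('b', 'c'), ('c', 'd'), ('d', 'a')]

def boundary(edge_vec):
    """The boundary operator: d sends the edge y->z to z - y, extended linearly."""
    out = [0, 0, 0, 0]
    for coeff, (y, z) in zip(edge_vec, E):
        out[V[z]] += coeff
        out[V[y]] -= coeff
    return out

def source(x, f):
    """s(x, f) = x."""
    return x

def target(x, f):
    """t(x, f) = x + d(f)."""
    return [xi + di for xi, di in zip(x, boundary(f))]

a = [1, 0, 0, 0]
b = [0, 1, 0, 0]
e_ab = [1, 0, 0, 0]       # the edge a->b as a 1-chain
print(target(a, e_ab))    # [0, 1, 0, 0], i.e. the vertex b
```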

What’s the dimension of this thing? I think that it’s (1, 1). That’s if I’ve understood this whole dimension business correctly.

Now we can consider chain maps, particularly inclusions.

First, consider the cell complex consisting of a single point; let’s call this cell complex P (and the single point p).

$.\;p$

The chain complex over this has dimension (1, 0). Let’s call it P.

Clearly maps from the chain complex P to the chain complex C comprise the vector space of linear maps whose basis is the four maps from the single vertex of P to each of the four vertices $a$, $b$, $c$ and $d$ of C.

This gives us a four-dimensional vector space of points.

However, this is before we mod out by the automorphisms of P, which reduces us to the projective space $kP^3$, i.e. the Grassmannian $G_{1,4}$.

However, we still haven’t reduced enough, because we haven’t thought about chain homotopies!

Note that internally in C, the points are all homotopic to one another, via the edges of C. This is because (or is another way of saying that) our cell complex has one component.

Basically, I think every map from $P_0$ to $C_1$ gives a chain homotopy. After modding out by automorphisms of $P_0$, I think what we’re left with is that all projective transformations of $kP^3$ are chain homotopies, i.e. autoequivalences. I think this gives us a space of just 1 point, with a lot of automorphisms, corresponding to the fact that our original cell complex C has just one component.

This is looking increasingly like the example with the automorphisms of $S_3//A_3$, except with a more general 1-cell complex instead of a groupoid.

Anyway, onwards and upwards! What can we do to make this more complicated?

OK, so let’s take the cell complex Px2 consisting of two isolated points, called p and q, and no edges.

$.\;p \qquad .\;q$

The dimension of its complex Px2 is (2, 0). If we want the map on 0-chains to be injective, then we have to map the two points of Px2 to separate points in C. So we get a 6-dimensional space of maps. Modding out by automorphisms of $Px2_0$, we get a 4-dimensional space, basically the Grassmannian $G_{2,4}$ in $C_0$. Note that these automorphisms include ones related to exchanging p and q, since p and q are themselves isomorphic to each other.
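The dimension bookkeeping here can be sanity-checked against the standard formula $\dim G_{n,N} = n(N-n)$. A small sketch (the function name is mine):

```python
def grassmannian_dim(n, N):
    """dim G_{n,N} = n(N - n): n-dimensional subspaces of an N-dimensional space."""
    return n * (N - n)

# Maps from the 2-dimensional Px2_0 into the 4-dimensional C_0, mod GL(2):
maps_dim = 2 * 4   # 8
aut_dim = 2 * 2    # dim GL(2)
print(maps_dim - aut_dim, grassmannian_dim(2, 4))  # 4 4, matching G_{2,4}

# The single-point complex P works the same way: 4 - 1 = 3 = dim kP^3 = dim G_{1,4}.
print(4 - 1, grassmannian_dim(1, 4))  # 3 3
```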

Looking at the graph homotopies allowed on 2 points, and allowing only injective ones, we basically get a space of chain homotopies on the chain maps $Px2 \rightarrow C$ which is whatever the equivalent of projective transformations is for more general Grassmannians. In particular, up to equivalence, I think there’s still only one point. That’s interesting and a bit disconcerting …

I wonder what happens if we let the chain homotopies (or the chain maps) condense the two points to 1. I guess in the first place, we’d allow degenerate maps that project planes to lines, and in the second, umm … moving swiftly on …

Let’s now take a cell complex consisting of a single edge with two ends:

$p\;. \rightarrow .\;q$

Let’s call this E, and its chain complex E. This has, I believe, dimension (1, 0).

Chain maps from the complex over this little fellow must surely derive from a map of the one edge of E to a single edge of C, also mapping the end vertices appropriately (including getting the orientation to match). Otherwise the boundary operator won’t carry over correctly. (Is this right?)

So we appear to get a four-dimensional space of maps, corresponding to the four edges of C.

This is a bit tricky. We have a 4-dimensional space of maps from $E_0$ to $C_0$, and a 4-dimensional space of maps from $E_1$ to $C_1$. But these are not at all independent of each other. Or are they?

OK, let’s think very, very hard …

Considering just the maps from $E_0$ to $C_0$ …

Once we’ve picked a pair of target points in C (ordered of course), we get a two-dimensional space of maps. It seems reasonable to count this space as a point in the Grassmannian $G_{2,4}$ of 2-dimensional subspaces of the 4-dimensional space $C_0$. This Grassmannian is 4-dimensional.

However, the map is effectively determined (projectively, i.e. up to automorphisms of $E$) by the map of just one of the vertices. So AFAICS, we’re confined to a 3-dimensional subspace of $G_{2,4}$, isomorphic to $kP^3$.

Which point in this subspace of the Grassmannian we get is determined by the map from $E_1$ to $C_1$, which is also $kP^3$, even more obviously. I guess there is a bijection between these two copies of $kP^3$, corresponding to the fact that the vertices are attached to the edge …

As for automorphisms of this point, once again we can rotate the whole figure all the way around the four possible positions, so I’m guessing that all projective transformations of the $kP^3$ are automorphisms, and that up to automorphisms we have just one point.

I’m really not convinced I understand the relationship between the two copies of $kP^3$ here, but let’s just carry on assuming that the intuitive relationships are right …

Considering now the graph E2:

$p \qquad q \qquad r$
$. \rightarrow . \rightarrow .$

We appear to also have a space $kP^3$ here, for both $C_0$ (as a subspace of $G_{3,4}$) and for $C_1$ (as a subspace of $G_{2,4}$). Again, projective transformations are automorphisms. The two ways of mapping E to E2 give two projective transformations of $kP^3$. That’s a rather interesting incidence relation. The two sets of maps are obviously homotopic to one another.

Not only that, but the graph Px3:

$p \qquad q \qquad r$
$. \qquad . \qquad .$

gives rise to the same pattern: since the three vertices can be exchanged with each other by automorphisms of Px3 and are hence projectively modded out by the map from the chain Px3, it seems we would get the same $kP^3$ pattern.

The graph Ex2 seems more interesting:

$p \qquad q \qquad r \qquad s$
$. \rightarrow . \qquad . \rightarrow .$

Because there’s so little room in C, if we want to be thoroughly injective then the images of the two components are compelled to be opposite each other, apparently making the corresponding space $k^4$, and projectively $kP^3$ as before. But we can also swap the two components, since these are isomorphic to each other. This suggests the corresponding space is just $k^2$, projectively $kP^1$.

Finally, if we project a copy of C into C itself, the space will be the trivial $k^0$, since projectively we mod out by all the automorphisms of C before starting.

An interesting phenomenon arises with images of the graph P+E

$p \qquad q \qquad r$
$. \rightarrow . \qquad .$

On the face of it, for $C_0$, since we have two components which have some independence of movement, this might seem to give rise to a subspace of $G_{2,4}$. But on the other hand, the two components are not isomorphic, suggesting the projective maps lie in some larger space, maybe a space of 2-dimensional frames, which I don’t feel like thinking about right now.

Note that, if, instead of C, we’d used another graph with 2-chain of dimension (1, 1), but with many more vertices in the ring, then we’d have many more phenomena of this sort, with different non-isomorphic components being mapped into the target graph, e.g. E2+E3. Of course, with 3 or more components in our figure, we also come up against the phenomenon that different cyclic orderings of the images are not homotopic to one another, so giving rise to different homotopy components of the generalised Grassmannian.

Before I collapse in exhaustion, I just want to talk a little about spaces of dimension different from (1, 1). Consider the graph D of dimension (1, 2):

$. \rightarrow . \leftarrow .$
$\uparrow \qquad \downarrow \qquad \uparrow$
$. \leftarrow . \rightarrow .$

Actually, I don’t think I have the energy to think about this right now, or about targets with multiple components, which obviously also need to be considered. Perhaps someone else could think about this for me. <Cheesy and ingratiating grin>

So I’ll wrap up and say what I think I’ve discovered. Just as finite-dimensional vector spaces and their subspaces are intimately related to finite sets and their subsets, so finite-dimensional 2-vector spaces and their 2-subspaces are intimately related to finite graphs and their subgraphs. These are obviously much more complicated and I’m not at all confident my analyses here are correct, particularly the identification of the various 2-Grassmannians associated to various graph maps. The basic idea seems sound though.

There are some quite odd new phenomena here, for instance the fact that, while there’s no injection that will send a 2-element set to a 1-element set, there are maps injective on vertices and edges that send a 2-component graph to a 1-component graph. Of course, if we’re being injective on vertices and edges, we might also want to see what happens if we impose injectivity on components as well.

We also might be interested in relaxing vertex injectivity so that vertex maps are only injective up to homotopy.

But the result is rather dull. In these conditions, every component of a graph is equivalent to one with only 1 vertex. The graph corresponding to the 2-vector space of dimension (p, q) is one with p components, each with one vertex and q edges. Taking maps from a space of dimension (m, n) to one of dimension (p, q), mod automorphisms (i.e. projectively), is just a matter of picking one of the C(p, m) m-element subsets of the p components and, for each chosen component, picking one of the C(q, n) n-element subsets of the q edges, corresponding to a point in the product of Grassmannians $G_{m,p} \times G_{n,q}$. Automorphisms correspond to the homotopy operation of moving a vertex around a loop and back to itself, leaving everything unchanged. Up in the vector space over the vertex, I think you do at least get to multiply by an element of the base field, but projectively I think this is irrelevant anyway. So David C and JB were correct that this case is kind of boring.
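If I’ve read the counting argument above correctly, the discrete count of such choices works out as below. The function and its name are my own illustration of the paragraph, not anything from the thread:

```python
from math import comb

def count_sub2spaces(m, n, p, q):
    """Pick m of the p components, then n of the q edges in each chosen
    component: comb(p, m) * comb(q, n)**m choices in total."""
    return comb(p, m) * comb(q, n) ** m

# e.g. (1, 1)-subspaces of a (2, 3)-space: 2 components to choose from,
# then 3 loops to choose from in the chosen component.
print(count_sub2spaces(1, 1, 2, 3))  # 6
```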

Posted by: Tim Silverman on October 8, 2006 1:44 PM | Permalink | Reply to this

Re: Klein 2-Geometry VI

Well, I seem to have been terribly confused about how subspaces are related to subgraphs. Let me flounder a little more, possibly more usefully.

When comparing maps from P × 2 (2 disconnected vertices) with maps from E (2 vertices connected by an edge), the main real difference as far as $C_0$ is concerned now seems to me to be that the chain complex on the latter graph allows fewer automorphisms than that on the former, that is, a 2-dimensional space of them instead of a 4-dimensional space. Hence, while the space corresponding to maps from P × 2 is 4-dimensional, basically the Grassmannian $G_{2,4}$ on $C_0$, the space corresponding to maps from E is 6-dimensional.

We get this by calculating:

space of maps from a 2-dimensional space to a 4-dimensional space is $2\times4=8$-dimensional.

From 8, we subtract the dimension of the space of automorphisms.

For P×2 this is the space of invertible maps from $k^2$ to itself, of dimension $2\times2=4$.

For E this is the direct sum of two copies of the space of invertible maps from $k$ to itself, of dimension $1+1=2$.

On $C_1$, on the other hand, we really do have the space of maps $k\rightarrow k^4$ modulo maps $k\rightarrow k$, in other words $kP^3$.
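The arithmetic of the last few paragraphs, as a quick sanity check (the names are mine):

```python
def gl_dim(n):
    """Dimension of GL(n), an open subset of the n-by-n matrices."""
    return n * n

hom = 2 * 4                                  # maps from a 2-dim space to the 4-dim C_0
dim_from_Px2 = hom - gl_dim(2)               # mod GL(2): 8 - 4 = 4
dim_from_E = hom - (gl_dim(1) + gl_dim(1))   # mod GL(1) x GL(1): 8 - 2 = 6
print(dim_from_Px2, dim_from_E)              # 4 6
```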

I guess the boundary map ties these together in an interesting way.

All this applies, mutatis mutandis, to the other examples too, of course.

I wonder if I’m really less confused now …

Posted by: Tim Silverman on October 8, 2006 4:31 PM | Permalink | Reply to this

Re: Klein 2-Geometry VI

Tim writes:

So I’ll wrap up and say what I think I’ve discovered. Just as finite-dimensional vector spaces and their subspaces are intimately related to finite sets and their subsets, so finite-dimensional 2-vector spaces and their 2-subspaces are intimately related to finite graphs and their subgraphs. These are obviously much more complicated and I’m not at all confident my analyses here are correct, particularly the identification of the various 2-Grassmannians associated to various graph maps. The basic idea seems sound though.

Yeah, that’s a great idea!

The way I see it, your fundamental idea is this. There’s a 2-functor from the 2-category of directed graphs to the 2-category of vector 2-spaces ($\simeq$ 2-term chain complexes). And, it goes like this:

  • Any directed graph gives a 2-term chain complex.
  • Any map between directed graphs gives a chain map between 2-term chain complexes.
  • And, there’s also a kind of “homotopy between maps between directed graphs”, which gives a chain homotopy between chain maps between 2-term chain complexes.

(Do graph theorists ever think about those homotopies? They should!)
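The second bullet point, a graph map inducing a chain map, can be sketched with the edge graph E mapping into the 4-cycle C. The dictionaries and helper names below are my own, and the sketch assumes graphs without self-loops:

```python
# Graphs as (vertices, edges); edges are (source, target) pairs.
E_g = (['p', 'q'], [('p', 'q')])
C_g = (['a', 'b', 'c', 'd'],
       [('a', 'b'), ('b', 'c'), ('c', 'd'), ('d', 'a')])

# A graph map: where each vertex and each edge goes.
f_vert = {'p': 'a', 'q': 'b'}
f_edge = {('p', 'q'): ('a', 'b')}

def boundary(graph, edge):
    """d(y->z) = z - y, as a sparse vector over the vertices (no self-loops)."""
    y, z = edge
    return {z: 1, y: -1}

def push(vec, vmap):
    """Push a sparse vertex vector forward along a vertex map."""
    out = {}
    for v, c in vec.items():
        out[vmap[v]] = out.get(vmap[v], 0) + c
    return {k: v for k, v in out.items() if v != 0}

# Chain-map condition: f_0(d(e)) == d(f_1(e)) for every edge e of E.
for e in E_g[1]:
    assert push(boundary(E_g, e), f_vert) == boundary(C_g, f_edge[e])
print("f is a chain map")
```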

All this generalizes further, to an $(n+1)$-functor from $n$-dimensional cell complexes to $(n+1)$-term chain complexes.

I’m not sure how to get some amazing new insights into projective $n$-geometry from this way of thinking - but that’s not surprising, since I just read your post 2 minutes ago. We should mull on it.

Thanks!

Posted by: John Baez on October 8, 2006 7:25 PM | Permalink | Reply to this

Re: Klein 2-Geometry VI

John said, re this whole graphs \rightarrow 2-vector spaces business:

Yeah, that’s a great idea!

Wahoo!

He also said:

(Do graph theorists ever think about those homotopies? They should!)

I’ve no idea. But I’ve thought about somewhat similar stuff before. It was a long time ago and it never came to anything, but I guess it was the first piece of “real” maths I did, so I have fond if dim memories of it, and look for opportunities to reanimate some of the ideas from time to time.

And also, even:

Thanks!

I owe you such an enormous debt for expository brilliance in twf and elsewhere, that if something interesting comes of this, it’s just a tiny return …

And elsewhere, when I threatened to “keep chuntering away”:

Yes, do! Though “chuntering” sounds sort of bad - I guess it’s one of those things only Brits know how to do, like “whinging” - but in fact you’re doing some cool stuff.

It’s actually not that bad. “Chuntering” isn’t in very common use, so I don’t know how well my definition matches up to other people’s, but I think of it as describing the activity of a small machine working away in a corner somewhere, possibly making a bit more noise and blowing out a bit more steam than strictly necessary, but basically doing something useful in a modest but reliable sort of way. So I was secretly being quite complimentary to myself! I think “slow but steady” kind of captures it.

Posted by: Tim Silverman on October 9, 2006 10:35 PM | Permalink | Reply to this

Re: Klein 2-Geometry VI

I couldn’t resist looking up the word “to chunter” - and here are some definitions I got:

Merriam-Webster. Chunter - Brit. To talk in a low inarticulate way: mutter.

Answers.com. Chunter - Brit. To mutter, murmur; to grumble, find fault, complain.

WordWeb Online. Chunter - Brit.. To make complaining remarks or noises under one’s breath - murmur, mutter, grumble, croak, gnarl, complain, kick, kvetch [N. Amer], plain [archaic], quetch, sound off.

You were doing something more like muttering than kvetching or sounding off, but I like your definition even better, as in “downstairs, the wood-fueled generator was chuntering away.”

Anyway, it’s great that you’ve joined the Klein 2-geometry team!

Posted by: John Baez on October 10, 2006 3:34 AM | Permalink | Reply to this

Re: Klein 2-Geometry VI

Keep up the effort. Treasure can’t be too far off!

One technical point: if your C has dimension (1,1), shouldn’t we be surprised to find sub-(2,0)-spaces, such as when you look for chain maps from Px2? To avoid ‘evil’, hadn’t these better look like injections into the 2-term chain complex $k \rightarrow k$, the arrow being the zero map?

By the way, we haven’t been introduced. You can see my details from our front page. What’s your background?

Posted by: David Corfield on October 8, 2006 8:07 PM | Permalink | Reply to this

Re: Klein 2-Geometry VI

Tim Silverman writes:

And this makes sense because, in the full Klein geometry approach, the figures are sub-2-groups, and incidence relations are (closely related to?) inclusion relations among sub-2-groups, which we already know are functors, sets of functors or categories of functors. We are now in a position to analyse what we mean by “2-orbits” given the various kinds of “injectivity” and “surjectivity” conditions we want to impose on the functors, and given the ways we want functors to be naturally isomorphic or not. However, this requires me to think, so I will stop here.

Right, there are some decisions to be made in categorifying the Klein geometry program, and the only way I know to make these decisions is to try different choices and see which ones give more interesting math.

By the way, I’d say incidence relations are closely related to inclusion relations among subgroups (in ordinary Klein geometry), rather than being such inclusion relations. For example, in projective geometry one has stabilizer groups for points, lines, planes etc., each of which is a “maximal parabolic” subgroup of $\mathrm{SL}(n)$. None of these subgroups include each other, but when two such figures are incident the intersection of their subgroups is a nontrivial parabolic subgroup in its own right. I tried to explain this in week178. The whole story generalizes nicely when we replace $\mathrm{SL}(n)$ by any other semisimple Lie group. One gets some very nice math this way, related to Dynkin diagrams.

So, it would be very tempting to categorify projective geometry and see how this incidence business generalizes. That’s (one reason) why David and I are currently at work categorifying projective geometry - in a rather leisurely manner.

Does any of this make any sense at all?

Yes, it makes perfect sense.

(Or is it too mind-numbingly obvious? That would also be bad.)

Well, I guess you could say David and I consider this stuff “obvious”, since it’s been our plan all along to categorify Klein geometry in precisely this way. But, it’s not “too mind-numbingly obvious”. In fact it’s great that you’re stating it so clearly: people reading what David and I have done so far are likely to be drowned in detail and not see the big picture.

As David notes, our project is moving slowly right now. I’ve been busy preparing my classes and my Long Now talk, while he has been emulating Aristotle. And, most of my spare time these days goes into learning what James Dolan has done over the summer.

So, now is actually a good time for you to catch up with us - and maybe shoot ahead!

Posted by: John Baez on October 8, 2006 6:56 PM | Permalink | Reply to this

Re: Klein 2-Geometry VI

Since no one’s reading this anyway, I might as well keep chuntering away.

Looking at chain maps from the edge graph E

        p\rightarrow q

to the ring of four edges

        a\rightarrow b\rightarrow c\rightarrow d \rightarrow a

a little thought about the nature of Grassmannians as homogeneous spaces makes it clear that the space of chain maps, as far as the vertex space is concerned, is either SO(4) or some quotient of SO(4) by Z_2, hence the 6 dimensions.

Trying to find a still simpler example to get a better picture of what’s going on, we consider maps from E to the graph

        a\array{\rightarrow\\\leftarrow}b

which also has dimension (1, 1).

Clearly, as far as maps to the space of 0-chains are concerned, each vertex of E can be mapped to either point, which gives a 2-dimensional space of maps, and modding out by automorphisms of the space over that vertex, we get a copy of the projective line. So the space of projective maps is the product of two copies of the projective line over k. E.g. for k=\mathbb{R}, the space is topologically a torus.

The maps on 1-chains clearly also form a 2-dimensional space, projectively a projective line, and the boundary map sends a point x\rightarrow(x-1, 1-x), I think.

Compare this now with the slightly different graph

        a\array{\rightarrow\\\rightarrow}b

we have drastically reduced the space of projective 0-chains to a single point, while the space of 1-chains remains the same.

OK, now we can go back up to more complicated graphs. If there are p places we can map our edge graph E to, then the maps on the 0-chains will projectively form the space kP^{p-1}\times kP^{p-1} while the maps on the 1-chains will projectively form the space kP^{p-1}. The boundary map will map it into each factor of the 0-chain map space in more or less the obvious way, but with one being in some way the mirror image of the other. Oh, wait … I need to distinguish how many target elements each source element can go to …

OK, more generally, suppose we have a source graph with some vertices and some edges. Suppose that the source graph can be mapped into the target graph in various ways such that a vertex v can be mapped into \vert v\vert different vertices of the target graph, and that an edge e can be mapped into \vert e\vert different edges of the target graph. Then the projective space of maps of 0-chains will be \prod_{v}(kP^{\vert v\vert-1}) and the projective space of maps of 1-chains will be \prod_{e}(kP^{\vert e\vert-1}). The boundary map will send the latter into the former factor by factor according to the structure of the source graph.
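This counting can be sanity-checked over a finite field, where everything is enumerable. A minimal sketch (the encoding of “allowed images per vertex” as bare counts, and the choice of F_2, are mine, not from the discussion): the projectivised space of 0-chain maps should have exactly as many points as the product of the corresponding projective spaces.

```python
from itertools import product

q = 2  # work over the finite field F_2 so everything is enumerable

def projective_points(n, q):
    """Number of points of the projective space kP^{n-1} over F_q."""
    return (q**n - 1) // (q - 1)

# hypothetical example: source graph E = (p -> q), a target with two
# vertices, and |v| = 2 allowed images for each vertex of E
v_choices = {"p": 2, "q": 2}

# the formula above: the projectivised space of 0-chain maps is the
# product over vertices of kP^{|v|-1}
formula = 1
for n in v_choices.values():
    formula *= projective_points(n, q)

# direct enumeration: a 0-chain map assigns to each vertex a nonzero
# vector in F_q^{|v|}, taken up to a scalar for that vertex
def proj_classes(n, q):
    vecs = [v for v in product(range(q), repeat=n) if any(v)]
    classes = set()
    for v in vecs:
        # normalise by the first nonzero coordinate (enough over a field;
        # the modular inverse below assumes q is prime)
        pivot = next(x for x in v if x)
        inv = pow(pivot, q - 2, q)
        classes.add(tuple((x * inv) % q for x in v))
    return classes

direct = 1
for n in v_choices.values():
    direct *= len(proj_classes(n, q))

print(formula, direct)  # -> 9 9
```

Over F_2 both counts come to 9, the number of points of kP^1 × kP^1, matching the torus example above.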

No doubt there are many interesting homotopic things to be said about these maps, all well-known but not to me!

The above assumes there are no non-trivial automorphisms of the source graphs, which would give rise to extra automorphisms of the chains and would need to be modded out. Since these appear to be what give rise to more general Grassmannians than mere projective spaces, they are obviously also worth looking at.

Posted by: Tim Silverman on October 8, 2006 7:26 PM | Permalink | Reply to this

Re: Klein 2-Geometry VI

Tim Silverman writes:

Since no one’s reading this anyway…

Don’t be so sure of that! I read it as soon as you posted it!

… I might as well keep chuntering away.

Yes, do! Though “chuntering” sounds sort of bad - I guess it’s one of those things only Brits know how to do, like “whinging” - but in fact you’re doing some cool stuff.

Posted by: John Baez on October 8, 2006 7:37 PM | Permalink | Reply to this

Re: Klein 2-Geometry VI

Grr. )-(

I was getting so puzzled and bothered by the strange issue of edge orientation that I couldn’t sleep. I have been thinking once again about the boundary operation, and I think I am going to have to do a lot of backpedalling tomorrow. The maps on the vertices are tied to those on the edges by the boundary operation, but the source and target orientations don’t have to match (although it is still interesting to think about what happens if you force them to). That at least brings the graph back into line with the dimension of the chain complex.

Grumble. This always happens. As soon as it becomes clear, I realise I was doing it all wrong.

Posted by: Tim Silverman on October 9, 2006 12:48 AM | Permalink | Reply to this

Re: Klein 2-Geometry VI

So categorified incidence geometry has finally come up! I tried to think about this with little success a few months ago. The problem was how to define the dimension function - in the form of usual incidence geometry (in dimension n) this is a function from the set of figures to the ordinal \mathbf{n} (better thought of as a functor from the lattice of figures to the lattice \mathbf{n}). I toyed with Heyting algebras and got in a mess. :-(

It wasn’t until I saw Batanin’s talk at AustMS06 that a better idea came: use the `categorified natural numbers’, plane trees of height 2 (Section 2.1 in math.CT/0301221). The dimension should have target one of these. Then of course we get dimension labelled by two integers. So here’s a question - can a plane tree of height 2 (call it a 2-ordinal, say) be thought of as a 2-category, as an ordinal can be thought of as a category? Or the real question: can it be thought of as a 2-category in such a way that works with our concepts of categorified projective geometry?

Let’s go back a step - incidence geometry comes with a set S and a function \dim: S \to \mathbf{n} (and some conditions, but they are properties, I think, so leave them for later). So if we have a category and a functor to a 2-ordinal, what can we say about the set up? A 2-ordinal is essentially a pair of ordinals and an order-preserving map between them.

T = \mathbf{k}_2 \to \mathbf{k}_1

Think of it as the category \mathbf{2} with some extra info (actually an ordinal internal to the category of ordinals). Then considering the object and morphism components of the functor to T we get a pair of numbers which are candidates for the dimension.

The major problem with this idea is that it matters how the branches of the tree are organised - not a problem with 1-ordinals.

Tim: I personally think incidence is nicely thought of as spans in the subobject lattice of the whole space. Keep up the good work - I too tried to jump on the moving car, but I think I’m hanging off the spoiler instead ;)

Posted by: David Roberts on October 9, 2006 9:46 AM | Permalink | Reply to this

Re: Klein 2-Geometry VI

David Roberts wrote:

Tim: I personally think incidence is nicely thought of as spans in the subobject lattice of the whole space. Keep up the good work - I too tried to jump on the moving car, but I think I’m hanging off the spoiler instead ;)

Don’t be so sure. What’s a span? And what’s a subobject lattice?

OK, I can guess what a subobject lattice is basically, but I’m sure there are a lot of subtleties that I’m not aware of.

What’s a span? I’ve seen people talk about them, but I haven’t looked them up. Is there an easy explanation?

Posted by: Tim Silverman on October 10, 2006 7:23 PM | Permalink | Reply to this

Spans

What’s a span?

By itself, a span is just a diagram of the form

(1) \array{ s &\stackrel{p_2}{\to}& b \\ p_1 \downarrow\;\; \\ a \,. }

Spans internal to some category CC with pullbacks form a bicategory. Objects are the objects a,b,ca,b,c of CC, a 1-morphism

(2) a \stackrel{(s,p_1,p_2)}{\to} b

is a span, as above, composition of 1-morphisms a \to b \to c is by pullback

(3) \array{ s \times_p s' &\to& s' &\to& c \\ \downarrow && \downarrow \\ s &\to& b \\ \downarrow \\ a \,. }

and a 2-morphism

(4) \array{ a \stackrel{s}{\to} b \\ \Downarrow \\ a \stackrel{s'}{\to} b }

is a diagonal arrow sss \to s' making

(5) \array{ s &\to& b \\ \downarrow &\searrow& \uparrow \\ a &\leftarrow& s' }

commute.

For instance, a category internal to C is nothing but a monad in spans in C #.
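A minimal sketch of span composition internal to Set, assuming we encode a span as (apex, p1, p2) with the two legs as dictionaries (the encoding and the toy data below are mine, not from the comment):

```python
# A span a <- s -> b internal to Set, encoded as (apex, p1, p2),
# where the apex is a list and p1, p2 are dicts giving the two legs.
def compose(span_ab, span_bc):
    """Compose spans a -> b and b -> c by pullback over b."""
    s, p1, p2 = span_ab
    t, q1, q2 = span_bc
    # the pullback s x_b t: pairs of apex elements agreeing over b
    apex = [(x, y) for x in s for y in t if p2[x] == q1[y]]
    left = {xy: p1[xy[0]] for xy in apex}   # projection down to a
    right = {xy: q2[xy[1]] for xy in apex}  # projection over to c
    return apex, left, right

# hypothetical toy spans: {1,2} <- {s0,s1} -> {'x'} <- {t0} -> {'A'}
ab = (['s0', 's1'], {'s0': 1, 's1': 2}, {'s0': 'x', 's1': 'x'})
bc = (['t0'], {'t0': 'x'}, {'t0': 'A'})

apex, left, right = compose(ab, bc)
print(sorted(apex))  # -> [('s0', 't0'), ('s1', 't0')]
```

Both elements of the first apex survive the pullback, since both sit over the single middle element 'x'.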

Posted by: urs on October 10, 2006 8:02 PM | Permalink | Reply to this

Intersections of Subobjects

I personally think incidence is nicely thought of as spans in the subobject lattice of the whole space. #

I can’t read David Roberts’s mind here, but I guess what he has in mind is this:

Let V be some object (a vector space, say), and let a,b,c be subobjects of V.

If V is a vector space then its subobjects are nothing but sub vector spaces. Moreover, one subobject a of V may also be a subobject of another subobject b of V, i.e. we may have monomorphisms

(1) a \to b \to V \,.

In terms of these, consider a span

(2) \array{ c &\to& b \\ \downarrow \\ a } \,.

The existence of this span says that c is a subobject of a as well as of b.

But both a and b are also subobjects of V

(3) \array{ c &\to& b \\ \downarrow && \downarrow \\ a &\to& V } \,.

So c is in fact a subobject of V that sits inside both a and b.

In terms of vector spaces, c sits inside the intersection of a with b.

I guess we are interested here in the “largest” c with this property (the full intersection of a with b).

This should be nothing but the terminal object in

(4) \mathrm{Hom}_{\mathrm{spans}}(a,b) \,,

namely the object a \cap b

(5) \array{ a \cap b &\to& b \\ \downarrow \\ a }

such that for any other c with

(6) \array{ c &\to& b \\ \downarrow \\ a }

there is

(7) \array{ c \\ & \searrow \\ && a \cap b }

such that

(8) \array{ c &\to& b \\ \downarrow & \searrow & \uparrow \\ a &\leftarrow& a \cap b }

commutes.

I assume David Roberts is telling us that we should look at the same diagrams, but internal to 2-vector spaces.
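Reading a \cap b as the ordinary intersection of subspaces, its dimension is dim a + dim b − dim(a+b), which a few lines of exact rational Gaussian elimination can verify on a toy example (the subspaces chosen below are hypothetical, just for illustration):

```python
from fractions import Fraction

def rank(rows):
    """Row rank by Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c]), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c]:
                f = m[i][c] / m[r][c]
                m[i] = [u - f * w for u, w in zip(m[i], m[r])]
        r += 1
    return r

# hypothetical subspaces of k^3, given by spanning vectors
a = [[1, 0, 0], [0, 1, 0]]  # the xy-plane
b = [[0, 1, 0], [0, 0, 1]]  # the yz-plane

# dim(a ∩ b) = dim a + dim b - dim(a + b), by inclusion-exclusion
dim_cap = rank(a) + rank(b) - rank(a + b)
print(dim_cap)  # -> 1  (the intersection is the y-axis)
```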

Posted by: urs on October 10, 2006 8:59 PM | Permalink | Reply to this

Re: Intersections of Subobjects

Urs: spot on!

Tim: As you have seen in the discussion, when doing category theory, one doesn’t think of things as being inside other things, but as being mapped to them 1-1 (monically). For any object x in a category with an initial object (a topological space, say) consider all the monic maps to that object, and all the monic maps to their sources etc. This forms a new category with at most one map between any two objects. What’s more, we have a final object, x, and an initial object \emptyset. This gives us a bounded lattice. A nicer example is the lattice of open subsets of a topological space.

I used `subobject’, but to connect with the Klein geometry, `subfigure’ sounds better (well, more logical to me), especially once it all gets categorified.

And, we are (eventually) not only interested in, say, vector spaces, but things invariant under a group action (e.g. spheres).

So, back to urs’ post:

Spans internal to some category C with pullbacks form a bicategory.

I would say a and b are incident if a \cap b, the terminal object in the spans between a and b, was not the initial object. This clearly gives a reflexive, symmetric relation.

In traditional incidence geometry, the incidence relation is not transitive. This corresponds with not being able to compose morphisms (=spans) with a source-target match. So we don’t really need spans in a category (the subfigure lattice) with pullbacks, just one where we can form the binary product a \cap b, and with an initial object (hence all finite products).

When we talk about the dimension of a figure, we don’t want something of a higher dimension sitting inside something of smaller dimension. Thus, if the notion of dimension is defined for all objects in the `subfigure lattice’ \mathbf{Sub}, we have a map of lattices \mathbf{Sub} \to \mathbf{n} = \{0 \lt 1 \lt \ldots \lt n\}, where n is the dimension of our space.

I know I’ve sidetracked a lot here from Klein geometry, but I just wanted to turn the problem upside down.

Posted by: David Roberts on October 11, 2006 8:21 AM | Permalink | Reply to this

Re: Intersections of Subobjects

In traditional incidence geometry, the incidence relation is not transitive. This corresponds with not being able to compose morphisms (=spans) with a source-target match. So we don’t really need spans in a category (the subfigure lattice) with pullbacks, just one where we can form the binary product a∩b, and with an initial object (hence all finite products)

I guess that depends on what one is trying to achieve. I could turn that argument around and say that one should not care so much about the incidence relation, but just about the category of spans.

Composing a \cap b with b \cap c yields a \cap b \cap c. It’s well defined. As you say, it may be empty (= the initial object) even if a\cap b and b\cap c are not.

[…] subfigure […]

In order to apply the concept of spans in the subobject lattice to 2-geometry, we need to finally figure out what notion of sub-2-object we really need.

I am a little surprised that apparently there is no literature on notions of sub-2-objects. Or is there?

Posted by: urs on October 11, 2006 11:37 AM | Permalink | Reply to this

Re: Intersections of Subobjects

urs wrote:

In order to apply the concept of spans in the subobject lattice to 2-geometry, we need to finally figure out what notion of sub-2-object we really need.

I am a little surprised that apparently there is no literature on notions of sub-2-objects. Or is there?

I’m not an authority, but none comes immediately to mind. I was going to say something about Mark Weber’s recent paper before, but nothing constructive would come.

The gist of it is: the forgetful functor \mathbf{Set}_* \to \mathbf{Set} plays the role, inside \mathbf{Cat}, that 1 \to \{0,1\} does in \mathbf{Set}. A subcategory of C is given by (weak) pullback of this subobject classifier along a functor C \to \mathbf{Set}. One would think that just as the subobject lattice of a set S arises from \mathrm{Hom}(S,\{0,1\}), the sub-2-objects would be given by [C,\mathbf{Set}]. A stupid question: can this have much to do with doctrines?
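For the Set case at least, the pullback description of subobjects is easy to check directly: a subset of S is recovered as the pullback (i.e. the preimage) of 1 \in \{0,1\} along its characteristic map. A toy sketch (the encoding is mine):

```python
# Subobjects of a set S correspond to characteristic maps S -> {0,1}:
# the subset is the pullback of true: 1 -> {0,1} along chi, i.e. chi^{-1}(1).
def pullback_of_true(S, chi):
    """The subset of S classified by the characteristic map chi."""
    return {s for s in S if chi(s) == 1}

S = {0, 1, 2, 3}
evens = {0, 2}
chi = lambda s: 1 if s in evens else 0  # characteristic map of the subset

print(pullback_of_true(S, chi) == evens)  # -> True
```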

Presumably Benabou has something to say about sub-bicategories.

Posted by: David Roberts on October 12, 2006 3:38 AM | Permalink | Reply to this

Re: Intersections of Subobjects

So to think of the category 1 as a subcategory of the category with 2 objects, each with 2 automorphisms, you map one of the objects into the empty set, and make sure that the non-identity automorphism of the other object goes to a map with no fixed point.

Posted by: David Corfield on October 12, 2006 10:10 AM | Permalink | Reply to this

subobject classifier

pullback of this subobject classifier

I need to learn this.

How do we deal with subobjects of objects with extra structure this way?

For G a group, not every morphism in \mathrm{Hom}_\mathrm{Set}(G,\{0,1\}) describes a subgroup of G.

For the particular case of groups, I could of course regard the group as a category and look at \mathrm{Hom}_\mathrm{Cat}(\Sigma(G),\mathrm{Set}). (Does that really give me subgroups? I need to think about that. But have to run now.)

But how does it work in general?

Posted by: urs on October 12, 2006 11:56 AM | Permalink | Reply to this

Re: subobject classifier

Is this right? For a subgroup H of G, you need a G-set X, so that the action of H fixes an element of X, while the elements of G\setminus H don’t fix it.

Hmm. Something makes me think that this sort of process wouldn’t produce a unique pullback - the stabilizer of other points would feature. Perhaps all the points of X have to be fixed by H, like if X is the set of cosets.

Posted by: David Corfield on October 12, 2006 12:23 PM | Permalink | Reply to this

Re: Intersections of Subobjects

I guess that depends on what one is trying to achieve. I could turn that argument around and say that one should not care so much about the incidence relation, but just about the category of spans.

Precisely. Having thought some more about the existence of pullbacks, I would now say we do need them. I was getting confused with the definition of the incidence relation, which is `the existence of a non-initial (=nonempty) span’. The mere existence of spans of any sort is transitive, since any two figures are at least `incident in the empty set’. I’m just matching the new definition to the old when talking about incidence relations.

Posted by: David Roberts on October 12, 2006 4:40 AM | Permalink | Reply to this

Re: Intersections of Subobjects

[…] no literature on notions of sub-2-objects. Or is there?

Maybe this one

Marco Grandis, “Weak subobjects and weak limits in categories and homotopy categories”, Cah. Topol. Géom. Différ. 38 (1997), no. 4, pp. 301–326 (inist)

Posted by: urs on October 12, 2006 12:56 PM | Permalink | Reply to this

Re: Klein 2-Geometry VI

Each vector 2-space is equivalent to a skeletal vector 2-space (did we ever decide that that was better than ‘2-vector space’?), and one of these we thought could be characterised by two integers corresponding to the 2-term chain complex, k^{p}\rightarrow k^{q}, the map sending all of k^{p} to 0. A cell complex which would generate this chain complex is a set of q points, with a set of p loops, each based at one of the points.

You now introduce categorified ordinals in the shape of Batanin’s 2-trees, but note that

The major problem with this idea is that it matters how the branches of the tree are organised

This would seem to correspond to there being a difference between the (2,2) vector 2-space generated as the chain complex of a pair of points each having a loop based there, and the one generated by a pair of points, one of which has two loops based there, the other none. Intuitively, you’d think they were different.

Finally, the issue of ordinal or cardinal. In the last case I mentioned of a cell complex, you wouldn’t think that it would matter which order the points came in, i.e., whether the point with 2 loops was point 1 or point 2.

Posted by: David Corfield on October 11, 2006 8:22 AM | Permalink | Reply to this

Re: Klein 2-Geometry VI

David C wrote:

Finally, the issue of ordinal or cardinal. In the last case I mentioned of a cell complex, you wouldn’t think that it would matter which order the points came in, i.e., whether the point with 2 loops was point 1 or point 2.

The ordinal is to keep track of the lower dimensional things, not the things of the same dimension. For regular vector spaces (length 1 chain complexes of collections of points), we’d use an ordinal, and subspaces would be mapped to their dimension. Having drawn some diagrams for k^{2,2} it looks like a single tree per 2-space is not enough. There is more structure present, though, so maybe it will work.

Posted by: David Roberts on October 12, 2006 8:42 AM | Permalink | Reply to this

Re: Klein 2-Geometry VI

So let’s get this straight. We seem to be saying that John was wrong when he said:

Up to equivalence, 2-vector spaces are classified by two natural numbers - the “first and second Betti numbers”. (7 August)

And that this, of course, carries over to constructions like the projective 2-space associated with a vector 2-space, in that there are non-equivalent categories of injective maps of, say, (1,0) into the two different forms of (2,2). In other words the (p,q) notation had best be dropped.

Posted by: David Corfield on October 12, 2006 4:41 PM | Permalink | Reply to this

skeletal 2-vector spaces and cohomology

Up to equivalence, 2-vector spaces are classified by two natural numbers - the “first and second Betti numbers”.

This is correct.

Proof.

Let

(1) 0 \to V_1 \stackrel{\delta}{\to} V_0 \to 0

be the chain complex representing an arbitrary BC 2-vector space \mathbf{V}.

Let

(2) 0 \to \mathrm{ker}(\delta) \stackrel{0}{\to} V_0/\mathrm{im}(\delta) \to 0

be the cohomology of the above complex, representing a skeletal 2-vector space H(\mathbf{V}).

In order to demonstrate the equivalence

(3) \mathbf{V} \simeq H(\mathbf{V})

I construct weakly inverse functors going back and forth as follows:

first, I choose on both V_1 and on V_0 a scalar product. I use this to decompose

(4) V_1 = \mathrm{ker}(\delta) \oplus (\mathrm{ker}(\delta))^\perp

and

(5) V_0 = \mathrm{im}(\delta) \oplus (\mathrm{im}(\delta))^\perp \,.

Given this choice, there is an obvious functor

(6) f : \mathbf{V} \to H(\mathbf{V})

represented by the chain map

(7) \array{ 0 &\to& V_1 &\stackrel{\delta}{\to}& V_0 &\to& 0 \\ && \downarrow && \downarrow \\ 0 &\to& \mathrm{ker}(\delta) &\stackrel{0}{\to}& V_0/\mathrm{im}(\delta) &\to& 0 } \,,

where the vertical arrows are the obvious projections (and using the obvious identification (\mathrm{im}(\delta))^\perp \simeq V_0/\mathrm{im}(\delta)).

Similarly, the obvious functor

(8) g: H(\mathbf{V}) \to \mathbf{V}

is represented by the chain map

(9) \array{ 0 &\to& \mathrm{ker}(\delta) &\stackrel{0}{\to}& V_0/\mathrm{im}(\delta) &\to& 0 \\ && \downarrow && \downarrow \\ 0 &\to& V_1 &\stackrel{\delta}{\to}& V_0 &\to& 0 } \,,

where now the vertical maps are the obvious embeddings.

(It is clear that both maps above are indeed chain maps: the square in the middle always commutes.)

Now, the composite functor

(10) H(\mathbf{V}) \stackrel{g}{\to} \mathbf{V} \stackrel{f}{\to} H(\mathbf{V})

equals the identity on H(\mathbf{V}) on the nose, as one sees by composing the respective chain maps.

On the other hand,

(11) \mathbf{V} \stackrel{f}{\to} H(\mathbf{V}) \stackrel{g}{\to} \mathbf{V}

is not equal to the identity. Instead, it corresponds to the chain map

(12) \array{ 0 &\to& V_1 &\stackrel{\delta}{\to}& V_0 &\to& 0 \\ && \downarrow && \downarrow \\ 0 &\to& V_1 &\stackrel{\delta}{\to}& V_0 &\to& 0 } \,,

where the vertical arrow

(13) \array{ V_1 \\ \downarrow \\ V_1 }

deletes the (\mathrm{ker}(\delta))^\perp-component of every vector, and where

(14) \array{ V_0 \\ \downarrow \\ V_0 }

deletes the \mathrm{im}(\delta)-component.

So that’s not the identity map. But it differs from the identity map by an invertible chain homotopy.

Let

(15) \array{ && V_0 \\ &\swarrow& \\ V_1 }

be the lift of every element in V_0 to its unique preimage under \delta sitting in (\mathrm{ker}(\delta))^\perp.

Then one sees that

(16) \array{ V_1 \\ \downarrow \\ V_1 } \;\;\; + \;\;\; \array{ V_1 &\stackrel{\delta}{\to}& V_0 \\ &\swarrow& \\ V_1 } \;\;\; = \;\;\; \array{ V_1 \\ \;\;\downarrow \mathrm{Id} \\ V_1 }

and

(17) \array{ V_0 \\ \downarrow \\ V_0 } \;\;\; + \;\;\; \array{ && V_0 \\ &\swarrow& \\ V_1 &\stackrel{\delta}{\to}& V_0 } \;\;\; = \;\;\; \array{ V_0 \\ \;\;\downarrow \mathrm{Id} \\ V_0 } \,.

This proves that f and g are weak inverses, hence

(18) \mathbf{V} \simeq H(\mathbf{V}) \,.

In fact, I expect that every n-term chain complex is equivalent to its cohomology (inside the (n+1)-category of n-term chain complexes). But I haven’t tried to check that.

In the end, the reason why cohomology of (co)chain complexes is a “good idea” should be exactly because it gives the skeletal version of the corresponding n-vector spaces.

And it’s nicely visible in the above diagrams why this is so:

Cohomology is all about looking at

(19) \mathrm{ker}(\delta)/\mathrm{im}(\delta) \,.

Why?

Because chain homotopy tells you that elements not in \mathrm{ker}(\delta) are “pure gauge”, since you may always add

(20) \array{ V_1 &\stackrel{\delta}{\to}& V_0 \\ &\swarrow& \\ V_1 } \,.

Similarly, elements in \mathrm{im}(\delta) are pure gauge, since we may always add

(21) \array{ && V_0 \\ &\swarrow& \\ V_1 &\stackrel{\delta}{\to}& V_0 } \,.
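The skeletal invariants in this proof are just (dim ker δ, dim coker δ), so by rank-nullity they can be read off from the rank of δ alone. A minimal numerical sketch (the matrix standing in for δ is a made-up example):

```python
from fractions import Fraction

def rank(rows):
    """Matrix rank by Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c]), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c]:
                f = m[i][c] / m[r][c]
                m[i] = [u - f * w for u, w in zip(m[i], m[r])]
        r += 1
    return r

# hypothetical delta : V_1 = k^2 -> V_0 = k^2 of rank 1
# (columns index V_1, rows index V_0; the second column is twice the first)
delta = [[1, 2], [2, 4]]

n1 = len(delta[0])  # dim V_1
n0 = len(delta)     # dim V_0
r = rank(delta)

# the skeletal 2-vector space is k^{p,q} with
#   p = dim ker(delta) = n1 - r,  q = dim coker(delta) = n0 - r
print((n1 - r, n0 - r))  # -> (1, 1)
```

So this particular non-skeletal complex is equivalent to k^{1,1}.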
Posted by: urs on October 12, 2006 6:30 PM | Permalink | Reply to this

Re: skeletal 2-vector spaces and cohomology

Right. So where does that leave us? Either it was a different form of 2-vector space, i.e., not BC, or it was just plain wrong. And, I guess there was a small worry that there’d be no (0,n) 2-spaces for n \gt 0.

Where’s the Wizard when you need him? Off saving the planet, I suppose.

Posted by: David Corfield on October 12, 2006 7:49 PM | Permalink | Reply to this

Re: skeletal 2-vector spaces and cohomology

Either it was a different form of 2-vector space […]

You have to help me here, somehow I must have missed something. Is this about David Roberts’ notes?

I’d be grateful for a quick intro to the notation used there. Is it right that David Roberts is drawing the graphs which serve as a basis for 2-vector spaces, the way Tim Silverman has been elaborating on?

Is the issue which you find disturbing maybe the fact that David Roberts draws two different kinds of graphs (either two disjoint loops or one figure eight) that both are supposed to span (in the ordinary sense of the span of a family of vectors) a k^{2,2}?

If so, that shouldn’t say anything else but that these two different spans are different but equivalent 2-vector spaces. As one would expect.

And, I guess there was a small worry that there’d be no (0,n) 2-spaces for n\gt 0.

Where did that worry come from? Here is one:

(1) 0 \to k^n \stackrel{0}{\to} k^0 \to 0 \,.

Or am I not aware of something?

Posted by: urs on October 12, 2006 8:19 PM | Permalink | Reply to this

Re: skeletal 2-vector spaces and cohomology

I’d be grateful for a quick intro to the notation used there. Is it right that David Roberts is drawing the graphs which serve as a basis for 2-vector spaces, the way Tim Silverman has been elaborating on?

I don’t know if it relates to what Tim’s been doing. From my perspective it arose from this comment, where David Roberts raised the possibility that Batanin’s plane trees of height 2 should provide a notion of 2-ordinal. That chimed with something I’d been thinking about, so I replied with this. You can then follow the two comments we made, when you joined in.

The worry about there being no (0,n) 2-spaces came from the thought that there are no non-empty plane trees of height 2, with no vertices at what David R calls level 1, i.e., the middle level on his diagrams.

It appeared to us (or was that only me?) that in the case of (2,2) 2-spaces, they behaved quite differently. But perhaps that is no more exciting than 1+3 and 2+2 might appear different although each is 4.

Posted by: David Corfield on October 12, 2006 8:38 PM | Permalink | Reply to this

Re: skeletal 2-vector spaces and cohomology

All right, thanks.

Now I have taken a look at section 2.1 of the paper David Roberts mentioned.

So that’s nice:

we think of an ordinary ordinal n as the poset

(1) n \gt (n-1) \gt \cdots \gt 2 \gt 1 \,,

and hence as a category.

(2) [n] := \{ n \to (n-1) \to \cdots \to 2 \to 1 \} \,,

Noticing that a functor

(3) [n] \to [m]

is then nothing but an order-preserving map of these posets, we can sort of categorify this idea by passing to ordinals internal to ordinals, i.e. chains of functors

(4) [n_r] \to [n_{r-1}] \to \cdots \to [n_1] \,.

These can be thought of as planar trees. There are n_s leaves at height s, and the order-preserving maps are the branches connecting each leaf to the tree.

Right. So now I guess the game we want to play is to see if 2-ordinals of this sort of height 2, i.e. functors

(5) [n_2] \to [n_1]

are to 2-vector spaces equivalent to k^{n_1,n_2} as ordinary ordinals

(6) n

are to ordinary vector spaces equivalent to k^n.

(I like saying “ordinary ordinal”).

Yes, I guess so!

Maybe the following is what David Roberts has been saying all along, but anyway, it doesn’t hurt then to say it again:

Given such a tree of height 2

(7) [n_1] \to [n_0]

we use the n_0 elements downstairs as basis elements \bullet_{1}, \bullet_2, \cdots, \bullet_{n_0} of a k-vector space

(8) \langle \bullet_{1}, \bullet_2, \cdots, \bullet_{n_0} \rangle_k \simeq k^{n_0} \,.

(The brackets are supposed to denote the ordinary lin-alg notion of k-linear span, just the collection of all linear combinations.)

That’s what will be our space of objects

(9) V_0 := \langle \bullet_{1}, \bullet_2, \cdots, \bullet_{n_0} \rangle_k \,.

Similarly for the space of morphisms:

(10) V_1 := \langle \circ_{1}, \circ_2, \cdots, \circ_{n_1} \rangle_k \,.

We still need to define a linear map

(11) \delta : V_1 \to V_0 \,.

And the obvious move is to take \delta to be the linear extension of the map that underlies [n_1] \to [n_0].

In words: the morphism vector \circ_i has target the object vector \bullet_j precisely if the i-th leaf of our tree is connected to the j-th root by a branch.

So we can linearly generate a 2-vector space which deserves to be called

(12) k^{([n_1]\to [n_0])} \,.

It must be equivalent to some

(13) k^{r,s}

(which, by definition, has \delta = 0) for some r and some s.

I am guessing that there is a notion of equivalence on our ordinals of ordinals which is compatible with the equivalence of the 2-vector spaces spanned by them. But, on the other hand, the above construction never yields \delta = 0 directly.

I think my conclusion here is that these ordinals of ordinals are an interesting way to construct specific 2-vector spaces. But not a way to directly denote their equivalence classes.
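To make that conclusion concrete, here is a sketch of the linear-span construction just described, encoding a height-2 tree as the list of roots its leaves attach to (the encoding, and the claim that two specific trees differ, are my own illustration). Two trees with the same numbers of leaves and roots can span inequivalent 2-vector spaces:

```python
# A height-2 tree [n_1] -> [n_0] encoded as a list: tree[i] = root of leaf i.
# Linear span: V_1 = k^{n_1}, V_0 = k^{n_0}, delta sends the i-th morphism
# basis vector to the tree[i]-th object basis vector.
def skeletal_dim(tree, n0):
    """Skeletal invariants (dim ker delta, dim coker delta)."""
    n1 = len(tree)
    # delta's columns are standard basis vectors e_{tree[i]}, so its rank
    # is the number of distinct roots hit by a leaf
    r = len(set(tree))
    return (n1 - r, n0 - r)

# one leaf over each of two roots, vs. both leaves over the first root:
print(skeletal_dim([0, 1], 2))  # -> (0, 0)
print(skeletal_dim([0, 0], 2))  # -> (1, 1)
```

Both trees have n_1 = n_0 = 2, yet their spans have different skeletal invariants, so the pair (n_1, n_0) of a tree does not determine the equivalence class.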

Posted by: urs on October 12, 2006 9:08 PM | Permalink | Reply to this

Re: skeletal 2-vector spaces and cohomology

Sorry, is the idea here to construct 2-posets as a way to construct 2-lattices? Is that what David Roberts is aiming for?

The dimension issue seems to be separate, and easier.

Posted by: Tim Silverman on October 12, 2006 10:26 PM | Permalink | Reply to this

Re: skeletal 2-vector spaces and cohomology

You shouldn’t believe everything I say, particularly when it’s clearly wrong! At the level of equivalence, it doesn’t make the slightest difference which graph we use, so long as it has the right homotopy groups to match the 2-dimension.

Posted by: Tim Silverman on October 12, 2006 10:16 PM | Permalink | Reply to this

Re: skeletal 2-vector spaces and cohomology

The whole point of the 2-ordinals is to keep track of the dimensions of subobjects, but I see now that they correspond to chain complexes of length 2, not skeletal vector 2-spaces (I take the nomenclatorial hint!).

As a side note, it appears we need to use only those trees whose rightmost tip is on level 2. I have a reason for this, by an analogy, but perhaps I am wrong. I guess I really should show that equivalent chain complexes, when translated into 2-ordinals (the name seems less appropriate now), give the same dimension using the algorithm above, but I’m out of time.

Addendum: I think I have inadvertantly strayed away from plane trees as Batanin 2-ordinals, and twisted them to my own use. They are certainly good for keeping track of the cellular comlex from which we get the chain complex. I cede the point to David C about the ordering - I now think of my old `2- ordinals’ as being ordered by inclusion of subtrees, where everything has to be based at the same level. Thus the tree with two branches of length 2 is the composition at dim 0 (see Batanin’s article, section 2.1 link above) of two trees with one branch of length two, corresponding to disjoint union of pointed circles. The disjoint union of a point and a pointed circle is the tree with one branch of length 2 and one of length 1.

Posted by: David Roberts on October 13, 2006 8:47 AM | Permalink | Reply to this

Re: skeletal 2-vector spaces and cohomology

I very much like the general idea here, that dimension of a 2-vector space should be some sort of 2-ordinal.

There should be an equivalence relation on 2-ordinals that is compatible with the equivalence relation on the 2-vector spaces obtained by forming their linear span.

I think you (David Roberts), in the last message #, move in this direction by arguing how to extract the “skeletal 2-dimension” from a 2-ordinal.

I think, however, that the current definition of 2-ordinal is not yet quite the right one that we need for dimension of 2-vector spaces.

With the right notion of 2-ordinal, there should be equivalence classes of 2-ordinals, which are automatically labeled by the “skeletal dimension”.

To see that the definition Batanin gives is not quite suitable for our purposes, notice for instance that when forming the linear span (the way I described), you can never get \delta = 0 from a 2-ordinal.

For 2-vector spaces this is only slightly inconvenient, but for (n \gt 2)-vector spaces it becomes plain wrong (for our purposes), because there we need

(1)\; \delta_{n+1} \circ \delta_n = 0 \,.

So we should think about slightly modifying the definition.

I haven’t fully thought about it yet, but a quick hack that might work is to use pointed posets.

Then, when forming the linear span of a 2-ordinal of pointed posets, send the basepoint not to a separate basis element, but to the 0-vector.

This way, for instance, we could represent the dimension of the skeletal k^{p,q} by the 2-ordinal

(2)\; \{*, \circ_1, \circ_2, \cdots, \circ_q\} \to \{*, \bullet_1, \bullet_2, \cdots, \bullet_p\}

which maps everything on the left to the basepoint * on the right, thus yielding \delta = 0.

But I haven’t checked yet if equivalence classes of 2-ordinals defined this way behave the way we want.
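This pointed-set linear span is easy to sketch in code. The following is only my own toy encoding (the name span_delta and the data layout are invented for illustration): \delta becomes a matrix indexed by the non-basepoint elements, and anything sent to the basepoint contributes a zero column, so the 2-ordinal for skeletal k^{p,q} really does yield \delta = 0.

```python
# Toy encoding (my own, for illustration) of the linear span of a map of
# pointed sets: the basepoint '*' goes to the zero vector, every other
# element indexes a basis vector.

def span_delta(f, source, target):
    """Return delta as a matrix (list of rows), size (|target|-1) x (|source|-1).

    `source` and `target` are lists whose first entry is the basepoint '*';
    `f` is a dict mapping non-basepoint source elements to target elements."""
    src = source[1:]   # non-basepoint elements index the basis
    tgt = target[1:]
    return [[1 if f[s] == t else 0 for s in src] for t in tgt]

# The 2-ordinal for skeletal k^{p,q} with p = 3, q = 2: everything on the
# left is mapped to the basepoint on the right, so delta is the zero matrix.
source = ['*', 'o1', 'o2']            # q = 2 non-basepoint elements
target = ['*', 'b1', 'b2', 'b3']      # p = 3 non-basepoint elements
f = {'o1': '*', 'o2': '*'}

delta = span_delta(f, source, target)
print(delta)   # [[0, 0], [0, 0], [0, 0]]
```

Sending 'o1' to 'b1' instead would put a single 1 in the matrix, recovering the ordinary (non-pointed) span.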

Posted by: urs on October 13, 2006 10:09 AM | Permalink | Reply to this

Re: skeletal 2-vector spaces and cohomology

For anyone confused by Urs’s comment re my last post, I had to remove some embarrassingly incorrect points. One thing that Urs pointed out was that we need dimension to work with respect to direct sum. How does this work at the level of chain complexes? Termwise? I would like to relate this to ‘horizontal composition’ \#_0 as in Batanin section 2.1, where we join plane trees at their base. It certainly works for ordinary ordinals thought of as height-1 plane trees.

Please note that the email address connected with my name on these posts seems to have become defunct recently; I use it to protect my real email address, which can be found on my web page, if you really need it.

Posted by: David Roberts on October 13, 2006 10:49 AM | Permalink | Reply to this

comments and email addresses

I should maybe add that the email address which is requested when submitting a comment is not displayed on the weblog. It is visible only internally to the blog’s hosts.

What is visible, and connected by a hyperlink to the name of the comment’s author, is the homepage url - if any was provided.

I should maybe also add that it is perfectly fine if anyone wants to remain anonymous when submitting comments. Interesting anonymous comments are preferred over no comments at all.

Posted by: urs on October 13, 2006 2:33 PM | Permalink | Reply to this

Re: skeletal 2-vector spaces and cohomology

So here is a problem # we have come up against: we cannot seem to represent (0,n)-dimensional 2-vector spaces as the homology of a cell complex. But what if we use reduced homology? Then k^{0,1} is the reduced homology of a circle.

But leaving that aside for now, a clarification needs to be made re: ordinals and dimensions. Dimension is a cardinal number, even in 2-vector spaces (where it is a pair of such). It’s just that incidence geometry wants to keep track of subobjects, and so we need to keep track of their dimensions, hence ordinals. Unfortunately things have gotten bogged down in this train of thought, and haven’t progressed to the definition I’m after, which categorifies this definition. AFAISI vector spaces aren’t interesting incidence geometries, as everything intersects in a subspace, even if it is the zero subspace. Far different from, say, affine 3-space, where lines can be skew. So here’s a step backwards - what do we do about affine (p,q)-space? It’s a lot more homogeneous than a vector space ;)

Posted by: David Roberts on October 16, 2006 4:47 AM | Permalink | Reply to this

Re: Klein 2-Geometry VI

I’ll post this before I reply to various people about things they’ve said or asked.

I’ve made two very stupid mistakes stemming from one source, and I am still kicking myself vigorously over this, because, out of everything I have worked on so far, this one is perfectly simple and clear, and I should have understood it right from the beginning. I just didn’t bother to do a simple calculation properly or think through the consequences of what I was saying. I will do it properly here with painful explicitness so that even idiots like me can understand it.

We need the boundary map to commute with chain maps. This means we need to actually understand what it does, not sort of vaguely guess.

Consider two edges and their vertices:

D:      a\rightarrow b

E:      p\rightarrow q

We have a chain map E\rightarrow D.

Let’s suppose that the edge function on p\rightarrow q in E has a value f. Let’s suppose the effect of the edge map to D is to multiply this by x. Then the edge function on a\rightarrow b in D gets the value x f.

Suppose the vertex map sends p\rightarrow a and q\rightarrow b. Then the boundary map on D will assign the value x f to b and -x f to a.

Doing it the other way round, the boundary map on E will assign the value f to q and -f to p. If the chain map on vertices multiplies the value on q by y, then the image of the value f on q gets sent to y f on b. If the boundary map is to commute with the chain map, this is only possible if y=x. Similarly for p and a. So there is just one multiplying factor for the edge and both vertices.

Alternatively, suppose instead that the vertex map sends p\rightarrow b and q\rightarrow a. The boundary map on D will still assign the value x f to b and -x f to a.

But on the other branch of the commutative square, the boundary map on E will assign the value f to q and -f to p. If the chain map on vertices multiplies the value on q by y, then the image of the value f on q gets sent to y f on a. This is only possible if y=-x. Similarly for p and b. So the multiplying factor for the edge is minus that for both vertices.

What about edges sharing a vertex? Suppose we have two graphs

F:      a\rightarrow b\rightarrow c

G:      p\rightarrow q\rightarrow r

Consider the chain map G\overset{X}{\rightarrow}F, sending p\rightarrow a, q\rightarrow b and r\rightarrow c. Assume the edge p\rightarrow q has the value f and gets multiplied by x, and the edge q\rightarrow r has the value g and gets multiplied by y. Finally, suppose the vertex q gets multiplied by z. Looking at the value of \partial\circ X on b, we get xf-yg. Looking at the value of X\circ\partial on b, we get z(f-g). Obviously, setting these things equal requires x=y=z.

Instead of F, consider H:

a\rightarrow b\leftarrow c

Consider the chain map G\overset{Y}{\rightarrow}H, sending p\rightarrow a, q\rightarrow b and r\rightarrow c. Assume once again that the edge p\rightarrow q has the value f and gets multiplied by x and the edge q\rightarrow r has the value g and gets multiplied by y, also that the vertex q gets multiplied by z. Looking at the value of \partial\circ Y on b, we get xf+yg. Looking at the value of Y\circ\partial on b, we get z(f-g). Obviously, setting all these three things equal requires x=-y=z.

Conclusion: in a single component of the source graph, all vertices get multiplied by the same factor z, and each edge gets multiplied either by z, if its orientation matches that of the target, or -z, if its orientation is reversed. I can’t believe I got this so wrong before. Grr! <Kicks self again. Take that, dolt!> It’s not like I’ve never calculated a boundary or coboundary before. Just apparently never with my brain switched on. <Sigh!> Ah well, this too will pass.
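This sign rule can be checked numerically. Here is a minimal Python sketch (my own encoding, not from the thread): vertices of one component scale by z, and each edge scales by +z or -z according to whether its orientation matches the target’s, making the square with the boundary map commute.

```python
# Minimal check (my own encoding) of the sign rule: on one component,
# vertices scale by z and each edge by +z or -z depending on orientation.

def boundary(edges, edge_vals):
    """edges: list of (tail, head); edge_vals: value per edge (parallel list).
    Returns a dict vertex -> value of the boundary 0-chain."""
    out = {}
    for (u, v), val in zip(edges, edge_vals):
        out[v] = out.get(v, 0) + val
        out[u] = out.get(u, 0) - val
    return out

z, f, g = 5, 2, 3

# Source G: p -> q -> r ; target H: a -> b <- c  (second edge reversed)
G_edges = [('p', 'q'), ('q', 'r')]
H_edges = [('a', 'b'), ('c', 'b')]

# Path 1: boundary in G, then map vertices (p,q,r) -> (a,b,c), scaling by z.
dG = boundary(G_edges, [f, g])
vmap = {'p': 'a', 'q': 'b', 'r': 'c'}
path1 = {vmap[v]: z * val for v, val in dG.items()}

# Path 2: map edges first. (p,q) -> (a,b) keeps orientation: factor +z;
# (q,r) -> (c,b) reverses it: factor -z. Then take the boundary in H.
path2 = boundary(H_edges, [z * f, -z * g])

print(path1 == path2)   # True: the square commutes
```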

I don’t think it does any actual harm to restrict the chain maps to those which preserve (or indeed reverse) orientation on all arrows, but this under-represents the contents of the 2-vector space. (It’ll violate one of the kinds of surjectivity, I guess; maybe a little later I’ll work out which one.) After all, the 2-vector space is characterised by its dimension, and the homotopy underlying this sure doesn’t care about the direction of the arrows.

We also need to think carefully about multiple components in the source graph. I had a moment of panic here, but I think it’s OK.

Suppose we are trying to map

      p\rightarrow q      r\rightarrow s

to

      a\rightarrow b\rightarrow c\rightarrow d

sending p\rightarrow a, q\rightarrow b, r\rightarrow c and s\rightarrow d.

Let’s say the map is called X, that the edge p\rightarrow q has the value f and the edge r\rightarrow s has the value g, and that the p\rightarrow q component gets multiplied by x while the r\rightarrow s component gets multiplied by y. Let’s suppose the value on b\rightarrow c in the image is h.

Then, looking at the value of \partial\circ X on a, b, c and d, we get

\partial\circ X(a)=-xf
\partial\circ X(b)=xf-h
\partial\circ X(c)=-yg+h
\partial\circ X(d)=yg

On the other hand,

X\circ\partial (a)=-xf
X\circ\partial (b)=xf
X\circ\partial (c)=-yg
X\circ\partial (d)=yg

So the only requirement is that h=0, which we expected anyway. Phew!
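The h=0 requirement is easy to confirm with numbers. A small sketch (my own encoding; commutes is an invented helper) compares the two ways around the square for the map of two disjoint edges into the length-3 path:

```python
# Check (my own encoding) that commuting with the boundary forces the
# image 1-chain to vanish on the middle edge b -> c.

def boundary(edges, edge_vals):
    out = {}
    for (u, v), val in zip(edges, edge_vals):
        out[v] = out.get(v, 0) + val
        out[u] = out.get(u, 0) - val
    return out

x, y, f, g = 2, 3, 5, 7   # arbitrary nonzero choices

def commutes(h):
    """Does the square commute when the image 1-chain has value h on b -> c?"""
    # Path 1: boundary in the source, then push vertices p,q,r,s -> a,b,c,d.
    src = boundary([('p', 'q'), ('r', 's')], [f, g])
    vmap = {'p': 'a', 'q': 'b', 'r': 'c', 's': 'd'}
    scale = {'p': x, 'q': x, 'r': y, 's': y}   # one factor per component
    path1 = {vmap[v]: scale[v] * val for v, val in src.items()}
    # Path 2: map edges first, then take the boundary in the target.
    path2 = boundary([('a', 'b'), ('b', 'c'), ('c', 'd')], [x * f, h, y * g])
    return path1 == path2

print(commutes(0), commutes(1))   # True False
```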

We also need to remember that two different chain maps may look identical on vertices (e.g. if more than one edge connects a given pair of vertices), and we should only count these once when looking at 0-chains.

Also, two chain maps which differ only in the orientation of an edge reduce to the same map on edges only, apart from the multiplicative factor, which disappears when we go to projective space.

There are two conclusions:

1) We need to include maps from source to target irrespective of the matching of orientation.
2) For each component w of the source graph, if there are \vert w\vert_0 ways of mapping it to the target which are distinct on vertices, then it contributes a factor of kP^{\vert w\vert_0-1} to the space of 0-chain maps; if there are \vert w\vert_1 ways of mapping it to the target which are distinct on edges, then it contributes a factor of kP^{\vert w\vert_1-1} to the space of 1-chain maps.

The boundary map also maps factor by factor as before, but I need to think about this a bit.

OK, the mistake, though stupid, does not seem to have been quite as disastrous as I feared. Sometimes when I clear up a mistake, all the mathematically interesting behaviour evaporates.

I guess chain homotopies now work as before again, but I need to think this through properly!

Once again, if the source graph has non-trivial automorphisms, these will contribute higher Grassmannians.

Posted by: Tim Silverman on October 9, 2006 7:45 PM | Permalink | Reply to this

Re: Klein 2-Geometry VI

Has this answered my worry? After John taught us about evil on August 10, we seemed to decide that sub vector 2-spaces of a (1,1) 2-space would have to have dimensions no greater. So I’m a bit dubious about (2,0) sub-2-spaces. Unless John was evilly misleading us about evil.

Later on August 10 I suggested a form of monic adapted for functors. There must be something like this in the literature. You can see though that no functor from the discrete category on two objects to the product groupoid on two objects (i.e., one arrow between each pair of objects) would be ‘monic’, left cancelable up to natural isomorphism.

Posted by: David Corfield on October 10, 2006 9:37 AM | Permalink | Reply to this

Re: Klein 2-Geometry VI

David asked:

Has this answered my worry?

I don’t know. I wrote a reply to your “worry” post yesterday but it seems to have got lost. It began (after quoting your paragraph beginning “One technical point”):

That’s not a technical point, it’s a moral point! And yes, I am being extraordinarily naughty! But, mad wild hedonist that I am, I don’t care, so long as I’m having fun, even though I know there’ll be hell to pay later!

After more in the same vein, basically suggesting we take the whole “good/evil” business lightly until we are halted by confusion or boredom, I attempted to address the technical point anyway, but without thinking too much or doing anything really technical.

Since I haven’t thought about this since then, I can only attempt to repeat what I think I said then: I think, by forcing injectivity on vertices, I’ve cut down on the allowed homotopies (i.e. natural transformations) thereby reducing the need for a large supply of morphisms (i.e. 1-chains). Whether this metaphorical take on things actually makes sense remains to be seen.

You also said:

By the way, we haven’t been introduced. You can see my details from our front page. What’s your background?

How do you do. I’m Tim! :-)

I don’t do this stuff for a living! I guess I’m a dilettante, but maybe a hardcore dilettante, working far from academia, but wandering aside to take a large bite out of whatever intellectual matter happens to catch my interest, chew it over, then wander elsewhere, sometimes to return later, sometimes not. You’ll notice strange holes in my knowledge where I happen never to have become interested in some basic area of some subject that everybody else is supposed to have learned at their mother’s knee.

Posted by: Tim Silverman on October 10, 2006 8:00 PM | Permalink | Reply to this

Re: Klein 2-Geometry VI

Before I go on, I want to do one example of a boundary map on the space of chain maps, hopefully not completely trivial.

Consider the maps into the following graph:

a\rightarrow b\rightarrow c

from the following graph:

p\rightarrow q.

There are four maps. In this case we can distinguish them by where they send the vertices:

(p,q)\rightarrow(a,b)
(p,q)\rightarrow(b,a)
(p,q)\rightarrow(b,c)
(p,q)\rightarrow(c,b)

Let’s call the multiplying factors of these respectively x_{a b}, x_{b a}, x_{b c} and x_{c b}.

Although the restriction of the maps to vertices is injective, the restriction to edges isn’t, since the a b and b a maps go to the same edge, as do the b c and c b maps.

The 0-chain map, represented in the basis (p,q)\rightarrow(a,b,c), gives

(h,k)\rightarrow(x_{a b} h + x_{b a} k,\; x_{a b} k + x_{b a} h + x_{b c} h + x_{c b} k,\; x_{b c} k + x_{c b} h).

The 1-chain map, represented in the basis (p\rightarrow q)\rightarrow(a\rightarrow b, b\rightarrow c), gives

(f)\rightarrow((x_{a b} - x_{b a})f,\; (x_{b c} - x_{c b})f).

The boundary then gives

\partial(f)\rightarrow(-f,f), which then gets sent to

(-x_{a b} f + x_{b a} f,\; x_{a b} f - x_{b a} f - x_{b c} f + x_{c b} f,\; x_{b c} f - x_{c b} f)

=((-x_{a b} + x_{b a}) f,\; (x_{a b} - x_{b a}) f + (-x_{b c} + x_{c b}) f,\; (x_{b c} - x_{c b}) f)

Comparing this to the expression for the 1-chain map, we see that in the image, we get

\partial:(u,v)\rightarrow(-u, u-v, v) as expected.

Of course, all this is done projectively, so really we are interested in the projective space of 0-chain maps kP^3 and the projective space of 1-chain maps kP^1. (I’m ignoring the graph automorphism (a b\rightarrow b a) for the moment.)

The boundary map is still simple to describe, since we mod out symmetries of the source and therefore are left with all the interesting stuff happening in the image. If we go over to projective space by dividing throughout by v, and setting w=\frac{u}{v}, we get

\partial(w)\rightarrow(-w, w-1, 1) as the map of the projective line of projective 1-chain maps into the projective 3-space of projective 0-chain maps.
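Since everything here is linear, the commutativity of this square can be checked with concrete numbers. A minimal Python sketch (my own encoding of the formulas above; the factor values are arbitrary):

```python
# Numerical check (my own encoding): applying the 0-chain map to the
# source boundary of f agrees with the target boundary d(u,v) = (-u, u-v, v)
# applied to the 1-chain map of f.

xab, xba, xbc, xcb = 2, 3, 5, 7   # arbitrary multiplying factors
f = 11

# Source boundary of the 1-chain f on p -> q, in the basis (p, q): (-f, f).
h, k = -f, f

# 0-chain map in the basis (a, b, c), as computed above.
chain0 = (xab*h + xba*k,
          xab*k + xba*h + xbc*h + xcb*k,
          xbc*k + xcb*h)

# 1-chain map in the basis (a->b, b->c), then the target boundary.
u, v = (xab - xba)*f, (xbc - xcb)*f
via_target = (-u, u - v, v)

print(chain0 == via_target)   # True
```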

I guess one projective line looks much like another, so this isn’t a particularly exciting result. When we get to look at homotopies, things might get slightly more interesting.

First, though: Grassmannians. I think I’ll put them in the next post.

Posted by: Tim Silverman on October 10, 2006 8:16 PM | Permalink | Reply to this

Quasi Isomorphisms and Equivalences

I have asked the following elsewhere already, but maybe I should ask again:

Given an abelian category C, form the category K(C) of complexes in C. Objects are complexes, morphisms are chain maps.

Here we are talking about 2-term complexes in C = \mathrm{Vect}.

However, we realize that 2-term complexes actually live in a 2-category, with 2-morphisms being chain homotopies between chain maps.

In general, n-term chain complexes form an n-category.

But people who write K(C) usually regard it just as a 1-category #. They do however invent another trick to weaken things.

A chain map of chain complexes is said to be a quasi-isomorphism if it becomes an isomorphism after one passes to the cohomology of the chain complex.

It’s a popular move to pass from K(C) to the category D(K(C)) (the derived category of complexes #) by declaring all quasi-isomorphisms to be true isomorphisms.

I wonder: how is this related to regarding K(C) as an \omega-category and considering weakly invertible morphisms?

There is probably a simple answer to this. If so, we should get a universe of interesting examples for applications of n-vector spaces and all things related by substituting all occurrences of quasi-isomorphism in the K(C)-literature by the appropriate notion of weak invertibility.

Posted by: urs on October 10, 2006 8:25 PM | Permalink | Reply to this

Re: Klein 2-Geometry VI

Before thinking about how 2-Grassmannians arise from graph maps, let’s think about how 1-Grassmannians arise from set maps.

In fact, before I do that, I just want to think of Grassmannians as spaces of subspaces in the most boring set-theoretical way.

We want to think about the space of m-dimensional subspaces of an n-dimensional space, m\le n. We can do this by picking a basis of the n-dimensional space, ordering it, and picking e.g. the first m members of it to represent the m-dimensional subspace. Pick two such bases to represent two m-dimensional subspaces. Given the first basis, we can get to any other by a member of GL(n). However, the whole thing works just as well if we stick an arbitrary metric on the space and use orthonormal bases, in which case we can get from one to another with a member of O(n). Which I find a little easier to imagine for some reason, so I will use that if I can.

However, we don’t care about internal basis changes of the subspace, so we need to quotient by O(m). And we also don’t care about basis changes that don’t affect the subspace, so we also need to quotient by O(n-m). So the Grassmannian is the homogeneous space \frac{O(n)}{O(m)\times O(n-m)}. In fact, I guess we can probably cancel factors of Z_2 and get \frac{SO(n)}{O(m)\times SO(n-m)}. For instance, with m=1, we have \frac{SO(n)}{O(1)\times SO(n-1)}. Since \frac{SO(n)}{SO(n-1)} is geometrically S^{n-1} and O(1) is Z_2, we get kP^{n-1} as expected.
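As a quick sanity check on the coset description, a few lines of Python (my own aside; dim_grassmannian is an invented name) confirm that O(n)/(O(m) x O(n-m)) has the familiar Grassmannian dimension m(n-m), using only dim O(n) = n(n-1)/2:

```python
# Dimension count (my own check, not from the post) for the coset space
# O(n) / (O(m) x O(n-m)), using dim O(n) = n(n-1)/2.

def dim_O(n):
    return n * (n - 1) // 2

def dim_grassmannian(m, n):
    return dim_O(n) - dim_O(m) - dim_O(n - m)

# Agrees with the standard formula m(n-m) in every case tried.
for n in range(1, 8):
    for m in range(0, n + 1):
        assert dim_grassmannian(m, n) == m * (n - m)

print(dim_grassmannian(1, 4))   # 3: lines through the origin in k^4, i.e. kP^3
```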

OK, so far, so elementary; this is just me thinking aloud, sorry to be so slow and cumbersome.

Next, let’s consider maps from a set M with m elements into a set N with n elements, m\le n.

The space of functions to a field k on M is an m-dimensional vector space over k, isomorphic to k^m. Likewise, the space of functions on N is isomorphic to k^n.

OK, consider the space of maps k^m\rightarrow k^n.

Well, as far as set maps are concerned, we only care about which subset of N we end up with. We can get from any m-element subset to any other by an automorphism (i.e. permutation) of N, but we don’t care about the internal structure of the subset, so we mod out by permutations of that, and we don’t care about the internal structure of its complement, so we mod out by permutations of that.

So far, so good. Now as far as the set of maps M\rightarrow N is concerned, we can get from any map to any other by applying an automorphism to M and an automorphism to N. The automorphisms on M handle automorphisms of the image subset, but the automorphisms on N-\mathrm{Im}(M) need to be done separately.

OK, so I guess this carries over to the vector spaces of functions M, N\rightarrow k.

So we need to try and translate this to 2-vector spaces and 2-Grassmannians. We conclude that we need to mod out not only by automorphisms of the source chain generated by automorphisms of the source graph, but also by ones generated by automorphisms of the complement of its image.

Or something like that. OK, that’s enough for now.

Posted by: Tim Silverman on October 10, 2006 8:25 PM | Permalink | Reply to this

Re: Klein 2-Geometry VI

Before pressing on, I want to pause to take stock of what we’ve done so far.

As JB said, I’ve been dealing with a 2-functor from one 2-category to another. The target 2-category is the well-known one of 2-term chain complexes, chain maps between them, and chain homotopies of chain maps. On the other hand, I’ve been more than slightly vague about the source 2-category, so I should now be a bit more precise.

The objects of this category are directed graphs. Curiously, although the directions on the graph edges are used in forming the 2-functor to the 2-category of 2-term chain complexes, the directions don’t actually do anything very much in the graph category, since two graphs which differ only in the directions of their arrows are always isomorphic.

The morphisms are maps between graphs which map vertices to vertices, edges to edges, and agree about (i.e. commute with) the assignment of vertices as the boundaries of edges.

There are various possible choices one can make about which maps are legitimately morphisms of this category. I’ve been working with the choice that the maps are injective on both vertices and edges.

One could also allow them to merge vertices under certain circumstances. I think this will work OK if there is always an edge from each vertex to itself, and we require any two points joined by an edge in the source to still be joined by an edge in the target. We could also allow edges to merge if they have the same boundary vertices (either only in the target or in the source as well).

We could also allow an in-between set of morphisms which can merge vertices only if they are “isomorphic” in the sense of being linked to exactly the same other vertices (and, optionally, in exactly the same way, i.e. via isomorphic sets of edges).

But I’ll stick with the purely injective maps.

A directed graph generates a category in the obvious way, with vertices as objects and catenations of edges as morphisms, and in this case the morphisms between graphs generate functors between categories.

In addition, I want 2-morphisms. There are various ways to do this, but I’ll pick what seems to me to be one of the simplest. We start by sticking a graph structure on the set of morphisms. Two morphisms f and g are connected by an edge if \forall x: f_v(x) \sim g_v(x), where f_v and g_v are the vertex maps of f and g and the relation \sim indicates that two vertices are connected by an edge. In fact, we have an edge between f and g for every combination of ways of putting an edge between two of their target points, so we get a potentially huge cartesian product. (I don’t think there’s an obvious way of assigning a direction to these edges.)

Then this graph of morphisms generates a category of morphisms, whose morphisms are the 2-morphisms of the original category. We could stick extra conditions on this but I think this would make things a bit boring.

Just as the original graphs generate categories, and their morphisms generate functors between those categories, so these 2-morphisms generate natural transformations between the functors.

I’m more familiar with the lower-level case where, instead of a set (or a, um, 0-tangle? i.e. set of oriented points) of edges between two points, we have just a truth value of edges, i.e. a symmetric relation (possibly also reflexive, if we want to abandon the requirement of injectivity on vertices), so I’ll mention a couple of things I know about this.

In this case, the automorphisms of a graph form a group which is also a graph in a compatible way, namely that multiplication on the left and the right are graph automorphisms. (There ought to be a way to define this group in terms of a map from a product of a symmetric relation with itself, to itself. But when I tried this, my definition of product gave rise to a more boring sort of relation on the group, which had to be transitive as well as symmetric and reflexive. This arises if we use the stiffer condition on morphism relations that f \sim g iff \forall x \forall y: x \sim y \Rightarrow f(x) \sim g(y).)

This group has some properties reminiscent of topological groups, e.g. the identity component is a normal subgroup and is generated by a ‘neighbourhood of the identity’, i.e. any set containing everything linked to the identity. I guess that it gives rise to a (strict) 2-group. There’s probably some way of weakening this, e.g. by considering maps that are only automorphisms ‘up to the relation \sim’, but I don’t know what the exact right definition is, if there is one.

Posted by: Tim Silverman on October 11, 2006 8:31 PM | Permalink | Reply to this

(n+1)-morphisms of n-graphs

John Baez wrote:

Do graph theorists ever think about those homotopies? They should! #

Now Tim Silverman does:

In addition, I want 2-morphisms. #

If I had to do this, I’d model everything pertaining to n-graphs after n-categories.

So, as Tim has already indicated, I’d think of any n-graph G in terms of the n-category C_G it generates.

Then 1,2,\cdots,(n+1)-morphisms between n-graphs I’d define as the respective morphisms on n-categories, restricted to the generating n-edges.

Possibly this is not the desirable way to set up morphisms of nn-graphs for all imaginable applications. But for getting started, it might save one from reinventing the wheel.

Posted by: urs on October 11, 2006 8:54 PM | Permalink | Reply to this

Re: Klein 2-Geometry VI

[Grr, I thought I’d posted this but it hasn’t appeared. Apologies if it now appears twice.]

Of course, our graph is not our space. The points (or “0-points”) of our space are functions from the graph vertices to k, while the “1-points” are pairs consisting of a function from the vertices to k and a function from the edges to k, which acts as a morphism from the 0-point.

So, I think it’s time to look at the automorphisms groups and automorphism 2-groups of these things.

Let’s consider first automorphisms which do not arise from internal graph automorphisms. If there is only one component, then the automorphism group is simply the multiplicative group of k, for both vertices and edges, and the homotopy group is trivial. If there are n multiple isomorphic components, with only one isomorphism between any given pair, the automorphism group is GL_k(n) for both vertices and edges, and again the homotopy group is trivial. If the set of components is partitioned into m isomorphism classes F_i, each of size \vert F_i\vert, 1\le i\le m, then the automorphism group will be \prod_i GL_k(\vert F_i\vert). The only 2-automorphisms are, again, trivial.

Now we need to think about internal automorphisms of a 1-component graph.

If the automorphisms induce \vert A_v\vert distinct vertex automorphisms, we get a factor of GL_k(\vert A_v\vert) in the vertex automorphisms. Likewise, if the automorphisms induce \vert A_e\vert distinct edge automorphisms, we get a factor of GL_k(\vert A_e\vert) in the edge automorphisms.

In this case, the graph homotopies induce chain map homotopies. Maybe we can simply form the automorphism group graph of the graph, and consider chain map automorphisms over this …

Maybe some examples would help.

I seem to be walking around these 2-Grassmannians looking at them from every angle without actually constructing them. I’d like to think I’m circling my prey before moving in for the kill, but possibly I’m just wandering around in circles because I’m lost. There seems to be a lot going on in this example, with generalisations stretching as far as the eye can see in multiple directions. I feel a long, careful think is about due.

Posted by: Tim Silverman on October 11, 2006 10:48 PM | Permalink | Reply to this

Re: Klein 2-Geometry VI

So where has our projective 2-geometry arrived? We were going to form the projective 2-space associated to a vector 2-space, find the 2-group of projective linear transformations, then look for interesting sub-2-groups. We seem to have that the projective 2-space associated to a (p,q) 2-space has kP^{p-1} worth of objects, each with k^n automorphisms, for some n that I’m not sure we quite worked out correctly, but which relates to the chain homotopies.

This gave rise to a worry that we had only a trivial modification of ordinary geometry. It would be nice to know if that’s right. How would the axioms of ordinary projective geometry need to be modified?

A) Given two distinct points, there exists a unique line that both points lie on.

B) Given two distinct lines, there exists a unique point that lies on both lines.

C) There exist four points, no three of which lie on the same line.

D) There exist four lines, no three of which have the same point lying on them.

Would the 2-group PGL(V) be uninteresting? Is the trouble that there’s no twisting going on in the bundles? What if we’d looked at (1,1) sub-2-spaces instead?

Posted by: David Corfield on October 14, 2006 9:36 AM | Permalink | Reply to this

2-(general linear transformations)

[…]find the 2-group of projective linear transformations […]

Have we even identified the 2-group of ordinary linear transformations, i.e. the 2-group

(1)\; \mathrm{Aut}_{BC2\mathrm{Vect}}(\mathbf{V}) \,,

for \mathbf{V} a given 2-vector space?

(Quite possibly you have somewhere, and I missed it.)

Let’s see. It’s easiest for skeletal

(2)\; \mathbf{V} \simeq k^{n_1,n_2} \,,

of course.

Objects of the automorphism group then are weakly invertible chain maps

(3)\; k^{n_1,n_2} \stackrel{\sim}{\to} k^{n_1,n_2} \,.

But since in the skeletal case \delta = 0, weakly invertible here should mean strictly invertible.

So the automorphism 2-group is actually strict. It must be some crossed module.

Here, evidently the group of objects is

(4)GL(n_1,k) \times GL(n_2,k) \,.

What is a morphism between two such objects?

It’s a chain homotopy of the associated chain maps. Again, since \delta = 0, all these chain homotopies have the same source as target and are given by arbitrary linear maps

(5)k^{n_1} \to k^{n_2} \,.

So the group of morphisms is isomorphic to the additive group k^{n_1 n_2}.

How does the group of objects act on the group of morphisms?

One should draw a diagram to illustrate this. I don’t even have pen and paper available at the moment. But I think left pre-composition by a chain map acts on the object part, and right post-composition on the morphism part of the chain homotopy.

So, I think I am saying that the action of

(6)GL(n_1,k)\times GL(n_2,k)

on

(7)\mathrm{Hom}_{\mathrm{Vect}_k}(k^{n_1},k^{n_2})

is by “conjugation”

(8)\begin{aligned} &(k^{n_1}\stackrel{f_1}{\to} k^{n_1},\; k^{n_2}\stackrel{f_2}{\to} k^{n_2}) \times (k^{n_1} \stackrel{r}{\to} k^{n_2}) \\ \mapsto\; & k^{n_1}\stackrel{f_1}{\to}k^{n_1} \stackrel{r}{\to} k^{n_2} \stackrel{f_2^{-1}}{\to} k^{n_2} \end{aligned} \,.

Hm, interesting. Had we noticed this 2-group before? (Or did I make a mistake?)
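A quick numerical sanity check of this action (a sketch of my own with numpy, not part of the original comment; the dimensions are arbitrary choices): acting by (g_1,g_2) after (f_1,f_2) should agree with acting by the pointwise composite, which is exactly what makes the formula an action.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 3, 2  # arbitrary dimensions for k^{n_1}, k^{n_2}

def rand_gl(n):
    """A random invertible n x n matrix (generic matrices are invertible)."""
    while True:
        m = rng.standard_normal((n, n))
        if abs(np.linalg.det(m)) > 1e-6:
            return m

# an object of the 2-group: a pair (f1, f2) in GL(n1, k) x GL(n2, k)
f1, f2 = rand_gl(n1), rand_gl(n2)
g1, g2 = rand_gl(n1), rand_gl(n2)

# a morphism (chain homotopy): an arbitrary linear map k^{n1} -> k^{n2}
r = rng.standard_normal((n2, n1))

def act(f1, f2, r):
    # conjugation: k^{n1} --f1--> k^{n1} --r--> k^{n2} --f2^{-1}--> k^{n2}
    return np.linalg.inv(f2) @ r @ f1

# compatibility with composition of objects
lhs = act(g1, g2, act(f1, f2, r))
rhs = act(f1 @ g1, f2 @ g2, r)
print(np.allclose(lhs, rhs))  # True
```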

Posted by: urs on October 14, 2006 1:48 PM | Permalink | Reply to this

Re: 2-(general linear transformations)

Had we noticed this 2-group before?

We haven’t met it so far on the Klein 2-geometry threads. Might it have appeared in HDA5?

Or did I make a mistake?

Looks right to me. Now, why did we never go about forming Euclidean 2-geometry by imposing a categorified inner product on a 2-vector space? I suppose because we were following the Kleinian route of seeing Euclidean geometry as a subgeometry of projective geometry.

Inner products of 2-vectors have to do with the Hom operator, don’t they?

Posted by: David Corfield on October 14, 2006 2:21 PM | Permalink | Reply to this

Re: 2-(general linear transformations)

When John worked out categorified inner products in a 2-Hilbert space, a key example kept in mind was Hilb, the category of Hilbert spaces. Then the inner product between H and H' could be Hom(H,H'), and its relationship to Hom(H',H) shown.

How can we put an inner product on our 2-vector spaces? As Hom(v,w) is empty for v \ne w in our skeletal 2-vector spaces, wouldn’t the parallel construction to the previous paragraph only work with a different notion of 2-vector space, perhaps Kapranov-Voevodsky (see August 07, 4:56 entry of Klein 2-geometry IV)?

Posted by: David Corfield on October 15, 2006 1:32 PM | Permalink | Reply to this

Re: 2-(general linear transformations)

Might it have appeared in HDA5?

I don’t recall having seen it there. What does appear is the slightly similar Poincaré 2-group, which has O(n) (GL(n) would work just as well) as group of objects, and \mathbb{R}^n as group of morphisms.

imposing a categorified inner product on a 2-vector space?

Right, sounds like a good idea.

Maybe the cleanest way to define this is as an isomorphism of a 2-vector space to its dual 2-vector space.

What is the dual of a 2-vector space \mathbf{V}?

As we have discussed elsewhere, I like to think of the Baez-Crans 2-vector spaces as \mathrm{Disc}(k)-module categories, where \mathrm{Disc}(k) is the discrete category on the field k.

Now, if we forget the monoidal structure on \mathrm{Disc}(k) we find

(1)\mathrm{Disc}(k) \simeq k^{1,0} \,.

This would suggest that the 2-vector space \mathbf{V}^* dual to \mathbf{V} is the category

(2)\mathrm{Hom}_{BC2Vect}(\mathbf{V},k^{1,0}) \,.

Hm, but that has dimension (\mathrm{dim}(V_0),0). That’s not what we want (because it cannot be isomorphic to \mathbf{V}).

Hm. What’s the right notion of \mathbf{V}^*, then?

On the other hand, it seems that indeed

(3)\mathbf{V} \simeq \mathrm{Hom}_{BC2Vect}(k^{1,0},\mathbf{V}) \,,

so that looks right.

Well, maybe we should not require \mathbf{V} \simeq \mathbf{V}^*. It is not necessary for defining a (possibly degenerate) inner product.

All we need for that is a choice of monomorphism

(4)\mathbf{V} \stackrel{s}{\to}\mathbf{V}^* := \mathrm{Hom}(\mathbf{V},k^{1,0}) \,,

because that always allows us to produce the number

(5)(v_1,v_2) := s(v_2)(v_1) \,.
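For ordinary vector spaces this is the familiar construction; as an illustration (my own numpy sketch, names and numbers are my choices), a choice of injective map s into the dual gives a bilinear, possibly non-symmetric pairing:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

# s : V -> V* for V = k^n is given by a matrix S; injectivity of s
# corresponds to S having full column rank (a generic S does)
S = rng.standard_normal((n, n))

def pair(v1, v2):
    # (v1, v2) := s(v2)(v1), i.e. evaluate the covector S v2 on v1
    return v1 @ (S @ v2)

v1, v2, w = (rng.standard_normal(n) for _ in range(3))

# the pairing is bilinear in each slot
print(np.isclose(pair(v1 + w, v2), pair(v1, v2) + pair(w, v2)))  # True
```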
Posted by: urs on October 15, 2006 4:42 PM | Permalink | Reply to this

Re: 2-(general linear transformations)

All we need for that is a choice of monomorphism…

But if \mathbf{V} has non-zero first Betti number, there won’t be any such monomorphism.

Posted by: David Corfield on October 15, 2006 5:51 PM | Permalink | Reply to this

Re: 2-(general linear transformations)

I like to think of the Baez-Crans 2-vector spaces as Disc(k)-module categories, where Disc(k) is the discrete category on the field k.

Would another choice of C to form C-modules help, one not too far from \mathrm{Disc}(k)? Perhaps what we call a (1,1) 2-vector space?

Posted by: David Corfield on October 16, 2006 6:55 AM | Permalink | Reply to this

Re: 2-(general linear transformations)

Urs said that the strict automorphism 2-group of a skeletal 2-vector space V given by the 2-term chain k^q\overset{\delta}{\rightarrow}k^p has, as objects, GL_k(p)\times GL_k(q) acting on objects and morphisms respectively, and, as morphisms, linear maps k^p\rightarrow k^q (assigning one of its automorphisms to each object). Then he suggested the action of 2-group objects on 2-group morphisms given by (I guess) pulling back natural transformations across functors, so the action of (k^p\overset{f_1}{\rightarrow}k^p, k^q\overset{f_2}{\rightarrow}k^q) on k^p\overset{r}{\rightarrow}k^q produces k^p\overset{f_1}{\rightarrow}k^p \overset{r}{\rightarrow}k^q \overset{f_2^{-1}}{\rightarrow}k^q (which looks fine to me).

It isn’t very much more complicated to use non-skeletal 2-vector spaces, is it? We have k^{q+d}\overset{\delta}{\rightarrow} k^{p+d}. We express C_1 as a direct sum of Ker(\delta) (representing automorphisms of any object) and C_1/Ker(\delta) (representing isomorphisms in one direction between a pair of distinct isomorphic objects), of dimensions q and d respectively, and likewise divide C_0 into C_0/Im(\delta) (representing components of the groupoid) and Im(\delta) (representing objects within one component of the groupoid), of dimensions p and d respectively. Since we have to respect \delta in chain automorphisms, we need GL_k(p) to act on C_0/Im(\delta) and GL_k(q) to act on Ker(\delta). In addition, we get all endomorphisms End_k(d) to act on C_1/Ker(\delta) and Im(\delta) (with either action determining the other via \delta). These give ways of shuffling around or merging the different objects in any component of the groupoid, and the corresponding shuffling of morphisms. Then natural transformations are given by maps k^p\rightarrow k^{q+d}.
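The dimension bookkeeping above can be checked mechanically; here is a toy sketch (mine, not from the post — the block form of \delta and the numbers p, q, d are arbitrary choices):

```python
import numpy as np

# delta : C1 = k^{q+d} -> C0 = k^{p+d}, of rank d
p, q, d = 2, 3, 1
delta = np.zeros((p + d, q + d))
delta[:d, :d] = np.eye(d)  # a rank-d block, everything else zero

rank = int(np.linalg.matrix_rank(delta))
dim_ker = (q + d) - rank    # Ker(delta): automorphisms of any object
dim_coker = (p + d) - rank  # C0/Im(delta): components of the groupoid
print((rank, dim_ker, dim_coker))  # (1, 3, 2), i.e. (d, q, p)
```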

That seems to work, but it seems awfully restrictive. The space is a groupoid, or rather a family (indeed a 2-category) of 2-isomorphic groupoids. In each member of the family (i.e. each object of the category), each component is constrained to be strictly isomorphic to every other component, i.e. to be a space of dimension d (using the terminology above). 1-symmetries (i.e. auto-functors of a space) are allowed to act as, inter alia, automorphisms of the automorphism group of an object, but must do so in the same way not only on all objects in a single component of the groupoid (which is constrained to be true by the composition of morphisms within the component) but also across all components. In addition, although, in a general groupoid, there is, between any two objects a and b, an Aut(a)-torsor of morphisms a\rightarrow b, which can potentially be shuffled by a family of functors isomorphic under composition to Aut(a), and independently so for every object other than a in that component, there is no sign of this freedom in the automorphisms of 2-vector spaces.

I guess this is because of the need to preserve the group structure, which makes everything very homogeneous.

Posted by: Tim Silverman on October 14, 2006 6:56 PM | Permalink | Reply to this

Re: 2-(general linear transformations)

I want to think a little more about this business of homogeneity.

Suppose we have some kind of space, call it an XSpace, and we have the category of these spaces, call it Spc_X, and we are forming an internal category in Spc_X, i.e. a 2-XSpace.

So we want our 2-XSpace to be homogeneous, so that all the points in it look alike and their neighbourhoods of morphisms look alike – in some sense.

A morphism is supposed to take us from a to b, but because the morphisms in a homogeneous 2-XSpace look basically the same from every point, they can be lazy and delegate some of this work to what we might call a morphism template. Each morphism consists of a pair (s, m) where s is the source object and m is a morphism template which, given any source object, knows how to work out what its target is and how to send it there. I guess this (usually?) gives us an XSpace T of morphism templates, though maybe this is not true in general.

A morphism template can also afford to be somewhat lazy. Every morphism specified by a morphism template has its target related to its source in the same way, which will be some automorphism of the XSpace of objects. So we get an XSpc morphism from the space of morphism templates into the automorphism group of the space of objects, Aut(O) (where O is the space of objects).

Morphism templates need to compose, so that their parent morphisms can compose, which makes them into a group; their composition should map into the composition (i.e. group operation) of Aut(O).

What about the kernel and image of this map, let’s call it \delta?

The kernel is the space of morphism-templates for the automorphisms of the individual objects (i.e. of the points).

The image of \delta in Aut(O) will act on each object in O to send it to the set of objects to which it has a morphism. Homogeneity means these are all isomorphisms, so basically the 2-XSpace must be a groupoid, and Im(\delta) is the automorphism group of each component of the groupoid. (By this, I mean the automorphism group of that component qua groupoid.) So Aut(O)/Im(\delta) is the automorphism group of the components among themselves, modulo their internal automorphisms.

If O is itself a group, then it acts on itself by left multiplication. Of course, this action is not an automorphism of O qua group, but we’d expect it to be an automorphism of O qua XSpace. If we demand that the group action of the space of morphism templates T be compatible with the group action of O, then I guess we get a homomorphism T\rightarrow O. This is of course what happens in the 2-vector space case.

Since the 2-XSpace S is a groupoid, the automorphism 2-group of S is just the 2-group of functors and natural transformations of the groupoid. (Is this true? I don’t think there’s any extra structure to obstruct this.)

What kind of structure do we want automorphisms of this space to preserve?

We can demand that it be surjective, up to natural isomorphism, on objects (is that essentially surjective?). That seems on the face of it like a natural generalisation of the surjectivity we require on 1-space automorphisms.

That means that we are surjective on the space of groupoid components.

To what extent can we cut down on morphisms? After we have disposed of some objects, along with all morphisms of which they are the source and target, we are left with a quite limited choice of what to do.

Consider the automorphism group of a single object a, call it Aut(a). If there is a morphism from that object to another object b, then, by composing that morphism with all elements of Aut(a), we must get all morphisms a\rightarrow b. They automatically come with inverses, which gives all morphisms b\rightarrow a. Composing morphisms b\rightarrow a followed by morphisms a\rightarrow b gives all members of Aut(b). So the question of which morphisms to preserve, assuming we want to keep homogeneity, boils down to deciding which members of Aut(a) we want to keep. This in turn boils down (again assuming we maintain homogeneity) to working out how much of Ker(\delta) we want to keep. If we keep all of it, then we will be as surjective on morphisms as is possible given that we may be throwing away some objects.

So, what are the automorphisms and 2-automorphisms of our 2-XSpace?

Erm, let me see …

We want an endomorphism of the space of morphism-templates T, which is a homomorphism of its group structure as well as its XSpace structure, and which is an automorphism on Ker(\delta).

If we are cutting down the space of morphisms, we are forced to cut down the space of objects. So we need an endomorphism of Aut(O) which preserves the quotient Aut(O)/Im(\delta), letting Im(\delta) itself shrink along with T/Ker(\delta). Correspondingly, we need to squash O so that the orbits of objects under Im(\delta) shrink along with Im(\delta) itself. (I guess this is reasonably straightforward, since these will be Im(\delta)-torsors, except there will be a choice of which points to preserve. All choices will be isomorphic of course.)

Then I guess a 2-symmetry is a morphism (in Spc_X) from O to T.

Oh, yeah, I think morphisms of T preserving a given coset of Ker(\delta) shuffle the morphisms between a given pair of objects among themselves, so I was wrong about that in my last post.

Hmm, this is all rather confusing. Does it make sense? As it stands, it suggests that the basic boringness of 2-vector spaces is somewhat generic.

Maybe it will get more interesting if we attack 2-subgroups head-on.

Posted by: Tim Silverman on October 14, 2006 10:05 PM | Permalink | Reply to this

Re: 2-(general linear transformations)

Oh yeah, and I think this implies that if we want those 2-figures of 2-XSpaces which are themselves 2-XSpaces, then one gets them all simply by taking a sub-XSpace of the morphisms, a sub-XSpace of the objects, and restricting the \delta map accordingly.

Posted by: Tim Silverman on October 14, 2006 10:27 PM | Permalink | Reply to this

Re: 2-(general linear transformations)

An example might also be helpful. Consider a chain with p=q=d=1, i.e. of dimension (1,1) with both Ker(\delta) and Im(\delta) of dimension 1, so both C_1 and C_0 are 2-dimensional.

Pick bases so that \delta:\left(\array{k\\a}\right) \rightarrow \left(\array{0\\a}\right).

Then morphisms on the morphism templates T are \left(\array{x_{11}&x_{12}\\0&x_{22}}\right),\; x_{11}\ne 0.

Broadly speaking, the x_{11} term rearranges the automorphisms of a single object among themselves, and must be non-zero to ensure that the transformation acts as an automorphism on Ker(\delta);
the x_{12} term rearranges the morphisms between two particular different objects among themselves (by pre-applying an automorphism);
the bottom left term must be 0 to keep Ker(\delta) from rotating to some other line;
and x_{22} changes which target a particular morphism points to.

The morphisms on the objects O are \left(\array{y_{11}&0\\y_{21}&x_{22}}\right),\; y_{11}\ne 0.

The y_{11} term rearranges the components of the groupoid amongst themselves;
the top right term must be 0 to prevent the transformation reassigning objects of the groupoid to different components;
y_{21} performs a component-dependent reshuffling of objects within a component;
and the bottom right term performs a global rearrangement of objects within the components. It has to be equal to the bottom right component of the morphism template transformation to ensure that when the morphism moves to point to a different target location, the old target moves to the new location along with it.

So all the naively expected types of rearrangement do appear.
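The constraint tying the two bottom-right entries together is just the chain-map condition F_0 \delta = \delta F_1; a quick numerical check (my own sketch, with arbitrary numbers for the free entries):

```python
import numpy as np

# delta sends (k, a) to (0, a) in the chosen bases
delta = np.array([[0., 0.],
                  [0., 1.]])

x11, x12, x22 = 2., 3., 5.
y11, y21 = 7., 11.

F1 = np.array([[x11, x12],     # acts on morphism templates T
               [0.,  x22]])
F0 = np.array([[y11, 0.],      # acts on objects O; note the shared x22
               [y21, x22]])

# the chain-map (naturality) condition F0 . delta = delta . F1
print(np.allclose(F0 @ delta, delta @ F1))  # True
```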

Posted by: Tim Silverman on October 15, 2006 10:14 AM | Permalink | Reply to this

Re: 2-(general linear transformations)

[…] he suggested the action […]

I think it is clear, but maybe I should emphasize it anyway:

there is no freedom of choice in writing down the automorphism 2-group of a 2-vector space. In as far as I suggested anything, I suggested that my derivation of an eternal fact was correct.

In particular, in order to derive the action of GL(n_1,k)\times GL(n_2,k) on morphisms, one has to work out the natural isomorphism obtained by taking any fixed natural isomorphism and “whiskering” it from the left by an object and from the right by the inverse object.

The formula I presented was the result of performing this computation mentally. I didn’t write it out carefully on a piece of paper (and I still haven’t. It’s weekend. :-).

But there can only be one correct answer.

Posted by: urs on October 15, 2006 4:20 PM | Permalink | Reply to this

Re: 2-(general linear transformations)

Urs said:

there is no freedom of choice in writing down the automorphism 2-group of a 2-vector space. In as far as I suggested anything, I suggested that my derivation of an eternal fact was correct.

Yes, sorry, I didn’t mean to suggest otherwise.

Posted by: Tim Silverman on October 15, 2006 10:32 PM | Permalink | Reply to this

Re: 2-(general linear transformations)

There’s something I’m not getting here.

Given a group G, we’re used to a strict 2-group G\rightarrow Aut(G), with the \delta map sending an element of G to the corresponding inner automorphism, and the obvious action of Aut(G) on G.

Now we’ve got two abelian groups, B and G, and we’re trying to set up a strict 2-group Hom(B,G)\rightarrow Aut(G)\times Aut(B), where Hom(B,G) inherits its group operation from G. The action of Aut(G)\times Aut(B) on Hom(B,G) is obvious, but what’s the \delta map?

Posted by: Tim Silverman on October 16, 2006 8:14 AM | Permalink | Reply to this

Re: 2-(general linear transformations)

but what’s the \delta-map

For \mathbf{V} skeletal, \mathrm{Aut}_{BC2Vect}(\mathbf{V}) will also be skeletal.

This is to be derived from looking at the definition of \mathrm{Aut}_{BC2Vect}(\mathbf{V}):

its objects are invertible functors \mathbf{V} \to \mathbf{V}, and its morphisms are natural isomorphisms between these.

Unless I am mistaken, for \mathbf{V} skeletal every natural isomorphism between functors from \mathbf{V} to itself will be an automorphism of such a functor, hence will have the same source and target.

(By the way, it might be a little dangerous to call the target map for the automorphism group “\delta”. The automorphism group will not be a 2-vector space, I think, hence we shouldn’t use the same symbols for it that we use for 2-vector spaces.)

Posted by: urs on October 16, 2006 10:10 AM | Permalink | Reply to this

2-spaces as bundles of bundles

Urs said:

Unless I am mistaken, for V skeletal every natural isomorphism between functors from V to itself will be an automorphism of such a functor, hence will have same source and target.

I’m sure it is. But what I was really asking was, given a particular element in Hom(B,G), what particular element of Aut(B)\times Aut(G) corresponds to it?

However, thinking this through, I guess everything in Hom(B,G) goes to the identity. We already know that the object maps of the two functors have to agree, so we get the identity action on Aut(B). Morphism templates act on morphisms by conjugation, and since G is abelian, these are trivial, so act as the identity on Aut(G).

Maybe this is what you were telling me when you said this:

For V skeletal, Aut_{BC2Vect}(V) will also be skeletal.

Sorry for being obtuse.

Urs also warned me:

(By the way, it might be a little dangerous to call the target map for the automorphism group “\delta”. The automorphism group will not be a 2-vector space, I think, hence we shouldn’t use the same symbols for it that we use for 2-vector spaces.)

I take your point. I wrote that a bit too fast. I was half-recalling that HDA V refers to a review paper by Forrester-Barker which I think uses yet another variant on the letter “d”! Or possibly “b”. 8-o

On the other hand, I think there is something analogous going on in the two cases, so I think I want to reserve the right to use the same symbol in both cases on occasion.

On the subject of greater generality, I think I want to revisit my “sort-of homogeneous sort-of 2-spaces” post and look over that material with less vagueness and from a bundle point of view, also relating it to the 2-vector space example. (I’ve no idea at all how familiar this stuff is to other people here. Probably very. :-( I’m guessing Urs does this in his sleep. But maybe there is someone out there for whom this stumbling progress is helpful.)

All the bundles will be completely trivial, and I’ll still be a bit vague about what category the various spaces live in, so I can focus on the bundle-y aspects.

So, here is one way to look at it.

We have a base space B. The points of this are the components of our 2-space seen as a groupoid. We have a fibre, actually a 2-fibre, F, giving the internal structure of the components as connected groupoids, and our 2-space is just B\times F for some suitable product. (That’s like instructions in a medieval cookbook – “cook it for a suitable length of time”.)

Now, just to momentarily review ordinary principal bundles as a jumping-off point. Here we have a group A, and an ordinary 1-fibre is a torsor of A. An automorphism consists of

1) A member of Aut(B) (an automorphism of the base space);

2) An action of A on the fibres, dependent on where you are in B. In other words, a kind of (active!) gauge transformation;

3) An automorphism of the gauge group A, together with an appropriate corresponding automorphism of the fibres as A-torsors. (Of course, there’s one way to perform the automorphism on a fibre for each point in the fibre – but these are all related to each other by actions of A, so we don’t get anything new here.)

I suppose we need something analogous to a connection to ensure that an automorphism of B knows how to send fibres to each other in the bundle over B.

Now for our 2-space.

In this case, instead of just one group A, and a 1-fibre which is a set that is a torsor of A, we have two groups, A and G, and a 2-fibre which is a category, and is a torsor of both groups, with A acting on morphisms and G on objects. In fact, we want A to act separately on the automorphisms of each object in F (this then determines the morphisms between them). So in fact, F is itself a principal bundle, with base space the objects, and gauge group the automorphisms of the objects. (Again, I think we need something like a connection so that G knows how to map morphisms.)

The morphisms of the fibre groupoid aren’t the actual morphisms of the 2-space; they’re just “morphism templates”. An actual morphism consists of (I think) a point in (B\times F)\times(A\times G). Considering the 2-space as an internal category in some category, the source morphism is the projection (B\times F)\times(A\times G)\rightarrow(B\times F)

and the target morphism is the twisted projection to (B\times G(F)). If that makes sense.

Comparing this with the 2-vector space, B corresponds to C_0/Im(\delta), the objects of F correspond to Im(\delta), A corresponds to Ker(\delta), and G corresponds to C_1/Ker(\delta).

What are the automorphisms of this space? We have, first of all, 1-automorphisms. These consist of the following:

1) a member of Aut(B);

2) for each point in B, an action of G on the objects of the fibres F;

3) for each object, an action of A on the A-torsor over the object;

4) a member of Aut(A), together with corresponding maps on the automorphisms of the objects, so as to preserve the action;

5) an endomorphism of G, with corresponding effects on (all copies of) F as a G-torsor.

Obviously, if two elements of G get sent to the same place by the endomorphism in (5), we need to make sure the effects of the action of A in (3) match on the destinations of corresponding objects in F.

Also, I think that the existence of the two gauge operations in (2) and (3) means it is only modulo inner automorphisms that we care about the endo/automorphisms in (4) and (5).

In the terminology of this example, the first corresponds to y_{11}, the second to y_{21}, the third to x_{12}, the fourth to x_{11}, and the last to x_{22}.

As for the 2-automorphisms of the space, they relate the 1-automorphisms which agree on element (1) above (the member of Aut(B)) and also agree on element (4) (the member of Aut(A)) up to an inner automorphism. We therefore need the following in a 2-morphism:

For each (b, f)\in B\times Obj(F), a member of G to send the object to its destination, and a member of A to determine which morphism to use.

1) This acts trivially on Aut(B).

2) It acts on the gauge action of G pointwise on B, by multiplication in G.

3) I think it acts on the gauge action of A by multiplication in A, pointwise on B\times Mor(F) (I think … not sure about this).

4) It acts by composition with the corresponding inner automorphism on the elements of Aut(A).

5) I think the effect on endomorphisms is handled automatically through the action of G given in (2) here.

That seems awfully complicated …

We seem to have skipped straight from 0-fibre bundles (where the fibre is a structureless point with no non-trivial automorphisms) to 2-fibre bundles (where the fibre is already a (1-)fibre bundle in its own right). The intermediate stage, an ordinary 1-fibre bundle, is given by a skeletal 2-fibre bundle, where G is trivial. In this case, it looks as though the automorphism 2-group reduces to the strict 2-group

(1)Hom(B,A)\rightarrow Aut(B)\times Aut(A)\times Hom(B,A).

The 1-morphisms in Aut(B)\times Aut(A)\times Hom(B,A) do the following:

An element of Aut(B) acts in the obvious way on the base space B. The element of A corresponding to a point b\in B acts on the A-torsor over b in the obvious way. The element of Aut(A) acts globally on A and the A-torsors everywhere.

What does the map down (the “\delta-map”) do?

First, it sends us to Id_B in Aut(B).

Second, for each b, we get an element a, which does two things. It gets us from one A-torsor action of A (over b) to another, by multiplying by a on the left. It also gets us from one action of Aut(A) on A to another by performing a conjugation by a. Hmm. I think this just makes sure that the automorphisms of the torsor and the automorphism of its group stay in line …

The action “up” is the obvious one with Aut(B) acting on the source of homomorphisms B and Aut(A) acting on the target A, as described by Urs.

At least, I think that’s broadly in the right ballpark.

Posted by: Tim Silverman on October 16, 2006 9:23 PM | Permalink | Reply to this

groups of morphisms

But what I was really asking was, given a particular element in \mathrm{Hom}(B,G), what particular element of \mathrm{Aut}(B)\times \mathrm{Aut}(G) corresponds to it?

Oh, sorry, I misunderstood you then.

So B and G here denote the vector space of objects and the vector space of morphisms starting at the identity, respectively.

\mathrm{Hom}(B,G) is the (additive) group of morphisms that start at the identity in \mathrm{Aut}(\mathbf{V}).

As you know, a strict 2-group consists of two ordinary groups K_1 and K_2, say (together with some structure on them). K_1 is the group of objects. K_2 is the group of morphisms that start at the identity object.

The group of all morphisms then is the semidirect product group

(1)K_1 \ltimes K_2 \,.

I am frequently being sloppy and address K_2 itself as the “group of morphisms”. Strictly speaking, it’s just the group of morphisms that start at the identity.

But once you know this, you get every other morphism by multiplying (from the left, say) by the identity morphism on a given object. This way the semidirect product structure of the group of all morphisms appears.

Same here. The group of all morphisms is

(2)(\mathrm{Aut}(B)\times \mathrm{Aut}(G)) \ltimes \mathrm{Hom}(B,G) \,,

where the semidirect product is formed with respect to the action that we discussed before.
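The semidirect product multiplication can be spelled out concretely; here is a toy sketch (mine, not from the comment — the groups \mathbb{Z}/2 acting on \mathbb{Z}/3 are stand-ins, chosen small so the check is finite):

```python
from itertools import product

# semidirect product K1 ⋉ K2: elements (k1, k2) with multiplication
# (k1, k2)(l1, l2) = (k1 l1, k2 + k1.l2), where k1.l2 is the action.
# Toy example: K1 = Z/2 acting on K2 = Z/3 by negation, giving S3.

def act(k1, k2):
    # the nontrivial element of Z/2 negates mod 3
    return k2 % 3 if k1 == 0 else (-k2) % 3

def mul(a, b):
    (k1, k2), (l1, l2) = a, b
    return ((k1 + l1) % 2, (k2 + act(k1, l2)) % 3)

elems = list(product(range(2), range(3)))

# the twisted multiplication is associative, so this really is a group
assert all(mul(mul(a, b), c) == mul(a, mul(b, c))
           for a in elems for b in elems for c in elems)
print(len(elems))  # 6
```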

Posted by: urs on October 17, 2006 9:40 AM | Permalink | Reply to this

Re: groups of morphisms

Urs, thanks for this summary of strict 2-groups. That’s all clear. What you call the “group of morphisms from the identity” is effectively the same as what I was calling the “group of morphism templates”.

And we can generalise this, I think, to non-abelian G (after throwing away some extraneous rubbish that got into my previous post), to get the 2-group Hom(B,G)\rightarrow Aut(B)\rtimes (Out(G)\rtimes Hom(B,Inn(G))) (where Out(G) and Inn(G) are the outer and inner automorphism groups of G). The arrow takes us to the identity on Aut(B) and Out(G), and composes Hom(B,G) with the usual G\rightarrow Inn(G) to get into the Hom(B,Inn(G)) factor. The action is the usual action by conjugation on morphisms, with Aut(B) acting on B, and Aut(G)\cong Out(G)\rtimes Inn(G) acting on G pointwise over B. Obviously, with an abelian G, the Hom(B,Inn(G)) factor becomes kind of invisible.

In fact, looking at this, I guess we could have as objects Aut(B)\rtimes Hom(B,Aut(G)). Would that work? Since everything is done pointwise over B, I don’t see why it wouldn’t. A quick attempt to work through the calculations looks basically OK, so long as we make sure that the actions of Aut(B) are used consistently – which is rather easier said than done!

Of course, we also don’t need B to be a group, just an object in some category in which G is a group object.

Posted by: Tim Silverman on October 17, 2006 6:55 PM | Permalink | Reply to this

Re: 2-(general linear transformations)

Above # I talked about the structure of the 2-group of linear automorphisms of a skeletal BC 2-vector space (the 2-group of 2-general linear transformations).

I should spell out a couple more details.

We have a single object, a given skeletal 2-vector space (2-term chain complex)

(1)\mathbf{V} \,.

An automorphism

(2)\mathbf{V} \stackrel{f}{\to} \mathbf{V}

of that is an invertible chain map, hence a pair of invertible linear maps

(3)f = (f_1,f_2) \,.

The square

(4)V 1 0 V 0 f 1 f 0 V 1 0 V 0 \array{ V_1 &\stackrel{0}{\to}& V_0 \\ f_1\downarrow\; && f_0 \downarrow \\ V_1 &\stackrel{0}{\to}& V_0 }

commutes trivially, since we assumed $\mathbf{V}$ to be skeletal.

A 2-morphism between two such morphisms

(5) $\array{ &\nearrow \searrow^{f} \\ \mathbf{V} &\Downarrow h& \mathbf{V} \\ &\searrow \nearrow_g }$

is a chain homotopy, i.e. a map

(6) $\array{ && V_0 \\ &\swarrow_r \\ V_1 } \,.$

Again, since $\mathbf{V}$ is skeletal this has to satisfy no nontrivial condition. But it follows that for $h$ to exist at all, we need $f = g$.
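To spell out the last step: with the differential $d = 0$, the usual chain-homotopy equations collapse, so

```latex
g_0 - f_0 = d \circ r = 0 \circ r = 0,
\qquad
g_1 - f_1 = r \circ d = r \circ 0 = 0,
```

giving $f = g$ componentwise.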

Now, the question was how 1-morphisms act on 2-morphisms by conjugation. In other words, what is the 2-morphism obtained by whiskering as such:

(7) $\array{ && &\nearrow \searrow^{\mathrm{Id}} \\ \mathbf{V} &\stackrel{f}{\to}& \mathbf{V} &\Downarrow r& \mathbf{V} &\stackrel{f^{-1}}{\to}& \mathbf{V} \\ && &\searrow \nearrow_{\mathrm{Id}} } \,.$

But the result of this is simply the chain homotopy obtained by composing everything in the only possible way:

(8) $\array{ && V_0 \\ && f_0\downarrow\; \\ && V_0 \\ & \swarrow_r \\ V_1 \\ f_1^{-1}\downarrow\; \\ V_1 } \,.$

Up to the fact that I changed the notation for no good reason, this should be the result that I stated before #.
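For concreteness, here is a minimal numerical sketch of this whiskering, assuming we model the skeletal 2-vector space as a pair of copies of $\mathbb{R}^2$ with zero differential; the matrix values are made up purely for illustration.

```python
import numpy as np

# Illustrative model (my assumption, not the post's notation): V_0 and V_1
# are R^2, the differential is zero, a 1-morphism is a pair of invertible
# matrices (f_0, f_1), and a 2-morphism is a matrix r : V_0 -> V_1.
f0 = np.array([[2.0, 1.0],
               [1.0, 1.0]])    # f_0 : V_0 -> V_0, invertible
f1 = np.array([[0.0, 1.0],
               [1.0, 0.0]])    # f_1 : V_1 -> V_1, invertible
r = np.array([[1.0, 0.0],
              [0.0, 3.0]])     # chain homotopy r : V_0 -> V_1

# Composing "in the only possible way": first f_0, then r, then f_1^{-1}.
r_conjugated = np.linalg.inv(f1) @ r @ f0

# By construction the conjugated homotopy satisfies f_1 . r' = r . f_0.
assert np.allclose(f1 @ r_conjugated, r @ f0)
```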

Posted by: urs on November 8, 2006 10:48 AM | Permalink | Reply to this

Re: 2-(general linear transformations)

There’s a paper just out on symmetries of 2-term vector spaces, The General Linear 2-Groupoid. I guess we arrived at a 2-group rather than their 2-groupoid because we were considering the skeletal 2-vector space case.

Posted by: David Corfield on July 1, 2017 8:44 AM | Permalink | Reply to this

Re: Klein 2-Geometry VI

Time once more to take stock, stop throwing out half-cooked ideas, and be a bit more careful about what I say.

I want to discuss various special cases of 2-spaces and their 2-groups.

1) A skeletal 2-space.

Here we take an ordinary space of points, $B$, in some category $C$ of spaces, with an automorphism group of spatial transformations, $Aut(B)$, and an endomorphism monoid of spatial transformations, $End(B)$, and then we turn each point of this space into a group. The members of the latter group are to count as the morphisms of the 2-space $S$. No two points are isomorphic to one another under the 2-space morphisms, although of course they may be isomorphic as groups.

Typically we want the totality of morphisms, $M$, to be a space of the same type as $B$. Moreover, we’d like the group of morphisms over any individual point $p$, which we can call $A_p$, to be a subspace of the space $M$ of all morphisms, that is, to be isomorphic to some group object in $C$. Thus, among other requirements, we want left and right multiplication in one of the groups $A_p$ to be members of $Aut(A_p)$, that is, automorphisms of that space in the category $C$.

Functors from the 2-space to itself are endomorphisms of $B$ (as an object in $C$), which also act as homomorphisms from the group over any point in $B$ to the group over its image, and which are also morphisms (in the category $C$) of the groups over the points considered as spaces (i.e. as objects in $C$). Since no two objects are isomorphic as points, a natural transformation between two functors exists only if the functors have the same action as each other when considered as endomorphisms of $B$, and differ by an inner automorphism in their action on the group over each point, $A_p$.

We’d like to consider a functor from $S$ to itself to be an automorphism of $S$ iff it’s invertible up to equivalence, which requires it to act as an automorphism of $B$, i.e. as a member of $Aut(B)$ in $C$, and also as an isomorphism from each $A_p$ to the group over the image of $p$ (that is, $A_{f(p)}$, where $f$ is the function given by the action of the functor on objects of $S$, i.e. on points in $B$). Since $S$ is skeletal, the automorphisms of $S$, together with their equivalences, form a strict 2-group.

If all the groups $A_p$ are isomorphic (i.e. for all $p$, to a single group $A$), then the automorphism 2-group is relatively straightforward. It is $Hom(B,A)\rightarrow Aut(B)\rtimes Hom(B,Aut(A))$. Here, the arrow (the “$\delta$” arrow) takes a map $f$ in $Hom(B, A)$ to the action which is the identity on $B$, and is the composition $inn \circ f$ on $Hom(B, Aut(A))$, where $inn$ takes a member $a$ of $A$ to conjugation by $a$. The action of $Aut(B)\rtimes Hom(B,Aut(A))$ on $Hom(B,A)$ involves the natural action of $Aut(B)$ on $B$ as the source of maps in $Hom(B, A)$, and the natural action of $Aut(A)$ on the target $A$, after composition with $Aut(B)$ acting on the location of the point.
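One way to package the above as crossed-module data is the following (the formula for the action is my attempt at writing down the convention just described, so treat it as a sketch rather than gospel):

```latex
H = \mathrm{Hom}(B, A), \qquad
K = \mathrm{Aut}(B) \ltimes \mathrm{Hom}(B, \mathrm{Aut}(A)), \qquad
\delta(f) = \bigl( \mathrm{id}_B ,\; \mathrm{inn} \circ f \bigr),

\bigl( (\beta, \alpha) \cdot f \bigr)(p)
  \;=\; \alpha(p)\Bigl( f\bigl( \beta^{-1}(p) \bigr) \Bigr)
\qquad \text{for } \beta \in \mathrm{Aut}(B),\ \alpha \in \mathrm{Hom}(B, \mathrm{Aut}(A)),\ f \in \mathrm{Hom}(B, A).
```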

Now, as far as automorphisms are concerned, we have a kind of fibre bundle over $B$, in which the fibre is an $Aut(A)$-torsor (not an $A$-torsor as I erroneously said elsewhere). Any member of $Aut(B)$ that takes a point $p$ to a point $q$ constitutes a path between $p$ and $q$. The $Aut(A)$ part of the automorphism, over $p$ (or $q$; I think either convention works), gives the value of a connection over this path. As the match of $Aut(A)$-torsor to the group $Aut(A)$ is a matter of convention, we can twiddle the latter to our heart’s content, producing passive gauge transformations.

Confusingly, gauge changes match up functors that agree on the base space, by conjugating their action at each point by a member of $Aut(A)$. So the (trivialisations of the) functors are related by inner automorphisms in $Aut(A)$. This is not the same as equivalence of functors, which relates functors by composition with an inner automorphism in $A$.

This is so confusing, I’m not 100% sure I believe it, but there doesn’t seem to be a mistake that I can see.

We should also consider the case where the groups $A_p$ over points $p$ are not all the same as each other. Since automorphisms of the 2-space $S$ have to act as isomorphisms of the groups $A_p$, this causes the base space $B$ to fall apart into isomorphism classes, that is, sets of points that are isomorphic as groups. This will cut down the choice of automorphisms of the base space $B$ to some subgroup $X$ of $Aut(B)$, namely the subgroup which preserves each of the isomorphism classes of points. In turn, this subgroup may not act transitively on each isomorphism class, causing these classes to fall apart further into smaller classes. However, each of these smaller classes will then be a skeletal 2-space in its own right. Its base space automorphisms will form the subgroup $X$ of $Aut(B)$, or possibly some quotient of this if $X$ multiply covers the automorphism group of the base space of the subspace (i.e. the homomorphism of $X$ to this group has non-trivial kernel).

In other respects, therefore, in these circumstances, the space $S$ can simply be considered a rather heterogeneous collection of skeletal 2-spaces, on which the same strict 2-group happens to act. As far as automorphisms are concerned, I don’t think there is anything new here that isn’t already present in the case that all points are isomorphic as groups. Endomorphisms in these circumstances are a great deal more complicated, and possibly not very enlightening, so I shall avoid talking about them for the moment (or possibly ever).

We now need to consider sub-2-groups of the 2-group $Aut(S)$. Since we want to think of these as “injective” (in some sense) morphisms (2-functors) into $Aut(S)$, I want first of all to consider automorphisms of $Aut(S)$.

Once again, I want to consider an automorphism of $Aut(S)$ to be a functor from $Aut(S)$ to itself which is invertible up to equivalence.

Let us consider momentarily the case where the base space $B$ of the 2-space $S$ has just one point, which reduces the automorphism 2-group of $S$ to the familiar case $A\rightarrow Aut(A)$.

In this case, an automorphism of the 2-group consists of an endomorphism of $Aut(A)$ with various properties, such as that its kernel is a subgroup of $Inn(A)$. (That ensures that two objects only get sent to the same object if they are equivalent.) To track down these properties, let us start by thinking about strict automorphisms, that is, compatible automorphisms of $A$ and $Aut(A)$.

The 2-group $A\rightarrow Aut(A)$ is not merely a category but of course a groupoid. In this groupoid, the objects of the component containing the identity of $Aut(A)$ are precisely the elements of $Inn(A)$, the subgroup of inner automorphisms of $A$. Each arrow in the groupoid gets labelled with a member of $A$, such that the target of the arrow is obtained from its source by multiplying the latter on the left (in $Aut(A)$) by the inner automorphism corresponding to the label (from $A$) on the arrow. Each member of $Aut(A)$ acquires a group of automorphisms by virtue of being an object in a groupoid. This group of automorphisms is isomorphic to the kernel of the $\delta$ map $inn$, in other words to the centre of $A$, $Z(A)$.
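A finite sanity check of the last claim, taking $A = D_4$, the dihedral group of order 8 (all helper names below are mine, purely for illustration): the distinct inner automorphisms number $|A|/|Z(A)|$, so the kernel of $inn$ – and hence each object’s automorphism group – is exactly the centre.

```python
from itertools import product

# A = D4, realised as permutations of the square's vertices 0..3.
def compose(p, q):
    """Permutation composition: (p o q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(4))

identity = (0, 1, 2, 3)
G = {identity, (1, 2, 3, 0), (1, 0, 3, 2)}   # generators: rotation, reflection
changed = True
while changed:                                # close under composition
    changed = False
    for a, b in product(list(G), repeat=2):
        c = compose(a, b)
        if c not in G:
            G.add(c)
            changed = True

elements = sorted(G)
inverse = {g: next(h for h in G if compose(g, h) == identity) for g in G}

# inn(g) = conjugation by g, recorded as the images of `elements` in order;
# two g's give the same inner automorphism iff these tuples agree.
def inn(g):
    return tuple(compose(compose(g, h), inverse[g]) for h in elements)

centre = {z for z in G if all(compose(z, g) == compose(g, z) for g in G)}
inner_autos = {inn(g) for g in G}

assert len(G) == 8                                 # |D4| = 8
assert len(centre) == 2                            # Z(D4) has order 2
assert len(inner_autos) == len(G) // len(centre)   # |Inn(A)| = |A|/|Z(A)|
assert {g for g in G if inn(g) == inn(identity)} == centre   # ker(inn) = Z(A)
```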

Strict automorphisms of $A\rightarrow Aut(A)$ are functorial isomorphisms from this groupoid to itself which are also group automorphisms of $Aut(A)$ (the set of objects) and of $A$ (the set of morphism templates). In particular, since the identity component of the groupoid must be preserved, the group of inner automorphisms $Inn(A)$ must be sent to itself.

Inner automorphisms of $Aut(A)$ (not inner automorphisms of $A$!) have the right properties to give automorphisms of $A\rightarrow Aut(A)$, since $Inn(A)$ is a normal subgroup of $Aut(A)$ (hence preserved by inner automorphisms…) and since conjugation of the object group $Aut(A)$ by a member $a$ of $Aut(A)$ lifts nicely to the automorphism of the arrow group $A$ consisting precisely of the action of $a$.

Since this construction uses up all possible automorphisms of the morphism group $A$, I guess it exhausts the possible actions of automorphisms of $Aut(A)$ on $Inn(A)$. There is also the potential for automorphisms of $Aut(A)$ not to preserve $Inn(A)$, and hence not to be functors, so obviously these need to be excluded. However, I don’t see any obstacle to including outer automorphisms of $Aut(A)$ which act like inner automorphisms when restricted to $Inn(A)$, but do something non-trivial to $Out(A)$. I haven’t thought too much about whether this makes sense, though.

Two of these strict automorphisms will be equivalent if they agree on how they permute the components of the groupoid among themselves, which I think amounts to differing by an inner automorphism of $Aut(A)$ (together with its lift to $A$).

These strict automorphisms of $A\rightarrow Aut(A)$ will induce corresponding automorphisms of $A$, the fibre of our skeletal 2-space $S$, in order to preserve the action. Of course, these don’t select any exciting figures or anything like that.

Now, so far we’ve only considered “strict” automorphisms of $A\rightarrow Aut(A)$. However, although $A\rightarrow Aut(A)$ is a strict 2-group, it isn’t (in general) a skeletal 2-group, i.e. the components of its groupoid structure have more than one object. We therefore have the possibility of “weak” automorphisms, which are not injective on objects. The crucial requirement, I think, is that endomorphisms of $Aut(A)$ have to send $Inn(A)$ to itself, endomorphically if not automorphically. This ensures that the map is a functor on the groupoid. If objects are merged, then of course morphism templates will be merged as well. In the extreme case, our “essentially surjective 2-subgroup” will be a skeletal strict 2-group, and strictly isomorphic to $Z(A)\rightarrow Out(A)$, with $\delta$-map zero. However, depending on which endomorphism we choose, we may end up with different possible realisations of this skeletal 2-group as actual automorphisms of $A$.
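Sticking with the $A = D_4$ toy example (again, the helper names are mine), one can brute-force $Aut(A)$ and check the sizes in the extreme skeletal case $Z(A)\rightarrow Out(A)$: here both $Z(A)$ and $Out(A)$ have order 2.

```python
from itertools import permutations, product

# A = D4 as permutations of the square's vertices 0..3.
def compose(p, q):
    """Permutation composition: (p o q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(4))

identity = (0, 1, 2, 3)
G = {identity, (1, 2, 3, 0), (1, 0, 3, 2)}   # generators: rotation, reflection
changed = True
while changed:                                # close under composition
    changed = False
    for a, b in product(list(G), repeat=2):
        c = compose(a, b)
        if c not in G:
            G.add(c)
            changed = True

elements = sorted(G)
inverse = {g: next(h for h in G if compose(g, h) == identity) for g in G}

# Brute-force Aut(A): bijections G -> G respecting composition, encoded as
# the tuple of images of `elements` in order.
autos = []
for images in permutations(elements):
    phi = dict(zip(elements, images))
    if all(phi[compose(a, b)] == compose(phi[a], phi[b]) for a in G for b in G):
        autos.append(images)

# Inner automorphisms, encoded the same way.
def inn(g):
    return tuple(compose(compose(g, h), inverse[g]) for h in elements)

inner = {inn(g) for g in G}
centre = {z for z in G if all(compose(z, g) == compose(g, z) for g in G)}

assert len(autos) == 8                 # |Aut(D4)| = 8
assert len(inner) == 4                 # |Inn(D4)| = |D4| / |Z(D4)| = 4
assert len(autos) // len(inner) == 2   # |Out(D4)| = 2
assert len(centre) == 2                # |Z(D4)| = 2
```

So in this example the skeletal quotient has $Z(A)$ of order 2 over $Out(A)$ of order 2, with zero $\delta$-map, as described above.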

In intermediate cases, where $Inn(A)$ is reduced but not shrunk down to $0$, the details of what endomorphisms are and are not allowed can get quite tricky, I think, because $Out(A)$ still needs to have a valid action on the image of $Inn(A)$, which won’t be the case for every endomorphism of $Inn(A)$. I haven’t examined these restrictions in detail, though it might be interesting to work out what is going on here – particularly since this is where key parts of the “2-ness” of 2-spaces would appear to reside.

The cut-down automorphism 2-group can be seen as defining a “figure” in the group $A$. I’m not clear how one should best view these figures. One can of course construct the orbit of each member of $A$ under the action of a given 2-group of automorphisms. A more “groupy” way of thinking about this would be to consider the subgroups of $A$ generated by these orbits. Or maybe one should think about sublattices of the subgroup lattice of $A$.

One possible rather 2-spacey way to think about it is to consider that, given any subgroup of $Aut(A)$, the subgroups of $A$ form a groupoid, with the subgroups of $A$ as objects, and isomorphisms existing between two subgroups iff they are isomorphically mapped to one another by some action of the subgroup of $Aut(A)$ that we are considering. Different subgroups of $Aut(A)$ will give different subgroupoids of the complete groupoid of subgroups of $A$ (which is itself a subgroupoid of the groupoid of subgroups within Grp). We can lump subgroups of $Aut(A)$ together if they give rise to the same groupoid, and think about the partial ordering of groupoids by inclusion.

Anyway, that’s all I have to say for the moment about the case of a base space with just one point.

If we now move to a base space $B$ with more than one point, we have to consider the possibility that we might be allowed to assign different essentially surjective 2-subgroups of $A\rightarrow Aut(A)$ to different points of $B$. I don’t think there is any actual obstacle to doing this (the group $A$ over each point is still the same) but it does lead to the odd possibility that one may be able to perform an automorphism on the group at a given point by taking it around a path over $B$, even though one cannot perform the same automorphism keeping the group in place.

Well, I guess this isn’t too odd: one can invert a frame by taking it around a path in a non-orientable manifold, even if one can’t invert it locally, and similarly for other phenomena in twisted bundles.

It’s at this point that I ought to begin talking about actual non-essentially-surjective 2-subgroups of, at least, $A\rightarrow Aut(A)$. However, at 2000 words, I think this post is more than long enough already, and I am growing weary. The only remark I have ready to hand to make at this point is that subgroups which are strictly surjective on the object group $Aut(A)$, but cut down the morphism template group $A$, seem rather straightforward. The real interest arises from the interaction between the proper subgroups of each of these groups. For instance, when removing objects, one has to be careful that they are not accidentally “regenerated” by the action of the morphism templates, springing back like some infuriating weed.

One other remark I have to make, on a matter that I haven’t really thought about too hard, is that if we define 2-subgroups in terms of 2-equivalence classes of 2-groups that can be 2-functorially mapped into a given 2-group, then one finds oneself allowing arbitrarily large object groups to map into the target object group! This is because injectivity is allowed to fail whenever two objects in the source are isomorphic (under morphisms of the source). This is rather a peculiar situation. Whatever interest it may hold, I suspect lies again in the interaction between subgroups and homomorphisms of morphisms, and subgroups and homomorphisms of objects. Perhaps this is a rather trite observation, but it suggests an area to focus my attention.

This discussion could do with an extended example. I have been working one out as an adjunct to writing this post, so perhaps in my next post I shall work this up into something readable, maybe even with pretty pictures if I can work out how to make this happen.

Then maybe a bit about various kinds of non-surjective 2-subgroups, and then I’ll look at another kind of 2-space.

Posted by: Tim Silverman on October 24, 2006 8:19 PM | Permalink | Reply to this
Read the post Hopkins Lecture on TFT: Chern-Simons
Weblog: The n-Category Café
Excerpt: Basic and advanced concepts in Chern-Simons theory.
Tracked: October 27, 2006 4:44 PM
Read the post Klein 2-Geometry VII
Weblog: The n-Category Café
Excerpt: Continuing to categorify the Erlanger Program
Tracked: November 1, 2006 9:48 AM
Read the post Local Transition of Transport, Anafunctors and Descent of n-Functors
Weblog: The n-Category Café
Excerpt: Concepts and examples of what would be called transition data or descent data for n-functors.
Tracked: December 8, 2006 2:22 PM
Read the post Cocycle Category
Weblog: The n-Category Café
Excerpt: Bruce Bartlett reports on Rick Jardine's concept of 'cocycle categories' and their relation to anafunctors.
Tracked: January 24, 2007 11:01 AM
Read the post Grandis on Collared Cobordisms and TQFT
Weblog: The n-Category Café
Excerpt: Marco Grandis discusses the arrow-theory of collared cobordisms and topological quantum field theory.
Tracked: May 2, 2007 3:35 PM
Read the post A Small Observation
Weblog: The n-Category Café
Excerpt: A duality in 2-vector spaces
Tracked: July 3, 2008 4:40 PM
