
January 5, 2008

Geometric Representation Theory (Lecture 19)

Posted by John Baez

In the penultimate lecture of last fall’s Geometric Representation Theory seminar, James Dolan lays the last pieces of groundwork for the Fundamental Theorem of Hecke Operators.

I’ll actually state the Fundamental Theorem in the next lecture. But for those of you who can’t stand the suspense, there’s some supplementary reading below, where the theorem is actually stated — and in a much more detailed way! Right now this is just a rough draft, not containing proofs. It’ll seem awfully dry if you haven’t been following the seminar so far, because it lacks all the fun examples we’ve been talking about all along. I’ll fix that later.

  • Lecture 19 (December 4) - James Dolan on the Fundamental Theorem of Hecke Operators. Answer to last week’s puzzle. A new puzzle: find an interesting groupoid with cardinality $e^e$. A harder one: find an interesting groupoid with cardinality $\pi$. (A small numerical sketch of groupoid cardinality appears after this outline.)

    Degroupoidification turns a finite groupoid $G$ into a finite-dimensional vector space, its zeroth homology $H_0(G)$. It turns a span of finite groupoids

    $$G \stackrel{j}{\leftarrow} S \stackrel{k}{\rightarrow} H$$

    into the linear operator defined as the composite

    $$G \stackrel{j^!}{\rightarrow} S \stackrel{k_*}{\rightarrow} H$$

    where $k_*$ is the pushforward (defined in an obvious way) and $j^!$ is the transfer (defined in a clever way using groupoid cardinality, as explained here).

    Degroupoidification is a weak monoidal 2-functor

    $$D: FinSpan \to FinVect$$

    where

    FinSpan = [finite groupoids, spans of finite groupoids, equivalences between spans]

    and

    FinVect = [finite-dimensional vector spaces, linear operators, equations between linear operators]

    The latter is really just a category in disguise. So, we can use degroupoidification to obtain a weak 3-functor

    $$\overline{D}: [\text{bicategories enriched over } FinSpan] \to [\text{categories enriched over } FinVect]$$

    For us, the key example of a bicategory enriched over FinSpan is the Hecke bicategory of a finite group $G$, $Hecke(G)$. This has finite $G$-sets as objects, and for any pair of finite $G$-sets $A$ and $B$ it has

    $$hom(A,B) = (A \times B)//G$$

    Composition in the Hecke bicategory involves a “trispan”.

    Future directions: following the plan outlined on page 400 of Daniel Bump’s book Lie Groups, in the chapter “The Philosophy of Cusp Forms”.
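
For readers who want a concrete feel for groupoid cardinality, here is a small numerical sketch (my own illustration, not part of the lecture): the cardinality of a groupoid is the sum of $1/|Aut(x)|$ over its isomorphism classes $x$, and for the groupoid of finite sets and bijections this sum is $\sum_n 1/n! = e$.

```python
from math import exp, factorial

# Groupoid cardinality: sum of 1/|Aut(x)| over isomorphism classes x.
# For the groupoid of finite sets and bijections, the iso classes are the
# natural numbers n, with Aut = S_n, so the cardinality is sum_n 1/n! = e.

def finite_sets_groupoid_cardinality(terms: int = 20) -> float:
    """Partial sum of 1/n! over the first `terms` isomorphism classes."""
    return sum(1.0 / factorial(n) for n in range(terms))

print(finite_sets_groupoid_cardinality())  # ~2.718281828459045
print(exp(1))                              # e, for comparison
```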



Posted at January 5, 2008 3:08 AM UTC


26 Comments & 1 Trackback

Re: Geometric Representation Theory (Lecture 19)

The Fundamental Theorem of Hecke Operators says that $\overline{D}(Hecke(G))$ is the category of permutation representations of $G$.

Hold on - I’m confused here. I have the same confusion with the supplementary reading. Don’t you have to tensor the hom-sets with the ground field $k$? In the category of permutation representations of $G$, the hom-sets are $k$-vector spaces.

Posted by: Bruce Bartlett on January 6, 2008 2:31 AM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 19)

Bruce wrote:

Hold on - I’m confused here.

Good!

I’m glad you’re reading this stuff, Bruce! You claimed to be dying of suspense, wanting to know how the Fundamental Theorem of Hecke Operators really worked. Well, now you can see how it really works! And, I hope you’ll find it as mindbending and cool as I did. I’m dying to explain it to you.

The first step is to be completely confused, just like I was.

You see, the correct statement of the Fundamental Theorem is quite different from my original failed attempt. It’s just similar enough for the resemblance to be really confusing! Going from the incorrect statement to the correct one is like waking up from a dream.

Don’t you have to tensor the hom-sets with the ground field $k$?

No! For one thing, there aren’t really hom-sets. There are hom-groupoids. And what you do is degroupoidify these hom-groupoids. That turns them into $k$-vector spaces.

In the category of permutation representations of $G$, the hom-sets are $k$-vector spaces.

Right. And in the Hecke bicategory of $G$, the hom-thingies are groupoids. The Hecke bicategory of $G$ is a bicategory enriched over

FinSpan = [finite groupoids, spans of finite groupoids, equivalences between spans]

So, for any pair of finite $G$-sets $A$ and $B$, we have a hom-groupoid

$$hom(A,B) = (A \times B)//G$$

And, if we ‘degroupoidify’ this groupoid — take the free vector space on the set of isomorphism classes — we get the vector space of intertwiners from the permutation representation associated to $A$, to the permutation representation associated to $B$!

Never mind all the fancy machinery for a minute, like enriched bicategories. The important thing is: do you see why the last paragraph is true? Do you see why an isomorphism class in $(A \times B)//G$ is just an orbit for the action of $G$ on $A \times B$, and these orbits give a basis of intertwining operators between the permutation representations?

(The last is something Jim explained way back in lecture 3 — we called the elements of this basis ‘Hecke operators’. Of course, I’ll need to flesh out my little paper a lot more, to make it self-contained.)
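
Here is a quick computational check of that claim (my own sketch, with a group I picked for illustration, not from the seminar): take $G = S_3$ acting on $A = B = \{0,1,2\}$ in the obvious way. The $G$-orbits on $A \times B$ are the diagonal and its complement, and the dimension of $hom(\mathbb{C}[A], \mathbb{C}[B])$, computed by Burnside-style character counting, is also 2.

```python
from itertools import permutations

# A sketch: isomorphism classes of (A x B)//G are G-orbits on A x B,
# and they count dim hom(C[A], C[B]).  Here G = S_3 acts on A = B = {0,1,2}.

X = (0, 1, 2)
G = list(permutations(X))                 # g[i] is the image of i

# Orbits of the diagonal G-action on A x B, found by brute force.
orbits, seen = [], set()
for p in [(a, b) for a in X for b in X]:
    if p in seen:
        continue
    orbit = {(g[p[0]], g[p[1]]) for g in G}
    orbits.append(orbit)
    seen |= orbit

# dim hom(C[A], C[B]) = <chi_A, chi_B> = (1/|G|) sum_g |Fix_A(g)| * |Fix_B(g)|.
dim_hom = sum(sum(1 for i in X if g[i] == i) ** 2 for g in G) // len(G)

print(len(orbits), dim_hom)               # 2 2: the diagonal orbit and the rest
```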

Posted by: John Baez on January 6, 2008 4:34 AM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 19)

Hi John,

As you know, the reason I’m interested in this stuff is firstly, of course, because I’m a big fan of the HDA series, but also because many of the patterns which crop up in your and Jim’s course on Geometric Representation Theory also crop up in 2-representations; one just has to sprinkle the words “$U(1)$-central extension” here and there.

But unless I am still confused it seems to me that, without that Doplicher-Roberts reconstruction stuff (which sounds very interesting), the current version of the Fundamental Theorem of Hecke Operators has no actual content!

Of course that’s a very strange thing to say, so let me explain myself. I’m going to break up the theorem into two parts.

(a) The “intertwiners and groupoids” part. John wrote:

So, for any pair of finite G-sets A and B, we have a hom-groupoid

$$hom(A,B) = (A \times B)//G$$

And, if we ‘degroupoidify’ this groupoid — take the free vector space on the set of isomorphism classes — we get the vector space of intertwiners from the permutation representation associated to $A$, to the permutation representation associated to $B$!

The important thing is: do you see why the last paragraph is true?

Yes… though I didn’t see it at first. Now that I think I understand it, wouldn’t most mathematicians consider this as sort of trivial though?

Here’s my (perhaps flawed) understanding of it. For any two representations $\sigma$ and $\rho$ of $G$, the space of intertwiners between them can be computed as the space of invariants of the tensor product (using the dual of the first one),

$$Hom(\sigma, \rho) \cong Hom(1, \sigma^\vee \otimes \rho).$$

So for permutation representations, one computes (using that a permutation representation is self-dual):
$$\begin{aligned} Hom(\mathbb{C}[A], \mathbb{C}[B]) &\cong Hom(1, \mathbb{C}[A]^\vee \otimes \mathbb{C}[B]) \\ &\cong Hom(1, \mathbb{C}[A] \otimes \mathbb{C}[B]) \\ &\cong Hom(1, \mathbb{C}[A \times B]) \\ &= \text{linear combinations of orbits in } (A \times B)//G, \end{aligned}$$
which is precisely what you are saying (though it seems to be an elementary computation).

(b) The “degroupoidification/topos” part. In your notes, the final theorem is stated as:

$$\overline{D}(\overline{K}(Hecke(G))) \simeq Perm(G).$$

Then you say:

The point of this theorem is that the familiar category $Perm(G)$ is obtained from the Hecke bicategory $Hecke(G)$ by a ‘degroupoidification’ process, namely $\overline{D}$, after a slight change in viewpoint, namely $\overline{K}$.

For any others reading this, let me explain what is being stated here. $Hecke(G)$ is the bicategory with:

  • objects are finite $G$-sets
  • given $G$-sets $A$, $B$, the hom-category $hom(A,B)$ is the category of functors from $(A \times B)//G$ into $Set$.

The Fundamental Theorem of Hecke Algebras then states that if we apply “Process $\overline{D} \circ \overline{K}$” to $Hecke(G)$, then we get $Perm(G)$. What are these processes?

Well the first one, “Process $\overline{K}$”, is the machine which takes $Hecke(G)$ and forms a new bicategory, which has

  • the same objects, namely finite $G$-sets
  • and hom-groupoids given by $Hom(A,B) = (A \times B)//G$.

But without any Doplicher-Roberts-style input, “Process $\overline{K}$” has no content! It simply transforms hom-categories into groupoids,

$$Fun((A \times B)//G, Set) \mapsto (A \times B)//G,$$

but it doesn’t do it intrinsically, it just does it by definition!

So the theorem doesn’t start with the cool bicategory $Hecke(G)$, perform some abstract intrinsic manipulations, and then output $Perm(G)$. Rather, it sort of cheats - it converts $Hecke(G)$ by hand into a bicategory where the hom-sets are $(A \times B)//G$… and then states that if we apply “Process $\overline{D}$” to this new bicategory - in other words, if we degroupoidify the hom-groupoids - we’ll get out $Perm(G)$. But this last step is basically trivial - it is precisely part (a) above!

In other words, in the way I am understanding it, “Process $\overline{K}$” has no content, while the degroupoidification “Process $\overline{D}$” is elementary, so the theorem looks empty. On the other hand, I know that there is definitely something to this theorem… so what is it?

Posted by: Bruce Bartlett on January 6, 2008 5:19 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 19)

Bruce wrote:

Now that I think I understand it, wouldn’t most mathematicians consider this as sort of trivial though?

Sure, if they understood it. Most mathematicians think anything they really understand is trivial. I think that’s why we usually explain things so badly: to keep other mathematicians — and even ourselves — from really understanding things, and then labelling them ‘trivial’.

If I’d just told you that there was a systematic way to take the category of permutation representations of a finite group and boost it up to a bicategory, you might have been impressed. Or maybe not — after all, there are some very trivial ways to do this, e.g. by throwing in identity 2-morphisms.

If I’d told you that no, I wasn’t doing anything silly like that, and by the way: this bicategory makes no mention of the complex numbers or any other ground field, you might have been impressed.

I could probably even have revealed more details without letting you understand what was going on. I could have said: I’m going to take any category $Perm(G)$ of finite-dimensional representations of a finite group, and obtain it from a bicategory $Hecke(G)$ consisting of

  • finite $G$-sets
  • spans of finite $G$-sets
  • maps of spans of finite $G$-sets

by means of a systematic process that turns certain nice bicategories into $Vect$-enriched categories. That sounds sort of impressive.

But, since I explained it well enough for you to understand how it really works, you think it’s trivial.

Actually, you only discussed the easiest part in your comment: how the hom-groupoids

$$hom(A,B) = (A \times B)//G$$

turn into the hom-spaces

$$hom(\mathbb{C}[A], \mathbb{C}[B])$$

when we degroupoidify. We also have to check that composition gets preserved! We start with a composition like this

$$\circ : hom(A,B) \otimes hom(B,C) \to hom(A,C)$$

which really means

$$\circ : (A \times B)//G \times (B \times C)//G \to (A \times C)//G$$

But notice, the arrow here is not a functor — it’s a span! Jim explained this span at the end of his lecture. I think it’s sort of cool.

And then, we need to check that this span gets sent to the usual composition of intertwining operators

$$\circ : hom(\mathbb{C}[A], \mathbb{C}[B]) \otimes hom(\mathbb{C}[B], \mathbb{C}[C]) \to hom(\mathbb{C}[A], \mathbb{C}[C])$$

For this, of course, we need to use the recipe for turning spans of groupoids into linear operators.

There’s also the part about getting an intrinsic characterization of ‘nice topoi’, which will allow the reconstruction of a finite groupoid $X$ from the nice topos $Set^X$. As you note, that should obscure things enough to make the result more impressive.

Okay — it’ll actually be interesting, too. But not as interesting as you seem to think. After all, this part is bound to work, if the outline I’ve sketched so far is correct. As soon as we know

$$\begin{array}{ccc} J : [\text{finite groupoids, spans, equivalences of spans}] & \to & [\text{topoi, functors, natural isomorphisms}] \\ X & \mapsto & Set^X \end{array}$$

is ‘one-to-one’ (in the suitable bicategorical sense), then we can characterize its ‘image’

Nice = [nice topoi, nice functors, nice natural isomorphisms]

and get an equivalence

$$J : [\text{finite groupoids, spans, equivalences of spans}] \to Nice$$

which will then have an ‘inverse’

$$K : Nice \to [\text{finite groupoids, spans, equivalences of spans}]$$

which is our ‘reconstruction theorem’.

Indeed, all these ‘reconstruction theorems’ have a certain stage magic quality to them: you make a big deal of pulling a rabbit out of the hat, and it impresses everyone tremendously — but all you did was invert the process of putting the rabbit in. All these theorems do is characterize the image of a 1-1 function and then construct its inverse — or some categorified version thereof!

But let me explain why I actually like the ‘Fundamental Theorem of Hecke Operators’.

Most of all, it suggests how to take big wads of finite group representation theory and categorify it! For example, those ‘Hecke algebras’ people like to talk about — those $q$-deformed versions of the group algebras of symmetric groups — are really algebras naturally sitting inside $Perm(G)$ where $G = GL(n,F_q)$. And using the ‘Fundamental Theorem’, it turns out these Hecke algebras can be categorified! An algebra is a one-object category enriched over $Vect$; when we categorify the Hecke algebra we’ll get a one-object bicategory enriched over $Nice$ — some sort of monoidal category. And since the Hecke algebra is a quotient of the braid group algebra, closely related to the Jones polynomial, this means we’ll have categorified some aspects of ‘quantum topology’.

Jim already explained this back in lecture 13 — that’s what all those braid diagrams were about. The ‘Fundamental Theorem’ just puts this in its proper context.

Next quarter we’ll say more about this and a bunch of other applications…

Posted by: John Baez on January 6, 2008 8:08 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 19)

Sure, if they understood it. Most mathematicians think anything they really understand is trivial. I think that’s why we usually explain things so badly: to keep other mathematicians — and even ourselves — from really understanding things, and then labelling them ‘trivial’.

Indeed, you’re right. Very depressing thought.

In this spirit, let me admit: I actually still don’t really understand why the original statement of the theorem failed, though at the time I fooled myself into believing I did! Gulp.

We also have to check that composition gets preserved.

As you stress nicely in your notes, talking about ‘spans between thingy $A$ and thingy $B$’ is the same as talking about “setwise representations of $A \times B$”… by definition, basically. As a quirky personal preference, I prefer to work exclusively in this latter language; everything is nice and explicit, and it’s also the language you used in TWF.

So, a morphism $\sigma : A \rightarrow B$ between $G$-sets is just a set-representation of $B \times A$ (nicer to swap arguments),

$$\sigma : B \times A \rightarrow Set.$$

Essentially as you taught us in TWF, I think of this as a collection of “equivariant transition amplitudes”. In other words, for each $a \in A$ and $b \in B$, we have the set

$$\langle b | \sigma | a \rangle$$

which simply tells us the collection of ways to go from $a \in A$ to $b \in B$. Said in this way, it’s clear how to compose them. If $\rho : B \rightarrow C$ is another such gizmo, then the composite $\rho \circ \sigma : A \rightarrow C$ will have transition amplitudes

$$\langle c | \rho \circ \sigma | a \rangle = \bigsqcup_{b \in B} \langle c | \rho | b \rangle \times \langle b | \sigma | a \rangle.$$

As Feynman and Zeno taught us, “The collection of ways to go from $a$ to $c$ equals the sum over all intermediate points $b$ of the ways of going from $a$ to $b$ times the ways of going from $b$ to $c$”. Of course, this is just the definition of composition of spans, but somehow it seems more primeval and quantum-y written this way.
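
For concreteness, here is a small sketch (my own, with made-up toy data, not from the comment) of that composition rule, with a morphism stored as a dictionary sending $(b, a)$ to the set $\langle b|\sigma|a\rangle$; the disjoint union is kept disjoint by tagging each way with the intermediate point $b$.

```python
# Composing "matrices of sets": <c|rho.sigma|a> is the disjoint union over b
# of <c|rho|b> x <b|sigma|a>.  This is just composition of spans of sets.

def compose(rho, sigma, A, B, C):
    return {(c, a): {(b, r, s)                      # tag by b to keep the union disjoint
                     for b in B
                     for r in rho.get((c, b), set())
                     for s in sigma.get((b, a), set())}
            for c in C for a in A}

# Toy example: A = B = C = {0, 1}, with one way between any two points.
A = B = C = [0, 1]
sigma = {(b, a): {"s"} for b in B for a in A}
rho = {(c, b): {"r"} for c in C for b in B}

composite = compose(rho, sigma, A, B, C)
print({k: len(v) for k, v in composite.items()})    # every entry has |B| = 2 ways
```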

There’s also the part about getting an intrinsic characterization of ‘nice topoi’, which will allow the reconstruction of a finite groupoid $X$ from the nice topos $Set^X$…. this part is bound to work, if $J$ is one-to-one in the appropriate bicategorical sense… we characterize its ‘image’… which gives us our ‘reconstruction theorem’ [warning: severe distortion of John’s original!]

I think I disagree with this. That’s not a reconstruction theorem…that’s basically no more than a play on words!

If I understand you correctly, you are saying that (once we have sorted out the one-to-one thing, which isn’t hard) all we need to do is define the 2-category Nice to have, as objects, morphisms and 2-morphisms, the thingies in the image of our functor. Then we get reconstruction for free. You are surely the first to agree this is an empty procedure (though I sadly agree so many ‘reconstruction theorems’ nowadays have this flavour); indeed you say so in your notes,

We would also like intrinsic characterizations of nice functors and nice natural transformations.

An intrinsic reconstruction theorem would be like Doplicher-Roberts and say something like, “Want to recover the groupoid $X$ from the category $Set^X$? No problem. All you have to do is take the groupoid of fiber functors, i.e. the monoidal thingy-preserving functors from $Set^X$ into $Set$.” In other words, a categorified Gelfand-Naimark theorem… for $G$-sets. But I’d be interested to see a precise version of that!

Okay, let me summarize my thoughts. I like the bicategory $Hecke(G)$ - objects are finite $G$-sets, and the hom-categories are $Hom(A,B) = [A \times B, Set]$ - I like it a lot. But I’d like a cleaner, intrinsic mechanism for getting from $Hecke(G)$ to the bicategory which has finite $G$-sets for objects, and hom-groupoids given by $Hom(A,B) = (A \times B)//G$.

Posted by: Bruce Bartlett on January 6, 2008 11:30 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 19)

Bruce wrote:

In this spirit, let me admit: I actually still don’t really understand why the original statement of the theorem failed, though at the time I fooled myself into believing I did! Gulp.

Don’t feel bad — I’m really glad to have someone out there paying enough attention to get into serious discussions of these things.

In the original, failed version, here’s how I tried to build the vector space of intertwiners between permutation representations, namely what you’re calling

$$hom(\mathbb{C}[A], \mathbb{C}[B])$$

where $A$ and $B$ are finite $G$-sets.

First, I took the action groupoid:

$$(A \times B)//G$$

Then, I looked at actions of this on finite sets:

$$hom((A\times B)//G, FinSet)$$

This is a finitely generated free $FinSet$-module. Decategorifying, it gives a finitely generated free $\mathbb{N}$-module. Tensoring with $\mathbb{C}$, it becomes a finite-dimensional vector space.

Unfortunately this vector space is too big! We want a vector space that has one basis element for each orbit of the action of $G$ on $A \times B$, that is, for each point of $(A \times B)/G$. These orbits correspond to components of the groupoid $(A \times B)//G$. But the vector space we get has lots of basis elements for each component of $(A \times B)//G$.

Why?

Well, our vector space has one basis vector for each ‘indecomposable’ action of $(A \times B)//G$. (There’s an obvious concept of the ‘direct sum’ of actions of a groupoid on finite sets, and every such action is a direct sum of ‘indecomposable’ ones.) What are these indecomposable actions like? We get one by first choosing a connected component of the groupoid $(A \times B)//G$, which is equivalent to some group… and then choosing a transitive action of that group on some finite set.

That last choice, of a transitive action on some finite set, is the reason we’re getting a vector space that’s too big.
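
To see concretely how much too big it is, here is a little sketch (my own illustration, not from the comment): for a one-object groupoid equivalent to a group $G$, the indecomposable actions on finite sets are the transitive $G$-sets, i.e. the subgroups of $G$ up to conjugacy. For $G = S_3$ that gives 4 basis vectors where we wanted just one per component.

```python
from itertools import combinations, permutations

# For G = S_3: count subgroups up to conjugacy (= transitive G-sets).
G = list(permutations(range(3)))                       # permutation tuples
comp = lambda g, h: tuple(g[h[i]] for i in range(3))   # composition g o h
inv = lambda g: tuple(sorted(range(3), key=lambda i: g[i]))

def is_subgroup(H):
    return all(comp(a, b) in H for a in H for b in H)

subgroups = [frozenset(H) for r in range(1, len(G) + 1)
             for H in combinations(G, r) if is_subgroup(frozenset(H))]

conjugacy_classes = {frozenset(frozenset(comp(comp(g, h), inv(g)) for h in H) for g in G)
                     for H in subgroups}

print(len(subgroups), len(conjugacy_classes))          # 6 subgroups, 4 up to conjugacy
```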

There are various ways to fiddle with this idea and get it to work, but pretty soon Jim Dolan saw a better thing to do. Instead of taking

$$hom((A\times B)//G, FinSet)$$

and trying to beat it down into the correct vector space, just take the groupoid

$$(A \times B)//G$$

and form the free vector space on its set of components!

That much is obvious. But that much alone would be unsatisfying. We wanted

$$hom((A\times B)//G, FinSet)$$

to be in the picture, because objects in here are spans of $G$-sets from $A$ to $B$, and these are very nice things.

The trick is to realize that

$$(A \times B)//G$$

and

$$hom((A\times B)//G, FinSet)$$

are just different ways of talking about the same thing: a groupoid, and its category of actions on finite sets. We can go back and forth between these two! That’s how the ‘nice topos’ idea got into the act.

(Right now I’m guessing it may be technically a bit easier to characterize

$$hom((A\times B)//G, Set)$$

than

$$hom((A\times B)//G, FinSet)$$

That’s why my notes talk about the former.)

On a separate note, regarding this ‘reconstruction’ theorem:

If I understand you correctly, you are saying that (once we have sorted out the one-to-one thing, which isn’t hard) all we need to do is define the 2-category Nice to have, as objects, morphisms and 2-morphisms, the thingies in the image of our 2-functor. Then we get reconstruction for free. You are surely the first to agree this is an empty procedure…

Sure! But I think of this as like throwing a thin rope across a canyon, as the first step towards building a bridge. Once we’ve shown this 2-functor is sufficiently ‘one-to-one’, it’s just a matter of cleverness and hard work to find better and better characterizations of its image, and better and better ways to describe the inverse. Someone is bound to find really nice answers. It may take a while… but there’s no true suspense as to whether it will go through.

An intrinsic reconstruction theorem would be like Doplicher-Roberts and say something like, “Want to recover the groupoid $X$ from the category $Set^X$? No problem. All you have to do is take the groupoid of fiber functors, i.e. the monoidal thingy-preserving functors from $Set^X$ into $Set$.” In other words, a categorified Gelfand-Naimark theorem… for $G$-sets. But I’d be interested to see a precise version of that!

Yes, I agree that it will be nice to see. That’s why I want to find it!

By the way, there’s already a well-known Tannaka-Krein reconstruction theorem in this context, at least in the case where our groupoid $X$ is a group. In fact it works for any algebraic theory, not just ‘the theory of $G$-sets’. You start with the category of models of your algebraic theory, together with its ‘fiber functor’ down to $Set$, and recover the algebraic theory from that. Lawvere did it in his thesis, and you can probably almost guess how it goes.

So, at least in this case, we deserve to have a Doplicher-Roberts version too, where we deny ourselves the use of a fixed fiber functor to $Set$.

In fact, this result may even be known already! For example when $G$ is a group, topos theorists know a lot of fun stuff about $Set^G$. They call it the ‘classifying topos for $G$-torsors’, meaning that geometric morphisms $E \to Set^G$ (where $E$ is any topos over $Set$) correspond to ‘$G$-torsors over $E$’. I don’t know this stuff very well — I’m just reading it out of Mac Lane and Moerdijk. But, it’s the kind of thing that might help us recover $G$ from the topos $Set^G$. And, there’s a lot of other stuff about topoi of presheaves on groupoids that could help, too.

(Another dirty trick would be to use the usual Doplicher–Roberts theorem to prove this set-based one. But, there must be something nicer.)

Posted by: John Baez on January 7, 2008 2:30 AM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 19)

By the way, Bruce, I should warn you — and everyone else in the universe — of a slight difference between my notation in this blog entry and my notation in the supplementary reading.

In this blog entry — and also the forthcoming final lecture — I wanted to avoid mentioning topoi, because 9 out of 10 mathematicians suffer a brain seizure every time they see the word ‘topos’. (Whoops!) So, I described $Hecke(G)$ as a gadget whose hom-thingies are finite groupoids. This is actually the most useful viewpoint when we’re trying to ‘degroupoidify’ and get a gadget whose hom-thingies are vector spaces.

But, as Jim realized, it’s also very important to think of $Hecke(G)$ as a gadget whose hom-thingies are nice topoi. This is the outlook I take in the supplementary reading. A nice topos is just a category equivalent to one of the form $Set^X$ for some finite groupoid $X$. You don’t need to know what a topos is to accept this definition. I could have used any other word equally well. It just happens that $Set^X$ is a topos.

If everything I believe turns out to be true, there’s a ‘Doplicher-Roberts reconstruction theorem’ that takes a nice topos and gives you back the groupoid $X$. So, it’s only a matter of taste whether we think of $Hecke(G)$ as having hom-groupoids or hom-(nice topoi).

In the supplementary reading, I say $Hecke(G)$ has hom-(nice topoi), and use $\overline{K}(Hecke(G))$ to stand for the corresponding thing that has hom-groupoids… the thing I’m calling $Hecke(G)$ here.

I’m too tired right now to explain just why the ‘nice topos’ perspective is so handy; to a zeroth approximation you can just ignore it! I just want to warn you about the difference in notation.

Posted by: John Baez on January 6, 2008 5:21 AM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 19)

Okay — I have a thought about how this ‘reconstruction’ process might work, where we recover a finite groupoid $X$ from the category of its actions on sets, $Set^X$. I’m hoping some of the topos experts out there can say if I’m making sense. I don’t feel I understand this stuff very well.

I’ll limit myself to the case where our groupoid is a group $G$… but I won’t need $G$ to be finite.

Someone hands us a category equivalent to the category of $G$-sets for some group $G$, but doesn’t tell us which $G$. How do we recover $G$?

First, we riffle through Mac Lane and Moerdijk and note that the category of $G$-sets, $Set^G$, is the classifying topos for $G$-torsors. This apparently gives an equivalence

$$HOM(Set, Set^G) \simeq G Tor$$

where $HOM$ means the category of geometric morphisms between topoi, and natural transformations between these, and $G Tor$ is the usual category of $G$-torsors.

So, we’ve recovered the category of $G$-torsors. How do we recover $G$ itself?

Well, there’s a forgetful functor $G Tor \to Set$. The natural isomorphisms from this functor to itself form a group, and I think this group is $G$.

So, we’ve got $G$.

More formally: I hope that if someone hands us a category $C$ equivalent to $Set^G$ for some $G$, we can take $HOM(Set,C)$ and find that it’s automatically equipped with a forgetful functor to $Set$, say

$$F : HOM(Set, C) \to Set$$

Then, we form the group $Aut(F)$ of natural isomorphisms from $F$ to itself, and that’s our group $G$.

Is this right? How does that “automatically” work, exactly? Does it generalize to groupoids?

Posted by: John Baez on January 7, 2008 7:01 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 19)

I don’t know whether what you say is correct, but for finite groupoids (indeed, finite categories) the story is even simpler than the one you tell.

There are two crucial points:

  • For a small category $C$, $HOM(Set, Set^C) \simeq Flat(C^{op}, Set)$, where the right-hand side is the category of flat functors and natural transformations between them. I talked about this here. I didn’t explain what ‘flat’ meant, but for groupoids it just means ‘torsor’.
  • Representables are always flat, and if $C$ is a finite category then the representables are the only flat functors $C^{op} \to Set$. So $C \simeq Flat(C^{op}, Set)$.

Putting these together, if $C$ is a finite category then $HOM(Set, Set^C) \simeq C$. This is how you can recover $C$ from $Set^C$. In particular, you can do it for finite groupoids.

Isn’t it clear anyway that $G Tor \simeq G$, for any group $G$? Or do you need finiteness?

There’s also a reassuring background fact that for any (small?) category $C$, you can recover $C$ from $Set^C$ — up to Cauchy completion, anyway. Cauchy completion is the process of forcing idempotents to split (i.e. for every idempotent arrow $e$ in the category, adding in arrows $p$ and $i$ such that $p i = 1$ and $i p = e$). Groupoids are always Cauchy complete, since they have no idempotents except identities, so if $G$ is a groupoid then you can certainly recover it from $Set^G$.
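
To make the idempotent-splitting description concrete, here is a tiny sketch (my own, in $Set$ rather than a general category): an idempotent $e$ on a set splits through its image, giving $p$ and $i$ with $p i = 1$ and $i p = e$.

```python
# Splitting an idempotent e : X -> X through its image, as in Cauchy completion.
X = range(6)
e = {x: x % 3 for x in X}              # e is idempotent: e(e(x)) = e(x)

image = sorted(set(e.values()))        # the splitting object
p = {x: e[x] for x in X}               # projection p : X -> image
i = {y: y for y in image}              # inclusion  i : image -> X

assert all(e[e[x]] == e[x] for x in X)         # e idempotent
assert all(p[i[y]] == y for y in image)        # p o i = identity
assert all(i[p[x]] == e[x] for x in X)         # i o p = e
print("split object:", image)
```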

The method of recovery for general categories is slightly complicated. One has to find a way of characterizing the representables among all presheaves. There are a couple of ways to do this, and I can have a bash at explaining if anyone wants me to. But I never found it very enlightening.

Posted by: Tom Leinster on January 7, 2008 7:34 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 19)

Wow! This stuff sounds great! It’ll take me a while to absorb it, but let me start by explaining why I did something absurd:

Tom wrote:

Isn’t it clear anyway that $G Tor \simeq G$, for any group $G$?

Yes, it’s clear: for any group $G$, all left $G$-torsors are isomorphic, so a skeleton of $G Tor$ has one object, which might as well be $G$ itself, viewed as a $G$-torsor. The automorphism group of $G$ as a left $G$-torsor is just $G$ itself, acting via right translations. So, $G Tor \simeq G$.

So, once I got my paws on $G Tor$ I was done getting $G$. And I should have known this; I’m a big fan of this fact. It was absurd for me to do any further finagling to recover $G$. Why did I bother getting $G$ as automorphisms of the forgetful functor $G Tor \to Set$? It’s because I was in the mood for ‘Tannaka–Krein reconstruction’, where we recover an algebraic gadget from endomorphisms of a forgetful functor.

While it was absurd, it wasn’t wrong. Once we remember $G \simeq G Tor$, we see the automorphism group of the forgetful functor $G Tor \to Set$ is the same as the group of all transformations of $G$ that commute with left translations — and this is just $G$, acting via right translations!
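
A tiny brute-force check of that last statement (my own sketch, using a group I chose for illustration): for a small group, the bijections commuting with every left translation are exactly the right translations.

```python
from itertools import permutations

# G = Z/4 under addition.  Bijections f : G -> G with f(g + h) = g + f(h)
# for all g, h are exactly the right translations, so Aut of the torsor is G.
n = 4
G = range(n)

def commutes_with_left_translations(f):        # f is a tuple; f[x] is the image of x
    return all(f[(g + h) % n] == (g + f[h]) % n for g in G for h in G)

equivariant = [f for f in permutations(G) if commutes_with_left_translations(f)]
right_translations = [tuple((h + k) % n for h in G) for k in G]

print(sorted(equivariant) == sorted(right_translations))   # True
print(len(equivariant))                                    # 4 = |G|
```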

Posted by: John Baez on January 7, 2008 8:15 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 19)

Thanks Tom and John, this is very good to know! I don’t know anything about topoi and geometric morphisms… though luckily James my housemate does.

I’m quite surprised by the final form of the equation,

$$C \simeq HOM(Set, Set^C)$$

I would have expected it to be more of the flavour of $HOM(Set^C, Set)$… it’s not, and that’s interesting.

Posted by: Bruce Bartlett on January 7, 2008 11:18 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 19)

It would have been helpful if I’d said what that equivalence $C \simeq HOM(Set, Set^C)$ was.

Here goes. I’ll describe the functor from left to right. This functor exists no matter what $C$ is, though it’s not necessarily an equivalence unless $C$ is finite.

Given any $c \in C$, I have to describe a geometric morphism $Set \to Set^C$. In other words, I have to describe a functor $Set \to Set^C$ with a finite-limit-preserving left adjoint.

In this case the left adjoint is easier to describe than the right. It’s simply evaluation at $c$. This preserves all limits.

The right adjoint maps a set $S$ to $Set(C(-, c), S) \in Set^C$, which is itself the functor mapping $d \in C$ to $Set(C(d, c), S)$.

You can check for yourself that this is an adjoint pair. The way I look at it is that the left adjoint $ev_c: Set^C \to Set$ (evaluation at $c$) is isomorphic to tensoring with $C(-, c)$, whereas the right adjoint is homming out of $C(-, c)$. Think of covariant functors on $C$ as left $C$-modules and contravariant functors on $C$ as right $C$-modules, if you like.

Maybe the reason why this goes against your expectation is that there’s a conventional choice of direction for geometric morphisms, and sometimes it seems inconvenient. (I waffled about this before.)

Posted by: Tom Leinster on January 7, 2008 11:42 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 19)

Without attempting to justify why this conventional choice of direction is ‘natural’ (in the sense of Tom’s recent and very elegant post), it’s at least a ‘useful’ direction in a number of respects. For example,

  • If $G$ and $H$ are groups, then a group homomorphism $G \to H$ induces a geometric morphism $Set^G \to Set^H$, and up to isomorphism every such geometric morphism is so induced.
  • If $X$ and $Y$ are (sober) topological spaces, then every continuous map $X \to Y$ induces a geometric morphism $Sheaves(X) \to Sheaves(Y)$, and up to isomorphism every such geometric morphism is so induced.
Posted by: Todd Trimble on January 8, 2008 12:20 AM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 19)

Thanks Tom.

I waffled about this before.

You can rest assured that I have been studiously saving up all the posts you have made on topos-affairs in the last couple of months :-) I intend to hit them with a vengeance very soon… on the way to London for Chris Isham’s talk, for instance.

Posted by: Bruce Bartlett on January 8, 2008 12:35 AM | Permalink | Reply to this

Nice Toposes

John,

Here are some very preliminary thoughts on this issue you brought up in your notes, on characterizing ‘nice toposes’ in intrinsic terms (p. 4). I think it should be possible to state it all much more cleanly, but this is a first pass.

A ‘nice’ topos is the category of presheaves over some finite groupoid $C$. One nice fact is

Theorem. $C$ is a groupoid iff $Set^{C^{op}}$ is a Boolean topos.

(‘Boolean’ means that every subobject $i: c \to d$ has a complementary subobject: one whose union with $i$ is all of $d$, and whose intersection with $i$ is initial (empty). It’s easy to see that $G$-Set, for $G$ a groupoid, is Boolean.)

So it would be enough to give an intrinsic characterization of presheaf toposes, and add the word ‘Boolean’.

I’ll start with a simple characterization theorem for presheaf toposes. I’ve talked about this a little elsewhere, but looking at what I said there it seems brutally abstract, and it would be better to say it again a little more nicely.

By the way, Tom has of course already stated some pertinent facts on presheaf toposes in the language of toposes and geometric morphisms. But I want to put geometric morphisms a little to the side for the time being, and talk more in the language of presheaf categories and cocontinuous maps, because that’s where nice toposes and nice functors actually live.

Let $E$ be a small-cocomplete category. Here is the key definition (due to Lawvere?):

Definition. An object $e$ of $E$ is tiny if the hom-functor $E(e, -): E \to Set$ preserves small colimits.

(By the way: in email you were singling out the role of indecomposable = transitive $G$-sets. These are those presheaves $X$ on $G$ [or on $G^{op}$ if you wish] such that $hom(X, -)$ preserves small coproducts. Such objects are sometimes called connected objects. Here I’m strengthening that condition to connected and projective: $hom(X, -)$ also preserves coequalizers.)

Notice that when $E$ is a presheaf topos $Set^{C^{op}}$, the representable presheaves $hom(-, c)$ are tiny, since

$$Set^{C^{op}}(hom(-, c), F) \cong F(c) = eval_c(F)$$

by Yoneda, and evaluation at an object $c$ preserves colimits of presheaves since colimits of presheaves are computed object-wise.

In fact, the Cauchy completion of $C$ is equivalent to the full subcategory of tiny presheaves on $C$. This gives a reconstruction of a small category $C$ from $Set^{C^{op}}$, if $C$ is already Cauchy complete (as it will be in the case where $C$ is a groupoid, as Tom has already pointed out).

To characterize those cocomplete categories $E$ which are presheaf categories, it therefore makes sense to focus on the full subcategory of tiny objects of $E$, and ask what else we need. Well, in the presheaf case, every presheaf can be expressed as a canonical colimit of representables (= tiny presheaves). We say that the representables are dense in the category of presheaves. Thus it makes sense to focus on the condition that the tiny objects form a dense subcategory.

(Returning to your email, you had observed that every $G$-set is the coproduct of indecomposables, i.e., the coproduct of objects $X$ whose representables $hom(X, -)$ are coproduct-preserving. Here, we are pursuing an analogous statement, that every $G$-set is the colimit of objects $X$ whose representables $hom(X, -)$ are colimit-preserving.)

More precisely, a full subcategory $i: C \to E$ is dense if every object $e$ of $E$ is the colimit of the cone of $C$ over $e$, i.e., the colimit of the composite

$$(i \downarrow e) \stackrel{proj}{\to} C \stackrel{i}{\to} E.$$

In the language of weighted colimits, this says that the canonical map

$$E(i-, e) \otimes_C i = \int^c E(i c, e) \otimes i c \to e$$

is an isomorphism for every $e$ (the tensor inside the coend is a coproduct of copies of $i c$ indexed over the hom-set $E(i c, e)$). This arrow is actually the counit of an adjunction between $E$ and $Set^{C^{op}}$:

$$E(X \otimes_C i, e) \cong Set^{C^{op}}(X, E(i-, e));$$

this adjunction simply restates the universal property of weighted colimits. I will call this adjunction the Kan adjunction induced by the functor $i: C \to E$.

Here then is the characterization or recognition theorem for presheaf toposes. I think this theorem goes pretty far back in the day; I have some dim trace memory that Lawvere told me it’s in Marta Bunge’s thesis.

Theorem. Let $E$ be small-cocomplete. $E$ is equivalent to a presheaf category iff the full subcategory of tiny objects, $i: Tiny(E) \to E$, is essentially small and dense.

Let me just prove the “if” direction. The equivalence $E \simeq Set^{Tiny(E)^{op}}$ is given by the Kan adjunction of $i: C = Tiny(E) \to E$. As we have seen, density of $i$ is equivalent to the fact that the counit of this adjunction is an isomorphism. It remains to see that the unit of the Kan adjunction is also an isomorphism. But for every weight $X: C^{op} \to Set$, the component of the unit at $X$ is given by the composite

$$\begin{aligned} X(c) & \cong \int^{c'} X(c') \otimes C(c, c') && [\text{by the Yoneda lemma}] \\ & \cong \int^{c'} X(c') \otimes E(i c, i c') && [\text{the embedding } i \text{ is full}] \\ & \cong E(i c, \int^{c'} X(c') \otimes i c') && [\text{by the fact that } c \text{ is tiny}] \\ & = E(i c, X \otimes_C i). \end{aligned}$$

(To be continued; this is probably not yet the most illuminating theorem to apply toward the problem of intrinsically characterizing nice toposes.)

Posted by: Todd Trimble on January 8, 2008 9:07 AM | Permalink | Reply to this

Re: Nice Toposes

Thanks, Todd — all this stuff is great! I would really like to keep talking about this until we find a good intrinsic characterization of nice topoi, nice functors, and nice natural transformations. Alas, right now it’s the first week of class, and I’m also trying to finish up two papers by mid-February. So, I will be rather slow to actually do anything — except for saying stuff like “Huh! Wow! Cool!” But, in the longer run, I think some sort of collaboration on this topic would be a real blast.

Posted by: John Baez on January 10, 2008 3:24 AM | Permalink | Reply to this

Re: Nice Toposes

I know I’ve come to this a bit late, but: is it true that every locally finite topos is equivalent to $\mathbf{FSet}^{C^{op}}$, where $\mathbf{FSet}$ is the category of finite sets? This seems intuitively likely to me, but I can’t see how the condition Todd gives above trivialises in this case.

Posted by: Jamie Vicary on February 11, 2008 12:42 PM | Permalink | Reply to this

Re: Nice Toposes

I think this is not quite true. For instance if you have a pro-finite group $G$, then the category of finite (discrete) $G$-sets is probably a locally finite topos, but in general I doubt it will be of the form you want. For instance let $G$ be the additive group of $p$-adic integers. Then the category is the category of finite sets equipped with an automorphism whose order is a power of $p$. I bet this is not the category of actions of some category on finite sets.

On the other hand, I wouldn’t be surprised if pro-finite issues are really the only obstruction. I would expect that the theory of the fundamental groupoid shows that any locally finite topos is the category of finite sets equipped with an action of a pro-finite groupoid.

Can someone who really knows these things confirm this?

Posted by: James on February 12, 2008 4:03 AM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 19)

Tom wrote:

For a small category $C$,

$$HOM(Set, Set^C) \simeq Flat(C^{op}, Set)$$

where the right-hand side is the category of flat functors and natural transformations between them. I talked about this here. I didn’t explain what ‘flat’ meant, but for groupoids it just means ‘torsor’.

Cool!

In your previous comment about flat functors, you wrote:

OK… but what’s a flat functor? I won’t give the definition, but to a first approximation, a flat functor is a functor that preserves finite limits. This is actually true when $C$ has finite limits. It’s also true that flat functors preserve all finite limits that exist in $C$. It’s somehow a bit daft to think about finite-limit-preserving functors on a category that doesn’t have all finite limits; flatness is the righteous concept.

Somehow this scared me off, as if you were saying “I could tell you what flat functors actually are, but that would drive you stark raving mad, so I’ll spare you…”

So, this time I looked at Mac Lane and Moerdijk (which I’d already been perusing) and was mildly relieved to discover that this concept of ‘flat functor’ is closely akin to the concept of ‘flat module’ for a ring — which is something I know from homological algebra: namely, a module such that tensoring with it is left exact.

More precisely: for any category $C$, they define a way to ‘tensor’ a functor $F: C \to Set$ with a functor $G : C^{op} \to Set$ and get a set. Then, they say a functor $F : C \to Set$ is flat if tensoring with $F$ is left exact.

Hey — is this ‘tensoring’ just what some people call a ‘coend’? If so, I actually understand it.

Anyway, while I don’t feel I understand this, it’s not driving me stark raving mad.

The main problem with me understanding this topos stuff is not that any piece of it seems devilishly tricky: it’s that there’s so much of it… it seems locally trivial, but globally nontrivial.

Posted by: John Baez on January 10, 2008 3:15 AM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 19)

Yeah, I know what you mean. I didn’t develop a taste for topos theory until recently, and even now it comes and goes.

However, I see the flatness story as part of category theory in general, not topos theory in particular.

I can see how I scared you off by mentioning flat functors but not defining them. It’s always a risky expository strategy: it seems so ominous. The reason is that although I could have given the definition very briefly, I wouldn’t really want to without saying some further things.

It’s like if someone asks you what it means for a topological space to be compact: you can tell them it means that every open cover has a finite subcover, but that definition requires so much digestion that it’s almost pointless to give it without saying something more. If you’re only allowed one sentence, it might be more useful to tell them what the compact subsets of $\mathbb{R}^n$ are.

If I was feeling a lot more energetic than I am, I’d go into a big explanation of flatness. But as it is, I’ll just respond to what you wrote.

and was mildly relieved to discover that this concept of ‘flat functor’ is closely akin to the concept of ‘flat module’ for a ring

Yes. You won’t be surprised to learn that both definitions are special cases of the definition of flat enriched functor. On the one hand, if $C$ is an ordinary category then you know what it means for a functor $C \to Set$ to be flat. On the other, if $R$ is a ring (= one-object $Ab$-enriched category) then you know what it means for a left $R$-module (= $Ab$-enriched functor $R \to Ab$) to be flat. So you can guess the next bit: if $V$ is a symmetric monoidal category satisfying suitable hypotheses, and if $C$ is a $V$-enriched category, you can say what it means for a $V$-enriched functor $C \to V$ to be flat.

There are a whole lot of conditions equivalent to flatness, and there’s the important idea that flatness is the morally correct substitute for ‘finite-limit-preserving’, to be used in cases where the domain category doesn’t have all finite limits. But I’ll skip that.

Hey — is this ‘tensoring’ just what some people call a ‘coend’?

Yes, it’s an example of a coend. In ‘ordinary’ algebra you can construct the tensor product of two modules as a quotient, i.e. a colimit. And coends are the same kind of thing as colimits. Coends deserve their own explanation too! Hmm, I’m not sure whether you’re understanding tensor products by using what you already know about coends, or vice versa… In any case, the formula $G \otimes F = \int^c G(c) \times F(c)$ expresses the tensor product of functors $G: C^{op} \to Set$ and $F: C \to Set$ as a coend.
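
Here is a tiny computational sketch of that formula (my own, not from the comment) in the simplest case, where $C$ is a one-object category, i.e. a group $G$: the coend is then the quotient $(A \times B)/((a\cdot g, b) \sim (a, g\cdot b))$ for a right $G$-set $A$ and a left $G$-set $B$.

```python
from itertools import product

# Tensor product over a group G = Z/2 acting on A = B = {0, 1} by XOR:
#   A (x)_G B = (A x B) / ( (a.g, b) ~ (a, g.b) ).
G, A, B = [0, 1], [0, 1], [0, 1]
act_right = lambda a, g: a ^ g
act_left = lambda g, b: b ^ g

# Grow the equivalence classes generated by the relation above.
classes = {(a, b): {(a, b)} for a, b in product(A, B)}
changed = True
while changed:
    changed = False
    for (a, b), g in product(list(classes), G):
        x, y = (act_right(a, g), b), (a, act_left(g, b))
        if classes[x] is not classes[y]:
            merged = classes[x] | classes[y]
            for p in merged:
                classes[p] = merged
            changed = True

tensor = {frozenset(c) for c in classes.values()}
print(len(tensor))   # 2 equivalence classes, since G acts freely here
```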

You can also characterize tensor products via adjointness: $- \otimes F \dashv Hom(F, -)$, as one might guess.

All of this makes me wish, for the umpteenth time, that there was a good successor to Categories for the Working Mathematician: a second course in category theory. Borceux is good for many things, but more detailed than what I have in mind. I once tried to persuade someone to write it, and this person wasn’t totally opposed to the idea, so I may keep pushing them.

Posted by: Tom Leinster on January 10, 2008 4:05 AM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 19)

Tom wrote:

I can see how I scared you off by mentioning flat functors but not defining them. It’s always a risky expository strategy: it seems so ominous.

Yeah. I often do this sort of thing myself, in TWF — withholding information that’s not absolutely necessary, to keep the exposition from spiralling into an endless series of digressions. But, I should remember that this can make people think things are scarier than they actually are. I guess one trick is to say “It’s not really that hard, but I don’t feel like explaining…”

Coends deserve their own explanation too!

Right; luckily I don’t really need that.

Hmm, I’m not sure whether you’re understanding tensor products by using what you already know about coends, or vice versa…

Both. I keep having to remember the idea of coends, or at least these tensor products, whenever I try to explain how the geometric realization of a simplicial set $F: \Delta^{op} \to Set$ is built using the obvious functor $\Delta \to Set$. For some reason I need to do this about once a year, just enough time to completely forget it. But it gets a little easier each time.

Anyway, almost everything you’re saying now is discussed in Mac Lane and Moerdijk — so now that you’ve given me the initial nudge, I can probably get it out of there as needed.

I once tried to persuade someone to write it, and this person wasn’t totally opposed to the idea, so I may keep pushing them.

Do it… gently.

Posted by: John Baez on January 10, 2008 5:04 AM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 19)

Hey — is this ‘tensoring’ just what some people call a ‘coend’? If so, I actually understand it.

I’m a little surprised you asked; we had a big discussion of it almost a year ago, kicked off with comments by you, here.

Well, on second thought, you didn’t quite say ‘tensoring’ in that comment; you said ‘indexed (or weighted) colimit’. I guess ‘tensoring’ and ‘weighted colimit’ are so closely conjoined in my mind that I missed at first that you said one but not the other! (I said something on why they are so closely conjoined in my mind later in that discussion, here.)

[I was interrupted and came back to see that Tom pipped me a little, so I’ll just say a few more words.]

Anyway, yes, as Tom was saying: the typical formula for tensoring say a left module $F: X \to Set$ and a right module (or weight) $G: X^{op} \to Set$ is the weighted colimit

$$G \otimes_X F = \int^{x \in Ob(X)} G(x) \times F(x)$$

which you knew of course and which involves a coend. And, there’s homming a left module $G: X \to Set$ with a left module $H: X \to Set$, which involves an end:

$$hom_X(G, H) = \int_{x \in Ob(X)} H(x)^{G(x)}$$

and this is also called a weighted limit.

If you think about it, the formula for the weighted colimit is analogous to a formula for a general colimit in terms of sums and coequalizers. It is a particular (and particularly useful) presentation of a weighted colimit in terms of more specialized concepts (as a conical colimit is analogously presented in terms of special colimits, e.g., sums and coequalizers).

So we can also turn it around and say that taking a coend

$$\int^x: Set^{X \times X^{op}} \to Set$$

is tensoring (or taking a weighted colimit) with some special weight. Which one? Why, the hom-functor of course! That is:

$$\int^x F(x, x) = hom_X \otimes_{(X \times X^{op})} F.$$

Speaking of tensoring and topos theory, a classic example of tensoring with a flat module is taking geometric realization of simplicial sets,

$$Set^{\Delta^{op}} \to Top,$$

where you’re tensoring with a flat module $\sigma: \Delta \to Top$ (the affine simplex functor). This sort of exemplifies what Tom was saying: flat functors don’t belong just to topos theory, but are part of a more general categorical landscape. (At the same time, it’s still fun to think of geometric realization as classifying a certain model of a theory, whose classifying topos is simplicial sets. What theory is that? The theory of the interval – $Top$ has one of course! No, $Top$ isn’t a topos, but the idea is still there, and in fact some people do replace $Top$ by a spatial topos here.)

Posted by: Todd Trimble on January 10, 2008 5:12 AM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 19)

John wrote:

Hey — is this ‘tensoring’ just what some people call a ‘coend’? If so, I actually understand it.

Todd replied:

I’m a little surprised you asked; we had a big discussion of it almost a year ago, kicked off with comments by you, here.

Well, on second thought, you didn’t quite say ‘tensoring’ in that comment; you said ‘indexed (or weighted) colimit’.

Yeah. I think I understand coends and weighted colimits, in a flickering intermittent way which flares into full flame once a year when I actually use them. But right now I was just reading about flat functors in Mac Lane and Moerdijk, and they defined these in terms of some mysterious thing called ‘tensoring’.

Flipping pages backwards to read about tensoring, I didn’t see them mention coends or weighted colimits… but when I stared at their formula for tensoring, it looked suspiciously familiar! It was sort of like seeing someone you know, but without any sign of recognition on their part: it makes you wonder if it’s the same person, or someone who just looks similar. So, my question was a kind of sanity check.

At the same time, it’s still fun to think of geometric realization as classifying a certain model of a theory, whose classifying topos is simplicial sets. What theory is that? The theory of the interval… !

I think it’s too late in the evening for this much fun. Earlier this afternoon Jim and I figured out how the trefoil knot, the discriminant of a cubic, the zeros of the 24th power of the Dedekind eta function, the cusp of the moduli space for elliptic curves, the ideal in the ring of modular forms generated by $g_2^3 - 27g_3^2$, and the image of the Weyl chamber walls for $A_2$ under the map

$$\mathbb{C}^2 \to \mathbb{C}^2/S_3 \cong \mathbb{C}^2$$

are secretly almost the same thing! That was fun, but suddenly adding all these new connections almost shorted out my central nervous system. Thinking about what you just said could do me in.

Posted by: John Baez on January 10, 2008 7:46 AM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 19)

The fact you can reconstruct a groupoid from the category of its representations is just part of Galois theory (in the Grothendieck style; I think this appeared first in Exposé V of SGA 1, and it was achieved by Olivier Leroy at the end of the 1970’s, and the (unpublished) work of Leroy was independently rediscovered by Ieke Moerdijk at the beginning of the 1980’s).

The basic result of the theory is that given any groupoid $G$, you can always compute canonically $G$ as the category of points of the topos $\hat{G} = Set^{G^{op}}$. In other words, there is a canonical equivalence of categories $$\{\text{geometric morphisms } Set \to \hat{G}\} \simeq G.$$

I propose a little recollection of this. I will recount the Galois theory just because I find it beautiful, and finish with something that might help to understand the notion of ‘nice functors’ as well.

In a topos $X$, you can define what it means for a sheaf $F$ (i.e. an object of $X$) to be locally constant. It means that there is an epimorphism $U \to 1$ (we might call this a cover of $X$) such that $U \times F$ is a constant sheaf on $X/U$ (equivalently, $U \times F$ is isomorphic to $U \times I$ over $U$ in $X$, where $I$ is a constant sheaf). We can then define the category $LC(X)$ as the full subcategory of $X$ made of locally constant sheaves. When everything is easy, $LC(X)$ is a topos, and it is the category of representations of the fundamental groupoid $\pi_1(X)$ of $X$. In the general case, there is a bigger category $SLC(X)$ defined as the full subcategory of $X$ whose objects are the sums of locally constant sheaves. And the good news is that (under the mild assumption that $X$ is locally connected) $SLC(X)$ is actually a topos and that the inclusion functor $SLC(X) \to X$ preserves finite limits and arbitrary small colimits. Whence we get a geometric morphism of topoi $X \to SLC(X)$.

What have we done? To understand this, define a Galois topos to be a topos $X$ which is generated by its locally constant sheaves (i.e. such that $X = SLC(X)$). One can characterize these as the topoi which are generated by their Galois covers (i.e., in the case $X$ is connected, sheaves $F$ which are torsors under the discrete group $Aut(F)$). The construction $SLC(X)$ is thus obviously the universal Galois topos associated to $X$. But to really understand Galois theory, we have to track back some groupoids. So here they come!

There is also another notion we all know from topology: simply connected topoi. A topos $X$ is locally simply connected if there exists a sheaf $U$ on $X$ such that any locally constant sheaf on $X$ is constant on $U$ (in classical literature, in the case where $X$ is a good old topological space, such a $U$ is called a universal cover). Then one can check easily that a topos $X$ is locally simply connected if and only if $LC(X) = SLC(X)$. Moreover, the category of points of $LC(X)$ is a groupoid, usually denoted by $\pi_1(X)$, and there is an equivalence of topoi

$$\widehat{\pi_1(X)} \simeq LC(X).$$

But that is not all: the 2-functor $G \mapsto \hat{G}$ induces an equivalence of 2-categories from the 2-category of groupoids to the 2-category of locally simply connected Galois topoi. Finally, any Galois topos is a filtering 2-limit of locally simply connected Galois topoi. In other words, the fundamental group of a general topos $X$ is a pro-group. The theory of fundamental groupoids is thus a mere 2-pro-adjoint of the inclusion of groupoids into topoi (you get the Van Kampen theorem on the nose with this).

I will finish with a few words on `nice functors’. Given two small groupoids $G$ and $H$, one can look at the cocontinuous functors (i.e. the colimit-preserving functors) $\hat{G} \to \hat{H}$. The theory of Kan extensions tells us that the category of such functors is equivalent to the category $\widehat{G \times H}$. But given a presheaf $F$ on $G \times H$, one can form its category of elements $EF$ (sometimes called in the literature the translation groupoid associated to $F$), which happens to be a discrete fibration over $G \times H$:

$$EF \to G \times H.$$

Hence we obtain a span of groupoids

$$G \leftarrow EF \to H$$

And what happens is that the `nice functor’ obtained from this span is canonically isomorphic to the cocontinuous functor we started from.
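To make the comparison concrete (this is just the usual left Kan extension along the Yoneda embedding, written as a coend, after using inversion to regard the presheaf $F$ as a functor $G \times H^{op} \to Set$): the cocontinuous functor attached to $F$ sends $X \in \hat{G}$ to

$$\Phi_F(X)(h) \;=\; \int^{g \in G} X(g) \times F(g, h) \;=\; \Big(\coprod_{g} X(g) \times F(g, h)\Big)\Big/\sim,$$

where the relation identifies $(x \cdot \alpha,\, y) \in X(g) \times F(g, h)$ with $(x,\, \alpha \cdot y) \in X(g') \times F(g', h)$ for each $\alpha: g \to g'$ in $G$.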

Conclusion: `nice functors’ are just cocontinuous functors :-)

We can use this correspondence to describe geometric morphisms $\hat{H} \to \hat{G}$: we have to understand under which restriction on $EF$ the cocontinuous functor will preserve finite limits. But preserving finite limits implies in particular preserving the terminal object. This last property implies that the projection $EF \to H$ is an isomorphism. Hence geometric morphisms from $\hat{H}$ to $\hat{G}$ are just functors from $H$ to $G$. This is how you prove that the functor $G \mapsto \hat{G}$ is locally an equivalence of categories (i.e. is 2-fully faithful). The fact that $G$ is equivalent to the category of points of $\hat{G}$ is now equivalent to the fact that the category of functors from $1$ to $G$ is $G$ itself, which is not too awful to check…

Last remark: this description of `nice functors’ works for general small categories. This is often called the theory of distributors.
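For the record, with one common convention (a distributor from $C$ to $D$ being a functor $D^{op} \times C \to Set$), composition of distributors is again given by a coend:

$$(G \circ F)(e, c) \;=\; \int^{d \in D} G(e, d) \times F(d, c),$$

which corresponds to composing the associated cocontinuous functors $Set^{C^{op}} \to Set^{D^{op}} \to Set^{E^{op}}$.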

Posted by: Denis-Charles Cisinski on January 12, 2008 12:06 AM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 19)

Your argument that nice functors = cocontinuous functors is very, how shall we say: nice! And it seems you are also suggesting the additional nice fact that nice transformations are just transformations.

So in other words, the 2-category NICE is biequivalent to the bicategory $Bim_{gpd}$ of groupoids and bimodules (= profunctors = distributors) between them. The 2-category $Gpd$ of groupoids and functors sits inside of NICE, where a functor $f: G \to H$ induces a nice functor

$$Set^{G^{op}} \to Set^{H^{op}},$$

namely the Kan extension which is left adjoint to the pulling-back functor

$$Set^{f^{op}}: Set^{H^{op}} \to Set^{G^{op}}$$

which is itself a nice functor. In other words, we have a homomorphism

$$i: Gpd \to Bim_{gpd}$$

which maps 1-cells to left adjoint 1-cells. Conversely, since groupoids are Cauchy complete, every left adjoint 1-cell in $Bim_{gpd}$ arises in this way.
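To spell out what $i$ does to a 1-cell (a standard computation; the exact variance conventions should be taken with a grain of salt): for $f: G \to H$, the bimodule $i(f)$ and its right adjoint are the two representables

$$i(f): (g, h) \mapsto H(h, f(g)), \qquad \text{right adjoint}: (h, g) \mapsto H(f(g), h),$$

with unit and counit built from the identities and the composition of $H$; the second bimodule is the one corresponding to the restriction functor $Set^{f^{op}}$.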

$Bim_{gpd}$ is an example of what is called a cartesian bicategory, a notion meant to capture a ‘generalized calculus of relations’. This would include the calculus of ordinary relations in a suitably nice category (a regular category), but is intended to cover also the 2-categorical calculus of spans in a finitely complete category, and of internal categories and bimodules in a finitely complete category, and other examples of similar ‘relational’ character. In recent years cartesian bicategories have undergone a renaissance, due to efforts of people like Wood, Carboni, Kelly, and Walters to properly address the technical monoidal bicategory aspects (which hadn’t been worked out when Carboni and Walters first introduced the notion).

Anyway, we now have a tie-in between Groupoidification and the theory of cartesian bicategories which is already pretty well-developed, and at this point I think I can give a good answer to the question ‘why groupoids?’ (or perhaps better, ‘why not categories?’). We can get a hint of this by looking at what Denis-Charles said above: one can play a similar game with nice functors between more general presheaf categories,

$$F: Set^C \to Set^D,$$

where they basically amount to functors $f: C^{op} \times D \to Set$ (that is, bimodules), or to discrete fibrations of the form

$$El(f) \to C^{op} \times D$$

which gives a span in $Cat$ from $C^{op}$ to $D$. Notice the variance however! But in $Gpd$, we can straighten out the variance so as to consider spans from $G$ to $H$.
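Indeed, the straightening is just the canonical isomorphism given by inversion,

$$G \;\cong\; G^{op}, \qquad \alpha \mapsto \alpha^{-1},$$

which is available for groupoids but not for general categories.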

But more to the point, the real reason that spans of groupoids work so well is that in $Bim_{gpd}$, we have a Beck-Chevalley condition which allows us to compose spans – this Beck-Chevalley condition doesn’t hold in the bicategory of categories and bimodules! For example: when we want to compose a nice functor $g_! f^*$ given by a span

$$H \stackrel{f}{\leftarrow} G \stackrel{g}{\to} J$$

with another nice functor $k_! h^*$ given by a span

$$J \stackrel{h}{\leftarrow} K \stackrel{k}{\to} L,$$

we have to work $k_! h^* g_! f^*$ into the form $(-)_! (-)^*$, which we do by passing through the weak pullback

$$\begin{array}{ccc} P & \stackrel{q}{\to} & K \\ {\scriptstyle p}\downarrow & \swarrow & \downarrow{\scriptstyle h} \\ G & \stackrel{g}{\to} & J \end{array}$$

and applying the Beck-Chevalley condition, which gives the invertibility of the canonical map $q_! p^* \to h^* g_!$. This means we can write

$$k_! h^* g_! f^* \cong (k q)_! (f p)^*$$

and all is well. But we need groupoids in order for Beck-Chevalley to work.
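For concreteness, here is the standard description of the weak (iso-comma) pullback $P$ used above:

$$\mathrm{Ob}(P) = \{(x, y, \theta) : \theta\colon g(x) \to h(y) \text{ in } J\}, \qquad \mathrm{Hom}\big((x,y,\theta),(x',y',\theta')\big) = \{(a, b) : \theta' \circ g(a) = h(b) \circ \theta\},$$

where $x$, $a$ live in $G$ and $y$, $b$ live in $K$. The projections $p$ and $q$ forget everything but $x$ and $y$, and $\theta$ itself supplies the 2-cell filling the square.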

Over on the cartesian bicategories side, the Beck-Chevalley condition is closely tied to the fact that groupoids and bimodules form what is called a discrete cartesian bicategory (which is the special case of Beck-Chevalley where the weak pullback involved is the coassociativity square for the diagonal map $\delta: G \to G \times G$). If you write out the Beck-Chevalley condition in that special case, it says that $G$ is a Frobenius monoid in $Bim_{gpd}$, where the comultiplication is the bimodule $\delta_!: G \to G \times G$ and the multiplication is the bimodule $\delta^*: G \times G \to G$.
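Just to record the Frobenius law being invoked here, in its usual form (with $\otimes = \times$, multiplication $\delta^*$ and comultiplication $\delta_!$): it is an isomorphism of 1-cells $G \times G \to G \times G$ in $Bim_{gpd}$,

$$(\delta^* \times 1_G)\circ(1_G \times \delta_!) \;\cong\; \delta_! \circ \delta^* \;\cong\; (1_G \times \delta^*)\circ(\delta_! \times 1_G).$$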

I commented earlier on Beck-Chevalley for bimodules, but I didn’t see clearly how it ties in with Groupoidification until after seeing Denis-Charles’s comment. Thanks, Denis-Charles!

Posted by: Todd Trimble on January 13, 2008 7:15 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 19)

We might still develop a good theory for small categories by taking into account Todd’s remarks about the Beck-Chevalley property. It means that we cannot take arbitrary spans, but something like `Beck-Chevalley spans’. For this we need some definitions. Further details can be found for example here.

Let’s call 0-equivalences the functors $f: C \to D$ which induce a bijection at the level of connected components $\pi_0(C) \simeq \pi_0(D)$.

A functor $f: C \to D$ is 0-aspherical if for any object $d$ of $D$, the comma category $C/d$ is 0-connected. Dually, a functor $f: C \to D$ is 0-coaspherical if for any object $d$ of $D$, the comma category $d \backslash C$ is 0-connected.
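A quick example to calibrate these definitions: any functor $f: C \to D$ admitting a right adjoint $r$ is 0-aspherical, since the adjunction identifies the comma category $C/d$ with a slice having a terminal object:

$$C/d \;\cong\; C/r(d), \qquad (c,\; f(c) \to d) \;\leftrightarrow\; (c,\; c \to r(d)),$$

and a category with a terminal object is certainly 0-connected. Dually, any functor admitting a left adjoint is 0-coaspherical.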

We can now define what a 0-proper functor is. It is a functor $f: C \to D$ such that for any object $d$ of $D$, the functor $C_d \to C/d$ is 0-coaspherical (here $C_d$ denotes the fiber of $f$ over $d$). There is also the dual notion of a 0-smooth functor: $f: C \to D$ is 0-smooth if for any object $d$ of $D$, the functor $C_d \to d \backslash C$ is 0-aspherical. Note that a functor $f: C \to D$ is 0-proper (resp. 0-aspherical) if and only if $f^{op}: C^{op} \to D^{op}$ is 0-smooth (resp. 0-coaspherical). A baby version of Quillen’s Theorem A is that any 0-(co)aspherical map is a 0-equivalence.

Grothendieck’s observation (in Pursuing Stacks) is that a functor $f: C \to D$ (resp. $v: D' \to D$) is 0-proper (resp. 0-smooth) if and only if any pullback of shape

$$\begin{array}{ccc} C' & \overset{f'}{\to} & D' \\ {\scriptstyle u}\downarrow & & \downarrow{\scriptstyle v} \\ C & \overset{f}{\to} & D \end{array}$$

has the Beck-Chevalley property, and this remains the case after any further base change.
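In the same form Todd used above, the Beck-Chevalley property for such a square asks that the canonical comparison map, the mate of the identity $u^* f^* = f'^* v^*$,

$$f'_!\, u^* \;\longrightarrow\; v^*\, f_!$$

be invertible, where $(-)_!$ denotes left Kan extension along a functor and $(-)^*$ restriction along it.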

A prototype example of a 0-proper (resp. 0-smooth) functor is a Grothendieck op-fibration (resp. fibration). 0-proper (resp. 0-smooth) functors are stable under composition and base change. Hence we have a good notion of `Beck-Chevalley spans’, defined as the spans $C \overset{p}{\leftarrow} E \overset{q}{\to} D$ such that $p$ (resp. $q$) is a 0-proper (resp. 0-smooth) functor.
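Here is a quick sketch (my own unwinding of the definitions) of why an op-fibration $f: C \to D$ is 0-proper: for each $d$, the inclusion of the fiber $C_d \to C/d$ has a left adjoint,

$$(c,\; \alpha: f(c) \to d) \;\mapsto\; \alpha_!\, c,$$

given by pushing $c$ forward along an opcartesian lift of $\alpha$. A functor admitting a left adjoint is 0-coaspherical (each comma category $x \backslash C_d$ then has an initial object), so $f$ is 0-proper.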

To make this work well enough, we have to be careful about the way we realize functors $F: C^{op} \times D \to Set$ as Beck-Chevalley spans. In fact, we may consider functors $F: C^{op} \times D \to Cat$. For each object $c$ of $C$, we can consider the Grothendieck construction $E'(c)$ associated to the functor $d \mapsto F(c, d)$, which gives rise to an opfibration $E'(c) \to D$. These opfibrations can be seen as a natural transformation from the functor $E'$ to the constant functor with value $D$. We can thus take the Grothendieck construction $E$ of $E'$, and we get a functor $E \to C \times D$ (the Grothendieck construction of the constant functor with value $D$ on $C$ is just $C \times D$). It is an easy exercise to check that the projection $E \to C$ is a fibration and the projection $E \to D$ an opfibration. Hence we get a 2-functor

$$Fun(C^{op} \times D, Cat) \to \{\text{Beck-Chevalley spans}\}$$

which is a good starting point.

Last remark: we can define notions of smooth or proper functors depending on different notions of `weak equivalences of categories’. If you replace 0-equivalences by $\infty$-equivalences (i.e. functors whose nerve is a weak equivalence of simplicial sets), then this story of Beck-Chevalley spans will still be meaningful in the setting of higher categories (you can consider $SSet$ instead of $Set$, and derived Kan extensions).

These notions of smooth or proper functor have also been considered by Joyal for quasi-categories (details can be found in Lurie’s book on higher topoi).

Posted by: Denis-Charles Cisinski on January 14, 2008 3:00 PM | Permalink | Reply to this
Read the post Geometric Representation Theory (Lecture 20)
Weblog: The n-Category Café
Excerpt: At last: the Fundamental Theorem of Hecke Operators!
Tracked: January 10, 2008 7:40 AM
