

February 9, 2007

Day on RCFTs

Posted by Urs Schreiber

The short note

Brian J. Day
Association schemes and classical RCFT’s as “graphic” Fourier transformations
math.CT/0702208

is apparently a message from a mathematician to physicists saying: “Notice this standard fact. It might be useful for what you are doing.” As Brian Day writes in the abstract:

Mathematically, this is a fairly straightforward observation, but may be worth pursuing from a physical viewpoint.

The main statement seems to concern a construction of something “realising” a fusion ring, i.e. a ring of equivalence classes in a semisimple braided tensor category. These fusion rings are a big deal in (rational) conformal field theory.

Unfortunately, the note is entirely internal to enriched category theory. I curse myself for still not having taken the time to learn how to make ends and coends meet in enriched category theory.

On the train back home I will look at

G. M. Kelly
Basic Concepts in Enriched Category Theory

Hopefully after that I’ll be able to decode Brian Day’s message.

(Is he maybe just talking about reconstructing a tensor category from its fusion ring?)

Posted at February 9, 2007 3:28 PM UTC

TrackBack URL for this Entry:   http://golem.ph.utexas.edu/cgi-bin/MT-3.0/dxy-tb.fcgi/1156

48 Comments & 1 Trackback

Re: Day on RCFTs

Here are a few comments that might help you a little when it comes to ends, coends and enriched categories.

First, you’ll notice that Kelly’s book discusses ‘ends’ shortly before discussing ‘indexed limits’. This is because the two concepts, while distinct, are interchangeable, in the sense that each can be obtained as a special case of the other.

Kelly tackles ends first, and then indexed limits. You’ll see on page 37 that he says: “… is the limit precisely when … is an end.” Here he is reducing the theory of indexed limits to the theory of ends.

Personally I think it’s easier to go the other way: first learn about indexed limits, then, if necessary, learn about ends.

In fact, I prefer to dualize, and think about indexed colimits and coends instead of indexed limits and ends.

The reason is this: the theory of indexed colimits is a straightforward categorification of the theory of integrals.

If you’re trying to categorify inner products, path integrals and other integrals, this is good to know!

Here’s how it goes, in a nutshell.

If we have a finite set $X$ we can think of a measure on $X$ as just a map

$$w: X \to \mathbb{C}$$

assigning to each point its ‘weight’. Suppose $L$ is any vector space. We can integrate any vector-valued function

$$f: X \to L$$

with respect to the measure $w$ as follows:

$$\int_X f \, d w = \sum_{x \in X} w(x) f(x)$$
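Since this weighted-sum formula is the anchor for everything that follows, here is a throwaway numerical sketch of it; the set, weights, and vectors below are made up purely for illustration.

```python
# A toy model of the formula: integrate a vector-valued function f over a
# finite set X against a weight w, i.e. compute sum_{x in X} w(x) * f(x).
# All data here is illustrative, not from the text.

X = ["a", "b", "c"]                          # a finite set
w = {"a": 2.0, "b": 0.5, "c": 1.0}           # the 'measure': a weight per point
f = {"a": [1, 0], "b": [0, 4], "c": [3, 3]}  # a function X -> L with L = R^2

def integrate(X, w, f):
    """Return the weighted sum sum_x w(x) f(x) as a vector in L."""
    dim = len(next(iter(f.values())))
    total = [0.0] * dim
    for x in X:
        for i in range(dim):
            total[i] += w[x] * f[x][i]
    return total

print(integrate(X, w, f))  # -> [5.0, 5.0]
```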

Kelly’s theory of indexed colimits — usually called the theory of weighted colimits — categorifies this idea!

The replacement for our set $X$ is a category $X$.

The replacement for $\mathbb{C}$ is a cocomplete symmetric monoidal category $V$. The colimits of $V$ are the categorified version of ‘addition’, while the tensor product in $V$ is the categorified version of ‘multiplication’.

The replacement for our vector space $L$ is a category $L$ that’s enriched over $V$ and tensored over $V$. ‘Enriched over $V$’ says that given objects $x, y \in L$, we have $\hom(x,y) \in V$. ‘Tensored over $V$’ means that given $x \in L$ and $v \in V$, we can define $v \otimes x \in L$. There is of course more to the definitions of these concepts than I’m giving here, but this is enough to get the idea.

The replacement for the vector-valued function we’re integrating is a functor

$$f: X \to L$$

The replacement for our measure is what Kelly calls an ‘indexing type’, but most people prefer to call a ‘weight’: it’s a functor

$$w : X^{\mathrm{op}} \to V$$

Note the all-important ‘op’!

The weighted colimit, when it exists, is an object

$$\int_X f \, d w \in L$$

In the simple case where $X$ is a discrete category (basically just a set), we have

$$\int_X f \, d w = \sum_{x \in X} w(x) \otimes f(x)$$

See? It’s just like what we had before!

Another simple case is when we use the most boring possible weight

$$w : X \to V$$

namely the one with

$$w(x) = 1$$

for every object $x \in X$, and $w(f) = 1_1$ for every morphism $f$ in $X$.

If we use this weight, our weighted colimit reduces to an ordinary colimit:

$$\int f \, d w = \mathrm{colim}_{x \in X} f(x)$$
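In the discrete case this reduction is a one-line computation, using that the boring weight assigns to each point the unit object $I$ and that $I \otimes f(x) \cong f(x)$ in any tensored category (a small check, not spelled out in the text):

```latex
\int_X f \, d w
  = \sum_{x \in X} w(x) \otimes f(x)
  = \sum_{x \in X} I \otimes f(x)
  \cong \sum_{x \in X} f(x)
  = \mathrm{colim}_{x \in X} f(x)
```

Here the last step uses that the colimit over a discrete category is a coproduct.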

So, the theory of weighted colimits generalizes the theory of colimits much as the theory of integrals generalizes the theory of sums.

But, in the simple examples I just gave, we don’t see the need for the ‘op’ in the weight

$$w : X^{\mathrm{op}} \to V$$

If you already know and love coends, you can guess why this ‘op’ shows up. From our weight and our functor

$$f: X \to L$$

we can form a functor

$$X^{\mathrm{op}} \times X \stackrel{w \times f}{\to} V \times L \stackrel{\otimes}{\to} L$$

where the second arrow uses the fact that $L$ is tensored over $V$.

This kind of functor

$$X^{\mathrm{op}} \times X \to L$$

is precisely the sort of thing we take a coend of, getting an object of $L$!

But, if you don’t already know and love coends, you now see why we need them: precisely to do weighted colimits.

By the way, you might complain my use of the term ‘integral’ is overblown, since the only integrals I’m really considering are weighted sums. That’s a valid complaint. So, you might prefer it if I changed my main message to this:

The theory of weighted colimits is a straightforward categorification of the theory of weighted sums.

Posted by: John Baez on February 12, 2007 10:11 PM | Permalink | Reply to this

Re: Day on RCFTs

The theory of weighted colimits is a straightforward categorification of the theory of weighted sums.

Thanks a lot for these hints!

I started reading Kelly’s book last weekend, and roughly understand the definition and the usage of ends and coends now.

What I still need to learn is what these $p(a,b,c)$ are that Day uses a lot.

I recall that a while ago David Corfield pointed me to some references on that. But I cannot find these links anymore!

By the way: do you understand what Day is talking about on his three pages to the extent that you could say in one simple sentence what it is that he considers in the last paragraph?!

Posted by: urs on February 13, 2007 12:58 PM | Permalink | Reply to this

Re: Day on RCFTs

Did you mean the links here?

I find Google best for finding my way around this blog.

Posted by: David Corfield on February 13, 2007 1:22 PM | Permalink | Reply to this

Re: Day on RCFTs

Did you mean the links here?

Yes! Thanks for taking care of an internet illiterate…

Posted by: urs on February 13, 2007 1:30 PM | Permalink | Reply to this

Re: Day on RCFTs

Urs wrote:

By the way: do you understand what Day is talking about on his three pages to the extent that you could say in one simple sentence what it is that he considers in the last paragraph?!

No, alas. Luckily now the ‘pros’ are helping you out.

Posted by: John Baez on February 24, 2007 12:05 AM | Permalink | Reply to this

Re: Day on RCFTs

Luckily now the ‘pros’ are helping you out.

Yes, that was very helpful. A little probiotic category theory a day is said to be potentially beneficial.

How many people are there in the world who might be able to appreciate the connection Day makes between rational conformal field theory and probicategories?

But we are working on it…

Currently I am struggling with the following:

We know that 2-dimensional conformal field theories are extended QFTs with values in $\mathcal{C}$-vector spaces ($\mathcal{C}$-module categories), where $\mathcal{C}$ is a modular tensor category – in particular semisimple.

By Ostrik’s theorem all these $\mathcal{C}$-vector spaces are equivalent to categories of modules of algebras internal to $\mathcal{C}$.

The question is: what happens to this statement when the CFT is no longer rational and $\mathcal{C}$ is no longer semisimple?

A natural guess would be that then we need categories of modules of algebroids internal to $\mathcal{C}$.

If $\mathcal{C}$ is closed (the only case I know of where this is the case is $\mathcal{C} = \mathrm{Vect}$), then there is a nice way to say this:

A $\mathcal{C}$-vector space $N$ being equivalent to a category of internal modules means that it has a basis $A$: $$N \simeq [A,\mathcal{C}]\,.$$

Here I am thinking of the algebra (or algebroid) $A$ internal to $\mathcal{C}$ as a $\mathcal{C}$-enriched category, and denote by $[\cdot,\cdot]$ the category of $\mathcal{C}$-functors, which is hence nothing but the category of $A$-modules.

This is a concept of “basis” for 2-vector spaces slightly more general than what is sometimes used. It categorifies the fact that an ordinary vector space has a basis iff there is a set $S$ such that $$V \simeq [S,K]\,,$$ where $K$ is the ground field and $[S,K]$ denotes the space of $K$-valued functions on $S$.
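At the decategorified level the statement $V \simeq [S,K]$ is easy to check in coordinates; here is a throwaway sketch (the set and function are my own illustration):

```python
# Decategorified sanity check (illustrative, not from the comment): the
# K-valued functions on a finite set S form a vector space [S, K], and the
# characteristic functions delta_s form a basis, so dim [S, K] = |S|.

S = ["x", "y", "z"]

def delta(s):
    """Basis vector of [S, K] labelled by s: the characteristic function of s."""
    return {t: (1.0 if t == s else 0.0) for t in S}

# Any f : S -> K is the linear combination  sum_s f(s) * delta_s :
f = {"x": 2.0, "y": -1.0, "z": 0.5}
rebuilt = {t: sum(f[s] * delta(s)[t] for s in S) for t in S}
print(rebuilt == f)  # -> True
```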

But how do I say that nicely for cases where $\mathcal{C}$ is not closed? Certainly I can then still consider algebroids internal to $\mathcal{C}$ (where I really mean $\mathcal{C}$-enriched categories) and modules for them. But now I am no longer sure in which sense to write the category of all such modules for a given $A$ in the form $[A,\mathrm{something}]$.

Posted by: urs on February 24, 2007 8:23 PM | Permalink | Reply to this

Re: Day on RCFTs

John wrote:

Another simple case is when we use the most boring possible weight $w: X \to V$, namely the one with $w(x) = 1$ for every object $x \in X$, and $w(f) = 1_1$ for every morphism $f$ in $X$.

I can’t resist explaining why weighted (co)limits are not only good, but, if you want to do enriched category theory, necessary.

In Kelly’s general development, John’s category ‘$X$’ is not a mere category, but a $V$-enriched category. Let me write $I$ instead of $1$ for the unit object of $V$. Now look at the end of the definition of ‘the most boring possible weight’: it says

$w(f) = 1_I$ for every morphism $f$ in $X$.

Well, $X$ is an enriched category, so it doesn’t make sense to talk about the morphisms $f$ of $X$.

Instead, to define an enriched functor $w: X \to V$ which does $x \mapsto I$ on objects, we have to define for each $x, y \in X$ a map $$w_{x, y}: X(x, y) \to V(I, I)$$ in $V$. Here $V$ is meant to be closed, so that it can be regarded as a category enriched in itself. Explicitly, $V(p, q)$ is the ‘function space’ $[p, q]$ for any $p, q \in V$, and in particular $V(I, I) = I$. So we have to define for each $x, y \in X$ a map $$w_{x, y}: X(x, y) \to I$$ in $V$.

This is all very well if the unit object $I$ of $V$ happens to be terminal - as in the familiar case of ordinary category theory, $(V, \otimes, I) = (\mathrm{Set}, \times, 1)$.

But for a general $V$, there’s simply no way to do this! Hence there’s no ‘most boring possible weight’, hence there are no ordinary (co)limits. There are only weighted (co)limits.

Posted by: Tom Leinster on February 13, 2007 1:48 PM | Permalink | Reply to this

Re: Day on RCFTs

I ended the above entry with the question:

(Is Brian Day maybe just talking about reconstructing a tensor category from its fusion ring?)

On second reading, I think the answer is yes.

A promonoidal category $A$ is supposed to be like an ordinary monoidal category, but $V$-enriched and with functors replaced by profunctors.

So in particular, the tensor product functor $\otimes : A \times A \to A$ is taken to be a $V$-profunctor, i.e. a $V$-functor $$p : A^{\mathrm{op}} \times A^{\mathrm{op}} \times A \to V$$ (where “$\times$” is the $V$-tensor product).

Just as monoidal categories are bicategories with a single object, promonoidal categories are probicategories with a single object.

Hence, more generally, we have lots of objects $x, y, z, \cdots$ and $V$-$\mathrm{Hom}$-objects $A(x,y)$ etc. and horizontal composition is a $V$-profunctor $$\circ : A(x,y) \times A(y,z) \to A(x,z)$$ hence an ordinary $V$-functor $$p : A(x,y)^{\mathrm{op}} \times A(y,z)^{\mathrm{op}} \times A(x,z) \to V\,.$$

Next, when considering morphisms (2-functors) $F : A \to B$ between promonoidal categories, we want composition to be respected, by enforcing something like $$F(a) \circ F(b) \simeq F(c)$$ for $c$ the composite of $a$ and $b$.

In the probiotic language – er, I mean: in the probicategory language, this amounts to demanding $$\int^c p(a,b,c) \otimes F(c) \simeq F(a) \circ F(b)\,,$$ where the integral sign denotes a coend.

As far as I am beginning to understand the point of Brian Day’s paper here, he emphasizes that this composition law has a very important and familiar incarnation in the context of Moore-Seiberg data of RCFTs, namely in the context of semisimple braided tensor categories.

There, we consider abelian monoidal (=”tensor”) categories with a finite number of isomorphism classes of simple objects $U_i$, such that $$U_i \otimes U_j \simeq \oplus_k U_k^{\oplus N_{i j}^k}\,.$$

Brian Day notes that if we set $V = \mathrm{Vect}_K$ for $K$ our ground field and moreover set $$p(i,j,k) = K^{\oplus N_{i j}^k}$$ then this is a special case of the above probiotic composition law $$U_i \otimes U_j \simeq \oplus_k p(i,j,k) \otimes U_k\,.$$

As far as I understand. (So somehow the coend reduces to a mere coproduct here.)
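To see the decategorified shadow of this concretely: on isomorphism classes, tensoring objects just multiplies and adds the fusion multiplicities $N_{ij}^k$. A toy computation with the standard Fibonacci fusion rules (a textbook example, not taken from Day’s note):

```python
# Decategorified sketch of U_i ⊗ U_j ≅ ⊕_k p(i,j,k) ⊗ U_k: on iso classes,
# fusion is bilinear in the multiplicity vectors.  Fibonacci rules below:
# simple objects 1 (index 0) and tau (index 1), with tau ⊗ tau = 1 ⊕ tau.

# N[i][j][k] = multiplicity of U_k inside U_i ⊗ U_j
N = [
    [[1, 0], [0, 1]],   # 1 ⊗ 1 = 1,       1 ⊗ tau = tau
    [[0, 1], [1, 1]],   # tau ⊗ 1 = tau,   tau ⊗ tau = 1 ⊕ tau
]

def fuse(a, b):
    """Fuse two objects given as multiplicity vectors over the simple objects."""
    n = len(N)
    return [sum(a[i] * b[j] * N[i][j][k] for i in range(n) for j in range(n))
            for k in range(n)]

tau = [0, 1]
print(fuse(tau, tau))             # -> [1, 1], i.e. tau ⊗ tau = 1 ⊕ tau
print(fuse(fuse(tau, tau), tau))  # -> [1, 2], i.e. tau^3 = 1 ⊕ 2·tau
```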

Moreover, Day indicates how we can, this way, understand every such semisimple tensor category for given fusion data $N_{i j}^k$ as the image of a suitable probifunctor from something like the universal such category to a suitable codomain.

Roughly, at least. Somehow Kan extensions play a role, too, and we don’t just take the domain to be $A$, but $[A,\mathrm{Vect}]$.

I need to better understand this. But so much for now.

Posted by: urs on February 13, 2007 2:50 PM | Permalink | Reply to this

Re: Day on RCFTs

Grrr… I just spent a long time composing a comment, hit preview, and was told there was an internal server error and lost all this work. Time to follow Jeff Morton’s advice, and write out posts in advance on a word editor!

In the meantime, there have been a number of posts and it looks like Urs has got things mostly or maybe completely sorted out by now, but I’ll throw out a few comments anyway, and hope some of them stick.

Most (probably all) of the coends which appear in Brian Day’s note are really tensor products of modules – that really is a prototypical application of coends, and gives a great way of understanding how weighted colimits work generally.

Let’s first consider modules over a ring $R$: let $A$ be a right $R$-module and $B$ a left $R$-module. Their tensor product is a coequalizer of a pair of maps

$$A \otimes R \otimes B \stackrel{\to}{\to} A \otimes B$$

where one arrow is $A$ tensor the action on $B$ and the other is the action on $A$ tensor $B$. Now think of $R$ as a one-object category enriched in $\mathrm{Ab}$ (I’ll call its object ‘1’). A left module amounts to an enriched functor $B: R \to \mathrm{Ab}$, and a right module to an enriched functor $A: R^{\mathrm{op}} \to \mathrm{Ab}$. In other words, these functors are tantamount to actions of the hom-object $R = \hom(1, 1)$, and the tensor product can be expressed as the coequalizer of a diagram written

$$A(1) \otimes \hom(1, 1) \otimes B(1) \stackrel{\to}{\to} A(1) \otimes B(1).$$

Much more generally now, take any category $X$ enriched in any symmetric monoidal closed category $V$ (assume $V$ complete and cocomplete), and let $F: X \to V$ be a covariant action of $X$, meaning we have maps

$$F(x) \otimes \hom(x, y) \to F(y)$$

expressing functoriality of $F$ (I should say ‘enriched’, but I get tired of saying that – please insert as appropriate!). Similarly, let $G: X^{\mathrm{op}} \to V$ be a contravariant action. Then the tensor product $F \otimes_X G$ is the coequalizer of the evident pair of maps built out of the actions:

$$\sum_{x, y \in \mathrm{Ob}(X)} F(x) \otimes \hom(x, y) \otimes G(y) \stackrel{\to}{\to} \sum_x F(x) \otimes G(x)$$

and this is also called the colimit of $F$ weighted by $G$, also denoted

$$\int^x F(x) \otimes G(x).$$

Similarly, if one has two modules $F, G: X \to V$, one can form their module-hom $\hom_X(G, F)$ as an object of $V$, as an equalizer of a pair of maps again built from the actions, again directly generalizing the case of modules over a ring. This is a limit of $F$ weighted by $G$.

More generally still, given a $V$-category $C$, a $V$-functor $F: X \to C$, and a weight $G: X^{\mathrm{op}} \to V$, one can define the notion of colimit of $F$ weighted by $G$, as an object of $C$. The trick is to define it universally, by stipulating that all contravariant hom-functors $\hom(-, c)$ carry this weighted colimit to the appropriate weighted limit in $V$. [It is not generally correct to try to define it as a colimit of a diagram in the underlying (ordinary) category of $C$ – instead, bring it back to hom-base (“home base”) $V$, where you are safe.]

Day’s note is written in the well-known style of hardcore Australian category theory: powerful medicine, but not always gentle on the system. But here’s a way of understanding this probicategory and $p(a,b,c)$ business: a probicategory enriched in $V$ is really a bicategory internal to the bicategory of $V$-categories and $V$-profunctors between them. (A $V$-profunctor from $C$ to $D$ is just a bimodule from $C$ to $D$, i.e., a covariant-$C$ contravariant-$D$ functor $C \otimes D^{\mathrm{op}} \to V$. These are composed by taking tensor products of bimodules, as in the discussion above.) Thus, a probicategory consists of a collection of objects $x$, $y$, $z$, …, and hom-categories $\hom(x, y)$ enriched in $V$, and compositions

$$\hom(x, y) \otimes \hom(y, z) \to \hom(x, z)$$

which are bimodules, and so forth and so on. If you unravel all this in nuts-and-bolts terms, you find yourself looking at all these coend thingies that Day writes down.

Posted by: Todd Trimble on February 13, 2007 9:00 PM | Permalink | Reply to this

Re: Day on RCFTs

Then the tensor product $F \otimes_X G$ is the coequalizer of the evident pair of maps built out of the actions $$\sum_{x,y \in \mathrm{Ob}(X)} F(x) \otimes \mathrm{hom}(x,y) \otimes G(y) \stackrel{\to}{\to} \sum_x F(x) \otimes G(x)$$ and this is also called the colimit of $F$ weighted by $G$, also denoted $$\int^x F(x) \otimes G(x)$$

Thanks for saying that! Now that you said it, I recall having read and thought about this once (in the context of Costello’s work), but apparently mostly forgotten it.

So I am entitled to think of a weighted colimit as a many-object generalization of the tensor product of bimodules. Great.

Where do coends come in from this point of view? Is this already equivalently a coend?

Posted by: urs on February 13, 2007 9:38 PM | Permalink | Reply to this

Re: Day on RCFTs

To check if I got this right, I’ll try to spell out one of my favorite examples in a pro way.

That favorite example of mine is the inclusion $$\mathrm{Bim} \hookrightarrow {}_{\mathrm{Vect}}\mathrm{Mod}$$ of the 2-category of algebras, bimodules and bimodule homomorphisms in the 2-category (I say 2-category for bicategory) of $\mathrm{Vect}$-module categories.

This sends an algebra $A$ to the category $\mathrm{Mod}_A$ of its modules (on which $\mathrm{Vect}$ acts by tensoring each module from the left by a vector space), sends every $A$-$B$ bimodule $N$ to the $\mathrm{Vect}$-linear functor $$\array{ \mathrm{Mod}_A &\to& \mathrm{Mod}_B \\ M &\mapsto& M \otimes_A N }$$ and similarly sends every bimodule homomorphism to a natural transformation of these.

Okay. Now I should reformulate this in $\mathrm{Vect}$-enriched language.

So I take $$V = \mathrm{Vect}$$ and regard all my algebras $A$ as 1-object $V$-enriched categories $\Sigma(A)$. To be explicit: $$\mathrm{Hom}_{\Sigma(A)}(\bullet,\bullet) = A \in \mathrm{Obj}(\mathrm{Vect})\,.$$

In particular, there is the algebra $\mathbb{C}$, which is the ground field (I take the ground field to be the complex numbers, just for definiteness so that I feel at home) regarded as an algebra over itself. This is the tensor unit in $V = \mathrm{Vect}$, so I should write $$I := \mathbb{C}\,.$$ The corresponding 1-object $V$-enriched category is, in the above notation, $$\Sigma(\mathbb{C})\,.$$

A module $S$ for an algebra $A$ is now the same thing as a $V$-functor $$S : \Sigma(A) \to V\,,$$ namely a vector space $$S(\bullet) \in \mathrm{Obj}(V)$$ and a $V$-morphism (a linear map) $$A \to \mathrm{End}_V(S(\bullet)) = S(\bullet)^* \otimes S(\bullet)$$ that is compatible with composition.

This I can regard as a profunctor $$S : \Sigma(\mathbb{C}) \to \Sigma(A)\,.$$

Analogously, an $A$-$B$ bimodule $N$ is now a $V$-functor $$N : \Sigma(A)^{\mathrm{op}} \otimes \Sigma(B) \to V\,,$$ which I may think of as a profunctor $$N : \Sigma(A) \to \Sigma(B)\,.$$

In particular, an ordinary $\mathbb{C}$-module, i.e. an ordinary vector space $W$, is now nothing but a profunctor $$W : \Sigma(\mathbb{C}) \to \Sigma(\mathbb{C})\,.$$

So if I write $V\text{-}\mathrm{Cat}$ for the 2-category whose objects are $V$-categories, and whose 1-morphisms are profunctors between these, then we can regard the category of vector spaces as the endomorphism category of $\Sigma(\mathbb{C})$ in $V\text{-}\mathrm{Cat}$: $$\mathrm{Vect}_\mathbb{C} \simeq \mathrm{Hom}_{\mathrm{Vect}_\mathbb{C}\text{-}\mathrm{Cat}}(\Sigma(\mathbb{C}),\Sigma(\mathbb{C}))\,.$$

This is nice, because it makes manifest my statement from above, that the category $\mathrm{Mod}_A$ of (right) $A$-modules for some algebra $A$ is itself a category with a (left) $\mathrm{Vect}$-action. This action is here seen to be nothing but the composition in $\mathrm{Vect}\text{-}\mathrm{Cat}$:

The action $$\mathrm{Vect} \times \mathrm{Mod}_A \to \mathrm{Mod}_A$$ corresponds to $$\mathrm{Hom}_{\mathrm{Vect}\text{-}\mathrm{Cat}}(\Sigma(\mathbb{C}),\Sigma(\mathbb{C})) \times \mathrm{Hom}_{\mathrm{Vect}\text{-}\mathrm{Cat}}(\Sigma(\mathbb{C}),\Sigma(A)) \stackrel{\circ}{\to} \mathrm{Hom}_{\mathrm{Vect}\text{-}\mathrm{Cat}}(\Sigma(\mathbb{C}),\Sigma(A))\,.$$

Unless I mixed things up, this must be painfully tautologous for the experts.

On the other hand, I feel like spelling out this tautology in even more detail: the composition \circ in the above is composition of profunctors by weighted colimits/coends.

So, given a vector space $W$, regarded as a profunctor $$W : \Sigma(\mathbb{C}) \to \Sigma(\mathbb{C})$$ which is an ordinary functor $$\Sigma(\mathbb{C})^{\mathrm{op}} \otimes \Sigma(\mathbb{C}) \to \mathrm{Vect}$$ and given a right $A$-module $S$, which is a profunctor $$S : \Sigma(\mathbb{C}) \to \Sigma(A)$$ hence an ordinary functor $$\Sigma(\mathbb{C})^{\mathrm{op}} \otimes \Sigma(A) \to \mathrm{Vect}$$ their composite profunctor $$\Sigma(\mathbb{C}) \stackrel{W}{\to} \Sigma(\mathbb{C}) \stackrel{S}{\to} \Sigma(A)$$ is that which is given by the functor $$\int^c W(-,c) \otimes S(c,-) : \Sigma(\mathbb{C})^{\mathrm{op}} \otimes \Sigma(A) \to V\,.$$

And hopefully, unless I am misunderstanding something, that is, the symbol $\int^c \cdots$ here is the weighted colimit the way Todd explained above #, which should hence reduce, in the present case, to the ordinary tensor product of bimodules.

Which, in the above case, is just the tensor product of \mathbb{C}-bimodules hence just the tensor product of vector spaces.

So, in general, given an $A$-$B$ bimodule $N$, regarded as a profunctor $$N : \Sigma(A) \to \Sigma(B)$$ hence an ordinary $V = \mathrm{Vect}$-functor $$N : \Sigma(A)^{\mathrm{op}} \otimes \Sigma(B) \to V$$ and a $B$-$C$ bimodule $N'$, their composition as profunctors $$\Sigma(A) \stackrel{N}{\to} \Sigma(B) \stackrel{N'}{\to} \Sigma(C)$$ should be given by the functor $$\int^c N(-,c) \otimes N'(c,-) : \Sigma(A)^{\mathrm{op}} \otimes \Sigma(C) \to V$$ which in turn encodes the $A$-$C$ bimodule that is nothing but $N \otimes_B N'$.
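This composition has a simple decategorified shadow that may serve as a sanity check (a toy example of my own, not from the thread): when the $\mathrm{Vect}$-enriched categories involved are *discrete* (just finite sets), a profunctor is a matrix of vector spaces, the coend collapses to a direct sum, and composition becomes, on dimensions, plain matrix multiplication.

```python
# Toy shadow of profunctor composition: for discrete Vect-enriched categories,
# a profunctor is a matrix of vector spaces; recording only dimensions, the
# coend ∫^c M(-,c) ⊗ M'(c,-) is ordinary matrix multiplication.

def compose(M1, M2):
    """Compose dimension matrices: (M1 ∘ M2)[a][c] = sum_b M1[a][b] * M2[b][c]."""
    rows, mid, cols = len(M1), len(M2), len(M2[0])
    return [[sum(M1[a][b] * M2[b][c] for b in range(mid)) for c in range(cols)]
            for a in range(rows)]

# dimensions of the spaces M(a, b) for profunctors between 2-, 3-, 2-element sets
M1 = [[1, 0, 2],
      [0, 1, 1]]
M2 = [[1, 1],
      [0, 2],
      [1, 0]]
print(compose(M1, M2))  # -> [[3, 1], [1, 2]]
```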

Now, in these terms the statement that tensoring the category of $A$-modules from the right with a bimodule is an operation that respects the left $\mathrm{Vect}$-action on $\mathrm{Mod}_A$ is nothing but associativity of the composition in $V\text{-}\mathrm{Cat}$: $$\mathrm{Hom}_{V\text{-}\mathrm{Cat}}(\Sigma(\mathbb{C}),\Sigma(\mathbb{C})) \times \mathrm{Hom}_{V\text{-}\mathrm{Cat}}(\Sigma(\mathbb{C}),\Sigma(A)) \times \mathrm{Hom}_{V\text{-}\mathrm{Cat}}(\Sigma(A),\Sigma(B)) \stackrel{\circ \circ}{\to} \mathrm{Hom}_{V\text{-}\mathrm{Cat}}(\Sigma(\mathbb{C}),\Sigma(B))\,.$$

All right. So the upshot of all these trivialities now is that without further effort this generalizes to the case where I allow all my algebras to be algebroids, namely general $\mathrm{Vect}$-enriched categories.

(Hm, now I seem to recall that when Simon Willerton visited a while ago, he was telling me exactly this. Took a while to sink in…)

Posted by: urs on February 14, 2007 7:11 AM | Permalink | Reply to this

Re: Day on RCFTs

I think you’ve got it. The calculus of enriched bimodules is kind of fun, and you can do a lot with it. By the way, a paper that I personally found very illuminating when I was first grappling with this stuff is Lawvere’s paper on metric spaces and generalized logic; he has some material on enriched bimodules. I particularly liked the section on Cauchy completion, and seeing how this relates to Karoubi envelopes and Morita equivalence in your favorite example.

I’d just add one comment, perhaps not a big deal but sufficiently bothersome to me to point it out. What I call a profunctor $\Sigma(A) \to \Sigma(B)$ is where $A$ acts covariantly (traditionally, on the left) and $B$ acts contravariantly. I think you have it the other way; you call a profunctor $\Sigma(A) \to \Sigma(B)$ a module of the form $\Sigma(A)^{\mathrm{op}} \otimes \Sigma(B) \to V$.

One reason I prefer my way is that a functor $f: A \to B$ induces a profunctor from $A$ to $B$: just compose $f$ with the Yoneda embedding of $B$ to get

$$y_B \circ f: A \to V^{B^{\mathrm{op}}}.$$

(Here we hear a faint echo of the fact that profunctors are secretly about the free cocompletion $V^{B^{\mathrm{op}}}$, as touched upon in my other comment; what I’d really like to say is that the bicategory of categories and profunctors is the Kleisli bicategory of the free cocompletion monad, but alas, there are size issues there.) BTW, profunctors which come from functors in this way are left adjoints in the bicategory of profunctors, and if we stick to Cauchy complete categories, all such left adjoints arise in this way. But I’m rambling on some pet topics here; I’ll stop.

Posted by: Todd Trimble on February 14, 2007 2:26 PM | Permalink | Reply to this

Re: Day on RCFTs

Thanks for all these comments. I very much appreciate it!

Maybe one clarification:

when I write $$\int^c W(-,c) \otimes S(c,-)$$ above, do I have to address this as a weighted colimit or as a coend? Or is it always both? Or just under some extra conditions?

I’d just add one comment, perhaps not a big deal but sufficiently bothersome to me to point it out. What I call a profunctor $\Sigma(A) \to \Sigma(B)$ is where $A$ acts covariantly (traditionally, on the left) and $B$ acts contravariantly. I think you have it the other way;

Yes, I am aware of that. I think here I was trying to follow Day’s conventions. But I agree that from another point of view the other convention is more natural.

the bicategory of categories and profunctors is the Kleisli bicategory of the free cocompletion monad

[…]

BTW, profunctors which come from functors in this way are left adjoints in the bicategory of profunctors

Hm, okay, one day I’ll need to understand this better than I currently do. This is closely related to some things I learned (or maybe just partially learned) from Aaron Lauda, math.CT/0502550, which in turn triggered my understanding of “FFRS formalism as locally trivialized 2-transport” #.

The point is that if we have a 2-functor $\mathrm{tra}$ to bimodules, and want to identify it locally with a “simpler” such 2-functor $\mathrm{tra}_0$, “without losing information”, we need at least a special ambidextrous adjunction # between them.

But if one writes this down in components, it means that one also needs a (special ambidextrous) adjunction on the objects in the image of $\mathrm{tra}$.

Sorry. Now it is me who is rambling once again. :-)

Posted by: urs on February 14, 2007 3:26 PM | Permalink | Reply to this

Re: Day on RCFTs

when I write $\int^c S(c) \otimes W(c)$ above, do I have to address this as a weighted colimit or as a coend? Or is it always both? Or just under some extra conditions?

I’d call this (presentation of a weighted colimit) an example of an enriched coend.

An ordinary coend $\int^x F(x, x)$ of a functor $F: C^{\mathrm{op}} \times C \to D$, i.e., a coend as a type of colimit in the world of ordinary categories, is a colimit of a diagram pasted together from pieces that look like

$$F(x, x) \stackrel{F(f, 1)}{\leftarrow} F(y, x) \stackrel{F(1, f)}{\rightarrow} F(y, y)$$

and another way you could put this is that the coend is a coequalizer of a pair of maps of the form

$$\sum_{f: x \to y} F(y, x) \stackrel{\to}{\to} \sum_x F(x, x).$$

Or, put still another way, that the coend is a coequalizer of the form

$$\sum_{x, y} F(y, x) \times \hom(x, y) \stackrel{\to}{\to} \sum_x F(x, x).$$
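Both descriptions are coequalizers of a pair of maps, and at the level of sets a coequalizer is easy to compute directly: the coequalizer of $f, g: X \rightrightarrows Y$ is $Y$ modulo the equivalence relation generated by $f(x) \sim g(x)$. A minimal sketch (example data made up for illustration):

```python
# Set-level coequalizer of f, g : X ⇉ Y, the building block in the formulas
# above: quotient Y by the equivalence relation generated by f(x) ~ g(x),
# computed with a small union-find.

def coequalizer(X, Y, f, g):
    parent = {y: y for y in Y}

    def find(y):
        while parent[y] != y:
            parent[y] = parent[parent[y]]  # path compression
            y = parent[y]
        return y

    for x in X:                            # glue f(x) to g(x) for every x
        a, b = find(f(x)), find(g(x))
        parent[a] = b

    classes = {}
    for y in Y:
        classes.setdefault(find(y), []).append(y)
    return sorted(sorted(c) for c in classes.values())

X = [0, 1]
Y = ["p", "q", "r", "s"]
f = lambda x: ["p", "q"][x]
g = lambda x: ["q", "r"][x]
print(coequalizer(X, Y, f, g))  # -> [['p', 'q', 'r'], ['s']]
```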

Now the last formulation suggests the right way to define an enriched coend in $V$ (replacing $\times$ by $\otimes$ of course). The one before that doesn’t always give the correct enriched coend; as you can see, it refers to elements $f$ of the underlying set of the hom-object. I think it gives the enriched coend if the underlying set functor $\hom(I, -): V \to \mathrm{Set}$ is faithful or something; I’m not exactly sure what the precise statement should be, but hopefully it’s in that ballpark. I’m guessing that a typical example where it doesn’t give the enriched coend is where $V =$ chain complexes, where $\hom(I, -)$ is far from being faithful.

Also, this coequalizer description is for enriched coends of functors $F: C^{\mathrm{op}} \otimes C \to V$ valued in $V$. I think one can run into trouble trying to extend that description to enriched coends of functors $F: C^{\mathrm{op}} \otimes C \to D$ in a general $V$-category $D$. Instead of defining it as a coequalizer in the underlying category of $D$, one has to define it by saying that all the contravariant homs carry it to an appropriate end in $V$.

With all those caveats in place, it’s true that weighted colimits in a $V$-category $D$ can be presented in terms of tensors and coends, as suggested by the formula

$$\int^c F(c) \otimes W(c)$$

in cases where $D$ has tensors and coends. (We say $D$ admits tensors if each $\hom(d, -): D \to V$ has an enriched left adjoint.) I regard that as akin to saying that all ordinary colimits in $D$ can be presented using coproducts and coequalizers, if $D$ has them – but that’s just one presentation. (At the same time, a coend is a very canonical sort of colimit, being a colimit of a functor $F: C^{\mathrm{op}} \otimes C \to D$ wrt the weight $\hom: C^{\mathrm{op}} \otimes C \to V$.)

Posted by: Todd Trimble on February 14, 2007 6:18 PM | Permalink | Reply to this

Notation

Todd (hi Todd!) wrote:

I’d just add one comment, perhaps not a big deal but sufficiently bothersome to me to point it out.

There’s nothing like a debate about terminology! And I want to join in.

Todd - and I think the majority of people - says that a profunctor $\mathbf{A} \Rightarrow \mathbf{B}$ is a functor

(1) \mathbf{B}^{op} \times \mathbf{A} \to \mathbf{Set}.

Here $\mathbf{A}$ and $\mathbf{B}$ are ordinary, $\mathbf{Set}$-enriched, categories, for the sake of simplicity. What I’ve written $\Rightarrow$ is usually written –+–>, but for some reason I can’t type that in math mode here.

I - and I think a sizeable minority of people - say that a profunctor $\mathbf{A} \Rightarrow \mathbf{B}$ is a functor

(2) \mathbf{A}^{op} \times \mathbf{B} \to \mathbf{Set}.

Of course, everyone wants to have the contravariant part first, so that $Hom_\mathbf{A}$ is a profunctor $\mathbf{A} \Rightarrow \mathbf{A}$ in both conventions.

I understand Todd’s reason for preferring (1). Here are two reasons why I prefer (2).

First reason Given a functor $M$ as in (2), it’s good to write an element $m \in M(a, b)$ (where $a \in \mathbf{A}$, $b \in \mathbf{B}$) like this: $a \stackrel{m}{\Rightarrow} b$. That way, every diagram

a_n \stackrel{f_n}{\rightarrow} \cdots \stackrel{f_1}{\rightarrow} a_0 \stackrel{m}{\Rightarrow} b_0 \stackrel{g_1}{\rightarrow} \cdots \stackrel{g_m}{\rightarrow} b_m

has an unambiguous composite

a_n \stackrel{g_m \cdots g_1 m f_1 \cdots f_n}{\Rightarrow} b_m.

(Here the $f_i$ are maps in $\mathbf{A}$ and the $g_j$ are maps in $\mathbf{B}$.) This can be very useful: e.g. you can talk about commutativity of diagrams such as this:

\begin{aligned} a &\Rightarrow &b \\ \downarrow& &\downarrow \\ a' &\Rightarrow &b' \end{aligned}

I’ll call this the module-element notation.

That much is uncontroversial. But the point is that if you use convention (2), a profunctor $\mathbf{A} \Rightarrow \mathbf{B}$ consists of elements $a \Rightarrow b$ (which is good), whereas if you use convention (1), a profunctor $\mathbf{A} \Rightarrow \mathbf{B}$ consists of elements $b \Rightarrow a$ (which is bad!).

There’s a further level that makes (2) even more desirable, as follows. For any $\mathbf{B}, \mathbf{A} \in \mathbf{Cat}$, there’s a category of left $\mathbf{B}$-, right $\mathbf{A}$-modules. Moreover, any diagram

\mathbf{A'} \rightarrow \mathbf{A} \Rightarrow \mathbf{B} \rightarrow \mathbf{B'}

has an unambiguous (up to iso) composite $\mathbf{A'} \Rightarrow \mathbf{B'}$. In short, we have a ‘2-profunctor’ or ‘2-module’ $\mathbf{Mod}$ from $\mathbf{Cat}$ to $\mathbf{Cat}$. So when we write

\mathbf{A} \stackrel{M}{\Rightarrow} \mathbf{B},

we could either be using the module-element notation (with respect to the 2-module $\mathbf{Mod}$) or be using one of conventions (1) or (2). So if we’re not to get hopelessly confused, it’s essential that the module-elements go in the same direction as the module itself - in other words, that we use convention (2).

Second reason Everyone agrees that a functor $M$ as in (2) can be regarded as a left $\mathbf{B}$-, right $\mathbf{A}$-module: ${}_\mathbf{B} M_\mathbf{A}$, for short. Then we have a tensor product,

{}_\mathbf{C} N_\mathbf{B} \otimes {}_\mathbf{B} M_\mathbf{A} = {}_\mathbf{C} (N \otimes_\mathbf{B} M)_\mathbf{A}.

Using convention (2), this says that $\mathbf{A} \stackrel{M}{\Rightarrow} \mathbf{B} \stackrel{N}{\Rightarrow} \mathbf{C}$ composes to $\mathbf{A} \stackrel{N\otimes M}{\Rightarrow} \mathbf{C}$ - in other words, $\otimes$ is composition in the bicategory of modules (= profunctors). But if we use convention (1), it says that $\mathbf{C} \stackrel{N}{\Rightarrow} \mathbf{B} \stackrel{M}{\Rightarrow} \mathbf{A}$ composes to $\mathbf{C} \stackrel{M\otimes N}{\Rightarrow} \mathbf{A}$ - in other words, $\otimes$ is back-to-front composition in the bicategory of modules. This makes things confusing!
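To make the tensor product ${}_\mathbf{C} N_\mathbf{B} \otimes {}_\mathbf{B} M_\mathbf{A}$ concrete in the simplest case, take $\mathbf{B}$ to be a one-object category, i.e. a monoid: a module is then a set with an action, and the composite is the classical tensor product $X \otimes_B Y = (X \times Y)/\big((x \cdot b,\, y) \sim (x,\, b \cdot y)\big)$. A small Python sketch (my illustration, not Tom’s; the names are made up):

```python
def tensor_over_monoid(X, Y, B, ract, lact):
    """X a right B-set (via ract(x, b) = x.b), Y a left B-set (via
    lact(b, y) = b.y).  Returns X (x)_B Y: the set X x Y modulo the
    identifications (x.b, y) ~ (x, b.y), computed by union-find."""
    parent = {(x, y): (x, y) for x in X for y in Y}
    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]
            p = parent[p]
        return p
    for x in X:
        for y in Y:
            for b in B:
                parent[find((ract(x, b), y))] = find((x, lact(b, y)))
    return {find(p) for p in list(parent)}

# Regular left and right actions of B = Z/3 on itself:
# B (x)_B B is B again, via b (x) b' |-> b + b'.
Z3 = range(3)
add = lambda a, b: (a + b) % 3
print(len(tensor_over_monoid(Z3, Z3, Z3, add, add)))   # prints 3
```

The three equivalence classes are exactly the fibres of the multiplication map $(b, b') \mapsto b + b'$, illustrating $B \otimes_B B \cong B$.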

Posted by: Tom Leinster on February 14, 2007 5:02 PM | Permalink | Reply to this

Oops

Obviously my fingers refused to type something so immoral. In the last display, $M \otimes N$ should be $N \otimes M$.

Posted by: Tom Leinster on February 14, 2007 5:17 PM | Permalink | Reply to this

Re: Oops

Heh, you know your Second Reason just caused a bunch of us to become happier with Convention (1). ^_^

Posted by: Toby Bartels on February 14, 2007 11:35 PM | Permalink | Reply to this

Re: Oops

Go on…

Posted by: Tom Leinster on February 14, 2007 11:52 PM | Permalink | Reply to this

Re: Oops

I just mean that for a bunch of people it seems obvious that the composite of M followed by N should be M ⊗ N. Since few math papers are written in Hebrew or Arabic, you know.

Posted by: Toby Bartels on February 15, 2007 12:09 AM | Permalink | Reply to this

Re: Notation

(Hey there, Tom!) Yeah, I know, those are points that I have an internal debate about every time I deal with these things. It usually takes me a good half hour at least to wrestle myself into a position where the me that wants to define bimodules as the mainstream does [and as I think I want] comes out on top. But the bottom half still feels uncomfortable and gives a muffled cry, for exactly the reasons you give!

I think it’s akin to the age-old debate about how to write composition of morphisms. Maybe I’m just getting old, but I think that bottom line for me is that I’ve just gotten so used to doing things one way that I would just wind up confusing myself if I didn’t come to a firm decision to stick to my guns and the heck with it. I mean, the arguments for composing as if reading English instead of Hebrew make sense to me, but I’m sorry: writing the counit of an adjunction FGF \dashv G as GF1GF \to 1 still looks perverse to me, dammit! (Jim Dolan and I must have wasted a few man-hours apiece unraveling each other’s notation when we talk – we almost always disagree it seems. Imagine the fun we have with ambidextrous adjunctions!)

Tom, in the immortal words of Talking Heads: Stop Making Sense!

Posted by: Todd Trimble on February 14, 2007 7:01 PM | Permalink | Reply to this

Re: Notation

Todd wrote:

Maybe I’m just getting old, but I think that bottom line for me is that I’ve just gotten so used to doing things one way that I would just wind up confusing myself if I didn’t come to a firm decision to stick to my guns and the heck with it.

I’m definitely there too. Uncertainty is exhausting!

Posted by: Tom Leinster on February 14, 2007 11:54 PM | Permalink | Reply to this

Re: Notation

Tom wrote (in sympathy with Todd):

Uncertainty is exhausting!

As a proponent of the antiLeibniz convention, let me just say for the record that I agree! Nobody should ever use the notation ‘fg’ (or ‘f ⋅ g’) for composition (in either order) without stating their conventions explicitly. And nobody should ever use ‘f ° g’ for composition in the antiLeibniz order (unless for some reason the unambiguous notation ‘fg’ is unavailable, in which case they should still state their conventions explicitly).

Posted by: Toby Bartels on February 15, 2007 12:13 AM | Permalink | Reply to this

server

Grrr… I just spent a long time composing a comment, hit preview, and was told there was an internal server error and lost all this work.

Just for your information, in case this happens again:

I just encountered one of these “internal server arrows” myself again. Just hitting my browser’s back-button helped: no information was lost (I had a backup anyway, but still) and hitting preview or submit a second time then had the desired result.

Posted by: urs on February 14, 2007 9:22 AM | Permalink | Reply to this

Re: Day on RCFTs

I am trying to understand the role played by these “Kan extensions” that Day mentions.

It seems that the issue is the following:

For an ordinary bifunctor $F : A \to B$, respect for composition is expressed by

\array{ A(x,y) \times A(y,z) &\stackrel{\circ_A}{\to}& A(x,z) \\ F \times F \downarrow \;\; &\simeq& \;\; \downarrow F \\ B(F x,F y) \times B(F y,F z) &\stackrel{\circ_B}{\to}& B(F x,F z) } \,.

But when composition of Hom-categories $A(x,y)$ is just a profunctor, i.e. a functor

p : A(x,y)^\mathrm{op} \times A(y,z)^\mathrm{op} \times A(x,z) \to V \,,

then the top horizontal morphism does not land in $A(x,z)$, but in $[A(x,z),V]$:

A(x,y)^\mathrm{op} \times A(y,z)^\mathrm{op} \to [A(x,z),V] \,.

The ${}^\mathrm{op}$ is not an issue, but the different right-hand side is.

So for morphisms $F$ between probicategories there must be some $\hat F$ such that

\array{ A(x,y)^\mathrm{op} \times A(y,z)^\mathrm{op} &\stackrel{p}{\to}& [A(x,z),V] \\ F \times F \downarrow \;\; &\simeq& \;\; \downarrow \hat F \\ B(F x,F y) \times B(F y,F z) &\stackrel{\circ_B}{\to}& B(F x,F z) } \,.

Under the condition on $p$ that I already mentioned above, these $\hat F$ are given by the coend

\hat F(f) = \int^c f(c)\otimes F(c) \,,

apparently, where $f \in [A(x,z),V]$. With these $\hat F$ we then get a diagram of the form we are familiar with for ordinary bifunctors, but now with all Hom-categories replaced by their categories of maps into $V$:

\array{ [A(x,y),V] \times [A(y,z),V] &\stackrel{\circ}{\to}& [A(x,z),V] \\ \hat F \times \hat F \downarrow \;\; &\simeq& \;\; \downarrow \hat F \\ B(F x,F y) \times B(F y,F z) &\stackrel{\circ_B}{\to}& B(F x,F z) } \,.

If I understand correctly, the “$\circ$” in the top horizontal row here is now that important convolution product

f \circ g = \int^{a b} f(a)\otimes g(b)\otimes p(a,b,-) \,.

So, using the definition of $\hat F$ from above, the source 1-morphism of the above diagram would be

\hat F(f \circ g) = \int^c \int^{a b} f(a)\otimes g(b) \otimes p(a,b,c) \otimes F(c) \,,

while

\hat F(f) \times \hat F(g)

would be given by

\int^{a b} f(a)\otimes g(b) \otimes (F(a) \circ F(b)) \,.
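A concrete special case of this convolution product (my gloss, not Urs’s): take $V = Set$, let the objects form a finite monoid $M$, and let $p(a,b,c)$ be a one-point set when $a b = c$ and empty otherwise. Then $(f \circ g)(c) \cong \sum_{a b = c} f(a) \times g(b)$, which at the level of cardinalities is just multiplication in the monoid algebra. A Python sketch (function name made up):

```python
def day_convolution(f, g, M, mult):
    """Cardinality-level Day convolution over a finite monoid M:
    (f . g)(c) = sum over a*b = c of |f(a)| * |g(b)|,
    i.e. the coend int^{a,b} f(a) x g(b) x p(a,b,c) with
    p(a,b,c) a point when a*b = c and empty otherwise."""
    return {c: sum(f[a] * g[b] for a in M for b in M if mult(a, b) == c)
            for c in M}

Z3 = [0, 1, 2]                         # the monoid: Z/3 under addition
add = lambda a, b: (a + b) % 3
f = {0: 1, 1: 2, 2: 0}                 # |f(a)| for each object a
g = {0: 0, 1: 1, 2: 3}
print(day_convolution(f, g, Z3, add))  # {0: 6, 1: 1, 2: 5}
```

This is exactly how one multiplies elements $\sum_a f(a)\, a$ and $\sum_b g(b)\, b$ of the monoid algebra, which is where the link to fusion rings enters.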

So, as far as I can see, this is the reason why we are interested in finding “Kan extensions” of morphisms $F : A^\mathrm{op} \to B$ to morphisms $\hat F : [A,V] \to B$. (This is probably not the right way to say this.)

In prop. 1 (p. 2) of Day & Street, Kan Extensions along Promonoidal Functors, something similar appears.

I am still far from having a good understanding of what is going on here. But I thought I’d share this, in the hope of provoking somebody to help out.

Posted by: urs on February 13, 2007 9:01 PM | Permalink | Reply to this

Re: Day on RCFTs

I almost never feel comfortable with this “pro stuff” without first translating into more ordinary terms with the help of a well known but fundamental fact, which I’ll state first for ordinary categories, and then for enriched categories. Let $C$ be a small category.

$Set^{C^{op}}$ is the free cocompletion of $C$, in the sense that given a functor $F: C \to D$ to a cocomplete category $D$, there exists (up to canonical isomorphism) a unique colimit-preserving functor, indeed a left adjoint, $\hat{F}: Set^{C^{op}} \to D$, which restricts to $F$ along the Yoneda embedding (again up to isomorphism). $\hat{F}$ is the Kan extension of $F$ along the Yoneda embedding.

(For $V$ a complete cocomplete symmetric monoidal category and $C$ a small $V$-category) $V^{C^{op}}$ is the free $V$-cocompletion of $C$ (with a similar interpretation as above, but speaking in terms of enriched left adjoints etc.).

Baez and Dolan enunciate this principle and some of its offshoots in HDA0. One offshoot which is directly relevant to what’s going on in this paper is that a probicategory $A$ enriched in $V$ is virtually the same thing as a certain biclosed bicategory $\hat{A}$ enriched in $V$, in which the hom-categories of $\hat{A}$ are the free $V$-cocompletions of the hom-categories of $A$. Here, the bimodule maps

C_{xyz}: \hom(x, y) \otimes \hom(y, z) \to \hom(x, z)

for probicategory composition are tantamount to functors

\hat{C}_{xyz}: V^{\hom(x, y)^{op}} \otimes V^{\hom(y, z)^{op}} \to V^{\hom(x, z)^{op}}

which are separately $V$-cocontinuous in their arguments, i.e., such that $\hat{C}_{xyz}(F, -)$ and $\hat{C}_{xyz}(-, G)$ are left adjoints for each fixed $F$ and $G$ (so that precomposition with $F$ and postcomposition with $G$ have right adjoints – that’s what “biclosed” means here).

So I think all Brian is doing is giving in nuts-and-bolts “pro terms” what it would mean to have a homomorphism of bicategories $\hat{A} \to B$, locally valued in $V$-cocomplete $V$-categories, $V$-cocontinuous functors, and $V$-transformations.

Posted by: Todd Trimble on February 14, 2007 2:15 AM | Permalink | Reply to this

Re: Day on RCFTs

I am afraid I still need to think a little about coends (or bombard you with questions – thanks a lot for all your input, John, Todd and Tom!) (or read more about it) before I feel really comfortable with them. But not today. It’s late already.

But here is something related:

today I was asked to demonstrate that composition of bimodules is, under suitable conditions, weakly associative.

To my great embarrassment, I had to realize that for general bimodules (as opposed to, say, left-induced bimodules or Frobenius bimodules) I had never worked that out myself, nor seen it worked out anywhere, but always taken the standard hint in the literature for granted, which would just assert that there is a canonical associator for bimodule composition.

Now, it may get even more embarrassing, because possibly this is quite easy to show and all I could come up with is the construction given in

these brief notes

I assume that a good exercise to become really familiar with coends, would be to redo this exercise for the general case of profunctors.

Posted by: urs on February 14, 2007 9:58 PM | Permalink | Reply to this

bimodules

Together with Jens Fjelstad I am in the process of polishing my notes on FFRS formalism.

We need a good working basis for how “left-induced” bimodules internal to a suitable monoidal category work concretely.

This must all be elementary stuff to those who know it well, but we hapless physicists had to work it out for ourselves.

If any expert would care to have a critical look at our file

Bimodules

I’d be very grateful.

These notes define the concept of a bimodule and describe in detail how to construct the tensor product together with its actions as well as the associator. Then the nature of the full sub-bicategory of “left-induced” bimodules is worked out.

I’d also be interested in knowing to what extent others (non-experts, in particular) would find such a compilation of basic facts about bimodules useful.

Posted by: urs on March 1, 2007 1:59 PM | Permalink | Reply to this

Re: bimodules

On page 2, you probably want the tensor product to preserve any colimits (that you may want) in each argument.

In the case where $C$ has all colimits, one customarily refers to $C$’s being “monoidally cocomplete”. A common situation where this occurs is where $C$ is monoidal biclosed. I’m not sure what your intended audience is – maybe they feel very comfortable with bicategories. But I’m not sure that defining a monoid as a monad in the suspension bicategory of $C$ will be all that helpful for a general reader who doesn’t already know what a monoid in a monoidal category is (i.e., it sounds sort of question-begging).

On page 4, where you define the $A$ and $C$ actions on the bimodule $N_1 \otimes_B N_2$, you invoke a universal property of the dotted arrows, namely that the bottom left and right hand corners are the coequalizers of the diagrams above them. Here you are implicitly using the fact that $A \otimes -$ and $- \otimes C$ preserve coequalizers (cf. comment above on monoidal cocompleteness).

The wording of the argument below that diagram confused me at first. I think what you mean in the penultimate paragraph is that $(\otimes)l_A$ coequalizes the pair $r_B$ and $l_B$, not that it is a coequalizer of the pair. Also, the last paragraph doesn’t quite follow, because what you need is that the left column is a coequalizer (cf. previous paragraph).

Aha – you do say something like that on page 7: “if we assume that the diagram characterizing the universal property is not affected by tensoring it with $N_3$” (are you saying here “assume that tensoring with $N_3$ preserves coequalizers”?).

Okay, now at this point I admit that I didn’t go through in fine detail your proof that we get a bicategory of bimodules (the associator and so on) – I remember glancing at it in your previous draft, and thinking I could give an easier and more general proof. Well, “general” needs to be qualified. Whew! I’ll need to explain.

Ok, before I get into the generalization I have in mind, let me suggest one easier path for the reader willing to accept a useful standard lemma found in the literature. Now I don’t have Johnstone’s Topos Theory with me [and actually I would probably have trouble procuring it anytime soon], but I think on page 1 he has something like the following, of which only part (a) is needed for your purposes. It’s basically a 3 x 3 lemma familiar from homological algebra; it’s in fact just a straight diagram chase which can be left as an exercise.

Consider the 3 x 3 diagram

\array{ A &\stackrel{\to}{\to} & A' & \to & A'' \\ \downarrow \downarrow && \downarrow \downarrow && \downarrow \downarrow \\ B &\stackrel{\to}{\to} & B' & \to & B'' \\ \downarrow && \downarrow && \downarrow \\ C &\stackrel{\to}{\to} & C' & \to & C'' }

where all three rows are right exact (are coequalizer diagrams) and the first two columns are right exact, and the diagram “commutes” [in the obvious parallel sense when applied to squares with parallel pairs of arrows]. Then (a) the third column is right exact. (b) If all three rows and columns are reflexive coequalizers, then the obvious diagonal diagram is also a reflexive coequalizer.

The particular application of part (a) to your situation is, I hope, clear: here the top left corner is occupied by something like $M \otimes B \otimes N \otimes C \otimes P$, where $B$ and $C$ are monoids, $M$ is an $A$-$B$ module, $N$ a $B$-$C$ bimodule, and $P$ is a $C$-$D$ bimodule. The bottom right corner is occupied by, say, $(M \otimes_B N) \otimes_C P$, with the rows giving $(\otimes_B)$ constructions tensored by something, and the columns somethings tensored with $(\otimes_C)$-constructions. I guess you get the gist. Anyway, the point is that the 3 x 3 lemma is completely standard. Hopefully my memory about Johnstone is right. [By the way, part (b) does apply to your situation, i.e., the coequalizers here are reflexive.]

Ok, now for the “generalization”. I’m putting that in quotes because actually I want to start with something symmetric (or braided) monoidally cocomplete, not just monoidally cocomplete – a hom-base for an enriched category, $V$. Maybe you don’t want the symmetry or braiding as part of your assumed structure. But if you don’t mind that, then I can give a much greater generalization of bimodules, and some nice conceptual proofs.

Well, let me put my cards on the table: I want $V$ to be a “cosmos”, i.e., a complete and cocomplete symmetric monoidal closed category. Then instead of considering monoids = one-object categories enriched in $V$, consider (small) categories $A$ enriched in $V$. Having the symmetry iso in $V$ also allows one to construct the enriched opposite category $A^{op}$. Then $V$-enriched functors $F: A \to V$ give left $A$-modules (or right modules, depending on your favorite convention), and $V$-functors $F: A^{op} \to V$ give right modules.

We again have a bicategory of “bimodules” where the objects are $V$-enriched categories $A$, the 1-cells $A \to B$ are left $A$, right $B$ bimodules $A \otimes B^{op} \to V$, 2-cells are $V$-natural transformations between such functors, and bimodule composition is by the tensor product (weighted colimit) construction discussed at the café. Just for memory’s sake, if $M: A \to B$ and $N: B \to C$ are 1-cells, then their composite is given by the formula which says that $MN(a, c)$ is the coequalizer of the obvious diagram

\sum_{b, b'} M(a, b) \# B(b,b') \# N(b',c) \stackrel{\to}{\to} \sum_b M(a,b) \# N(b,c)

where $\#$ is used for tensor in $V$ and $B(-, -)$ is the $B$-hom. (I may have switched some conventions on you; if so, sorry; I’m writing fast.)
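In the degenerate case where $V = Set$ and the middle category $B$ is discrete (identity arrows only), the two legs of the parallel pair coincide, the coequalizer is just the coproduct $\sum_b M(a,b) \# N(b,c)$, and composition of bimodules at the level of cardinalities becomes matrix multiplication. A toy Python sketch (my illustration, not Todd’s; names made up):

```python
def compose_bimodules_discrete(M, N, A, B, C):
    """Composite MN(a,c) for Set-valued bimodules when the middle
    category B is discrete: the coequalizer relation is trivial, so
    on cardinalities this is just the matrix product over b in B."""
    return {(a, c): sum(M[(a, b)] * N[(b, c)] for b in B)
            for a in A for c in C}

A, B, C = [0], [0, 1], [0]
M = {(0, 0): 2, (0, 1): 3}           # cardinalities |M(a,b)|
N = {(0, 0): 1, (1, 0): 4}           # cardinalities |N(b,c)|
print(compose_bimodules_discrete(M, N, A, B, C))   # {(0, 0): 14}
```

The nontrivial hom-objects $B(b, b')$ are exactly what the coequalizer quotients out in the general formula above.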

Ok, now we want the associator. The general insight I want to use is that under the cosmos assumptions we actually have a biclosed bicategory, i.e., postcomposing

- \otimes_B N: \mathrm{Bim}(A, B) \to \mathrm{Bim}(A, C)

and precomposing

M \otimes_A -: \mathrm{Bim}(B, C) \to \mathrm{Bim}(A, C)

both have right adjoints, namely

- \Leftarrow_B N: \mathrm{Bim}(A, C) \to \mathrm{Bim}(A, B)

M \Rightarrow_A -: \mathrm{Bim}(A, C) \to \mathrm{Bim}(B, C)

where for example $(Q \Leftarrow_B N)(a, b)$ is an enriched end, i.e., the equalizer of an obvious pair of maps

\prod_c Q(a,c)^{N(b,c)} \stackrel{\to}{\to} \prod_{c,c'} Q(a,c')^{N(b,c) \# C(c,c')}.

If you accept all that, then here is a very compressed construction of the associator. Since precomposition and postcomposition are enriched left adjoints, they preserve all weighted colimits which happen to exist. Now $M \otimes_B N$ is the colimit of $N$ wrt the weight $M$. To say that a functor $F$ preserves such a colimit means that the weighted colimit $M \otimes_B F N$ exists in the target, and that it is precisely given by $F(M \otimes_B N)$. Now apply this to $F N = N \otimes_C P$, i.e. $F = - \otimes_C P$, which, as we just observed, preserves weighted colimits, and voilà! you get the canonical isomorphism $(M \otimes_B N) \otimes_C P \simeq M \otimes_B (N \otimes_C P)$ !!!!!

This is an awfully slick proof, which may take some time to digest. But I think these sorts of ideas are really relevant to what is happening in the background of Day’s paper.

I think old papers of Day (his thesis?) must contain the proof I just outlined. They’re standard in Australia, but I’d have to do some snooping around [and I’m not close to a library]. I’ll try to think about where to find this stuff in the literature.

Posted by: Todd Trimble on March 1, 2007 6:02 PM | Permalink | Reply to this

Re: bimodules

Thanks a lot, indeed! Great.

I’ll look at what you wrote closely, but just a quick question concerning your slick proof:

while I have posted this to the entry on Day’s work, this is not quite the context that I am aiming at. We need to be able to handle bimodules in particular in “modular tensor” categories. These are not, in general, closed!

That would seem to make your very short proof not apply, right?

Can we do anything to improve the situation when C is not closed?

Posted by: urs on March 1, 2007 6:05 PM | Permalink | Reply to this

Re: bimodules

We need to be able to handle bimodules in particular in “modular tensor” categories. These are not, in general, closed!

I see. Wish I knew about these modular tensor categories.

That would seem to make your very short proof not apply, right?

Right, but

Can we do anything to improve the situation when $C$ is not closed?

A couple of things, maybe. I think the general idea is that your construction of the associator is at bottom a special case of one of those general exchange-of-colimit theorems. Namely, the coequalizer construction, seen as left adjoint to the diagonal functor

\mathrm{diag} : C \to C^{\bullet \stackrel{\to}{\to} \bullet}

into the parallel-pair functor category, preserves colimits and in particular coequalizers. When you analyze that statement in detail, you get part (a) of the $3 \times 3$ lemma I mentioned, of which your big diagrams are a special case. I think that’s all that’s going on…

I’m guessing that the modular tensor category admits coequalizers in general, so that the conceptual proof of the $3 \times 3$ lemma which I just sketched would apply in your situation.

But is it true in your context that the tensor $A \otimes B$ preserves coequalizers in each separate argument $A$ and $B$?

Anyway, in an ultimate nutshell, I think $(M \otimes_B N) \otimes_C P \simeq M \otimes_B (N \otimes_C P)$ can be seen as an instance of interchange of the colimits $M \otimes_B -$ and $- \otimes_C P$, even under your more parsimonious assumptions.
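That interchange of colimits can be watched in miniature with plain sets: quotienting a finite set by two families of identifications, one after the other or all at once, gives the same answer, and this is exactly the shape of the $3 \times 3$ argument. A small Python check (my illustration, not Todd’s; all names made up):

```python
def quotient(S, pairs):
    """Quotient a finite set S by the equivalence generated by `pairs`,
    returning a dict sending each element to a canonical representative."""
    parent = {s: s for s in S}
    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]
            s = parent[s]
        return s
    for a, b in pairs:
        parent[find(a)] = find(b)
    return {s: find(s) for s in S}

def classes(q):
    return len(set(q.values()))

S = range(8)
R1 = [(0, 1), (2, 3), (4, 5)]        # first batch of identifications (rows)
R2 = [(1, 2), (5, 6)]                # second batch (columns)

q1 = quotient(S, R1)                                  # quotient by R1 first
q12 = quotient(set(q1.values()), [(q1[a], q1[b]) for a, b in R2])
q_both = quotient(S, R1 + R2)                         # all at once
print(classes(q12), classes(q_both))                  # 3 3
```

Here R1 plays the role of the $\otimes_B$ coequalizer and R2 the $\otimes_C$ one; the iterated and simultaneous quotients agree, which is the set-level content of the associator.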

Hope this helps,

Todd

Posted by: Todd Trimble on March 1, 2007 7:17 PM | Permalink | Reply to this

Re: bimodules

But is it true in your context that the tensor $A \otimes B$ preserves coequalizers in each separate argument $A$ and $B$?

Hm, good point, I don’t really know in general!

But we will actually really be interested only in the full sub-bicategory of $\mathrm{Bim}(C)$ whose objects are “special” Frobenius algebra objects. For those, the bimodule tensor product can conveniently be defined through a certain projector (as described here), which can be seen to make the bimodule composition strictly associative.

So, there are various applications with various assumptions that we have in mind here, which makes me want to not restrict to closed $C$ in general.

Closed $C$ will actually be a highly interesting special case (applying to the topological string), and I do want to eventually do this the way you indicated, using profunctors!

By the way, even without assuming $C$ to be closed, I imagine I can still consider algebroids internal to $C$ and bimodules for them and do the entire construction in that more general case. Shouldn’t that work? I haven’t thought it through in detail yet.

Hope this helps,

Certainly! Thanks a lot!

Posted by: urs on March 1, 2007 7:34 PM | Permalink | Reply to this

Re: bimodules

Okay, I have now corrected that sentence which said “coequalizer” where I meant “coequalizes”.

Thanks for the term “monoidally cocomplete”. I knew I had to assume this and that, but was a little uncertain about exactly what; hence the disclaimer in the introduction, and the concrete assumption appearing only in the proof where it is used.

Now I have changed that and stated that we can do everything we want to do when $C$ is monoidal and monoidally cocomplete. Hope that’s right now (well, I just need the coequalizers that actually appear to exist and be preserved by the tensor product, of course).

Concerning the 3 by 3 diagram, I’ll think about that. Thanks a lot for mentioning this! It’s the kind of thing I was expecting existed, only me not being aware of it.

[…] a straight diagram chase which can be left as an exercise

Okay, I’ll think about it. Am a little worried about “diagram chases” in arbitrary $C$, though. (?)

Posted by: urs on March 1, 2007 6:19 PM | Permalink | Reply to this

Re: bimodules

So, it seems to me that the $3 \times 3$-diagram you mention is (at this point just as a diagram, without any assumption on what commutes and what we want to derive) exactly the diagram that I draw for the construction of the associator, right?

Maybe what I should just do is make the construction of the associator that I give into a statement that lives in the appendix, pointing out that it really is a standard situation?

Ah, by the way, here is a potentially very stupid question, which is however important:

For handling that diagram, I do assume existence of a coimage. Is that an extra assumption on $C$ when I already assume monoidal cocompleteness? How should I best deal with that?

Posted by: urs on March 1, 2007 7:02 PM | Permalink | Reply to this

Re: bimodules

First, let me say that my long and yucky-looking post was actually an email to Urs, which I told Urs he could post if he wanted, which partly explains why it is long and yucky looking (being written in a kind of pidgin TeX in ASCII format).

Don’t worry about “diagram chasing” in this generality – the 3 x 3 lemma holds generally; the method I (and others) informally call “diagram-chasing” is in this case pure category theory which is really just chasing coequalizers (and see the other email I sent you for a more conceptual way of thinking about it – maybe I should just post that email myself).

I’ll have to check back to see where you think you need a coimage. Off hand I can’t see what goes wrong with the proof I outlined, which doesn’t mention coimage.
Let me get back to you though…

Posted by: Todd Trimble on March 1, 2007 7:26 PM | Permalink | Reply to this

Re: bimodules

[…] long and yucky looking post […]

I have now fixed that $3 \times 3$-diagram. Should look better now.

Posted by: urs on March 1, 2007 7:44 PM | Permalink | Reply to this

Re: bimodules

Yeah, I really don’t think the coimage is needed – unless I’m confused, all you need is this 3 x 3 lemma business, of which I sketched a left-adjoint-y proof here.

Posted by: Todd Trimble on March 1, 2007 8:09 PM | Permalink | Reply to this

Re: bimodules

There is something I don’t understand yet: you say that

a) assuming the $3 \times 3$-thing commutes,

and

b) assuming everything except one column is right exact

then it follows (in particular) that the right column is right exact, too.

But don’t I need the converse? I’d think I need:

a) assuming everything in sight is right exact

and

b) everything except for the bottom left square commutes

then there is an isomorphism making the bottom left square commute?

Posted by: urs on March 1, 2007 8:18 PM | Permalink | Reply to this

Re: bimodules

I think we’re saying the same thing but in slightly different ways. Perhaps I just expressed myself badly. I’ll try to do better.

Let’s look again at the statement that the coequalizer functor $Coeq: C^{\bullet \stackrel{\to}{\to} \bullet} \to C$, being a left adjoint, preserves coequalizers. As we go through this, it might help if we run it through your specific example, and see what it says.

So imagine what a coequalizer diagram in $C^{\bullet \stackrel{\to}{\to} \bullet}$ must look like (I’m sorry; I’m not good or fast at constructing these diagrams). Now coequalizers in this or in any functor category are computed pointwise, so what we get is the part of the 3 x 3 diagram we had before consisting of the first two columns, say, which are coequalizer diagrams, and the six left-hand horizontal maps that mediate from the left-hand column to the middle one.

(To get back to your example: imagine that these first two columns involve coequalizers that express the construction of $M \otimes_B N$, tensored by stuff like $C \otimes P$ and $P$. Try writing it down on the back of an envelope.)

Now apply Coeq to this complex. (This Coeq is supposed to lead to the $\otimes_C$ constructions. In more detail, Coeq applied to the parallel pair of the first row gives $M \otimes B \otimes (N \otimes_C P)$, Coeq applied to the parallel pair of the second row gives $M \otimes (N \otimes_C P)$, and Coeq applied to the parallel pair of the third row gives $(M \otimes_B N) \otimes_C P$.) The result is something that plays the role of the third column. And, we have the three right-hand arrows on the right: each is an instance of the unit of the monad $Coeq \dashv diag$. The naturality of the unit guarantees the commutativity of the squares on the right (read in parallel, as appropriate).

If this is at all followable, then the column on the right looks like [turn it 90 degrees to make a column!]:

M \otimes B \otimes (N \otimes_C P) \stackrel{\to}{\to} M \otimes (N \otimes_C P) \to (M \otimes_B N) \otimes_C P.

Okay, now: because Coeq preserves coequalizers, the thing I just wrote down should be a coequalizer diagram. But the coequalizer of the parallel pair in this rightmost column is, by definition,

M \otimes_B (N \otimes_C P).

And so we get the canonical isomorphism

(M \otimes_B N) \otimes_C P \cong M \otimes_B (N \otimes_C P),

as desired.

Hope this makes sense. I can imagine that one of the maps in one of the last displays, M \otimes (N \otimes_C P) \to (M \otimes_B N) \otimes_C P, could look terribly confusing at first, but if you think very carefully about what I said about naturality of the unit of Coeq \dashv diag, it should all become clear eventually. I hope.
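For reference, all the parallel pairs in play here are instances of the defining coequalizer of the bimodule tensor product. Spelled out (the labels \rho and \lambda for the action maps are mine, not used above):

```latex
M \otimes B \otimes N
  \;\overset{\rho_M \otimes 1_N}{\underset{1_M \otimes \lambda_N}{\rightrightarrows}}\;
M \otimes N
  \;\longrightarrow\;
M \otimes_B N
```

where \rho_M : M \otimes B \to M is the right action on M and \lambda_N : B \otimes N \to N the left action on N.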

Posted by: Todd Trimble on March 1, 2007 10:58 PM | Permalink | Reply to this

Re: bimodules

I can imagine that one of the maps in one of the last displays, M \otimes (N \otimes_C P) \to (M \otimes_B N) \otimes_C P, could look terribly confusing at first

In case it did, I’ll point out that, on account of the associativity of the tensor product, there are two ways of calculating the coequalizer of the parallel pair in the second row. One leads to M \otimes (N \otimes_C P) and the other leads to (M \otimes N) \otimes_C P.

Posted by: Todd Trimble on March 1, 2007 11:41 PM | Permalink | Reply to this

Re: bimodules

Let’s look again at […]

I think I get it! :-)

Thanks for going through the trouble of explaining this stuff, which would so much more conveniently be explained at a blackboard. I very much appreciate it.

I will see if I can type a detailed version of the construction you described.

Meanwhile, I have gone through the version of the proof that I had and simplified it a little. In particular, I thought about what you said, that the assumption of coimages is unnecessary, and realized that I don’t need it either, because the fact that the bimodule tensor product is epi is sufficient for the purpose I invoked the coimage for.

New version is here. (I’d be glad to acknowledge your help in that file, in case you don’t object to being associated with it. Cannot promise though what will become of that file. Chances are it will become a kind of appendix to something related to physics.)

Posted by: urs on March 2, 2007 1:48 PM | Permalink | Reply to this

Re: bimodules

This isn’t to beat a dead horse; I just want to belabor some things to get the proofs to look as simple and trivial-looking as possible.

We assume given a monoidal category V with coequalizers such that the tensor product \otimes: V \times V \to V preserves coequalizers in each of its separate arguments.

Let C be a monoid in V; let N be a right C-module and P a left C-module. Let M be any object in V.

Lemma 1: The canonical map (M \otimes N) \otimes_C P \to M \otimes (N \otimes_C P) is an isomorphism.

Proof: This follows straightforwardly from the fact that M \otimes - preserves coequalizers.
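To see the mechanism of Lemma 1 in the vanilla case V = Vect, here is a small numerical sanity check (my own illustration, not part of the argument above; it only compares dimensions rather than constructing the natural isomorphism). In Vect the coequalizer of a parallel pair f, g: A → B is the quotient B / im(f − g), and applying M ⊗ − carries it to a coequalizer of the tensored pair:

```python
import numpy as np

# a parallel pair f, g : A -> B in Vect, as random integer matrices
rng = np.random.default_rng(0)
dimA, dimB, dimM = 4, 5, 3
f = rng.integers(-2, 3, size=(dimB, dimA))
g = rng.integers(-2, 3, size=(dimB, dimA))

# coeq(f, g) = B / im(f - g), so dim coeq = dim B - rank(f - g)
dim_coeq = dimB - np.linalg.matrix_rank(f - g)

# apply M (x) - : the pair becomes (1_M (x) f, 1_M (x) g); since
# rank(kron(I_M, f - g)) = dim M * rank(f - g), the coequalizer of the
# tensored pair has dimension dim M * dim coeq -- i.e. it agrees with
# M (x) coeq(f, g), at least at the level of dimensions
diff = np.kron(np.eye(dimM, dtype=int), f - g)
dim_coeq_tensored = dimM * dimB - np.linalg.matrix_rank(diff)

print(dim_coeq_tensored == dimM * dim_coeq)  # True
```

The same count works for any choice of f, g and dimensions, which is the shadow in Vect of "M ⊗ − preserves coequalizers".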

Lemma 2 (statement of 3 x 3 lemma).

Proof: Coeq: C^{\bullet \stackrel{\to}{\to} \bullet} \to C preserves coequalizers.

Corollary: Let M be a right B-module, N a (B-C)-bimodule, and P a left C-module. Then the functor - \otimes_C P preserves the coequalizer M \otimes B \otimes N \stackrel{\to}{\to} M \otimes N \to M \otimes_B N.

Proposition: The dotted arrow in the diagram below exhibits the associator for bimodule composition:

\array{ (M\otimes B\otimes N)\otimes_C P &\simeq& M \otimes B \otimes (N \otimes_C P) \\ \downarrow \downarrow && \downarrow \downarrow \\ (M \otimes N) \otimes_C P &\simeq& M \otimes (N \otimes_C P) \\ \downarrow && \downarrow \\ (M \otimes_B N)\otimes_C P &\cdots \gt& M\otimes_B (N\otimes_C P) }

Proof: The left-hand column is a coequalizer by the corollary. The displayed natural isomorphisms are given by lemma 1. Hence by universality of coequalizers, the dotted arrow exists and is an isomorphism.

Surely there is some standard reference for associativity of “vanilla” bimodule composition (vanilla V = \mathrm{Vect}), but I haven’t found it yet. Once found, then one could simply remark that the proof given by the standard reference works [assuming it’s basically the same as the proof above] only under the assumption that the tensor preserves coequalizers in its separate arguments – I don’t think more needs to be said. But feel free to use the proof sketched in this email if you like.

Some questions: can you describe for me the nature of the obstruction to closedness of modular tensor categories? Are they monoidally cocomplete [I think you hinted you’re not sure]? I guess I’m a little skeptical that they are, since one should then get closedness by a Freyd special adjoint functor theorem, or so I would suppose. But then, what would be the obstruction to monoidal cocompleteness?

Posted by: Todd Trimble on March 2, 2007 5:00 PM | Permalink | Reply to this

Re: bimodules

Okay, I see your point, that looks a little simpler! :-)

Well, Lemma 3 mostly absorbs the stuff that looks scary in my notes.

I have to run now, but will see how to incorporate that. Thanks again.

Would then maybe also have to reformulate the proof that the associator is indeed a homomorphism of bimodules (though I found my proof of that pretty slick, even though the diagram is large).

And the pentagon I haven’t even mentioned yet…

can you describe for me the nature of the obstruction to closedness of modular tensor categories?

No, unfortunately I cannot.

Are they monoidally cocomplete [I think you hinted you’re not sure]?

I should have said: I have no clue! :-)

And I should have been more precise concerning applications:

Mostly, we need bimodules only for algebras that are special Frobenius. For those everything goes through, in a general abelian monoidal category.

So I don’t really need all colimits to exist and be preserved by the tensor product. It will be sufficient to have that on a subcollection of objects.

In any case, I cannot and don’t want to assume the ambient category to be closed, in general.

But in fact, part of the point of the exercise is to see how much the CFT constructions in modular tensor categories actually generalize (say, to non-rational CFT). So in the optimal case we would arrive at a discussion of a very general context with few nice properties, and then work out all the nice extra stuff that happens as we make further assumptions, like for instance closedness.

Really have to run now…

Posted by: urs on March 2, 2007 5:52 PM | Permalink | Reply to this

Re: bimodules

Would then maybe also have to reformulate the proof that the associator is indeed a homomorphism of bimodules

There’s a standard result in monad theory one can invoke in these situations. Roughly speaking, it says that if M: V \to V is a monad and V^M is the Eilenberg-Moore category, then the underlying functor U: V^M \to V reflects classes of colimits which exist in V (such as coequalizers) that are preserved by M. Apply that to the monad A \otimes -: V \to V, and I think this gives you what you want: the coequalizer diagrams in V used to construct (M \otimes_B N) \otimes_C P and M \otimes_B (N \otimes_C P), and the isomorphism between them, are automatically reflected in A-Mod, because A \otimes - preserves coequalizers. In other words, you can regard the construction of the associator as already living at the level of bimodules.

And the pentagon I haven’t even mentioned yet…

As for the pentagon, of course one’s natural inclination is just to say “it holds by universality” (in this case of coequalizers). But perhaps one would like a clearer and less wishy-washy way of saying it.

I thought of something here which is perhaps slight overkill but which is a nice picture making the pentagon pretty clear, I think. It uses this part (b) of the 3 x 3 lemma: that in the case where the rows and columns are reflexive coequalizers, the evident diagonal diagram is also a reflexive coequalizer (and I feel pretty sure that my memory of this being in Johnstone’s Topos Theory is accurate). So both (M \otimes_B N) \otimes_C P and M \otimes_B (N \otimes_C P) are common coequalizers of the diagonal pair (hence canonically isomorphic); the different ways of placing the parentheses in these expressions correspond to different edgewise paths from one corner of the diagonal in the 3 x 3 square to the other.

So then, we imagine that the various ways of associating parentheses for a 3-fold bimodule composite correspond to different edgewise paths from one corner to the other in the 3 x 3 x 3 cube, but of course these different expressions are canonically identified as the common coequalizer of the (reflexive) diagonal in the cube. The six faces of the cube correspond to the five isomorphisms in the pentagon, plus one more due to an ambiguity in the way the coequalizer (M \otimes_B N) \otimes_C (P \otimes_D Q) is built up [is the \otimes_B composite or the \otimes_D composite formed first?]. I’m guessing this cube-pentagon connection is something familiar.
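Spelled out for the 3 x 3 square (a sketch in the notation above; the parallel pair combines the B-action and the C-action at once), the reflexive diagonal that both associated expressions coequalize is:

```latex
M \otimes B \otimes N \otimes C \otimes P
  \;\rightrightarrows\;
M \otimes N \otimes P
  \;\longrightarrow\;
(M \otimes_B N) \otimes_C P
  \;\cong\;
M \otimes_B (N \otimes_C P)
```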

It looks like some of your big diagrams are touching on the use of reflexive coequalizers, so there may be a nice tie-in with what you already have in your file.

The material on special Frobenius algebras looks interesting (Baez, Dolan, and I have also been looking at these, in the context of studying quantum sl_2 and the Jones polynomial), and I look forward to learning more.

Posted by: Todd Trimble on March 3, 2007 2:02 PM | Permalink | Reply to this

Re: bimodules

It’s good to see this high-powered way of doing things!

The material on special Frobenius algebras looks interesting […]

Would you maybe know a high-powered way of pinpointing what is so special about special ambijunctions (those ambijunctions corresponding to special Frobenius algebras)? How do they appear from the topos-theoretic bird’s eye point of view?

Posted by: urs on March 3, 2007 7:43 PM | Permalink | Reply to this

Re: bimodules

Would you maybe know a high-powered way of pinpointing what is so special about special ambijunctions (those ambijunctions corresponding to special Frobenius algebras)? How do they appear from the topos-theoretic bird’s eye point of view?

Alas, no, not yet. I’ll have to tuck this question away and report back if something comes to mind. Actually, we’ve been looking at what might be called “superspecial ambijunctions”, where the two beta constants coincide (this was in the context of discussing Temperley-Lieb algebras), and I guess it’s no coincidence that you use the letter “\beta”. (?)

The left-induced bimodules remind me of some things in descent theory, but I’d want to think further before saying anything more sensible.

Posted by: Todd Trimble on March 5, 2007 9:41 PM | Permalink | Reply to this

special ambijunctions

I’ll have to tuck this question away and report back if something comes to mind.

Okay, please let me know when you find something!

Here is how I see the relevance of this concept. This is what motivated it for me and what pretty much explains the prevalence of special ambijunctions in the context of 2-dimensional QFT. The only trouble is that it is a worrisomely “component-based” statement, which makes me feel that there should be a more natural and more abstract way to say it.

Anyway, here goes: in the context of 2\mathrm{Cat}_{\mathrm{Gray}}, consider two 2-functors that are related by an adjunction going back and forth between them. There are tin-can naturality equations expressing these adjoint morphisms of 2-functors. Try to find the condition under which these tin-can equations may be solved for the image of either one of the two 2-functors (with all other 2-cells brought to the other side). This is the case precisely when the adjunction is special.

I guess it’s no coincidence that you use the letter “\beta”. (?)

I must confess that I adopted this notation from FFRS and that I don’t know where they adopted it from!

I must also confess that I don’t even really know at the moment what a Temperley-Lieb algebra is.

What is it?? :-)

(BTW, I have thought quite a bit about all your remarks on bimodules and everything, and I really like them. If I have not completely rewritten everything in my notes yet, it is because the impact of what you said was too large rather than too small, if you know what I mean. I am waiting for my internal energies to reassemble themselves to attack this issue afresh.)

Posted by: urs on March 5, 2007 10:11 PM | Permalink | Reply to this

Re: special ambijunctions

What is it??

Hm, that happens when one gets too many good answers. One starts asking too many questions!

I should have just looked this up. In particular, given that David just had a posting about Temperley-Lieb algebras…

Posted by: urs on March 5, 2007 10:52 PM | Permalink | Reply to this

Re: special ambijunctions

Yes, and I should have linked to that post myself. I’m not sure what John has in store for us when he writes Next Week’s Finds, but among other things he might tell us a nice categorified story about these Temperley-Lieb algebras A_n. (Actually, there is an accumulating embarrassment of riches and he might not talk about that at all – I don’t want to jump the gun.)

A terse description without the niceties is that A_n is the endomorphism algebra of the n-th tensor power of the tautological or spin-1/2 representation V of quantum su_2. This V is self-dual, and there is an ambijunction determined by a “vanilla” adjunction V \otimes - \dashv V \otimes - together with a “chocolate swirl” adjunction V \otimes - \dashv V \otimes - obtained by twisting the vanilla one with a “symmetry” or Yang-Baxter operator V \otimes V \to V \otimes V. For this ambijunction, the bubble values are equal; the common value is denoted \beta.

The Yang-Baxter operator provides an action of the n-th Hecke algebra on V^{\otimes n}. There is a string of surjective algebra maps

n^{th} Braid group algebra \to n^{th} Hecke algebra \to A_n

and the Jones polynomial is obtained by taking a certain trace on the Temperley-Lieb algebra A_n.

I’m saying this all too rapidly and sketchily and decategorifiedly, but there’s a beautiful categorified picture which is slowly emerging…
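Since the question “what is a Temperley-Lieb algebra?” came up above, here is an executable sketch of the decategorified diagram algebra (my own toy construction, not anything from the discussion): a basis element of TL_n is a non-crossing pairing of n top and n bottom points, multiplication stacks one diagram on top of another and glues the middle boundary, and each closed loop formed in the middle is replaced by a scalar factor δ. The code checks the defining relations e_i e_i = δ e_i and e_1 e_2 e_1 = e_1 in TL_3.

```python
def e(i, n):
    """Temperley-Lieb generator e_i (1 <= i <= n-1) as an involution on 2n
    points: top points are 0..n-1, bottom points are n..2n-1."""
    p = {j: n + j for j in range(n)}           # start from the identity diagram
    p.update({n + j: j for j in range(n)})
    p[i - 1], p[i] = i, i - 1                  # cap joining tops i-1 and i
    p[n + i - 1], p[n + i] = n + i, n + i - 1  # cup joining the two bottoms
    return p

def compose(p, q, n):
    """Stack diagram p above diagram q (p's bottom glued to q's top).
    Return (resulting pairing, number of closed loops formed)."""
    seen = set()  # seam points (labelled 0..n-1) used by through-strands

    def trace(side, pt):
        # follow one strand from an external point to its other external end
        while True:
            if side == 'p':
                nxt = p[pt]
                if nxt < n:
                    return nxt                 # emerged at a top point of p
                seen.add(nxt - n)
                side, pt = 'q', nxt - n        # cross the seam into q
            else:
                nxt = q[pt]
                if nxt >= n:
                    return nxt                 # emerged at a bottom point of q
                seen.add(nxt)
                side, pt = 'p', nxt + n        # cross the seam back into p

    result = {}
    for t in range(n):                         # new top points (tops of p)
        result[t] = trace('p', t)
    for b in range(n, 2 * n):                  # new bottoms (bottoms of q)
        result[b] = trace('q', b)

    loops = 0                                  # strands never reaching the boundary
    for s in range(n):
        if s in seen:
            continue
        loops += 1
        cur = s
        while True:                            # walk around one closed loop
            seen.add(cur)
            j = q[cur]                         # stays among q's top points
            seen.add(j)
            cur = p[n + j] - n
            if cur == s:
                break
    return result, loops

n = 3
e1, e2 = e(1, n), e(2, n)

d, loops = compose(e1, e1, n)
print(d == e1 and loops == 1)    # e1 * e1 = delta * e1  -> True

c, l1 = compose(e1, e2, n)
d, l2 = compose(c, e1, n)
print(d == e1 and l1 + l2 == 0)  # e1 * e2 * e1 = e1     -> True
```

Scalars are tracked only as loop counts (powers of δ), which is enough to verify the relations; a full implementation would keep formal linear combinations of diagrams.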

Posted by: Todd Trimble on March 6, 2007 1:29 AM | Permalink | Reply to this
Read the post Limits and Push-Forward
Weblog: The n-Category Café
Excerpt: Question on the relation between push-forward of functors and (indexed) limits.
Tracked: March 31, 2008 8:57 PM
