
January 4, 2009

nLab – General Discussion

Posted by Urs Schreiber

The comment section of this entry is the place to post contributions to general discussion concerning the nLab, the wiki associated with this blog.

Discussion previously held at the nLab entry General Discussion should eventually migrate here.

Notice that, when posting a comment here which does not reply to a previous comment but starts a new thread, you can and should choose a new descriptive headline in the little box above the edit pane in which you type your message.

Previous blog discussions concerning the nLab were:

- Organizing the pages at the nLab

- Beyond the blog

- Toward a higher-dimensional Wiki.

Posted at January 4, 2009 12:53 PM UTC

TrackBack URL for this Entry:   http://golem.ph.utexas.edu/cgi-bin/MT-3.0/dxy-tb.fcgi/1886

344 Comments & 1 Trackback

terminology conventions

Twice now I have read an entry at the nLab and found myself disagreeing with the author’s choice of terminology. Of course, I proceeded to add a comment/query about it. The resulting discussion at subcategory seems to be converging, while a discussion at Grothendieck topology has yet to begin (of course, I just posted my inflammatory remarks there a few minutes ago).

This raises an interesting question: in cases of conflicting terminology, how should the nLab decide which to adopt? Wikipedia mandates a neutral point of view for all its articles, but the nLab is already intentionally taking sides on some controversial issues, as its About page says: “we do not hesitate to provide non-traditional perspectives… if we feel that these are the right perspectives, definitions and explanations from a modern unified higher categorical perspective.”

There may be little to no disagreement among contributors to the nLab about what the “right” modern unified higher categorical perspective is (although such disagreements are probably not out of the question), but there may be much more disagreement over terminology. Ideally, we could all discuss things and come to an agreement, but some new contributor in the future might arrive with strongly held differing views. Should we try to avoid making choices at all? We should certainly alert the reader of the existence of differing terminologies, but it seems that it might be desirable for the nLab as a whole to use consistent terminology. Any thoughts?

Posted by: Mike Shulman on January 4, 2009 1:12 PM | Permalink | Reply to this

Re: terminology conventions

Regarding “Grothendieck topology” – those are all good points, of course (and not the first time such points have been made). I wouldn’t call it particularly “the author’s choice of terminology” in this instance, since that might suggest that the author was doing something idiosyncratic instead of just following a tradition (however irreflectively :-) ).

It reminds me of another “controversy” that came up at the Café some time ago: use of the word “schizophrenic” as in “schizophrenic object” (something used to induce a concrete duality, such as ‘2’ playing a double role in Stone duality). Tom Leinster complained about the propriety of this word, quite rightly in my opinion, and there was some discussion about alternatives. Anyway, as the Café and the nLab build strength and momentum and become forces to be reckoned with, I’m all for taking advantage of the gathering strength and coming to a consensus on what we regard as improved terminology, with a view toward a brighter future. :-)

On the specific matter of “Grothendieck topology”, I like something along the lines of “local operator”. Recalling something Lawvere once wrote, I’d prefer “local modal operator”, since inserting j in front of a predicate ϕ is trying to express a modality of the form “it is locally the case that ϕ”.

Posted by: Todd Trimble on January 4, 2009 1:14 PM | Permalink | Reply to this

Re: terminology conventions

I wouldn’t call it particularly “the author’s choice of terminology” in this instance, since that might suggest that the author was doing something idiosyncratic instead of just following a tradition (however irreflectively :-) ).

You are quite right; I realized after I wrote that that it was poor phrasing. Part of the point I wanted to make is exactly what you said: that we can try to be a force for good terminology as well as good mathematics.

Not that one should go around changing terminology willy-nilly. But I think that “Grothendieck topology” is bad enough to be worth trying to change. I like “coverage” for a system of covers on a category, but your suggestion of “local modal operator” for the thing formerly known as a Lawvere-Tierney topology is certainly attractive. It’s a little long though; what about just “local modality”?

Posted by: Mike Shulman on January 4, 2009 9:18 PM | Permalink | Reply to this

Re: terminology conventions

‘what about just “local modality”?’

So far I like that! I’d like to sleep on it, try it on for size, etc., before I fully assent to that however.

My general instinct is to be cautious about inventing terminology – particularly avoiding the temptation to be too “cutesy”. The thing to remember is that whatever turn of phrase we adopt (it could be for anything), we should be prepared to live with it for decades down the pike, and something that seems witty or cute in the present day can become very tiresome or even embarrassing twenty or thirty years from now. But I don’t see that happening with “local modality”. It has a pleasant and dignified ring to it.

Posted by: Todd Trimble on January 4, 2009 9:36 PM | Permalink | Reply to this

Re: terminology conventions

Do ‘modalities’ in the sense you are using exhaust/exceed those generally considered in modal logic?

  • tense logic: henceforth, eventually, hitherto, previously, now, tomorrow, yesterday, since, until, inevitably, finally, ultimately, endlessly, it will have been, it is being…
  • deontic logic: it is obligatory/forbidden/permitted/unlawful that
  • epistemic logic: it is known to X that, it is common knowledge that
  • doxastic logic: it is believed that
  • dynamic logic: after the program/computation/action finishes, the program enables, throughout the computation
  • geometric logic: it is locally the case that
  • metalogic: it is valid/satisfiable/provable/consistent that

I see Awodey and Kishida note in their paper – Topology and Modality:

Although the topological formulation presented here is more elementary and perspicuous, the topos-theoretic one is more useful for generalizations. For example, we see from it that any geometric morphism of toposes (not just id^* ⊣ id_*) induces a modality on its domain. This immediately suggests natural models for intuitionistic modal logic, typed modal logic, and higher-order modal logic.

Hmm, that id^* ∘ id_* is a comonad is a case of what you, Todd, were saying elsewhere in response to the claim that “modal logic is seen as the logic of coalgebras”. So is there some larger topos-coalgebra story to tell?

Posted by: David Corfield on January 5, 2009 3:12 PM | Permalink | Reply to this

Re: terminology conventions

I don’t know a whole lot about modal logic, but I think that the modalities we are considering are probably precisely the particular modalities you listed as “geometric logic: it is locally the case that.” The idea is that for a proposition P, the proposition j(P) is true iff P is “locally true,” or true after passing to a cover, where the meaning of “cover” is defined by the particular local modality j under consideration.
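
Concretely, such a j is what the literature calls a Lawvere–Tierney topology: an operator on the subobject classifier Ω subject to three standard axioms. A standard formulation, for reference:

```latex
j \colon \Omega \to \Omega, \qquad
j \circ \mathrm{true} = \mathrm{true}, \qquad
j \circ j = j, \qquad
j \circ \wedge = \wedge \circ (j \times j)
```

The three axioms say, respectively, that truth is local, that “locally locally” collapses to “locally” (idempotence), and that the modality commutes with conjunction.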

This suggests that “geometric modality” might also be an appropriate term.

Posted by: Mike Shulman on January 5, 2009 7:44 PM | Permalink | Reply to this

Re: terminology conventions

We should certainly alert the reader of the existence of differing terminologies, but it seems that it might be desirable for the nLab as a whole to use consistent terminology. Any thoughts?

Yes indeed. A glossary page with nLab terminology in the first column would help.
Recall my problem with lax versus pseudo versus sh = ∞.

Posted by: jim stasheff on January 4, 2009 2:38 PM | Permalink | Reply to this

Re: terminology conventions

There is an interesting site

http://jeff560.tripod.com/mathword.html

which is an ongoing attempt to record the earliest occurrences of mathematical words.

Posted by: jim stasheff on January 6, 2009 1:02 AM | Permalink | Reply to this

pseudo

Searching the nLab for pseudo,
I am told of lots of entries containing the word,
but searching for pseudo-tensor or pseudo-functor
(or without the hyphen) I find nothing.

And I can’t create an entry because I don’t know the definitions.

It might also be good to have a page

pseudo versus lax

Posted by: jim stasheff on January 12, 2009 5:12 PM | Permalink | Reply to this

Re: pseudo

Three hits for pseudo\-functor.

Posted by: Jacques Distler on January 12, 2009 6:14 PM | Permalink | PGP Sig | Reply to this

Re: pseudo

Found it that way,
but why is the \ necessary?
Following two of the leads, I still can’t find a definition of pseudo anything.

We do need a glossary page,
either with definitions and/or links.

Posted by: jim stasheff on January 13, 2009 1:00 AM | Permalink | Reply to this

Re: pseudo

but why is the \ necessary?

Regexps.

Posted by: Jacques Distler on January 13, 2009 7:09 AM | Permalink | PGP Sig | Reply to this

Re: pseudo

I can’t create an entry becasue I don’t know the definitions

Don't let that stop you!

Already many entries have been created, where someone just describes the basic idea, leaving an ellipsis where the definition should go.

Most of these now have definitions, often written by a different person.

Posted by: Toby Bartels on January 13, 2009 3:48 AM | Permalink | Reply to this

Asking Questions

I probably represent the lower rungs of knowledge, but am very interested in everything at the nLab. Consequently, I have lots of questions. I’ve been asking questions in a “Discussion” section at the bottom of the page. Is that the right thing to do? Where should I ask questions? Sometimes my questions are very basic and I feel a little guilty for “polluting” the page with silly questions. Then again, even if my questions are silly, the answers are very helpful. Maybe it is best to ask questions on the page.

If the “Discussion” section begins to get unruly, we can always (as I suggested somewhere) create a separate page, i.e.

[[Discussion: directed graph]]

with

category: discussion

What do you think?

Posted by: Eric on January 4, 2009 3:38 PM | Permalink | Reply to this

Re: Asking Questions

I think concrete technical questions, at whatever level of understanding, well deserve to be placed in the corresponding nLab entry. Please continue asking such questions there.

I think if and when a question asked at an nLab entry turns out not to have a straightforward answer, or if the answer is controversial or the like, i.e. whenever the question requires discussion, this discussion should be moved here to the blog.

Posted by: Urs Schreiber on January 4, 2009 4:04 PM | Permalink | Reply to this

Re: Asking Questions

Seconded. If you are confused about something, it is likely that other people may also be confused about it, now or in the future, and when the answer is incorporated into the page it will make it that much better of an expository resource.

Posted by: Mike Shulman on January 4, 2009 9:27 PM | Permalink | Reply to this

Re: Asking Questions

I agree that questions about a given topic should appear on the page devoted to that topic, rather than on “General Discussion”. As long as they’re clearly separated from the main part of the page - maybe at the end, in a section called “Discussion” or “Questions” or something - they should be helpful to other readers who have similar questions.

What I’m wondering about is this. After a question has been answered, and some time has passed, would it be okay for me to go back and “polish” the question a bit, or the answers? Often a question will be a mixture of idiosyncratic concerns and some classic “question that everyone has”… and I feel that in the long run, most new readers would prefer to read the answer to a polished version of the question - the sort of thing you see in a FAQ.

For example, in the page monoidal categories, Eric asked if monoidal categories could be defined by internalization. This is a classic question: everyone who learns category theory should eventually ask:

Can we define monoidal categories using internalization? For example, is a monoidal category a monoid object in Cat?

The answer to this is quite instructive — so this question, and the answer(s), deserve to live permanently on this nLab page.
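
In brief, the textbook answer (stated here for readers meeting it first): a monoid object in a monoidal category (C, ⊗, I) is an object M with multiplication and unit morphisms

```latex
\mu \colon M \otimes M \to M, \qquad \eta \colon I \to M
```

satisfying strict associativity and unit laws. A monoid in (Cat, ×) is therefore precisely a strict monoidal category; a general monoidal category is instead a pseudomonoid, with associativity and units holding only up to coherent natural isomorphism.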

But Eric didn’t ask the question in its “perfected form”. So, I’m wondering if it’s okay for me to go back and “polish” it.

Obviously this runs the risk of annoying the person who originally asked the question. And this may be even more true when it comes to perfecting people’s answers. So, what should we do?

One possibility is to have a FAQ section that distills the results of discussions, separate from the actual discussion.

Posted by: John Baez on January 6, 2009 6:57 PM | Permalink | Reply to this

Re: Asking Questions

I think: whenever you see, by your own judgement, any way to improve the content of any entry, you should not hesitate to do so.

Every discussion within an entry is indication that it needs improvement. Anyone who feels he or she can offer this improvement should, I’d say,

- do so;

- and then move the respective discussion to the very bottom of the entry under a headline “Discussion” and a remark: “Previously we had had the following discussion”, or the like.

If the improvement does not lead to more discussion, we should all wait a while, and if nothing further happens concerning this point, eventually somebody should delete the now redundant archived discussion.

We proceeded this way on a couple of entries already, such as that on tensor product and that on ω-categories.

Posted by: Urs Schreiber on January 6, 2009 7:11 PM | Permalink | Reply to this

Re: Asking Questions

The only thing I would suggest is that if you modify a question (from me, for example), maybe also delete the “Eric says” so that it doesn’t look like I asked a perfect question (since we know THAT would never happen :))

Posted by: Eric on January 6, 2009 7:43 PM | Permalink | Reply to this

Re: Asking Questions

I wrote on HowTo that query boxes are impermanent parts of a page, and that you should expect your queries to be deleted (possibly by yourself) some time after they've been answered. (That was before we started moving long query boxes to discussion sections at the end of the page.)

Often, the result of a discussion is that you explain the answer to the original question in the article. That is the ultimate rewritten question; you rewrite the article to contain the answer already! Then you can delete the discussion, preferably after the original questioner agrees that their concerns are addressed.

On the other hand, some entries probably could use a FAQ section, something apart from the main material that answers common questions. That could be a good idea as well.

Posted by: Toby Bartels on January 6, 2009 11:12 PM | Permalink | Reply to this

Re: Asking Questions

A few times I have already incorporated the result of a discussion into the main article and then removed the discussion. Of course, one shouldn’t change text that’s specifically attributed to someone else, but polishing a discussion into an unattributed “FAQ” sounds like another good approach.

I think there isn’t a big risk of annoyance if we wait until it seems fairly clear that a consensus has been reached in a discussion (or at least an “agreement to disagree”), and then announce what you’ve done on the “latest changes” page to give anyone who was involved in the discussion a chance to object. Waiting a while after doing the improvement to the article before removing the discussion may also be a good idea, but we do have history saved so I’m not sure it’s always necessary.

Posted by: Mike Shulman on January 7, 2009 12:52 AM | Permalink | Reply to this

Adding “category: people” for New Contributors

Hi,

I meant to ask about this anyway, but Toby just left a note on my page, so I thought I would pose it here to see what others think.

First, a brief background: Toby noticed that I’ve been adding the words “category: people” to any new contributor’s page before they have a chance to add content to give some background about themselves. Here is my response with my reasoning. Opinions welcome!

Is it really a good idea to create pages for new authors that say nothing but ‘category: people’? I think that it might be better to leave the pages unwritten, so that they might notice the question marks by their names and be inspired to tell us about themselves. —Toby

Eric says: That is a good question! I was going to ask about that. I’ll pose the question on the blog and see what others have to say. Personally, when I first edited a page and saw my author name at the bottom with a big greyed out background and a question mark after it, I felt “pressured” to create my own page just to cosmetically get rid of the greyed out background. My thought was that by adding SOMETHING to the author’s page, it would take off some of that pressure and they can modify it whenever they want to at their leisure.

The point for adding “category: people” is that they will show up when you click “people”, so you can see a list of names of people who have contributed.

I’m happy to do whatever you guys want. I thought I was doing a service to new authors, but I can see various arguments either way.

Again, my experience was a bit of “panic” after making a small change to a page and seeing my name show up with the greyed out background and question mark after it. I can understand how new contributors might feel a little timid about changing something only to see their name highlighted with an intimidating (to me anyway) question mark.

By adding “category: people”, I thought I might be relieving any new contributors from the feeling that they MUST add something to their page just to get rid of that grey background and question mark next to their name.

I think it is obvious that new contributors are highly encouraged to add a bit of background to their page even without the greyed out background and question mark.

Like I said, I’m happy to do whatever. I feel like my contributions to the nLab will generally be more cosmetic than substantive, but I’m happy to help out in any way I can. (“Help out” is an ambiguous term and sometimes good intentions have undesired effects so I’m happy Toby asked this question)

Posted by: Eric on January 5, 2009 4:39 AM | Permalink | Reply to this

Re: Adding “category: people” for New Contributors

What about adding a comment to the HowTo page like:

“Once you edit a page, your name will appear at the bottom, grayed out with a question mark since you don’t have a user page yet. You may take this as an invitation to create your user page and tell us about yourself. But if you don’t want to or don’t have the time right now, you should also feel free to just make a blank user page for yourself, containing only ‘category: people’ (so that you show up in the list of contributors).”

Posted by: Mike Shulman on January 5, 2009 5:43 AM | Permalink | Reply to this

Re: Adding “category: people” for New Contributors

I like that idea!

Maybe even with a clear heading

#Note to New Contributors#

Once you edit a page, your name will appear at the bottom, grayed out with a question mark since you don’t have a user page yet. You may take this as an invitation to create your user page and tell us about yourself. But if you don’t want to or don’t have the time right now, you should also feel free to just make a blank user page for yourself, containing only ‘category: people’ (so that you show up in the list of contributors).

I was going to say, “If this is agreeable, I can add it.” But in the wiki spirit, I will just add it, and if you’d like to improve upon it or remove it, feel free :)

Posted by: Eric on January 5, 2009 6:02 AM | Permalink | Reply to this

Re: Adding “category: people” for New Contributors

feel free to just make a blank user page for yourself, containing only ‘category: people’ (so that you show up in the list of contributors).

Is it already on the HowTo page how to make such a page?

Posted by: jim stasheff on January 5, 2009 2:32 PM | Permalink | Reply to this

Re: Adding “category: people” for New Contributors

There is now :)

Posted by: Eric on January 5, 2009 3:42 PM | Permalink | Reply to this

Re: Adding “category: people” for New Contributors

Yes! I was going to say: “Please go ahead and add whatever you think should be added.” Now Eric already did. Good.

Posted by: Urs Schreiber on January 5, 2009 6:51 AM | Permalink | Reply to this

Re: Adding “category: people” for New Contributors

The point for adding “category: people” is that they will show up when you click “people”, so you can see a list of names of people who have contributed.

The comments here are quite sensible, except that most of them are based on this misconception.

Here is the problem: category: people already contains one person that is not a contributor (James Dolan), and presumably it will eventually have further articles on people unlikely to contribute (Bill Lawvere, maybe Ross Street), and even people who couldn't possibly contribute (Max Kelly, Saunders Mac Lane).

On the other hand, if you want a list of contributors, you click Authors at the top of the page.

Of course, we could restrict category: people to contributors; that is more convenient to look at than Authors for a simple list. And we could add, say, category: biography if we want to pick out the articles on Jim Dolan and Saunders Mac Lane.

But the main point is this: Nothing has to be done to get people's names on a list of contributors. In fact, they couldn't take it off the list if they tried!

I felt “pressured” to create my own page just to cosmetically get rid of the greyed out background.

Ah, see, I think that such pressure is a good thing. But if people are intimidated (as you suggest), then your removing that intimidation would be good. A note on HowTo would also be good (but I don't think that we should encourage them to only write category: people, since that's not necessary to join a list of contributors and therefore really adds nothing).

Posted by: Toby Bartels on January 5, 2009 4:40 PM | Permalink | Reply to this

Re: Adding “category: people” for New Contributors

The comments here are quite sensible, except that most of them are based on this misconception.

Bad proofreading; “this misconception” referred to a paragraph that I deleted from my final post, not to Eric's paragraph that I quoted; Eric's paragraph seems to be based on that misconception, but it does not actually contain it. (Which is: that category: people is the place to look for a list of contributors.)

Posted by: Toby Bartels on January 5, 2009 4:46 PM | Permalink | Reply to this

Re: Adding “category: people” for New Contributors

Good point!

Ok. So we should decide what “category: people” should mean.

Since we already have a “category: people”, I would suggest we use “category: people” for contributors and “category: biography” for pages dedicated to biographical information. This just means that contributors with biographical information should have

category: people, biography

This way you show up on both lists.

In my case, I have no biographical info on my user page, so mine would remain just

category: people

What do you think?

Posted by: Eric on January 5, 2009 5:33 PM | Permalink | Reply to this

Re: Adding “category: people” for New Contributors

What do you think?

Sounds good to me!

Posted by: Urs Schreiber on January 5, 2009 5:47 PM | Permalink | Reply to this

Re: Adding “category: people” for New Contributors

A less automated, but effective solution is to just leave “category: people” as is and create a page [[Contributors]].

Posted by: Eric on January 5, 2009 7:10 PM | Permalink | Reply to this

Re: Adding “category: people” for New Contributors

And how is that not completely redundant with the “Authors” page?

Posted by: Jacques Distler on January 5, 2009 8:45 PM | Permalink | PGP Sig | Reply to this

Re: Adding “category: people” for New Contributors

The “Authors” page is a list of authors together with every single page they’ve ever modified. It is a good thing to have, but is difficult to read and isn’t exactly the right thing if all you want to see is a concise list of contributors.

Posted by: Eric on January 5, 2009 9:41 PM | Permalink | Reply to this

Re: nLab – General Discussion

Excuse me, it’s just to report a small typo on this page: obserrvables. I don’t know whether this is the right place to report on typos as well as other basic questions… Please remove this comment afterwards.

Posted by: Christine Dantas on January 9, 2009 8:37 PM | Permalink | Reply to this

Typos (Was: nLab – General Discussion)

The right place to report typos on a page is through the “Edit” link near the bottom of the page.

It's a highly automated process; instead of giving you a form to write to a human about the typo, this link simply presents you with the full contents of the page in the original markup. Correct the typo there, hit the “Submit” button, and the software will automatically correct the typo on the page itself in real time.

(You need Javascript and cookies, for some security or anti-spam reason that Jacques knows. But you can delete the cookies afterwards if you want.)

Posted by: Toby Bartels on January 9, 2009 9:02 PM | Permalink | Reply to this

Re: Typos (Was: nLab – General Discussion)

Thanks. I will try it.

Posted by: Christine Dantas on January 10, 2009 2:06 PM | Permalink | Reply to this

enriched sheaf and topos theory?

We have lots of entries currently which describe concepts enriched over/in Sets that exist just as well enriched over/in more general enrichment categories.

In particular, everything we say about presheaf and Yoneda lemma, and all we currently say at sheaf.

I would like to eventually add more systematic remarks on how things generalize to more generic contexts. But I notice that there are lots of aspects that I don’t know yet: for instance while the definition of sheaf generalizes to the arbitrary enriched context, various constructions need not, such as notably sheafification, I suppose.

And what happens to the topos-theoretic aspects of presheaf categories in the enriched context?

Different but maybe not unrelated to that is the question of whether we can formulate some of the aspects in entries such as (-1)-groupoid more robustly in an internal way. I gather that (-1)-categories and (-1)-groupoids are another way of talking about the subobject classifier in the ambient topos.

Now, as we realize that the 0-category of (-1)-categories is really a (0,1)-category, namely the poset Subobjects(Ω), how do we say that internally? We don’t want a “partially ordered set” but a “partially ordered object” in the topos, an internal (0,1)-category. Or not?
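
One standard way to say “partially ordered object” internally, sketched for reference: an object P with a subobject R ↣ P × P (the internal order relation) such that, in the poset of subobjects of P × P,

```latex
\Delta_P \le R \quad (\text{reflexivity}), \qquad
R \wedge R^{\mathrm{op}} \le \Delta_P \quad (\text{antisymmetry}), \qquad
R \circ R \le R \quad (\text{transitivity})
```

where Δ_P is the diagonal, R^op the image of R under the twist map, and R ∘ R the relational composite formed via pullback and image.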

Maybe I am asking if, in parallel to enriched homotopy theory, which tries to coherently pair [enriched category theory] ∪ [homotopy theory], we can discuss a coherent pairing [enriched category theory] ∪ [sheaf and topos theory].

Probably this already exists and I am just being ignorant. I’d like to learn about that and start adding stuff about it to the nLab.

I find myself thinking about the pairing of all three of these, actually. All input would be welcome.

Posted by: Urs Schreiber on January 10, 2009 12:32 PM | Permalink | Reply to this

Re: enriching concepts

The general plea from Urs is a good one, but may I ask that there be some more discussion of pedagogical issues in this discussion section as well. This particularly relates to enrichment and may help clarify things with regard to the mathematical aspects too.

When going towards enriched settings it is easy for those in the know to do something that is correct but not completely transparent. The task of dragging concepts from the dark is hard, and sometimes, especially with enriched concepts, one has to rewrite the original unenriched concept slightly to get the idea to work. If the nLab is to be useful not only for those of us who are already reasonably skilled at this process, but also for the beginner in the area, then we need something more like:

Original unenriched concept:

Adapted description with reasons why, intuition etc. for the slight change in viewpoint, then

Enriched concept.

I have been trying to use something like this approach in working with weighted limits and colimits, with just partial success. (It is not that easy to do!) If someone wants to see my efforts in the longer menagerie notes just ask.

The same thing goes for categorification. I am trying to explain categorification to some analysts and mathematical physicists for some seminars, and the problem is to find the right direction of approach.

I would greatly appreciate discussion of these slightly pedagogic issues on the blog.

Posted by: Tim Porter on January 10, 2009 4:42 PM | Permalink | Reply to this

Re: enriching concepts

Hi Tim,

I would love to see anything you have to say, e.g. notes, lectures, whatever. I think I am a good target audience for what you are trying to do (PhD in applied physics/engineering, but have been e-hanging around with these guys for decades).

If you have anything you prefer sending via email, mine is eforgy at yahoo, but providing links works too!

Posted by: Eric on January 10, 2009 9:36 PM | Permalink | Reply to this

Re: enriching concepts

I agree with Tim. And I think that this goes for more than just enrichment (and, obviously, internalization). We naturally like, in a page's first draft, to get down the cool n-categorial way of looking at something; but when we get a fleshed-out article that we expect newcomers to learn something from, then we really should start with the familiar and move from there to the new and exciting.

Posted by: Toby Bartels on January 11, 2009 10:00 AM | Permalink | Reply to this

Re: enriching concepts

Amen! And remember those of us for whom the muse did not sing that language at our cradle.

Posted by: jim stasheff on January 11, 2009 2:36 PM | Permalink | Reply to this

Re: enriching concepts

We naturally like, in a page’s first draft, to get down the cool n-categorial way of looking at something; but when we get a fleshed-out article that we expect newcomers to learn something from, then we really should start with the familiar and move from there to the new and exciting.

We discussed this before in some other thread:

a) YES, we want a description at each single entry of everything at all levels of sophistication which seem useful. Under suitable headlines such as “Basic idea”, “How to think about this”, “Now the full definition”, or “Here the full-blown abstract-nonsense way to think about it”, we want all the relevant discussion.

b) But this requires work. So it requires YOU to help out. If any entry contains just the lowbrow or just the highbrow description of anything at the moment, that does not mean it is intended to be restricted to that. Everybody PLEASE go ahead and add whatever material or discussion he or she feels should not be omitted from the given entry.

Posted by: Urs Schreiber on January 11, 2009 6:01 PM | Permalink | Reply to this

Re: enriched sheaf and topos theory?

I edited (-1)-groupoid to clarify the poset situation; you are right that Ω is always a partially ordered object in the topos.

Enriched topos theory is actually a deep and (at least to me) unclear subject. So much of topos theory seems to depend on cartesianness. I've occasionally thought about what an enriched topos might be, but never come up with anything really satisfactory. If the enrichment isn't cartesian monoidal, then the internal logic of an enriched topos would probably be linear logic, but how to interpret linear logic internally in some category is also not an obvious question. One expects that perhaps "quantales" (closed monoidal suplattices) will play a role.
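For reference, the distributivity conditions that make a complete lattice with an associative product a quantale (stated here only as the standard definition, not as a claim about what an enriched topos needs):

```latex
% A quantale: a complete lattice (Q, \bigvee) with an associative product
% \otimes that distributes over arbitrary joins on both sides:
\begin{aligned}
  a \otimes \Big(\bigvee_{i} b_i\Big) &= \bigvee_{i}\,(a \otimes b_i), \\
  \Big(\bigvee_{i} b_i\Big) \otimes a &= \bigvee_{i}\,(b_i \otimes a).
\end{aligned}
```

Since the product preserves joins in each variable separately, it has a right adjoint in each variable, which is what makes the monoidal structure closed.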

On the other hand, one can certainly define sheaves of anything one likes. It may be a bit harder if the site is also enriched. But I don’t know a whole lot of examples. In one approach, spectra are defined to be a certain sort of sheaf of topological spaces, although the word “sheaf” isn’t usually used.

Posted by: Mike Shulman on January 10, 2009 9:35 PM | Permalink | Reply to this

Re: enriched sheaf and topos theory?

On the other hand, one can certainly define sheaves of anything one likes. It may be a bit harder if the site is also enriched. But I don’t know a whole lot of examples.

The motivating class of examples which made me come to post the above comment is the case where V is some category of "higher structures", such as simplicial sets or ω-categories.

Following our discussion at \infty-stack homotopically, we can model a whole lot of \infty-category theory in terms of the V-enriched homotopical category of V-valued (pre)sheaves on some locally small category S.

But thinking about this, the statement becomes most natural if we think of S as being V-enriched itself. Mostly implicitly, one thinks here of the natural canonical inclusion Sets ↪ V.

But in almost every discussion of covers, this makes people run into a little cheat: in the context of locally small S, covers are described by a variety of means, but in most applications a much more elegant description arises after enlarging the perspective suitably: covers in S are just suitable acyclic fibrations in [S^op, V] after the Yoneda embedding.

(There is a discussion (which has potential for improvement…) at the end of nLab: sieve which mentions how a sieve is nothing but the presheaf represented by the acyclic fibration in [S^op, V] induced from the cover.)

That looking at S as a locally small, i.e. Sets-enriched, category here is in some respects suboptimal, or at least slightly awkward, is rarely mentioned in the literature. It is mostly considered obvious, I suppose. I found one remark which mentions the awkwardness explicitly, as footnote 3 on p. 10 of Toën's Higher and derived stacks.

Maybe I am seeing ghosts here, but this made me think that there is reason to contemplate seriously the possibility that we want to think of S here as V-enriched for possibly more general enrichments than those factoring through Sets ↪ V, and to consider [S^op, V] in the fully V-enriched context. For instance, instead of taking S = Top or S = Diff as usual, one could start with S = Top^{Δ^op} or S = Diff^{Δ^op}, which would make the covers of spaces honest objects in S.

Whether one does this fully explicitly or not, it seems that a bunch of general questions arising when modelling \infty-categorical structures using "enriched homotopy theory" or "homotopy coherent category theory" should naturally live in a context of "sheaf and topos theory enriched over V". Or at least that was what motivated my remark.

Posted by: Urs Schreiber on January 11, 2009 5:44 PM | Permalink | Reply to this

Re: enriched sheaf and topos theory?

I agree that there is reason to contemplate it seriously, but I have not myself done so. I expect that some, but not all, aspects of the unenriched theory will carry over. I am not aware of much work in this direction, although I haven’t done a careful literature search and there might well be some I am unaware of. I would definitely be interested in seeing what you come up with.

Posted by: Mike Shulman on January 12, 2009 5:42 AM | Permalink | Reply to this

Re: enriched sheaf and topos theory?

Mike wrote:

I agree that there is reason to contemplate it seriously, but I have not myself done so. I expect that some, but not all, aspects of the unenriched theory will carry over. I am not aware of much work in this direction, although I haven’t done a careful literature search and there might well be some I am unaware of. I would definitely be interested in seeing what you come up with.

I am thinking that the concept of derived ∞-stack (→ nLab) is trying to be one realization of this idea.

Posted by: Urs Schreiber on January 13, 2009 6:53 PM | Permalink | Reply to this

Re: enriched sheaf and topos theory?

Further concerning homotopical+enriched+topos theory:

There is a notion of model topos due to C. Rezk. This, and most everything else related for the case of enrichment over V = SimpSet, is in Toën–Vezzosi's Homotopical Algebraic Geometry I: Topos theory.

I am beginning to collect this stuff at nLab: derived ∞-stack.

Posted by: Urs Schreiber on January 13, 2009 9:08 PM | Permalink | Reply to this

Composing Papers

It seems that a natural evolution for the nLab is to actually start writing research papers in the open. We’ve collected a bunch of pages with definitions, but it seems you’re already pushing the boundaries of known material. Would it make sense to start discussing a framework for collaborating?

What I’m thinking about is possibly a new category, e.g.

category: article

or

category: draft

These would be pages that represent works in progress.

For example, it would be nice to have a page:

[[Groupoidification Made Easy]]

category: paper

This would be John’s paper out in the open ready for anyone to edit. It would point to many existing pages. Etc etc etc.

If we ever did anything like this, should we settle on a naming convention? For example:

[[Paper: Groupoidification Made Easy]]

or

[[Article: Groupoidification Made Easy]]

This seems like a natural thing to do, but I'm happily free of the pressures of academia, so I could easily imagine hesitations.

Thoughts?

Posted by: Eric on January 10, 2009 11:16 PM | Permalink | Reply to this

Re: Composing Papers

If we ever did anything like this, should we settle on a naming convention? For example: [[Paper: Groupoidification Made Easy]] or [[Article: Groupoidification Made Easy]]

That shouldn't be necessary, as long as the titles don't conflict with any normal page titles. Most of the time, capitalisation should be enough to avoid that, and we can always make a rule that you have to add something to your paper title (for example, [[On Day convolution]] to avoid clashing with [[Day convolution]]). Actually recognising and searching for papers (from among the other articles) is what the categories are for. Probably one category will be enough, but we can always add more later if we want.

Otherwise, I agree with everything that you say.

Posted by: Toby Bartels on January 11, 2009 7:43 AM | Permalink | Reply to this

Duplication

We have started (and I've done much of it, although I didn't start it) to duplicate subjects (in a way that they would never allow on Wikipedia), and I think that this is a good thing. But it can be confusing, especially if you think that it's something that needs to be fixed, so I think that I should explain here what I'm seeing and doing, and why I think that it's a good idea.

First, an example (the first that I ever noticed, and had to suppress a Wikipedia-borne urge to ‘fix’): [[NQ-supermanifold]] and [[Lie infinity-algebroid]]. At first glance, these are entirely different subjects, and it's an important theorem (or conjecture? I see only now that Urs has not given a reference for this fact) that they are equivalent. Wikipedia might allow this if they were used in different ways by different people, but even then there would be a discussion (see http://secure.wikimedia.org/wikipedia/en/wiki/Talk:Family_of_sets for an example).

An example of a different sort is [[2-category]], [[bicategory]], and [[strict 2-category]]. Here, [[2-category]] is a general article, with no one precise definition, about what a 2-category should be and how the concept should behave. In contrast, [[bicategory]] and [[strict 2-category]] are about specific definitions of 2-categories and their actual properties. (At least, that's how I see them potentially; there's very little on [[bicategory]] so far.) While there is, strictly speaking, no duplication of subject here, one might try to put it all on [[2-category]]; certainly, Wikipedia would (and does) have only two pages rather than all three.

Here is what I see as the big difference in practice: while Wikipedia's pages are about a given subject, ours are about a given term. So while there are only two kinds of 2-category used in mathematics, we have a page on each (with its term) and on the general concept (with its term, even though most people use that term for one or the other of the specific subjects). And while NQ-supermanifolds are equivalent to Lie \infty-algebroids, this is not obvious but is an important fact that must be noted, which we do on each page with no denigration of either term.

Now, one difference between the n-Category Lab and Wikipedia is that Wikipedia's software makes it very easy to seamlessly redirect links to [[foo]] to another page [[bar]], and we don't have that feature here. So if we decide to combine pages, or later to split them, we have to go through the whole rest of the wiki and fix or disambiguate all of the links (or force the user to make an extra click every time they follow one). But of course, if that were the only reason for the difference, then we would prefer to add this feature to our own software.

However, there's a much more important difference between us and Wikipedia, which is that Wikipedia is a compendium of established knowledge, while we are pushing the frontiers. This means that many of our definitions will be tentative, and even some that we are sure about will need justification for the uninitiated. With that understanding, it's very useful to have a page on each term, so we can explain on each page the points relevant to that term. To the extent that we're sure of an identification of one term with another, we can justify the more obscure term on its page and send people to the more common term's page to read all the facts. To the extent that our identifications are uncertain or in flux, we can clearly distinguish on each term's page the known properties of that term's referent from the conjectured properties, a distinction that won't always be the same from term to term. (And where the identification is fully established and well known, as between [[monad]] and [[triple]], then we just pick one and use it throughout; in this case, we seem to have picked [[monad]].)

If, on the other hand, you disagree with me, then your best example of pointless proliferation is [[truth value]], [[-1-category]], [[-1-groupoid]], and [[0-poset]], for which I am entirely to blame.

Posted by: Toby Bartels on January 11, 2009 8:53 AM | Permalink | Reply to this

Re: Duplication

NQ-supermanifolds are equivalent to Lie ∞-algebroids, this is not obvious but is an important fact that must be noted

I doubt it's even true: super implies ℤ/2-graded.

Even if true, would it mean the definitions are equivalent, or just that the categories are equivalent, e.g. every foo is equivalent to a bar and every bar is equivalent to a foo?

Posted by: jim stasheff on January 11, 2009 2:43 PM | Permalink | Reply to this

Re: Duplication

NQ-supermanifolds are equivalent to Lie \infty-algebroids, this is not obvious but is an important fact that must be noted

I doubt it's even true: super implies ℤ/2-graded.

And the "N" in "NQ" says that this ℤ/2-grading is lifted to an ℕ-grading.

This is well known to be true in low degree and has been proposed to be the general definition at least by Pavol Ševera, and is being generally adopted by people thinking about it.

I thought I gave references at the nLab. Maybe these ended up more in the entry on “Lie theory” or somewhere else. I’ll try to go through the entries and make this clearer.

(This gives me a chance to voice a general experience: writing a single thought into a single entry is one thing; producing a coherent wiki is quite another, and is beginning to require quite a bit of effort…)

Posted by: Urs Schreiber on January 11, 2009 5:53 PM | Permalink | Reply to this

Re: Duplication

Another reason to avoid the NQ-super terminology

Posted by: jim stasheff on January 12, 2009 2:14 AM | Permalink | Reply to this

Re: Duplication

For what it's worth, I like the direction the nLab is going. It makes sense to me to have as many pages as you think are necessary. It is easy enough to link things together. Plus, we can always consolidate things later once a clear picture emerges.

But on the topic of "Duplication", I've seen an example or two where content was copied from one page to another. I'm not sure that is a good idea: it becomes difficult to keep track. If someone modifies the content on one page but not on the other where the same content appears, things can become messy. I suggest that we try to avoid copying and pasting content from one page to another and instead provide links. I have been tempted to undo cases where I've seen this, but have been hesitant to "delete" stuff even if it was just copied from another page.

Posted by: Eric on January 11, 2009 6:32 PM | Permalink | Reply to this

Re: Duplication

Speak of the devil!

I see Urs has just created a page [[Yoneda embedding]] with content that also appears on [[Yoneda lemma]]. As a general practice, I would suggest that once the new page [[Yoneda embedding]] is created, we replace the section "Yoneda embedding" with a link to [[Yoneda embedding]] to avoid duplicate content.

I will make the change (and you can revert back if you want to), but suggest this as a way to go moving forward.

Posted by: Eric on January 11, 2009 6:43 PM | Permalink | Reply to this

Re: Duplication

Let’s be a bit careful with Yoneda here.

Maybe it's worth having the overlap between the entries "Yoneda lemma" and "Yoneda embedding" the way we have it right now (this very minute, that is; I can't guarantee anything for the time you are reading this comment… ;-).

Posted by: Urs Schreiber on January 11, 2009 7:01 PM | Permalink | Reply to this

Re: Duplication

But on the topic of “Duplication”, I’ve seen an example or two where content was copied from one page to another. I’m not sure that is a good idea.

When I say that we should duplicate subjects, I don't mean that we should duplicate content. Each page will explain the ideas from the perspective of the terminology and definitions that it uses.

On the other hand, I do think that copying can be of some use, especially when starting a new page. But we shouldn't expect to keep them in sync; again, each page will discuss the material in its own way.

If [[foo]] talks about bars (including, but perhaps not limited to, their relationship to foos), but we realise that we want a separate page [[bar]], then I would initially copy the material on bars from [[foo]] to [[bar]]. Then I would rework that material to start from the notion of bar, which may amount to simply changing the introduction. And then I would abbreviate the discussion on bars in [[foo]], focussing on their relation to foos but removing the general facts about bars. At this point there may (or may not) remain duplicate sentences or even whole paragraphs about the relationship between foos and bars, but I would make no attempt to keep them identical in future edits (unless such an edit amounted to correcting an error or something like that).

Posted by: Toby Bartels on January 12, 2009 2:22 AM | Permalink | Reply to this

Re: Duplication

Toby wrote:

However, there’s a much more important difference between us and Wikipedia, which is that Wikipedia is a compendium of established knowledge, while we are pushing the frontiers. This means that many of our definitions will be tentative, and even some that we are sure about will need justification for the uninitiated. With that understanding, it’s very useful to have a page on each term, so we can explain on each page the points relevant to that term. To the extent that we’re sure of an identification of one term with another, we can justify the more obscure term on its page and send people to the more common term’s page to read all the facts. To the extent that our identifications are uncertain or in flux, we can clearly distinguish on each term’s page the known properties of that term’s referent from the conjectured properties, a distinction that won’t always be the same from term to term.

Yes, I agree.

One place where this issue is currently occupying my attention is the circle of entries which revolve around cohomology and \infty-stacks.

It was after reading (again) Mike Shulman's work on enriched homotopy theory, and re-reading in its light the work by Toën on \infty-stacks, that I came to think that there is a big(ger) and more coherent picture to be described here. I started indicating this picture at \infty-stack homotopically and am currently developing it further, more privately, over at Nonabelian homotopical cohomology and fiber bundles. I am thinking that this should eventually make me go back, rearrange all the existing entries on higher cohomology, and tell the story from this perspective.

But since this is more work, with all the links in there, for the time being I think I will develop this at Nonabelian homotopical cohomology and fiber bundles and then apply the changes to the nLab in one stroke later.

Posted by: Urs Schreiber on January 11, 2009 7:10 PM | Permalink | Reply to this

Re: Duplication

In general I agree with what you say. For pairs of terms that are equivalent but apparently different, I think it can be useful to have a discussion from different points of view on the page of each name. But for pairs that are really just synonyms, it makes more sense to me for one page to just say "see blah."

In the last-mentioned case, I think it is good to keep “truth value” separate from all the negative thinking pages, since many people may care about truth values but not want to wade through anything talking about (-1)-categories. However, since currently the content and points of view of “(-1)-category,” “(-1)-groupoid,” and “0-poset” are very very similar, I would be in favor of having two of those just say “see the third,” and my choice would probably be for the third to be (-1)-category.

I agree with what you said about copying being useful for starting pages, but only when we expect the new page to soon go in a different direction. I personally don't have any expectation that (-1)-groupoid and 0-poset will contain any notably different information or points of view from (-1)-category in the foreseeable future; do you? If I want to change or add something to one of those pages, I don't want to have to do it on all three, but if they all duplicate each other, the only alternative is for them to become randomly out of sync (each containing only those changes made by people who happened to pick that page to edit, rather than one of the other two).

Posted by: Mike Shulman on January 12, 2009 5:37 AM | Permalink | Reply to this

logging latest changes

I want to ask again that everybody who does anything on the nLab more nontrivial than minor polishing leave a brief note on it at latest changes.

As the wiki grows, we need to find ways to reduce unnecessary time overhead as much as possible, or we'll all go insane. After staying away from the wiki for more than 24 hours, it is already becoming burdensome to go through the automatically created list at "Recently Revised". It should be sufficient to look at "latest changes" to figure out which activity one might want to look into.

Posted by: Urs Schreiber on January 11, 2009 6:28 PM | Permalink | Reply to this

Re: logging latest changes

I wonder if there is a way to automate this. Is there a way to distinguish between a minor polish and an addition of content? For example, if fewer than 20 characters have changed, it is a minor polish. Then have the "Recently Revised" page only display modifications that involve more than 20 characters.
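The threshold heuristic could be sketched like this (a hypothetical illustration only; the wiki itself runs on Instiki, which is written in Ruby, and none of these names come from its actual code):

```python
# Hypothetical sketch of the idea above: classify a revision as a
# "minor polish" when fewer than 20 characters differ between versions.
import difflib

MINOR_THRESHOLD = 20  # characters; the cutoff suggested in the comment

def changed_characters(old: str, new: str) -> int:
    """Count characters inserted or deleted between two page revisions."""
    matcher = difflib.SequenceMatcher(None, old, new)
    changed = 0
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op != "equal":
            changed += (i2 - i1) + (j2 - j1)
    return changed

def is_minor_edit(old: str, new: str) -> bool:
    """True when the revision falls under the minor-polish threshold."""
    return changed_characters(old, new) < MINOR_THRESHOLD

print(is_minor_edit("A category has objects and morphisms.",
                    "A category has objects and morphisms!"))  # → True
```

A character count is of course a crude proxy (fixing a wrong sign is one character but hardly minor), which is part of why a human-written comment may carry more information.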

How difficult is it to modify the source code? I'm not a programmer, but I could give it a shot.

Posted by: Eric on January 11, 2009 6:36 PM | Permalink | Reply to this

Re: logging latest changes

I wonder if there is a way to automate this.

But I want everybody to drop a small human-written comment at “latest changes” describing what it is that was changed.

I think we really need this. We can’t just all go around working on the wiki without systematically telling each other what it is we are doing.

Posted by: Urs Schreiber on January 11, 2009 6:57 PM | Permalink | Reply to this

Re: logging latest changes

I haven’t been doing a good job recently updating my “latest changes” – sorry. I’ll try to be more disciplined about this.

It would help a little if “latest changes” were one of the headings (like Authors, Recently Revised, etc.). Could that be arranged?

Posted by: Todd Trimble on January 11, 2009 9:24 PM | Permalink | Reply to this

Re: logging latest changes

It would help a little if “latest changes” were one of the headings (like Authors, Recently Revised, etc.).

I agree, but in the meantime, it is at least one of the links in the [[contents]] banner (which is easy to edit) that appears to the side of [[HomePage]] and several other pages.

Posted by: Toby Bartels on January 12, 2009 2:08 AM | Permalink | Reply to this

Re: logging latest changes

A couple questions for the community on this:

  • Does creating a page ever count as “minor polishing?” Or should creation of new pages always be noted? Consider, for instance, bijection.

  • Should every continuation of a discussion be noted, as in “continued discussion with so-and-so at X page”?

Posted by: Mike Shulman on January 13, 2009 3:13 PM | Permalink | Reply to this

Re: logging latest changes

A couple questions for the community on this:

I would answer "yes" at least to the second of these questions. I find it very useful if, after a day of work on something else, I can just open the nLab, go to the "latest changes" page, and get a quick idea of what happened that day. This helps me organize my time and attention for the nLab and makes my time spent on it more effectively useful for the general progress.

Posted by: Urs Schreiber on January 13, 2009 4:32 PM | Permalink | Reply to this

Re: logging latest changes

I can see why you like to have the "latest changes" page, but I can also easily imagine how it could be counterproductive.

Let me be a devil’s advocate…

People should feel free to edit and add content whenever they can find a free moment. By making contributors feel obligated to update "latest changes", it almost becomes a chore to keep track of what you have done. If people have limited time (imagine that!), they might just give up contributing if they have that extra pressure to update "latest changes". In many cases, updating "latest changes" could easily take more time and effort than the actual content they added. At that point, you have to start considering the trade-offs, and some people might decide it is not worth it.

Instead of pressuring people to update “latest changes”, what if we created an RSS feed that contained ONLY the changes without the surrounding text. That way you can easily thumb through beautifications and see where content is really being created. It becomes obvious and would suit your purposes without putting additional burden on would-be contributors.

Posted by: Eric on January 13, 2009 5:56 PM | Permalink | Reply to this

Re: logging latest changes

I think ideally there would be a "comments" field when you edit a page, along with an "is minor edit" checkbox. Then "latest changes" could be automatically populated with the comments for non-minor edits.
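This suggestion amounts to a small data model; a minimal sketch, with all names hypothetical rather than taken from the wiki software:

```python
# Sketch of the idea above: each edit carries an optional comment and an
# "is minor" flag, and "latest changes" is just the comments of the
# non-minor edits. (Hypothetical data model, not the wiki's actual schema.)
from dataclasses import dataclass

@dataclass
class Edit:
    page: str
    author: str
    comment: str = ""
    minor: bool = False

def latest_changes(log):
    """Render the non-minor edits as human-readable log lines."""
    return [f"{e.page} ({e.author}): {e.comment}"
            for e in log if not e.minor]

log = [
    Edit("Yoneda embedding", "Urs", "split off from Yoneda lemma"),
    Edit("sieve", "Toby", "fixed a typo", minor=True),
]
print(latest_changes(log))  # only the non-minor edit appears
```

The point of the design is that the comment is captured at the moment of the edit, so no separate trip to a "latest changes" page is needed.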

Posted by: Mike Shulman on January 14, 2009 5:45 AM | Permalink | Reply to this

Re: logging latest changes

THAT is a good idea. Now we just need someone to implement it :)

I'm not the best qualified, but I could give it a try. What language is the wiki written in? How do we modify the code?

Posted by: Eric on January 14, 2009 2:46 PM | Permalink | Reply to this

Re: logging latest changes

Logging editorial work at the page latest changes was just a suggestion on my part, since it seemed to me to be beneficial for everybody involved. But it's just a suggestion.

Everybody is perfectly free not to follow this suggestion of course. Please, I’d rather have you work on some entry and not log about it than not work on some entry!

Myself, I won't invest energy in automating the "latest changes" page, for one because for me its value lies in the human-written comments in it; but if you want to go ahead, don't let me stop you. Concerning questions about how to change the code of the nLab, you'll have to get in contact with Jacques. Much as I would like to, I don't see myself finding the time to look into such activity.

But it would generally be good, for other purposes too, if somebody actively involved in the nLab were knowledgeable enough to be able to play around with the code.

Posted by: Urs Schreiber on January 14, 2009 3:59 PM | Permalink | Reply to this

Re: logging latest changes

Yeah, I know. And I was just being a devil’s advocate :)

But it DID elicit a very good idea from Mike.

The idea is to have a second "Comment" box appear on the same page as the edit box when you're editing a page. That way, you can just jot down a little note as you submit the changes. This note would populate the "latest changes" page (or something like it).

I submitted a feature request on the Ruby page. I’ll dig around to see how much effort it would be for me to try to do it myself.

Posted by: Eric on January 14, 2009 4:54 PM | Permalink | Reply to this

Authorship

One of the aims of the nLab is to provide a space for collaborative work to take place. This raises the question of authorship. It would probably be a good idea to get something in place on this before anyone attempts anything serious rather than after.

Here's a hypothetical. Someone starts a page on some topic with the intention of doing a little original research on said topic. Various people also contribute, to a greater or lesser degree. At some point, it is felt that something significant has been done and there is something worth publishing there. Assuming that we are still working with the current publishing model, what would people feel would be an acceptable procedure at this juncture? The point of the assumption is that the current model only permits three levels of authorship on a paper:

  1. Author
  2. Mentioned in the acknowledgements
  3. Not mentioned at all

On the other hand, the revision system of a wiki allows for much finer levels of distinction. So we would need a translation from the wiki levels of authorship to those of a paper.

Now, I don't expect this to get sorted out straightaway, and I expect that whatever consensus is reached at the beginning will quickly be seen to be a complete travesty once someone actually tries to implement it. So what I would actually like to see is something that allows for this.

What I have in mind is for the original author (who, let me remind you, intended there to be original work) to put up some sort of disclaimer/terms-of-use on the page, along the lines of:

“This page (and its sub-pages) have been created for the express purpose of doing original work. It may happen that a snapshot of these pages will be submitted for publication. At that time, an author list will be decided upon. This author list will be decided by [Original author and Certain Others] subject to the guidelines laid down at [Guidance for Publication].”

The “Certain Others” should be a team of respected arbitrators (two names instantly spring to mind) whom everyone trusts to be fair and reasonable. The “Guidelines” obviously should contain a description of what the community regards as being a contribution worthy of authorship or of acknowledgement (for example, adding a full stop at the end of a sentence probably doesn’t warrant an acknowledgement, but sorting out some horrendous notation or checking references probably does).

The discussion I would like to see first is about the overall model. I don't think that this is the place to discuss the actual guidelines: if this model is acceptable, then the obvious thing to do is create a page on the wiki and see what the eventual consensus is.

(My, but it’s hard writing a post in English when your spell-checker is set to Norwegian.)

Posted by: Andrew Stacey on January 14, 2009 4:59 PM | Permalink | Reply to this

Re: Authorship

One caveat, maybe:

I am getting the impression that it is hard to plan something like "our activity on the nLab concerning this and that point will be precisely such" without actually trying to do it first.

Maybe a quicker road to a satisfactory result is:

- Just start doing something. First something which is not overly sensitive. See how it develops; see how it can be steered and how not. Then, after some practical experience has been gained, go back here and see if one can find an informed consensus on how to proceed on similar, then maybe more ambitious, projects in the future.

Posted by: Urs Schreiber on January 14, 2009 5:09 PM | Permalink | Reply to this

Re: Authorship

For what it’s worth, I created this last week:

Causal Graphs

Obviously, I agree with you :)

Posted by: Eric on January 14, 2009 5:50 PM | Permalink | Reply to this

Re: Authorship

Causal graph:

at a minimum all edges are directed with no cycles

easy special case: leaves are labelled in or out
and edges ‘flow’ from in to out

something more subtle desired?

Posted by: jim stasheff on January 15, 2009 1:24 AM | Permalink | Reply to this

Re: Authorship

I think Eric wants a poset (→ nLab) of sorts.

It seems that a good formalization of "smooth Lorentzian space(time)" is something like: a poset internal (→ nLab) to a category of measure spaces.

There are some immediate possibilities for graph versions of this statement that come to mind. For one, a discrete "Lorentzian spacetime" should be a poset such that all causal subsets are finite sets.

A causal subset in a poset X is what John in his latest entry calls an "interval": namely, for two objects x, y in the poset, the under-over category (→ nLab)

x \downarrow X \downarrow y

of all objects in the future of x and in the past of y.
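For a finite toy poset, the interval x ↓ X ↓ y is easy to compute directly; a minimal sketch (the divisibility order here is just an illustrative example, not a causal set):

```python
# Sketch of the interval x \downarrow X \downarrow y in a finite poset:
# all z with x ≤ z and z ≤ y.
def interval(leq, elements, x, y):
    """All z in the poset with x ≤ z ≤ y."""
    return {z for z in elements if leq(x, z) and leq(z, y)}

# Toy example: the divisibility order on {1, ..., 12}.
def divides(a, b):
    return b % a == 0

print(sorted(interval(divides, range(1, 13), 1, 12)))  # → [1, 2, 3, 4, 6, 12]
```

In this toy order every interval is finite by construction; the condition on a discrete "Lorentzian spacetime" above is precisely that such intervals stay finite even when the ambient poset is infinite.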

Posted by: Urs Schreiber on January 15, 2009 9:34 AM | Permalink | Reply to this

Re: Authorship

Yes yes! This is exactly the kind of thing I want to try to do and write up at the nLab with help from anyone and everyone here.

The discrete case is easier to work out and, I believe from my experience, can lead to insights into the continuum versions, which do not seem to be completely understood yet.

I think switching terminology from graph to poset makes sense. I also like Tim Porter’s terms causet and pospace.

The idea is that from a poset you obtain higher-dimensional elements from differential graded algebras on that poset, i.e. quotients of the universal differential envelope of the poset.

Posted by: Eric on January 15, 2009 4:06 PM | Permalink | Reply to this

causet and pospace

I also like Tim Porter’s terms causet and pospace.

Thanks for this link to Tim Porter’s article! I hadn’t been aware of that. Will read it. You/we/somebody should create an nLab entry on causets and start adding some stuff there. When I find the time I’ll join in.

Posted by: Urs Schreiber on January 15, 2009 4:21 PM | Permalink | Reply to this

Re: causet and pospace

Yeah. I like Tim’s paper a lot. He showed some interest in working together on something, which is one of the reasons I created that page. I might move it to the main nLab grid to encourage collaboration and to make it clear it isn’t mine.

Posted by: Eric on January 15, 2009 5:35 PM | Permalink | Reply to this

Page Direction

Here’s another question, slightly related to the “authorship” one I just posted.

How much is someone allowed to set the direction for a page? I don’t mean “left-to-right” before anyone gives a spurious reply. For example, if someone is intending on using the nLab for collaborative research then they may wish to set a goal or slogan or something for the page. This sort of thing should be laid out in a “code of conduct”, I guess, so that both the original author and any contributors know what is allowed.

Posted by: Andrew Stacey on January 14, 2009 5:05 PM | Permalink | Reply to this

category: reference

I like the new category:reference. But I wonder whether we shouldn’t establish some naming convention for such pages, perhaps akin to the “alpha” naming convention in BibTeX. In particular, it would be nice to be able to keep down the length of the names of such pages. I am shuddering at the thought of having to type (or even cut-and-paste) the page Brown – Abstract Homotopy Theory and Generalized Sheaf Cohomology if I ever want to cite it. Linking to, say, “Bro73” or even “Brown:AHTGSC” would be much easier.

Posted by: Mike Shulman on January 14, 2009 6:22 PM | Permalink | Reply to this

Re: category: reference

But I wonder whether we shouldn’t establish some naming convention for such pages, perhaps akin to the “alpha” naming convention in BibTeX.

Good point. Yes, I agree. So which convention should we use?

What’s the alpha-convention?

Posted by: Urs Schreiber on January 14, 2009 6:48 PM | Permalink | Reply to this

Re: category: reference

A compromise might be to have a

category: redirect

on a page with the abbreviated name that points to the full name.

For example, [[Bro73]] being a page that contains:

See [[Brown – Abstract Homotopy Theory and Generalized Sheaf Cohomology]]

category: redirect

Posted by: Eric on January 14, 2009 7:44 PM | Permalink | Reply to this

Re: category: reference

Do we need reference pages for every reference? Maybe we do; it could be useful, if we get the names sorted out. But I initially populated it with the Elephant and Categories Work (books so well known that they have nicknames!) because I actually wanted to write something about them, not because I thought that the bibliographic data needed to be written down on a page. (In fact, the bibliographic data was pretty sparse; I suppose that I expected people to search the Internet if they really wanted more.)

Posted by: Toby Bartels on January 14, 2009 7:58 PM | Permalink | Reply to this

Re: category: reference

I created that entry for Brown’s article because it is referred to several times in several entries. I was thinking that the next time I want to refer to it, I wouldn’t have to type the full bibliographical details but could just give the link to the article’s own entry (which, on top of the bib information, now has a summary of the article).

Moved it to BrownAHT now.

Posted by: Urs Schreiber on January 14, 2009 8:09 PM | Permalink | Reply to this

Re: category: reference

I don’t think we need a page for every reference, but I can see that it might be useful to have pages for those that are referred to fairly frequently, in addition to those about which we have specific things to say.

The “alpha” convention in BibTeX is what you get when you say \bibliographystyle{alpha}; it produces things like [Bro73]. I would prefer something a little more memorable; BrownAHT is reasonable.

One possible convention (which is more or less what I use when giving tags to my own BibTeX entries) is the following. If the reference has one author and a title that can be summarized or abbreviated in one word, call the reference page “Lastname:Title”. So we could have Johnstone:Elephant. If there are multiple authors, use instead their last initials; so we could have CG:Degeneracy I. (For just two authors, we could be flexible and allow “Cheng-Gurski:Degeneracy I.”) And if the title doesn’t seem amenable to comprehensible abbreviation, just use the first letters; so we could have Brown:AHTGSC. There is still some discretion involved on the part of whoever makes the reference page, but we would end up with names that are fairly short and fairly memorable.
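For what it’s worth, the scheme is mechanical enough to sketch in a few lines of code. A hypothetical Python illustration (the function and its rules are just my reading of the convention above, not anything implemented on the nLab):

```python
def reference_page_name(authors, title_word=None, title=""):
    """Name a reference page: "Lastname:Title" for one author,
    last initials for several authors, and the first letters of the
    capitalized title words when no one-word abbreviation is given."""
    if title_word is None:
        # No comprehensible abbreviation supplied: use first letters.
        title_word = "".join(w[0] for w in title.split() if w[0].isupper())
    prefix = authors[0] if len(authors) == 1 else "".join(a[0] for a in authors)
    return prefix + ":" + title_word

print(reference_page_name(["Johnstone"], "Elephant"))            # → Johnstone:Elephant
print(reference_page_name(["Cheng", "Gurski"], "Degeneracy I"))  # → CG:Degeneracy I
print(reference_page_name(
    ["Brown"],
    title="Abstract Homotopy Theory and Generalized Sheaf Cohomology"))  # → Brown:AHTGSC
```

Of course the remaining discretion (which title word to pick, whether to spell out two authors) still rests with whoever makes the page.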

Posted by: Mike Shulman on January 14, 2009 9:42 PM | Permalink | Reply to this

Re: category: reference

and as much as possible let’s have references in bibtex format somewhere

perhaps a [[Bibtex]] page for ALL those we contribute in one place

Posted by: jim stasheff on January 15, 2009 1:28 AM | Permalink | Reply to this

Re: category: reference

Regarding a citation convention, why not use the MRNumber, at least where that is available? And the arXiv number where that is available (with MR beating arXiv if both are available)? The only objection to that that I can think of is that neither is particularly memorable, but to be honest I don’t think that any convention is going to eliminate the need to look up the exact reference each time. By my estimation, [Bro73] could refer to about 60 different papers.

I’ve yet to contribute to the nLab so I don’t know how easy it is to implement a new type of reference. Something like the [[word]] code but maybe [[mr:mrnumber]] or [[arxiv:arxivref]] automatically puts in all the rest of the bumph. That also helps Jim’s request since MathSciNet can export to BibTeX. The Front used to do this for the arXiv but I don’t think that it does anymore.
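Something like that could be implemented as a simple text filter over the page source. A hypothetical Python sketch (the `[[mr:…]]`/`[[arxiv:…]]` syntax is the one proposed above, and the targets are the usual MathSciNet and arXiv abstract addresses; nothing like this exists in the nLab software as far as I know):

```python
import re

def expand_reference_links(text):
    """Replace [[mr:NUMBER]] and [[arxiv:ID]] markup with plain links
    to MathSciNet and the arXiv abstract page, respectively."""
    text = re.sub(r"\[\[mr:(MR\d+)\]\]",
                  r"http://www.ams.org/mathscinet-getitem?mr=\1", text)
    text = re.sub(r"\[\[arxiv:([\w./-]+)\]\]",
                  r"http://arxiv.org/abs/\1", text)
    return text

print(expand_reference_links("See [[mr:MR1953060]] and [[arxiv:math/0608420]]."))
```

A fuller implementation would presumably also fetch and cache the bibliographic data (BibTeX export from MathSciNet, as suggested), but the link expansion itself is this cheap.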

Posted by: Andrew Stacey on January 15, 2009 8:21 AM | Permalink | Reply to this

Re: category: reference

Well, I can’t speak for anyone else’s memory. But I will remember “Elephant” or “Johnstone:Elephant” though I have no chance of remembering “MR1953060.”

Posted by: Mike Shulman on January 15, 2009 8:26 PM | Permalink | Reply to this

Re: category: reference

Actually, that makes my point. You find “Johnstone:Elephant” or “Elephant” easy to remember. But that is because you are already very familiar with the work. Someone who isn’t would have to look up how to spell “Johnstone” - is it “Johnstone”, “Jonstone”, or “Johnston”? Some others might even have to look up “Elephant” (in Norwegian it is “Elefant”). In addition, “Elephant” on MathSciNet picks up 46 references (slightly less than Bro73 would match, I accept).

A naming convention isn’t actually supposed to make life easier for you. It’s to ensure that there are no duplications and to ensure that there is a standard process by which anyone - even and particularly a machine - could find an unknown reference.

For my papers, I have a standard citation convention for labelling references. Some references are easy to remember because I use them all the time. However, I frequently forget other references - and no naming convention would help me remember them - so I have a quick-search program which greps through my BibTeX files and finds matches on author or title or something else. Likewise, it doesn’t take long to flip over to MathSciNet and search for a paper and get the MR number. In fact, if I were creating a new reference for a paper on the nLab then I would do that anyway whatever the convention because I would want to make sure that I had the name and title right and because I’d (probably) want to put a link on the reference page to the MathSciNet entry.

The additional advantage of MR and arXiv is that it makes it so much easier to cross-reference to the original source and to other blogs etc.

Posted by: Andrew Stacey on January 16, 2009 8:31 AM | Permalink | Reply to this

Re: category: reference

But we are not planning to have an nLab page for every math paper in the world. We don’t mean to duplicate MathSciNet or the arxiv. There are only a few papers and books that we cite frequently and/or have contentful things to say about. The page “Elephant” is not a bibliographic reference, and in fact it doesn’t contain the usual sorts of bibliographic data (although it might well be helpful for it to link to MathSciNet). Neither is it a review or a summary. It’s a page written about a book and its place in mathematical culture, to give some context for references to that book elsewhere in the nLab.

Therefore, I would argue that actually, the purpose of this naming convention is to make things easier for us. I see no reason a machine would be looking for reference pages at the nLab; that’s what MathSciNet and the arXiv are for. The nLab is not a bibliographic reference; it’s more like a blog (though it isn’t a blog). If you blog about a new math paper, do you use the MR# as the title of the blog entry so that a machine could find it? No, you just make a link, and then the arXiv has a list of backlinks.

Posted by: Mike Shulman on January 16, 2009 8:39 PM | Permalink | Reply to this

Re: category: reference

The system needs to be robust. What happens if a review page is written for a paper after it has been cited in several articles on the nLab? What happens when someone writes a page on the nLab who doesn’t know all the “jargon terms” and who therefore doesn’t recognise which references have reviews and which don’t?

One possible system would be to use MR number or arXiv number by preference on the pages of the nLab and then have something internal which cross references that to reviews on the nLab. If a review exists, it links to the review. If not, it links to the MathSciNet or arXiv page. That way, the system automatically updates when new pages are added or not.
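The fallback behaviour proposed here is simple to state in code. A hypothetical Python sketch (the cross-reference table and the page paths are invented for illustration):

```python
def reference_link(ref_id, review_pages):
    """Link to the local review page if one exists for this MR/arXiv id;
    otherwise fall back to the external database."""
    if ref_id in review_pages:
        return review_pages[ref_id]  # internal review page on the nLab
    if ref_id.startswith("MR"):
        return "http://www.ams.org/mathscinet-getitem?mr=" + ref_id
    return "http://arxiv.org/abs/" + ref_id

# Hypothetical cross-reference table, maintained on the nLab itself:
reviews = {"MR1953060": "/nlab/show/Elephant"}

print(reference_link("MR1953060", reviews))  # internal review exists
print(reference_link("MR0341469", reviews))  # falls back to MathSciNet
```

The point is that the table updates independently of the citing pages, so links start pointing at a review automatically once one is written.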

Posted by: Andrew Stacey on January 19, 2009 2:44 PM | Permalink | Reply to this

Re: category: reference

It should be robust, but it doesn’t need to be perfect. Your suggestion would be fine with me, if someone cared to code it up, as long as we could also link directly to the review by a more memorable name. But with the software we have, I don’t think there’s any way to automate creating links to newly created category:reference pages, no matter how we name them. As for your second question, I think that’s what the page category:reference is for.

Posted by: Mike Shulman on January 19, 2009 7:51 PM | Permalink | Reply to this

Order of composition

Given morphisms f: X → Y and g: Y → Z, how do we name the composite X → Z? Most people on the Lab seem to be calling it g f, although I know that some people prefer f g. Should we pick one?

Actually, I think that we should not pick one. I think that we should write either g ∘ f or f; g. These notations are unambiguous (for those who know them) and are explained on [[composition]]. Of course, we can abbreviate if we've got a lot of composites on a page, but then I think that we should say so specially.
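Both notations denote the same composite, just written in opposite orders; a throwaway Python illustration (the names `circ` and `then` are mine):

```python
def circ(g, f):
    """g ∘ f: apply f first, then g (the classical order)."""
    return lambda x: g(f(x))

def then(f, g):
    """f; g: apply f first, then g (the diagrammatic order)."""
    return lambda x: g(f(x))

f = lambda x: x + 1   # f : X → Y
g = lambda x: 2 * x   # g : Y → Z

print(circ(g, f)(3), then(f, g)(3))  # → 8 8
```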

Posted by: Toby Bartels on January 15, 2009 5:58 PM | Permalink | Reply to this

Re: Order of composition

Actually, I think that we should not pick one. I think that we should write […]

Yes, I agree. I am in favor of following your suggestion.

Posted by: Urs Schreiber on January 15, 2009 6:31 PM | Permalink | Reply to this

Re: Order of composition and Webs

When above I wrote:

Yes, I agree. I am in favor of following your suggestion.

before the controversy ensued I was thinking mainly:

yes, let’s adopt the notation f; g Toby suggested when we want to indicate that we are writing composition in the order opposite to the usual one, but natural for diagram compositions.

As in, if in the conventionally labeled diagram

\array{ x &&\stackrel{g f}{\to}&& y \\ &{}_f\searrow&& \nearrow_{g} \\ && z }

or equivalently

\array{ x &&\stackrel{g \circ f}{\to}&& y \\ &{}_f\searrow&& \nearrow_{g} \\ && z }

I feel that the notation convention for the labels gets in the way of the nature of the diagram, then I can enforce diagrammatically natural notation by saying, hey, I’ll switch to that notation with the semicolon for the present purpose and write (still equivalently)

\array{ x &&\stackrel{f ; g}{\to}&& y \\ &{}_f\searrow&& \nearrow_{g} \\ && z } \,.

From the discussion I gather that there is pretty much consensus about this.

The other problem that arose in the discussion is a general one of a wiki which we should talk openly about:

In principle, everything anyone writes into the wiki can be subject to editing by others. This has its advantages (which is why we run the wiki in the first place) and its disadvantages (when the first author does not find the edits of his original material to be an improvement).

I believe the solution is: let’s notify each other as much as possible about our work on the nLab at the “latest changes” place (or even by private email, if urgent/important). If I make a change to something somebody else wrote and notify him or her shortly afterwards, the edits can be checked and, if there is disagreement, undone, and the issue taken to a discussion section if necessary.

With a project as complex as the nLab we are bound to run into disagreement about various details from time to time. But I’m sure we can deal with that.

I should mention that one option for those who from time to time want to contribute material which should not be editable by others (for whatever reason) is to create a separate “web” on the server.

Currently there exist four different “webs”, which you can see here:

nLab web list.

Besides the nLab itself, there are currently private webs for John Baez, for Eric Forgy and for myself.

John hasn’t done much with his private web so far, so I don’t know what he intends it to be. I myself am collecting material on projects that I am working on at my private web, stuff which I feel should not be merged into the nLab before it has been further developed/polished/scrutinized but which I want to share nevertheless.

Eric Forgy had started to develop some tentative ideas he wants to work on, first in an nLab page, and then had asked for a private web to pursue that activity there.

I think that we could create such private webs for others who are and have been actively and consistently involved in the development of the nLab (such as everybody participating in this discussion here). If there is any interest, please contact me by private email. These private webs work, technically, precisely as the nLab itself; the only difference is that you have the choice between making a web entirely public and editable by everybody (as the nLab itself is), making it public but editable only with a password (that’s the way I have set up my web currently), or making it completely password protected, so that viewing and editing are restricted to a chosen circle of people.

I can imagine good use for all three of these options. In general I think it is good not to have too much material separated into different webs, as one of the strengths of the nLab will be its connectedness via links and backlinks as a single web, but for some purposes separation may be good.

Posted by: Urs Schreiber on January 16, 2009 1:58 PM | Permalink | Reply to this

Re: Order of composition and Webs

I know I am blowing against the wind, but the notation for diagrams could be completely consistent with every other human being (including mathematicians) if diagrams were drawn with arrows pointing left:

y ←^g x

z ←^f y

with composition diagrams

\begin{aligned} z & {} &\stackrel{f\circ g}{\leftarrow} & {} & x \\ {} & f\nwarrow & {} & \swarrow g & {} \\ {} & {} & y & {} & {} \end{aligned}

It’s never too late to change! Plus, it would require fewer people to make adjustments.

Posted by: Eric on January 16, 2009 3:07 PM | Permalink | Reply to this

Re: Order of composition and Webs

Or you could argue: what ought to be changed is the notation for function evaluation f(x), which should really have been written (x)f.

(I remember, by the way, all these arguments have been exchanged already elsewhere on the web. I just don’t remember where…)

Maybe at this point it might help to notice that greater minds have decided in favor of convention in these matters before.

Quoting from Wikipedia

Leonardo da Vinci is famous for having written most of his personal notes in mirror, only using standard writing if he intended his texts to be read by others. #

;-)

That said let me suggest we close this discussion and concentrate our energy on more important issues.

Posted by: Urs Schreiber on January 16, 2009 3:20 PM | Permalink | Reply to this

Re: Order of composition and Webs

Snap!

(And stop replying to my comments before I’ve finished writing them. Next, you’ll be reporting on conferences that haven’t happened yet!)

<mutter, grumble>

Posted by: Tim Silverman on January 16, 2009 4:09 PM | Permalink | Reply to this

Re: Order of composition and Webs

Or you could argue: what ought to be changed is the notation for function evaluation f(x), which should really have been written (x)f.

This only really matters if you approach category theory from the example of sets and functions. Still, it would be nice to have everything consistent. I use x^f a lot in personal notes; it's an extension of notation for a right action on a group, which is itself an extension of notation for conjugation. For an example of this in action (applied, as it happens, to category theory, where the f's are functors), see my Answer to Week 2 here (you'll probably want to read John's Homework question first, for context).

Incidentally, I also think function application needs a symbol as much as composition does in the introductory pedagogy; I have seen f≀x used, and I'd like to try it in an elementary algebra class sometime.

Posted by: Toby Bartels on January 18, 2009 3:26 AM | Permalink | Reply to this

Re: Order of composition

I prefer not being forced to write anything at all between the factors, provided that the context already forces which order is meant. For example, if someone refers to an adjunction F ⊣ U and then refers to the monad U F, I’ll know what is meant (unless, of course, the adjunction is given as ambidextrous!).

Posted by: Todd Trimble on January 15, 2009 7:34 PM | Permalink | Reply to this

Re: Order of composition

I prefer not being forced to write anything at all between the factors, provided that the context already forces which order is meant. For example, if someone refers to an adjunction F ⊣ U and then refers to the monad U F, I’ll know what is meant (unless, of course, the adjunction is given as ambidextrous!)

I agree that if the context is clear, then you shouldn't feel forced to do anything. (And regardless, you can't actually be forced, of course!) But I or Urs may come by and add in a symbol.

Actually, this example is a good one for showing why such a symbol can be useful. If someone is not familiar with how adjunctions give rise to monads, then they would not know whether U ∘ F or U; F is meant (since both exist). Certainly one would want to include an explicit symbol if that (how adjunctions give rise to monads) were the topic!

Posted by: Toby Bartels on January 15, 2009 7:46 PM | Permalink | Reply to this

Re: Order of composition

Yes, but if one writes U: D → C and F: C → D and one refers to the unit 1_C → U F, then it’s again unambiguous.

I really don’t much like to use either ∘ or ; except in certain circumstances (frequently I find them ugly, or that they clutter notation), and with all due respect I’d prefer not to have them edited into the stuff I write. So I’ll make you a deal: if I write in such a way that the context makes clear which order is meant, then you (or whoever the editor is) curb the temptation to stick the darn things in – deal?

Posted by: Todd Trimble on January 15, 2009 8:09 PM | Permalink | Reply to this

Re: Order of composition

So, someone correct me if I am wrong, but my experience is that apart from some category theorists and some computer scientists, the convention that g f = g ∘ f is fairly universal among mathematicians. Since composition is such a fundamental operation, I think that using more than one convention, or a convention which is unfamiliar to one’s reader, is a significant strike against the reader’s sympathy and comprehension. And given that we are trying to change the world in other ways already, I think we should take every opportunity to make the non-category-theorist reader comfortable. So I would prefer we adopt the convention that g f = g ∘ f, with the option to write f; g if we prefer (with explanation, or perhaps a link to [[order of composition]]).

Posted by: Mike Shulman on January 15, 2009 8:31 PM | Permalink | Reply to this

Re: Order of composition

I agree and would happily go along with that.

Posted by: Todd Trimble on January 15, 2009 9:27 PM | Permalink | Reply to this

Re: Order of composition

On the contrary, I think for people unfamiliar with categories, writing g f seems very strange at first.

(I agree that g ∘ f has a well established meaning outside of category theory, and I don't propose to mess with that. But g f for composition of functions is not nearly so well established.)

Especially if one is introduced to categories through diagrams, f g is the natural thing to do. But as it is ambiguous, I wouldn't use it; I'd write f; g instead.

(It is true that f; g is not well known; even with that order, f g is probably more common. But if I started writing f g, then I think that people would get confused, even in situations where there was only one possible interpretation.)

Note: All notation above lives in this context: X →^f Y →^g Z

Posted by: Toby Bartels on January 15, 2009 9:36 PM | Permalink | Reply to this

Re: Order of composition

But g f for composition of functions is not nearly so well established.

My feeling is that g f as meaning g ∘ f is far better established than the other way, g; f. One would have to compile some good statistics to be sure, but pick up a book in which this notation is used at all: if the author doesn’t bother to explain which convention s/he’s using, then chances are (overwhelmingly, I think) that g ∘ f is meant. Authors who choose to use the other meaning seem to realize they’re in the minority, and therefore explicitly stipulate the intended meaning.

If anyone can point out a published counterexample to what I just asserted, I’d be very interested.

Posted by: Todd Trimble on January 15, 2009 10:40 PM | Permalink | Reply to this

Re: Order of composition

I agree with all your facts, Todd. But I think that it's also true that, overwhelmingly, authors use g ∘ f rather than g f for composition of functions, and probably even for composition in a category. So if following published precedent is what matters, then we should use g ∘ f. And indeed, I'm not proposing changing your g f to f; g (maybe occasionally if that seems to make things clearer, but that would be unusual); I just want to change them to g ∘ f, the best established precedent.

But more than that, our job at the n-Category Lab is not to follow published precedent, but to explain things in a way that is easy to learn and follows the Tao of n-categories. For people already used to g f in category theory, that will be easiest; for people moving to category theory from set theory, g ∘ f will be easiest; and either of these would prefer that order. But for people used to doing category theory through diagrams or moving to category theory from a diagrammatic perspective, f g or f; g will be easiest (at least if they read from left to right). And as I see n-category theory as an algebraic approach to diagrammatic or geometric reasoning, I certainly think that such an order fits the Tao better … although that is more a matter of personal opinion.

Posted by: Toby Bartels on January 15, 2009 11:09 PM | Permalink | Reply to this

Re: Order of composition

It’s possible that we won’t come to a consensus on what we “should” use. In which case, I’d much prefer it if we simply “live and let live”: tolerate some differences of conventions among contributors, provided that the contributors make clear choices. For example, we tolerate differences in spelling: you’ve adopted British spellings in your own writings (why, I’d be curious to know – you were born and bred in the US, right? but anyway), I use American, and we can live with that.

I am actually quite mindful of your preferences on the topic of compositional order. Indeed, I understand them well, and at times in my life I’ve adopted that preference as well, but at some point I decided to stick with the order I use today, because I choose to fight different battles. Let me say that I do make a conscious effort to write in a way so that g f will not be misunderstood. And I hope others can live with that. Let me also say I will be somewhat annoyed if, say, I wanted to refer to the associativity law for the monad attached to an adjunction F ⊣ U, and wrote

U F U F U F \stackrel{U \varepsilon F U F}{\to} U F U F

for one of the arrows, someone invariably comes through and changes it to

U \circ F \circ U \circ F \circ U \circ F \stackrel{U \circ \varepsilon \circ F \circ U \circ F}{\to} U \circ F \circ U \circ F

on ideological or normative grounds. It’s ungainly, it’s ugly to me – I’d much rather just say in the preamble that U F means… if it comes to that.

So, if we can’t agree, I suggest we respect each others’ differences, trust each other to behave sensibly, and not worry about it much. I think there are rather bigger fish to fry!

Posted by: Todd Trimble on January 16, 2009 12:23 AM | Permalink | Reply to this

Re: Order of composition

I’d much rather just say in the preamble that UF means… if it comes to that.

I'm happy with that.

So, if we can’t agree, I suggest we respect each others’ differences, trust each other to behave sensibly, and not worry about it much.

I can't argue with that.

Posted by: Toby Bartels on January 18, 2009 2:16 AM | Permalink | Reply to this

Re: Order of composition

For people already used to g f in category theory, that will be easiest; for people moving to category theory from set theory, g ∘ f will be easiest; and either of these would prefer that order. But for people used to doing category theory through diagrams or moving to category theory from a diagrammatic perspective, f g or f; g will be easiest (at least if they read from left to right).

If the notations g f, g ∘ f, and f; g were completely unrelated to each other, I would agree. However, I think that for people who are used to working with functions of one sort or another (and that would be almost all mathematicians, I believe), if they are used to writing g ∘ f, then the notation g f will be much more familiar and comprehensible, because it goes in the same order that they are used to. They just have to learn “the symbol ∘ is omitted, just like we omit × when multiplying numbers after we get past 6th grade.” People who work with groups are already familiar with this; the multiplication symbol is usually omitted in group theory, but in groups of automorphisms the multiplication is function composition.

Likewise, many mathematicians are already used to drawing arrows for functions and basic commutative diagrams, even if they don’t know much official category theory, and yet they still write function composition in the order g ∘ f. So I don’t think that the left-to-right reversal between diagrams and composition is any sort of additional conceptual hurdle presented by category theory.

I could wish that people had always written their functions on the right of their arguments, and composed functions in the f; g way, or else that they had always drawn their arrows going from right to left, so that the world would be consistent. But I don’t think there’s anything to be gained by pretending that something is so, that ain’t.

Posted by: Mike Shulman on January 16, 2009 1:17 AM | Permalink | Reply to this

Re: Order of composition

If the notations g f, g ∘ f, and f; g were completely unrelated to each other, I would agree. However, I think that for people who are used to working with functions of one sort or another (and that would be almost all mathematicians, I believe), if they are used to writing g ∘ f, then the notation g f will be much more familiar and comprehensible, because it goes in the same order that they are used to.

As I said, if you come to category theory as a generalisation of familiar categories of structured sets, then this is true. (And the notation g ∘ f, which I find acceptable, makes complete sense.) But if you come to it another way (say as higher-dimensional order theory, or via groupoids from homotopy theory, or from logic, or from theoretical computer science, or as an algebraic structure for diagrams, or fresh out of the blue), then this is not true.

Maybe the problem is that we still think of category theory, and certainly most schools still teach it, as only or primarily a generalisation of categories of structured sets. Surely this is a mistake. Category theory will only become more accessible if we move away from notation that only makes sense from one point of view.

Posted by: Toby Bartels on January 18, 2009 1:52 AM | Permalink | Reply to this

Re: Order of composition

Okay, you’ve caught me. (-: I do, indeed, primarily think of category theory as about structured sets (or, at least, objects that behave like structured sets).

However, using f; g is going to be at least as confusing for the folks who think like I do as g f is for anyone else. And I think that even people who don’t come to category theory from the perspective of structured sets will at least have some acquaintance with the notions of function and composition, and the idea of writing composition of functions g ∘ f. By contrast, I think many people coming from structured sets will not be at all familiar with things whose composition is written f; g.

I’m all in favor of doing things to help bring in new users of category theory from other perspectives. But I want to make sure that we don’t alienate current users of category theory either by gratuitous changes of notation.

Of course, as has been said already, now that we’ve agreed to live and let live, we should probably drop this argument and go do something more productive.

Posted by: Mike Shulman on January 18, 2009 4:24 AM | Permalink | Reply to this

Re: Order of composition

The problem arises when you want to represent the result of applying an arrow to an object (or, e.g., the result of applying a function to an element of a set).

The notation where g f means g ∘ f works well with the object on the right: g f(x) means g(f(x)). But then, using f g to mean f; g ought to imply putting the object on the left: (x)f g. Which is, more or less, what people are doing in diagrams like this:

X →^f Y →^g Z. But are people then going to be consistent and write (X)f g? I doubt it.

It would be (utopianly) nice if people could be explicit about the direction of the arrows:

\overset\leftarrow{gf} vs \overset\rightarrow{fg}. But that just adds extra work, which people aren’t going to want to do, and it might conflict with other uses of this notation.

So in the absence of this level of explicitness written into the expression itself, it seems best if authors make it clear from the context which convention they are adopting. Maybe, as more entries are written, some meta-convention about which convention to use where will emerge. :-)
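For readers who think in programming terms, the two notations coexist harmlessly in code; here is a minimal sketch (the names `compose` and `then` are mine, for illustration only, not notation from this thread):

```python
# "Applicative order": (g . f)(x) = g(f(x)) -- the argument sits on the right.
def compose(g, f):
    return lambda x: g(f(x))

# "Diagrammatic order": (f ; g)(x) -- read left to right, same composite.
def then(f, g):
    return lambda x: g(f(x))

f = lambda x: x + 1   # f : X -> Y
g = lambda y: 2 * y   # g : Y -> Z

# Both conventions name the same function, so the values agree:
assert compose(g, f)(3) == 8   # g(f(3)) = 2 * (3 + 1)
assert then(f, g)(3) == 8      # (f ; g)(3), identical composite
```

The point of the sketch: the disagreement is purely about which name we read first, not about which function is computed.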

Posted by: Tim Silverman on January 16, 2009 3:55 PM | Permalink | Reply to this

Re: Order of composition

A long time ago ?Hilton and ?? tried - it never caught on.

Posted by: jim stasheff on January 16, 2009 4:15 PM | Permalink | Reply to this

Re: Order of composition

Hilton and Wylie, Introduction to Algebraic Topology, Cambridge U.P., 1960, I believe.

Posted by: Tim Porter on January 18, 2009 6:45 AM | Permalink | Reply to this

Re: Order of composition

The problem arises when you want to represent the result of applying an arrow to an object

What does that mean?

But are people then going to be consistent and write (X)fg?

What would that mean?

My questions are a bit rhetorical; here's the point: In general, morphisms are not applied to anything, so there is no compatibility problem (with either approach). It is only the introduction to category theory through examples of categories of structured sets that leads people to expect to apply morphisms, and that's an expectation that people actually have to work to unlearn.

To be sure, you can restore the idea of applying morphisms somewhat, even allowing composition of morphisms to match composition of application, by working with generalised elements (or, more sophisticatedly, through the Yoneda lemma). However, both of these can in fact be done either way – necessarily, since category theory itself is perfectly symmetric.

OK, this is my last post on the subject … at least until next week. (^_^)

Posted by: Toby Bartels on January 18, 2009 4:03 AM | Permalink | Reply to this

Re: Order of composition

Even when I’m dealing with morphisms that aren’t functions, I often like to think of a morphism as taking its source and in some sense “sending” it to its target. This is true even when the objects are points of a manifold and the morphisms are paths. It’s just a psychological thing, but one of the nice things about mathematics is that it encourages you to take a mode of thought derived from one field and apply it to another, even though it wouldn’t have seemed natural at first. So I’d be reluctant to abandon this way of thinking, even in situations where it seems inappropriate. In fact, I’d be reluctant to abandon any of these ways of thinking—hence my tolerance for multiple notational conventions.

You mentioned left-to-right reading order somewhere else in the thread. It would also be interesting to compare the reactions of speakers of a language with consistent adjunct-head word ordering, like Japanese.

(Hmm, doesn’t seem to be anything succinct in Wikipedia on this. I mean a language where:

1) In a noun phrase, adjuncts of the head noun, such as articles (‘a’, ‘the’), demonstratives (‘this’, ‘that’, etc), cardinals (‘one’, ‘two’, ‘three’, etc), quantifiers (‘more’, ‘some’, ‘any’, ‘both’, etc), genitives (‘John’s’, ‘of the table’), adjectives (‘green’, ‘dangerous’) and even relative clauses (‘who liked eating beans’) come before the head noun.

2) There are postpositions instead of prepositions (‘Rome to’ rather than ‘to Rome’; ‘breakfast before’ rather than ‘before breakfast’).

3) In verb phrases (if the language has them), direct objects come before their verbs, and main verbs come before auxiliary verbs.

4) Other similar stuff, where relevant.)

Writing (x)f rather than f(x) seems somewhat related.

Posted by: Tim Silverman on January 18, 2009 3:14 PM | Permalink | Reply to this

Re: Order of composition

(Sorry for distracting people with trivial, off-topic comments. I’m actually working on more substantive things (honest!), but they’re not ready yet.)

Posted by: Tim Silverman on January 18, 2009 3:55 PM | Permalink | Reply to this

Re: Order of composition

I have the solution!

Why don’t we convince category theorists to write morphisms like

g : y \leftarrow x

and

f : z \leftarrow y

so that

f\circ g : z \leftarrow x

and EVERYONE would be happy :)

The whole issue comes about because someone unwisely chose to use \to long ago :)

Posted by: Eric on January 16, 2009 3:38 AM | Permalink | Reply to this

Re: Order of composition

Your smileys indicate that you're kidding, but I've considered this. However, left to right (and top to bottom) reinforces the basic intuition of arrows as going from the source to the target.

At least it does if one reads left to right. I wonder what mathematicians who read right to left think. (Maybe we can get David Ben-Zvi to hop over and say something.)

Posted by: Toby Bartels on January 17, 2009 9:49 PM | Permalink | Reply to this

Trimble’s notion of weak n-categories

I am more than glad that, triggered by our discussion at nLab: interval object, Todd has now started the entry nLab: Trimble’s notion of weak n-categories, where he presents the definition the way he originally gave it, which is something I can well relate to.

Here is one question, which I thought I’d ask here in the open instead of just by private email:

All composition operations in a Trimble weak n-category are maximally weak, but interchange is still strictly satisfied. I remember that last time I mentioned the strict fundamental path \omega-groupoid of a space here, people jumped at me, pointing out that its strict interchange suggests that it misses nontrivial Whitehead products, topologically.

Now, how is it here? Is there a problem? Will Trimble weak n-categories be “not weak enough”? How does one tell?

Another question:

I have only looked at very little of the literature on Trimble weak n-categories, but in

Eugenia Cheng, Comparing operadic theories of n-category (arXiv)

she mentions on p. 19 the issue of what it means, more generally, to say that the operad which describes the A_\infty-co-category structure on the interval object is contractible.

When I wrote the stuff that is now at interval object, I supposed we were working in a homotopical category, and took “contractible” to mean “weakly equivalent to the terminal object”.

Now, I see that Eugenia Cheng mentions on p. 19 that this is the definition Peter May gave in his talk Operadic categories, A_\infty-categories and n-categories (which I haven’t really read yet, to be frank).

Okay, but this raises one question: if the homotopical category C_0 comes with its own notion of homotopy, then picking an interval object in it leads to another notion of homotopy – which may be different.

Currently, in my own notes where I apply this stuff, I have found it natural to decree that:

we demand that

- inside C_0 we have found and picked a category of fibrant objects F_0 \subset C_0;

- for every object B in F_0, the hom-object [I,B] with its canonical structure morphisms B \to [I,B] \to B \times B is a path object of B.

This is a compatibility assumption between the interval object I and any other notion of homotopy which may exist on the category whose objects we are computing the Trimblean fundamental \infty-categories of.

In particular, I need not itself be weakly equivalent to the point (the obvious choice I = 1Globe in strict \omega-categories is not, for instance), so that we may genuinely have non-reversible homotopies in the game, which is important for some applications.

So, the interaction of the interval object with any homotopical structure on the category in which it lives may be subtle.

Any ideas on that? I’d be grateful for comments.

Posted by: Urs Schreiber on January 15, 2009 11:58 PM | Permalink | Reply to this

Re: Trimble’s notion of weak n-categories

Will Trimble weak n-categories be “not weak enough”? How does one tell?

Well, I think the only way to tell “for sure” is to prove that they’re equivalent, or inequivalent, to some notion which is accepted to be “weak enough.” A way to get a good hint as to whether they’re weak enough would be to ask whether Trimble n-groupoids model all homotopy n-types.

Evidence seems to suggest that in order to be “weak enough” you have to have either weak interchange or weak units, but one or the other could be strict. Certainly they can’t both be strict, since in that case the Eckmann–Hilton argument goes through strictly and collapses braided things to symmetric ones. See Simpson’s conjecture.
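To make the collapse concrete, here is the usual Eckmann–Hilton computation written out (my sketch, not from the thread): with strict units for both \otimes and \circ and strict interchange, every step below is an equality, forcing commutativity.

```latex
\begin{aligned}
\alpha \otimes \beta
  &= (\alpha \circ 1) \otimes (1 \circ \beta)
   = (\alpha \otimes 1) \circ (1 \otimes \beta)
   = \alpha \circ \beta \\
  &= (1 \otimes \alpha) \circ (\beta \otimes 1)
   = (1 \circ \beta) \otimes (\alpha \circ 1)
   = \beta \otimes \alpha
\end{aligned}
```

The outer equalities use the (strict) units, the middle ones the (strict) interchange law (a \otimes b) \circ (c \otimes d) = (a \circ c) \otimes (b \circ d); if either ingredient is only weak, each “=” becomes an invertible coherence cell instead.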

Now, I see that Eugenia Cheng mentions on p. 19 that this is the definition Peter May gave in his talk Operadic categories, A_\infty-categories and n-categories (which I haven’t really read yet, to be frank).

If you do read it, don’t put a lot of effort into the details. Peter May now admits that a lot of his claims in that paper are rubbish. But the general idea of iterated enrichment, in a setting more general than Trimble’s, is interesting and worth pursuing.

Posted by: Mike Shulman on January 16, 2009 12:37 AM | Permalink | Reply to this

Re: Trimble’s notion of weak n-categories

A way to get a good hint as to whether they’re weak enough would be to ask whether Trimble n-groupoids model all homotopy n-types.

And indeed this was the original motivation. The idea is that they certainly should be sufficient, because the fundamental n-groupoid of a space X in my sense is exactly an algebraic structure whose 0-cells are points, whose 1-cells are paths, …, whose j-cells are continuous maps D^j \to X for j \lt n, and whose n-cells are homotopy classes of maps D^n \to X rel boundary. In other words, the whole point behind the definition I gave was to pin down the algebraic operations that this particular globular set (which was my understanding of what Grothendieck actually meant by ‘fundamental n-groupoid’) enjoys.

But the general idea of iterated enrichment, in a setting more general than Trimble’s, is interesting and worth pursuing.

I quite agree (and weak iterated enrichment is certainly in the spirit of what May was after) – Eugenia Cheng and Nick Gurski have done some interesting work in this direction, greatly generalizing the definition I presented and comparing it to other approaches (notably Batanin’s), as well as using this approach to provide a suitable niche for the algebra of k-tangles in n-space. The original definition I gave may still be of interest, I feel, at least in terms of hoped-for applications to homotopy theory (e.g., classifying n-types).

Posted by: Todd Trimble on January 16, 2009 2:59 AM | Permalink | Reply to this

Re: Trimble’s notion of weak n-categories

In other words, the whole point behind the definition I gave was to pin down the algebraic operations that this particular globular set (which was my understanding of what Grothendieck actually meant by ‘fundamental n-groupoid’) enjoys.

Makes perfect sense. But I take it that you (or anyone else) haven’t actually checked yet that it does work?

Posted by: Mike Shulman on January 16, 2009 4:55 AM | Permalink | Reply to this

Re: Trimble’s notion of weak n-categories

If by ‘works’ you mean that it indeed models n-types, the answer is no – this hasn’t been proven.

Posted by: Todd Trimble on January 16, 2009 12:10 PM | Permalink | Reply to this

Re: Trimble’s notion of weak n-categories

Incidentally, Mike – do I detect some doubt on your part that it does work? Because if you have doubts, it would be good to hear you articulate them; it might save me or others who wish to pursue this a lot of time and effort. Already Urs has raised a conundrum, to which I have no solid answer yet.

Posted by: Todd Trimble on January 16, 2009 3:17 PM | Permalink | Reply to this

Re: Trimble’s notion of weak n-categories

Already Urs has raised a conundrum, to which I have no solid answer yet.

Might it matter that this conundrum – insofar as it is correctly conceived – assumes that we have a smooth structure on a space?

As you will know, Ronnie Brown has (two, I think) definitions of strict fundamental \omega-groupoids for (filtered) topological spaces, where he divides out homotopy (not just thin homotopy) at each level, but relative to the filtering.

This is designed, as far as I understand, to be applied to realizations of nerves, which are naturally filtered, and to nicely recover the \omega-groupoid which gave rise to the nerve.

For instance, Brown’s strict fundamental \omega-groupoid of the standard n-simplex, regarded as a filtered space with the standard filtering, is just the free strict \omega-groupoid on the n-oriental.

But if we forget the filtering, i.e. if we take all stages of the filtration to equal the total space, Brown’s definition gives another notion of strict fundamental \omega-groupoid.

Maybe just something to keep in mind in our discussion here.

Posted by: Urs Schreiber on January 16, 2009 4:24 PM | Permalink | Reply to this

Re: Trimble’s notion of weak n-categories

Someone (I apologise, as I forget who, but probably someone from the Bangor school) had a notion of thin homotopy for continuous paths: the map from the square factors through a tree. The same should be true for higher homotopies I^n \times I \to X, but with the tree replaced by a subspace which is homeomorphic to an n-dimensional complex. Certainly this includes all reparameterisations and retractions of “tendrils” of various dimensions.

Posted by: David Roberts on January 20, 2009 5:07 AM | Permalink | Reply to this

Re: Trimble’s notion of weak n-categories

So if I understand correctly (?), Michael Batanin’s notion of fundamental \omega-groupoid of a space is obtained pretty straightforwardly – conceptually – from the Trimblean definition by replacing the role played by the interval cospan

pt \to I \leftarrow pt \,,

which is the full subcategory of the globe category G on its objects 0 and 1 (the 0-dimensional and the 1-dimensional globe), by all of G, usefully thought of as a multispan which in low degree looks like

\array{ pt &&=&& pt \\ \downarrow &&&& \downarrow \\ I &\rightarrow& G^2 &\leftarrow& I \\ \uparrow && && \uparrow \\ pt &&=&& pt }

drawn here such as to remind you/me/somebody of the pictures drawn here.

Am I guessing right (from the little that I actually had time to read in detail) that, by straightforward generalization of how the interval cospan gives an operad – the co-endomorphism operad of the interval cospan regarded as an object in the monoidal category of spans on the point – the full G gives a Batanin globular operad?

And that, as in the Trimblean definition the hom-spaces Hom(I^{\vee n}, X) for X a topological space inherit the structure of an algebra over the interval’s co-endomorphism operad, so does Hom(G', X) – with G' now thought of as G with the object n represented by the standard topological globe – inherit the structure of an algebra for a Batanin globular operad, and hence gives rise to a Batanin weak fundamental \omega-groupoid?

Posted by: Urs Schreiber on January 21, 2009 8:28 PM | Permalink | Reply to this

Re: Trimble’s notion of weak n-categories

You might want to have a look around page 41 of this paper by Eugenia (where she compares my approach to the fundamental n-groupoid to Batanin’s), if you haven’t already.

To answer your question directly, the globe category itself is probably much too rigid to support any sort of interesting operad along these lines. (Compare with the fact that you get nothing interesting for the co-endomorphism operad of the interval cospan in simplicial sets: simplicial sets are too “rigid” for this purpose.)

As explained by Eugenia, the underlying collection of the globular operad K that Batanin uses, the collection being a map

K \to T(1)

to the underlying globular set of the free \omega-category on the terminal globular set (elements of T(1) being “pasting diagrams”), has, for its fiber over a pasting diagram \alpha of dimension j, the set of continuous maps

D^j \to |\alpha|

which respect boundaries, where |\alpha| denotes the geometric realization of \alpha (as a globular set).

Posted by: Todd Trimble on January 22, 2009 1:34 PM | Permalink | Reply to this

Re: Trimble’s notion of weak n-categories

the fundamental n-groupoid of a space X in my sense is exactly an algebraic structure whose 0-cells are points, whose 1-cells are paths, …, whose j-cells are continuous maps D^j \to X for j \lt n, and whose n-cells are homotopy classes of maps D^n \to X rel boundary.

Certainly, yes. My question is motivated by the following puzzle:

if the space X is equipped with a smooth structure, then we can, I believe, take the construction of \Pi_n(X) you described and then divide out thin homotopy of smooth maps \gamma : D^k \to X for all k. I.e., we identify two such maps if there is a smooth homotopy \Sigma : D^{k+1} \to X between them such that \Sigma^* : \Omega^\bullet(X) \to \Omega^\bullet(D^{k+1}) is the 0-map.

It feels like this quotienting doesn’t lose any crucial information about the space X. Rather, it feels like it just divides out a bit of redundant information in the maps D^k \to X, because dividing out thin homotopy pretty much amounts to

- dividing out orientation preserving reparameterizations D^k \to D^k

- declaring that orientation reversing reparameterizations D^k \to D^k take a k-cell to its inverse.

Still, forming this quotient makes, I think, \Pi_n(X) into a strict n-groupoid.

So, what’s happening? How does this strict n-groupoid not know about all homotopy types?

I guess the answer is hidden in the term “models” in “models homotopy types”. We need to pick a notion |\cdot| of realization of n-nerves and see what the result \Pi_n(|\Pi_n(X)|) of going back and forth is.

Or not? I am mentioning this because I have direct intuitive access to Todd’s statement that the Trimblean \Pi_n(X) is “right”, but since the same intuition seems to tell me that the strict \Pi_n(X) is also “not worse”, it may be that my intuition is way off here.

I hope this can be figured out. Meanwhile, I’ll happily apply Trimblean \Pi_n's, which are exactly what I need in my application…

Posted by: Urs Schreiber on January 16, 2009 9:57 AM | Permalink | Reply to this

Re: Trimble’s notion of weak n-categories

I had a stupid typo in my statement of thin-homotopy above:

of course we say a smooth homotopy

\Sigma : D^{k+1} \to X

is thin iff just

\Sigma^* : \Omega^{k+1}(X) \to \Omega^{k+1}(D^{k+1})

is the 0-map. This is equivalent to saying (which is the more traditional way to say it) that the rank of the differential of \Sigma is everywhere non-maximal:

rk\, d\Sigma \lt k+1 \,.

I formulated it in terms of differential forms because that’s the way it generalizes to general smooth spaces, which are sheaves on Diff.

Posted by: Urs Schreiber on January 16, 2009 2:32 PM | Permalink | Reply to this

Re: Trimble’s notion of weak n-categories

Thanks for the clarification. I was extra-confused because to me \Omega(X) means by default the loop space of X, not its algebra of differential forms. You may want to make that extra clear when writing for homotopy theorists.

To answer Todd’s question above: no, I don’t have any particular reason to doubt that it will work.

As for the conundrum, here are some thoughts. Firstly, in your definition of \Pi_n(X) do you want to take just the smooth maps D^k \to X? If so, it doesn’t seem that Todd’s operad will act on them, since it contains non-smooth maps. But if you allow non-smooth D^k \to X, what does it mean to have a smooth homotopy between non-smooth maps? Perhaps you should alter the definition of Todd’s operad to include only smooth maps? It’s not immediately clear to me whether that would work.

Secondly, if all that can be overcome, I’m not so sure that the quotienting you describe “doesn’t lose any crucial information.” All the coherence data (associators, unitors, interchangors, etc.) in a fundamental n-groupoid can be described as “orientation preserving reparametrizations.” So, as you say, quotienting by them seems tantamount to strictifying your fundamental n-groupoid, which does of course lose information. Maybe the point is just that orientation-preserving reparametrizations do carry information, whether or not that is counterintuitive.

Posted by: Mike Shulman on January 16, 2009 8:30 PM | Permalink | Reply to this

Re: Trimble’s notion of weak n-categories

All the coherence data (associators, unitors, interchangors, etc.) in a fundamental n-groupoid can be described as “orientation preserving reparametrizations.”

Yes: all associators, pentagonators, etc. in the full fundamental n-groupoid are maps with values in spheres that all represent trivial elements in the \pi_k(X,x)'s.

So, as you say, quotienting by them seems tantamount to strictifying your fundamental nn-groupoid, which does of course lose information.

Well, maybe it does. But I am not sure about the “of course”.

For instance, for n=2 it clearly does not.

I am currently discussing this with Todd by private email, but let me point out here the following:

One has to keep in mind, I think, the distinction between the “big and mildly weak” fundamental n-groupoids and any skeletal but weak versions equivalent to them. Under equivalence, things with no nontrivial associators etc. may become equivalent to things with nontrivial such structure maps.

Just to amplify what you all know, let me make some of this explicit for n=2.

So let \Pi_2(X) be the strict smooth fundamental 2-groupoid of a smooth space X. Objects are the points of X, morphisms are thin-homotopy rel boundary classes of smooth maps [0,1] \to X with sitting instants, and 2-morphisms are homotopy rel boundary classes of maps [0,1]^2 \to X cobounding classes of paths with the same endpoints. This is a strict 2-groupoid.

On the other hand, let

(\sqcup_x \pi_2(X,x) \stackrel{0}{\to} \sqcup_x \pi_1(X,x) \stackrel{0}{\to} \pi_0(X))

be the skeletal version of this guy. Morphisms act on the 2-morphisms in the obvious way. On top of that, this has a nontrivial associator, a map

\alpha([x]) : \pi_1(X,x) \times \pi_1(X,x) \times \pi_1(X,x) \to \pi_2(X,x)

We get a bifunctor

F : (\sqcup_x \pi_2(X,x) \stackrel{0}{\to} \sqcup_x \pi_1(X,x) \stackrel{0}{\to} \pi_0(X)) \to \Pi_2(X)

by making lots of choices:

on objects, choose for each connected component [x] a point x in that component and set

F : [x] \mapsto x \,.

On 1-morphisms, choose for each element a \in \pi_1(X,[x]) a thin-homotopy class of a loop

F : ([x] \stackrel{a}{\to} [x]) \mapsto (x \stackrel{F(a)}{\to} x) \,.

This assignment is not functorial: there is a nontrivial compositor c_{a,b}, which is a choice of homotopy rel boundary class of a surface in X

c : \array{ && [x] \\ & {}^a\nearrow && \searrow^b \\ [x] &&\stackrel{b a}{\to}&& [x] } \mapsto \array{ && x \\ & {}^{F(a)}\nearrow &\Downarrow^{c_{a,b}}& \searrow^{F(b)} \\ x &&\stackrel{F(b a)}{\to}&& x } \,.

From this we get, for any three elements a,b,c \in \pi_1(X,x), the homotopy class of a sphere in X, obtained from gluing four of the surfaces c_{\cdot,\cdot} to each other. This sphere is the result of gluing the disk

\array{ x &&\stackrel{F(b)}{\to}&& x \\ & \Downarrow^{c_{a,b}} \\ \uparrow^{F(a)} && {}^{F(b a)}\nearrow&& \downarrow^{F(c)} \\ \\ &&& \Downarrow^{c_{b a, c}} \\ x &&\stackrel{F(c b a)}{\to}&& x }

to the disk

\array{ x &&\stackrel{F(b)}{\to}&& x \\ &&& \Downarrow^{c_{b,c}} \\ \uparrow^{F(a)} && \searrow&& \downarrow^{F(c)} \\ \\ & \Downarrow^{c_{a, c b}} \\ x &&\stackrel{F(c b a)}{\to}&& x } \,.

This is an element \alpha([x]) \in \pi_2(X,x), and this \alpha is the associator of the skeletal weak 2-groupoid (\sqcup_x \pi_2(X,x) \stackrel{0}{\to} \sqcup_x \pi_1(X,x) \stackrel{0}{\to} \pi_0(X)).

It arises precisely from the way the bifunctor F, with its compositor c, respects the associators: a nontrivial one on the skeletal 2-groupoid, a trivial one on \Pi_2(X).

The functor F is clearly essentially k-surjective for all k and is hence an equivalence.

Okay, so I suppose this is essentially realizing an instance of the fact that every bicategory is equivalent to a strict 2-category. So the interesting question would be to see what happens to this kind of argument as we go to \Pi_3(X).

But let me know if I am mixed up.

Posted by: Urs Schreiber on January 17, 2009 4:59 PM | Permalink | Reply to this

Re: Trimble’s notion of weak n-categories

Of course you are right for n=2, but we already know that there is a big difference between n=2 and n=3: strict 2-groupoids do suffice to model all homotopy 2-types, while strict 3-groupoids do not model all 3-types. So it seems to me that any process which makes the fundamental 3-groupoid into a strict 3-groupoid must lose information.

Posted by: Mike Shulman on January 18, 2009 3:13 AM | Permalink | Reply to this

Re: Trimble’s notion of weak n-categories

Of course you are right for n=2, but we already know that there is a big difference between n=2 and n=3 […]

Right, so now let’s come back to the original question of whether Trimblean weak n-categories can model all homotopy types.

The example I gave shows that just examining which coherence maps a given n-category has may tell us little about which type it models. In the example, we had a 2-groupoid with trivial associator modelling one with a non-trivial associator.

Now, as we pass to n=3, the next interesting coherence structure is the “exchangeator” or whatever, which on the skeletal 3-groupoid will be a 2-cocycle

\pi_2(X) \times \pi_2(X) \to \pi_3(X) \,.

This measures the difference between the two ways of expressing horizontal composition of 2-cells in terms of vertical composition, after first re-whiskering in the two possible ways.

Trimblean n-categories do not exhibit such an “exchangeator”. They do have weak units, though, and the conjecture is, roughly, that as we find an equivalence between a Trimblean-style \Pi_3(X) and a skeletal trigroupoid, it’s these weak identities which turn into the above exchangeator.

Can anyone see how this might come about, in a way analogous to the example I spelled out above?

Posted by: Urs Schreiber on January 18, 2009 3:23 PM | Permalink | Reply to this

Re: Trimble’s notion of weak n-categories

Quick silly comment: I think I like the word “interchangor” (or maybe “interchanger”, but the “or” is a nod to the “or” in “associator” and “pentagonator”) for the coherence data of the interchange law in higher categories, as opposed to “exchangeator”.

Posted by: Bruce Bartlett on January 18, 2009 5:02 PM | Permalink | Reply to this

Re: Trimble’s notion of weak n-categories

I like the word ‘interchanger’. I don’t think we need a funny half-Latinate word like ‘interchangor’ or ‘exchangeator’ when English has a perfectly fine word already.

But I’m not very consistent: on the other hand, I have a weakness for Toby’s term ‘unitor’, when ‘uniter’ would do as well.

(At least ‘unitor’ is all Latin-based, unlike the half-Latin, half-French ‘exchangor’. A word ending in ‘gor’ with the ‘g’ soft is clearly a mutt.)

Posted by: John Baez on January 18, 2009 9:05 PM | Permalink | Reply to this

Re: Trimble’s notion of weak n-categories

I think that English spelling rules will allow you to write ‘exchangeor’ and ‘interchangeor’, with the ‘-e-’ in ‘-eor’ remaining silent.

If you don't like sticking Latin suffixes on French words, you can use a hyphen to show that you're just using a technical suffix with no etymological justification (like Urs did when he created [[co-span]]): ‘exchange-or’ and ‘interchange-or’.

However, since ‘exchange’ and ‘interchange’ are already perfectly good nouns, why not just use them as they are? This also emphasises that the ‘foo-or’ is not different from the ‘foo law’ that it's named after but is the same thing seen from a higher point of view (much like saying ‘functor’ between tricategories instead of ‘pseudo-3-functor’ or whatever).

Unfortunately, while ‘unit’ is also a perfectly good noun, it refers to the wrong thing, so we still need ‘unitor’. But if we had it all to do over again, we might use ‘association’ and ‘unity’ for both the equational laws and the higher operations.

Posted by: Toby Bartels on January 18, 2009 9:41 PM | Permalink | Reply to this

Re: Trimble’s notion of weak n-categories

It’s precisely to distinguish the equation from the operation.

As for hybrids, consider the (pen?) ultimate example: macadamization

Posted by: jim stasheff on January 19, 2009 2:36 PM | Permalink | Reply to this

Re: Trimble’s notion of weak n-categories

Urs wrote:

Now, as we pass to n=3 the next interesting coherence structure is the “exchangeator” or whatever, which on the skeletal 3-groupoid will be a 2-cocycle

\pi_2(X) \times \pi_2(X) \to \pi_3(X).

This measures the difference between expressing horizontal composition of 2-cells in terms of vertical composition by first re-whiskering in the two possible ways.

You know this — and said it later — but just for people who don’t, I want to emphasize that the interchanger is not ‘the same’ as the map shown above. It’s just one ingredient.

The operation

\pi_2(X) \times \pi_2(X) \to \pi_3(X)

is called the Whitehead product, and it’s built by taking two elements [\alpha], [\beta] \in \pi_2(X), picking representatives

\alpha, \beta : [0,1]^2 \to X,

braiding these maps around each other twice to get a homotopy from the horizontal composite \alpha \otimes \beta to itself, and using a standard trick to reinterpret this as a homotopy from the trivial map \ast : [0,1]^2 \to X to itself, and thus an element of \pi_3(X).

To reinterpret this in the language of n-categories we need to use the Eckmann–Hilton argument to braid \alpha around \beta twice. Here’s how that argument lets you braid \alpha around \beta once:
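In symbols, one way of writing the chain is the following (a sketch; in the weak setting each \simeq stands for an invertible coherence cell built from unitors and the interchanger):

```latex
\alpha \otimes \beta
  \simeq (\alpha \otimes 1) \circ (1 \otimes \beta)
  \simeq \alpha \circ \beta
  \simeq (1 \otimes \alpha) \circ (\beta \otimes 1)
  \simeq \beta \otimes \alpha
```

Composing all the coherence cells yields the single braiding cell \alpha \otimes \beta \to \beta \otimes \alpha discussed below.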

The first and last steps in this argument use left and right unitors for the tensor product:

1 \otimes \alpha \simeq \alpha \simeq \alpha \otimes 1

while the middle step uses the ‘interchanger’ (what you’re calling the ‘exchangeator’):

(\alpha \otimes \beta)(\gamma \otimes \delta) \simeq (\alpha \gamma) \otimes (\beta \delta)

together with left and right unitors for composition:

1 \alpha \simeq \alpha \simeq \alpha 1

So, we don’t need the interchanger to be nontrivial to get the braiding

\alpha \otimes \beta \to \beta \otimes \alpha

to be nontrivial — it suffices to have nontrivial unitors.

As you know, this is the approach taken by Joyal and Kock. Maybe it works for Trimble’s weak 3-categories too? Since the reasoning above is essentially homotopy-theoretic, and his definition is motivated by homotopy theory, maybe this isn’t even very hard to see.

By the way, anyone who hasn’t read these yet should take a peek:

Posted by: John Baez on January 18, 2009 8:46 PM | Permalink | Reply to this

Re: Trimble’s notion of weak n-categories

Trimblean n-categories do not exhibit such an “exchangeator”. They do have weak units, though, and the conjecture is, roughly, that as we find an equivalence between a Trimblean-style \Pi_3(X) and a skeletal trigroupoid, it’s these weak identities which turn into the above exchangeator. Can anyone see how this might come about, in a way analogous to the example I spelled out above?

I’m going to follow Bruce and call it an “interchanger”.

It’s pretty easy how this works: just follow John’s prescription. Let X be a (Trimble) 3-category, and suppose we have two 2-cells \alpha, \beta which are composable across a 0-cell b:

(\alpha: f \to g) \in X(a, b) \qquad (\beta: f' \to g') \in X(b, c)

Let me represent composition across a j-cell by \otimes_j (this time in diagrammatic order, ha!). By weak units, there are invertible 3-cells

\rho: \alpha \otimes_1 1_g \to \alpha \qquad \lambda: 1_{f'} \otimes_1 \beta \to \beta

contained in local 2-categories X(a, b), X(b, c), respectively. Now composition across the 0-cell b is a strict map of 2-categories

\otimes_0: X(a, b) \times X(b, c) \to X(a, c)

and this gives the strict interchange equation appearing as the first arrow in

(\alpha \otimes_0 1_{f'}) \otimes_1 (1_g \otimes_0 \beta) = (\alpha \otimes_1 1_g) \otimes_0 (1_{f'} \otimes_1 \beta) \stackrel{\rho \otimes_0 \lambda}{\to} \alpha \otimes_0 \beta

Similarly, but in a reverse direction, there is an arrow

\alpha \otimes_0 \beta \to (1_f \otimes_0 \beta) \otimes_1 (\alpha \otimes_0 1_{g'})

and putting these two arrows together, we get the interchanger.
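In display form, the composite 3-cell obtained by putting the two arrows end to end is:

```latex
(\alpha \otimes_0 1_{f'}) \otimes_1 (1_g \otimes_0 \beta)
  \;\xrightarrow{\;\rho \otimes_0 \lambda\;}\; \alpha \otimes_0 \beta
  \;\longrightarrow\; (1_f \otimes_0 \beta) \otimes_1 (\alpha \otimes_0 1_{g'})
```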

Jim Dolan has the following metaphor for the 3-cell interchanger. Imagine the two 2-cells as a pair of eyes of someone about to fall asleep, or rather, think of each 2-cell as a movie taken of a drooping eyelid. In one extreme form of sleeping behavior, we have the left eye shutting first, followed by the right eye. At the other end, the right eye shuts first, followed by the left. There would be a whole continuum of behaviors mediating between these two extremes, where in the middle is the normal way, with both eyes closing in unison. The interchanger 3-cell would be such a homotopy through shut-eye behaviors, with the middle way represented by \alpha \otimes_0 \beta.

(I am happy to go into more detail about any of this, if needed.)

Posted by: Todd Trimble on January 19, 2009 2:16 AM | Permalink | Reply to this

Re: Trimble’s notion of weak n-categories

John, Todd,

thanks, you describe how interchange works in the category itself. I suppose I know this, but what I was looking for was the analogue of the construction I described for \Pi_2(X): the construction of the weak 3-functor which takes a skeletal 3-groupoid to \Pi_3(X).

Because I think if we want to prove (or at least argue for) any classification, we need to find equivalences of our \Pi_n(X)s, in some way, to skeletal weak n-groupoids to read off the structure morphisms there.

Instead of posting comments here I should probably sit down and spend more time thinking about this, since this looks like it is a bit subtle and tedious, with all the structure morphisms of the 3-functor itself flying around on top of everything else. For instance, in the discussion for \Pi_2(X) it is crucially the compositor of the 2-functor which cancels the associator of the skeletal 2-groupoid.

Won’t it be something similar here for \Pi_3(X)? For instance, can the interchanger which Todd describes in the Trimblean \Pi_3(X), in particular when all of f, f', g, g' themselves are identities, pick up anything nontrivial by itself? I am having trouble seeing that, at least intuitively, since all that happens there is that the constant map is getting reparametrized(!), the interchange itself being strict. No?

So I was thinking that, as with \Pi_2(X), the crucial information will be picked up by the structure morphisms of the 3-functor which goes from the skeletal version of \Pi_3(X) to \Pi_3(X) itself.

Posted by: Urs Schreiber on January 19, 2009 12:43 PM | Permalink | Reply to this

Re: Trimble’s notion of weak n-categories

For instance, can the interchanger which Todd describes in the Trimblean \Pi_3(X), in particular when all of f, f', g, g' themselves are identities, pick up anything nontrivial by itself?

No, but so what? We should only expect something interesting when at least some of these are nontrivial.

It’s the same deal as with \Pi_2(X): the associator (that is, any choice of associator; there are lots of choices in Trimble \Pi_2(X))

\alpha_{f, g, h}: (f g)h \to f(g h)

returns a trivial 2-cell when f, g, h are trivial 1-cells (constant paths), but I don’t see what force there is in that, since we are obviously interested in what happens when f, g, h are nontrivial.

Posted by: Todd Trimble on January 19, 2009 2:59 PM | Permalink | Reply to this

Re: Trimble’s notion of weak n-categories

No, but so what?

I was just trying to see the Whitehead product \pi_2 \times \pi_2 \to \pi_3 arise, which should come from composing two endomorphisms of identity 1-cells. In the presence of strict interchange, it seems it should be all encoded in the weak units, but I have trouble seeing how that can be, if the unitor here just reparametrizes the constant map.

This puzzle is what I am trying to understand currently. Likely I am mixed up and am not making sense. Sorry for that.

Posted by: Urs Schreiber on January 19, 2009 3:30 PM | Permalink | Reply to this

Re: Trimble’s notion of weak n-categories

Back here, Mike wrote:

Perhaps you should alter the definition of Todd’s operad to include only smooth maps? It’s not immediately clear to me whether that would work.

Eugenia Cheng and Nick Gurski have already studied this problem of defining smooth variants of the operad I used, here. See their development in section 5.2 (page 18ff). Their motivation is to define n-categories of tangles and cobordisms, within a framework of iterated operadic enrichment that generalizes my definition.

Posted by: Todd Trimble on January 19, 2009 5:00 AM | Permalink | Reply to this

Re: Trimble’s notion of weak n-categories

I have only looked at very little of the literature on Trimble-weak-n-categories

AFAIK there is only very little of that; in fact the three or four references you found, Urs, authored by Tom Leinster, Eugenia Cheng, and Nick Gurski, are all that I know of. I’m grateful to them for all they’ve done: they’ve generalized and brought this sort of definition into more of the mainstream (comparing with e.g., Batanin’s approach), and have suggested some new potential applications.

Posted by: Todd Trimble on January 18, 2009 12:44 PM | Permalink | Reply to this

Re: nLab – General Discussion

This page is getting tricky to follow. Is there any possibility of hosting a forum on the same site as the nLab? That’s the right place for discussions. Great as the nCafe software is (better than any other blog I’ve seen yet), the kind of discussion going on here would be much better in a forum than in the comments section.

Posted by: Andrew Stacey on January 19, 2009 2:52 PM | Permalink | Reply to this

Re: nLab – General Discussion

Andrew wrote:

This page is getting tricky to follow. Is there any possibility of hosting a forum on the same site as the nLab? That’s the right place for discussions.

We already have a general discussion page on the nLab, but it became very hard to follow, so we decided that here is the right place for discussions.

Perhaps any discussion where lots of people are talking about lots of different subjects is going to get hard to follow, unless someone keeps reorganizing it to make it follow some semblance of a logical pattern.

I think the motto on the nLab homepage isn’t bad: you go to the nLab to work, and go to the nCafé to chat. Meaning: things that we want to be visible in some permanent, well-organized way should be on the nLab, while free-roaming conversation belongs here.

But perhaps we should have more blog entries about more different aspects of the nLab?

Posted by: John Baez on January 20, 2009 11:30 PM | Permalink | Reply to this

Re: nLab – General Discussion

I clearly see Andrew’s point that a forum software would be best suited for the kind of discussion in this thread here.

On the other hand, I am hesitant to split the n-community off into yet another branch. The big advantage of the discussion here is that it is not happening in a separate “room”.

I’d say the best option would be to go along with John’s suggestion:

But perhaps we should have more blog entries about more different aspects of the nLab?

Yes, we should move non-minor new comments to separate entries. For instance discussion of the homotopy hypothesis clearly would deserve an entry of its own.

Posted by: Urs Schreiber on January 21, 2009 7:54 AM | Permalink | Reply to this

Re: nLab – General Discussion

I can understand your reluctance to have yet another n-Thing and I’m a little hesitant to even try to make the case again for an n-Forum. I’ll have one more go and then let it rest.

The main point is that it would save you guys a lot of work. Urs’ suggestion about having different blog entries would seem to me to have the potential to be a lot of work for you. You have to decide when something merits its own entry, and you have to move everything over (no idea how much that would involve - maybe it’s really easy). You may have to split comments that were about one thing and then deviated (how many new topics are started by the words “while I’m thinking about it, here’s another question …”?). In addition, the nCafe gets a bit … polluted … by non-mathematical technical stuff about the nLab.

There is a big difference in the number of authors between the nLab and the nCafe, which means that while the TeXnical Discussion post has been extremely successful, I don’t think that a similar thread about the nLab would be as successful.

Although there are many innovative features of the nCafe, its basic model is fairly simple and fairly well-tested. That isn’t the case for the nLab. Therefore there will be lots of discussions, arguments, and the like over lots of things, ranging from what ought to be minor discussions on order of composition to more fundamental issues of authorship (note to foreigners: set irony detector to maximum). These need to be easily read and searched so that, for example, when I finally get round to actually contributing to the nLab (instead of sitting in my office - with a fantastic view of the fjord today - whingeing about things) and discover some strange feature that I’m not sure about, I can quickly figure out if other people have encountered the same and what the general opinion was. At the moment, I’d have to scan through this entire page to find out, and people may not have been too careful in choosing suitable headers for their comments, making it all the harder to do.

The ‘authorship’ thread is an excellent example. The discussion there started with my question but very quickly moved on to a completely different topic. The headings lagged a little, though. More importantly, from my point of view, I only got a couple of responses to my original question.

On the ‘here to chat, there to work’ slogan. That’s not quite accurate. I can’t think of a pithy way to say it, but I can’t start a conversation by myself here. The best I can do is to hijack one already going (which, to your credit, is something you seem quite happy to allow).

In summary, the choices are:

  1. Leave things as they are, making it a little tricky to figure out what’s going on “behind the scenes” at the nLab.

  2. Essentially Urs’ suggestion whereby you moderate this page, reorganising and sorting discussions as suits and sometimes starting whole new topics. Probably involving a lot of work on your behalf.

  3. Start a forum, then sit back and relax. You can even make it clear that the forum is only for technical issues on the nLab and not for mathematics.

Gosh, I don’t half witter on when I get started, do I?

Posted by: Andrew Stacey on January 21, 2009 12:34 PM | Permalink | Reply to this

Re: nLab – General Discussion

For what it is worth, I completely agree with Andrew. There is very little chance of a forum being dilutive and, in fact, it could be more inclusive, because some contributors might feel more comfortable in a forum format than in either the blog or the wiki.

My suggestion is that those who think a forum should be created (including Andrew and myself) start doing some research to see what kind of work would be involved. I was an active participant on a forum that had LaTeX capability:

NuclearPhynance

that I think would be pretty suitable. Any other good examples?

Wilmott

is another I’ve been involved with that has LaTeX capability.

Posted by: Eric on January 21, 2009 3:06 PM | Permalink | Reply to this

Re: nLab – General Discussion

FWIW, I would support an nLab-forum too, if some enterprising soul were to get one going. It would definitely be easier to find things than it is on this page.

Posted by: Mike Shulman on January 21, 2009 7:10 PM | Permalink | Reply to this

Re: nLab – General Discussion

Concerning setting up a forum:

these are all very good arguments. I actually believe that a forum may suit my personal needs better than a blog, when it works well. I just think it should be avoided that we spread out too much, as an online community.

Certainly, in case that needs pointing out, I am not clinging to the fact that here on this blog I am one out of a chosen three that are allowed to post their stuff around at will. What I really want and need is a place for online discussion.

In fact, I got the impression that one or two misunderstandings which I experienced would have been avoided had I been posting in what people readily recognized as a forum as opposed to a blog, given that in much of the blogosphere blogs are used mostly to proclaim personal truths and strongly held opinions.

That said, I don’t feel that I have yet more time and energy to spend to set up such a forum, let alone to run it, but it seems that both Andrew and Eric have experience with this and might just want to go ahead and create the nForum! Then we’ll see what happens.

Posted by: Urs Schreiber on January 21, 2009 7:32 PM | Permalink | Reply to this

Forums

Is there any forum software that doesn’t completely suck?

For that matter, is there a public forum on any scientific subject — with the unique exception of CosmoCoffee — which doesn’t suck?

I ask these questions not to discourage anyone from setting up an n-Category forum, but by way of explanation of my lack of interest in the genre, heretofore.

Posted by: Jacques Distler on January 21, 2009 8:58 PM | Permalink | PGP Sig | Reply to this

Re: Forums

Jacques, I completely agree. I have no knowledge of a forum system that supports mathematics.

However, at the birth of the nLab I figure that there’s going to be a lot of discussion about stuff that isn’t mathematics, or at least isn’t too heavy on the mathematics, so it could be written in simple language.

So I’ve had a go at setting up an nForum. It’s at:

n-Forum

Hopefully it’ll work, but please let me know of any problems. If it doesn’t suit then a quick rm -rf will soon get rid of it.

I’m using Vanilla, which seems fairly small and fairly extensible, so if anyone feels like trying to write a maths plugin then go ahead. I guess the main problem would be getting it to export valid XML+MathML. Eric, what do those forums you mentioned use?

Account creation is via Captcha+Email verification so it should be fairly immediate to get started.

I’ve no idea how one might go about migrating stuff from here to there but I guess that if that’s needed then someone can figure it out.

Lastly, if anyone wants to be promoted to administrator - or some intermediate role - then ask.

Posted by: Andrew Stacey on January 22, 2009 9:54 AM | Permalink | Reply to this

Re: Forums

Nice! :)

Jacques mentioned CosmoCoffee. It seems to support mathematics (though not in the nice way that the nCafe and nLab do) and is built on phpBB (open source).

The finance forums were 1.) commercial and 2.) proprietary (home grown), so if it isn’t too much trouble, it might be worth trying out phpBB since CosmoCoffee seems to be pretty successful.

By the way, it is great to see you taking the initiative. Thanks!

Posted by: Eric on January 22, 2009 3:30 PM | Permalink | Reply to this

Re: Forums

CosmoCoffee’s relative success has, in my opinion, nothing to do with the software they’re using (which is as crappy as all the rest), and everything to do with their rigorously exclusionary policy on registering new participants.

Posted by: Jacques Distler on January 22, 2009 5:29 PM | Permalink | PGP Sig | Reply to this

Re: Forums

Shall I put a link on the nLab? Given its purpose I thought I ought to ask rather than just go ahead (I know that asking first is against the Wiki-Spirit but I’m an inveterate asker).

Posted by: Andrew Stacey on January 23, 2009 7:55 PM | Permalink | Reply to this

Re: Forums

(I know that asking first is against the Wiki-Spirit but I’m an inveterate asker)

I was thinking about imposing a rule here, that for every discussion started about how we might eventually proceed with this or that on the nLab, the person who starts the discussion has to pay 10 n-credit points. These points, in turn, would be gained by editing the nLab.

So, for instance, you could pay for your question about whether you should include a link by first including that link! :-)

That would have the advantage that everybody could see concretely what it is you are suggesting. Should there be considerable opposition to whatever changes you made, we can always roll back to before they were implemented.

Posted by: Urs Schreiber on January 24, 2009 11:57 AM | Permalink | Reply to this

Re: Forums

How about a mailing list or newsgroup for the discussion side of things? I much prefer either to a forum, which you need to poll to see what has changed. The mailing list/newsgroup would of course have an easily browsable archive, and it could also have a web submission form making it easy for someone who is browsing the archive to post a comment.

Mailing lists that are forwarded to gmane can be accessed in all of these ways, and more:

  • On the web, using frames and threads.
  • On the web, using a blog-like, flat interface.
  • Using an NNTP newsreader.
  • RSS feeds:
    1. All messages from the list, with excerpted texts.
    2. Topics from the list, with excerpted texts.
    3. All messages from the list, with complete texts.
    4. Topics from the list, with complete texts.

and you can post by e-mail, in a newsreader, and on the web. gmane also takes care to avoid spam.

See

gmane.test

as an example.

We’d lose the ability to view math, but for the purposes of general discussion, this is a very lightweight, flexible solution.

Posted by: Dan Christensen on January 24, 2009 10:09 PM | Permalink | Reply to this

Homotopy Hypothesis

This is a question/comment that is too whacky to go on the nLab page.

I had never heard of the homotopy hypothesis (as far as I can remember) until seeing it at the nLab and then following the link to John’s neat set of slides.

The idea is close to my heart though. One thing I have internalized is that “processes are not free”. Because of this, I am doubtful that spaces can truly be recovered from their groupoids as desired. Traversing a process (morphism) and then traversing back along its inverse, does not truly get you back to where you started because the process comes with a cost somehow, e.g. lost time.

What I think has a better chance of working is to recover a directed space from its fundamental category. Maybe a good name for this would be the “Directed Homotopy Hypothesis”. Borrowing from John’s slides, this could be phrased as:

To what extent are directed spaces `the same’ as (fundamental) \infty-categories?

This could conceivably satisfy some desires regarding spaces by taking the space X under question and constructing a “directed cylinder” X \times \uparrow I.

I think (but could be wrong) that fundamental groupoids on X can be mapped to fundamental categories on X \times \uparrow I.

Then work backwards…

Given an \infty-category, can we recover a directed space from that? If we can, then under what circumstances can the directed space be factored into a directed cylinder X \times \uparrow I? In this way, the space X is recovered not necessarily from its fundamental groupoid, but from the fundamental category of its corresponding directed cylinder.

Does that make any sense?

Posted by: Eric on January 20, 2009 6:13 AM | Permalink | Reply to this

Re: Homotopy Hypothesis

Actually, it’s interesting that you bring this up, Eric, because I’ve been vaguely thinking about passing from Top to some more combinatorial notion of space, to which we could apply some of the ideas Urs and I have been kicking around recently. But I don’t want to attempt saying anything about this just yet.

It’s quite true that most functors from a topological context to an algebraic one lose lots of information – you’d never be able to retrieve the space from the algebraic model up to homeomorphism, for example. But the homotopy hypothesis says that in cases of interest, you should at least be able to recover the space up to homotopy equivalence, which is a lot coarser as equivalence relations go, but nevertheless interesting.

Why would we believe it? Because there are lots of partial results that point in this direction. One particular incarnation of the homotopy hypothesis is about “n-types”, where we say that a continuous map f: X \to Y is an n-equivalence if it induces an isomorphism on just the homotopy groups up to \pi_n. One of the very earliest “success stories” was for the case n = 1, where the fundamental group(oid) functor does the trick. The functor going back the other way is

B: Groupoid \to Top

which maps a groupoid to its classifying space, and the crucial result is that for any space X there is a map

X \to B \Pi_1(X)

which induces an isomorphism on \pi_0 and \pi_1. Already this isn’t a trivial result.

Things quickly get more complicated as we move up in dimension, with a fairly substantial literature surrounding the case n = 2. An early success story here is the recognition that “crossed modules”, which we’ve been discussing a bit in nLab as you know, can be used to classify connected 2-types. At some point it was recognized (thinking especially of mathematicians like Ronnie Brown) that these structures were connected with categorical groups, or monoidal groupoids, and by now this theory has been pretty well worked over.

Of course I won’t go into all the history (not that I know it myself), but there’s also this famous 600-page letter “Pursuing Stacks” by Grothendieck, which is now archived at Bangor, which outlines what Grothendieck called the fundamental n-groupoid, and which is conjectured to model n-types. This to me is an awfully tantalizing conjecture, and it’s something I’d like to spend more time with in the company of Urs and others. It was this idea which motivated me to develop these “Trimble n-categories” which we’ve been discussing.

In a more combinatorial direction, there is an old idea that simplicial sets are, for present homotopical purposes, basically as good as general spaces: in technical terms, any space is weakly homotopy equivalent to the realization of a certain simplicial set called its simplicial singular complex. Anyway, for various technical reasons, the experts tend to like using simplicial sets, and here we have an actual theorem instead of just a homotopy hypothesis, if we’re sufficiently sneaky about it. For, one notion of \infty-groupoid is the notion of Kan complex, and it’s a theorem that any simplicial set is weakly equivalent to a Kan complex! This is actually a useful and important result which today is embedded in the general theory of model categories.
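For anyone meeting Kan complexes here for the first time, the Kan condition is a horn-filling property; in standard notation:

```latex
% X is a Kan complex iff every horn admits a filler:
% for all n \ge 1 and 0 \le k \le n, every map f out of the horn extends:
f : \Lambda^n_k \to X
\quad\Longrightarrow\quad
\exists\, \bar{f} : \Delta^n \to X
\ \text{ with }\ \bar{f}\,|_{\Lambda^n_k} = f,
% where \Lambda^n_k \subset \Delta^n is the boundary of the n-simplex
% with the k-th face removed.
```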

So summarizing, there’s plenty of reason I think to have faith in the homotopy hypothesis, and there’s no doubt that combinatorial or discrete models will play an important role in the game.

Posted by: Todd Trimble on January 20, 2009 1:24 PM | Permalink | Reply to this

Re: Homotopy Hypothesis

Just to amplify one aspect of Todd’s nice reply:

the “singular simplicial complex” S(X) associated with any topological space X is its fundamental \infty-groupoid, with these thought of as Kan complexes (in the sense that it has every right to be addressed as such):

we want any “fundamental \infty-groupoid” of a space X to be a graded combinatorial gadget which in degree k is something like the “collection of all k-dimensional cells in X” (with some source and target etc. information, which makes them k-dimensional paths).

This is precisely how S(X) works: if \Delta^k denotes the standard topological k-simplex, then the collection of k-cells of S(X) is

S(X)^k := Hom_{Top}(\Delta^k, X),

i.e. the collection of paths in X which have the shape of k-simplices, if you wish.
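Explicitly, the simplicial structure maps of S(X) come from precomposition with the coface and codegeneracy maps between the topological simplices; in standard notation:

```latex
d_i := Hom_{Top}(\delta^i, X) : S(X)^k \to S(X)^{k-1}, \qquad
s_i := Hom_{Top}(\sigma^i, X) : S(X)^k \to S(X)^{k+1},
% where \delta^i : \Delta^{k-1} \to \Delta^k includes the i-th face and
% \sigma^i : \Delta^{k+1} \to \Delta^k collapses the i-th edge.
```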

Posted by: Urs Schreiber on January 20, 2009 1:56 PM | Permalink | Reply to this

Re: Homotopy Hypothesis

Thanks for your comment Todd :)

Between you and Urs, I’m racking up so much homework that I’ll never see the end. Now I need to learn about Kan complexes :)

By the way, I didn’t mean to suggest anyone lose faith in the homotopy hypothesis. Rather, I think (feel?) that the homotopy hypothesis might end up being more transparent if looked at via the “directed” homotopy hypothesis. Then the homotopy hypothesis could be seen as something like a special projection of the fundamental category down through \uparrow I to a fundamental groupoid, similar to how a space X can be seen as a projection down through its directed cylinder X \times \uparrow I.

If directed spaces do begin to take more of a central role (as I think they will), I know of a nice combinatorial gadget, i.e. “diamonds”, that might come in handy :)

Posted by: Eric on January 20, 2009 3:37 PM | Permalink | Reply to this

Re: Homotopy Hypothesis

a special projection of the fundamental category down through \uparrow I to a fundamental groupoid similar to how a space X can be seen as a projection down through its directed cylinder X \times \uparrow I.

I don’t understand what you mean!

I was about to guess that \uparrow I is supposed to denote the standard directed interval, right? But then I couldn’t make sense of the sentence surrounding it… :-) You may have to give us more details on what you are thinking of.

Posted by: Urs Schreiber on January 20, 2009 3:55 PM | Permalink | Reply to this

Re: Homotopy Hypothesis

Ack! You want me to make sense?! I was hoping you could magically decipher what I wish I would have said like you usually do :)

Let me take a step or two and maybe you can see what I’m trying to say.

Consider a groupoid G with two objects 1 and 2 and two non-identity morphisms g: 1 \to 2 and g^{-1}: 2 \to 1.

Now consider a preorder whose diagram looks like a zig-zag ladder with nodes labelled

C_0 = \{(1,i),(2,i) \mid i\in\mathbb{Z}\}

with relation

(\cdot,i) &lt; (\cdot,j) \Leftrightarrow i &lt; j

and morphisms

(1,i)\stackrel{Id_{1,i}}{\to}(1,i+1), \quad (2,i)\stackrel{Id_{2,i}}{\to}(2,i+1), \quad (1,i)\stackrel{g_i}{\to}(2,i+1), \quad (2,i)\stackrel{g^{-1}_i}{\to}(1,i+1),

where

g^{-1}_{i+1}\circ g_i = Id_{1,i+1} \circ Id_{1,i}

and

g_{i+1}\circ g^{-1}_i = Id_{2,i+1} \circ Id_{2,i}.

This is an example (I hope!) of a discrete directed space and maybe even its fundamental category.

What I mean by “projecting” is to forget the “time” index so that

(1,i)\mapsto 1\in G, \quad (2,i)\mapsto 2\in G, \quad Id_{1,i}\mapsto Id_1\in G, \quad Id_{2,i}\mapsto Id_2\in G, \quad g_i\mapsto g\in G, \quad g^{-1}_i\mapsto g^{-1}\in G.

Then the fundamental category on the directed space maps to the fundamental groupoid on the undirected space.

Pictorially, I am thinking of a fundamental groupoid on an undirected space as a smashed fundamental category on a directed space.

Kind of like smashing a slinky to a circle.

The idea is that in a directed space, a morphism requires at least one time step to traverse it. So to unfold a fundamental groupoid to a fundamental category, you need to add a time index and each morphism requires at least one tick.
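The ladder and the index-forgetting projection can be checked mechanically. Here is a minimal sketch in Python; the encoding and all the names (`compose_G`, `proj_path`, and so on) are ours, chosen purely for illustration:

```python
# A minimal sketch of the zig-zag ladder above and its projection to the
# two-object groupoid G (objects 1, 2; morphisms id1, id2, g, ginv).
SRC = {'id1': 1, 'id2': 2, 'g': 1, 'ginv': 2}
TGT = {'id1': 1, 'id2': 2, 'g': 2, 'ginv': 1}

def compose_G(f2, f1):
    """Return f2 after f1 in G, using the groupoid relations."""
    assert TGT[f1] == SRC[f2], "not composable"
    if f1.startswith('id'):
        return f2
    if f2.startswith('id'):
        return f1
    # The only remaining composites: g . ginv = id2, ginv . g = id1.
    return 'id2' if f2 == 'g' else 'id1'

def proj(gen):
    """Forget the time index: a ladder generator (name, i) maps to name."""
    name, _i = gen
    return name

def proj_path(path):
    """Project a composable path of ladder generators down to G."""
    result = proj(path[0])
    for gen in path[1:]:
        result = compose_G(proj(gen), result)
    return result

# The relation  g^{-1}_{i+1} . g_i = Id_{1,i+1} . Id_{1,i}  projects to the
# single identity on the object 1, whichever side we compute:
assert proj_path([('g', 0), ('ginv', 1)]) == 'id1'
assert proj_path([('id1', 0), ('id1', 1)]) == 'id1'
```

The point the sketch makes concrete: the ladder composite takes two time steps, but after forgetting the index both sides collapse to the same identity in G.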

Posted by: Eric on January 20, 2009 6:09 PM | Permalink | Reply to this

Re: Homotopy Hypothesis

Todd’s excellent reply incites me to add my grain of salt.

(i) The Grothendieck Pursuit is not really archived at Bangor. I have a more or less complete copy, as does Ronnie. The more important thing to note is that, if you did not know this before, George Maltsiniotis is editing a version which will hopefully be available later this year (typed in LaTeX).

(ii) Note the important fact that a space X is weakly equivalent to |Sing(X)|, not homotopy equivalent. Weak equivalence is great for locally nice spaces such as CW-complexes but not for general compact metric spaces, which may have singularities. We thus may have some difficulties when trying to evaluate the homotopy type of a spatial topos Sh(X).

(iii) I raised the question over in the Lab but let me point it out again. What seems to be of interest for directed spaces is the change in topology of the ‘time-like’ slices, i.e. deadlock points, inaccessible points (so ‘sources’), bifurcation points etc. There are ideas from dynamical system theory and Morse theory, such as the Conley index, that try to look at these homologically. What would be interesting would be to take some of the \infty-category ideas that are around and see whether our extensive categorical toolkit can identify at least some of the sort of structure that can be of interest to the practitioners and users of directed spaces.

I feel that since we have ideas such as an initial object and a terminal object, in category theory, we should be able to adapt them to get sources etc. (This may actually apply in cobordism theory as well.)

Posted by: Tim Porter on January 20, 2009 8:17 PM | Permalink | Reply to this

Connections

Having discovered that I have elsewhere NOT given the correct definition of an Ehresmann connection nor of a principal connection, I discovered the nLab has no entry for

connection

nor

Ehresmann connection

nor

principal connection

though the Wiki does a pretty good job

can someone create the desired entry, either by cribbing from or linking to Wiki?

Posted by: jim stasheff on January 21, 2009 3:38 PM | Permalink | Reply to this

Re: Connections

If you see something missing from the nLab wiki that you think should be there, why don’t you add it yourself? That is the whole point of the wiki *confused*

Posted by: Eric on January 21, 2009 4:09 PM | Permalink | Reply to this

Re: Connections

Eric: Jim Stasheff is enough of an elder statesman of mathematics that we should be surprised that he’s posting comments to this blog, not surprised that he’s asking us to create wiki entries for him. It was his associahedron that set the wheels of categorification in motion. I learned about characteristic classes from his book with Milnor back when I was a pup. And later, his L_\infty-algebras turned out to be fundamental to higher gauge theory!

So, just as I gladly help my mom operate the remote for her TV, I’ll gladly help make wiki entries if Stasheff asks for them.

Posted by: John Baez on January 22, 2009 7:58 PM | Permalink | Reply to this

Re: Connections

John,
Thanks for defending my `honor’.

jim

Posted by: jim stasheff on January 24, 2009 1:06 AM | Permalink | Reply to this

Firefox (document type) problems with Export Html

Using the latest version of Firefox (3.0.5), I’ve tried exporting the wiki as HTML. When I browse an unzipped file (say span.html), say from a Firefox directory listing of the files (e.g. ”file:///C:/nLab”), Firefox decides it is of type “text/html” instead of “application/xhtml+xml” and any MathML is wrongly displayed.
If I manually change the file name to span.xhtml, then Firefox correctly displays the MathML. So either Firefox does not know how to derive the type of a ‘.html’ file or there is some problem in the file’s type specification.
If I change the extension of all the unzipped files (e.g. ‘C:\nLab> ren *.html *.xhtml’) then they will all display correctly, but their internal links refer to ‘.html’ files and no longer work. (What sort of works is to unzip the files, rename them, and unzip them again, which results in both ‘span.html’ and ‘span.xhtml’ existing.)
A simple fix would be for ‘export html’ to generate ‘.xhtml’ files that contain ‘.xhtml’ links.
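Until the export is changed server-side, the renaming plus link rewriting can be scripted on the client. A rough Python sketch (a hypothetical helper, not part of Instiki; it assumes internal links appear as href="…html" attributes in the exported files):

```python
import os
import re


def xhtmlify(directory):
    """Copy each exported '.html' page to a '.xhtml' twin whose internal
    links point at the '.xhtml' names, so Firefox picks the XHTML parser
    from the file extension. The originals are left in place for IE."""
    for name in os.listdir(directory):
        if not name.endswith(".html"):
            continue
        with open(os.path.join(directory, name), encoding="utf-8") as f:
            text = f.read()
        # Rewrite only href link targets, not arbitrary text mentioning '.html'.
        text = re.sub(r'(href="[^"]*)\.html(["#])', r'\1.xhtml\2', text)
        with open(os.path.join(directory, name[:-5] + ".xhtml"), "w",
                  encoding="utf-8") as f:
            f.write(text)
```

Run it once on the unzipped folder and browse the resulting .xhtml files in Firefox.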
If ‘export html’ is being fixed, could it also be fixed so that each page includes 1) a title at the top, 2) the list of what links here at the bottom, and 3) a default filename for the ‘zip’ that ends in ‘.zip’, not ‘.html’?
Thanks – Rod.


[ There also seems to be a problem with what goes into the ‘.zip’ of the wiki. On unzipping the file there are several duplicates with different sizes and timestamps. In particular philosophy.html
homotopy+theory.html
Oidification.html
group+theory.html
light+mill.html
Enriched+Category+Theory.html
homotopical+cohomology+theory.html
Homological+resolution.html
internalization.html
sheaf+and+topos+theory.html
set.html
day+convolution.html
Foundations.html
Nonabelian+algebraic+topology.html
Lie+Theory.html
Ring.html
physics.html
jim+stasheff.html ]

[ Here is some confusing stuff which may indicate type problems. If I locally save the “show view” and the “print view” versions of span as ‘xhtml’ then they can be read locally and display correctly. However, if I save them as ‘html’, then the ‘print view’ displays as a blank page, while the ‘show view’ is the wrong type, causing bad MathML displays. ]


Posted by: RodMcGuire on January 21, 2009 10:36 PM | Permalink | Reply to this

Re: Firefox (document type) problems with Export Html

Firefox decides it is of type “text/html” instead of “application/xhtml+xml” and any MathML is wrongly displayed.

That’s a generic problem for which there is no easy fix.

When served over the Web, the file extension is irrelevant. The MIME-type of the page is determined by the Content-Type header sent by the webserver. Firefox (and other XHTML-capable browsers) get application/xhtml+xml; Internet Explorer (in particular) gets text/html.

When viewing files on your local disk, the browser has only the file extension to rely upon. If we used .xhtml, IE would barf. If we use .html, then nobody barfs, but Firefox won’t render the MathML.

I guess the question comes down to: what do you intend to do with the exported files?

One answer is: import them into another Instiki installation. In that case, the filename extensions are irrelevant.

That applies whether the Instiki installation is on a remote webserver, or running on your local machine.

Another answer is: build a static website out of these pages. In that case, too, the webserver takes care of setting the Content-Type header.

A third answer is: view these as static files on my local machine, without importing them into Instiki, or serving them with a locally-running webserver.

In that case, you have a problem. Whatever file extension you choose, some browsers won’t handle it correctly.
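For that third case there is a workaround short of renaming: run a tiny local webserver that sends the XHTML Content-Type for the exported .html files, so the file extension no longer matters. A sketch using Python's standard library (the handler name is made up; this is not an Instiki feature, and it only helps XHTML-capable browsers like Firefox):

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler


class XhtmlHandler(SimpleHTTPRequestHandler):
    # Override the default text/html guess so '.html' files are served
    # as application/xhtml+xml and Firefox renders the MathML.
    extensions_map = {**SimpleHTTPRequestHandler.extensions_map,
                      ".html": "application/xhtml+xml"}


def serve(port=8000):
    """Run from the unzipped export directory, then browse
    http://localhost:8000/span.html in Firefox."""
    HTTPServer(("localhost", port), XhtmlHandler).serve_forever()
```

Since the MIME type now comes from a Content-Type header rather than the extension, nothing on disk needs to be touched.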

Un unzipping the file there are several duplicates with different sizes and timestamps.

That’s simply a consequence of the decision, by the owners, to avoid cleaning up the site by removing obsolete pages.

There’s a “Philosophy” page and a “philosophy” page. The former consists, in its entirety, of a pointer to the latter. Similarly, there’s a “jim stasheff” and a “Jim Stasheff” page, etc.

Posted by: Jacques Distler on January 22, 2009 12:30 AM | Permalink | PGP Sig | Reply to this

Re: Firefox (document type) problems with Export Html

Jacques Distler sez: “When viewing files on your local disk, the browser has only the file extension to rely upon. If we used .xhtml, IE would barf. If we use .html, then nobody barfs, but Firefox won’t render the MathML. I guess the question comes down to: what do you intend to do with the exported files?”

I would like to download the wiki so I can see it and navigate through it when I am disconnected from the Web, say on a laptop when I am on an airplane; in general it is much faster to download the whole wiki than to fetch 20 pages over the Web. Rather than try to work for a common solution that really doesn’t work for either browser (both FF and IE won’t display the MathML in your .html files), why don’t you give Export the options of ‘XHtml (best for Firefox) and HTML (best for IE)’?

Jacques Distler sez (re: when unzipping the file there are several duplicates with different sizes and timestamps): “That’s simply a consequence of the decision, by the owners, to avoid cleaning up the site by removing obsolete pages. There’s a “Philosophy” page and a “philosophy” page…”

In the unzipped downloaded files I haven’t noticed any case conversion. There are some file names that start with upper case and some that start with lower case. Maybe the unzippers I’ve been using barf if two file names differ in initial case, but I think that that is unlikely. But then again I’ve looked at an unextracted listing that contains ‘Philosophy.html’ and ‘philosophy.html’ and find that only one is extracted. Screwy!
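Whether the unzip tools are really choking on case collisions can be checked directly against the zip’s table of contents, before extracting anything. A small sketch with Python’s zipfile module (illustration only):

```python
import zipfile
from collections import defaultdict


def case_collisions(zip_path):
    """Return groups of member names in the archive that differ only
    in letter case (e.g. 'Philosophy.html' vs 'philosophy.html')."""
    groups = defaultdict(list)
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            groups[name.lower()].append(name)
    return [names for names in groups.values() if len(names) > 1]
```

On a case-insensitive filesystem (Windows, default macOS), each group it reports can yield at most one extracted file, which would explain the missing pages.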

Anyway, how about splitting Export Html into two options (Html and XHtml) so at least one works for Firefox (does anybody reading this not use FF, given that you sometimes post SVG?)? Then there are my two other desires.

1) A title at the top of each exported page.

2) At the bottom, the list of ‘what links here’.

Thanks - Rod.

Posted by: RodMcguire on January 22, 2009 4:38 PM | Permalink | Reply to this

Re: Firefox (document type) problems with Export Html

or in general it is much faster to download the whole wiki than to web access 20 pages.

But much more overhead on the server side, as it has to render 500+ pages just to serve you your zip file. (As opposed to serving the existing 20 pages from the cache.)

I would try to be judicious in your use of this “feature.”

Rather than try to work for a common solution that really doesn’t work for either browser (both FF and IE won’t display the MathML in your .html files), why don’t you give Export the options of ‘XHtml (best for Firefox) and HTML (best for IE)’?

Because it’s easy to auto-detect which kind of file to serve.

1) A title at the top of each exported page.

Easy enough.

2) At the bottom, the list of ‘what links here’.

Naw.

That’s expensive. And creating the zip file in the first place is already expensive enough.

Posted by: Jacques Distler on January 23, 2009 5:15 PM | Permalink | PGP Sig | Reply to this

Equation numbering

It occurs to me to point out to Todd (and perhaps others) that equation numbering is supported (both here and in Instiki).

You did realize that, I hope.

Posted by: Jacques Distler on January 24, 2009 9:56 PM | Permalink | PGP Sig | Reply to this

Re: Equation numbering

Well, I don’t claim to be at all competent in software-related matters, so as a general rule, don’t place your hopes too high.

That said, I do pay attention when people come by to improve on things I write in nLab, and this was no exception. The point has been noted.

Posted by: Todd Trimble on January 24, 2009 11:26 PM | Permalink | Reply to this

Re: Equation numbering

It’s quite up to you whether to take my advice on the matter. But I think automatically numbered and hyperlinked equations are worth having, particularly when they are just as easy to create.

Similarly, I happen to think that the Theorem Environments are underutilized on the nLabs. But, again, tastes may differ.

Posted by: Jacques Distler on January 25, 2009 3:57 AM | Permalink | PGP Sig | Reply to this

Re: Equation numbering

Thanks for the advice, Jacques – you make some good points, and I do appreciate them.

And I did realize, I hope, that it’s quite up to me whether to take it. ;-)

Posted by: Todd Trimble on January 25, 2009 12:52 PM | Permalink | Reply to this

Homotopy hypothesis and strict n-fold groupoids

In our discussion at nLab: homotopy hypothesis, Tim Porter and Ronnie Brown luckily contributed a couple of important aspects, in particular emphasizing that strict n-fold groupoids already model all homotopy n-types, due to a result by Loday from 1982.

I have to admit that I didn’t properly appreciate this statement in its generality before, even though I did know about crossed squares and their role in the homotopy hypothesis. So the point is, I gather, that we look at strict n-fold groupoids for which the various 1-categories of 1-morphisms may be different and without “connection” (i.e. without certain thin fillers)?

Anyway, I was starting to add commented references on this to the nLab. But I am not sure what the best canonical references are. Help is appreciated.

Posted by: Urs Schreiber on January 27, 2009 8:06 PM | Permalink | Reply to this

Re: nLab – General Discussion

Hi guys. What’s the nLab convention for choosing between the terms ‘2-category’, ‘weak 2-category’, ‘strict 2-category’ and ‘bicategory’, and what does ‘2-category’ mean on the nLab? There are some inconsistencies; for example, the page on 2-categorical limits contains all three of ‘2-category’, ‘strict 2-category’ and ‘bicategory’. This is incredibly confusing! When reading about something like 2-categorical limits, which is confusing enough as it is, this sort of issue can cause severe psychological harm to the reader.

Posted by: Jamie Vicary on January 29, 2009 1:32 PM | Permalink | Reply to this

Re: nLab – General Discussion

We agreed that “n-category” is as weak as possible, and everything else has to be explicitly qualified.

Yes, there are probably inconsistencies. Your help is appreciated. The general rule is: if you come across anything you find annoying or even psychologically harmful, drop everything else, hit “edit” and improve it! :-)

Posted by: Urs Schreiber on January 29, 2009 1:39 PM | Permalink | Reply to this

Re: nLab – General Discussion

Will do!

Posted by: Jamie Vicary on January 29, 2009 2:12 PM | Permalink | Reply to this

Re: nLab – General Discussion

… or, I should add, if you don’t want to or cannot improve on it, drop lots of these green query boxes saying “this is unclear”, “what is it really that is meant here?”, “this conflicts with the definition above”, etc.

Posted by: Urs Schreiber on January 29, 2009 1:44 PM | Permalink | Reply to this

Re: nLab – nForum

Just to reiterate, the n-Forum is open for discussion at

nForum

which should hopefully make it easier to follow discussions and see when new ones crop up.

Posted by: Andrew Stacey on January 29, 2009 3:36 PM | Permalink | Reply to this

Re: nLab – General Discussion

In case anyone is wondering: after a time-out of a couple of hours, the nLab is back online.

(Thanks, of course, to Jacques Distler. Hopefully I’ll eventually be able to handle such things myself…)

Posted by: Urs Schreiber on January 30, 2009 2:39 PM | Permalink | Reply to this

Cubical Stuff

I was just reading connection. The idea looks neat, but I got lost in the equations.

In the past, I’ve tracked down references on cubical “stuff” and haven’t made much progress understanding things even though I feel I should be able to. I think the reason is that I can process pictures much better than strings of equations. I am pretty confident that the equations likely follow directly from a fairly simple picture, e.g. “If you understand this diagram all these equations follow directly.” That is the way I feel when I see any string of equations related to cubes. Is that true? Has someone bothered drawing those pictures? If not, could someone who understands cubical categories, cubical sets, and connections have mercy and draw some pictures and put them on the nLab? What is the arrow theoretic way to define cubical stuff?

Posted by: Eric on February 18, 2009 3:37 PM | Permalink | Reply to this

Re: Cubical Stuff

Hi Eric,

here is the quick idea:

in a cubical set, you are guaranteed for every n-cell (which I draw as a 1-cell) a \stackrel{f}{\to} b that there is the identity (n+1)-cell (which I’ll draw as a 2-cell) of the form \array{ a &\stackrel{f}{\to} & b \\ \downarrow^{Id} &\Downarrow^{Id}& \downarrow^{Id} \\ a &\stackrel{f}{\to} & b } \,.

A cubical set is said to have a connection (no relation to the parallel transport meaning of “connection”!) if in addition it has for every n-cell a \stackrel{f}{\to} b also (n+1)-cells of the form

\array{ a &\stackrel{f}{\to} & b \\ \downarrow^{f} &\Downarrow^{Id}& \downarrow^{Id} \\ b &\stackrel{Id}{\to} & b } \,.

And so forth. You should think of this as saying that the “thin” cell

\array{ a \stackrel{\stackrel{f}{\to}}{\stackrel{f}{\to}} b }

is regarded as a degenerate cube by the cubical set in all the possible ways.

So it’s a very natural condition, in particular if you think of all these cubical cells as cubical paths in some space.
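Thinking of the cells as cubical paths, both kinds of square can be written down quite concretely: an n-cube in a space X is a map [0,1]^n → X, an ordinary degeneracy ignores one coordinate, and the connection precomposes with max. A toy sketch in Python (one common orientation convention; the function names are made up for illustration):

```python
def degeneracy(f):
    """Ordinary degeneracy of a 1-cube f: the 'identity cylinder'
    square, constant in the new direction."""
    return lambda s, t: f(t)


def connection(f):
    """Connection ('extra degeneracy') of a 1-cube f: the square with
    two adjacent edges f and the other two constant at the endpoint
    f(1), obtained by precomposing f with max."""
    return lambda s, t: f(max(s, t))
```

Sampling the four edges of connection(f) recovers the square above: two adjacent edges are f, the other two are identities at f(1), whereas degeneracy(f) has f on two opposite edges.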

Posted by: Urs Schreiber on February 18, 2009 4:19 PM | Permalink | Reply to this

Re: Cubical Stuff

Thanks Urs. That is EXACTLY the kind of thing I was looking for. Can similar pictures be drawn for all those defining relations, e.g. face maps and degeneracy maps, etc., for cubical sets?

For example, the relations in Proposition 2.8 (page 9) of

Sjoerd Crans, Pasting schemes for the monoidal biclosed structure on \omega-Cat

What does a face map look like? What does a degeneracy map look like? The relations probably follow from simple pictures like the diagrams you gave.

Posted by: Eric on February 18, 2009 4:56 PM | Permalink | Reply to this

Re: Cubical Stuff

no relation to the parallel transport meaning of “connection”!

Oh! The unfortunate clash of terminology AND notation, i.e. \Gamma’s, compounds the likelihood of confusion :)

Posted by: Eric on February 18, 2009 4:59 PM | Permalink | Reply to this

Re: Cubical Stuff

Can similar pictures be drawn for all those defining relations, e.g. face maps and degeneracy maps, etc., for cubical sets?

Sure! I have now started describing the idea a bit more in the Idea section at nLab: cubical set. Have a look and let me know if this helps.

Posted by: Urs Schreiber on February 18, 2009 9:37 PM | Permalink | Reply to this

Re: Cubical Stuff

not diagrams, pictures

e.g. from \cdot to ______

or in words the inclusions of the faces where x_i = 0 or 1

and the projections in the i-th direction

but the extras?

Posted by: jim stasheff on February 18, 2009 9:49 PM | Permalink | Reply to this

Re: Cubical Stuff

I saw that! Yes, it is helpful thanks. Only it raises more questions :)

If only I had more time to think…

Posted by: Eric on February 18, 2009 10:20 PM | Permalink | Reply to this

Re: Cubical Stuff

This is a test to see if I’ve got the basic idea…

It looks like degeneracy maps are very much like identity assigning maps. Each node gets mapped to an “identity loop”, each edge gets mapped to an “identity cylinder”, etc.

The extra degeneracy maps are kind of like degeneracy maps except some of the cells get “pinched”. So instead of an edge getting mapped to an identity cylinder, it might get mapped to an “identity cone” since one node gets mapped to an identity loop and the other remains pinched as a node.

Posted by: Eric on February 19, 2009 4:13 PM | Permalink | Reply to this

Re: Cubical Stuff

I think the reason is that I can process pictures much better than strings of equations.

How about string pictures, i.e., string diagrams? The kinds of diagrams that look a little like Feynman diagrams, that people around here often enjoy using?

Somewhere in the entry on cubical sets I mentioned that the category \Box on which cubical sets are based (they are functors \Box^{op} \to Set) could be described as the walking monoidal category with an interval object x with two endpoints and a squashing map

i_0, i_1: 1 \overset{\to}{\to} x \qquad p: x \to 1

where 1 is the monoidal unit.

This may not look all that helpful at first, so I should say that one of the real points behind this description is to encourage people to think about cubical notions in terms of string diagrams. Maybe someone will roll up the sleeves and include string diagram pictures for entries like connections. Maybe I should have a go at this myself.

Posted by: Todd Trimble on February 18, 2009 4:21 PM | Permalink | Reply to this

Re: Cubical Stuff

Pictures for the extra degeneracies would help - is there a link? to whom do we owe the name?

Posted by: jim stasheff on February 18, 2009 9:42 PM | Permalink | Reply to this

Re: Cubical Stuff

I made an attempt at a picture of an extra degeneracy map. I hope it is in the ballpark of being correct.

Posted by: Eric on February 19, 2009 1:31 AM | Permalink | Reply to this

Re: Cubical Stuff

That link took me to the cubical set page, but I didn’t see any picture.

Posted by: jim stasheff on February 19, 2009 2:26 PM | Permalink | Reply to this

Re: Cubical Stuff

There might be some issue with your browser’s cache not refreshing or something, but just to be clear, I was referring to this:

These extra degeneracy maps act by sending 1-cells to degenerate 2-cells of the form

\left(\array{ a&\stackrel{f}{\to}&b }\right) \;\; \mapsto \;\; \left(\array{ a & \stackrel{f}{\to} & b \\ \downarrow^{f} & \Downarrow & \downarrow^{Id} \\ b & \stackrel{Id}{\to} & b }\right) \,.

If the cubical set has this additional property, one calls it a cubical set with connection.

Posted by: Eric on February 19, 2009 3:06 PM | Permalink | Reply to this

Icon

A while ago somebody had added an icon to the nLab, which appeared in the top left corner of each page. That was nice. I felt the picture expressed the nature of the nLab well.

Now the icon has disappeared again. I know nothing of either process. But I would enjoy seeing the icon back in place!

Posted by: Urs Schreiber on February 25, 2009 10:02 AM | Permalink | Reply to this

Re: Icon

The icon is back! Looks like a giant multispan to me.

Posted by: David Corfield on February 25, 2009 5:37 PM | Permalink | Reply to this

Re: Icon

You mean: ? Go here for an explanation.

Posted by: Jacques Distler on February 26, 2009 1:54 AM | Permalink | PGP Sig | Reply to this

Re: Icon

My immediate thought was of this painting by Matisse. :D

Posted by: David Roberts on February 26, 2009 4:57 AM | Permalink | Reply to this

Re: Icon

My immediate thought was of this painting by Matisse.

Considering the title of that piece, maybe we should pick one of its blobs and make that the logo of the n-Category Lab. This would be a homage both to Instiki and (through Matisse's title) to the subject.

Posted by: Toby Bartels on February 26, 2009 7:53 PM | Permalink | Reply to this

Decomposing hom-functors into In- and Out-parts ?

Andrew Stacey has some questions for general-nonsense-experts at the new nLab entry

request for help.

The first one is this, which arose in the context of coming to grips with what Frölicher space might mean in a nice abstract setting:

Suppose we are in the context of V-enriched category theory with V a symmetric closed monoidal category. For a fixed V-category S we wish to consider triples consisting of

- a V-functor

In : S^{op} \to V

- a V-functor

Out : S \to V

- a V-natural transformation

In(-) \otimes Out(-) \to S(-,-) : S^{op} \otimes S \to V \,.

Question: where might such triples naturally live? Has something akin to such triples been considered elsewhere already? What would be the right category of such triples, and which properties does it have?

Andrew has more related questions on that page.

Posted by: Urs Schreiber on February 26, 2009 11:57 AM | Permalink | Reply to this

Re: Decomposing hom-functors into In- and Out-parts ?

Hi Andrew,

so I suppose we want to specify some nice condition on the transformation In(-) \otimes Out(-) \to S(-,-) which mimics the saturation condition on in/out plots for Frölicher spaces?

Do you have an idea what that condition might be like? That will depend on which kind of theorem one would like to hold for such triples. You might have to remind me here on where this is headed.

Ah, right, probably we want to characterize Isbell-self-dual objects in [S^{op},V]?

So maybe the goal is to put a condition on In(-) \otimes Out(-) \to S(-,-) such that the condition is satisfied precisely if

- Out(-) is the Isbell dual of In(-)

and

- In(-) is the Isbell dual of Out(-)

?

I.e. so that it is guaranteed that

Out(-) : u \mapsto [In(-), S(-,u)]

and

In(-) : u \mapsto [S(u,-), Out(-)] \,.
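For a toy instance of these two formulas, take V to be truth values and S a poset: a presheaf is then just a subset (the objects on which it is true), the first formula sends In to its set of upper bounds, and the second sends Out to its set of lower bounds. A sketch (illustration only; the function names are made up):

```python
def isbell_O(leq, elements, In):
    """O(In)(u) holds iff In(s) implies s <= u for all s:
    the upper bounds of the subset In."""
    return {u for u in elements if all(leq(s, u) for s in In)}


def isbell_spec(leq, elements, Out):
    """Spec(Out)(s) holds iff Out(u) implies s <= u for all u:
    the lower bounds of the subset Out."""
    return {s for s in elements if all(leq(s, u) for u in Out)}
```

On a chain, a down-set and its set of upper bounds form an Isbell-self-dual pair in exactly this sense, which is the Dedekind-MacNeille completion in disguise.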

Posted by: Urs Schreiber on February 26, 2009 12:30 PM | Permalink | Reply to this

Re: Decomposing hom-functors into In- and Out-parts ?

The first thing that springs to mind is V-bimodules or V-profunctors (two words for the same concept).

You have that S(-,-) is the identity V-profunctor on S, and In \otimes Out is another V-profunctor from S to S. And you’re just writing down a morphism between them.

Take a simple case. If, say, V is Vect and S is a one-object V-category, i.e., an algebra, then In is a left S-module and Out is a right S-module, so you just tensor them together to get an S-S bimodule. On the other hand, S(-,-) is S considered as an S-S bimodule in the obvious way (it is the identity on S in the bicategory of bimodules). Your natural transformation is then just a map of S-S bimodules.

[I don’t see why you say this is `decomposing’; you’re not saying that the natural transformation has to be an isomorphism, are you?]

Posted by: Simon Willerton on February 26, 2009 12:41 PM | Permalink | Reply to this

Re: Decomposing hom-functors into In- and Out-parts ?

I don’t see why you say this is `decomposing’; you’re not saying that the natural transformation has to be an isomorphism, are you?

I was just looking for a catchy word in the headline. I chose “decomposition” loosely to hint at the fact that we are not looking, in that bimodule language, at arbitrary S-bimodules, but just at those that are decomposed into a left and a right one.

Finding a better term for “decomposition” here may go some way towards answering Andrew’s question. Geometrically speaking, the idea is that we want to look at maps between test spaces U \to V and all their factorizations through a given generalized space X as

U \to X \to V \,.

So, yeah, decomposition is not a good term. But roughly something like that is going on.

Posted by: Urs Schreiber on February 26, 2009 1:03 PM | Permalink | Reply to this

Re: Decomposing hom-functors into In- and Out-parts ?

Thanks, Urs, for flagging this question.

Not being an expert in category theory in any way, I’m never sure if what I’m looking at is something new or I’m just rediscovering something everyone else knows about (hey, it rolls down a hill! Maybe if I took four of these, attached them to some logs, and also invented an internal combustion engine, I could avoid having to walk everywhere).

At this stage, I’m mainly looking for antecedents. Frölicher spaces seem to fit nicely into the setting of these “generalised spaces” which consist of, as Urs says, a presheaf and a copresheaf together with a “composition” natural transformation. Within the category of such things there are sub-categories where certain conditions are satisfied, such as, but not limited to, Isbell duality. One can see sheaves as an example of this (make the copresheaf the Isbell dual of the presheaf and you can essentially ignore it).

So my initial questions are:

  1. Have these been studied before? Either in generality or in special cases. For example, as in Simon’s example, though I think that Urs is right that one can’t just take arbitrary bimodules. It’s really pairs of modules (one left, one right) together with a pairing into the ground ring. Weak duality of topological vector spaces is another (similar) example.

  2. If so, where can I find out more?

  3. If not, what shall I call them? I’d like to call them “generalised objects” or “generalised spaces”.

  4. One can also ask that the presheaves and copresheaves be “concrete” in an appropriate manner. This is more than just a minor shift as it has a noticeable effect on what conditions can be imposed. Indeed, part of my interest in this is to see how to make a condition in the concrete case suitable for the non-concrete.

Obviously, my main interest is in smooth spaces, where the “test” category is some category of “standard” smooth spaces, but I’ll take answers in any setting!

Posted by: Andrew Stacey on February 27, 2009 8:54 AM | Permalink | Reply to this

Re: Decomposing hom-functors into In- and Out-parts ?

I think that “generalized space” or “generalized object” is too general a word. There are lots of things that might be called a generalized space or a generalized object; lots of different directions in which one might “generalize.” In some situations, ordinary sheaves deserve that title. Or maybe cosheaves. Or even just presheaves. Or maybe a generalized space should be an object of the dual of some algebraic category. Locales can be thought of as generalized spaces; so can arbitrary toposes for that matter.

Ideally, I would suggest a name that is, in some way, descriptive of this particular way to generalize the notion of space/object. For instance, “bi-presheaf,” since it is both a presheaf and a co-presheaf with some compatibility. I’m not sure how much I like the way that sounds, but at least it would be unambiguous.

Posted by: Mike Shulman on February 27, 2009 7:30 PM | Permalink | Reply to this

Re: Decomposing hom-functors into In- and Out-parts ?

For the record and for those not subscribed to the cat theory mailing list:

Andrew asked about the right name for this structure there and the answer was: this thing is called the Isbell envelope of a category.

Eventually we’ll nLabify this stuff…

Posted by: Urs Schreiber on March 9, 2009 2:52 PM | Permalink | Reply to this

Service down?

Is it just me, or is ncatlab.org down?

Posted by: Toby Bartels on March 3, 2009 9:07 PM | Permalink | Reply to this

Re: Service down?

OK, it's back now.

Posted by: Toby Bartels on March 4, 2009 6:17 AM | Permalink | Reply to this

Waldhausen cat versus cat of fib objects

Let C be a category with a zero object. It seems that the structure of a Waldhausen category on C is pretty close to being equivalent to the structure of a category of fibrant objects on the opposite category C^{op}.

Can anyone say how equivalent exactly?

Posted by: Urs Schreiber on March 17, 2009 11:39 AM | Permalink | Reply to this

Grothendieck universes

I just created an entry Grothendieck universe. But I am not the right expert to do so, and I have questions:

it seems to me that the general attitude of Grothendieck universes is opposite to that of ETCS: the definition of the former makes sense only if we assume that “everything is a set” and that sets and their elements are on the same footing, while in the latter precisely this point is supposed to be changed.

Is that right? If so, what is the analog in ETCS of Grothendieck universes, or of the role which they play in practice?

What’s the “good”, “modern” way to talk about large categories?

(I feel I asked this kind of question before. If anyone feels he or she already gave the answer in discussion here, please give it again! Thanks.)

Posted by: Urs Schreiber on March 23, 2009 10:08 PM | Permalink | Reply to this

Re: Grothendieck universes

Possibly you are thinking of the last time that I (or somebody else) mentioned strongly inaccessible cardinals to you. Those are easily formulated in structural (ETCS-like) terms, and (as the Wikipedia article explains) are equivalent to Grothendieck universes.

Posted by: Toby Bartels on March 24, 2009 2:09 AM | Permalink | Reply to this

Re: Grothendieck universes

Possibly you are thinking of the last time that I (or somebody else) mentioned strongly inaccessible cardinals to you. Those are easily formulated in structural (ETCS-like) terms, and (as the Wikipedia article explains) are equivalent to Grothendieck universes.

I think I understand the bit about the inaccessible cardinals. This just means that the Grothendieck universe is big enough (not to say “large”) to qualify as a “universe”: we can’t leave it by taking power sets, nor by taking unions.

What I am looking for (still, but your and Todd’s latest comments on the nLab helped a lot) is the global story to be told about size issues when introducing functor categories in general and Set-valued presheaf categories in particular from scratch.

So I take it the story (in its nice modern form) goes something as follows:

0) We assume that we understand and agree on the ETCS definition of the category of sets “by grace”. This assumption roots the formalism to come in something not further formally deducible.

I’ll write StrucSet for the ETCS category of sets obtained this way, to amplify how we think of it.

1) Given StrucSet we have a notion of topos objects internal to StrucSet. If everything works as expected, these should be precisely the Grothendieck universes U in their structural incarnation the way you present it, to be thought of in the following in terms of their associated StrucSet-internal categories U Set.

2) Next we consider for each such internal topos object U Set the categories internal to StrucSet which happen to come from categories

a) enriched over U Set: the U-categories;

b) internal to U Set: the U-small categories.

3) For every given StrucSet-internal category C and every U Set we consider the presheaf categories [C^{op}, U Set]. For each we are guaranteed to find a topos object V such that [C^{op}, U Set] is internal to V Set.

4) In this spirit we proceed with our theory, always formally keeping track of which internal topos objects U, V we happen to be internal to. We hope/assume/argue that none of the central statements that are derived depends in a crucial way on these choices.

Is this roughly the story of size issues for handling presheaf categories when we place ourselves in the ETCS lore?

Do we want to talk about accessible categories in this context? If so, it will be a relative notion: something like U-accessible V-categories for all pairs of internal topos objects U \subset V.

Whether or not the above is close to being right, it is this kind of story which I would like to see more clearly.

Posted by: Urs Schreiber on March 24, 2009 1:44 PM | Permalink | Reply to this

Re: Grothendieck universes

Is the approach of algebraic set theory to universes of interest here?

Posted by: David Corfield on March 26, 2009 1:03 PM | Permalink | Reply to this

Re: Grothendieck universes

Thanks for the link to algebraic set theory!

Have you already looked into this? Maybe you could write a short summary entry at the nLab.

Otherwise I’ll try to look into this later. Am currently busy with the entries on limits etc…

Posted by: Urs Schreiber on March 26, 2009 1:21 PM | Permalink | Reply to this

Re: Grothendieck universes

I don’t know much about it but I’ve started an entry. The Outline mentioned there looks like the best introduction.

Posted by: David Corfield on March 26, 2009 2:47 PM | Permalink | Reply to this

Down

The wiki has been down for several hours now (and in the middle of my editing too!).

I hope that it's helpful to say so here.

Posted by: Toby Bartels on March 25, 2009 3:22 AM | Permalink | Reply to this

Re: Down

The wiki has been down for several hours now

Andrew Stacey has managed to fix it. It is running now again.

Thanks Andrew!

(and in the middle of my editing too!).

Yes, I know. It went down right in the middle of the two of us going back and forth editing Grothendieck universe and stuff, structure, property.

(Did you not get the email I sent you a few minutes after the server went down?)

I am wondering if several people trying to access the same entry simultaneously (which the software does in principle know how to handle) is somehow related to the crashes of the system.

But in any case, Andrew Stacey has a good idea of what’s going on and has explained to me how to fix things. So maybe next time I can restart the system myself.

Thanks again, Andrew!

Posted by: Urs Schreiber on March 25, 2009 1:59 PM | Permalink | Reply to this

Always include a plus sign in Toby's email address.

Urs wrote to me in part:

(Did you not get the email I sent you a few minutes after the server went down?)

No, because (aside from not being able to read my email for an unrelated reason a little while around then) you sent it to toby@math.ucr.edu, which goes straight to the spam folder. I've warned you about this before!

Try toby+*@ugcs.caltech.edu, but replace * with something unique. (In your case, Urs, I usually replace * with urs when sending email to you, so you should have that in your address book; 10 random digits is also a good system.)

Posted by: Toby Bartels on March 25, 2009 11:31 PM | Permalink | Reply to this

Re: Down

Well, it looks like it's gone down again, and guess what!

It went down last time just as I was trying to submit an edit to [[stuff, structure, property]], and I decided to start my editing of that page by saving the same edit as before (to look at it), then combine it with the changes that Urs made in the meantime. And again it went down just as I submitted!

In both cases, I'd just made several other submissions shortly before. So it looks like there's a bug that chokes on something in my submission (which may well have a bug itself, although I can't find one).

If anybody wants to look at the offending submission, it is temporarily at http://www.ugcs.caltech.edu/~toby/stuff%2C%20structure%2C%20property.wiki. I won't try to submit it again!

Posted by: Toby Bartels on March 25, 2009 11:21 PM | Permalink | Reply to this

Re: Down

Well, it looks like it’s gone down again,

Yup, I noticed, since I was editing, too, again. I have already tried all I can (not much) to get the system back up. But this time the former trick doesn’t seem to work.

Well, we’ll sort this out eventually. For the time being I have only 5 minutes left until I have to run and catch my train. I’ll send you something by email, in case you feel like fiddling around with this…

Posted by: Urs Schreiber on March 25, 2009 11:26 PM | Permalink | Reply to this

Re: Down

OK, it's up now, and I can crash it (with my bug-inducing text) and restart it at will.

I'm not going to try to post anything for a while.

Posted by: Toby Bartels on March 26, 2009 2:01 AM | Permalink | Reply to this

Basics

In an attempt to eventually create a comprehensive collection of nLab entries for the basics of (higher) category theory, I have started (just started) to systematically create entry-linked keyword lists, chapter by chapter, for textbooks. I chose

- Kashiwara, Schapira, nLab: Categories and Sheaves

for n = 1 and

- J. Lurie, nLab: Higher Topos Theory

- J. Lurie, nLab: Stable \infty-Categories

for n = (\infty,1) (which matches nicely in topics, I think).

So far chapters 1 and 2 of Kashiwara–Schapira are nLabified fairly completely (though nLab: Kan extension needs to be expanded further); the remaining chapters have only sporadic entries linked so far. The same goes for Higher Topos Theory, and even more so for Stable \infty-Categories.

I am just mentioning this in order to say:

All help is appreciated!

Posted by: Urs Schreiber on March 26, 2009 9:48 PM | Permalink | Reply to this

On Weak Lie 2-Algebras; Re: nLab – General Discussion

Not sure what thread this should go to:

Title: On Weak Lie 2-Algebras
Author(s): Dmitry Roytenberg
Series: ESI preprints
Math. Subj.: 17B55
18G55
58H15
55U15
17B81

Abstract: A Lie $2$-algebra is a linear category equipped with a functorial bilinear operation satisfying skew-symmetry and Jacobi identity up to natural transformations which themselves obey coherence laws of their own.
Functors and natural transformations between Lie $2$-algebras can also be defined, yielding a $2$-category. Passing to the normalized chain complex gives an equivalence of $2$-categories between Lie
$2$-algebras and certain “up to homotopy” structures on the complex; for strictly skew-symmetric Lie $2$-algebras these are $L_\infty$-algebras, by a result of Baez and Crans. Lie $2$-algebras
appear naturally as infinitesimal symmetries of solutions of the Maurer–Cartan equation in some differential graded Lie algebras and $L_\infty$-algebras. In particular, (quasi-) Poisson manifolds, (quasi-) Lie bialgebroids and Courant algebroids provide large classes of examples.


Keywords: Categorification, Lie algebra, crossed module, Courant algebroid

Posted by: Jonathan Vos Post on March 27, 2009 10:12 PM | Permalink | Reply to this

Re: On Weak Lie 2-Algebras; Re: nLab – General Discussion

Not sure what thread this should go to:

We had a blog entry on this: Roytenberg on weak Lie 2-algebras.

A brief remark on this is also at nLab: L_\infty-algebra.

But it would be nice if somebody found the time to write a separate nLab entry on this idea…

Posted by: Urs Schreiber on March 29, 2009 5:09 PM | Permalink | Reply to this

Thesis sample chapter Re: nLab – General Discussion

I’ve put a chapter from my thesis on my web at the nLab. Look for the section ‘Notes’ at the bottom of the page. From the introduction:

In this chapter we consider anafunctors [Makkai, Bartels] as generalised maps between internal categories [Ehresmann], and show they formally invert fully faithful, essentially surjective functors (this localisation was developed in [Pronk_96] without anafunctors). To do so we need our ambient category S to be a site, to furnish us with a class of arrows that replaces the class of surjections in the case S = Set. The site comes with collections called covers, which give meaning to the phrase “essentially surjective” when working internal to S. A useful analogy to consider is when S = Top and the covers are open covers in the usual way. In that setting, ‘surjective’ is replaced by ‘admits local sections’, and the same is true for an arbitrary site: surjections are replaced by maps admitting local sections with respect to the given class of covers. The class of such maps does not determine the covers with which one started, and we use this to our advantage. A superextensive site (this notion is due to Toby Bartels and Mike Shulman) is one where out of each cover \{U_i \to A \mid i \in I\} we can form a single map \coprod_I U_i \to A, and use these as our covers. A map admits local sections over the original covers if and only if it admits local sections over the new covers, and it is with these that we can define anafunctors. Finally we show that different collections of covers give equivalent results if they give rise to the same collection of maps admitting local sections.

I need some feedback, as my supervisors are not category theorists. Hopefully some experts here can spare some time to cast an eye over it.

Posted by: David Roberts on April 1, 2009 2:43 AM | Permalink | Reply to this

Re: Thesis sample chapter Re: nLab – General Discussion

Great idea, putting this thesis chapter on the nLab! I should make my grad students do this sort of thing!

Posted by: John Baez on April 1, 2009 6:12 AM | Permalink | Reply to this

Re: nLab – General Discussion

nominally right now the count is: 1001 pages by 44 authors

the keyword list at Categories and Sheaves is beginning to converge

the keyword list at Higher Topos Theory is still woefully incomplete


meanwhile Tim Porter is creating a lexicon on differential graded algebra and related notions

differential graded algebra

David Roberts is beginning to add stuff related to his thesis, for instance at

bicategory of fractions

maybe one or the other bit of the latest activity makes you want to join the fun

Posted by: Urs Schreiber on April 1, 2009 10:11 PM | Permalink | Reply to this

combinatorial spectra

We are having a discussion at the entry

combinatorial spectrum.

One of the questions is something like: “What/how do people nowadays think of combinatorial spectra?”

I am not even sure about the terminology. By “combinatorial spectrum” I mean a \mathbb{Z}-graded pointed set with face and degeneracy maps, which is to a topological spectrum as a simplicial set is to a topological space.

I have taken the definition from an old article by Ken Brown (see the above entry for all details and links), who at the time just calls it a “spectrum”.

Mike suggested that this combinatorial notion of spectrum was dropped in favor of symmetric monoidal categories of topological spectra.

Does anyone know more? Is there any principled reason to disregard “combinatorial spectra”, or has the notion just been forgotten?

Posted by: Urs Schreiber on April 6, 2009 10:53 AM | Permalink | Reply to this

Re: nLab – General Discussion

Is the coyoneda lemma (Ex. 3, p. 62 of ‘Categories for the Working Mathematician’) interesting enough to merit a page?

Posted by: David Corfield on April 8, 2009 10:13 AM | Permalink | Reply to this

Re: nLab – General Discussion

Is the coyoneda lemma (Ex. 3, p. 62 of ‘Categories for the Working Mathematician’) interesting enough to merit a page?

Sure. I don’t think we should worry much about entries not being relevant enough, for whatever reason. If somebody arrives at that entry by following a descriptive link to it, it must be relevant for that user. If he or she does not follow the link, then no harm is done by the existence of the entry behind it.

Posted by: Urs Schreiber on April 8, 2009 10:44 AM | Permalink | Reply to this

Re: nLab – General Discussion

I was also implicitly asking for someone to explain the lemma. Now they have the opportunity at Coyoneda lemma.

Posted by: David Corfield on April 8, 2009 11:44 AM | Permalink | Reply to this

coyoneda lemma

David recalled the statement of the “coyoneda lemma” and then asked

What does it mean?

Let me give that a try:

First recall what the Yoneda lemma “means”, in this way:

a presheaf X(-) : S^{op} \to Set is an attempt to define a generalized space X by specifying its collections X(U) of probes by test spaces U \in S.

But every test space U can also be regarded as a generalized space in this sense, given by the assignment of its probes Y(U)(-) : V \mapsto S(V,U).

So there is a potential problem with our thinking of a general presheaf X(-) as being a rule for how to probe a generalized space:

there are now two different sensible definitions for what it might mean to probe the generalized space X by the test space U.

On the one hand, there is the set of probes X(U), the “probes by definition”.

But on the other hand there are now also the probes in the full sense of generalized spaces, [S^{op}, Set](Y(U)(-), X(-)).

If these two sets don’t coincide, our attempt to interpret the presheaf X(-) as a generalized space would be inconsistent.

The Yoneda lemma says: don’t worry, it’s okay, the two sets are naturally isomorphic and the interpretation of a presheaf as a generalized space is consistent in this sense.
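This agreement can be checked by brute force in a finite toy case. Everything below is invented for the illustration: the test category is a single object whose endomorphisms form the monoid Z/2, so a presheaf is just a set with a Z/2-action, and the representable presheaf is the monoid acting on itself.

```python
from itertools import product

# Toy test category: one object, endomorphism monoid Z/2.
# (The monoid is commutative, so presheaves and copresheaves
# coincide and we can safely ignore "op" here.)
M = [0, 1]
def compose(a, b):
    return (a + b) % 2

# A presheaf X on this category: a set with a Z/2-action.
elements = ['p', 'q']
act = {0: {'p': 'p', 'q': 'q'},   # the identity acts trivially
       1: {'p': 'q', 'q': 'p'}}   # the involution swaps p and q

# The representable presheaf Y is M acting on itself by composition.
# A natural transformation f : Y -> X is a map f : M -> elements with
# f(compose(a, m)) = act[a][f(m)] for all a, m in M.
def is_natural(f):
    return all(f[compose(a, m)] == act[a][f[m]] for a in M for m in M)

nat_maps = [dict(zip(M, vals))
            for vals in product(elements, repeat=len(M))
            if is_natural(dict(zip(M, vals)))]
```

Of the four maps M → elements, exactly two are natural, one for each possible image of the identity 0: the probes by definition and the probes in the full sense of generalized spaces agree, as the lemma promises.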

I am claiming one can read the coYoneda lemma as stating the analogous fact, now for generalized quantities: generalized functions.

A copresheaf

A : S \to Set

we want to think of as a generalized collection of functions: to every test codomain U \in S it assigns the set A(U) of co-probes of A by U, to be thought of as the set of functions from some unspecified generalized space into U.

Again, if that interpretation of co-presheaves as generalized collections of functions is to be consistent, one would want the maps of generalized functions from a generalized collection of functions A to an ordinary collection of functions S(a,-) on the test space a,

[S, Set](A, S(a,-))

to be naturally identified with the collection of assignments that send an A-function with values in d to a function on a with values in d, compatibly with change of codomain.

Now take a pen and write out what an element in

[(* \downarrow A), S](\Delta_a, \Pi)

looks like: it assigns, to each d-valued function in A, i.e. to each map of sets

* \to A(d)

a map

a \to d

in S, hence a d-valued function on a. And naturality of this assignment for all cotest spaces d means that it respects change of codomain.

So, I’d guess, the coYoneda lemma can be read as stating that the interpretation of co-presheaves as generalized collections of functions (quantities) is consistent, in this sense.

But I’d be grateful for opinions by others.

Posted by: Urs Schreiber on April 8, 2009 10:32 PM | Permalink | Reply to this

Re: coyoneda lemma

Interesting. I don’t think I ever noticed that exercise of Mac Lane’s. There is another, completely different, statement which I am accustomed to call the “co-Yoneda lemma” (I like that spelling and capitalization better than “Coyoneda”), namely the following “unital” property for tensor products of functors:

C(x,-) \otimes_C F \cong F x

For comparison, note that the ordinary Yoneda lemma can be rephrased as the dual unitality property for a hom of functors:

\hom_C(C(-,x), F) \cong F x

These both also make perfect sense in an enriched context, whereas that seems unlikely for the one involving comma categories.

I honestly don’t remember whether I read the term “co-Yoneda lemma” for this property somewhere, or made it up myself. I have to say it feels like a more natural dual version of the Yoneda lemma to me than Mac Lane’s does. In Urs’ explanation I get lost somewhere around “the collection of assignments that send an A-function with values in d to a function on a with values in d”, although it could just be that it’s late at night. But the lead-up before that really made it sound like one should want to have [S, Set](A, S(a,-)) \cong A(a), which I don’t think is ever going to be true.

Posted by: Mike Shulman on April 8, 2009 11:56 PM | Permalink | Reply to this

Re: coyoneda lemma

Hi Mike,

it’s late for me, too, here, and I saw your message here only after typing a version of my piece into the nLab. Feel free to criticize, but one comment: you write

the lead-up before that really made it sound like one should want to have [S, Set](A, S(a,-)) \cong A(a), which I don’t think is ever going to be true.

I wouldn’t think that this lead-up piece suggests this: the lead-up piece says that we should want to think of [S, Set](A, S(a,-)) as the set of maps of functions on some X to functions on some a.

Now, because functions want to pull back and not be pushed forward, we do not expect that such a map comes from an a-valued function f : X \to a in A(a).

\array{ d && d \\ \uparrow^{\phi \in A(d)} && \uparrow^{?!? \in S(a,d)} \\ X &\stackrel{f \in A(a)}{\to}& a } \,.

Posted by: Urs Schreiber on April 9, 2009 12:36 AM | Permalink | Reply to this

Re: coyoneda lemma

Well, allow me to cherry-pick some quotes from your lead-up. First regarding the Yoneda lemma:

a presheaf X(-) : S^{op} \to Set is an attempt to define a generalized space X by specifying its collections X(U) of probes by test spaces U \in S. But every test space U can also be regarded as a generalized space … there are now two different sensible definitions for what it might mean to probe the generalized space X by the test space U…. the set of probes X(U), the “probes by definition.” … [and] the probes in the full sense of generalized spaces [S^{op}, Set](Y(U)(-), X(-)). … The Yoneda lemma says: don’t worry, it’s okay, the two sets are naturally isomorphic…

Now regarding the dual (I’ve taken the liberty of making some notation consistent):

A copresheaf A : S \to Set we want to think of as a generalized collection of functions: to every test codomain U \in S it assigns the set A(U) of co-probes of A by U, to be thought of as the set of functions from some unspecified generalized space into U…. one would want … [S, Set](A(-), Y(U)(-)) to be naturally identified with…

It feels very much to me like the natural completion of this sentence is

the “co-probes by definition” A(U).

But of course this is false.

The “real” Yoneda lemma effects a simplification. On the left we have [S^{op}, Set](Y(U)(-), X(-)), an element of which consists of a lot of data: for every V \in S we have a function from Y(U)(V) = S(V,U) into X(V), satisfying some compatibility conditions. On the right we just have the single set X(U). This simplification is obtained by the clever device of looking at the image of 1_U \in Y(U)(U) and observing that all the other data on the left is completely determined by this.

But Mac Lane’s “co-Yoneda lemma” does no simplification. On the left we have [S, Set](A(-), Y(U)(-)), an element of which consists of functions from A(V) to Y(U)(V) = S(U,V) for all V \in S, and hence of a morphism \phi(x) : U \to V for every V and x \in A(V). On the right we have [(* \downarrow A), S](\Delta_U, \Pi), an element of which consists of a morphism \psi(x) : U = \Delta_U(x) \to \Pi(x) = V for every V and every x \in A(V). The data on each side is evidently identical; there is no simplification and no clever device.

However, what I call the co-Yoneda lemma does effect a simplification. On the left we have S(U,-) \otimes_S X, an element of which consists of a morphism \alpha : U \to V together with an x \in X(V), considered modulo a complicated equivalence relation, which is not even given directly as an equivalence relation but is merely generated by the relation (\alpha, X(\beta)(x)) \sim (\beta \circ \alpha, x). On the right we have the single set X(U). And analogously to the Yoneda lemma, this simplification is obtained by the clever device of looking at the pairs (1_U, x) on the left and observing that every pair is equivalent to a unique one of these.
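Made fully finite, this quotient computation can be carried out directly. The one-object category with endomorphism monoid Z/2, the action, and all names below are invented for the illustration (the monoid is commutative, so the variance of X is invisible here).

```python
# One object, endomorphism monoid Z/2, acting on a two-element set.
M = [0, 1]
def compose(a, b):
    return (a + b) % 2

elements = ['p', 'q']
act = {0: {'p': 'p', 'q': 'q'},
       1: {'p': 'q', 'q': 'p'}}

# The tensor S(U,-) (x)_S X: pairs (alpha, x), modulo the relation
# generated by (alpha, act[beta](x)) ~ (compose(beta, alpha), x).
pairs = [(alpha, x) for alpha in M for x in elements]

def related(p, q):
    """One generating step of the relation, in either direction."""
    (a1, x1), (a2, x2) = p, q
    return any(a2 == compose(b, a1) and x1 == act[b][x2] for b in M) or \
           any(a1 == compose(b, a2) and x2 == act[b][x1] for b in M)

# Saturate: repeatedly merge classes joined by a generating step.
classes = [{p} for p in pairs]
merged = True
while merged:
    merged = False
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            if any(related(p, q) for p in classes[i] for q in classes[j]):
                classes[i] |= classes.pop(j)
                merged = True
                break
        if merged:
            break
```

Each resulting class contains exactly one pair of the form (identity, x), so the quotient is in bijection with the underlying set of the action, i.e. with X(U).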

Posted by: Mike Shulman on April 9, 2009 9:58 AM | Permalink | Reply to this

Re: coyoneda lemma

It feels very much to me like the natural completion of this sentence is

the “co-probes by definition” A(U).

But of course this is false.

Yes, but also it shouldn’t feel like that, if you pick up the heuristics that I am suggesting.

Well, if this doesn’t resonate with anyone else I’ll stop insisting on it, but just for the record one more attempt to clarify what I mean:

For presheaves Y(U)(-) and X(-) that we think of as spaces, Hom(Y(U)(-), X(-)) should be maps of the space U into the space X, hence probes of X by U.

But for co-presheaves S(a,-) and A(-) that we want to think of as collections of functions on spaces a and Spec(A), Hom(A(-), S(a,-)) should be maps of functions on Spec(A) to functions on a. Such a map is indeed not naturally induced from a map Spec(A) \to a, which we would want to think of as an element of A(a) – this is what you keep pointing out – rather, it is naturally thought of as coming from a map a \to Spec(A): because functions pull back along maps of spaces and do not push forward.

So it’s all consistent.

I still feel this is a good heuristic that captures what’s going on, but I won’t insist on this if I remain the only one thinking so.

Posted by: Urs Schreiber on April 9, 2009 10:51 AM | Permalink | Reply to this

Re: coyoneda lemma

Ah-hah! I understand; I’m sorry for being dense. In fact, I seem to remember taking too long to figure this out once before, and then I must have forgotten about it.

In case I’ve managed to confuse anyone else, let me explain. A co-presheaf A can, in fact, be considered as a “generalized object of S” which is characterized by the maps from it to “honest” objects of S, in a completely dual way to how a presheaf X is a “generalized object of S” characterized by the maps to it from honest objects of S. The problem is that a morphism of co-presheaves goes in the wrong direction for this, since (as you say) sets of functions pull back, rather than push forward, along maps of spaces. So really the category of generalized spaces co-probed by objects of S is [S, Set]^{op}. This should also have been obvious to me from the get-go because the co-Yoneda embedding from S lands not in [S, Set] but in [S, Set]^{op}.

Now, just as in the case of presheaves, there are two sets that we want to call the co-probes of a co-presheaf A by an object U \in S, namely the set A(U) of “co-probes by definition” and the set [S, Set]^{op}(A, Y(U)) of maps of generalized spaces from A to U. But now the (ordinary!) Yoneda lemma tells us that [S, Set]^{op}(A, Y(U)) \cong [S, Set](Y(U), A) \cong A(U), so we don’t have to worry.

So, my claim that your motivational argument led to a false guess was completely bogus. However, now it seems to me that your motivational argument really just motivates the ordinary Yoneda lemma again, Mac Lane’s “co-Yoneda lemma” being basically a tautology.

Posted by: Mike Shulman on April 9, 2009 9:00 PM | Permalink | Reply to this

Re: coyoneda lemma

That's just what I was telling David here: #. I wondered why you all were going on about a co-Yoneda Lemma!

Posted by: Toby Bartels on April 9, 2009 10:10 PM | Permalink | Reply to this

Re: coyoneda lemma

However, now it seems to me that your motivational argument really just motivates the ordinary Yoneda lemma again,

Maybe the main punchline of our discussion is: the term “coYoneda lemma” as used by Mac Lane (and maybe introduced by Kan, to whom Mac Lane attributes the statement) is not really well chosen.

What has much more right to be called “co-Yoneda” is, as you said, the formula

C(x,-) \otimes_C F \simeq F x

or more explicitly

F(x) \simeq \int^{c \in C} C(x,c) \otimes F(c) \,.

Mac Lane’s “co-Yoneda lemma” being basically a tautology.

Agreed.

Posted by: Urs Schreiber on April 10, 2009 9:46 AM | Permalink | Reply to this

Re: coyoneda lemma

Does that mean we can change the nLab page (and maybe rename it to “coYoneda lemma” while we’re at it)?

Posted by: Mike Shulman on April 11, 2009 10:53 AM | Permalink | Reply to this

Re: coyoneda lemma

I wrote up a tiny outline of the proof over at the nLab, but the real meaning should be expanded upon. I may come around to it later, but a passing thought is that this functor

(Set^D)^{op} \to Set^{D^{op}} : K \mapsto (a \mapsto Nat(K, hom(a,-)))

is part of an ambimorphic adjunction called “conjugation” by Lawvere. If we think of (Set^D)^{op} as the free completion of D (at least in the case where D is small), then this functor is the unique (up to isomorphism) continuous extension of the Yoneda embedding

y_D : D \to Set^{D^{op}}

along the co-Yoneda embedding D \to (Set^D)^{op}, using of course the fact that Set^{D^{op}} is complete.

The left adjoint of conjugation is defined by a dual formula:

Set^{D^{op}} \to (Set^D)^{op} : H \mapsto Nat(H, D(-,a))^{op} : D^{op} \to Set^{op}

and this of course is the unique cocontinuous extension of co-Yoneda along Yoneda.

It’s pretty much the same situation we were considering once in a Comparative Smoothology thread, when we were discussing dualities of algebro-geometric type, following some remarks of Lawvere. On the “algebraic side” we consider full subcategories of Set^D; e.g., when D is the category opposite to finitely presented commutative rings, we consider the full subcategory of left exact functors D \to Set and get back the category of commutative rings. On the “geometric side”, we consider a suitable sheaf subtopos of Set^{D^{op}}. Lawvere explains that basic dualities of algebro-geometric type are just restrictions of the conjugation adjunction described above. (I think over in that thread, we were considering the case where D is a site of Euclidean spaces, considered as probes for smooth sets.)

I call it an ambimorphic adjunction because the contravariant adjunction between Set^D and Set^{D^{op}} is induced by homming into hom_D(-,-), much as Galois connections between power sets P X and P Y are induced by homming into relations R \in P(X \times Y), seen ambimorphically as either a P Y-valued function on X or a P X-valued function on Y.
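The power-set case can be spelled out completely. The sets X, Y and the relation R below are invented for the illustration, but the adjunction law they satisfy is the general one:

```python
# A Galois connection induced by homming into a relation R in P(X x Y),
# with R read ambimorphically in both directions.
X = {1, 2, 3}
Y = {'a', 'b'}
R = {(1, 'a'), (2, 'a'), (2, 'b')}

def right(S):
    """All y related to every x in S: the P Y-valued reading of R."""
    return {y for y in Y if all((x, y) in R for x in S)}

def left(T):
    """All x related to every y in T: the P X-valued reading of R."""
    return {x for x in X if all((x, y) in R for y in T)}
```

For every pair of subsets one then has T ⊆ right(S) precisely when S ⊆ left(T), the contravariant adjunction that the conjugation between Set^D and Set^{D^{op}} generalizes.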

I somehow feel these observations could be part of a larger story about co-Yoneda.

Posted by: Todd Trimble on April 9, 2009 1:29 AM | Permalink | Reply to this

Re: coyoneda lemma

Thanks, Todd, that’s very useful.

Did you indeed mean to delete everything from rev 3 without a comment? Maybe we can at least reinstate the comments in the References section and the pointer to the blog discussion here?

Posted by: Urs Schreiber on April 9, 2009 11:02 AM | Permalink | Reply to this

Re: coyoneda lemma

I deleted a bunch of stuff? No, certainly I didn’t mean to.

Perhaps what happened is that several of us were trying to edit at the same time (with me dawdling or tending to kids and Urs industriously getting stuff done, and then I was the last to hit ‘submit’), because the locking mechanism didn’t work or isn’t working.

Posted by: Todd Trimble on April 9, 2009 12:14 PM | Permalink | Reply to this

Re: coyoneda lemma

because the locking mechanism didn’t work or isn’t working.

Yes, it must have failed. It certainly does work sometimes, maybe even most of the time (hard to tell without looking at logs).

In that “revision number 3” (see the entry’s history) I had added two things:

- a bunch of minor polishing, added links, etc. ;

- and added a “heuristics” part on how to think of Hom_{co-presheaves}(A(-), S(a,-)) as maps between generalized collections of functions.

I had thought what I typed should be uncontroversial in its essence, if maybe sub-optimal in its form. Now with Mike’s reaction # it seems that maybe the form was sub-optimal to the extent of making the uncontroversial look controversial.

I don’t know. Maybe if your kids grant you a second, you could have a glance at that “revision number 3” and see what of it you deem appropriate to merge with your current version, rev 4.

Posted by: Urs Schreiber on April 9, 2009 12:27 PM | Permalink | Reply to this

Re: coyoneda lemma

I have combined the two versions once more.

(I identified Todd's changes by saving local copies of Todd's version and the version that both Urs and Todd edited, then running my own local diff on them. This turned out to be a change of notation that appeared twice and a new section that used this notation. Then I applied these changes to Urs's version.)

Posted by: Toby Bartels on April 11, 2009 1:46 AM | Permalink | Reply to this

Re: coyoneda lemma

Thanks, Toby!

Maybe we need yet another major revision now, after this discussion here. It seems to me that we should say:

1) Mac Lane called the following statement the coYoneda lemma.

2) It has a nice quick abstract nonsense proof in terms of lax pullbacks, as Todd indicated.

3) But maybe what rather deserves to be called by the name coYoneda is F x = (C(x,-) \otimes_C F). Because…

4) And note also that of course there is the ordinary Yoneda lemma for opposite categories.

5) Finally, a few words on how to think of co-presheaves as generalized collections of functions, and how all of the above can be thought of from this perspective.

What do you all think?

Posted by: Urs Schreiber on April 11, 2009 11:46 AM | Permalink | Reply to this

Re: coyoneda lemma

Sounds good to me, except that if we are in agreement that F x = C(x,-) \otimes_C F is a better thing to call the coYoneda lemma, I would suggest we start the page with that, and then only later (or elsewhere) have a discussion about the statement that Mac Lane gave that name to.

Posted by: Mike Shulman on April 11, 2009 12:58 PM | Permalink | Reply to this

Re: coyoneda lemma

I started reworking along the lines we said: co-Yoneda lemma

Posted by: Urs Schreiber on April 15, 2009 10:41 AM | Permalink | Reply to this

Re: coyoneda lemma

the real meaning should be expanded upon. I may come around to it later, but a passing thought is that this functor

(Set^D)^{op} \to Set^{D^{op}} : K \mapsto (a \mapsto Nat(K, hom(a,-)))

is part of an ambimorphic adjunction called “conjugation” by Lawvere.

There is a bit on dualizing objects and the like on the nLab. In particular, Andrew Stacey put some energy into the closely related entry Isbell envelope. We should expand on that eventually.

While I think I am following that, I am not sure in which sense this “explains” the coYoneda lemma, in particular its right-hand side, with the functor category [(* \downarrow K), D] appearing.

Posted by: Urs Schreiber on April 9, 2009 11:53 AM | Permalink | Reply to this

Re: coyoneda lemma

As I say, it was a passing thought. :-) However, secretly I was thinking of connecting co-Yoneda with some things we were discussing in email not long ago.

The proof of the result in that Mac Lane exercise is already an exercise in shuffling around tautologies, but here is one way of thinking about the result. I will write El(G) for the category of elements of G, rather than (* \darr G).

  • We know that Set classifies discrete fibrations, in the sense that a functor G : D \to Set classifies the discrete fibration

    Q = \Pi_G : El(G) \to D

    and natural transformations \alpha : G \to F correspond to maps of fibrations

    El(G) \to El(F)

    (i.e., functors which commute on the nose with the projections \Pi_G, \Pi_F to the base category D).

  • This applies in particular to F = hom(a,-). Notice the category of elements El(hom(a,-)) is the co-slice (a \darr D), with its usual projection \Pi to D.
  • However, (a \darr D) is the lax pullback appearing in

    \array{ (a \darr D) & \stackrel{\Pi}{\to} & D \\ \darr & \neArrow & \darr Id \\ * & \underset{a}{\to} & D }

    and so a fibration map El(G) \to (a \darr D) corresponds exactly to a lax square

    \array{ El(G) & \stackrel{\Pi_G}{\to} & D \\ \darr & \neArrow & \darr Id \\ * & \underset{a}{\to} & D }

    and thus we obtain the co-Yoneda lemma in the sense of Mac Lane’s exercise.

It’s just more tautology-shuffling, but using buzzwords which are familiar in these here parts.

Posted by: Todd Trimble on April 9, 2009 5:36 PM | Permalink | Reply to this

Re: coyoneda lemma

It’s just more tautology-shuffling, but using buzzwords which are familiar in these here parts.

Very nice, I see. So there is something much much deeper going on than it seemed on first sight. I’ll have to think about this…

Which is difficult, because I also have to run to catch a train! Hopefully you or some other kind soul finds the energy to copy this jewel into the nLab entry on co-Yoneda.

Posted by: Urs Schreiber on April 9, 2009 7:04 PM | Permalink | Reply to this

why (infinity,1)-categories?

Graduate students here at my institute are planning to have a “Journal Club” on (\infty,1)-categories. Somebody asked me to say a few words about why one should want to start to study this stuff.

I thought answering that question might be a good thing to try on the nLab, so I started typing some thoughts into a new entry why (\infty,1)-categories?.

It’s not complete even by my standards, but I’ll have to run to catch my train into the Easter weekend now.

Maybe somebody would enjoy adding his or her own thoughts, or revising mine.

Posted by: Urs Schreiber on April 9, 2009 6:59 PM | Permalink | Reply to this

Axiom of Infinity

I just read the nLab axiom of infinity page. Aside from the very cool sound of it (i.e. it would make a good name for a rock band), it seemed to group a lot of my reservations about physics into one little concept. Here is the current description:

In the foundations of mathematics, the axiom of infinity asserts that infinite sets exist. Infinite sets cannot be constructed from finite sets, so their existence must be posited as an extra axiom.

This is fine when considering the foundation of mathematics, but when you start thinking about foundations of physics, I cannot think of a “physical” justification as to why infinite sets are posited at all other than by historic accident.

What does the world of n-categories and higher gauge theories look like when you do not introduce an “axiom of infinity”? I would think things would become far simpler and you would be hard pressed to find any consequences that were inconsistent with physical observations.

Posted by: Eric on April 11, 2009 7:16 PM | Permalink | Reply to this

Re: Axiom of Infinity

Continuous spectra differ from discrete spectra. To have a continuum, you have to introduce infinity.

Posted by: Jonathan Vos Post on April 11, 2009 7:32 PM | Permalink | Reply to this

Re: Axiom of Infinity

Good point.

However, if time were “finite” then spectra would be discrete. On large enough time scales, e.g. on the order of the age of the universe, the discrete spectra would appear to be approximately a continuum.

You have to assume an “axiom of infinity” in your description of time to get a spectrum that is a continuum. There is no way I can think of to discern a continuum spectrum from a discrete one when you are dealing with periods on the scale of the age of the universe.

It has always felt to me that the continuum is an approximation whose positing complicates the mathematics of something that is physically simple, i.e. finite. We generally think of “finite approximations”, but from the perspective of foundational physics, what if it is the continuum that is the approximation?
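The indistinguishability claim above can be put into numbers. A back-of-envelope sketch (the figures and the 1/T resolution estimate are my own hedged assumptions, not anything the commenters computed): an observation lasting a time T can resolve frequency differences no finer than roughly 1/T, so a discrete spectrum whose lines are spaced more closely than that is operationally a continuum.

```python
# Back-of-envelope sketch (my numbers): an observation of duration T resolves
# frequency differences no finer than ~1/T. With T the age of the universe,
# any discrete spectrum with line spacing below delta_f is operationally
# indistinguishable from a continuous one.

T_universe = 13.8e9 * 365.25 * 24 * 3600  # age of universe in seconds (~4.35e17 s)
delta_f = 1.0 / T_universe                # best achievable frequency resolution, Hz

print(f"{delta_f:.3e} Hz")  # 2.296e-18 Hz
```

So even in principle, no experiment could separate a continuum from a lattice of spectral lines a couple of attoattohertz apart.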

Posted by: Eric on April 11, 2009 8:23 PM | Permalink | Reply to this

Re: Axiom of Infinity

Physicists do sometimes retreat into the “discrete approximation” framework. For instance, although I know very little about quantum field theory, I believe that one of the ways they try to make sense out of the “path integral” is to approximate space by discreteness and then take a limit. And people also study “lattice gauge theories” in which space is modeled by a discrete graph.

That said, even if the physical objects of study turn out to be discrete, I think you would have a hard time saying anything much about them without the assumption of infinite sets in your mathematics. Even when approximating space by a discretum (is that the opposite of a continuum?), I believe that one still uses Lie groups and Lie algebras to describe the symmetry, and these depend on the use of continuum methods in mathematics. Even worse than that, if you don’t have the set of natural numbers, you don’t have any infinite sequences and so you can’t take any limits, including the limit as a discretum approaches a continuum!

Posted by: Mike Shulman on April 11, 2009 8:47 PM | Permalink | Reply to this

Re: Axiom of Infinity

Hi Mike,

You bring up another good point, but believe me, it would be difficult to come up with an example that would stump the basic idea.

That said, even if the physical objects of study turn out to be discrete, I think you would have a hard time saying anything much about them without the assumption of infinite sets in your mathematics.

I’m not so sure about that. The truth (or not) of that statement is the heart of the issue that I struggle with. See, I think that finite mathematics is more than sufficient to describe physics. It wasn’t until the paper I wrote with Urs that I became completely convinced.

Even when approximating space by a discretum (is that the opposite of a continuum?), I believe that one still uses Lie groups and Lie algebras to describe the symmetry, and these depend on the use of continuum methods in mathematics.

You are right. One still uses Lie groups, which imply an underlying continuum, but is that truly necessary? It may seem sacrilegious, especially around here, but I tend to think that even Lie groups are an approximation to something more fundamental. You can easily imagine finitary replacements for Lie groups that could suffice for observable physics. I know it is the case for U(1) gauge theory, i.e. electromagnetism, but I feel comfortable generalizing. It is feasible to me that Lie groups in physics appear as a phenomenological model after averaging over a more fundamental finitary model, and are not necessarily part of the foundations of physics.

Even worse than that, if you don’t have the set of natural numbers, you don’t have any infinite sequences and so you can’t take any limits, including the limit as a discretum approaches a continuum!

And why should this be a problem for the foundations of physics? If the discretum is the correct model of nature, taking a continuum limit is purely for mathematical curiosity.

Again, I am distinguishing foundational mathematics from foundational physics. The introduction of an axiom of infinity seems, strictly speaking, unjustified for the latter based on observable phenomena although it is perfectly natural for the former.

Posted by: Eric on April 11, 2009 10:35 PM | Permalink | Reply to this

Hockney on regularization/renormalization; Re: Axiom of Infinity

I asked Dr. George Hockney about this. He points out that Physics and Engineering often use continuum mechanics even when we know it’s an approximation. As he said, we do engineering of a steel I-beam with differential equations, even though we know that the steel is not a continuum, but made of discrete iron and carbon atoms. The physicist needs to know how to eliminate non-physical solutions. Same with heat diffusion in shapes that have sharp edges or corners.

More subtly, he reminded me, as to Feynman loop integrals, that Gerardus ’t Hooft and Martinus J. G. Veltman proved what Wilson didn’t know: that it doesn’t matter which regularization you use when you renormalize. It may be dimensional regularization (which tames the integrals by carrying them into a space with a fictitious fractional number of dimensions), Pauli–Villars regularization (which adds fictitious particles to the theory with very large masses, such that loop integrands involving the massive particles cancel out the existing loops at large momenta), or, as mentioned in this thread, lattice regularization, introduced by Kenneth Wilson (which pretends that our space-time is constructed from a hyper-cubical lattice with fixed grid size).

Posted by: Jonathan Vos Post on April 12, 2009 4:29 AM | Permalink | Reply to this

Re: Hockney on regularization/renormalization; Re: Axiom of Infinity

Hi Jonathan,

Your example of the steel beam from Dr. Hockney illustrates the essence of my point very nicely. You know the beam is made of atoms, yet the continuum provides a nice phenomenological model allowing the use of differential equations. But how does he solve the differential equations? For all but the simplest cases, he will need to solve them numerically, which involves re-discretizing a continuum phenomenological model of an underlying, fundamentally finite system. See the issue? The continuum was introduced as an intermediate step merely to allow the translation of the differential equations back into a finite linear-algebraic model. Was the continuum really needed? I’m inclined to think not.

If you could develop a rigorous “finite” version of differential geometry, then the continuum would no longer be needed for that intermediate phenomenological modeling step.

The point is, I think, if you are a physicist interested in fundamental physics and you find yourself relying on the machinery of higher gauge theory and \infty-categories, etc, then perhaps the axiom of infinity is a scientific gremlin that is unnecessarily impeding progress. Or something.

Posted by: Eric on April 12, 2009 7:13 AM | Permalink | Reply to this

Re: Hockney on regularization/renormalization; Re: Axiom of Infinity

However, unless you have a computer that can represent all the atoms in that steel beam and you know, exactly, how many atoms there are, the two discreta will generally be different ones. So it seems to me that the continuum approximation will remain at least a very useful, and possibly indispensable, way to get from one to the other.

Posted by: Mike Shulman on April 12, 2009 8:23 AM | Permalink | Reply to this

Re: Hockney on regularization/renormalization; Re: Axiom of Infinity

In the steel beam example, the continuum played a role to allow the translation from one finite system of atoms, etc, to another finite system of computational domains for numerical analysis.

Now, if nature were fundamentally finite and if there were some primordial type of “lattice” representing all the subatomic particles of the steel beam, perhaps in the form of a Planck-scale simplicial complex, then it is fairly easy to “deresolve” the primordial complex into something approximate by aggregating individual cells into larger cells, effectively creating a new lower-resolution cell complex. If the mathematics was defined on the primordial complex, then it would be defined on the new complex. This procedure could be repeated up to macroscopic scales, to cells representing the geometric extremities of the steel beam and the surrounding environment, which could then be modeled to arbitrary precision on a finite computer.

The converse situation is also interesting. If you model a steel beam with a cell complex of geometric cells that were introduced to “discretize” the beam and its surroundings, and if you do this correctly (as in “numerical analysis done right”), then your finite model should allow for cell refinement, i.e. starting with large cells and decomposing them into smaller cells in such a way that the same “finite calculus” carries over to the finer grid. This procedure could be repeated down to the primordial scale, and at each step the mathematics would be rigorously defined and finite.

The process of refinement and its opposite, aggregation, means that you can model a system at any scale you want to, i.e. it is not necessary to model individual atoms on a computer to simulate a steel beam. The same mathematical framework should apply at all scales from the macroscopic to the Planck scale. It is not clear that this framework requires an axiom of infinity.
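A toy version of the aggregation/refinement pair described above (my own construction, not Eric’s actual framework): a quantity distributed over a fine 1-D cell complex is coarsened by merging adjacent cells, and a coarse distribution is refined by splitting each cell evenly. Both maps preserve the total, which is the minimal property needed for “the same calculus” to carry between scales.

```python
# Toy model (my construction) of aggregation and refinement on a 1-D cell
# complex: both operations preserve the total content, so a conservation law
# stated at one scale holds at every other scale.

def aggregate(cells):
    """Merge each adjacent pair of fine cells into one coarser cell."""
    return [cells[i] + cells[i + 1] for i in range(0, len(cells), 2)]

def refine(cells):
    """Split each coarse cell into two finer cells carrying half its content."""
    return [half for c in cells for half in (c / 2, c / 2)]

fine = [1.0, 2.0, 3.0, 2.0, 1.0, 0.0, 0.0, 1.0]
coarse = aggregate(fine)
print(coarse)  # [3.0, 5.0, 1.0, 1.0]

assert sum(coarse) == sum(fine)           # aggregation conserves the total
assert sum(refine(coarse)) == sum(fine)   # refinement conserves it too
```

Choosing to model at a given scale is then just choosing how many times to apply `aggregate`; no continuum limit is ever invoked.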

What is indispensable is the “differential geometry” used to describe the underlying physics, and Urs and I, and others before us (including John and his student James Gilliam), have to various extents shown that (abstract) differential geometry does not require an axiom of infinity.

I think computational models provide a good framework to think about this. Computers are, by necessity, finite machines. If a computer can model a physical system to within the observational errors of any possible experiment, then the motivation to introduce the continuum, particularly if it complicates the mathematics, becomes suspect. This is especially true if you are interested in the foundations of physics.

Posted by: Eric on April 12, 2009 4:23 PM | Permalink | Reply to this

Re: Axiom of Infinity

It’s not a priori obvious to me that the discretum used by a computer in modeling a system and the “underlying discretum” of a finitist physical theory will always be of the same sort, such that one can be reached from the other by a process of refinement. Does a computer modeling heat diffusion on a rod with a discrete approximation use exactly the same equations as it would if the rod were made up of a smaller number of very large atoms? (Even if the answer to that particular question is “yes,” it’s still not obvious to me that the answer to all such questions will be “yes.”)

If a computer can model a physical system to within the observational errors of any possible experiment, then the motivation to introduce the continuum, particularly if it complicates the mathematics, becomes suspect.

I disagree. In fact, I believe that practically any continuum theory with which I am familiar can be modeled by a computer with a sufficiently fine discrete approximation, and we believe that only our lack of computing power (exacerbated in some cases by chaotic behavior) prevents such models from being accurate within observational error. The fact of modelability by a digital computer has nothing to do with what a physical theory tells us is “really” going on.

Also, my understanding of most current “accepted” physical theories is that use of the continuum does not complicate, but rather simplifies, and in fact makes possible, the mathematics. I understand that you are proposing a different sort of theory, and I have sympathy for such “discretist” theories, but I think that the contest between them and “continuist” theories should be decided by experiment, rather than by a philosophical predisposition. In particular, we should allow whatever sort of mathematics turns out to be appropriate and convenient to describe a particular theory. It may be that discretist theories triumph, and it may be that we will be able to describe all important discretist theories using finite mathematics, but I don’t see any a priori reason for this to be true.

Posted by: Mike Shulman on April 12, 2009 11:00 PM | Permalink | Reply to this

Re: Axiom of Infinity

I have sympathy for such “discretist” theories, but I think that the contest between them and “continuist” theories should be decided by experiment, rather than by a philosophical predisposition. In particular, we should allow whatever sort of mathematics turns out to be appropriate and convenient to describe a particular theory.

I agree with this, but I also think that it's appropriate for a research programme to be motivated by a philosophical predisposition. (Consider Einstein, motivated by philosophical predispositions at both the beginning and end of his career; I think that the failure at the end is justified by the success at the beginning.) So while Eric's arguments about the unnaturalness of the infinite except as a historical accident are (I think) overblown, I also think that he ought to follow up on his ideas.

Posted by: Toby Bartels on April 13, 2009 2:15 AM | Permalink | Reply to this

Re: Axiom of Infinity

I presume that in discrete physics, the real numbers that appear in physics would actually turn out to be rational numbers. Does this include mathematical constants like \pi and e? Or would discrete physics manage to avoid all appearances of \pi and e?

Posted by: Mike Shulman on April 12, 2009 8:06 AM | Permalink | Reply to this

Re: Axiom of Infinity

Perhaps I should read more about discrete gauge theory before responding. But my impression is that topological properties of Lie groups, such as \pi_1, play an essential role in gauge theory, e.g. in the quantization of charge. Do your discrete replacements get around that somehow?

Note also that \pi_1(S^1) = \mathbb{Z}, so without an axiom of infinity you can’t even talk about \pi_1 of anything equivalent to S^1 as a completed object. You could say “it’s the free group on one generator”, but I don’t think this is really honest finite mathematics. The fact that you can encode some infinite objects with finite amounts of data doesn’t make them finite mathematics; if all the time you are secretly talking about \mathbb{Z}, why make your life complicated by refusing to let it “actually” exist?
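The “finite encoding of \mathbb{Z}” point can be made concrete. A hedged toy model of my own (no claim that this is anyone’s actual discrete gauge theory): replace the circle by a cycle graph with n vertices, take a based loop to be a sequence of +1/-1 steps returning to the basepoint, and classify it by its winding number. Every individual loop is finite data, yet the invariant ranges over all of \mathbb{Z}, which is exactly the sense in which the finite encoding is “secretly talking about \mathbb{Z}”.

```python
# Toy finitary stand-in (my model) for pi_1(S^1) = Z: the circle becomes a
# cycle graph on n vertices, a based loop a sequence of +-1 steps that returns
# to the basepoint, and its class the winding number -- an arbitrary integer.

def winding_number(steps, n):
    net = sum(steps)                 # net displacement around the cycle
    assert net % n == 0, "not a loop: does not return to the basepoint"
    return net // n

n = 4
once_around = [1, 1, 1, 1]
back_and_forth = [1, -1, 1, -1]      # null-homotopic
twice_backwards = [-1] * 8

print(winding_number(once_around, n),       # 1
      winding_number(back_and_forth, n),    # 0
      winding_number(twice_backwards, n))   # -2
```

Each loop is a finite list, but to state what the invariant classifies you already need the completed group of integers.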

Posted by: Mike Shulman on April 12, 2009 8:28 AM | Permalink | Reply to this

Re: Axiom of Infinity

You could say “it’s the free group on one generator” but I don’t think this is really honest finite mathematics. The fact that you can encode some infinite objects with finite amounts of data doesn’t make them finite mathematics;

Tell that to a finitist; this is exactly what they do. If (for the sake of argument) you can start with a (finite) Dynkin diagram and calculate (finitely many) generators and relations, then this is a problem in combinatorics; why should you think of this in terms of uncountably infinite sets? Especially if Eric's programme is successful, and you can still use geometric intuition?

if all the time you are secretly talking about \mathbb{Z}, why make your life complicated by refusing to let it “actually” exist?

First of all, I'm not sure that this complicates your life at all. But even if it does, there's a reason for it: once you admit that \mathbb{Z} actually exists, then you must admit that \mathbb{Z}^{\mathbb{Z}} actually exists (well, if you believe in function sets), even though you never wanted to work with that —and this has barely begun!

Remember, finitists believe in integers, although not in (arbitrary) infinite sequences. It's anachronistic (as he lived just long enough to object to this sort of language), but I'd say that Kronecker believed that \mathbb{Z} was a proper class but not a set, and this saves you from having to form \mathbb{Z}^{\mathbb{Z}}. Interestingly, if you start with Morse–Kelley class theory and replace the axiom of infinity with an (induction-style) axiom of finiteness (for small sets), then you end up with Second-Order Arithmetic —aka Elementary Analysis!—, although I think that allowing quantification over classes (called ‘sets’ in SOA, where sets are called ‘numbers’) in comprehension (or induction) takes this example beyond finitism.

Actually, I think that you can be finitist and still think of \mathbb{Z} as a small set, if you reject function sets. Just as a predicativist can believe in a small set of truth values as long as one can't form power sets, so a finitist should be able to believe in a small set of integers as long as one can't form sets of infinite series. In other words, you can work in a W-pretopos of sets (where by W I really mean Coquand-style inductive types rather than Martin-Löf-style W-types, since the latter don't really make sense without function types) and be finitist. The reason is indeed that every intuitively existing element of a set (object in such a category) can be described with a finite amount of data.

Posted by: Toby Bartels on April 12, 2009 9:07 PM | Permalink | Reply to this

Re: Axiom of Infinity

I think my point is that you should say what you mean. I think it’s silly to want to talk about \mathbb{Z} but to do it by talking about generators and relations because you want to avoid being able to talk about \mathbb{Z}^{\mathbb{Z}}. It’s more honest to have an axiom of infinity and talk about the real thing you are talking about, namely \mathbb{Z}, but deny function sets, since those are what you actually want to avoid.

Posted by: Mike Shulman on April 12, 2009 11:04 PM | Permalink | Reply to this

Re: Axiom of Infinity

Just as a predicativist can believe in a small set of truth values as long as one can’t form power sets

In doing so he would seem to me less predicative than someone who denies a set of truth values. The set of truth values is usually the power set of a singleton, so unless he’s avoided that somehow, he has at least one power set (or rather, at least two, since everyone has the power set of the empty set).

Posted by: Mike Shulman on April 12, 2009 11:31 PM | Permalink | Reply to this

Re: Axiom of Infinity

The set of truth values is usually the power set of a singleton, so unless he’s avoided that somehow, he has at least one power set (or rather, at least two, since everyone has the power set of the empty set).

Obviously predicativists aren't allergic to power sets, since they have one already, as you note. Similarly, constructivists have many sets with decidable equality. The question is whether these constructions can be carried out in general. I would argue that the spirit of finitism in practice (such as the work of Kronecker or Zeilberger) is similarly retained by allowing some infinite sets, as long as the methods for constructing them are suitably restricted.

For that matter, I would consider the boolean topos FinSet to be a predicativist framework, just as I would consider it constructive; in both cases, the objections are to extending to infinite sets reasoning that is uncontroversially valid for finite sets. (Of course, this doesn't address whether something larger than FinSet might still be finitistic.)

It's more honest to have an axiom of infinity and talk about the real thing you are talking about, namely \mathbf{Z}, but deny function sets, since those are what you actually want to avoid.

The difference between that and denying the axiom of infinity is largely one of language, such as sets vs proper classes. So while I too would rather do things this way, I think that it would still be finitist; and conversely, the finitist desire to avoid completed infinities can be understood and accomodated by a change in language.

In the same way, it's probably more honest for category theorists to use Grothendieck universes and talk directly about large categories, rather than to worry about sets and proper classes, but many people have learned to do the latter anyway, perhaps even learning that a proper class is ‘really’ just a formula in the formal language of material set theory. (At least generators and relations are more concrete than that!) But as an applied metamathematician, I can accomodate their prejudices by a change in language if necessary.

Posted by: Toby Bartels on April 13, 2009 2:48 AM | Permalink | Reply to this

Re: Axiom of Infinity

I agree with you about FinSet, although I think it’s important to remember that (as you know) in the absence of excluded middle, there are plenty of non-finite sets that don’t really deserve to be called ‘infinite,’ and the ‘power sets’ in FinSet only classify decidable subsets.

The difference between that and denying the axiom of infinity is largely one of language, such as sets vs proper classes.

Absolutely. But Eric’s original post was specifically about the axiom of infinity.

In the same way, it’s probably more honest for category theorists to use Grothendieck universes and talk directly about large categories, rather than to worry about sets and proper classes, but many people have learned to do the latter anyway, perhaps even learning that a proper class is ‘really’ just a formula in the formal language of material set theory.

They may pay lip service to that definition of a proper class, but they basically never talk as if they really believe that; they talk about proper classes as real objects. I would argue that they’re probably ‘really’ using a theory like NBG, which is equivalent, in a suitable sense, to ‘proper classes as formulas’ but honestly admits proper classes into its ontology. And of course the difference between NBG and Grothendieck universes is just how strong your axioms are for manipulating proper classes (and whatever difference there is between saying ‘proper class’ and ‘large set’).

Do finitists do the same thing with ‘okay’ infinite sets like \mathbb{Z}?

Posted by: Mike Shulman on April 13, 2009 10:54 AM | Permalink | Reply to this

Re: Axiom of Infinity

I agree with you about FinSet, although I think it’s important to remember that (as you know) in the absence of excluded middle, there are plenty of non-finite sets that don’t really deserve to be called ‘infinite,’ and the ‘power sets’ in FinSet only classify decidable subsets.

I'm not sure what you mean here. What do the non-finite non-infinite sets have to do with anything? And while it's true that the power sets in FinSet classify only decidable subsets, they in fact classify all of the subsets that are there in FinSet to be classified, so I don't see the problem. We have power sets, and yet we are predicative. We have decidability of all subsets, and yet we are constructive. Similarly, we might have \mathbf{Z} (although not in FinSet directly) yet be finite (in the sense of finitism).

If your point is that FinSet is not all of Set (not even a Grothendieck universe to a constructivist), then even that is not clear. It is a model of set theory without the axiom of infinity. In fact, if you start with set theory and replace the axiom of infinity with an axiom of finiteness, then it is the intended model (and, I think, the free model, depending on how you phrase the finiteness axiom).

The difference between that and denying the axiom of infinity is largely one of language, such as sets vs proper classes.

Absolutely. But Eric’s original post was specifically about the axiom of infinity.

As I've said, I do think that Eric will do better to accept that axiom (as far as it goes) but reject function sets instead. Hopefully Urs will help him to see how sets defined as well-behaved colimits (such as \mathbf{Z}, if not \mathbf{R}) are natural. And after all, \mathbf{Z} is discrete, so I don't think that it bothers him! So there we agree.

Where we disagree, as far as I can see, is about whether this would be ‘finitist’. And my argument is that, if the difference is only one of language, then it is still finitist. Conversely, if Eric (or anybody) insists on dropping \mathbf{Z} too, then we may sigh at his choice to use language that matches ordinary modern mathematics less exactly, but we also know how to translate for him. It is the same mathematics.

In the same way, it's probably more honest for category theorists to use Grothendieck universes and talk directly about large categories, rather than to worry about sets and proper classes, but many people have learned to do the latter anyway, perhaps even learning that a proper class is ‘really’ just a formula in the formal language of material set theory.

They may pay lip service to that definition of a proper class, but they basically never talk as if they really believe that; they talk about proper classes as real objects. I would argue that they’re probably ‘really’ using a theory like NBG, which is equivalent, in a suitable sense, to ‘proper classes as formulas’ but honestly admits proper classes into its ontology. And of course the difference between NBG and Grothendieck universes is just how strong your axioms are for manipulating proper classes (and whatever difference there is between saying ‘proper class’ and ‘large set’).

If you drop the axiom of infinity from NBG, is that still finitist? I assume that you accept that ZF without infinity is finitist, yet NBG without infinity is conservative over ZF without infinity. So at the very least, NBG without infinity is surely finitist as long as you use it to prove results that can be phrased in ZF. So you can use \mathbf{Z} for that as well.

Do finitists do the same thing with ‘okay’ infinite sets like \mathbf{Z}?

I went to Zeilberger's homepage, looked at his papers, and started to read the first one listed (the latest paper). On page 3, this paper refers to a ‘Temperley–Lieb algebra’ and a ‘vector space’ without qualms; although it doesn't say so, these are infinite. Naturally, the paper defines them by (respectively) generators and relations and a basis, and they are (respectively) finitely presented and finite-dimensional. As the only coauthor (Arvind Ayyer) is a recent student of his, I doubt that any of this language displeased Zeilberger. In any case, as all of the data involved is finite, surely this counts as finite mathematics!

If his paper on ‘real’ real analysis is not too exaggerated, then Zeilberger should really be an ultrafinitist, but I don't see how you could consider this paper (which applies to arbitrary integers n) ultrafinite. So Zeilberger may be an ultrafinitist in his heart of hearts, but he's merely a finitist in practice.

Posted by: Toby Bartels on April 14, 2009 3:26 AM | Permalink | Reply to this

Re: Axiom of Infinity

My point about FinSet was just that you said

the objections are to extending to infinite sets reasoning that is uncontroversially valid for finite sets

where I would have preferred you to say “extending to non-finite sets.” That’s all. (-:

I don’t think I am qualified to be venturing a definition of ‘finitist,’ so I think that you (Toby) and I probably agree on pretty much everything, except maybe whether finitists should be taken seriously in their claims that infinite mathematics is meaningless. Although, of course, I do think that finite mathematics is interesting, because it is the mathematics of FinSet (among other things).

Posted by: Mike Shulman on April 14, 2009 7:59 AM | Permalink | Reply to this

Re: Axiom of Infinity

Thanks for the reference to Kronecker and Zeilberger.

I’m sure I knew at one time, but forgot that Kronecker was a finitist. I had never heard of Doron Zeilberger. His paper linked from ultrafinitism looks completely fascinating:

“REAL” ANALYSIS Is A DEGENERATE CASE of DISCRETE ANALYSIS

REAL REAL WORLDS (Physical and MATHEMATICAL) ARE DISCRETE.

Continuous analysis and geometry are just degenerate approximations to the discrete world, made necessary by the very limited resources of the human intellect. While discrete analysis is conceptually simpler (and truer) than continuous analysis, technically it is (usually) much more difficult. Granted, real geometry and analysis were necessary simplifications to enable humans to make progress in science and mathematics, but now that the digital Messiah has arrived, we can start to study discrete math in greater depth, and do real, i.e. discrete, analysis.

As someone with a background in computational physics, I share the excitement over the arrival of the digital Messiah. Computational science is an exciting field to be in these days.

It is also funny how he and I are saying opposite things (I know who I would bet on for being right). He is saying the discrete is more difficult. I guess it depends on the context. In some areas, the discrete is simpler.

Thanks again for opening up the manhole (from below) to a whole new world for me of ultrafinitism and constructivism.

Posted by: Eric on April 13, 2009 5:55 PM | Permalink | Reply to this

Re: Axiom of Infinity

I’m sure that Zeilberger has done some real mathematics, but a cursory glance at this paper and his web site reveals that he also scores quite highly on the crackpot index (and would probably score even higher if it were aimed more at mathematicians and less at physicists).

Posted by: Mike Shulman on April 14, 2009 12:50 AM | Permalink | Reply to this

Re: Axiom of Infinity

?!?!

I think you might want to have another look

Doron Zeilberger’s Awards

His presentation is informal (and refreshing) like others around here, but his serious papers seem quite good to me. Plus, I doubt Toby would link to him so frequently if he was a crackpot.

Posted by: Eric on April 14, 2009 1:09 AM | Permalink | Reply to this

Re: Axiom of Infinity

Zeilberger's opinions are deliberately provocative and should be taken with a grain of salt. (And an extra teaspoon if dated April 1!)

But I think that they're also worth reading, if only to refute.

Posted by: Toby Bartels on April 14, 2009 2:16 AM | Permalink | Reply to this

Re: Axiom of Infinity

Sorry, “has done some real mathematics” was too much of a dramatic understatement. (-: I certainly didn’t mean to impugn his credentials as a mathematician.

However it is honestly hard for me to take anyone seriously who says

Andrew Wiles’s alleged ‘proof’ of FLT, while a crowning human achievement, is not rigorous, since it uses continuous analysis, which is meaningless.

I am not sure how to ‘refute’ this. I also think my time would probably be better spent doing mathematics than trying to figure out how to refute it.

Posted by: Mike Shulman on April 14, 2009 7:30 AM | Permalink | Reply to this

Re: Axiom of Infinity

To clarify: I can understand someone wanting to know whether there is a finitary proof of FLT. But that doesn’t make Wiles’ proof meaningless.

Posted by: Mike Shulman on April 14, 2009 8:22 AM | Permalink | Reply to this

Re: Axiom of Infinity

From my understanding of Zeilberger’s views, the key words in the quote are “not rigorous” rather than “meaningless”. As you can gather from reading a lot of his writings, both “proper” and “opinions”, he very much takes the view that (a) if a thing can be proven by mechanical calculation it should be, even if there is an “insightful” proof, because human insight is inevitably flawed at points; and (b) continuous concepts which can be axiomatized by mechanically usable rules are just about tolerable, but a continuous theory whose axioms depend on “insight” is again prone to the unreliability of human insight.

Everyone is familiar with the situation of having done some “abstract work” which doesn’t survive mechanical calculation (say, on a computer) because you’ve dropped numerical factors, forgotten a change of coordinates when passing to concrete quantities, etc., so it is undeniable that mechanical proof is more reliable than insightful proof for getting trivial issues right. What’s more controversial is whether a significant number of insights in proofs are conceptually wrong (e.g., is Kempe’s incorrect “proof” of the four colour theorem a very rare case, or is it more common, and perhaps undetected? I seem to recall the late Pertti Lounesto claiming that a significant number of papers on Clifford algebras contained wrong theorems that could be found out simply by taking a more mechanical viewpoint). This seems to me to be a reasonable position one could take, which is then dressed up in provocative language to raise it above the mass of mathematical discourse.

Indeed, viewing mathematics as the search for true statements by any means necessary (rather than as “the kind of stuff I enjoy doing”, which happens to be “insightful proofs”), I’m somewhat split on view (a). I could generally support it, except that finding insightful proofs for things that can be proved in other ways perhaps acts as an “idea nursery”, growing components of insight for problems where there is currently no feasible way of mechanically finding a proof. (Clearly (b) is a much more controversial viewpoint.)

I find myself often wanting to disagree with Zeilberger without being able to formulate truly compelling arguments as to why he’s wrong.

Posted by: bane on April 14, 2009 1:06 PM | Permalink | Reply to this

Zeilberger

bane wrote: without being able to formulate truly compelling arguments as to why he’s wrong.

mechanical proofs may be more fault-free,
but isn’t an insightful proof suggestive of what to prove next? how can a mechanical proof do that?

Posted by: jim stasheff on April 14, 2009 2:19 PM | Permalink | Reply to this

Re: Zeilberger

I lean towards the stance that the purpose of mathematics is insight.

Posted by: Jonathan Vos Post on April 14, 2009 2:48 PM | Permalink | Reply to this

Re: Zeilberger

That’s a valid point (although, as you probably know, “mechanical proof” is not restricted to “mechanical theorem prover”-style proofs; it includes, e.g., an algorithm which constructs some object given criteria, so thinking about the steps in the algorithm and how they might be varied is still possible).

But there’s also the flip-side: more mechanical style investigations could lead to stuff that’s not “connected” to previous ideas, something that happens less with concept-generalisation styles of work.

I’m not saying I agree with Zeilberger (or even claiming to totally understand his position), just that I generally can’t find a killer argument against his positions.

Posted by: bane on April 14, 2009 2:50 PM | Permalink | Reply to this

Re: Axiom of Infinity

It’s difficult for me to see how, if continuous mathematics is really “meaningless,” that a proof using it could be “non-rigorous but meaningful.” Where did the meaning come from? Presumably it didn’t spring out of thin air. If I encountered a proof containing something I would call “meaningless,” like Grug pubbawup zink wattoom gazork, I would not describe the entire proof as “non-rigorous but meaningful.” Your argument seems to me to be rather that continuous mathematics is itself non-rigorous but meaningful.

I won’t dispute the usefulness of mechanical arguments as a check for insightful proofs. In fact, it’s common knowledge that almost all published mathematics is “non-rigorous” in the formal sense, and this certainly can and does lead to problems. However, if the formalization of mathematics takes off, in the sense of proof checking, then this argument will go away. And in any case I don’t see it as providing any evidence for a position that non-finite mathematics is ‘meaningless.’

Of course, meaning is in the eye of the beholder. Perhaps “grug pubbawup zink wattoom gazork” really does mean something in some language. But one source of meaning in mathematics is insight. Another is applicability. And non-finite mathematics has both of these. As I argued earlier, even if Zeilberger does turn out to be right that the physical world is fundamentally discrete and finite (a position which, as far as I can tell, there is no theoretical or experimental evidence for except “well, something funny is clearly happening at the Planck scale”), existing physical theories, which use continuous mathematics, will almost certainly continue to play an important role in applications.

Posted by: Mike Shulman on April 19, 2009 4:36 PM | Permalink | Reply to this

Re: Axiom of Infinity

It’s difficult for me to see how, if continuous mathematics is really “meaningless,” that a proof using it could be “non-rigorous but meaningful.” Where did the meaning come from?

Well, now I'm really going beyond my qualifications as a finitist. I'm really not a finitist (or a predicativist, for that matter) in the same way that I'm a constructivist, and I'm only a constructivist in a weak sense (I mean philosophically weak, not in a weak mathematical form such as accepting LEM but not full AC). I would never say that any non-finitist (or even non-constructivist) mathematics is meaningless.

But I'll try …

I think that you understand how a constructivist might object to a certain proof as ‘invalid’ (or even ‘meaningless’, although I wouldn't say that). Nevertheless, the constructivist can often extract validity or (more) meaning from the proof. Perhaps the proof can be made constructive by removing spurious double negations; perhaps the proof directly proves a statement that's classically trivially equivalent to the original theorem; perhaps something more drastic must be done but a good constructive meaning (something better than just $AC \Rightarrow p$) can still be found.

Well, to a finitist, the Wiles–Taylor proof of FLT might be similar. If that proof is (or at least seems to be) amenable to being formalised in language that (by the usual metatheorems) can be rephrased in PA, then the finitist might regard the theorem as proved. Not to be too formalist about it, but if most mathematicians accept a proof whenever they have good reason to believe that it can be formalised in ZFC, then finitists should accept a proof whenever they have good reason to believe that it can be formalised in PA (or maybe HA). And maybe this is at least a reasonable possibility with the Wiles–Taylor proof (including everything that it depends on); to the extent that this hasn't been shown, a gap remains, so it is non-rigorous.

Sometimes I also think of some classical mathematics as a non-rigorous approximation of constructive mathematics. Maybe I'll write more about that later.

Posted by: Toby Bartels on April 19, 2009 10:14 PM | Permalink | Reply to this

Re: Axiom of Infinity

I agree with all of that, but I would still argue that, by definition, you can’t extract meaning from something that is meaningless. Call it ‘invalid’ or ‘non-rigorous’ or any number of other words, but not ‘meaningless.’

Posted by: Mike Shulman on April 20, 2009 5:09 PM | Permalink | Reply to this

Re: Axiom of Infinity

I think Toby Bartels has a much better grasp of finitism and the related issues. My much simpler point was that one can have “proofs” within a framework one accepts, which are rigorous, and “proofs” which use elements one does not accept, which one thus doesn’t view as rigorous and which in a strict sense might be said to be “not meaningful”, without wanting to apply the arguably loaded word “meaningless” to the whole argument rather than just to the elements one disagrees with. (As an example, take some historical “proof” from before the clarification of the foundations of calculus: such a proof may contain useful ideas even if it doesn’t count as a rigorous argument these days.) I haven’t read enough of the Wiles–Taylor FLT proof to be able to say how much of the argument would be seen as valid in finitist terms, or whether the results a finitist would accept as well-defined statements obtained from “meaningless” foundations could conceivably be obtained from a finitist-acceptable base. (As an example NOT from FLT, one might establish the minimum of an integer function by using differential calculus and showing this value is an integer; if I were a finitist I wouldn’t naturally use the word “meaningless” in connection with the whole of a proof that used such a step.)

Incidentally, I wish I hadn’t used the term “insightful” in my original post, as it doesn’t quite match what I meant. By “insightful” I meant “something which the mathematician perceives as true (by insight) but where things haven’t been formalised enough to be mechanically proved” (maybe I should have used a cumbersome term like “insight-only”). I didn’t mean to suggest that insight is to be avoided, even in mechanical proof, but rather that steps which are perceivable by insight ONLY are problematic for some viewpoints.

Posted by: bane on April 22, 2009 1:59 AM | Permalink | Reply to this

Re: Axiom of Infinity

For some reason the Lounesto link in above comment didn’t come out (maybe capitalisation of the link), so here it is in plain text:

www.tkk.fi/~ppuska/mirror/Lounesto/counterexamples.htm

Posted by: bane on April 14, 2009 1:13 PM | Permalink | Reply to this

Re: Axiom of Infinity

As an excuse to point at the nLab, Toby has created a nice page on finite mathematics:

Finitism

In the foundations of mathematics, finitism is the philosophy that one should do only finite mathematics. In a weak sense, one should not assume the axiom of infinity; in a strong sense, one should even deny it by an axiom of finiteness. This makes it impossible to do analysis as we normally understand it.

Finitism (in the weak sense of not accepting an axiom of infinity) is essentially the mathematics that can be done internal to an arbitrary topos (at least if one is not also being predicative). For constructive mathematics as usually practised, one goes beyond finitism by positing a natural numbers object.

Although often considered a form of constructivism, finitism in the strong sense (actually denying the axiom of infinity) can make excluded middle and even the axiom of choice constructively acceptable. This is because even constructivists agree that these are true in FinSet; it’s the extension of them to infinite sets that the first constructivists objected to.

Ultrafinitism is an even more extreme form of finitism, in which one doubts the existence even of very large numbers, numbers which in some sense it is not physically possible to write down. The theory of ultrafinite mathematics is not well developed.

For the opinionated espousal of finitism (and much else), one can hardly do better than Doron Zeilberger’s Opinions.

PS: As a sidenote, since I work in mathematical finance, it was fun to read Zeilberger’s opinion

Opinion 32: Mathematicians, and Math, Should Not Be Blamed for the Debacle of the Hedge Funds

You might think he is referring to the mess in the markets today, but he wrote that in 1998 right after the first major Nobel laureate populated quant hedge fund (Long Term Capital Management) blew up.

Posted by: Eric on April 13, 2009 7:30 PM | Permalink | Reply to this

Re: Axiom of Infinity

My opening line should have said, “As an excuse for me to point to the nLab, let me point out that Toby has created a nice page on finite mathematics.”

Toby obviously didn’t write that as an excuse to point at the nLab :)

Also, my imagery could probably use some clarification. The image I meant to convey with the “manhole” comment was a picture of me in a dark and dirty underworld, and Toby lifting the manhole cover to reveal a new and beautiful world to me. So, in the image, Toby was already above ground and I was in the underworld :)

It’s a good thing I never claimed to be a good writer :)

Posted by: Eric on April 13, 2009 7:44 PM | Permalink | Reply to this

Re: Axiom of Infinity

Also, even if we do start believing that ‘fundamental’ physics is discrete, continuum physics isn’t going to go away, for the same reason that Newtonian mechanics didn’t go away when relativity was discovered: it’s a useful and simpler approximation with a large domain of validity. If a computer needs to calculate the path of a projectile to first-order, it’d be silly to run a complicated simulation rather than simply say “it’s a parabola.”
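Mike’s projectile remark can be made concrete with a small sketch (my own numbers and function names, purely illustrative): for a drag-free projectile, the continuum formula $y = v_0 t - \frac{1}{2} g t^2$ is a one-line computation, while a step-by-step discrete simulation only recovers the same parabola at far greater cost.

```python
def simulate_height(v0, g=9.81, t_end=1.0, steps=100000):
    """Euler-integrate dy/dt = v, dv/dt = -g from y = 0."""
    dt = t_end / steps
    y, v = 0.0, v0
    for _ in range(steps):
        y += v * dt
        v -= g * dt
    return y

def parabola_height(v0, g=9.81, t=1.0):
    """Closed-form continuum answer: y = v0*t - g*t^2/2."""
    return v0 * t - 0.5 * g * t * t

# The simulation agrees with the parabola to within discretization error.
print(abs(simulate_height(5.0) - parabola_height(5.0)) < 1e-3)  # True
```

The discrete scheme needs 100,000 steps to approximate what the continuum description states exactly, which is the sense in which the continuum is the “useful and simpler approximation” here.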

Posted by: Mike Shulman on April 13, 2009 10:58 AM | Permalink | Reply to this

Axiom of Infinity

I am late to this discussion. Maybe I can still make a contribution.

An important point might be hidden in the observation that the atomic structure of a metal beam is a different discrete model of that beam than the finite-element approximation which a computer will apply in order to compute properties of the beam. Still, both asymptote to the same continuum description.

So it seems key points raised by Eric and Mike could be summarized as:

- the object of interest (for instance: nature) might be discrete

- but still a continuum description may be useful, since it is more universal.

There are many discrete versions of the same continuum model, and they all “sit inside” the continuum model in a well-defined way.

This is reminiscent for instance of the coalgebraic description of the real numbers that we talked about on other threads recently: the real interval $[0,1] \subset \mathbb{R}$ is in a precise sense the universal solution to having an ordered set $X$ with distinct top and bottom elements and a map from the set to the result $X \vee X$ of gluing this set along its endpoints to itself: $X \to X \vee X$.

There are many “discrete” solutions to this problem: take for instance $X = X_n = \{0, 1, 2, \cdots, n\}$. Then $X \vee X = \{0, 1, 2, \cdots, n, n+1, n+2, \cdots, 2n\}$ and $X \to X \vee X$ is multiplication by 2.

The universal solution to finding such $X$, however, is the standard interval $X = [0,1] \subset \mathbb{R}$.

It may well be that in some concrete application we are really dealing with $X_n$. But since $X_n$ is not all that different from $X_{n+1}$, many of the answers we are looking for concerning $X_n$ may essentially be answers about $[0,1]$, which unifies all the $X_n$.
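The finite structure just described is small enough to check by hand; here is a sketch in Python (my own encoding, not from the thread): $X_n = \{0, 1, \ldots, n\}$, the wedge $X_n \vee X_n = \{0, \ldots, 2n\}$ is two copies glued end-to-end, and the structure map is multiplication by 2.

```python
def wedge(n):
    """X_n v X_n: two copies of {0..n} glued at an endpoint, i.e. {0..2n}."""
    return list(range(2 * n + 1))

def structure_map(x):
    """The coalgebra structure map X_n -> X_n v X_n: multiplication by 2."""
    return 2 * x

n = 4
X = list(range(n + 1))
image = [structure_map(x) for x in X]

print(image)                                         # [0, 2, 4, 6, 8]
print(image[0] == 0 and image[-1] == 2 * n)          # True: endpoints preserved
print(all(a < b for a, b in zip(image, image[1:])))  # True: order preserved
```

The check confirms that the doubling map lands in the wedge, preserves the order, and sends top and bottom to top and bottom, which is all the structure the coalgebra description asks for.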

By Adámek’s theorem, terminal coalgebras can be computed by categorical limits. While I am a bit confused about whether and how this works in the above case, this fact might point to a relation between the discussion of finite/discrete versus infinite/continuous and the broader context of issues like ind-objects and accessible categories:

if we are finitists/discretists, we might strictly speaking be interested just in a category $C$ of finite objects, in some sense, but for the purposes of understanding $C$ it may be useful to consider bigger categories that may be accessed by $C$.

Posted by: Urs Schreiber on April 13, 2009 1:20 PM | Permalink | Reply to this

Re: Axiom of Infinity

Urs wrote:

By Adámek’s theorem terminal coalgebras can be computed by categorical limits. While I am a bit confused about if and how this works in the above case…

It doesn’t work in the above case. Adámek’s theorem, which constructs the terminal coalgebra for an endofunctor $F$ of a category $\mathbf{C}$, works under the hypothesis that $\mathbf{C}$ has a terminal object and a certain sequential limit, and that $F$ preserves that limit. But in this case, $\mathbf{C}$ does not have a terminal object. So Adámek’s theorem does not apply.

As you may have realized, $[0,1]$ couldn’t possibly arise as a limit of discrete spaces, since then it would be totally disconnected.

This issue is a basic part of my work on self-similarity (lite introduction). When describing interesting spaces as terminal coalgebras, you typically have to use a construction rather more subtle than Adámek’s. That terminal coalgebras such as $[0,1]$ can’t usually be realized as limits is the content of Warning 4.2 of my paper A general theory of self-similarity I, though it won’t make much sense in isolation.

Posted by: Tom Leinster on April 13, 2009 3:37 PM | Permalink | Reply to this

Re: Axiom of Infinity

Thanks, Tom!

It’s too bad that the real interval is not a limit in this sense, but concerning the discussion with Eric and Mike, I’ll just note that what I wanted to say (not supposed to be particularly deep) is that continuum notions arise naturally in the study of discrete notions in terms of objects that have more universal properties:

the finitist/discretist might consider the operation $X \mapsto X \vee X$ on finite sets (with distinct top and bottom elements) and may wonder whether there is any finite poset which is universally stable with respect to this operation. There is none in the finite world. But the finitist/discretist might still find it useful to reason about finite sets in a context of “generalized finite sets” where this universal object does exist, namely infinite sets, even if he or she doesn’t consider them to “really exist”.

This is of course a very common strategy, and I am mentioning this not to teach you or Mike, but since I am thinking that maybe Eric might find this a useful perspective:

frequently in category theory we ask for certain universal constructions (for instance limits) in certain contexts (i.e. in certain categories), notice that they need not exist there, but that a generalized context does exist (for instance presheaves on categories) where the construction is guaranteed to exist. Even if our original problem lives in the more restrictive context (for instance finite sets), it may be helpful to think of it as embedded in the generalized context (for instance possibly infinite sets) in order to reason about the more restrictive context.

I was thinking about the machinery of pro-objects/ind-objects and accessible categories as an example of this general strategy, but maybe in the context of the example of the real interval as a terminal coalgebra, I should have looked for other examples of this general strategy.

Is there a useful notion of “completion under terminal coalgebras”? I.e. given a category $C$, does it make sense, and is it of interest, to consider freely adjoining to it all terminal coalgebras for all endomorphisms of $C$?

Posted by: Urs Schreiber on April 13, 2009 4:03 PM | Permalink | Reply to this

Re: Axiom of Infinity

Thanks everyone for your comments. You’ve elevated the discussion higher than I ever could have on my own.

I think Urs (as usual) strikes a good balance. In the nLab page on the axiom of infinity, it is made clear that the existence of infinite sets “must be posited as an extra axiom.” Whenever I see things like that, a red flag is raised and I wonder if it is really necessary. I’m biased because for the past 15+ years, I’ve been convinced that nature is somehow fundamentally finite. As a result, whenever I see physicists struggling over continuum theories or whenever issues about infinity come up, I tend to wonder whether effort is being misdirected.

There is a difference between what is fundamental and what is useful. Even if nature is fundamentally finite, the continuum is clearly a useful tool to describe it in some circumstances.

It has been my experience, and I think Urs was a bit surprised by it as well, that concepts formulated finitely (as in our paper) tend to illuminate and even simplify concepts in the continuum. For example, in Section 5.1 Lattice Yang-Mills Theory, you see:

But since the holonomy is $G$-valued, this suggests that the correct discrete version of the covariant derivative is

$$d_A B := [\mathcal{H}, B],$$

where $\mathcal{H}$ is the discrete $G$-valued 1-form which assigns the $G$-holonomy $H_\mu(\vec{x})$ in question to every edge $\delta_{\{\vec{x},\vec{x}+\vec{e}^\mu\}}$.

In particular, the Yang-Mills field strength $F$ is now just the square of $\mathcal{H}$:

$$\begin{aligned} d_A A & \to \frac{1}{2} [\mathcal{H},\mathcal{H}] \\ & = \mathcal{H}^2. \end{aligned}$$

It should be noted that even though $\mathcal{H}$ is a 1-form, its square does in general not vanish, as it does in the continuum (cf. (2.136)). The non-commutativity precisely produces the desired field strength, by the above reasoning.

It is now easy to write down the general action $S_{YM}$ for $G$-Yang-Mills theory on the discrete space:

$$\begin{aligned} S_{YM} &= \langle \mathcal{H}^2 | \mathcal{H}^2 \rangle \\ &= \int \mathrm{tr}\left(\mathcal{H}^{\dagger 2}\, \mathcal{H}^2\right). \end{aligned}$$

Note that this action is defined for arbitrary background metrics. It is perhaps remarkable that the non-commutativity of 0-forms and 1-forms on the lattice drastically simplifies the notion of gauge covariant derivative to a simple algebraic product of the gauge holonomy 1-form with itself. The discrete gauge theory is in this sense conceptually actually simpler than the continuum theory. In order to illustrate the relevant mechanism in more detail, let us restrict attention to a single plaquette, as illustrated in figure 9, and work out the value of $\mathcal{H}^2$ on that plaquette in full detail.
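The plaquette mechanism described in the quoted passage can be illustrated numerically. This is a toy sketch with my own conventions and made-up matrices, not the paper’s actual setup: assign a group element to each edge of a single plaquette; for noncommuting edge holonomies, the ordered product around the plaquette (a group commutator here) differs from the identity, and that deviation plays the role of the field strength $F \sim \mathcal{H}^2$.

```python
import math

def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(A):
    """Inverse of a 2x2 matrix."""
    a, b, c, d = A[0][0], A[0][1], A[1][0], A[1][1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

I = [[1.0, 0.0], [0.0, 1.0]]
c, s = math.cos(0.3), math.sin(0.3)
U = [[c, -s], [s, c]]          # one edge holonomy: a rotation
V = [[2.0, 0.0], [0.0, 0.5]]   # the other edge holonomy: a scaling; UV != VU

# Holonomy around the plaquette: U V U^-1 V^-1.
P = matmul(matmul(U, V), matmul(inv2(U), inv2(V)))
deviation = max(abs(P[i][j] - I[i][j]) for i in range(2) for j in range(2))
print(deviation > 1e-6)  # True: noncommutativity produces a nonzero "F"
```

If the two edge matrices commuted (an abelian gauge group), `P` would be exactly the identity and the toy field strength would vanish, matching the remark that the non-commutativity is what produces the field strength.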

Just as the Yang-Mills action simplified tremendously when formulated “finitely”, I would expect higher Yang-Mills theory, in particular higher connections and $n$-transport, to work out much more simply as well.

So, to confess, when I recently saw Urs express some frustration over having to develop the mathematics as he goes along, my thoughts immediately jumped to “Why not just formulate things finitely first?” Then, when I saw the page on “axiom of infinity”, it pushed me over the edge and I thought I would speak up. As we saw, when you formulate things finitely, the continuum limit usually presents itself in an obvious way.

The other way is not so obvious. It is not always obvious (although my entire graduate research program was geared to trying to do just this) how to begin with a continuum theory and “discretize” it. There are always choices that must be made along the way and some choices are better than others. However, if you develop a rigorous finite theory first, the continuum theory is generally clear once the details are worked out.

My opinion, for what it is worth, is that $n$-category theorists, and particularly physicists who find themselves immersed in $n$-category theory, might be well served by concentrating on finite theories first, working out their ideas while avoiding complications stemming from the continuum and its infinities. Obviously that is not necessary, but I believe it would make life easier and would illuminate subtle issues more clearly. Or something.

PS: Regarding “discretization”, here is a comment from one of my papers back when I thought about this stuff full time:

Most financial models (that I am aware of anyway) are based on some stochastic processes. What discrete stochastic calculus allows us to do is to write down any financial model we like using the continuum version, turn the crank, and out pops a robust numerical algorithm that is guaranteed to provide solutions that converge to the continuum solutions. In fact, with some clever programming you could potentially automate this process. For example, you could enter expressions representing the stochastic process, then the code automatically generates the correct algorithm and provides a solution.

Discrete stochastic calculus provides a kind of meta algorithm. It is an algorithm for generating algorithms.

The technique applies equally to physical models described by differential forms. In fact, the original motivation was electromagnetic theory. The relation to stochastic calculus and finance was an afterthought. The point being that the framework Urs and I developed represents a “meta algorithm” for discretizing physical models. When the mathematics itself converges to the continuum mathematics, any solutions will converge to continuum solutions as well. At that point, you might ask, “Why bother with the continuum limit?”
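The “algorithm for generating algorithms” idea can be sketched in a few lines. This is my own toy version, NOT the framework from the papers discussed: write down the continuum model $dS = \mu S\, dt + \sigma S\, dW$, turn the crank of a fixed discretization rule (Euler–Maruyama here), and out pops a numerical scheme whose statistics converge to the continuum ones as $dt \to 0$.

```python
import math
import random

def make_scheme(mu, sigma):
    """Mechanically derive a one-step update rule from the SDE coefficients."""
    def step(S, dt, dW):
        return S + mu * S * dt + sigma * S * dW
    return step

def mean_terminal(mu, sigma, S0=1.0, T=1.0, steps=100, paths=5000, seed=0):
    """Monte Carlo estimate of E[S_T] under the generated discrete scheme."""
    rng = random.Random(seed)
    step = make_scheme(mu, sigma)
    dt = T / steps
    total = 0.0
    for _ in range(paths):
        S = S0
        for _ in range(steps):
            S = step(S, dt, rng.gauss(0.0, math.sqrt(dt)))
        total += S
    return total / paths

# Continuum answer for geometric Brownian motion: E[S_T] = S0 * exp(mu*T).
est = mean_terminal(mu=0.05, sigma=0.2)
print(abs(est - math.exp(0.05)) < 0.02)
```

The point of factoring out `make_scheme` is that the same crank works for any drift and diffusion you feed it, which is the “meta algorithm” flavor: entering the continuum model mechanically generates the discrete solver.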

Posted by: Eric on April 13, 2009 4:37 PM | Permalink | Reply to this

Re: Axiom of Infinity

It doesn’t seem very likely to me that restricting to finite things would appreciably simplify the study of $n$-categories. For instance, finite 1-groupoids already include all finite groups, and finite group theory is difficult enough to occupy lots of people full-time!

More seriously, another problem with finitary category theory is that very basic constructions take you out of the finite world. The 2-category of finite categories (finitely many objects and finitely many morphisms) lacks finite colimits, while the 2-category of finitely presented categories lacks finite limits.

Posted by: Mike Shulman on April 13, 2009 7:07 PM | Permalink | Reply to this

Re: Axiom of Infinity

Hi,

This statement opened a can of worms for me:

The 2-category of finite categories (finitely many objects and finitely many morphisms) lacks finite colimits

Would someone be so kind as to help explain this? I’ve been digging around the nLab, the nCafe, wikipedia, and John Armstrong’s blog and STILL cannot seem to grok colimits.

It all seems tantalizing. Especially

Limits and Push-Forward

and

Basic Concepts of Enriched Category Theory.

I’m afraid it is a bit above my head though. Is there some simple way to understand why the 2-category of finite categories has no colimits?

(permalink)

There are various toy example consistency checks showing that this formalization does reproduce what one wants to see:

the ordinary space of states of a quantum particle is easily reproduced this way #.

More remarkably, the path integrals for finite group Chern-Simons theory # (known as Dijkgraaf-Witten theory) as well as # for finite 2-group Chern-Simons theory (known as the Yetter model) are reproduced this way.

This is the evidence that convinced me that the categorical push-forward should indeed be the right abstract way to think about the path integral: if Chern-Simons comes out right, then we are bound to obtain its boundary CFT, too, and that’s all one can ask for.

I looked at more consistency checks of simple kind ## which seem to further support this, though at times the arguments are less than waterproof, in part due to some fuzziness on the technical details of the setup one should look at.

As you will have noticed, most of the toy examples I looked at involve finite categories. That’s how toy examples go. Ultimately we dream about being able to handle real-world setups as they appear in cutting-edge physics. While I have close to no real results on this at the moment, I do have one curious observation:

the problems and their possible solutions that one faces when computing colimits (or “category cardinality”) for non-finite categories are exactly (see also this) of the same kind of flavor as those of renormalization in QFT #.

You could probably guess that when Urs refers to the “real world” as opposed to his finite toy models, my eyebrows rise. If there is anything to this finitism, maybe these toy models are already representing the real world and should be taken more seriously.

Posted by: Eric on April 13, 2009 9:59 PM | Permalink | Reply to this

Re: Axiom of Infinity

Unfortunately, I don’t have time right now to try to explain colimits in general, but I can address $FinCat$. It doesn’t have no colimits; it just doesn’t have all finite colimits. It certainly has coproducts, for instance. But it doesn’t have all coequalizers: the coequalizer of the two inclusions $1 \rightrightarrows 2$, where $2$ is the walking arrow and $1$ is the terminal category, would be a category with one object and $\mathbb{N}$ as its endomorphisms, which is not finite. (It doesn’t matter whether you mean a 2-categorical or 1-categorical coequalizer, either.)

This is a familiar thing in algebraic categories: colimits sometimes involve generating things freely and can thus take you out of the finite world. For instance, the coproduct (= free product) of two finite groups will not, in general, be finite.
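Mike’s free-product example can be checked concretely with a small script (my own encoding, just an illustration): in the free product $\mathbb{Z}/2 * \mathbb{Z}/2$, reduced words strictly alternate between the two generators $a$ and $b$, so new elements appear at every word length, and the coproduct of two 2-element groups is infinite.

```python
def reduced_words(max_len):
    """All reduced words of length <= max_len in Z/2 * Z/2:
    the empty word plus alternating strings over {a, b}."""
    words = [""]
    for start in "ab":
        w, letter = "", start
        for _ in range(max_len):
            w += letter
            words.append(w)
            letter = "b" if letter == "a" else "a"
    return words

# 2*n + 1 distinct elements up to length n, growing without bound.
print(len(reduced_words(5)))   # 11
print(len(reduced_words(50)))  # 101
```

The count $2n + 1$ never stabilizes, which is exactly the sense in which the colimit "generates things freely" and escapes the finite world.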

Posted by: Mike Shulman on April 14, 2009 12:24 AM | Permalink | Reply to this

Re: Axiom of Infinity

The 2-category of finite categories (finitely many objects and finitely many morphisms) lacks finite colimits, while the 2-category of finitely presented categories lacks finite limits.

Are these the right 2-categories? What about the free 2-topos?

Well, of course that depends on how you define 2-topos, which you know more about than I do. But on the one hand, if the free 2-topos (for whatever the right notion turns out to be) lacks some (co)limits, then perhaps these aren't really necessary. And on the other hand, the free 2-topos with finite colimits does have finite colimits, so you can study that instead.

Ultimately, the point is that even a finite limit of finite colimits of finite categories can be described with a finite amount of data, so it is accessible to the finitist.

Posted by: Toby Bartels on April 14, 2009 2:20 AM | Permalink | Reply to this

Re: Axiom of Infinity

If the free 2-topos with finite colimits is okay, even though it contains objects like $B\mathbb{N}$, then why not the free topos with NNO? Even supposedly objectionable things like $\mathbb{N}^\mathbb{N}$ can still be described with “a finite amount of data.” In fact, nothing in mathematics requires an infinite amount of data to describe, since no one can write down more than finitely many symbols in a math paper.

Posted by: Mike Shulman on April 14, 2009 7:31 AM | Permalink | Reply to this

Re: Axiom of Infinity

If the free 2-topos with finite colimits is okay, even though it contains objects like $B\mathbb{N}$, then why not the free topos with NNO? Even supposedly objectionable things like $\mathbb{N}^{\mathbb{N}}$ can still be described with “a finite amount of data.” In fact, nothing in mathematics requires an infinite amount of data to describe, since no one can write down more than finitely many symbols in a math paper.

Sorry, this is my fault for inviting a formalist approach by talking about the free 2-topos.

In one sense, yes, this is all OK. This is the sense in which Hilbert was a finitist; he wanted to develop a finite formal system for Cantor's set theory, prove its consistency by finite methods, then work within that system forevermore. Thanks to Gödel, we now know that this is impossible, but if it had worked, then Hilbert the formalist could have had his finitist cake and eaten his Cantorian paradise too. (Mmm … paradise cake …)

Even before we knew that this was impossible, people like Brouwer were not satisfied with such a programme because they found it an empty game. So I should not bring up the free 2-topos (with or without finite colimits), any more than I should allow you to bring up the free topos with NNO, without explaining the motivation for it. There are a lot of formal systems in the world, but only some of them deserve a research programme.

Consider the claim (whether Zeilberger made it or not) that the Wiles–Taylor proof of FLT is meaningless. Presumably, this proof can be formalised within ZFC (although it hasn't been), so we get the result that ZFC ⊢ FLT; that's a finite claim. But FLT itself is a different finite claim. Presumably, the Wiles–Taylor proof can also be formalised within ETCS, which gives us the finite result that FLT holds in the internal language of the free topos with NNO and choice. But FLT holds in the trivial topos too, and we don't consider that an interesting result!

Actually, in this case, consistency would be sufficient motivation. A priori, the free topos with NNO and choice might be trivial. But a statement of the logical form of FLT must be true if it holds in a nontrivial topos with NNO. So if ETCS is consistent and the Wiles–Taylor proof can be formalised within it, then FLT is true. This won't work for every proposition, of course, not even in number theory, but it works for FLT.

If Zeilberger believes (on the basis of a century of haphazard experiments) that ZFC is consistent, then he would conclude that FLT is true, although this would not constitute a proof. However, there are metatheorems that certain analytic techniques for proving number-theoretic statements can be (mechanically, in principle) rewritten in Peano arithmetic. I may be misinterpreting him, but I believe that Zeilberger would accept, as a ‘rigorous’ proof of FLT, a proof that used such (and only such) continuous reasoning.

So what I really meant to say (and this is reflected somewhat in my suggestion that maybe we don't want colimits anyway) is this: If a finitist, doing finite category theory, ever feels the need to take a finite colimit whose existence can't be proved (and especially if this finitist has good reason, in the form of the non-finitist's theorem that this colimit doesn't exist in FinCat, to doubt that its existence is provable), then the purpose served by it might still be served by the colimit data. Given the right relative consistency result, it would even be valid to work, formally, in the free right exact completion of the 2-category of finite categories. On the other hand, if the only colimits that ever come up are those that are known to exist, then none of this formalism would be useful.

Posted by: Toby Bartels on April 15, 2009 4:22 AM | Permalink | Reply to this

Re: Axiom of Infinity

If a finitist, doing finite category theory, ever feels the need to take a finite colimit whose existence can’t be proved (and especially if this finitist has good reason, in the form of the non-finitist’s theorem that this colimit doesn’t exist in FinCat, to doubt that its existence is provable), then the purpose served by it might still be served by the colimit data.

Is this going along the lines of my suggestion further above that the finitist invoking infinite concepts, or rather handling finite structure that would generate infinite structure were one to allow it to do so, is to be thought of as an example of the general situation where one embeds a category into ever larger completions of itself?

What you just wrote seems to say that the finitist may not believe certain colimits in his category C to exist, but he or she is still free to consider the category Ind(C) of Ind-objects of C, whose purpose is precisely this.

In fact the standard examples of Ind-objects are of this finite-induces-infinite type. Say, in the category FinVect of finite-dimensional vector spaces, any infinite-dimensional vector space V can be realized as the Ind-object of the filtered system of its finite-dimensional subspaces.

(One reason I am mentioning this is that I was just about to start working on expanding the nLab entry on Ind-objects when I saw your message…)

Posted by: Urs Schreiber on April 15, 2009 8:01 AM | Permalink | Reply to this

Re: Axiom of Infinity

The binary tree (or 2-diamond) has some very neat properties. In one “limit”, you recover Brownian motion and stochastic calculus. In another “limit” you recover (1+1)-d Minkowski space. In the latter case, the wave equation is solved exactly even prior to taking the “limit”.

I’ve always used the term “limit” loosely. The discrete mathematics becomes “practically” indistinguishable from the continuum mathematics for most “practical” problems, but Tom Leinster just informed us that the “limit” does not “really” take you to the continuum mathematics.

Would this be an example of a filtered colimit?

Recalling the nature of filtered colimits, this means that in particular chains of inclusions

c_1 \hookrightarrow c_2 \hookrightarrow c_3 \hookrightarrow c_4 \hookrightarrow \cdots

of objects in C are regarded as converging to an object in Ind-C, even if that object does not exist in C itself. Standard examples where Ind-objects are relevant are categories C whose objects are finite in some sense, such as finite sets or finite vector spaces. Their Ind-categories then also contain the infinite versions of these objects, as limits of sequences of inclusions of finite objects of ever increasing size.

Explicitly, I showed in this paper (Equations (3.9)–(3.11)) that on a binary tree, we have the commutation relations

\begin{aligned} &[dx,x] = \frac{(\Delta x)^2}{\Delta t} dt\\ &[dx,t] = [dt,x] = \Delta t\, dx\\ &[dt,t] = \Delta t\, dt. \end{aligned}

Continuum differential forms on (1+1)-d Minkowski space are characterized by

\begin{aligned} &[dx,x] = 0\\ &[dx,t] = [dt,x] = 0\\ &[dt,t] = 0. \end{aligned}

This is achieved by setting \Delta x = c\,\Delta t on the binary tree and letting \Delta t \to 0.

Continuum stochastic calculus (or a close noncommutative cousin) is characterized by

\begin{aligned} &[dx,x] = dt\\ &[dx,t] = [dt,x] = 0\\ &[dt,t] = 0. \end{aligned}

This is achieved by setting \Delta x = \sqrt{\Delta t} on the binary tree and letting \Delta t \to 0.
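For what it’s worth, the two scalings can be sanity-checked numerically (a sketch of mine, not from the paper): in [dx,x] the coefficient of dt is (\Delta x)^2/\Delta t, which tends to 0 under the Minkowski scaling and to 1 under the stochastic one.

```python
# Numeric sanity check (mine, not from the paper) of the two scalings above.
# In [dx,x] the coefficient of dt is (Delta x)^2 / Delta t.
c = 2.0  # an arbitrary "speed of light"
for dt in [1e-2, 1e-4, 1e-6]:
    minkowski = (c * dt) ** 2 / dt        # Delta x = c Delta t
    stochastic = (dt ** 0.5) ** 2 / dt    # Delta x = sqrt(Delta t)
    assert abs(minkowski - c * c * dt) < 1e-12   # -> 0, so [dx,x] -> 0
    assert abs(stochastic - 1.0) < 1e-9          # -> 1, so [dx,x] -> dt
```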

In each case, we have a nice passage from discrete calculus to continuum calculus, but the “limit” takes you out of the category of binary trees (BinTree?), a.k.a. n-diamonds (nDiamond?). Is the limit I’ve been talking about a “filtered colimit” (or maybe a “filtered limit”)?

Posted by: Eric on April 15, 2009 4:41 PM | Permalink | Reply to this

Re: Axiom of Infinity

the “limit”

Some remarks:

- there is not really the limit in the sense that you seem to be implying here: given a diagram in any category C, it may have a limit or not. The nature and purpose of these limits may be wildly different as one looks at different categories C.

As one indication of this, a limit in a category D^{op} is a colimit in the category D!

- the categorical notion of “limit” is only rather loosely related to the notion of limit in analysis, which is presumably the one you had in mind when saying “continuum limit”. Meaning: one can find certain contexts, i.e. certain categories C and certain kinds of diagrams in them, such that the categorical limit over these diagrams computes a limit in the sense of analysis. But in a generic category the notion of limit is nothing like a limit in analysis.

On the other hand, it’s precisely the projective limits and, dually, the inductive colimits which do compute “limiting objects” in roughly the everyday sense of the word (and I guess it is from these cases that the term “limit” in category theory got its name). But it’s still not the notion of limit from analysis.

- While Tom in his comment pointed out that the real interval is not computed as a particular limit, it IS computed as another kind of limit. Namely, it is a terminal object in a certain category. A terminal object is a limit over an empty diagram! Alternatively, it is the colimit of any diagram containing it. So you see, the continuum limit in the case you were asking about is some kind of categorical limit. But there are many kinds of these: one for every category there is.

Posted by: Urs Schreiber on April 15, 2009 9:09 PM | Permalink | Reply to this

Re: Axiom of Infinity

A terminal object is… the colimit of any diagram containing it.

Taken literally, this is false. For instance, the coproduct 1 \sqcup 1 is the colimit of a diagram that ‘contains’ the terminal object 1 (twice, in fact) but it is not, in general, itself a terminal object. It is true that if D \subset C is a full subcategory containing 1, so that 1 is terminal in D as well, then 1 is the colimit of the inclusion functor D \hookrightarrow C; maybe this is what you meant to say?

On the other hand, any object x of any category at all is the colimit (and the limit) of the diagram consisting only of x and no nonidentity arrows. (-:

Posted by: Mike Shulman on April 19, 2009 3:46 PM | Permalink | Reply to this

Re: Axiom of Infinity

maybe this is what you meant to say?

Yes, I meant terminal objects in the diagram category, which includes the case you mention next, of the diagram consisting only of a single object.

Now, I am afraid that didn’t quite serve to help Eric see more clearly about limits and colimits.

I was wondering if it would make sense to short-circuit Eric’s desire to learn about limits and colimits with Jocelyn Paine’s desire to explain basics of how they work.

I am envisioning that we’d have a long list of examples linked to from the nLab entry on limits, starting from the very elementary.

Maybe somebody feels inspired to start filling this nLab entry with content:

nLab: limits and colimits by examples.

(I may find time myself later today.)

Posted by: Urs Schreiber on April 20, 2009 8:12 AM | Permalink | Reply to this

Re: Axiom of Infinity

Is this going along the lines of my suggestion further above that the finitist invoking infinite concepts, or rather handling finite structure that would generate infinite structure were one to allow it to do so, is to be thought of as an example of the general situation where one embeds a category into ever larger completions of itself?

Yes, I think so. That is, whether the finitist wants to think of it this way or not, if the finitist approaches (what we would normally regard as) infinite structures in this way, then we can explain what they're doing in these terms.

What you just wrote seems to say that the finitist may not believe certain colimits in his category C to exist, but he or she is still free to consider the category Ind(C) of Ind-objects of C, whose purpose is precisely this.

Not quite! As a finitist, they would (presumably) only really believe in FinInd(C), the category of formal colimits of finite filtered diagrams.

In fact the standard examples of Ind-objects are of this finite-induces-infinite type. Say, in the category FinVect of finite-dimensional vector spaces, any infinite-dimensional vector space V can be realized as the Ind-object of the filtered system of its finite-dimensional subspaces.

So in particular, this technique is not going to get you anything new.

Posted by: Toby Bartels on April 17, 2009 12:31 AM | Permalink | Reply to this

Re: Axiom of Infinity

Unfortunately, there are not many interesting finite filtered colimits. Any finite filtered category D must have a terminal object, so colimits over D are just evaluation at that terminal object, and FinInd(C) \simeq C for any category C.
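As a concrete illustration (my own, not part of the original comment): in a finite poset that is filtered, i.e. nonempty with an upper bound for every pair, folding the upper-bound operation over all elements produces a greatest element, which is exactly the terminal object the argument above appeals to.

```python
# Sketch (mine): a finite filtered poset has a greatest element, mirroring
# "any finite filtered category has a terminal object".  The poset here is
# divisibility on a finite set closed under lcm, so lcm gives upper bounds.
from functools import reduce
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

elems = [1, 2, 3, 4, 6, 12]               # closed under lcm, hence filtered
top = reduce(lcm, elems)                  # fold the upper-bound operation
assert top in elems                       # the fold lands back in the poset
assert all(top % x == 0 for x in elems)   # and dominates everything: terminal
```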

Posted by: Mike Shulman on April 19, 2009 3:56 PM | Permalink | Reply to this

Re: Axiom of Infinity

Unfortunately, there are not many interesting finite filtered colimits.

Good point! I think that this just may get to the heart of the matter.

I think that a finitist should accept the existence of (some) infinite filtered categories, such as the directed set N of natural numbers (although they may wish to consider this large). But they should not accept the existence of the category of formal N-indexed colimits (in an arbitrary given category C), just as they should not accept the existence of the set of infinite sequences (in an arbitrary given set S).

So all of this business about pro-objects and ind-objects won't do anything for them.

Posted by: Toby Bartels on April 19, 2009 9:33 PM | Permalink | Reply to this

Re: Axiom of Infinity

So I should not bring up the free 2-topos (with or without finite colimits), anymore than I should allow you to bring up the free topos with NNO, without explaining the motivation for it.

I agree completely.

I’m not entirely sure what you mean, though, by talking about whether FLT is ‘true.’ Do you mean ‘Peano arithmetic \vdash FLT’? Also, my understanding of ‘finite’ is probably off-base, but is FLT really a ‘finite claim’? I don’t see how a computer could verify FLT itself in a finite amount of time (as opposed to verifying a proof that ‘Peano arithmetic \vdash FLT’).

And on a completely unrelated note, I’m not sure that ‘the free topos with choice’ exists, in the sense usually understood. At least, I don’t know how to construct it, since the statement of choice involves an unbounded quantifier and as such is not a statement of HOL.

Posted by: Mike Shulman on April 19, 2009 4:16 PM | Permalink | Reply to this

Re: Axiom of Infinity

I’m not entirely sure what you mean, though, by talking about whether FLT is ‘true.’

Well, I'm channeling Zeilberger here, so I may not know either. But I think that one might regard it as an empirical fact that x, y, z \gt 0 \;\Rightarrow\; n \gt 2 \;\Rightarrow\; x^n + y^n \ne z^n comes out as true whenever you calculate it (which of course you can, as a boolean value) for any four natural numbers x, y, z, and n, even though you can never verify it for all of them. This is in a different league than a statement that's claimed to be true for all infinite sequences of natural numbers or for all real numbers. In the infinite-sets-are-like-large-categories analogy, this would be like stating (as a non-finitist) that something is true for all sets, even if you're not willing to accept the existence of an uncountable Grothendieck universe.
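The point that each instance is a computable boolean can be made concrete (an illustration of mine; the function name is made up):

```python
# Illustration (mine; the function name is made up): the FLT predicate at any
# one tuple of naturals is a boolean you can simply compute in finite time.
def fermat_holds_at(x, y, z, n):
    """True iff x^n + y^n != z^n for these particular inputs."""
    return x**n + y**n != z**n

# n = 2 admits solutions (Pythagorean triples), so the predicate can fail there:
assert not fermat_holds_at(3, 4, 5, 2)
# but an exhaustive search of a finite box finds no failures for n > 2,
# "empirically" confirming FLT on that box without proving it in general:
assert all(fermat_holds_at(x, y, z, n)
           for x in range(1, 20) for y in range(1, 20)
           for z in range(1, 20) for n in range(3, 8))
```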

Actually, Zeilberger's got a concept of ‘symbolic natural number’ that he would apply here. I really don't understand that, I'm afraid.

I’m not sure that ‘the free topos with choice’ exists, in the sense usually understood.

It doesn't sound any harder to me than the free category with coproducts, which also requires universal quantification over objects to state. Maybe the problem is that choice morphisms aren't unique, even in the appropriate categorial sense? OK, I think that I understand now.

Posted by: Toby Bartels on April 19, 2009 10:02 PM | Permalink | Reply to this

Re: Axiom of Infinity

My problem with ‘FLT is true’ is, I think, unrelated to finitism. It’s just that a statement like “for any four natural numbers” depends on knowing exactly what a natural number is. Many mathematicians seem to believe in ‘the real’ set of natural numbers, an attitude which seems quite puzzling to me post-Gödel. It’s no better or worse than saying that something is true of ‘all sets’ before you have chosen a particular model of set theory.

Posted by: Mike Shulman on April 20, 2009 5:13 PM | Permalink | Reply to this

Re: Axiom of Infinity

Are you worried about nonstandard models of arithmetic? But that's formalism again. I can tell the difference between a natural number and (say) a polynomial, even if Peano arithmetic can't.

We seem to have an intuitive understanding of what a natural number is. And while Frege's intuitive understanding of what a set (or class) is turned out to be incoherent, it's not so clear that there's such a problem in this case. That is, it's much easier to take a realist attitude toward such entities.

I wouldn't do that myself. But I'm not surprised that other mathematicians do. Actually, much as it's easier for a finitist to accept excluded middle, it's probably easier for a finitist to be a mathematical realist.

Posted by: Toby Bartels on April 21, 2009 2:33 PM | Permalink | Reply to this

“nonstandard Physics”; Re: Axiom of Infinity

Foundational Physics finds nonstandard models of arithmetic to be:
(a) necessary;
(b) worse than useless;
(c) irrelevant

Give examples. Show your work. Turn in before end of class, or take home for half-credit. You may use your calculator, but it won’t help much.

Posted by: Jonathan Vos Post on April 21, 2009 2:58 PM | Permalink | Reply to this

Re: Axiom of Infinity

Are you worried about nonstandard models of arithmetic?

Sort of, except that ‘nonstandard model’ implicitly assumes the existence of a ‘standard model,’ something that isn’t at all clear to me. How do you distinguish the ‘standard’ model from the ‘nonstandard’ ones? Certainly if you start by picking a certain model you can construct others that look nonstandard by comparison, but that’s only a relative notion. It does seem unlikely that our intuitive beliefs about natural numbers are inconsistent like Frege’s notion of set, but they do seem to me to be at best circular, and thus incapable of uniquely identifying an intended model.

Of course, as Jonathan points out (at least, if I read his post correctly), the distinction is completely irrelevant for applications of mathematics; one model of arithmetic is just as good as another. But when we start trying to distinguish ‘truth’ from provability, I think it pays to be clear about to what extent ‘truth’ has any meaning.

Posted by: Mike Shulman on April 22, 2009 5:20 AM | Permalink | Reply to this

Re: Axiom of Infinity

Also, I’m not really sure that the free right exact completion of FinCat is where you would want to work, since the inclusion of FinCat in that 2-category destroys the many finite colimits that do exist in FinCat.

Regardless, I expect you’re probably right about how a finitist should do category theory. However, my comment about the difficulties with colimits was mainly directed at Eric’s suggestion that it would simplify the lives of n-category theorists to concentrate on finite things first.

Posted by: Mike Shulman on April 19, 2009 7:02 PM | Permalink | Reply to this

(oo,1)-Kan extension

I had intended to start writing an nLab entry on Kan extensions for (\infty,1)-functors following section 4.3, p. 215, but I am getting stuck right at the beginning.

Can anyone help?

I am certainly being dense, but I keep getting thrown by the top of p. 224, where that “induced diagram” appears.

Maybe somebody could just say in words what’s going on, so as to put me back on track.

Posted by: Urs Schreiber on April 16, 2009 10:09 AM | Permalink | Reply to this

Legal to define a sub-category using universal properties?

Is it legal to define a sub-category using universal properties?

Say I have the category of graphs G. For each g in G I can turn g into its Kleene closure g* (a monoid-like construction), which is identical to a category, and thus transform G into G*.

Only some objects in G* will have a (universal) initial element, and that subset can be called IG* and be back-formed to IG.

Can I just say IG exists with some handwaving? Or is it hard or impossible to prove?

Defining things in terms of properties is generally not kosher but does it work for universal properties?

Posted by: RodMcGuire on April 16, 2009 11:02 AM | Permalink | Reply to this

Re: Legal to define a sub-category using universal properties?

Can I just say IG exists

I am not sure that I understood the question entirely.

Is the question whether some category which is expected to be defined along the lines of “the category of all xyz” is problematic due to size issues?

If so, the standard solution is to say: “the large category of all small xyz”.

Where a small xyz (for instance a small graph) is an xyz whose underlying set is small (with respect to one fixed universe).

Posted by: Urs Schreiber on April 16, 2009 12:35 PM | Permalink | Reply to this

Re: Legal to define a sub-category using universal properties?

Defining things in terms of properties is generally not kosher

Defining subcategories in terms of properties of the objects generally is kosher if you want a full subcategory. So in this case, you can define IG as the full subcategory of G whose objects are those graphs g whose Kleene closure g^* has (when thought of as a small category) an initial object.

Even if you don't want a full subcategory, that is if you want to consider only certain graph morphisms between these graphs, then you can still define this by a property, but you'll have to specify which property of graph morphisms you want.

Actually, the only part that seems hand-wavy here is the definition of G^*! Fortunately you don't need that to define IG (if I understand correctly what that's supposed to mean).
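To make the criterion concrete, here is a small brute-force sketch (my own illustration, restricted to finite acyclic graphs, where the free category on the graph has only finitely many morphisms; for a graph with cycles the free category is infinite and this search would not terminate):

```python
# Sketch (mine): for a finite acyclic graph, morphisms of the free category
# are directed paths, so a vertex is an initial object iff it has exactly
# one path to every vertex (counting the empty path to itself).
def path_counts(edges, src, vertices):
    """Count directed paths from src to each vertex (graph must be acyclic)."""
    counts = {v: 0 for v in vertices}
    def walk(v):
        counts[v] += 1
        for (a, b) in edges:
            if a == v:
                walk(b)
    walk(src)
    return counts

def has_initial_object(vertices, edges):
    """True iff the free category on the graph has an initial object."""
    return any(all(n == 1 for n in path_counts(edges, v, vertices).values())
               for v in vertices)

# a "span" 0 -> 1, 0 -> 2: vertex 0 is initial in the free category
assert has_initial_object([0, 1, 2], [(0, 1), (0, 2)])
# two parallel edges 0 -> 1: two paths from 0 to 1, so no initial object
assert not has_initial_object([0, 1], [(0, 1), (0, 1)])
```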

Posted by: Toby Bartels on April 17, 2009 12:01 AM | Permalink | Reply to this

Down Again

<EOM>

Posted by: Mike Shulman on April 19, 2009 7:07 PM | Permalink | Reply to this

Re: Down Again

It’s back!

Posted by: Toby Bartels on April 19, 2009 8:54 PM | Permalink | Reply to this

The Future of Science

No doubt someone has mentioned it already, but I came across Michael Nielsen’s The Future of Science, and it is a fascinating read, all about online scientific collaboration, etc.

Posted by: Bruce Bartlett on April 19, 2009 8:19 PM | Permalink | Reply to this

A name for Gamma

We have the simplex category, the globe category, the cube category, and I am currently pushing for the cycle category. I presume that the Moerdijk-Weiss category \Omega (the domain category for dendroidal sets) should be called the “tree category” and that Joyal’s category \Theta (the domain for cellular sets) should be the “cell category.” But what should we call Segal’s category \Gamma^{op}?

Posted by: Mike Shulman on April 22, 2009 3:02 PM | Permalink | Reply to this

Re: A name for Gamma

\Gamma is to \Delta as the theory of commutative monoids is to the theory of monoids. So perhaps we should call \Gamma the commutative simplex category.

I’m using \Gamma in the sense that Segal originally used it. (Some later authors decided that it would be better to reverse directions and call it \Gamma^{op}. I disagree.)

Posted by: Tom Leinster on April 23, 2009 2:12 AM | Permalink | Reply to this

Re: A name for Gamma

What about the other skew-simplicial groups? I mean, if the cycle category is the one for cyclic homology, then what is, say, the category Y for dihedral homology? Maybe the dihedron category?

Posted by: Zoran Skoda on April 24, 2009 12:26 AM | Permalink | Reply to this

Re: A name for Gamma

I’ve never heard of dihedral homology. Do the objects of this category have an interpretation as ‘dihedrons?’

It sounds like you have other such categories in mind (what does ‘skew-simplicial group’ mean?) – is there a list somewhere?

Posted by: Mike Shulman on April 24, 2009 2:32 PM | Permalink | Reply to this

Getting xypic diagrams into nLab by converting to PDF then SVG

In the old nLab General Discussion page, at the end of the section Basic Math Entries, Diagrams, John says:

Since all my papers in LaTeX use LaTeX macro package like xypic to draw diagrams, it’s quite sad that we need to use something like SVG here. It means I can’t take material from my expository papers and put it on the nLab without redrawing all the diagrams.

I’ve been Googling terms such as “LaTeX category theory diagram SVG” to see whether anyone has solved this. I may have found a solution in the Wikipedia page User:Ryan Reich. This has a section entitled How to make SVG versions of LaTeX diagrams. Ryan starts by writing:

It’s easy enough, using LaTeX and perhaps the xy package, to create very intricate mathematical diagrams. Unfortunately, they will all be output as .dvi or .pdf files, whereas the best format for uploading to Wikipedia is .svg (Scalable Vector Graphics). Figuring out how to make the conversion is a major pain …

He goes on to explain how to do this by converting your LaTeX file to a PDF — one necessity is to tweak your fonts, so they look good when scaled — cropping the PDF to remove space around the picture, and then running a free PDF-to-SVG conversion utility called pdf2svg. The resulting SVG file will, I presume, be a text file that you can paste into nLab.
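For Unix users, the whole pipeline boils down to three commands (a sketch under the assumption that pdflatex, pdfcrop, and pdf2svg are all installed; the file name diagram.tex is hypothetical):

```shell
# Compile the xypic diagram to PDF, trim the margins, convert to SVG.
pdflatex diagram.tex             # produces diagram.pdf
pdfcrop diagram.pdf              # produces diagram-crop.pdf, whitespace removed
pdf2svg diagram-crop.pdf diagram.svg
# diagram.svg is a plain-text XML file that can be pasted into the edit box
```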

I say “I presume” because I’m running Windows, and wasn’t able to do this. By running LaTeX from TeXnic Center, an editor that comes with the proTeXt distribution of MiKTeX, I was able to generate a PDF of the associator pentagon in Aaron Lauda’s commutative-diagrams-with-xypic page. (He’s got lots of other nice examples here, where that page is linked from, including braids, knots, and cobordisms.)

But, I couldn’t turn the PDF into an SVG. The aforementioned pdf2svg page suggests that pdf2svg was written for Linux, and the commands given there to install it are Unix ones. I might be able to run them under Cygwin, the Unix-like shell for Windows, but such installation attempts can be tricky to get working. And I haven’t yet found a free PDF-to-SVG converter for Windows. However, maybe the info above will be useful to Unix users; installing pdf2svg under Unix should be straightforward.

Posted by: Jocelyn Paine on April 24, 2009 12:46 PM | Permalink | Reply to this

Re: Getting xypic diagrams into nLab by converting to PDF then SVG

When I follow Ryan’s instructions (on Ubuntu Linux) for a simple commutative square, I get an SVG file that, when I put it in here or instiki, just looks like a huge black box. When I load it up in Inkscape or directly in Firefox, the lines forming the square are there, but no arrowheads or text labels.

Anyone else have better luck?

Posted by: Mike Shulman on April 24, 2009 2:21 PM | Permalink | Reply to this

Re: Getting xypic diagrams into nLab by converting to PDF then SVG

The SVG code that I end up with also looks much different from the ones that Ryan has made; mine looks like it has a bunch of png images included as ‘mask’s. Clearly this is not what we want. What am I doing wrong?

Posted by: Mike Shulman on April 24, 2009 2:54 PM | Permalink | Reply to this

Re: Getting xypic diagrams into nLab by converting to PDF then SVG

the lines forming the square are there, but no arrowheads or text labels.

Perhaps somewhere in LaTeX or xypic is an option that, if set, makes them hive off text as separate little PNG image files? But if not set, causes them to treat it just as part of the overall bitmap? I’m only guessing, because I don’t know much about how they work: but if there is such an option, and it’s somehow got set, and if they treat arrowheads as text, would that be consistent with what you’re seeing?

When I used to teach at Oxford, we had some very knowledgeable TeX-perts in the University Computing Service who were willing, and frequently very able, to sort out such problems, as if LaTeX macros were their bedside reading. Perhaps someone on the Café with a similarly capable computing support department could take the problem there?

Posted by: Jocelyn Paine on April 24, 2009 3:54 PM | Permalink | Reply to this

Re: Getting xypic diagrams into nLab by converting to PDF then SVG

Hey… shouldn’t that be TeXnician? That’s about the only thing I remember from when I narrowly failed to avoid having a close encounter of the third kind with the TeX Book a few years ago. Something like: “Exercise 1. When you’re finished reading this book, will you be (a) a TeXpert or (b) a TeXnician?”. Anyhow, the method you’ve mentioned about getting xypic into MathML sounds interesting.

Posted by: Bruce Bartlett on April 24, 2009 9:38 PM | Permalink | Reply to this

Re: Getting xypic diagrams into nLab by converting to PDF then SVG

Yep, you’re right. I can’t remember all that much of the TeX book either, having vowed never to get too close to any document-description language capable of universal computation (exemplified by Ian Timourian’s Generating Hall-of-Mirrors Effect with Recursive LaTeX Code and Images), but I Googled an answer. It’s TeXnician, because the X in TeX is such that TeXnician sounds more like technician than TeXpert sounds like expert.

Posted by: Jocelyn Paine on May 5, 2009 3:17 PM | Permalink | Reply to this

Re: Getting xypic diagrams into nLab by converting to PDF then SVG

I can’t remember all that much of the TeX book either, having vowed never to get too close to any document-description language capable of universal computation

If you want to use TeX at all, then you can still read the earlier chapters; just don't get too close to Chapter 20.

Posted by: Toby Bartels on May 5, 2009 5:42 PM | Permalink | Reply to this

Re: Getting xypic diagrams into nLab by converting to PDF then SVG

I posted a comment on this earlier but it seems to have gotten lost in cyberspace. I’ve started a thread on this over at the n-Forum. Turns out that there is a way to do this, but not from xy-pic.

Posted by: Andrew Stacey on May 7, 2009 10:52 PM | Permalink | Reply to this
Read the post nLab - More General Discussion
Weblog: The n-Category Café
Excerpt: Discussing nLab
Tracked: May 6, 2009 9:18 AM

Post a New Comment