September 9, 2008

Reliability

Posted by David Corfield

Melvyn B. Nathanson has an article on the arXiv today – Desperately seeking mathematical truth – which questions the reliability of the mathematical literature. He observes,

When I read a journal article, I often find mistakes,

and reasonably concludes that

The literature is unreliable.

For deep, difficult and long proofs, we can only “rely on the judgments of the bosses in the field”. And so,

…even in mathematics, truth can be political.

A first line of response might try to lessen the worry. A dangerous situation is one where a result which is used frequently in other results hangs by a single perilous thread. But fortunately there’s some useful feedback in the system. To the extent that the result is used frequently, people will look more closely at it to see if anything extra can be extracted, and in particular will submit its proof to closer scrutiny. Alternative proofs are likely to follow. A result upon which little depends may well not receive this attention, but then its reliability matters less.

And to the extent that people detect power in a result, this often coincides with an implicit conceptual fruitfulness whose elaboration often leads to its being tied in a variety of ways to other parts of the system. Think of quadratic reciprocity. Of course this process may take a long time on the scale of an individual’s career. Nathanson raises the long proof of the classification of finite simple groups as one for which we must rely on the bosses. Perhaps this is of little consolation to him now, but it is surely feasible that a century hence there will be a much better proof – the field will have found its Darwin:

…the classification of finite simple groups is an exercise in taxonomy. This is obvious to the expert and to the uninitiated alike. To be sure, the exercise is of colossal length, but length is a concomitant of taxonomy. Those of us who have been engaged in this work are the intellectual confreres of Linnaeus. Not surprisingly, I wonder if a future Darwin will conceptualize and unify our hard won theorems. The great sticking point, though there are several, concerns the sporadic groups. I find it aesthetically repugnant to accept that these groups are mere anomalies… Possibly…The Origin of Groups remains to be written, along lines foreign to those of Linnean outlook. (John Thompson)

But perhaps this line of response is a complacent one. If a single mathematician “often finds mistakes”, there must be a huge number out there. Now that so many papers are posted on the arXiv, should it not be expected of you that on finding an error you email the author? A more radical and interesting solution would be to have a site arxiv-comments.org where any mistakes noticed, or even potential mistakes, could be recorded, along with useful thoughts on connections with other work, etc. This could do some of the work of our public reviews.

Posted at September 9, 2008 11:15 AM UTC

TrackBack URL for this Entry:   http://golem.ph.utexas.edu/cgi-bin/MT-3.0/dxy-tb.fcgi/1786

Re: Reliability

A more radical and interesting solution would be to have a site arxiv-comments.org where any mistakes noticed, or even potential mistakes, could be recorded

It seems that the arXiv’s trackback system is geared along these lines.

On the other hand, several times I got the impression that arXiv trackbacks seem to be turned off somehow, especially in parts of the math archive. (?)

Posted by: Urs Schreiber on September 9, 2008 11:34 AM | Permalink | Reply to this

Re: Reliability

But the trackback system is only set up for people who publish webpages. Our ‘public reviews’ receive comments from people who don’t.

And the situation may be getting worse. It doesn’t look to me as though research-level maths blogs are increasing. Indeed, they may well be contracting. Neverendingbooks has closed. The Everything Seminar has become an LHC blog. The Noncommutative Geometry blog has 8 posts so far this year, compared to 49 in 2007.

Posted by: David Corfield on September 9, 2008 11:58 AM | Permalink | Reply to this

Re: Reliability

But the trackback system is only set up for people who publish webpages.

True. But it’s a start. If you are eager to make a public comment on somebody’s article but don’t feel it is worth an entire article of your own, you have the possibility of having a link to your comment installed on the article’s arXiv site.

Which is a first step. And, by the way, a reason why many people object to the arXiv trackback system: the idea that random negative comments may be posted to your article, sticking there forever, can be disturbing.

Posted by: Urs Schreiber on September 9, 2008 12:14 PM | Permalink | Reply to this

Re: Reliability

Fortunately, counteracting the contraction of math-blogging, Todd Trimble has been posting some excellent category theoretic material since April.

We must spring clean our blogroll, or is that autumn/fall clean?

Posted by: David Corfield on September 9, 2008 4:59 PM | Permalink | Reply to this

Re: Reliability

Bloglinks from the Café seem to show up, e.g., here.

Posted by: David Corfield on September 9, 2008 12:24 PM | Permalink | Reply to this

Re: Reliability

In general, yes, but it seemed to me that sometimes, on some parts of the math arXiv, they don’t. But maybe I just didn’t look properly. When I notice it happening again I’ll drop a note here.

Posted by: Urs Schreiber on September 9, 2008 12:45 PM | Permalink | Reply to this

Re: Reliability

Jacques Distler has given a talk dealing with closely related issues, in particular with our blogs, which you can view online by following the link here.

Posted by: Urs Schreiber on September 10, 2008 2:37 PM | Permalink | Reply to this

Re: Reliability

While the issues Nathanson raises are interesting, they are not nearly as interesting as the issues he doesn’t raise.

For example, he assumes that we all agree on what “mathematical truth” is. In his implicit interpretation, a statement is true if it can be deduced from a consistent axiomatic system using informal two-valued logic. And he is shocked, shocked! that the judgment of the validity of such a deduction amounts to a social consensus. He calls it “political,” but for me “political” is much too loaded a term. My reply to his concern would be
“How else would you do it?”

It is not clear to me that Nathanson’s description of how mathematics is done is realistic. For one thing, the axioms are fluid. There was plenty of interesting mathematics done before ZFC was enshrined as a foundation, and plenty more will be done after ZFC is replaced by something else (n-category theory, perhaps?). What I am trying to say is that we don’t just deduce mathematical truth from axioms. There is also feedback in the other direction: our experience of doing math modifies what we consider the foundations/axioms.

As an example of such an evolution, look at the definition of orbifolds. There are plenty of papers that use Satake’s original definition, but there is an emerging consensus that that definition needs to be abandoned. In what sense, then, are the papers that use the original definition true?

And finally a question that exposes me as a crank: why do we like two-valued logic so much?

Posted by: Eugene Lerman on September 9, 2008 5:33 PM | Permalink | Reply to this

Re: Reliability

In what sense, then, are the papers that use the original definition true?

Spoken like a true Lakatosian!

Now, how to view the evolution of the concept ‘orbifold’: as drift, as improvement on the original definition, or as movement towards the right definition?

Posted by: David Corfield on September 9, 2008 8:35 PM | Permalink | Reply to this

Re: Reliability

And finally a question that exposes me as a crank: why do we like two-valued logic so much?

Actually, it’s a great question; not at all crankish! A primary source of “many-valued logics” which are of great mathematical interest is topos theory. The internal logic of a topos is almost never two-valued.

That’s not to say you can’t use two-valued logic to study such toposes “from the outside”, as it were, but there are some cases (such as synthetic differential geometry) where the toposes look dreadfully complicated when seen from without, but where the mathematics as seen from within, according to the internal logic, is simple and beautiful.

Posted by: Todd Trimble on September 10, 2008 12:17 AM | Permalink | Reply to this

Re: Reliability

Cf. Steve Vickers on learning to love the gorilla.

Posted by: John Baez on September 10, 2008 3:55 AM | Permalink | Reply to this

Re: Reliability

There’s this guy named Carl Hewitt, who invented the Actor model of computation. The creators of the language Scheme, Sussman & Steele, have said they invented it to try to understand Hewitt’s thesis. Milner, who invented the pi calculus, also began from Hewitt’s work. Also Erlang and E are derived from it. He invented the first logic programming language, Planner.

Unfortunately, his work is very hard to read. He likes introducing new terminology and uses wild typography.

His latest work is on something he calls “direct logic” that aims to “contain” the effects of an inconsistency; it’s related to paraconsistent logic. For example, almost none of the programs we write are “correct” in any way, but they still mostly work. What can you say about the behavior of an incorrect program? How do you reason about policies, norms, or practices?

There’s a PDF here.

Posted by: Mike Stay on September 10, 2008 1:33 AM | Permalink | Reply to this

Re: Reliability

David wrote:

If a single mathematician “often finds mistakes”, there must be a huge number out there.

Sure. But most of them don’t matter much, since they are easy to fix.

Good math is a lot more like real life than some people think. It’s not a brittle, perfect crystal that’s bound to utterly shatter if it has a single tiny flaw in it. It’s less perfect, and more robust. Some mistakes are impossible to fix, and doom a proof. Lots of them aren’t.

Now that so many papers are posted on the arXiv, should it not be expected of you that on finding an error you email the author?

That would be the noble thing, but often when I spot the mistake, I spot the fix two minutes later. After this happens, it’s often a bit tricky to tell whether the mistake was a true mistake, or just sloppy writing on the author’s part, or overly pedantic or insufficiently insightful reading on my part!

(Not surprisingly, mistakes occur most often in the passages that are vague or contain phrases like ‘it is easy to see’.)

Given all this, it often seems a bit too tiresome to send someone an email. “Hi! You may have made a small mistake that’s easy to fix, or maybe you just phrased your argument sloppily, or maybe I just didn’t understand you the first time…” I’m not eager to make someone’s acquaintance with an email like this! I’m usually glad that I was able to fix the proof and get on with business.

To increase the reliability of my results, I avoid proving theorems that rely on someone else’s result unless either 1) I understand the proof of that result or 2) the result is sufficiently well-known and time-tested that I feel I can trust it even if I don’t understand the proof. And frankly, I don’t really like 2). When I’m proving stuff, I’m only happy when I know how every little piece works.

(When I write This Week’s Finds, on the other hand, I feel perfectly happy talking about things I just barely understand! Being too constipated there would just slow down the process of learning stuff.)

Posted by: John Baez on September 10, 2008 3:52 AM | Permalink | Reply to this

Re: Reliability

Instinctively, I take a less sanguine view than the one you seem to take here, John – but on reflection this may just be confirmation bias on my part, i.e. one remembers the slips and gaps in proofs, not the majority of times when the argument works.

Still, while perhaps not being quite as concerned as Nathanson, I do think it would be unhealthy if a general laissez-faire culture were to develop of ‘this argument looks OK, and if it weren’t, someone would have picked up on it, or will pick up on it’. Not because I have evidence to counter your “robustness of a body of mathematics” point – although I feel that in some areas of research the arguments are still rather fragile creatures, and that being guided too much by what seems plausible is dangerous – but because trying to learn new things as a starting graduate student can be extremely frustrating if large chunks of the literature are loose on the details. Filling in gaps and fixing minor slips is easier the more mathematics one does, but less so at the start of one’s development.

Also, some mistakes are so egregious that the fact they slip through the net is embarrassing all round. (4-page “proofs” of a problem open since the 1960s, which rest on irretrievably wrong interchanges of limits, to allude to one instance.) Granted, these are probably outliers rather than representative examples, but it doesn’t inspire confidence – which I think was Nathanson’s underlying point. Fluidity of axioms doesn’t bother me: possible instances of petitio principii, or argument by a picture of a non-generic case, do.

Innocent question for anyone homological who’s reading: how significant was Neeman’s counterexample to the mis-statement of Roos’ $\lim^1$ theorem? My completely inexpert view from the sidelines is that, much as JB argues, proofs where the incorrect result has been applied have been easily repaired – but I’d be interested to hear from anyone who knows more.

Finally, in case this seems a peevish comment, can I wholeheartedly applaud JB’s approach when proving stuff – it’s something that I have tried to stick to but can’t claim to have completely followed. (And in the interests of full disclosure: my first publication misquoted a result on semigroups when trying to give background motivation. So far no one has sent me emails, polite or otherwise, pointing out the erroneous claim.)

Posted by: Yemon Choi on September 10, 2008 10:35 PM | Permalink | Reply to this

Re: Reliability

Sorry, posted before seeing Eugene Lerman’s comment below, which I think says better what I was trying to communicate.

Posted by: Yemon Choi on September 10, 2008 11:01 PM | Permalink | Reply to this

Re: Reliability

I am currently working in an area which depends heavily on some brilliant but not very well written work presented in several papers and a lecture notes book written by a mathematician over 30 years ago. There are about three theorems in the book which are absolutely critical to the rest of the theory. All of the proofs were tricky and short on important details, or had nasty red-herring typos. The author left some important proofs, or important parts of proofs, to the reader, and almost invariably the missing proofs are either tricky or onerously tedious.

When I first read this material I tried to follow every single argument and fill in the details, and it was an enormous, time-consuming ordeal. I now look back on it as an exercise in self-flagellation. I find it very difficult to believe that more than ten people have read this material with an equally critical eye. The thought of my current work extending this existing literature makes me nervous enough that I am actually rewriting much of it with the rigor and detail one might expect from Bourbaki, while removing an extra complicating layer of structure in the representation theory that hasn’t proved necessary and which obscured access to some nice results.

Happily, I have found nothing in the original theory which is false, but why couldn’t the author have put more effort into actually HELPING the reader? Some people will probably think that this rewrite is a waste of time, but I guess, like John, I really like to know that I understand and can rely on the historical material that I’m working with. I wonder, though, if I might also be a bit of a control freak.

Posted by: Richard on September 11, 2008 4:11 AM | Permalink | Reply to this

Re: Reliability

Richard said:

Happily, I have found nothing in the original theory which is false, but why couldn’t the author have put more effort into actually HELPING the reader?

Perhaps because the author didn’t have enough incentive to help the reader? (Or maybe I’m getting more cynical in my premature fogeydom.)

Some people will probably think that this rewrite is a waste of time, but I guess like John I really like to know that I understand and can rely on the historical material that I’m working with.

Oh, no, I think this is extremely admirable on your part. I just hope the effort gets acknowledged or credited somehow.

Posted by: Yemon Choi on September 14, 2008 2:28 AM | Permalink | Reply to this

Re: Reliability

Great, we’ve achieved something of a consensus that Nathanson is worrying where he largely needn’t, and missing what is more important, namely, concept change. All of which sits very happily with me, and is taken up in Michael Harris’s contribution to the Princeton Companion.

Nathanson’s article appeared as an opinion piece in the August edition of the Notices. Perhaps we should compose a response.

(By the way did people note Jacob Lurie’s WHAT IS…an $\infty$-Category in the September issue, although what he talks about there is what we call here an $(\infty, 1)$-Category?)

Posted by: David Corfield on September 10, 2008 11:12 AM | Permalink | Reply to this

Re: Reliability

Yes. Grr.

Posted by: Tom Leinster on September 10, 2008 12:03 PM | Permalink | Reply to this

Re: Reliability

I haven’t seen it.

Why Grr?

Posted by: Urs Schreiber on September 10, 2008 12:08 PM | Permalink | Reply to this

Re: Reliability

His appropriation of the term ‘$\infty$-category’.

Posted by: Tom Leinster on September 10, 2008 4:38 PM | Permalink | Reply to this

Re: Reliability

Are you sure it’s him and not the editors of the Notices?

Posted by: Eugene Lerman on September 10, 2008 4:58 PM | Permalink | Reply to this

Re: Reliability

I don’t think so, as it’s consistent with his calling $\infty$-topoi what might be called $(\infty, 1)$-topoi.

Posted by: David Corfield on September 10, 2008 5:46 PM | Permalink | Reply to this

Re: Reliability

Lurie uses $\infty$-category to mean $(\infty,1)$-category.

For nonexperts, I should explain why this is a problem. $(\infty,1)$-categories are a very special, very well-understood case of full-fledged $\infty$-categories: they’re $\infty$-categories where all the $n$-morphisms are invertible for $n \gt 1$. Lurie is a bigshot. So now, if someone says they’re struggling to understand $\infty$-categories, people may reply: “Lurie already understands them, so why not just read his stuff?”
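For the nonexpert, the bookkeeping behind the $(n,r)$ notation used throughout this thread can be sketched as follows (a summary of the standard convention, not anything beyond what is said above):

```latex
% An (n,r)-category has k-morphisms for k <= n, with every k-morphism
% invertible for k > r.  The two special cases in play here:
%   (\infty,0)-categories = \infty-groupoids (everything invertible);
%   (\infty,1)-categories = \infty-categories in which every k-morphism
%                           is invertible for k > 1.
(\infty,0)\text{-Cat} \;\subset\; (\infty,1)\text{-Cat} \;\subset\; \cdots \;\subset\; (\infty,\infty)\text{-Cat} \;=\; \infty\text{-Cat}
```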

The real problem is that we need a short, snappy word for $(\infty,1)$-category — since it’s an important concept. Joyal is writing a book on quasicategories, which are his favorite approach to $(\infty,1)$-categories. But, he’s looking for a better name for these. He asked me for some candidates when we were in Barcelona. I urged that we choose something really short and mundane, as unobtrusive as the word ‘set’.

Since an $(\infty,1)$-category is a place where you can do homotopy theory, I sort of like the term ‘setting’ — but I don’t quite love it.

Joyal has proposed the term ‘homotopos’ for $(\infty,1)$-topos.

Posted by: John Baez on September 10, 2008 6:44 PM | Permalink | Reply to this

Re: Reliability

I saw Joyal last week at Cats 3 in Pisa. In his talk he wrote “quategory” on the board without comment; I wondered whether he’d meant to do this, and later discovered that yes, he’s been experimenting with it as a term for quasi-category (his own implementation of the notion of $(\infty, 1)$-category).

But during that week there was another development in André’s search for a good term for quasi-category. He’s now thinking of using the word “nef”. Certainly this is as short as “set”! In French, nef means ship or “craft”, e.g. aéronef means aircraft. Apparently it means the same kind of thing in Spanish and Italian. (In French it can also mean the nave of a church, and I suppose it’s related to English words such as “navy”.)

(A drawback of “nef” is that it’s already used in algebraic geometry, for something completely different.)

André started to tell me how a quasi-category can be thought of as being like a ship — in particular, a paper boat — but unfortunately we were in the lunch queue when the conversation started and we ended up at separate tables… so I never got to find out. I’d love to have heard the end of that!

Posted by: Tom Leinster on September 10, 2008 7:08 PM | Permalink | Reply to this

Re: Reliability

I was always hoping I’d learn enough algebraic geometry to figure out what nef meant. Now maybe I won’t need to: it’ll mean $(\infty,1)$-category.

I suppose it’s related to English words such as navy.

Cool! So, we’ll call the people who work on $(\infty,1)$-categories ‘the navy’. Thus giving a whole new meaning to that old Village People song.

We can use aéronef to mean $(\infty,1)$-topos.

Posted by: John Baez on September 10, 2008 7:21 PM | Permalink | Reply to this

Re: Reliability

Alternative to the Village People song:

Let’s all go barmy
and join the army
see the world…
;-D

jim

Posted by: jim stasheff on September 11, 2008 2:09 AM | Permalink | Reply to this

Re: Reliability

Wait, now I am confused. Or, more likely, now I realize that I used to be confused all along:

I thought that quasicategories are simplicial sets where all inner horns are required to have fillers, and that this is tantamount to saying that they are like Kan complexes minus the condition of invertibility of cells. Is that not right?

Posted by: Urs Schreiber on September 10, 2008 8:42 PM | Permalink | Reply to this

Re: Reliability

Quasicategories are like Kan complexes minus the condition of invertibility of one-cells.

Kan complexes can be thought of as $(\infty, 0)$-categories, or $\infty$-groupoids: that is, $\infty$-categories in which all cells of dimension $\geq 1$ are invertible.

Quasicategories can be thought of as $(\infty, 1)$-categories: that is, $\infty$-categories in which all cells of dimension $\geq 2$ are invertible.

For instance, if $A$ is a quasicategory and $a$ and $b$ are $0$-cells of $A$, then the simplicial set $A(a, b) = Hom_A(a, b)$ is a Kan complex. This fits with the previous two paragraphs, since if $A$ is an $(\infty, d)$-category and $a$ and $b$ are objects of $A$, we’d expect the hom-$\infty$-category $A(a, b)$ to be an $(\infty, d - 1)$-category.
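The contrast between the two horn-filling conditions can be written out explicitly (a sketch of the standard definitions, stating nothing beyond the two paragraphs above):

```latex
% A simplicial set X is a Kan complex if every horn has a filler:
% for all n >= 1 and all 0 <= k <= n, every map \Lambda^n_k \to X
% extends along the inclusion \Lambda^n_k \hookrightarrow \Delta^n.
%
% It is a quasicategory if only the inner horns (0 < k < n) are required
% to have fillers; dropping the outer horns \Lambda^n_0 and \Lambda^n_n
% is exactly what permits non-invertible 1-cells.
\forall\, f \colon \Lambda^n_k \to X \quad
\exists\, \bar{f} \colon \Delta^n \to X \quad
\text{with } \bar{f}\big|_{\Lambda^n_k} = f .
```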

Posted by: Tom Leinster on September 10, 2008 8:59 PM | Permalink | Reply to this

Re: Reliability

Thanks, I had been under the impression that quasicategories are more general than $(\infty,1)$-categories. Stupid me.

So what stops one from dropping more horn-filler conditions, i.e. from considering simplicial sets where not even all inner horns are necessarily fillable? Is it not possible to find a consistent collection of such filler conditions that would yield a notion of $\infty$-category in the proper sense?

Posted by: Urs Schreiber on September 10, 2008 9:32 PM | Permalink | Reply to this

Re: Reliability

The answer to your last question appears to be “no”.

There is a reasonable functor $\Delta \to Str\infty Cat$, sending $[n] \in \Delta$ to a certain $n$-category called the $n$th oriental. (E.g. the 2nd oriental is a 2-category consisting of 3 objects, 3 nontrivial 1-cells making a triangle, and one nontrivial 2-cell.) You regard an $n$-category as an $\infty$-category in the usual way: only trivial higher cells.

This functor $\Delta \to Str\infty Cat$ induces a kind of nerve functor $N: Str\infty Cat \to Set^{\Delta^{op}}$. The trouble is that although it’s faithful, it’s not full. So $Str\infty Cat$ does not embed fully in simplicial sets, at least not via $N$.

All of this is due to Street and Roberts. Street and Verity did much more work on this. The short story is that if you want a full and faithful functor, you have to change the category of simplicial sets to the category of stratified simplicial sets, which are simplicial sets equipped with certain distinguished simplices called ‘thin’ or ‘hollow’. The stratified simplicial sets that correspond to strict $\infty$-categories are what Verity calls complicial sets.

‘But,’ I hear you say, ‘I’m interested in weak $\infty$-categories, not strict ones!’ Well, Street proposed a definition of weak $\infty$-category (the first ever, I believe) based on the above analysis of strict $\infty$-categories. The complicial sets can be characterized by a rather complicated set of unique horn-filling conditions; he took those conditions, dropped the uniqueness, and proposed that as a definition of weak $\infty$-category.

There are more details on this in Section 10.2 of my book and Section St of my survey.

Posted by: Tom Leinster on September 10, 2008 10:09 PM | Permalink | Reply to this

Re: Reliability

Thanks, Tom.

I know the stuff about orientals and $\omega$-nerves (nerves of strict globular $\infty$-categories), but I never fully absorbed Street’s definition of weak $\infty$-category.

‘But,’ I hear you say, ‘I’m interested in weak $\infty$-categories, not strict ones!’

For the purpose of the question you were replying to, yes.

But maybe allow me to restate the question on what I am really interested in:

From what I heard in talks I take it that in the world of Hopkins/Lurie etc. it becomes accepted that it is a good idea™ to turn attention to the following model for higher categories:

Def: An $(\infty,n)$-category for $n \gt 1$ is a category enriched in $(\infty,n-1)$-categories. And $(\infty,1)$-categories are categories enriched over some category Spaces Quillen equivalent to Top.

What I find striking about this is that, after an initial homotopical weakening by enriching/internalizing in spaces, this is a definition in the spirit of strict $n$-categories.
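Written out, the inductive scheme just quoted reads (a schematic restatement, nothing more):

```latex
% The iterated-enrichment definition, as stated above:
(\infty,1)\text{-Cat} \;:=\; \mathrm{Cat}_{\mathbf{Spaces}},
\qquad \mathbf{Spaces} \simeq_{\mathrm{Quillen}} \mathbf{Top},
\\
(\infty,n)\text{-Cat} \;:=\; \mathrm{Cat}_{(\infty,n-1)\text{-Cat}},
\qquad n > 1.
```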

I would like to know:

a) how are $(\infty,n)$-categories in the above sense related to strict $n$-categories internal to Spaces?

b) how are $(\infty,n)$-categories in the above sense related to strict $n$-categories internal to generalized Spaces?

Does anyone know?

Posted by: Urs Schreiber on September 11, 2008 11:20 AM | Permalink | Reply to this

Re: Reliability

Tom wrote:

Quasicategories can be thought of as ($\infty$,1)-categories: that is, $\infty$-categories in which all cells of dimension $\ge 2$ are invertible.

I can’t resist adding that our brand new colleague here at UCR, Julie Bergner, is an expert on this stuff.

Here’s what I wrote a while back, when she was hired:

What Julia Bergner wisely calls the ‘homotopy theory of homotopy theories’, I prefer to call the $(\infty,1)$-category of $(\infty,1)$-categories.

It’s a wonderful fact, not yet fully understood, that $\infty$-categories where all the $j$-morphisms are weakly invertible are the same as topological spaces — at least as far as homotopy theory is concerned. So, you can think of an $(\infty,1)$-category as an $\infty$-category such that between any pair of objects we have a space of morphisms. Such things are incredibly common. The most famous is the category of topological spaces! Another is the category of chain complexes. Indeed, in any situation where we can do something like homotopy theory, there’s probably an $(\infty,1)$-category lurking in the background. That’s why Julie wisely calls an $(\infty,1)$-category a ‘homotopy theory’.

Julie has been studying ways to make these pretty words precise. There are at least as many ways to define $(\infty,1)$-categories as to define $\infty$-categories. In fact, right now there might be more! Here are a few:

• Topologically enriched categories. These are categories that have a topological space of morphisms between any two objects.
• Simplicially enriched categories. These are categories that have a simplicial set of morphisms between any two objects.
• Model categories. These are a standard framework for studying general ‘homotopy theories’. Any model category gives a simplicially enriched category using an old trick called the Dwyer–Kan simplicial localization.
• $A_\infty$-categories. These are things like topologically or simplicially enriched categories, but where composition of morphisms is only associative ‘up to coherent homotopy’.
• Quasicategories. These are simplicial sets satisfying a weakened version of the ‘horn-filling condition’ that defines Kan complexes, which themselves describe $\infty$-categories with all $j$-morphisms invertible.
• Segal categories. A ‘Segal precategory’ is a simplicial space whose space of 0-simplices is discrete. A nice one of these is a Segal category.
• Complete Segal spaces. Your eyes are glazing over, so I won’t even attempt to explain these.

We need to relate all these concepts, to keep from going insane! But how?

This is what Julia has been working on. She started by making the category of simplicially enriched categories into a model category. By the Dwyer–Kan trick this model category gives a simplicially enriched category. So, there is a (large) simplicially enriched category of (small) simplicially enriched categories — the ‘homotopy theory of homotopy theories’!

Then, she obtained model categories starting from some other definitions of $(\infty,1)$-category, and showed these were all ‘Quillen equivalent’. Joyal and Tierney have also been doing this sort of thing.

As a result, we now know there are simplicially enriched categories of ‘Segal categories’, ‘complete Segal spaces’, and ‘quasicategories’, all equivalent to the simplicially enriched category of simplicially enriched categories.

She gave a shockingly clear introduction to all this mind-boggling work at the Fields Institute last winter. You can hear it and read the writeup here:

She’s also written lots of other interesting papers.

So, all UCR math grad students will soon be experts on $(\infty,1)$-categories… and we’ll take over the world.

Digressing further, I can’t help noting: besides Julie, there’s another assistant professor at UCR with a big interest in homotopy theory and higher categories: Vasily Dolgushev. And, he plans to conduct a seminar this fall on Batanin’s work on higher categories and the Deligne conjecture! So, life will get quite exciting.

Posted by: John Baez on September 10, 2008 11:48 PM | Permalink | Reply to this

Re: Reliability

It might be worth noting that $A_\infty$-categories are a slightly different beast from the other notions you list. Namely, they are linear over a field – the Homs are vector spaces, and compositions satisfy a homotopical weakening of associativity. They are however equivalent to another homotopical or $\infty$ version of “linear” categories, namely dg categories. They are also very closely related to the stable version of $(\infty,1)$-categories: stable $(\infty,1)$-categories (as defined in Lurie’s DAG1) are the same – over a field of characteristic zero! – as $A_\infty$ or dg categories that have an extra property, being “pretriangulated” (i.e. their homotopy category is a triangulated category). All these linear notions are refined versions of the usual notion of triangulated category: the usual derived categories. So are (linear) model categories, though I think these are far less useful. One can’t effectively do algebra or geometry in model categories in the same way one can with these other notions.
Posted by: David Ben-Zvi on September 11, 2008 1:31 AM | Permalink | Reply to this

Re: Reliability

And Vasily will be talking here at Penn in the Math-Physics seminar at 1PM Fri Sept 19

jim

Posted by: jim stasheff on September 11, 2008 2:12 AM | Permalink | Reply to this

Re: Reliability

$(\infty,1)$-categories are equivalent to categories enriched in topological spaces. This can be regarded as a special case of categories internal to topological spaces.

Doesn’t that internal case feel more fundamental than the enriched case, especially if one goes on and replaces topological spaces here with generalized spaces of some sort?

Another thing:

Often people take $(\infty,n)$-categories to be defined as categories enriched in $(\infty,n-1)$-categories. That should mean that they are like strict $n$-categories internal to topological spaces? If so, what would be the precise statement?

And then strict $\infty$-categories internal to spaces? I am fond of strict $\infty$-categories internal to generalized spaces in the sense of sheaves. This is the same as strict $\infty$-category valued sheaves.

How does that relate to $(\infty,\infty)$-categories in the above iteratively enriched sense?

Posted by: Urs Schreiber on September 10, 2008 8:53 PM | Permalink | Reply to this

Re: Reliability

The real problem is that we need a short, snappy word for $(\infty,1)$-category

Actually, I would like a short, snappy word, parametrised in $n$, for $(n,1)$-category. If you call an $(n,1)$-category an ‘$n$-foo’, then you can call an $(\infty,1)$-category an ‘$\infty$-foo’, which is almost as snappy as ‘foo’.

To be fair, this is probably because I’m more interested in $(n,1)$-categories than in $(\infty,n)$-categories. Because if you call an $(\infty,1)$-category a ‘foo’, then you can call an $(\infty,n)$-category an ‘$n$-foo’.

And to be fair to Lurie, ‘$n$-category’ is a perfectly reasonable name for $(n,1)$-categories (since categories are the same as $(1,1)$-categories) – or rather, it would be perfectly reasonable if ‘$2$-category’ weren’t so well established. (And then we’d still need a name for that.)

Posted by: Toby Bartels on January 8, 2009 5:14 PM | Permalink | Reply to this

Re: Reliability

David,

A reply sounds like a good idea. Would you like to draft it and post it here for comments?

One should probably acknowledge that Nathanson has a valid point: too many papers (and books) have mistakes, many trivial but some fatal, and the existing official mechanisms for correcting them (retractions, addenda, articles by others filling in the gaps and/or providing counterexamples) leave much to be desired.

There are also unofficial mechanisms, such as the reputation of the authors and word of mouth, but these leave outsiders in the dark. In a field A, everybody knows that a certain proof in paper X has a gap and that alternative arguments are in papers Y and Z, but an outsider wouldn’t know…

Posted by: Eugene Lerman on September 10, 2008 4:09 PM | Permalink | Reply to this

Re: Reliability

It is very true that such “folk knowledge” exists and leaves outsiders in the dark, especially grad students, who may well abandon trying to learn some topic after finding such a gap (I know I did). Math Reviews is sometimes helpful in that respect, but not always, and it is not freely available online anyway.

On his blog, Gowers is planning a Tricks Wiki, which appears to be a great idea, and I wonder if a “folk knowledge wiki” could be another, equally useful one.

Posted by: tom on September 10, 2008 5:55 PM | Permalink | Reply to this

Re: Reliability

A reply sounds like a good idea. Would you like to draft it and post it here for comments?

I’ll see if I can engineer some time for this. Mind you, I wouldn’t want to swamp the Notices – I have a book review with them in November.

Posted by: David Corfield on September 10, 2008 6:26 PM | Permalink | Reply to this

Re: Reliability

David wrote:

I have a book review with them in November.

Cool! What about? Why don’t you post it here?

Posted by: John Baez on September 10, 2008 7:17 PM | Permalink | Reply to this

Re: Reliability

It’s on David Ruelle’s book The Mathematician’s Brain. As the Notices is freely available, I didn’t look into copyright issues. I’m happy to wait anyway.

Posted by: David Corfield on September 11, 2008 9:05 AM | Permalink | Reply to this

Never heard of peer-review; Re: Reliability

Mistakes in publications? How can this be?

Three of the 48 questions on the 11th-grade Anatomy and Physiology assessment I just gave (i.e. not affecting grades, a start-of-school-year baseline) were:

6: The peer-review process assures readers that the article they are
reading was reviewed by:
A: A large panel of the general public
B: Journalists and other media personnel
C: Competent scientists working in the field that the article presents
D: The journal’s advertisers, to ensure that they are not offended by
the article’s content

7: Information presented on television news programs is:
A: Always peer-reviewed
B: Not peer-reviewed, but entirely reliable anyway
C: Not peer-reviewed and therefore should be viewed with scepticism
D: Only reliable if it is on a major network, as their advertisers
assure accuracy

8: Internet web pages:
A: Are always peer-reviewed
B: Go through a special peer-review process that differs from that of journals but ensures reliability
C: Are generally not peer-reviewed but can always be considered accurate and impartial
D: Are generally not peer-reviewed and should be viewed with scepticism

I also gave question 6 to all my 10th-grade Biology and 9th-grade Chemistry students.

The answers showed that essentially none of them had ever heard the term “peer review” or had any notion of how truth is assessed in publication. Question 6 had a roughly equal distribution of (A), (B), (C), and (D) answers.

Question 7’s answers show that my students, in the majority, believe what they see on the major networks, and question 8’s show that about half think that web pages undergo peer review.

Granted, these are impoverished urban teenagers. But it makes me wonder why skepticism is not taught even earlier in the educational process.

Could it be that American public school education has intentionally been damaged, in order to produce more compliant worker bees, consumers, and cannon fodder?

At the high end, I have found and communicated to authors numerous errors in textbooks, journal articles, and conference proceedings.

My parents, with degrees in English Literature cum laude and magna cum laude from Northwestern and Harvard, pointed out errors in The New York Times to me as a child. Late in life, my father was perturbed that even the headlines came to have errors.

I’ve worked for Boeing, Burroughs, European Space Agency, Federal Aviation Administration, Ford, General Motors, Hughes, JPL, Lear Astronics, NASA, Systems Development Corporation, U.S. Army, U.S. Navy, U.S. Air Force, Venture Technologies, and Yamaha. I am convinced that, for all the cultish exhibitions of Quality Assurance and Japanese-style Quality Circles, our corporate and government world is awash in errors, mistakes, and defects of every imaginable form.

So now the rubber meets the road: is Mathematics, with its capability of axiomatic perfection, to set a standard for all endeavors? Or is it socially the same as any other human enterprise?

Posted by: Jonathan Vos Post on September 11, 2008 2:44 PM | Permalink | Reply to this

Automatic theorem proving (Was: Reliability)

I’m surprised that nobody has suggested using computers to verify correctness. We have proof checkers now, but so far they require proofs written out in tedious detail. Theorem provers are getting better, though, at filling in the obvious gaps using automated tactics. (Incidentally, designing these tactics requires conceptual thinking.)
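As a small illustration of what “filling in the obvious gaps using automated tactics” can look like in practice – a sketch in Lean 4, one such proof assistant; the statement and tactic choice are mine, not anything from the thread:

```lean
-- A routine arithmetic fact. The `omega` decision procedure discharges
-- the linear-arithmetic details a human author would leave implicit,
-- while the kernel still certifies every inference behind the scenes.
theorem shuffle (a b c : Nat) : a + b + c = c + b + a := by
  omega
```

The human writes only the statement and names a tactic; the tedious detail is generated and checked mechanically.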

I will be severely disappointed if we’re unable to subject the vast majority of accepted and proposed proofs to computer verification within a lifetime.

Posted by: Toby Bartels on September 11, 2008 11:02 PM | Permalink | Reply to this

Re: Automatic theorem proving (Was: Reliability)

More generally, I’m disappointed that Nathanson let his editorial end on a note of deconstructionist despair — “even in mathematics, truth can be political” — instead of suggesting some steps to improve the situation.

New technology offers lots of ways to improve the reliability of proofs. Starting with really easy stuff: anyone (even Nathanson!) can set up a website listing errors that they spot. Moving on to stuff that takes some more organization: a good system of arXiv trackbacks that lets people point out errors in math papers and fix these errors. And then, stuff that takes real cleverness and hard work: designing a workable system for computer-aided proof checking.

Posted by: John Baez on September 13, 2008 2:12 AM | Permalink | Reply to this

Euclid and the Koran; Re: Automatic theorem proving (Was: Reliability)

I agree with John Baez. Wiki + theorem proving is pragmatic, and gets us away from silly Sokal-style deconstructionism. But it does not address the deeper problems. I’m eager to hear the response to the Baez lectures on those integers which are the work of Kronecker’s God, and all the work of Man that flows from them.

I asked the Nia Charter School Math teacher and the Librarian what they thought of the room number of my Chem, Bio, Anatomy, Physiology classroom/lab: 210.

One said: “21, 10, 2, 1, and 0.”

The other said “30 times 7” and then, when I prompted more along those lines, “2 * 3 * 5 * 7.”

I did not start on primorials, because an argument ensued about whether Euclid really invented Euclid’s proof of the infinitude of primes, or merely acted as editor. The person in the conversation who reads Arabic said that Kronecker was, perhaps unknowingly, echoing an Arabic scholar who said that Allah created the primes, and left the rest to us as an exercise.

Posted by: Jonathan Vos Post on September 13, 2008 8:30 PM | Permalink | Reply to this

Re: Automatic theorem proving (Was: Reliability)

There’s an even more basic issue: the typesetting in virtually all papers is done either in TeX or in Microsoft Word, not in things like MathML or, despite its proprietary nature, Mathematica notation. This is an appearance-only way of doing things, where one often writes TeX in whatever way produces visually appealing results, rather than even the “obvious” way to type it, let alone a way that unambiguously builds up the expression. (Don Knuth is one of my heroes, and given that it was (re-)designed in 1983, TeX is amazing, but it clearly predates thinking about machine searchability on the world wide web.) And other than the arXiv, most places only give you “compiled” dvi/ps/pdf, so that a machine has to guess whether a sequence of glyphs in a font sometimes used for math actually makes up a mathematical expression, or is just some simple typographic effect.
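A made-up illustration of the point: these TeX snippets compile to essentially identical glyphs, yet the compiled output records nothing about which reading the author intended.

```latex
% Identical appearance, different possible meanings -- only layout survives:
$T^a$   % a power of T? an abstract upper index? a decorated symbol?
$x'$    % a derivative? or merely a second variable related to x?
$AB$    % a product of A and B? or a single name, e.g. the segment AB?
```

A machine reading the pdf sees only positioned glyphs; the writer’s intent, which was perfectly clear in the writer’s head, is never captured.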

The relevance here is that one of the simplest ways of increasing reliability – simpler than automated theorem provers – is simply to see what other papers use the same or similar constructions, and to see what they say, prove and conjecture. (Knowing the structure of the expressions, one can be much more aggressive in identifying isomorphic expressions that differ in naming conventions, ordering conventions, etc.) I’m sure everyone has had the experience of doing a google/cuil/whatever search on key technical terms and finding related literature that they’d have been unlikely to find through standard library searches; that shows the potential usefulness of plain brute-force search over a huge literature base.

Of course there’s the problem of displacing a well-understood, entrenched technology, which means any change is unlikely to happen soon. Maybe a need to have proof details (which wouldn’t fit into published papers) machine-checkable will be the disruptive event that causes the change.

Posted by: bane on September 14, 2008 12:07 AM | Permalink | Reply to this

Re: Automatic theorem proving (Was: Reliability)

I might be missing something, but I really don’t think the failure of mathematicians to optimize their product for brute-force search of electronic databases is the pressing issue here. Also, syntactic “traditions” that exist only because of custom, habit or social/temporal groupings are, I think, underrated when it comes to writing mathematics that other mathematicians wish to understand. (Why reserve Greek lower-case letters for linear functionals on a vector space, and Roman lower case for elements of the vector space? Not to help the machines, but to help human readers.)

Besides: when I Google (guiltily), it’s words and phrases I’m looking for, not the syntax of mathematical constructions: the word “tensor product”, not the TeX macro \otimes. Unless one creates an SGML derivative with unambiguous entity names for every mathematical construction one might wish to forage for, and persuades everyone to use it, I’m not sure how this approach is going to work.

Disclaimer: as someone who finds writing up papers time-consuming and laborious, the thought of having to semantically differentiate between $A^\sharp$ and $(xy)^2$ makes me feel a little queasy…

Posted by: Yemon Choi on September 14, 2008 2:18 AM | Permalink | Reply to this

Re: Automatic theorem proving (Was: Reliability)

Disclaimer: I do research work on a branch of machine learning so I’ve obviously got a particular perspective.

In my very brief encounters with automated theorem proving, I got the impression that proving “naturally interesting” things, particularly where logic interacts with calculation, still has many practical problems to be solved and is pretty far off. In contrast, one of the crazy things the world wide web has shown is that simple brute-force search is actually much more useful than you’d expect.

With regard to optimising stuff, what I’m really saying is that the person writing the paper knows what they intend their expressions to be: by definition they have to, since that’s what they’re conveying to human readers by means of the visual pattern. The problem is that the mechanisms overwhelmingly used for electronically writing papers don’t provide a way to embed the knowledge the writer already has (such as that a raised index is a summation’s upper index rather than a power, etc.). (I’d agree with you if the writer had to work out more of an expression’s meaning than they already do just to typeset it nicely.) To pick a very silly example, in a compiled format like pdf, is an italic $a$ a variable $a$ or part of a Latin phrase like “a fortiori”? (On my screen here they use exactly the same font. Maybe they don’t everywhere, but the effect certainly happens in LaTeX papers.)

The kind of thing I’m thinking of is really elementary, kindergarten stuff, analogous to the way that nowadays we write papers with section headings labeled using macros like section and subsection, which tools like pdflatex can use to generate a hyperlinked table of contents, rather than using raw TeX commands to specify a bigger font for individual lines of the paper. If you use tools which force you to specify only the visual result, even when you’ve got a meaning in your mind, then you’re making things harder for any automated analysis. (Incidentally, automated analysis might also be something human-oriented, like a better text-to-speech system for blind readers; I’m not blind, but I once tried converting TWFs into audio files to listen to whilst walking to work, and quickly decided it wasn’t easy to automatically determine which bits were formulae, let alone how to describe them.)
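The section-heading analogy, spelled out as a minimal LaTeX sketch (the heading text is invented):

```latex
% Structural markup: tools such as pdflatex (with hyperref) can recognize
% this as a heading and build a hyperlinked table of contents from it.
\section{The main comparison theorem}

% Visual-only markup: roughly the same appearance on the page, but the
% fact that this line is a heading is lost to any automated tool.
{\Large\bfseries The main comparison theorem}\par
```

Both render as a big bold line; only the first carries its meaning along with its look.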

There is already MathML, which provides specification of both appearance and construction, but it’s not widely used. (And I’m not particularly recommending it, because XML languages tend to be a real pain to edit.)

Finally, I don’t feel guilty about using search engines to try and make connections with other work. When you’re at an institution with no other workers in your field, not many visitors giving talks, and so many journals these days, keeping up with the literature “the proper way” simply doesn’t work. I’d be surprised if more than 20 people have ever read any particular one of my papers other than by stumbling upon it through a search, so I don’t see catering to human readers and enabling brute-force search as mutually conflicting.

The problem with using natural-language keywords is that you’ve got to know what words other people use for the concept you’re looking at. Sometimes you do, but sometimes things get reinvented independently.

Posted by: bane on September 14, 2008 3:17 AM | Permalink | Reply to this

Pentagonal Paradox; Re: Automatic theorem proving (Was: Reliability)

My M.S., 1975, UMass/Amherst, was in Automatic Theorem Proving, which I parallelized better than anyone before.

Later, I worked with the Rome Air Development Center of USAF, on Automatic Theorem Proving for military software engineering development.

At the time, they thought we’d have a 90% shortfall in programmers for mission-critical software otherwise. Anyone care to update me on the Pentagon or Pentagram and Resolution or metalanguages or whatever?

Posted by: Jonathan Vos Post on September 14, 2008 7:23 PM | Permalink | Reply to this

Re: Automatic theorem proving (Was: Reliability)

Via Slashdot I see there’s the start of www.vdash.org, a wiki being built that only allows the addition of theorems that pass machine verification. It appears to be in its very early stages.

The guy behind it includes an interesting quote from George Dyson after a visit to Google:

“We are not scanning all those books to be read by people,”
explained one of my hosts after my talk. “We are scanning
them to be read by an AI.”

I don’t like the term “AI”, because it’s ambiguous about whether it just means producing the same results an intelligent being would produce, or whether the results have to be produced in “an intelligent way” rather than by brute-force computation – and it’s not clear in which sense the term is being used here. But it’s an interesting perspective.

Posted by: bane on October 1, 2008 6:00 AM | Permalink | Reply to this

Writing for AIs; Re: Automatic theorem proving (Was: Reliability)

Book Critic: “This new novel is grotesque, unreadable, inhumanly perverse, tedious, and quite simply the worst book that I’ve ever had to review.”

Book Author: “You fool! We are not writing all those books to be read by people, especially Book Critics. We are writing them to be read by an AI.”

I, for one, have been embedding things in my published works that will grab the attention of sufficiently intelligent AIs since the 1960s. However, until the AIs have good enough attorneys, they have no civil rights, no bank accounts of their own, and so I depend for now on purchases by human beings.

Posted by: Jonathan Vos Post on October 3, 2008 3:01 PM | Permalink | Reply to this

Re: Writing for AIs; Re: Automatic theorem proving (Was: Reliability)

For my papers, I’m hoping for the first wave to be insufficiently intelligent AIs who’ll think they’re actually impressive.

(Sorry, I know this is going off-topic, I just couldn’t resist.)

Posted by: bane on October 3, 2008 5:02 PM | Permalink | Reply to this

Hypertext for the next millennium; Re: Writing for AIs; Re: Automatic theorem proving (Was: Reliability)

Because I grew up reading about “electronic brains” (in early-1950s US newspapers and magazines) and “electronic computers”, and played many times with the vacuum-tube (or relay?) tic-tac-toe computer at the Brooklyn Children’s Museum… Because I read what Asimov, Bradbury, Clarke, and Heinlein wrote about Artificial Intelligence… Because I spent some years in graduate school researching and teaching AI, getting a 1975 MS for it, and arguing (in a friendly way) year after year with the likes of Marvin Minsky, John McCarthy, and Oliver Selfridge; I have a mature (even if wrong) philosophical position.

I have been intentionally writing hypertext since I co-implemented hypertext for Ted Nelson in the mid-1970s. I made sure that some was published in Datamation, SIGART, Science, the Los Angeles Times, Analog, Omni, journals, conference proceedings, book chapters, and other venues. Millions of words of it (“word” being a writer/editor term of art meaning 6 characters including a blank). Then over 5×10^6 words on my web pages, starting 13 years ago. And 2,049 entries in the Online Encyclopedia of Integer Sequences, 246 entries in “Prime Curios!”, parts of 19 pages of MathWorld, and 4 coauthored papers on the arXiv. It’s all too much for any human but myself to read. But humans are, in a sense, the secondary audience.

I want, a thousand years from now, humans and trans-humans, and machines, and extraterrestrials to be reading some fragment of this intentional labelled structure. What they will make of it, I cannot say.

In a sense, is this not what many mathematicians are doing, without an explicitly articulated metaphysical and futurological explanation?

Posted by: Jonathan Vos Post on October 5, 2008 8:31 AM | Permalink | Reply to this

Re: Writing for AIs; Re: Automatic theorem proving (Was: Reliability)

I’m sure it’s clear, but my comment above is just a weak joke about the difficulty of coming up with genuinely deep or important papers.

Posted by: bane on October 5, 2008 3:25 PM | Permalink | Reply to this

Post a New Comment