

February 2, 2009

Last Person Standing

Posted by David Corfield

Tim Gowers is engaged in a new venture in open-source mathematics. As one might expect from a leading representative of the ‘problem-solving’ culture, Gowers has proposed a blog-based group problem-solving challenge.

He motivates his choice of problem thus:

Does the problem split naturally into subtasks? That is, is it parallelizable? I’m actually not completely sure that that’s what I’m aiming for. A massively parallelizable project would be something more like the classification of finite simple groups, where one or two people directed the project and parcelled out lots of different tasks to lots of different people, who go off and work individually. But I’m interested in the question of whether it is possible for lots of people to solve one single problem rather than lots of people to solve one problem each.

Coincidentally, Alexandre Borovik, my colleague on the A Dialogue on Infinity project, came to Kent on Wednesday and spoke about the classification of finite simple groups in his talk ‘Philosophy of Mathematics as Seen by a Mathematician’. Alexandre expressed his fears that parts of mathematics may be becoming too complicated for us. He considered the classification of finite simple groups, whose proof is spread over 15,000 pages. There is a project afoot to reduce this to a ‘mere’ 12 volumes of around 250 pages each, so 3,000 pages. But these will be extremely dense pages, not at all the kind of thing people will find it easy to learn from.

So what happens when the generation of mathematicians involved in the classification project retires? Apparently, Inna Capdeboscq is the youngest of the few people who understand things thoroughly enough to repair any discovered breaches in the proof. Alexandre estimated that in twenty years she will be the last such non-retired mathematician in the world. He doubts any youngsters will take on the baton.

Now is this a (belated) bout of turn-of-the-century pessimism, or is there some intrinsic complexity in the classification which cannot be compressed? Do we have examples of long proofs from earlier times which have been enormously simplified, not just through redefinition?

On a positive note, Alexandre mentioned something we’ve discussed before at the Café, that a radical rethink of this area may be possible. Recall the suggestion that some of the sporadic simple groups are flukes, and should rather be seen as belonging to a larger family, some of whose members ‘happen’ to be groups.

Monstrous moonshine is associated with many of the sporadics, and there is evidence for the participation of some of those thought to be outside its scope – the pariahs. All this ties in with a fantastic amount of mathematics, so just possibly new ideas will emerge to allow for the classification of the larger family.

However, I had the impression from Alexandre that none of this will help enormously simplify the original classification. So I wonder what would be the effect on future mathematicians using the result if there was nobody left alive who understood it fully. Would it be used with misgiving?

Posted at February 2, 2009 9:46 AM UTC



Re: Last Person Standing

I was extremely intrigued by Freek Wiedijk’s article on the computerization of formal proof-checking in the December 2008 issue of the Notices, in which he suggests that

In a few decades it will no longer take one week to formalize a page from an undergraduate textbook. Then that time will have dropped to a few hours. Also then the formalization will be quite close to what one finds in such a textbook.

When this happens we will see a quantum leap, and suddenly all mathematicians will start using formalization for their proofs. When the part of refereeing a mathematical article that consists of checking its correctness takes more time than formalizing the contents of the paper would take, referees will insist on getting a formalized version before they want to look at a paper.

Perhaps this may provide an answer? Certainly in the particular case of finite simple groups, the timing seems unlikely; it would probably require some youngsters to take on the massive project of understanding and formalizing the proof even once formalization becomes tractable. But at least in principle, having a long proof completely formalized could enable us to continue using it with confidence even after no living human fully understands it any more.

Posted by: Mike Shulman on February 2, 2009 6:25 PM

Re: Last Person Standing

… even after no living human fully understands it any more.

This leads into Doron Zeilberger's opinion: that any mathematics simple enough to be understood by mere humans will be an utterly trivial part of the mathematics understood by computers in less than a century.

http://www.math.rutgers.edu/~zeilberg/Opinion36.html

Although I don't know Zeilberger's opinion of the classification of finite simple groups, I suppose he might consider it the far edge of what humans can grasp and be only too glad that we have already realised that it's not worth our time to keep it fully understood by humans.

Posted by: Toby Bartels on February 6, 2009 9:49 PM

Re: Last Person Standing

My worry with the classification is not so much the reliability of the proof, since any errors are likely not to mess up anything too significant. (Which is to say, if we’re missing a simple group we’re probably only missing one, and it probably has all the same properties as the others. Fixing any results that depend on the classification won’t require understanding the proof so much as understanding the new example.)

What worries me is more sociological. How did this project virtually kill off finite group theory as a topic for young researchers, and how can other fields avoid this fate? A research program that 60 years later finds itself with no young researchers who understand it is very sad. Mathematicians who did so much great work deserve the immortality of having their work live on, and in this case it looks like they may not.

Posted by: Noah Snyder on February 2, 2009 7:02 PM

Re: Last Person Standing

I think a corollary to the sociological problem that Noah mentioned is the potential loss of interesting ideas and thought processes that are captured in this body of work. I’m speculating wildly here, but some of these may be useful in broader stretches of mathematics, and without experts to navigate the literature, they become much less available. It reminds me of a theory I had heard about the lack of mathematical progress in the Roman empire - many people had access to the literature, but there was no research community to help explicate it.

Posted by: Scott Carnahan on February 2, 2009 10:55 PM

Re: Last Person Standing

While we’re talking about finite groups, anyone know a readable exposition of the odd order theorem? I tried looking at the original paper once and it was very intimidating.

Posted by: Noah Snyder on February 2, 2009 7:06 PM

Re: Last Person Standing

Some Wiki articles are well-informed. This article traces group theory papers from 17 pages, to 255 pages, to over 1000 pages.


http://en.wikipedia.org/wiki/Feit-Thompson_theorem

“The simplified proof has been published in two books: (Bender and Glauberman 1995) and (Peterfalvi 2000). This simplified proof is still very hard, and is about the same length as the original proof (but is written in a more leisurely style).

(Gonthier et al. 2006) have begun a long-term project to produce a computer verified formal proof of the theorem… It takes a professional group theorist about a year of hard work to understand the proof completely…”

Posted by: Stephen Harris on February 2, 2009 9:14 PM

Re: Last Person Standing

Are there any signs of conceptual connections to other branches?

Does the local analysis of the group theorists have anything to do with other forms of localization in mathematics?

Is there anything natural about quasithin-ness?

It is always possible that Atiyah was right:

So I don’t think it makes much difference to mathematics to know that there are different kinds of simple groups or not. It is a nice intellectual endpoint, but I don’t think it has any fundamental importance.

Though he did later say:

FINITE GROUPS. This brings us to finite groups, and that reminds me: the classification of finite simple groups is something where I have to make an admission. Some years ago I was interviewed, when the finite simple group story was just about finished, and I was asked what I thought about it. I was rash enough to say I did not think it was so important. My reason was that the classification of finite simple groups told us that most simple groups were the ones we knew, and there was a list of a few exceptions. In some sense that closed the field, it did not open things up. When things get closed down instead of getting opened up, I do not get so excited, but of course a lot of my friends who work in this area were very, very cross. I had to wear a sort of bulletproof vest after that!

There is one saving grace. I did actually make the point that in the list of the so-called “sporadic groups”, the biggest was given the name of the “Monster”. I think the discovery of this Monster alone is the most exciting output of the classification. It turns out that the Monster is an extremely interesting animal and it is still being understood now. It has unexpected connections with large parts of other parts of mathematics, with elliptic modular functions, and even with theoretical physics and quantum field theory. This was an interesting by-product of the classification. Classifications by themselves, as I say, close the door; but the Monster opened up a door.

Posted by: David Corfield on February 3, 2009 9:34 AM

Re: Last Person Standing

If young mathematicians aren’t learning the proof techniques behind the classification of finite simple groups, maybe it’s not because these techniques are too hard. Maybe it’s because they don’t see interesting new results that can be proved using these techniques.

Wiles’ proof of Fermat’s Last Theorem was hard, and so was Perelman’s proof of the Poincaré conjecture — but those didn’t scare off the youngsters. In the first case, there were bigger problems sitting nearby, still left to tackle: for starters, the Taniyama–Shimura–Weil conjecture, now called the Modularity Theorem because it’s been proved by a group of mathematicians including Wiles’ student Richard Taylor, who helped fill a hole in Wiles’ proof. In the second case, Perelman didn’t fill in the details of an important step, his ‘Theorem 7.4’. Three groups jumped in to give different proofs of this, completing his proof not just of the Poincaré conjecture but of Thurston’s Geometrization Conjecture.

I don’t understand the remaining open problems to which the ideas developed by Wiles and Perelman apply. But I’m pretty sure they exist! For example, see this Ricci flow conference held in Paris last summer, or this workshop on modularity held at MSRI in the fall of 2006.

So what about finite simple groups? As David notes, there are big open problems sitting next to the classification of finite simple groups: we need to more deeply understand Monstrous Moonshine… and Moonshine Beyond the Monster. Let’s hope that these problems eventually force people to revisit the classification theorem, and either find a simpler proof, or find ways to use the existing proof techniques to do new and interesting things.

Posted by: John Baez on February 4, 2009 2:38 AM

Re: Last Person Standing

Is there evidence that the sporadics and non-sporadics are very different beasts? Moonshine would suggest so, unless there’s an equivalent of moonshine for non-sporadics, perhaps known but not thought of as such.

Then there’s the odd property of the large Schur multiplier of PSL(3, 4), and the thought this is an indication that something sporadic in nature happened to fall into a non-sporadic family.

Hmm, so does PSL(3, 4) have moonshine?

Posted by: David Corfield on February 4, 2009 9:49 AM

Aschbacher,arsmathematica, Corfield, Gelfand, Borcherds, Zeilberger, Peano, Chaitin, Mauchly, and hypertext, Re: Last Person Standing

In Fall 2006 I discussed the following with MICHAEL ASCHBACHER
(Shaler Arthur Hanisch Professor of Mathematics, Caltech; B.S., California Institute of Technology, 1966; Ph.D., University of Wisconsin, 1969):

==============

What Kind of Thing is a Sporadic Simple Group?
September 24th, 2006 by Walt

David Corfield discusses some speculation originally from Israel Gelfand:

Sporadic simple groups are not groups, they are objects from a still unknown infinite family, some number of which happened to be groups, just by chance.


(In David’s terminology, that means that sporadic finite simple groups are not a natural kind.)

I used to believe this very same thing, so I find it interesting that others have speculated the same thing. A couple of years ago, though, I came across a remark by Michael Aschbacher that made me rethink my view: the classification of finite simple groups is primarily an asymptotic result. Every sufficiently large finite simple group is either cyclic, alternating, or a group of Lie type.

Results that are true only for large enough parameter values are common enough that the existence of small-value counterexamples does not require special explanation. For example, the classification of simple modular Lie algebras looks completely different over small characteristics than it does over large characteristics. The best known results for number-theoretic problems such as Waring’s problem and Goldbach’s conjecture are asymptotic. Small numbers are just bad news.

==============

Later, richard borcherds Says:
May 30th, 2007 at 1:32 pm

Computers might be able to do real math eventually, but they still have a very long way to go. They are really good at certain restricted problems, such as running algorithms to evaluate special classes of sums and integrals (as in Zeilberger’s work) or checking lots of cases (as in the 4 color theorem or the Kepler conjecture) or even searching for proofs in very restricted first order theories, but none of these problems come anywhere near finding serious mathematical proofs in interesting theories such as Peano arithmetic.

Rather than find proofs by themselves, computers might be quite good at finding formal proofs with human assistance, with a human guiding the direction of the proof (which computers are bad at), and the computer filling in tiresome routine details (which humans are bad at). This would be useful for something like the classification of finite simple groups, where the proof is so long that humans cannot reliably check it.

==============

In response to that, I spoke again with Prof. Aschbacher.

Jonathan Vos Post Says:
May 30th, 2007 at 2:43 pm

Yes. I have discussed this recently with Michael Aschbacher, at Caltech where I got my Math degree, he being author of “The Status of the Classification of the Finite Simple Groups”, Notices of the American Mathematical Society, August 2004. They’ve apparently (I have to take their word for it) filled the gaps of the proof initiated so long ago, and when John H. Conway was on the team (I’d spoken to him then).

Coincidentally, I was at a Math tea at Caltech yesterday, joking about the 26 sporadic groups being a “kabbalistic coincidence” — or perhaps examples of some incompletely glimpsed set of more complicated mathematical objects which are forced to be Groups for reasons not yet clear. Some people deny that there are “coincidences” in Mathematics.

Gregory Chaitin insists that Mathematics is filled with coincidences, so that most truths are true for no reason. That sounds like the beginning of a human-guided deep theorem-proving project to me. Humans supply gestalt intuition that we don’t know how to axiomatize or algorithmize. Humans did not stop playing Chess when Deep Blue became able to beat a world champion. The computer is crunching fast; the human looks deep. The human has the edge in Go, which takes deeper search.

So, as I say, “yes.” I agree with you that we should each (human and machine) be doing what we’re best at, together. After all, that’s what the right-brain / left-brain hemisphere architecture does. When John Mauchly (and J. Presper Eckert) built the BINAC under top security for the USAF, delivered 1949, it was the first dual processor. Mauchly told me that the brain hemisphere structure had evolved, and was probably good for more than we knew.

He and I were introduced by Ted Nelson, father of Hypertext, in 1973, while I was developing the first hypertext for PC’s (before Apple, IBM, and Tandy made PCs). We demoed our system at the world’s first personal computer conference in Philadelphia, 1976. So the human-computer teamwork is something I’ve been working in for 40 years. Do you suspect that a human/computer partnership (Including you, of course) will get to the bottom of quantum field theory?

==============

Morally, all the towers of concepts in this thread are related, and experts have opined on how some combinations of future people and future computers may accomplish the true purpose of Mathematics: insight.

Until then, we are a school of multitentacled invertebrates in the ocean of theorems, blinded by a black cloud of our own ink, speculating on the ocean in which we are immersed.

Posted by: Jonathan Vos Post on February 7, 2009 12:18 PM

I’m confused

Jonathan: I found your comment confusing.

In Fall 2006 I discussed the following with MICHAEL ASCHBACHER (…):

This is followed by a quotation from a post by Walt, the author of Ars Mathematica. (I’m not sure why you link to the blog but not to the particular blog post.) So I reckon this is the thing you discussed with Aschbacher. What did he say?

Later in your comment, you say

In response to that, I spoke again with Prof. Aschbacher.

Jonathan Vos Post Says:

May 30th, 2007 at 2:43 pm

Yes. I have discussed this recently with Michael Aschbacher, at Caltech where I got my Math degree, he being author of “The Status of the Classification of the Finite Simple Groups”, Notices of the American Mathematical Society, August 2004. They’ve apparently (I have to take their word for it) filled the gaps of the proof initiated so long ago, and when John H. Conway was on the team (I’d spoken to him then).

[snip]

This looks like it might have been taken from an email to someone other than Michael Aschbacher, or a comment on another blog about classification of finite simple groups – it’s hard to tell. In any case, I don’t see anything from Aschbacher himself, or from Conway for that matter; it would have been interesting to hear what these experts might have to say on the topic being discussed in Last Person Standing.

What I do see is (1) a kabbalistic joke, (2) a general thought of Chaitin’s, (3) Vos Post responding “yes” to something, I can’t tell precisely to what or to whom, (4) a remark by Mauchley from a private conversation, and, finally, (5) the question “Do you suspect that a human/computer partnership (Including you, of course) will get to the bottom of quantum field theory?” apparently addressed to someone, perhaps a blog author or email recipient, but I don’t know who it’s supposed to be.

I don’t mind the philosophical ruminations as such, but what did Aschbacher actually say on either of the two occasions you mentioned?

Posted by: Todd Trimble on February 7, 2009 2:01 PM

I was half asleep, sorry; Re: I’m confused

Todd Trimble:

I guess that I proved the lemma that when your dog wakes you up at 4:00 a.m., after 4 hours sleep, you should not try cutting and pasting from old emails cut and pasted from blogs.

Aschbacher never emailed me. I told him of David Corfield’s remarks on Israel Gelfand, face-to-face. Or maybe gave him a printout. Aschbacher agreed with Gelfand’s speculation as possible, and added that there may be other kinds of Simple Groups that we’re just not smart enough to have conceived.

Then there was a blog comment to Borcherds, which I omitted, and his emailed reply. I’d sent him:

About
May 4th, 2007
http://borcherds.wordpress.com/about/#comment-36

I am Richard Borcherds, a mathematician, currently trying to figure out what quantum field theory is.
[truncated]

Laurens Gunnarsen Says:
May 29th, 2007 at 5:36 pm

May I ask what you think of so-called “experimental mathematics?” In particular, do you agree with Doron Zeilberger (c.f. the current issue of the MAA journal, FOCUS) that we should expect and welcome the advent of software that will relieve us of the “burden” of proving things?

Oh, and once we’re relieved of this “burden,” will we really have very much left to do?

[truncated]

Jonathan Vos Post Says:
May 30th, 2007 at 2:53 am

Doron Zeilberger has provided work which is delightful and deep. But my background [takes off math hat briefly] as a Science Fiction author makes me see the human-machine future in a more nuanced way.

I prefer the definition and examples of Experimental Mathematics by Jonathan Borwein et al. What I see is lovely, creative, and not sausage-making. It is, analogously to the space program, or a symphony orchestra, good teamwork between humans and machines.

See the definitional material, examples, and editorial board of:

Experimental Mathematics

I do not see computers through a Terminator lens. I prefer Utopia to Dystopia. Software, I hope, will not leave us “With Folded Hands.” [a reference to the famous Jack Williamson story of human incentives destroyed by over-helpful robots]

Then the Gregory Chaitin remark came from a long conversation we’d had at the International Conference on Complex Systems, which was more about Leibniz’s way of telling if one is in a “lawful universe” by counting the number of natural laws.

Again, morally, everything in this n-Category Thread and everything that I mentioned are braided together. But I did a bad job of indicating the connections.

I saw Aschbacher again last week, but we mostly talked about 100 people having just been laid off at Caltech, whether there would be a genuine hiring freeze for faculty and post-docs, and how little fiction authors are usually paid. And the ex-Combinatorialist in Obama’s administration.

Posted by: Jonathan Vos Post on February 7, 2009 4:48 PM

Re: Last Person Standing

Aschbacher’s comment reminds me of a question I’ve been wondering about for a few years, ever since I started teaching group theory: How much shorter is the proof that there are only finitely many sporadic groups than the proof of the full classification theorem?

Posted by: James on February 8, 2009 7:39 AM

Re: Last Person Standing

I asked a group theorist. His reply: “Not shorter at all. We would like to be able to say much shorter but there is no way with present methods.”

Posted by: James on February 10, 2009 9:36 PM

Re: Aschbacher,arsmathematica, Corfield, Gelfand, Borcherds, Zeilberger, Peano, Chaitin, Mauchly, and hypertext, Re: Last Person Standing

Regarding coincidence in mathematics, I believe in that. Zeilberger, again, has a good opinion addressing it.

I would define a ‘coincidence’ as a situation that has no simpler explanation than the bare facts (although our intuition may look for one). Thus coincidences can occur in objective circumstances like mathematics. (Ironically, I came to this definition reading the rantings of a paranoid schizophrenic explaining why nothing is a coincidence.)

Posted by: Toby Bartels on February 12, 2009 12:01 AM

contingent beauty

I’d like to hear Zeilberger distinguish between ‘contingent beauty’ and ‘contingent ugliness’. I assume he must have the latter concept. Surely there are ‘brute facts’ which are not pretty, do not fit into any pattern, and do not have an interesting explanation.

Mind you, his threshold is quite low. Is it really a beautiful fact that in the decimal expansion of e, digits 3-6 are repeated in digits 7-10?

e = 2.718281828459…

If we had evolved with 8 fingers, there’d be very little chance we’d count it as beautiful.
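For anyone who wants to check the indexing, here is a minimal Python sketch (treating the leading 2 as digit 1 is my reading of the claim; the digits are hard-coded from a standard reference):

e_digits = "2718281828459045235360287"  # first digits of e, decimal point dropped
# Counting the leading 2 as digit 1, compare digits 3-6 with digits 7-10.
print(e_digits[2:6], e_digits[6:10])    # both print 1828
assert e_digits[2:6] == e_digits[6:10]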

Posted by: David Corfield on February 13, 2009 8:39 AM

Re: contingent beauty

I’m not sure that a ‘contingent ugliness’ is implied by a ‘contingent beauty’; perhaps the contrasting concepts are rather ‘connected beauty’ and ‘contingent beauty’. Things that don’t exhibit any quality we ascribe as beautiful don’t seem to be divided into those with incredibly, horribly convoluted connections and those with no connections of any kind at all (which is probably impossible anyway).

Incidentally, there’s another way of looking at the coincidences situation: for situations that are deeply connected, one could “search intelligently” by modifying bits of the reasoning to expand the set of things that are connected and hence discover new relationships (although obviously they can also be discovered by chance observation and analysed later). If there are lots of beautiful (and potentially useful) relationships that are purely coincidental, then the only way to discover them is for someone, for some reason, to generate enough of both things that someone sees they look to be “beautifully related”. For some reason I find that thought mildly depressing.

Posted by: bane on February 13, 2009 1:55 PM

Re: contingent beauty

If not a ‘contingent ugliness’ there must be a ‘contingent non-beauty’, otherwise what is the ‘beauty’ bit doing? Why not just say ‘contingent’ contrasted with ‘connected’? (I still like my ‘happenstantial’.)

Zeilberger almost seems to find the contingent more beautiful than the related. But surely we have to stop somewhere. That the seventeenth and eighteenth decimal digits of e and π are identical is surely not beautiful, though this two-digit matching happens earlier than expected.

So what makes for the beautiful aspect of contingent beauty?

Posted by: David Corfield on February 16, 2009 11:43 AM

Re: contingent beauty

I expect Zeilberger used the 18281828… example because most people would agree that that at least is just a coincidence. Then he can say that Ramanujan's theorem mentioned next to it may be similarly just a coincidence.

But yeah, this does beg the question of what makes a coincidence (just or otherwise) beautiful. Perhaps beauty is whatever our intuition expects an explanation for (rather subjective, as beauty is often thought to be), so contingent beauty is that which we expect to have an explanation but which is really just a coincidence.

Posted by: Toby Bartels on February 16, 2009 11:25 PM

Einstein’s theological take on this; Re: contingent beauty

Am I the only one here who watched “House” tonight and heard this quote (which I’ve verified)?

“Coincidence is God’s way of remaining anonymous.”

– Albert Einstein [The World As I See It].

That leads to the (to me) annoying part near the end of the novel (not the film) “Contact” by Carl Sagan, where the graphic of an enormous circle is found hidden in the digits of pi, and this is cited as evidence of divinity. That annoyed me because Sagan was playing games with us, after a previously reasonable debate between Science and Faith. The film had parts which annoyed me and my wife too, but those were the favorite parts of an Irish Catholic friend of mine.

Well, these discussions on beauty, creation, contingency, and Math are not likely to reach a perfect consensus. But fun!

Posted by: Jonathan Vos Post on February 17, 2009 6:23 AM

Re: contingent beauty

…so contingent beauty is that which we expect to have an explanation but which is really just a coincidence.

I think you’re on to something with that. Maybe if I thought e’s digit repetition could have an explanation I would find it prettier. Changing ideas of what’s plausibly explicable should affect aesthetics then.

On the other hand, there’s a similar case where we do have something of an explanation – the repetition of the first two pairs of digits in

√2 = 1.41421356…,

which could be put down to

√2 = 7/5 × (1 + 1/49)^(1/2).

Not a terribly beautiful explanation I admit, twice 49 being close enough to 100.
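For anyone who wants the arithmetic spelled out: the identity is exact, since (7/5)² × (1 + 1/49) = (49/25) × (50/49) = 2, and the first-order binomial estimate (1 + 1/49)^(1/2) ≈ 1 + 1/98 gives 1.4 + 1/70 = 1.4142857…, which is where the repeated ‘14’ comes from. A minimal Python sketch of the same check:

from decimal import Decimal, getcontext
getcontext().prec = 12

root2 = Decimal(2).sqrt()                        # 1.41421356237
# First-order binomial estimate: 7/5 * (1 + 1/98) = 1.4 + 1/70
estimate = Decimal(7) / 5 * (1 + Decimal(1) / 98)
print(root2, estimate)                           # 1.41421356237  1.41428571429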

André Joyal’s example of a miracle, an extreme case of an unexplained event which one would expect to be explicable, was that the algebraic closure of ℝ is an extension of such small degree.

Posted by: David Corfield on February 17, 2009 9:36 AM

A prime is a prime is a prime; Re: contingent beauty

I may be mangling Prof. Gregory Benford’s aphorism through faulty memory, but a near paraphrase is:

“Don’t rely on rules of thumb, when other intelligent beings may have a different number of thumbs.”

There are never-ending arguments on the Web between people who think they’ve found deep truths in something that others dismiss as mere artifacts of writing numbers in base 10. Things true in ANY base are more likely to matter.

Once, after some sleight of hand I showed my smarter-than-me Physics professor wife, she asked “is that true if the primes are in some other base?” Then we both fell silent, wondering who would be the first to say “it doesn’t matter; a prime’s a prime.”

But why limit ourselves to conventional bases? Knuth promotes base -3. Factorial bases are nice. Natural log is more natural than log base 10, isn’t it?

I recently added something to the OEIS:

A155967 Binary transpose primes. Integers of k^2 bits which, when written row by row as a square matrix and then read column by column, are primes once transformed.

I could not tell if I’d found something in a “sweet spot” at the shallow end of the pool: elementary, original, nontrivial, or whether nobody cares because binary is contingent. But then R. J. Mathar wrote a nice little MAPLE program and extended my list of examples, and so I wonder. This could have been dreamed up any time in the past couple of centuries. Have I found an interesting transformation, or tripped on a random lump of rock?
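In case anyone wants to play with it, here is a rough Python sketch of one reading of the definition (whether n itself must also be prime is a guess on my part, so I only test the transposed reading; check the OEIS entry for the exact condition; sympy is assumed for primality testing):

from sympy import isprime

def transpose_bits(n, k):
    # Write the k*k binary digits of n into a k-by-k matrix row by
    # row, then read them back out column by column.
    bits = format(n, "0{}b".format(k * k))
    return int("".join(bits[r * k + c] for c in range(k) for r in range(k)), 2)

# One reading of A155967: n has exactly k^2 bits and the
# column-by-column reading is prime.
k = 3
hits = [n for n in range(2 ** (k * k - 1), 2 ** (k * k))
        if isprime(transpose_bits(n, k))]
print(len(hits), hits[:8])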

Yes, Zeilberger’s essay is a gem. But the epistemological and ontological conundra of Mathematics are arguable enough already, so that when Aesthetics is added to the mix, anything can happen!

Posted by: Jonathan Vos Post on February 14, 2009 3:44 AM

Re: Aschbacher,arsmathematica, Corfield, Gelfand, Borcherds, Zeilberger, Peano, Chaitin, Mauchly, and hypertext, Re: Last Person Standing

Toby wrote:

I would define a ‘coincidence’ as a situation that has no simpler explanation than the bare facts (although our intuition may look for one).

That sounds related to the Kolmogorov-Chaitin definition of a bit string of length L as ‘algorithmically random’ if it cannot be printed by a program written in binary code with length less than L − k for some constant k.

Everyone points out that the constant k depends on the programming language — but it also depends on how much of a paranoid you are: the more paranoid, the smaller your k. If your k is negative, you’re really in trouble, because you think nothing is random: you’ll even be satisfied with an explanation more complicated than the facts to be explained.

There’s some danger in having too large a value of k, too, but people don’t talk about that as much.
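Kolmogorov complexity itself is uncomputable, but as a toy illustration one can use a compressor’s output length as a crude, computable stand-in for description length. A minimal Python sketch (using zlib as the stand-in is my choice, not part of the definition):

import os, zlib

def crude_description_length(s):
    # Length of the zlib-compressed form of s: an upper bound
    # standing in for the (uncomputable) shortest program.
    return len(zlib.compress(s, 9))

patterned = b"1828" * 250        # highly regular, compresses well
random_ish = os.urandom(1000)    # incompressible with high probability

for label, s in (("patterned", patterned), ("random-ish", random_ish)):
    print(label, len(s), "->", crude_description_length(s))

On this proxy the patterned string comes out far below its raw length, while the random one does not; a string counts as ‘algorithmically random’ when no program beats L − k.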

Posted by: John Baez on February 15, 2009 1:42 AM

The APALing Woody Allen; Re: Aschbacher,arsmathematica, Corfield, Gelfand, Borcherds, Zeilberger, Peano, Chaitin, Mauchly, and hypertext, Re: Last Person Standing

The question is not “Are you paranoid?” but “Are you paranoid ENOUGH?”

I agree that Toby’s definition “sounds related to the Kolmogorov-Chaitin definition of a bit string of length L as ‘algorithmically random’ if it cannot be printed by a program written in binary code with length less than L−k for some constant k.”

This opens the door to epistemological (“what do we know, how do we know it, and do we know that we know it?”) and ontological (“does this pattern exist in the real world, or just in my mind?”) applications of the cluster of theories in “Advances in Minimum Description Length: Theory and Applications” by Peter D. Grünwald, In Jae Myung, Mark A. Pitt, 2005, 444 pages. “The book concludes with examples of how to apply MDL in research settings that range from bioinformatics and machine learning to psychology.”

I think that the mathematical philosophers in the n-Category Café are going a level deeper into foundational abstraction than this book. And, if we think about coincidences between coincidences, an infinite number of levels in the limit.

Woody Allen joked that the best treatment for the paranoid is to hire people to follow him around, because now he is by definition non-delusional, and thus cured.

Papers such as “Paranoia and Narcissism in Psychoanalytic Theory: Contributions of Self Psychology to the Theory and Therapy of the Paranoid Disorders” by Thomas A. Aronson, M.D. support the conjecture that a certain minimum degree of paranoia is essential to the development of the child’s identity, in partitioning the self from the parents, perhaps beginning when the child realizes that the parents sometimes lie. The child develops a “theory of mind.”

My work for at least a half-dozen years with Professor Phil Fellman on Mathematical Disinformation Theory probes this reality of multiple agents with complex motives who are not just unreliable but actively machiavellianly giving the most misleading signals possible.

I think that Fitch’s Paradox of Knowability can be resolved by the next step beyond Arbitrary Public Announcement Logic (APAL)

Posted by: Jonathan Vos Post on February 15, 2009 6:09 PM

fixed URL; Re: The APALing Woody Allen; Re: Aschbacher,arsmathematica, Corfield, Gelfand, Borcherds, Zeilberger, Peano, Chaitin, Mauchly, and hypertext, Re: Last Person Standing

[sorry, I screwed up the last sentence and its link]:


I think that Fitch’s Paradox of Knowability can be resolved by the next step beyond Arbitrary Public Announcement Logic (APAL), a dynamic logic that extends epistemic logic – if we add paranoia by agents as to levels of distrust of what’s said by other agents.

Posted by: Jonathan Vos Post on February 15, 2009 7:20 PM

3rd try to get URL right;Re: fixed URL; Re: The APALing Woody Allen; Re: Aschbacher,arsmathematica, Corfield, Gelfand, Borcherds, Zeilberger, Peano, Chaitin, Mauchly, and hypertext, Re: Last Person Standing

3rd time’s the charm?

Undecidability for Arbitrary Public Announcement Logic, by Tim French (School of Computer Science and Software Engineering, University of Western Australia) and Hans van Ditmarsch (Computer Science, University of Otago, and IRIT, France):

Arbitrary Public Announcement Logic (APAL) is a dynamic logic that extends epistemic logic with a public announcement operator, to represent the update corresponding to a public announcement, and an arbitrary announcement operator that quantifies over announcements. APAL was introduced by Balbiani, Baltag, van Ditmarsch, Herzig, Hoshi and de Lima in 2007 (TARK) as an extension of public announcement logic. A journal version (‘Knowable’ as ‘known after an announcement’) is forthcoming [JVP: Out as hardcopy now] in the Review of Symbolic Logic….

The link I gave is to an extension that differs from mine; the one linked to (in 36 PowerPoint pages) summarizes and concludes:

1 action model execution is a refinement
2 decidable (and for extensions too)
3 expressivity known (via encoding to bisimulation quantified logics, roughly comparable with mu-calculus)
4 complexity open
5 axiomatization open and hard (in quantifying over a more general set of announcements we sacrifice the witnessing formulas that were used in the APAL axiomatization)

Posted by: Jonathan Vos Post on February 15, 2009 7:46 PM

Re: Last Person Standing

Do we have examples of long proofs from earlier times which have been enormously simplified, not just through redefinition?

What about Gödel’s theorems? I don’t know what form his original proof of the Completeness theorem took, but surely Henkin’s version of the proof is at least conceptually much simpler, and gives the Löwenheim-Skolem theorem as a trivial consequence. And for the Incompleteness theorem, haven’t the results of Turing and others resulted in a serious simplification of the proof, even if not a shortening?

And haven’t there also been serious improvements on the Erdős/Selberg elementary proofs of the prime number theorem in the past 50 years?

Posted by: Kenny Easwaran on February 10, 2009 9:29 PM

Re: Last Person Standing

From an interview with Atiyah:

Has the time passed when deep and important theorems in mathematics can be given short proofs? In the past, there are many such examples, e.g., Abel’s one-page proof of the addition theorem of algebraic differentials or Goursat’s proof of Cauchy’s integral theorem.

ATIYAH I do not think that at all! Of course, that depends on what foundations you are allowed to start from. If we have to start from the axioms of mathematics, then every proof will be very long. The common framework at any given time is constantly advancing; we are already at a high platform. If we are allowed to start within that framework, then at every stage there are short proofs.

One example from my own life is this famous problem about vector fields on spheres solved by Frank Adams where the proof took many hundreds of pages. One day I discovered how to write a proof on a postcard. I sent it over to Frank Adams and we wrote a little paper which then would fit on a bigger postcard. But of course that used some K-theory; not that complicated in itself. You are always building on a higher platform; you have always got more tools at your disposal that are part of the lingua franca which you can use. In the old days you had a smaller base: If you make a simple proof nowadays, then you are allowed to assume that people know what group theory is, you are allowed to talk about Hilbert space. Hilbert space took a long time to develop, so we have got a much bigger vocabulary, and with that we can write more poetry.

Posted by: David Corfield on February 16, 2009 4:44 PM

Tao on Lax as Miraculous; Re: Last Person Standing

On p. 10 of Terry Tao’s wonderful survey “Why Are Solitons Stable?” (Bulletin of the AMS, Vol. 46, No. 1, Jan 2009), he says of the inverse scattering approach:

“This is a vast subject that can be viewed from many different algebraic and geometric perspectives; we shall content ourselves with describing the approach based on Lax pairs, which has the advantage of simplicity, provided that one is willing to accept a rather miraculous algebraic identity….”

So, beauty from something that looks at first like a weird coincidence, which on further analysis is so deep that it appears a miracle, even to a genius such as Tao!

Surely this matters very much, from both the Physics and the Mathematics perspectives.

Posted by: Jonathan Vos Post on February 19, 2009 5:40 PM
