
November 20, 2008

Mathematical Robustness

Posted by David Corfield

In his paper The Ontology of Complex Systems William Wimsatt explains how he chooses to approach the issue of scientific realism with the concept of robustness.

Things are robust if they are accessible (detectable, measurable, derivable, definable, producible, or the like) in a variety of independent ways.

The robustness of Jupiter’s moons was precisely what was up for debate when Galileo let the leading astronomers of his day look towards the planet through his telescope. Even if telescopes had proved their worth on Earth, allowing merchants to tell which ship was heading towards port beyond the range of the naked eye, this did not completely guarantee their accuracy as astronomical instruments. How do we know that light travels and interacts with matter in the same way in the superlunary realm as down here on Earth? How could the device be trusted when what appeared to the naked eye to be a single source of light was split in two in the telescope’s image?

Now that robustness for the moons is established, we can send probes close to their surfaces to report back on phenomena such as the Masubi Plume on Io. And we have an array of means to tell us that many stars are binary, so we know that Galileo’s telescope was reliable.

Does anything like robustness happen in mathematics?

Well, let’s see what Michiel Hazewinkel has to say in his paper Niceness theorems:

It appears that many important mathematical objects (including counterexamples) are unreasonably nice, beautiful and elegant. They tend to have (many) more (nice) properties and extra bits of structure than one would a priori expect…

These ruminations started with the observation that it is difficult for, say, an arbitrary algebra to carry additional compatible structure. To do so it must be nice, i.e., as an algebra be regular (not in the technical sense of this word), homogeneous, everywhere the same, … . It is for instance very difficult to construct an object that has addition, multiplication and exponentiation, all compatible in the expected ways.

He lists five phenomena:

A. Objects with a great deal of compatible structure tend to have a nice regular underlying structure and/or additional nice properties: “Extra structure simplifies the underlying object”.

I suppose we saw this with our discussion of the real numbers as the unique irreducible locally compact topological group with no compact open subgroups.

B. Universal objects. That is, mathematical objects which satisfy a universality property. They tend to have:

  • a nice regular underlying structure
  • additional universal properties (sometimes seemingly completely unrelated to the defining universal property)

Hazewinkel’s ‘star example’ is Symm, the ring of symmetric functions in an infinity of indeterminates.

Symm is an object with an enormous amount of compatible structure: Hopf algebra, inner product, selfdual (as a Hopf algebra), PSH, coring object in the category of rings, ring object in the category of corings (up to a little bit of unit trouble), Frobenius and Verschiebung endomorphisms, free algebra on the cofree coalgebra over Z (and the dual of this: cofree coalgebra over the free algebra on one element), several levels of lambda ring structure, … .
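As a concrete slice of this structure, the Hopf algebra operations on Symm can be written down explicitly in terms of the complete homogeneous symmetric functions h_n. (This is the standard textbook presentation, not a formula taken from Hazewinkel’s paper.)

```latex
% Symm = \mathbb{Z}[h_1, h_2, \dots] as a polynomial ring, with Hopf structure
\Delta(h_n) = \sum_{i=0}^{n} h_i \otimes h_{n-i},
\qquad \varepsilon(h_n) = \delta_{n,0}, \qquad h_0 = 1.
% Self-duality is witnessed by the Hall inner product, which pairs the
% complete homogeneous and monomial bases,
\langle h_\lambda, m_\mu \rangle = \delta_{\lambda\mu},
% and makes the Schur functions orthonormal:
\langle s_\lambda, s_\mu \rangle = \delta_{\lambda\mu}.
```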

The question arises which of these have natural interpretations in the other nine incarnations occurring in the diagram (and whether the isomorphisms indicated are the right ones for preserving these structures).

The last three phenomena are:

C. Nice objects tend to be large and, inversely, large objects of one kind or another tend to have additional nice properties. For instance, large projective modules are free.

D. Extremal objects tend to be nice and regular. (That the symmetry of a problem tends to survive in its extremal solutions is one aspect of this phenomenon, even when (if properly looked at) there is bifurcation (symmetry breaking) going on.)

E. Uniqueness theorems and rigidity theorems often yield nice objects (and inversely). They tend to be unreasonably well behaved. That is, if one asks for an object with such-and-such properties and the answer is unique, the object involved tends to be very regular. This is not unrelated to D.

Might it be that all of A-E are ‘not unrelated’?

A key question is whether we should adopt the Zeilberger approach to these phenomena, invoking “our human predilection for triviality, or more politely, simplicity”, since “human research, by its very nature, is not very deep”, or whether we should take these phenomena in a Wimsattian way, as robustness indicating reality.

Posted at November 20, 2008 9:41 AM UTC


9 Comments & 2 Trackbacks

Pythagoras, Wigner, Tao; Re: Mathematical Robustness

Wigner said “reality” but wondered why. Pythagoras said so too, but we don’t trust cults as much these days.

Article in the New York Times, and maths education

Terry Tao:

“…surprisingly often the pursuit of one goal can lead to unexpected progress on other goals (cf. Wigner’s ‘unreasonable effectiveness of mathematics’). See also my article on ‘what is good mathematics?’.”

Posted by: Jonathan Vos Post on November 20, 2008 3:02 PM | Permalink | Reply to this

Re: Mathematical Robustness

In the maths excerpts I don’t see a direct reference to “independent ways” (and I haven’t read the paper to see if the independence is important). The one obvious example of a mathematical-ish nature is computability. This was addressed independently by Turing using universal machines, Church using the lambda calculus, and Post using his rewriting systems, before the equivalence of all three was noticed. I don’t know the original work well enough to know (a) whether they figured out the existence of non-computable functions independently, and (b) whether each approach did it by basically noting you can set things up to use a Cantor-style diagonal argument (so it’s arguably not really independent), or whether there’s more to showing non-computability in, say, lambda calculus or Post systems.
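The Cantor-style diagonal argument mentioned here has a shape one can sketch in a few lines. This toy version diagonalizes against a finite stand-in enumeration; it illustrates only the form of the argument, not Turing’s, Church’s, or Post’s actual constructions:

```python
# Diagonalization: given any enumeration f_0, f_1, ... of total functions,
# the function g(i) = f_i(i) + 1 differs from every f_i at input i.
# The same shape of argument shows no enumeration of total computable
# functions can be complete, and yields non-computable functions.

def diagonalize(enumeration):
    """Return a function guaranteed to be absent from `enumeration`."""
    return lambda i: enumeration[i](i) + 1

# A small stand-in enumeration of total functions:
fs = [lambda n: 0, lambda n: n, lambda n: n * n]

g = diagonalize(fs)
# g differs from f_i at the diagonal input i:
assert all(g(i) != fs[i](i) for i in range(len(fs)))
```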

Anyway, that’s a vague thought.

Posted by: bane on November 20, 2008 4:48 PM | Permalink | Reply to this

Re: Mathematical Robustness

Everything depends on what we call an object. The kind of things I see as ‘robust’ are, for example, some particular objects from geometry. For instance, the projective spaces (or, more generally, the Grassmannians) are robust in the sense that they make sense in very different contexts, namely different geometries: it is almost sufficient to make sense of GL_n. This makes some other complicated objects robust as well: singular cohomology, de Rham cohomology, K-theory and cobordism are robust in the sense that they make sense in very different contexts. This might mean that what we call geometry is quite robust as well: it can be seen in very wide contexts (algebraic geometry, differential geometry, analytic geometry, Faltings’ almost algebraic geometry, but also tropical geometry, Toën and Vaquié’s geometry under Spec(\mathbb{Z}), Durov’s geometry over commutative monads, and I don’t speak of their derived versions…).
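One way to make the ‘it is almost sufficient to make sense of GL_n’ remark precise is the functor-of-points description, which transplants projective spaces and Grassmannians into any context with a notion of commutative ring. (This is a standard formulation, not taken from the comment itself.)

```latex
% The Grassmannian as a functor on commutative rings:
\mathrm{Gr}(k,n)(R) \;=\; \{\ \text{direct summands } M \subseteq R^n \text{ of rank } k\ \},
% with projective space as the special case
\mathbb{P}^{n-1} \;=\; \mathrm{Gr}(1,n),
% and, over a field, the homogeneous-space description
\mathrm{Gr}(k,n) \;\cong\; \mathrm{GL}_n / P_k
\quad (P_k \text{ a parabolic subgroup}).
```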

Another example is the theory of categories (say 1-categories): this is a very robust theory in the sense that the theory of (\infty,1)-categories is just some kind of reincarnation of it: most of the basic statements from 1-category theory extend in a straightforward way to statements in (\infty,1)-category theory.

This kind of way of being robust is certainly related to mathematical reality (in the sense of Lautman, say). This is so real (I mean in the practice of the mathematician) that this is also the kind of things we might even hope to axiomatize: for instance, an axiomatic approach to geometries (e.g. J. Lurie already started to do so). This is where the expressive power of category theory becomes wonderful.

This also makes sense for another kind of robustness: there are some very particular objects which spread everywhere (i.e. into a lot of robust theories as above). For instance, SL_2 or the absolute Galois group of \mathbb{Q}. But also, sometimes, some objects which might look silly at first sight, like the Fibonacci sequence, which we find by counting generating cells in iterated loop spaces.

But I don’t think we remain attached to robust ideas only because of our poor minds. It is not that there are some parts of reality which are simpler: we make our perception of reality simple and robust. And before we even try to look at reality, reality is neither simple nor complicated. Without an observer, it is just not determined. Simplicity is a matter of choice.

Posted by: Denis-Charles Cisinski on November 21, 2008 11:49 AM | Permalink | Reply to this

Re: Mathematical Robustness

You remind me that I seem to be arguing against myself by concentrating on objects when in a recent talk I stressed the need to look to ideas to approach reality.

Maybe there’s a duality of sorts: an idea manifesting itself in many places, many ideas manifesting themselves in the same object.

Your comments on simplicity I very much agree with, as I do Polanyi’s:

It is legitimate, of course, to regard simplicity as a mark of rationality, and to pay tribute to any theory as a triumph of simplicity. But great theories are rarely simple in the ordinary sense of the term. Both quantum mechanics and relativity are very difficult to understand; it takes only a few minutes to memorize the facts accounted for by relativity, but years of study may not suffice to master the theory and see these facts in its context. Hermann Weyl lets the cat out of the bag by saying: ‘the required simplicity is not necessarily the obvious one but we must let nature train us to recognize the true inner simplicity.’ In other words, simplicity in science can be made equivalent to rationality only if ‘simplicity’ is used in a special sense known solely by scientists. We understand the meaning of the term ‘simple’ only by recalling the meaning of the term ‘rational’ or ‘reasonable’ or ‘such that we ought to assent to it’, which the term ‘simple’ was supposed to replace. The term ‘simplicity’ functions then merely as a disguise for another meaning than its own. It is used for smuggling an essential quality into our appreciation of a scientific theory, which a mistaken conception of objectivity forbids us openly to acknowledge.

Posted by: David Corfield on November 21, 2008 1:10 PM | Permalink | Reply to this

Re: Mathematical Robustness

Maybe there’s a duality of sorts: an idea manifesting itself in many places, many ideas manifesting themselves in the same object.

That may be in accord with Mac Lane’s philosophy (as enunciated in Mathematics: Form and Function). One sees the idea of the Viergruppe, for example, manifesting itself in many places where there are two levels of duality. Or one can see a great confluence of ideas entering the study of a single object, such as the étale topos, or the algebraic closure of \mathbb{Q}. Mac Lane emphasizes over and over the protean aspects of mathematical reality.

Posted by: Todd Trimble on November 21, 2008 2:42 PM | Permalink | Reply to this

Re: Mathematical Robustness

Physicist viewpoint: we often construct examples or counter-examples either by constructing something with a lot of structure or, in an attempt to construct something with no structure, by defining something random. As examples of the latter, see Shannon’s use of random codes, the use of random matrix theory to model excited states in nuclei, the use of random states in quantum information theory, the use of random graphs as expanders, and many, many more. However, usually when you do something random, you again find structure for other reasons: random codes are a way to reach the Shannon capacity bound, random matrix theory has close ties to integrable systems, random states and random graphs also do really well at meeting certain bounds, and so on. So, I think our problem is that we don’t know how to do something _slightly_ unstructured, as it were.
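The point that ‘random’ constructions turn out to be structured can be seen numerically: symmetrize a matrix of i.i.d. Gaussians and its normalized spectrum obeys Wigner’s semicircle law, concentrating on [-2, 2]. A minimal numpy sketch (the size, seed, and thresholds are arbitrary choices, not from the comment):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400

# GOE-style Wigner matrix: symmetrize an i.i.d. standard-normal matrix.
a = rng.standard_normal((n, n))
w = (a + a.T) / np.sqrt(2)

# After dividing by sqrt(n), the spectrum concentrates on [-2, 2]
# (Wigner's semicircle law), even though nothing was "designed".
eigs = np.linalg.eigvalsh(w) / np.sqrt(n)

print(round(eigs.min(), 2), round(eigs.max(), 2))  # edges close to -2 and 2
assert -2.5 < eigs.min() < -1.8 and 1.8 < eigs.max() < 2.5
assert abs(eigs.mean()) < 0.1  # spectrum is nearly symmetric about 0
```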

Posted by: matt on November 21, 2008 3:15 PM | Permalink | Reply to this

Re: Mathematical Robustness

Random matrix ensembles seem to display plenty of universality. I wonder if that’s linked to their being kinds of maximum entropy distribution, the sort of thing we discussed here.

Posted by: David Corfield on November 21, 2008 4:41 PM | Permalink | Reply to this

Minimum Description Length sharpens Ockham’s razor; Re: Mathematical Robustness

I am fascinated by Wimsatt’s metatheory.

“… But Ockham’s razor (or was it Ockham’s eraser?) has a curiously ambiguous form–an escape clause which can turn it into a safety razor: How do we determine what is necessary? With the right standards, one could remain an Ockhamite while recognizing a world which has the rich multi-layered and interdependent ontology of the tropical rain forest–that is, our world. It is tempting to believe that recognizing such a world view requires adopting lax or sloppy standards–for it has a lot more in it than Ockhamites traditionally would countenance.”

However, this is foundationally vague, as Ockham’s razor in its traditional form has a plethora of ambiguities and puzzles.

I lean strongly (after writing a 100-page unpublished monograph, out of which two refereed papers have so far emerged) towards belief in the axiomatization of Ockham by MDL.

Advances in Minimum Description Length: Theory and Applications, edited by Peter D. Grünwald, In Jae Myung, and Mark A. Pitt. MIT Press (Bradford Books / Neural Information Processing series), April 2005, 454 pp., 73 illus., ISBN 978-0-262-07262-5.

See also: MML, MDL, Minimum Encoding Length Inference (many useful hotlinks): http://www.csse.monash.edu.au/~dld/MELI.html

The Minimum Description Length Principle, by Peter D. Grünwald, foreword by Jorma Rissanen. MIT Press (Adaptive Computation and Machine Learning series), May 2007, 703 pp., ISBN 978-0-262-07281-6.

The minimum description length (MDL) principle is a powerful method of inductive inference, the basis of statistical modeling, pattern recognition, and machine learning. It holds that the best explanation, given a limited set of observed data, is the one that permits the greatest compression of the data. MDL methods are particularly well-suited for dealing with model selection, prediction, and estimation problems in situations where the models under consideration can be arbitrarily complex, and overfitting the data is a serious concern. This extensive, step-by-step introduction to the MDL Principle provides a comprehensive reference (with an emphasis on conceptual issues) that is accessible to graduate students and researchers in statistics, pattern classification, machine learning, and data mining, to philosophers interested in the foundations of statistics, and to researchers in other applied sciences that involve model selection, including biology, econometrics, and experimental psychology.

Part I provides a basic introduction to MDL and an overview of the concepts in statistics and information theory needed to understand MDL. Part II treats universal coding, the information-theoretic notion on which MDL is built, and part III gives a formal treatment of MDL theory as a theory of inductive inference based on universal coding. Part IV provides a comprehensive overview of the statistical theory of exponential families with an emphasis on their information-theoretic properties.
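The two-part-code flavor of MDL can be illustrated with a toy model-selection problem: score each polynomial degree by (bits to state the parameters) + (bits to encode the residuals), and pick the degree minimizing the total. The crude code-length formula below (k/2·log2 n bits for parameters, n/2·log2 of the residual variance for data) is only a standard rough approximation, not the refined normalized-maximum-likelihood codes treated in the book; all numbers here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy data whose true law is linear.
x = np.linspace(0, 1, 50)
y = 2.0 * x + 1.0 + 0.05 * rng.standard_normal(50)

def description_length(x, y, degree):
    """Crude two-part code length: parameter bits + data-given-model bits."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    n, k = len(y), degree + 1
    param_bits = 0.5 * k * np.log2(n)                # cost of stating the model
    data_bits = 0.5 * n * np.log2(residuals.var())   # cost of the residuals
    return param_bits + data_bits

best = min(range(6), key=lambda d: description_length(x, y, d))
print(best)  # a low degree wins: higher degrees fit noise but cost extra bits
```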

The text includes a number of summaries, paragraphs offering the reader a fast track through the material, and boxes highlighting the most important concepts.



Posted by: Jonathan Vos Post on November 22, 2008 4:49 PM | Permalink | Reply to this

Levins, MacArthur, Lewontin, E. O. Wilson; Re: Mathematical Robustness

When Wimsatt cites “R. Levins” he is acutely pointing to a foundational argument in mathematical biology, very much on the matter of Ockham’s razor being inapplicable.

Richard Levins is discussed thus by the Philosopher of Science Jay Odenbaugh:

The Strategy of “The Strategy of Model Building in Population Biology”
Jay Odenbaugh
Department of Philosophy
Environmental Studies
Lewis and Clark College
Portland, Oregon 97219
jay@lclark.edu

I. Introduction. This essay is an historical exploration of the methodological underpinnings of Richard Levins’s classic essay ‘The Strategy of Model Building in Population Biology’, in which I argue for several theses. First and foremost, his essay constitutes a statement and defense of a more ‘holistic and integrated’ theoretical population biology that grew out of the informal and formal collaborations of Levins, Robert MacArthur, Richard Lewontin, E. O. Wilson, and others. Second, Levins’ essay and the views introduced would be used as a response to the rise of systems ecology in the 1960s against the background of the International Biological Program. Third, the arguments Levins employs are best construed as ‘pragmatic’ – a point that sometimes goes unnoticed by contemporary scientists and philosophers. Finally, I turn to the contemporary scene and consider the similarities and differences between the limitations of the population biology of the 1960s and that of 2005, raising open questions about the applicability of Levins’ analysis.

II. ‘Simple Theorists’ or a Prolegomena to a New Population Biology. In the 1960s, Levins, Richard Lewontin, Robert MacArthur, E. O. Wilson, Leigh Van Valen, and others were interested in integrating different areas of population biology mathematically. Apparently they met on several occasions at MacArthur’s lakeside home in Marlboro, Vermont, discussing their own work in population genetics, ecology, biogeography, and ethology and how a ‘simple theory’ might be devised.

In an interview in the early seventies, E. O. Wilson describes the methodological program of the ‘simple theorists’: ‘Biologists like MacArthur and myself, and other scientists at Harvard, Princeton, and the University of Chicago especially, believe in what has come to be called “simple theory”, that is, we deliberately try to simplify the natural universe in order to produce mathematical principles. We think this is the most creative way to develop workable theories. We don’t even try to take all the possible factors in a particular situation into account, such as sudden changes of weather or the effects of unusual tides.’ (Chisholm 1972, 177).

He goes on to compare this program against the competition:

On the one hand, you’ve got the hard ecologists like MacArthur and myself, who, as I’ve explained, believe in simplifying theory as much as possible. You can call us the simple theorists. But in the last five years or so a group has developed, around people like Paul Ehrlich at Stanford, C. S. Holling at British Columbia in Canada, and Kenneth Watt at Davis, who are also mathematical ecologists, but who believe in complex theory… They say that because ecosystems are so vastly complex, you must be able to take all the various components into account. You really must feed in a lot of the stuff that we simple theorists leave out, like sunsets and tides and temperature variations in winter, and the only way you can do this is with a computer. To them, in other words, the ideal modern ecologist is a computer technologist, who scans the whole environment, feeds all the relevant information into a computer, and uses the computer to simulate problems and make projections into the future. (Chisholm 1972, 181-2)…

E. O. Wilson (1994) has written about the meetings that occurred; however, little has been written on the substance of those meetings. At the time, MacArthur was at Princeton and Lewontin and Levins were at the University of Chicago.

Posted by: Jonathan Vos Post on November 22, 2008 10:27 PM | Permalink | Reply to this
Read the post Looking for Compatible Structure
Weblog: The n-Category Café
Excerpt: Can Fraisse limits support algebraic structure?
Tracked: January 8, 2010 11:54 AM
Read the post Inevitability in Mathematics
Weblog: The n-Category Café
Excerpt: When is it unavoidable that a construct be found?
Tracked: June 28, 2010 12:15 PM
