
January 18, 2007

Duality between Probability and Optimization

Posted by David Corfield

One of the reasons I have an interest in what we find out about mechanics in different rigs is that many machine learning algorithms are expressible in thermodynamic form, as the tutorial, Energy-Based Models: Structured Learning Beyond Likelihoods, by Yann LeCun, made abundantly clear. We even see there the difference between working in the $\mathbb{R}^{+}$ rig, where you keep the whole probabilistic apparatus, and the $\mathbb{R}^{\min}$ rig, where you just seek an optimal solution. The former typically takes greater effort to work with: only in special situations is it analytically tractable, so one might use Monte Carlo techniques instead. On the other hand, in the probabilistic set-up the regularization term, which stops the inference problem being ill-posed, generally comes in the package for free, encoded in a prior.
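To see the contrast concretely, here is a minimal numerical sketch (the energies, the temperature schedule, and the function name `free_energy` are invented for illustration, not taken from LeCun's tutorial). Summing over configurations in the $\mathbb{R}^{+}$ rig yields a free energy $F(T) = -T \log \sum_i e^{-E_i/T}$, and as the temperature $T$ tends to zero this recovers the $\mathbb{R}^{\min}$ answer, $\min_i E_i$:

```python
import numpy as np

def free_energy(energies, T):
    """Free energy -T * log(sum_i exp(-E_i / T)), the (+, x) rig's
    way of combining configuration energies. As T -> 0 it tends to
    min_i E_i, which is all the (min, +) rig keeps.
    (Illustrative sketch; names and values are made up.)"""
    energies = np.asarray(energies, dtype=float)
    e_min = energies.min()
    # Shift by the minimum before exponentiating, for numerical stability.
    return e_min - T * np.log(np.sum(np.exp(-(energies - e_min) / T)))

energies = [3.0, 1.0, 4.0, 1.5]
for T in [1.0, 0.1, 0.01]:
    print(T, free_energy(energies, T))
# Output approaches min(energies) = 1.0 as T shrinks.
```

In this sense the optimization picture sits at the zero-temperature limit of the probabilistic one, which is one way of reading the duality in this post's title.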

Outside machine learning, much effort has been spent importing constructions into $\mathbb{R}^{\min}$, so that you might hear of the Duality between Probability and Optimization. For a very good overview of the uses of semirings such as $\mathbb{R}^{\min}$, Jonathan Golan's Some Recent Applications of Semiring Theory is recommended. I never knew about the role Sammy Eilenberg and John Horton Conway played in their revival around 1970.
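As one small instance of such an imported construction (a sketch with made-up edge weights; `tropical_matmul` is my own name for it, and the example is not drawn from Golan's paper): swapping the ordinary rig $(+, \times)$ for $(\min, +)$ turns the matrix powers that propagate path probabilities in a Markov chain into matrix powers that propagate shortest-path costs.

```python
import numpy as np

INF = np.inf  # the 'zero' of the (min, +) semiring

def tropical_matmul(A, B):
    """Matrix product in the (min, +) semiring: 'addition' is min and
    'multiplication' is +. Powers of an edge-cost matrix then give
    shortest-path costs, just as powers of a stochastic matrix give
    multi-step path probabilities in the ordinary (+, x) rig."""
    n, m = A.shape[0], B.shape[1]
    C = np.full((n, m), INF)
    for i in range(n):
        for j in range(m):
            C[i, j] = np.min(A[i, :] + B[:, j])
    return C

# Edge costs of a small directed graph (INF means no edge).
W = np.array([[0.0, 1.0, INF],
              [INF, 0.0, 2.0],
              [5.0, INF, 0.0]])

W2 = tropical_matmul(W, W)  # cheapest cost using at most two edges
print(W2[0, 2])  # 3.0, via the path 0 -> 1 -> 2
```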

Posted at January 18, 2007 8:21 PM UTC


1 Comment & 1 Trackback

Entropy; Re: Duality between Probability and Optimization

Given that “many machine learning algorithms are expressible in thermodynamic form”, what is known about the entropy of machine learning algorithms? John Baez cited Chris Lee (UCLA) in his diary, addressing the question of the entropy of evolution through natural selection. We have the genetic algorithm as a kind of machine learning algorithm, hence the question is not trivial. I have emailed Chris Lee an outline of my approach, which I have undertaken to coauthor with Prof. Fellman at Southern New Hampshire University for the International Conference on Complex Systems (Oct/Nov 2007). Do others who write for or read the n-Category Café know of other references to entropy in machine learning and, more particularly, to genetic algorithms? There is also discussion of this on Good Math, Bad Math, though unfortunately embedded in the demolition of an irritating advocate of Intelligent Design. Even a crackpot can once in a while ask a good question, I think.

Posted by: Jonathan Vos Post on January 19, 2007 9:49 PM | Permalink
Read the post Category Theoretic Probability Theory
Weblog: The n-Category Café
Excerpt: Having noticed (e.g., here and here) that what I do in my day job (statistical learning theory) has much to do with my hobby (things discussed here), I ought to be thinking about probability theory in category theoretic terms....
Tracked: February 7, 2007 11:58 AM
