

Progic

September 18, 2007


Posted by David Corfield

My colleague here in Canterbury Jon Williamson is part of an international research group, progicnet, whose aim is to find a good integration of probability theory and first-order logic. For one reason or another, some technical projects get counted as philosophy, while others don’t. Where I’m sure I’d have a hard time getting funding for 2-geometry, progicnet’s goals are philosophically pukka.

Now, as Jon says,

We see then that there are a plethora of combinations of probability and logic, and that these approaches are being investigated in some detail.

But is there a ‘natural’ way of doing it? In the quest to have philosophy take notice of category theory, all we need do is use it to blend probability theory and logic powerfully. Does anyone know of any progress along these lines? Perhaps involving the probability monad?

On the logic front, there’s a huge amount already worked out category theoretically, such as Lawvere’s observations on the adjunctions (p. 12) between quantification and substitution. Here, formulae sit in fibres above the product of the types of their free variables.


$\psi(y) \vdash_{Y} \forall x\, \phi(x, y)$ if and only if $\psi(y) \vdash_{X \times Y} \phi(x, y)$,

where the second $\psi(y)$ is the result of projecting it from the fibre above $Y$ to the fibre above $X \times Y$.


$\exists x\, \phi(x, y) \vdash_{Y} \psi(y)$ if and only if $\phi(x, y) \vdash_{X \times Y} \psi(y)$.
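For finite sets these adjunctions can be checked exhaustively. Here is a toy sketch in Python (the names `weaken`, `forall_x`, `exists_x` are mine, not standard): a formula in the fibre above $X \times Y$ is modelled as a subset of $X \times Y$, weakening is pullback along the projection, and both biconditionals above are verified for every choice of $\phi$ and $\psi$.

```python
from itertools import combinations, product

def weaken(T, X):
    """Pull psi back from the fibre over Y to the fibre over X x Y."""
    return {(x, y) for x in X for y in T}

def forall_x(S, X, Y):
    """Right adjoint to weakening: 'for all x, phi(x, y)'."""
    return {y for y in Y if all((x, y) in S for x in X)}

def exists_x(S, X, Y):
    """Left adjoint to weakening: 'there exists x with phi(x, y)'."""
    return {y for y in Y if any((x, y) in S for x in X)}

def subsets(s):
    s = list(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

X, Y = {0, 1}, {'a', 'b'}
# Check both adjunctions for every phi (subset of X x Y) and psi (subset of Y).
for S in subsets(product(X, Y)):
    for T in subsets(Y):
        assert (T <= forall_x(S, X, Y)) == (weaken(T, X) <= S)
        assert (exists_x(S, X, Y) <= T) == (S <= weaken(T, X))
```

Nothing deep happens in the code, but it makes the two displayed biconditionals concrete: $\forall$ is right adjoint and $\exists$ is left adjoint to the same substitution functor.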

Now how do we go about getting a probabilistic valuation on the sentences, i.e., the formulae in the fibre above the empty set?

Posted at September 18, 2007 11:32 AM UTC


18 Comments & 3 Trackbacks

Re: Progic

Sometimes I feel I’ve finally learned British English. For example, I’ve not only learned that an “anorak” is a waterproof coat; I also understand what people mean by saying “he’s a real anorak”.

But then someone says something like “philosophically pukka” — and my bubble of hubris is rudely popped.

“Pukka” sounds bad, but from context I’m guessing it’s good.

Posted by: John Baez on September 18, 2007 8:08 PM | Permalink | Reply to this

Re: Progic

Yes, pukka is good. It came into English in the days of the Raj from a Hindi word meaning cooked or ripe. When I was a schoolboy in the ’70s it was a cool way to say ‘proper’ or ‘the real thing!’

Posted by: Geoff Clohessy on September 18, 2007 8:40 PM | Permalink | Reply to this

Re: Progic

I did not know it had anything to do with cooking. The phrase in my mind is “pukka wallah”. Don’t ask me what this means.

I also think you would have more chance of understanding this if you went to a public school.

Posted by: Bruce Westbury on September 19, 2007 8:15 PM | Permalink | Reply to this

Re: Progic

You’re thinking of punkah wallah, the boy who pulls the ceiling fan.

I don’t think the use of ‘pukka’ is so much a class matter. It’s more a case of when it was made popular. The chef Jamie Oliver likes to use it.

I’ll get back to the thread itself when I’ve come up for air from my new job.

Posted by: David Corfield on September 20, 2007 1:45 PM | Permalink | Reply to this

Re: Progic

I’m feeling sorry for David. It must be disappointing to post a blog entry about connections between probability theory and logic and trigger nothing but a discussion about the word ‘pukka’.

So, let’s talk a bit about that probability monad.

Roughly speaking, this is a functor sending any space $X$ to the space of probability measures on $X$.

‘Any space’ — but what sort of space? David said we should use certain nice topological spaces called Polish spaces. I’m sure this has advantages, but it’s a bit technical. I think we should postpone such technicalities until we understand simpler issues.

So, let’s work with a simpler monad

$M: Set \to Set$

sending any set $X$ to the set $M X$ whose elements are certain very nice measures on $X$: finite positive linear combinations of Dirac delta measures.

The reason I like $M X$ is that it’s the free $[0,\infty)$-module on $X$!

Here $[0,\infty)$ is not a ring, but a rig with the usual addition of real numbers as $+$ and the usual multiplication as $\times$. We can talk about $R$-modules for $R$ a rig, just as we can when it’s a ring. The free $R$-module on $X$ consists of finite $R$-linear combinations of elements of $X$. When $R = [0,\infty)$, these are the same as finite positive linear combinations of Dirac delta measures on $X$.
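Here is a minimal sketch of this monad in Python, representing a finitely-supported measure as a dict from points to nonnegative coefficients (the names `delta` and `bind` are my own choices for the monad's unit and Kleisli extension):

```python
def delta(x):
    """Unit of the monad: the Dirac measure on x."""
    return {x: 1.0}

def bind(m, k):
    """Kleisli extension: extend k : X -> M Y linearly over the measure m."""
    out = {}
    for x, w in m.items():
        for y, v in k(x).items():
            out[y] = out.get(y, 0.0) + w * v
    return out

# A measure on {'a','b'}: 2*delta_a + 3*delta_b.
m = {'a': 2.0, 'b': 3.0}
# A map from {'a','b'} into measures on {0,1}.
k = lambda x: {0: 1.0, 1: 1.0} if x == 'a' else {1: 4.0}
result = bind(m, k)   # 2*delta_0 + 14*delta_1
```

The monad laws amount to `bind(delta(x), k) == k(x)` and `bind(m, delta) == m`, which is exactly the free-module universal property in disguise.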

David should like this, because it begins to develop a bridge between the probability monad and the wonderful world of matrix mechanics over arbitrary rigs.

For any rig RR there’s a monad

$T_R: Set \to Set$

sending any set $X$ to the underlying set of the free $R$-module on $X$. I just explained why

$M \cong T_{[0,\infty)}$

But, it’s also interesting to take $R$ to be the rig of real numbers, or complex numbers, or truth values — or the rig of costs $\mathbb{R}^{\min}$, or the temperature-dependent rig $\mathbb{R}^T$. As David knows and the links show, all these are interrelated in a wonderful way. Probability theory is just part of this big picture, which also includes classical mechanics, quantum mechanics and Boolean logic.
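To see the "matrix mechanics over rigs" point concretely, here is a toy matrix product parametrized by a rig, sketched in Python (the rig triples and names below are assumptions of this illustration, not a library API): over the truth-value rig it composes relations, over the cost rig it takes shortest-path steps, and over $[0,\infty)$ it composes weighted transitions.

```python
import math

def _fold(add, zero, xs):
    """Sum a list using the rig's addition."""
    acc = zero
    for x in xs:
        acc = add(acc, x)
    return acc

def matmul(A, B, add, mul, zero):
    """Matrix product over an arbitrary rig (R, add, mul, zero)."""
    n, m, p = len(A), len(B), len(B[0])
    return [[_fold(add, zero, [mul(A[i][k], B[k][j]) for k in range(m)])
             for j in range(p)] for i in range(n)]

# Rig of truth values: matrix product = composition of relations.
Bool = (lambda a, b: a or b, lambda a, b: a and b, False)
# Rig of costs (min, +): matrix product = one step of shortest paths.
Cost = (min, lambda a, b: a + b, math.inf)
# The rig [0, inf): matrix product = composing weighted transitions.
Prob = (lambda a, b: a + b, lambda a, b: a * b, 0.0)

A = [[1.0, 0.5], [0.0, 2.0]]
weights = matmul(A, A, *Prob)
```

The same eight lines of `matmul` specialize to logic, optimization, and (unnormalized) probability just by swapping the rig — which is the interrelation being gestured at above.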

Or is it???

You’ll notice my monad

$M : Set \to Set$

sends a finite set $X$ not to the set of probability measures on $X$, but to the set of all measures on $X$.

That’s fine if we’re willing to accept the concept of ‘relative probabilities’, which aren’t necessarily normalized so their sum equals 1.

But what if we annoyingly insisted on working with probabilities that sum to 1? Then we’d get the monad

$P : Set \to Set$

sending any set $X$ to the set of probability measures on $X$ that are finite linear combinations of Dirac delta measures. This is a watered-down version of David’s probability monad.

Now, there’s no rig $R$ such that

$P \cong T_R$

The annoying condition that probabilities must sum to 1 stands in the way.
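The obstruction is easy to see numerically. A rough sketch (the helper `is_prob` is invented for illustration): pointwise rig addition of two distributions leaves the simplex, while a convex combination stays inside it.

```python
def is_prob(d, tol=1e-9):
    """Check that d is a finitely-supported probability measure."""
    return all(p >= 0 for p in d.values()) and abs(sum(d.values()) - 1.0) < tol

mu = {'a': 0.5, 'b': 0.5}
nu = {'a': 1.0}
support = set(mu) | set(nu)

# Pointwise rig addition: the total mass is now 2, not 1.
pointwise_sum = {x: mu.get(x, 0.0) + nu.get(x, 0.0) for x in support}

# A convex combination p*mu + (1-p)*nu keeps total mass 1.
p = 0.3
convex = {x: p * mu.get(x, 0.0) + (1 - p) * nu.get(x, 0.0) for x in support}
```

So the operations available on probability measures are not the rig operations themselves; they are the convex combinations discussed below, which is why a monad (rather than a rig) is the right gadget.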

But, I believe there’s a generalized ring that does the job — a generalized ring in the sense of Nikolai Durov. This is the guy who generalized algebraic geometry to handle weird generalized rings like ‘the field with one element’. David has already written about this.

Indeed, if you read my summary of Durov’s enormous book, you’ll see that one of his generalized rings is nothing but a monad with a certain property!

And, I think my probability monad

$P : Set \to Set$

simply is a generalized ring according to Durov’s definition! Unless I’m confused, it’s even a ‘commutative’ generalized ring!

So, everything David likes is part of the same big picture: the probability monad, the field with one element, the rig of truth values, the rig of costs, and so on. It’s all about linear algebra over Durov’s generalized rings. In particular, using these generalized rings, we can really talk about a ‘generalized ring of probabilities’, even though probabilities must sum to 1 — a condition we could never demand for arbitrary finite sets of elements of a mere rig.

If we’re going to combine probability theory and logic in a really nice way, maybe we should take advantage of this big picture. It may not be philosophically pukka, but it’s certainly pukka.

Posted by: John Baez on September 20, 2007 3:09 AM | Permalink | Reply to this

Re: Progic

In the film “Harvey”, Mr. Wilson looks up the definition of a word pronounced very much like “pukka” in an encyclopedia in the nuthouse reception area. He reads,

“Pooka – from old Celtic mythology. A fairy spirit in animal form, always very large. The pooka appears here and there, now and then, to this one and that one. A benign but mischievous creature, very fond of rumpots, crackpots, and how are you, Mr. Wilson?”

Posted by: Doug Jones on September 20, 2007 6:32 AM | Permalink | Reply to this

Re: Progic

Now David has his revenge. I posted a truly amazing comment explaining how to unify Boolean logic and probability theory in terms of modules of generalized rings, and I get a reply about… the word ‘pukka’!

Posted by: John Baez on September 21, 2007 2:53 AM | Permalink | Reply to this

Re: Progic

Thanks for reminding me about that comment!

One problem I personally have with the longer comments is that sometimes I don’t have time to read them, and then a day or two later when I do have a bit, there is often a whole new slew of comments, and I forget about the old ones.

Many of these comments are really follow-up posts. It might be nice to have some way of promoting big comments so they don’t get lost. Maybe my real problem is that the Firefox RSS setup isn’t as flexible as, say, a mail program, where you can leave messages in your inbox for future reading.

Anyway, my two cents.

Posted by: James on September 21, 2007 4:29 AM | Permalink | Reply to this

Re: Progic

The upside of the many amazingly well thought-out comments in this place is that they’ll provide rich fodder for the n-wikification program…

Posted by: Allan E on September 21, 2007 4:38 AM | Permalink | Reply to this

Re: Progic

A key element of Durov’s framework – that it’s about algebraic monads (section 0.4.4) – is why you had to opt for probability measures of finite support. This would have to be altered for a full treatment of probability theory.

So the question is: how much of what is relevant to probability theory has already been captured by formulating it as a generalized ring?

Something to wonder about is which operations are to be had. We can’t have a straightforward sum or product of probability distributions, since the result wouldn’t sum to 1.

Posted by: David Corfield on September 21, 2007 9:52 AM | Permalink | Reply to this

Re: Progic

David wrote:

So the question is: how much of what is relevant to probability theory has already been captured by formulating it as a generalized ring?

Pretty much everything except the ‘yucky analysis stuff’ where you replace finite sums by integrals. The basic ideas all shine through.

Lest anyone feel the need to argue that probability theory requires analysis and that analysis isn’t yucky, I hasten to agree. I like analysis and I’ve done a lot of it. However, I’ve learned that one can more quickly sketch out interesting category-theoretic ideas by retreating to contexts where analysis doesn’t show up. First lay out the algebraic ‘skeleton’ of what you’re trying to do; then clothe it in the analytic ‘meat’.

For example: make sure you understand a bit about finite-dimensional Hilbert spaces before you tackle the infinite-dimensional ones. Similarly for 2-Hilbert spaces (I’m writing a paper on the infinite-dimensional ones now). I could list lots more examples.

So: there’s surely an analysis-flavored enhancement of Durov’s theory of generalized rings which handles your full-fledged probability monad, Polish spaces and all. But: to outline grand schemes it’s best to retreat to the watered-down probability monad I described.

Something to wonder about is which operations are to be had. We can’t have a straightforward sum or product of probability distributions, since the result wouldn’t sum to 1.

The algebraic monad I described gives you exactly the right $n$-ary operations on probability measures: convex linear combinations! For example, for each $p \in [0,1]$ there’s a binary operation, which given two probability measures $\mu$ and $\nu$ spits out $p \mu + (1-p) \nu$. These are all the binary operations. More generally, there’s an $(n-1)$-simplex of $n$-ary operations.
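These $n$-ary operations are easy to sketch concretely. In the following toy Python fragment, `convex_combine` (my own name) takes a point of the $(n-1)$-simplex and $n$ finitely-supported probability measures, and returns their mixture:

```python
def convex_combine(weights, measures):
    """The n-ary operation indexed by a point (weights) of the (n-1)-simplex."""
    assert all(w >= 0 for w in weights) and abs(sum(weights) - 1.0) < 1e-9
    out = {}
    for w, mu in zip(weights, measures):
        for x, p in mu.items():
            out[x] = out.get(x, 0.0) + w * p
    return out

mu = {'a': 1.0}
nu = {'b': 0.5, 'c': 0.5}
rho = {'a': 0.5, 'c': 0.5}
# A 3-ary operation: a point of the 2-simplex applied to three measures.
mix = convex_combine([0.2, 0.3, 0.5], [mu, nu, rho])
```

The result is again a probability measure: convex combinations never leave the simplex, which is exactly why these, and only these, are the finitary operations of the theory.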

It’s really just when we get to the infinitary operations that the watered-down probability monad I described becomes impoverished compared to the probability monad you described! Mine doesn’t have any infinitary operations. Yours allows you to take any collection of probability measures $\mu_x$ where $x$ ranges over a Polish space $X$, and any probability measure $p$ on $X$, and form a new probability measure on $X$, like this: $\int_X \mu_x \, d p(x)$. It’s fun to think about this stuff, but only later, when you’re ready for analysis. One must learn to add before doing integrals!

This is how it always goes — which is one thing that slows down the applications of category theory to probability and all branches of analysis. Analysis requires an ultra-macho version of category theory that can only come after one has mastered the ordinary stuff; most people who like analysis never even learn the ordinary stuff.

Posted by: John Baez on September 21, 2007 12:57 PM | Permalink | Reply to this

Re: Progic

Let me just add that the monad I described:

$P : Set \to Set$

sending any set $X$ to the set of formal linear combinations

$P X = \left\{ \sum_i p_i x_i : x_i \in X,\ p_i \ge 0,\ \sum_i p_i = 1,\ \text{only finitely many } p_i \text{ nonzero} \right\}$

is often known as the monad for convex sets. Its algebras are convex sets.

The free convex set on an $n$-element set is a simplex with $n$ vertices. Annoyingly, this is called the $(n-1)$-simplex. (Note that the $(-1)$-simplex is the empty set.)

By general abstract nonsense about monads coming from algebraic theories (known as algebraic monads), the $(n-1)$-simplex is also the set of $n$-ary operations in the algebraic theory for convex sets.

There’s also an algebraic monad for affine spaces, where one drops the condition $p_i \ge 0$. Here instead of the $(n-1)$-simplex we get an $(n-1)$-dimensional affine space.

Posted by: John Baez on September 21, 2007 1:34 PM | Permalink | Reply to this

Re: Progic

Perhaps we should follow up David Mumford’s proposal for a stochastic predicate calculus.

It should have the syntax of standard predicate calculus except that we have two kinds of variables in it: the ordinary predicates and constants and quantifiable free variables $x$ but also a set of random constants $\underline{x}$. In addition, it comes with a truth value function $p$ mapping all formulas $F$ without free variables to real numbers between $0$ and $1$. If the formula $F$ has only ordinary variables in it, then $p(F) \in \{0, 1\}$. Formal semantics for this theory would make the random constants functions on probability spaces so that a formula would define a subset of the product of these spaces, hence have a probability. (pp. 11–12)

Posted by: David Corfield on September 21, 2007 12:56 PM | Permalink | Reply to this

Re: Progic

David wrote:

Perhaps we should follow up David Mumford’s proposal for a stochastic predicate calculus.

I have a feeling his proposal is related to the stuff I’m talking about, but I can’t quite put my finger on the connection. I’d need to understand the relation between Boolean algebra and the predicate calculus and then generalize it to the probabilistic case.

Here’s my first try — watch me crash and burn:

Propositional logic is summarized by the ‘free Boolean algebra’ monad. The predicate calculus goes a wee bit further. Lawvere worked out exactly how. We start with a Boolean algebra $B_n$ of $n$-ary predicates for each finite set $n$. Then we say there’s a ‘variable substitution’ operation

$B_f : B_m \to B_n$

for each function $f: m \to n$. For example, if $f$ is the unique function $f: 2 \to 1$, then $B_f$ sends any 2-ary predicate $P(x_1, x_2)$ to the 1-ary predicate $P(x_1, x_1)$. At this point we might as well admit $B$ is a functor from $FinSet$ to $BoolAlg$. Then, we get quantifiers as adjoints to the substitution operations — that was Lawvere’s stroke of genius:

  • F.W. Lawvere, Adjointness in foundations, Dialectica, 23 (1969), 281–296

Now we want to generalize all this to the ‘stochastic’ world!

To do this, I’d like to see the ‘free Boolean algebra’ monad as a special case of the algebraic monads I’ve been talking about, and figure out how to generalize everything I just sketched to an arbitrary algebraic monad.

Then, if we replaced the ‘free Boolean algebra’ by the ‘free convex set’ monad, we might get a stochastic predicate calculus.

But at this point I realize something is seriously screwed up with my plan! The algebraic theory for convex sets doesn’t contain the ‘and’, ‘or’ and ‘not’ operations we had in Boolean algebras. It only has the convex linear combination operations!

In fact, I now see I was completely confused. Boolean algebras are sets of propositions, while convex sets are sets of states. We perform logical operations on propositions, but linear combination (or ‘mixture’) operations on states. Why was I trying to combine them??? They’re ‘dual’ in some way.

Posted by: John Baez on September 21, 2007 2:29 PM | Permalink | Reply to this

Re: Progic

Hmm, is the logic part of this right? I mean do you really talk about

$n$-ary predicates for each finite set $n$.

I thought the idea was to choose a cartesian closed base category like Sets, then look at all predicates over an object in it.

So if we have $X$ the set of dogs, predicates typed over $X$ are unary, like being a poodle. Typed $n$-ary predicates correspond to a product of $n$ sets.

The adjunction process starts from observing that a function $f: X \to Y$ spits out a predicate of $X$ for every predicate of $Y$. So let $Y$ be the set of people, and $f$ be the function which sends a dog to its owner. (Each dog is owned by one owner.) Then the predicate ‘is French’ gets sent to the predicate ‘is owned by a Frenchman’.

Let’s try to work out the adjunctions. If from being a poodle it follows that you were owned by a Frenchman, then being an owner of a poodle implies you are French. So this functor sends dog predicates to human predicates ‘owner of a – dog’.

Now, the other one. If from being owned by a Frenchman, it followed that you were a poodle, then from being a Frenchman, it follows that if you own a dog then it’s a poodle. So this functor sends dog predicates to human predicates ‘owner only of – dogs’ (which includes owning no dogs).
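This dogs-and-owners example can be computed directly for a toy universe. A sketch in Python (the data, including the dog-less person `carol` who exhibits the vacuous case of the right adjoint, is invented for illustration):

```python
dogs = {'rex', 'fifi', 'bruno'}
people = {'alice', 'pierre', 'carol'}          # carol owns no dogs
owner = {'rex': 'pierre', 'fifi': 'pierre', 'bruno': 'alice'}
poodle = {'rex', 'fifi'}                       # a predicate on dogs

def substitute(pred_on_people):
    """'is French' on people becomes 'is owned by a Frenchman' on dogs."""
    return {d for d in dogs if owner[d] in pred_on_people}

def exists_f(pred_on_dogs):
    """Left adjoint: 'owner of a - dog' (owns at least one such dog)."""
    return {p for p in people
            if any(owner[d] == p and d in pred_on_dogs for d in dogs)}

def forall_f(pred_on_dogs):
    """Right adjoint: 'owner only of - dogs' (vacuously true for non-owners)."""
    return {p for p in people
            if all(d in pred_on_dogs for d in dogs if owner[d] == p)}
```

Here `exists_f(poodle)` picks out Pierre (he owns a poodle), while `forall_f(poodle)` picks out both Pierre and Carol, since Carol's nonexistent dogs are all poodles, matching the parenthetical "(which includes owning no dogs)".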

Does anything similar happen with the probability functor?

Posted by: David Corfield on September 22, 2007 5:46 PM | Permalink | Reply to this

Re: Progic

Just one little thing for now. I was talking about the ‘untyped’ (= monotyped) predicate calculus, where to specify the ‘arity’ of a predicate we just need a number $n$. Lawvere realized that the nice way to do this is via a functor

$B : FinSet \to BoolAlg$

This eliminates the heavy reliance on syntax that dominates the old-fashioned approach — ‘variables’, ‘quantifiers’, ‘well-formed formulas’ and all that junk.

$B$ sends any finite set $n$ to the set $B_n$ of $n$-ary predicates in your theory, which form a Boolean algebra under the usual logical operations. And, it sends any function $f: n \to m$ to a Boolean algebra homomorphism $B_f: B_n \to B_m$ which describes ‘substitution of variables’ in the old approach. I gave an example above, but here’s another: if $f: 3 \to 2$ is the function with $f(0) = 1,\ f(1) = 0,\ f(2) = 0$, then $B_f$ maps any 3-ary predicate $P(x_0, x_1, x_2)$ to the 2-ary predicate $P(x_1, x_0, x_0)$. This is clearly a Boolean algebra homomorphism.
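A quick sketch of $B_f$ with predicates modelled as Python functions (the representation and the name `B` are my own): substitution is just precomposition, feeding variable slot $i$ the argument $x_{f(i)}$.

```python
def B(f, n, P):
    """Substitution along f: n -> m, sending an n-ary predicate P to an m-ary one."""
    return lambda *xs: P(*(xs[f[i]] for i in range(n)))

# The function f: 3 -> 2 from the text: f(0)=1, f(1)=0, f(2)=0.
f = {0: 1, 1: 0, 2: 0}
P = lambda a, b, c: a == b       # a 3-ary predicate P(x0, x1, x2)
Q = B(f, 3, P)                   # the 2-ary predicate Q(x0, x1) = P(x1, x0, x0)
```

Since `B(f, n, -)` acts slot-by-slot and leaves the Boolean connectives alone, it automatically commutes with ‘and’, ‘or’ and ‘not’, which is the homomorphism property.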

The nice part is that a Boolean algebra is itself a specially nice sort of category (since it’s a poset with implication as $\le$), and a Boolean algebra homomorphism is a specially nice sort of functor (since it preserves implication). So, it makes sense to demand that

$B_f : B_n \to B_m$

has both a left and a right adjoint. And, this demand then gives you universal and existential quantifiers!

If this is old hat to you, sorry. If it’s not, I leave it as a puzzle to work out what the left adjoint of $B_f$ should be like (in the simplest examples), and whether this is universal or existential quantification. Ditto for the right adjoint.

Jim has been telling me about this stuff forever, so I thought I’d finally put it to good use in an attempt to blend probability theory and the predicate calculus into some form of ‘progic’. I still think it should be helpful. However, my first attempt was a belly flop.

Posted by: John Baez on September 22, 2007 7:07 PM | Permalink | Reply to this

Re: Progic

Ok, so you’re telling a similar story to mine, but with a single untyped universe, $U$, of entities. So you have $n$-ary predicates defined over $U^n$. And the maps you deal with go from $U^m$ to $U^n$.

Hmm, do you need that trick of finding a $U$ for which $U^U \equiv U$ to get implication working (‘Adjointness in Foundations’, p. 11)? Aren’t typed logics nicer?

But the more important point is that when we now think about probabilities, it’s not really a distribution over entities in some universe we care about. All that probability monad stuff concerns precisely that. So a morphism between $X$ and $Y$ in the Kleisli category is a function from $X$ to $P Y$, otherwise known as a conditional distribution $P(y | x)$. If $X = \{*\}$, we have a distribution over $Y$.
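Composition in this Kleisli category can be sketched in a few lines of Python (the weather example and the name `kleisli` are invented): composing two conditional distributions marginalizes over the intermediate variable, $P(z|x) = \sum_y P(z|y)\, P(y|x)$.

```python
def kleisli(g, f):
    """Compose conditional distributions: (g after f)(z | x) = sum_y g(z|y) f(y|x)."""
    def h(x):
        out = {}
        for y, p in f(x).items():
            for z, q in g(y).items():
                out[z] = out.get(z, 0.0) + p * q
        return out
    return h

# f: weather -> distribution over ground states, g: ground -> distribution over outcomes.
f = lambda x: {'wet': 0.8, 'dry': 0.2} if x == 'rain' else {'wet': 0.1, 'dry': 0.9}
g = lambda y: {'slip': 0.3, 'ok': 0.7} if y == 'wet' else {'slip': 0.0, 'ok': 1.0}
h = kleisli(g, f)
dist = h('rain')
```

This is just stochastic-matrix multiplication wearing categorical clothes, and normalization is preserved automatically because each fibre of `f` and `g` sums to 1.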

However, to achieve progic’s aims, I don’t think we’re looking to model distributions over, say, a set of dogs, as though there’s some specific dog, and a chance that it’s Rex, Fido, or Rover. I think it’s more a case of a set of dogs and probabilistic predicates, say, ‘being a poodle’, such that for each dog there’s a number between $0$ and $1$ corresponding to our belief that that dog is a poodle.

So Pr(poodle(Rex)) = 0.4, Pr(poodle(Fifi)) = 0.8, etc. In other words, instead of a map from $X$ to $\{0, 1\}$, we have a map from $X$ to $[0, 1]$.

Now, unlike in the case of an ordinary predicate, knowing these probabilistic predicates for each member of $X$ doesn’t tell us what happens on $X^n$. So there’s no reason to have Pr(poodle(Fifi) and poodle(Rex)) = Pr(poodle(Fifi)) $\times$ Pr(poodle(Rex)). Why assume independence? I might know that Rex and Fifi are owned by the same person, and finding out that Rex is a poodle will increase my belief that Fifi is too.
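A tiny numerical example makes the point, using the marginals above (the joint distribution is invented, chosen only to be consistent with Pr(poodle(Rex)) = 0.4 and Pr(poodle(Fifi)) = 0.8):

```python
# Joint beliefs about (Rex is a poodle, Fifi is a poodle): correlated, not independent.
joint = {(True, True): 0.38, (True, False): 0.02,
         (False, True): 0.42, (False, False): 0.18}

p_rex  = sum(p for (r, f), p in joint.items() if r)   # marginal for Rex
p_fifi = sum(p for (r, f), p in joint.items() if f)   # marginal for Fifi
p_both = joint[(True, True)]

# The joint probability is NOT the product of the marginals (0.38 vs 0.32),
# so the marginals alone underdetermine what happens on X^n.
```

Many different joints share these marginals, which is exactly why a probabilistic predicate on $X$ fails to pin down probabilities on $X^n$.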

Jon Williamson’s approach is to seek maximally uncertain distributions which satisfy all known constraints.

Posted by: David Corfield on September 23, 2007 12:11 PM | Permalink | Reply to this

Re: Progic

This has surely got to be a useful observation:

As a final example of a hyperdoctrine, we mention the one in which types are finite categories and terms arbitrary functors between them, while $P(A) = S^A$, where $S$ is the category of finite sets and mappings, with substitution as the special Godement multiplication. Quantification must then consist of generalized limits and colimits… By focusing on those $A$ having one object and all morphisms invertible, one sees that this hyperdoctrine includes the theory of permutation groups; in fact, such $A$ are groups and a “property” of $A$ is nothing but a representation of $A$ by permutations. Quantification yields “induced representations” and implication gives a kind of “intertwining representation”. Deductions are of course equivariant maps. (Adjointness in Foundations, p. 14)

Posted by: David Corfield on September 23, 2007 3:56 PM | Permalink | Reply to this
Read the post Progic II
Weblog: The n-Category Café
Excerpt: More on merging probability theory and logic
Tracked: September 25, 2007 9:48 AM
Read the post Progic IV
Weblog: The n-Category Café
Excerpt: More on unifying probability theory and logic
Tracked: October 9, 2007 9:15 AM
Read the post What Can Category Theory Do For Philosophy?
Weblog: The n-Category Café
Excerpt: A possible philosophical meeting on category theory
Tracked: December 6, 2012 9:56 AM
