

November 27, 2008

Coalgebraically Thinking

Posted by David Corfield

Having come across Peter Freyd’s coalgebraic characterisation of the reals, I’ve been spending some time of late trying to understand coalgebras. Where an algebra for an endofunctor $F: C \to C$ is an object $A$ of $C$ together with an arrow $\alpha : F(A) \to A$, a coalgebra for $F$ requires an object $A$ with an arrow $\beta : A \to F(A)$.
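To fix the two notions, here is a minimal sketch (my own, for the simplest interesting functor $F(X) = 1 + X$, with `None` standing for the unique element of $1$): an algebra evaluates $F$-structure into its carrier, while a coalgebra observes the carrier as $F$-structure.

```python
def algebra(fx):
    """alpha : F(N) -> N for F(X) = 1 + X.
    None (the element of 1) is sent to zero; n is sent to its successor."""
    return 0 if fx is None else fx + 1

def coalgebra(n):
    """beta : N -> F(N): observe a number as either 'nothing' or a predecessor."""
    return None if n == 0 else n - 1

# On the natural numbers these two maps are mutually inverse, reflecting
# the fact that an initial algebra's structure map is an isomorphism.
assert all(algebra(coalgebra(n)) == n for n in range(5))
```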

I’ve traced out what seemed to me an interesting path. First I stumbled upon Bart Jacobs’s book Introduction to Coalgebra: Towards Mathematics of States and Observations. This I’d thoroughly recommend. Let me give you some nuggets from it:

The duality with algebras forms a source of inspiration and of opposition: there is a “hate-love” relationship between algebra and coalgebra. (p. v)

As already mentioned, ultimately, stripped to its bare minimum, a programming language involves both a coalgebra and an algebra. A program is an element of the algebra that arises (as so-called initial algebra) from the programming language that is being used. Each language construct corresponds to certain dynamics, captured via a coalgebra. The program’s behaviour is thus described by a coalgebra acting on the state space of the computer. (p. v)

There are many similarities (or dualities) between algebras and coalgebras which are often useful as guiding principles. But one should keep in mind that there are also significant differences between algebra and coalgebra. For example, in a computer science setting, algebra is mainly of interest for dealing with finite data elements – such as finite lists or trees – using induction as main definition and proof principle. A key feature of coalgebra is that it deals with potentially infinite data elements, and with appropriate state-based notions and techniques for handling these objects. Thus, algebra is about construction, whereas coalgebra is about deconstruction – understood as observation and modification. (p. 47)

A rule of thumb is: data types are algebras, and state-based systems are coalgebras. But this does not always give a clear-cut distinction. For instance, is a stack a data type or does it have a state? In many cases however, this rule of thumb works: natural numbers are algebras (as we are about to see), and machines are coalgebras. Indeed, the latter have a state that can be observed and modified. (pp. 47-8)

Now, just as initial algebras are a vital part of mathematics, so a key property for coalgebras is finality.

Initial algebras are special, just like final coalgebras. Initial algebras (in Sets) can be built as so-called term models: they contain everything that can be built from the operations themselves, and nothing more. Similarly, we saw that final coalgebras consist of observations only. (p. 48)

If a functor has an initial algebra or a final coalgebra, then that (co)algebra’s carrier is isomorphic to its image under the functor (this is Lambek’s lemma). Indeed, an initial algebra is like a minimal fixed point of the functor, whereas a final coalgebra is like a maximal fixed point.
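For the initial-algebra case the standard argument behind this isomorphism is short:

```latex
% Lambek's lemma: if (A, \alpha : F(A) \to A) is an initial F-algebra,
% then \alpha is an isomorphism.
% Since (F(A), F(\alpha)) is also an F-algebra, initiality gives a unique
% algebra map f : A \to F(A), i.e. f \circ \alpha = F(\alpha) \circ F(f).
% Then \alpha \circ f : A \to A is an algebra map from the initial algebra
% to itself, so by initiality
\alpha \circ f = \mathrm{id}_A ,
% and hence
f \circ \alpha = F(\alpha) \circ F(f) = F(\alpha \circ f) = F(\mathrm{id}_A) = \mathrm{id}_{F(A)} .
```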

Take the functor $F(X) = 1 + A \times X$, for a fixed set $A$. Searching for the smallest fixed point, we put in as little as possible. So we build up from the empty set, taking any existing set of lists to a new set which includes the empty list and all lists generated by adding an element of $A$ to the old lists. All finite lists of $A$-elements are generated this way and nothing else, so these lists form the elements of the initial algebra for $F$. But a larger set is fixed by $F$, namely, the set of $A$-streams, or finite and infinite lists of $A$-elements. This in fact is the final coalgebra.
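A hedged Python sketch of the gap between the two fixed points (names and encoding are my own): a coalgebra for $F(X) = 1 + A \times X$ is a step function returning either `None` (stop) or a pair (element, next state), and unfolding it lazily yields finite or infinite lists.

```python
from itertools import islice

def unfold(step, seed):
    """Anamorphism into the final coalgebra of F(X) = 1 + A x X:
    step(seed) is either None (stop) or a pair (a, next_seed)."""
    while (obs := step(seed)) is not None:
        a, seed = obs
        yield a

# Finite behaviour: counting down produces an element of the initial algebra.
countdown = lambda n: None if n == 0 else (n, n - 1)
print(list(unfold(countdown, 3)))              # [3, 2, 1]

# Infinite behaviour: the final coalgebra also contains streams like this one,
# which is why it is strictly larger than the initial algebra of finite lists.
constant = lambda s: (s, s)
print(list(islice(unfold(constant, 'a'), 4)))  # ['a', 'a', 'a', 'a']
```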

Coalgebras are about observation. We can think of our $F$ as observing about an entity whether it contains something $A$-detectable or not, and if so which element of $A$ it detects. Having observed something, it modifies it. The final coalgebra has as elements all possible outcomes of the behaviour you might observe. Do you still have list elements? If ever no, we have a finite list. If always yes, we have an infinite list. And there’s no other behaviour that can be detected.

Now, this talk of maximality put me in mind of a chapter in a book edited by Donald Gillies, my PhD supervisor, called Revolutions in Mathematics. In ‘A Restoration that Failed’, Herbert Breger argues that where Hilbert presented his style of mathematics as the established norm, it was in fact a very novel way of proceeding. Apparently Finsler (of Finsler geometry fame) promoted an older style of mathematics, where one captures what is taken to be an already existing system by writing a list of properties satisfied by that system, such that the system is maximal with respect to the properties.

When it comes to sets then, rather than worry about the set theoretic paradoxes by controlling their construction, building them bottom up from the empty set and various constructions, instead Finsler defines the system of sets to be that collection which is maximal with respect to certain properties. Breger tells us the story of how by 1928 Finsler’s approach was thoroughly misunderstood, allowing Reinhold Baer to dismiss Finsler’s set theory in a four page paper as inconsistent. Baer showed that any system satisfying Finsler’s axioms can be extended, hence that there is no maximal such system. But in doing so he is using different rules of set formation from Finsler.

To Zermelo, Fraenkel, Baer, and others, the idea of definite property is inseparably connected with the notion of set, the underlying philosophy being that well-formed definitions create the object.

Finsler is left to face a lonely battle, and we hear about him still writing in 1969, the year before his death, on how the continuum hypothesis is still open in his set theory, in what he calls ‘classical mathematics’. The neglected Finsler doesn’t even have a Wikipedia entry, and the MacTutor history is as brief as possible.

But there’s no need to feel too sorry for Finsler as his ideas, or something similar, did later see the light of day. In his Non-well-founded Sets, Peter Aczel describes a range of ways of developing a set theory which does not obey well-foundedness, the condition that membership chains must be finite.

If this, as Finsler puts it, is “classical mathematics”, perhaps it is odd that Aczel was inspired by computer science:

The original stimulus for my own interest in the notion of a non-well-founded set came from a reading of the work of Robin Milner in connection with his development of a mathematical theory of concurrent processes. (p. xix)

On the other hand, perhaps we could say that computer scientists are dealing with already existing systems – the program-running computers before them.

Certainly we hear the sound of maximality in Aczel’s book:

Thus, in the case of the axiom system for the natural numbers, the extremal axiom is the principle of mathematical induction, which is a minimalisation axiom, as it expresses that no objects can be subtracted from the domain of natural numbers while keeping the truth of the other axioms. The axiom systems for Euclidean Geometry and the real numbers involve on the other hand completeness axioms. These are maximalisation axioms; i.e. they express that the domain of objects cannot be enlarged while preserving the truth of the other axioms. (p. 106)

Aczel proposes his own anti-foundation axiom, AFA, and discusses it in relation to other such axioms, including FAFA (for Finsler) and SAFA (for Dana Scott). He notes that

It is surprising that it has taken over 50 years for this “success” to come about, whereas Fraenkel only had to wait a handful of years. It is worth recording here that Finsler’s axiom system uses a notion of isomorphism of sets which is different to the one introduced by Mirimanoff. If he had used Mirimanoff’s notion the resulting anti-foundation axiom would have been what I have called Scott’s AFA. (p. 107)

But what has this to do with coalgebra? Well there’s a project called algebraic set theory. In his introduction to the field, Steve Awodey writes:

The new insight taken as a starting point in AST is that models of set theory are in fact algebras for a suitably presented algebraic theory, and that many familiar set theoretic conditions (such as well-foundedness) are thereby related to familiar algebraic ones (such as freeness).

So what about the coalgebras? Well, consider the functor from the category of classes of sets to itself, which sends a class $A$ to the class of subclasses of $A$ which are sets. This restriction must be made, otherwise there would be no fixed points. Now, this functor has as initial algebra the class of sets and as final coalgebra the class of non-well-founded sets. An excellent article here is Rutten and Turi’s On the Foundations of Final Semantics: Non-Standard Sets, Metric Spaces, Partial Orders.
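The final-coalgebra view can be made concrete in a small sketch (my own illustration, not taken from Aczel or Rutten–Turi): under AFA a possibly non-well-founded set is a pointed graph, and two such sets are equal exactly when their graphs are bisimilar, a condition computed as a greatest fixed point.

```python
def bisimilar(g1, n1, g2, n2):
    """Coinductive equality of pointed graphs: start from the maximal
    relation and refine it until it stabilises (a greatest fixed point).
    Graphs are dicts mapping each node to the list of its children."""
    rel = {(a, b) for a in g1 for b in g2}
    changed = True
    while changed:
        changed = False
        for (a, b) in list(rel):
            ok = (all(any((x, y) in rel for y in g2[b]) for x in g1[a])
                  and all(any((x, y) in rel for x in g1[a]) for y in g2[b]))
            if not ok:
                rel.discard((a, b))
                changed = True
    return (n1, n2) in rel

# Omega = {Omega}, drawn two different ways, is one and the same set under AFA:
omega1 = {'o': ['o']}
omega2 = {'p': ['q'], 'q': ['p']}
print(bisimilar(omega1, 'o', omega2, 'p'))      # True
# ... but it is distinct from the (well-founded) empty set:
print(bisimilar(omega1, 'o', {'e': []}, 'e'))   # False
```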

So now I’m left wondering what would be possible were we to follow Finsler and opt for a more coalgebraic approach to mathematics. There’s plenty of coalgebraic thinking in mathematics, and you can do no better than to pay a visit to Tom Leinster’s coalgebraic topology for examples, including the category of strict $\infty$-categories as a coalgebra for an enrichment functor on finite product categories.

Then Pavlovic and Escardó tell us about Calculus in Coinductive Form:

Coinduction is often seen as a way of implementing infinite objects. Since real numbers are typical infinite objects, it may not come as a surprise that calculus, when presented in a suitable way, is permeated by coinductive reasoning. What is surprising is that mathematical techniques, recently developed in the context of computer science, seem to be shedding a new light on some basic methods of calculus. We introduce a coinductive formalization of elementary calculus that can be used as a tool for symbolic computation, and geared towards computer algebra and theorem proving. So far, we have covered parts of ordinary differential and difference equations, Taylor series, Laplace transform and the basics of the operator calculus.
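Their idea can be glimpsed in a small sketch (my own rendering, not the authors’ code): represent an analytic function by the stream of its Taylor coefficients $a_n = f^{(n)}(0)/n!$, so the coalgebra map sends $f$ to the pair $(f(0), f')$, and differentiation becomes a shift-and-scale operation on streams.

```python
from itertools import islice, count
from fractions import Fraction

def exp_stream():
    """Taylor coefficients of e^x: a_n = 1/n!."""
    a, n = Fraction(1), 0
    while True:
        yield a
        n += 1
        a /= n

def derivative(coeffs):
    """If f = sum a_n x^n, then f' = sum (n+1) a_{n+1} x^n:
    drop the head of the stream and rescale the tail."""
    next(coeffs)                       # discard a_0 = f(0)
    for n, a in zip(count(1), coeffs):
        yield n * a

print(list(islice(exp_stream(), 4)))               # [1, 1, 1/2, 1/6]
# (e^x)' = e^x, coinductively: the derivative stream equals the original.
print(list(islice(derivative(exp_stream()), 4)))   # [1, 1, 1/2, 1/6]
```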

But do algebraic blinkers prevent us from doing more coalgebraic thinking? Jacobs tells us that

Coalgebra is still in its infancy.

When it grows up it is expected to help unite

the theory of differential equations with automata and process theory.

While we’re waiting we can enjoy playing the Game of Life coalgebraically.
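For instance, one can present a Life step as the structure map of a coalgebra on the state space of configurations (a sketch under my own encoding of a state as the set of its live cells; repeatedly applying the map unfolds the behaviour stream).

```python
from collections import Counter

def step(live):
    """One Game of Life tick: the coalgebra map on states, here sending a
    set of live (x, y) cells to the next set of live cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is live next tick if it has 3 live neighbours,
    # or 2 live neighbours and is currently live.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# The blinker oscillates with period 2:
blinker = {(0, 0), (1, 0), (2, 0)}
print(step(blinker) == {(1, -1), (1, 0), (1, 1)})  # True
print(step(step(blinker)) == blinker)              # True
```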

Posted at November 27, 2008 4:23 PM UTC



Re: Coalgebraically Thinking

Peter Freyd’s coalgebraic characterisation of the reals

Interesting. This is something I should try to better understand.

I definitely need to find a copy of

Peter J. Freyd: Path Integrals, Bayesian Vision, and Is Gaussian Quadrature Really Good? Electr. Notes Theor. Comput. Sci. 29: (1999)

In physics one notices that globally hyperbolic Lorentzian manifolds naturally carry the structure of a poset (where for points $x$, $y$ we have $x \leq y$ if and only if “$y$ is in the future of $x$”) and notices that this underlying poset structure remembers a surprising amount of information: every map between globally hyperbolic Lorentzian manifolds which induces a functor on the underlying posets is automatically a conformal isometry of Lorentzian spaces.

In other words: the poset structure on a Lorentzian manifold remembers everything about its pseudo-Riemannian geometry except the volume density.

In this sense, posets which have a top and bottom element, i.e. which have one point that is in the future of all other points, and one point which is in the past of all other ones, are in particular given by the causal subsets

$$\array{ & & (1,1) \\ & \nearrow && \nwarrow \\ (1,0) &&O&& (0,1) \\ & \nwarrow && \nearrow \\ && (0,0) } \,,$$

on which the co-presheaves of observable algebras of the quantum field theory on our Lorentzian spacetime are defined.

Observations like this have made some people play with the idea that all causal subsets of spacetime might have finite cardinality (third item in the definition of a causal set/cause here).

But in any case Peter Freyd’s theorem tells us that the continuum is nicely encoded in the poset language itself:

the endofunctor $Duplicate : Posets_{t\neq b} \to Posets_{t \neq b}$ on posets with distinct top and bottom element which takes a causal subset $O$ and attaches it to itself, $O \mapsto O \vee O$

$$\array{ & & \\ & \nearrow && \nwarrow \\ &&O&& \\ & \nwarrow && \nearrow \\ && \\ & \nearrow && \nwarrow \\ &&O&& \\ & \nwarrow && \nearrow \\ && }$$

has as “maximal fixed point”, i.e. as final coalgebra, the interval $\array{ \uparrow }$ with the continuum ($\simeq \mathbb{R}$) of points in it.
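One can sketch why the interval is the maximal fixed point here (my own illustration, not Freyd’s actual construction): the wedge $O \vee O$ glues the top of one copy to the bottom of the other, so iterating the coalgebra map on a point of $[0,1]$ repeatedly picks a half and rescales, producing its binary expansion; the glueing point $1/2$ accounts for the identification $0.0111\ldots = 0.1000\ldots$.

```python
from itertools import islice

def unfold_point(x):
    """Iterate the coalgebra map [0,1] -> [0,1] v [0,1]: at each step
    record which half the point lies in, then rescale that half to [0,1].
    (Dyadic inputs stay exact in binary floating point.)"""
    while True:
        if x <= 0.5:
            yield 0
            x = 2 * x          # rescale the lower half
        else:
            yield 1
            x = 2 * x - 1      # rescale the upper half

# 0.25 = 0.00111... in binary under this (glued) unfolding:
print(list(islice(unfold_point(0.25), 4)))  # [0, 0, 1, 1]
```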

That’s nice.

Posted by: Urs Schreiber on November 27, 2008 8:29 PM | Permalink | Reply to this

Re: Coalgebraically Thinking

Yes, Freyd’s paper – Path Integrals, Bayesian Vision, and Is Gaussian Quadrature Really Good? – sounds fascinating:

Physicists know how to integrate over all possible paths, computer-vision experts want to assign probabilities to arbitrary scenes, and numerical analysts act as if some continuous functions are more typical than others. In these three disparate cases, a more flexible notion of integration is being invoked than is possible in the traditional foundations for mathematics. If allowed to enter a highly speculative mode, such as the intersection of category theory and computer science, we may bump into some solutions to the problem.

But it doesn’t seem to have been written up. It only takes up a single page in the journal.

Posted by: David Corfield on November 28, 2008 9:34 AM | Permalink | Reply to this

Re: Coalgebraically Thinking

Algebraic or categorical set theory as in

van den Berg, Moerdijk: A unified approach to algebraic set theory

sounds fascinating, but I am confused about the way it is supposed to get started:

I understand the idea seems to be to regard categories somehow as more fundamental than sets. But how do we say “category” before first having said “set”?

From p. 5 of the above article it seems one first says “class” then “category” then “set”. Is that so?

Posted by: Urs Schreiber on November 27, 2008 9:13 PM | Permalink | Reply to this

Re: Coalgebraically Thinking

I don’t think the idea is to regard categories as more fundamental than sets. I think the idea is more to elucidate the structure of models of certain forms of set theory (ZF), as arising from initial algebras of a certain form. I’m not an expert on this, but here’s my take on it.

Really roughly speaking, one starts with an ambient category of “classes” $C$, with enough structure to permit one to interpret a certain amount of first-order logic inside, and which also comes equipped with a notion of “smallness” in the form of a “small-power-set” functor

$$P_s : C \to C$$

which intuitively maps a class $c$ to the class of “small subclasses” [or let us just say “subsets” for short] of $c$. Under some appropriate assumptions, it is shown that an initial algebra of $P_s$ (which would be a class $V$ isomorphic to $P_s(V)$, the class of all subsets of $V$) plays the role of a “cumulative hierarchy” of sets that can be used to precisely model the ZF axioms, or at least an intuitionistic version of ZF.

[Technically, the category of “classes” should be a pretopos: that gives enough structure with which to internalize first-order logic. It helps me to think of it as saying that a certain number of set-theoretic operations are perfectly “safe” for classes as well. Heck, the category of classes $C$ could be a topos as well, but of course we can’t go too crazy – for example, we can’t then go around contemplating an initial algebra for the power-class functor $P : C \to C$, otherwise we’d get a fictitious universe $v$ such that $v \cong P v$, violating Cantor’s theorem. We can however contemplate an initial algebra for a small-power-set functor $P_s$, and thereby speak of a universe or cumulative hierarchy of “sets”, closed under $P_s$.]
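As a toy illustration of this initial-algebra picture (mine, not from the article): iterating the power-set operation from the empty set builds the finite stages of the cumulative hierarchy in miniature.

```python
from itertools import chain, combinations

def powerset(s):
    """All subsets of s, as frozensets (so stages can nest)."""
    return {frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))}

V = [set()]                      # V_0 = the empty set
for _ in range(3):
    V.append(powerset(V[-1]))    # V_{n+1} = P(V_n)

print([len(stage) for stage in V])  # [0, 1, 2, 4]
```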

Anyway, I for one think one has to be able to say “collection” before one can say “category”, but that’s not what this is about. Rather, it’s using categorical ideas to get a deeper understanding of the structure of formal ZF, demystifying ZF if you will and laying it bare as a certain form of algebra.

Posted by: Todd Trimble on November 28, 2008 12:31 AM | Permalink | Reply to this

Re: Coalgebraically Thinking

Thanks, Todd. I probably know too little about this foundational stuff to really appreciate what’s going on.

My naive impression had been that to get started at the beginning mathematics has to pull itself up by its own bootstraps (in the German original: by its own hair ;-) out of the swamp of imprecise thinking and somehow define what a “collection” is and which axioms that is supposed to fulfill.

In your comment you seem to imply that one can talk about classes before having set up set theory. How does this work? When I look up definitions of “classes” I see that they are formulated in “the context of set theory”, where probably my understanding of that is not good enough to appreciate some subtle differences.

So, concretely, you say:

one starts with an ambient category of “classes” CC, with enough structure to permit one to interpret a certain amount of first-order logic inside

Can you give more details about how this works? Can the quotation marks around “classes” be removed?

Suppose I made you write a survey article “Foundations of mathematics from a modern topos-theoretic point of view”, how would you start?

Sorry if these questions sound dumb. I’d really like to better understand this.

Another thing I’d like to know is: Jacob Lurie in his “Higher topos theory” advertises working internal to “accessible categories” as a means to avoid most set-theoretical issues. I have only a very rough level of understanding of that, too.

Posted by: Urs Schreiber on November 28, 2008 10:45 AM | Permalink | Reply to this

Re: Coalgebraically Thinking

Suppose I made you write a survey article “Foundations of mathematics from a modern topos-theoretic point of view”, how would you start?

No need to make him – he’s already done so, followed up by this.

Posted by: David Corfield on November 28, 2008 11:03 AM | Permalink | Reply to this

Re: Coalgebraically Thinking

he’s already done so, followed up by this.

These are beautiful expositions. If any series of blog posts ever deserved to be reprinted as a book, this does.

I also looked a bit at Sets for mathematics, which, I suppose, is an exposition of this very approach.

And I am very happy with this way of thinking of sets and set theory through the category of sets and its properties. I very much appreciate one of the main points that Todd emphasizes, namely that conceiving membership of elements in sets in terms of morphisms into sets heals one of the main deficiencies of classical set theory.

So if this is the point here, that sets, like any other structure, are best studied in terms of properties of the category they form, I am following.

But I still feel a bit confused, even then: when Todd (or Lawvere-Rosebrugh, I suppose) speaks about the category of sets which is supposed to define what sets are, he uses language like “given two morphisms xyz there is a morphism abc such that we have a universal morphism rst” and the like.

Isn’t that, at its basis, circular? In order to say what such language means, don’t we already need to have an idea of what sets are? Or “collections” at least?

I know one has to start somewhere and that starting to talk about mathematics probably has to involve this circularity at some point. We need to pull ourselves up by our bootstraps out of the swamp.

So I am wondering about what the state of the art of thinking about this bootstrap process is. From Todd’s account and from what I have seen in Lawvere-Rosebrugh (just the first chapters), is it right that the attitude is this:

we assume there is a “naive notion of collections” which we don’t try to axiomatize. We assume “everybody knows” what it means to say things like “given this” and “there is” and “exists uniquely” etc.

Using such language, we then describe a certain category and then declare: this is the category of sets.

Is that what we do?

Posted by: Urs Schreiber on November 28, 2008 12:02 PM | Permalink | Reply to this

Re: Coalgebraically Thinking

This is getting closer to how I look at it. Okay, here’s how I look at it: the word “set” or “collection” is never actually defined (and if we tried, we’d certainly run into the problem of the dictionary, that is, the problem that defining words using other words is bound to be circular). So, for instance, when you use a phrase like “model of set theory”, you have to refer to something like “a collection VV together with a binary relation \in on VV such that the following axioms are satisfied…”.

What a set theory like ZF does is give a formal language which is sufficiently rich to express all the things mathematicians would like to be able to do with “sets” in practice. [This shift from ontological questions (what sets are) to more operational ones (how are sets used) is a very important one in my mind.]

It should be said that a formal theory like ZF doesn’t use words like “set”: it just puts down first-order logical formulas (with one non-logical symbol, \in, not itself defined but axiomatized) to express the essential properties of things like an infinite set, power sets, and so on. (Naturally, good books on the subject will include some handwaving ancillary comments about “sets”, much as computer code is interlaced with comments addressed to the user.) As a backdrop to this theory, there is first-order logic itself, which the user of the theory must somehow learn the rules for, and there too there can be instances of built-in circularity, as when one uses the word “and” to discuss how the symbol \wedge works. But the problem is not too serious, since no mathematician worthy of the name ever misunderstands how the rules work in practice [even if that understanding is partly subconscious]. We can even computerize the rules through clever use of logic gates in circuits.

Same with categories-based theories of sets, such as Lawvere’s ETCS which I’ve been blogging about. Once one has mastered the underlying logical apparatus of categories (which you certainly don’t need a full-fledged set theory like ZF for, since the first-order definition of “category” is pretty simple: just look at the opening pages of Freyd & Scedrov’s book Categories, Allegories for instance), you can begin writing down axioms for “the category of sets” [as in ETCS] which pretty much cover any mathematical use you’d like to make of sets.

Not to belabor the point, but one doesn’t need a full-fledged Theory of the Category of categories (as Lawvere and a few others have tried to write down) before using the first-order notion of category as preamble to category-based set theory. Such a Theory would try to include axioms which encompass all the various uses categories are put to, and would include existential axioms like existence of certain functor categories, but that’s asking far more than working familiarity with the notion of category.

Posted by: Todd Trimble on November 28, 2008 2:41 PM | Permalink | Reply to this

Elementary theories

when [one] speaks about the category of sets which is supposed to define what sets are, [one] uses language like “given two morphisms xyz there is a morphisms abc such that we have a universal morphism rst” and the like.

Isn’t that, at its basis, circular? In order to say what such language means, don’t we already need to have an idea of what sets are? Or “collections” at least?

No, absolutely not! Do you see words like ‘set’ and ‘collection’ there? No, you only see words like ‘given’ and ‘there is […] such that’, which indicate quantification in first-order logic. That is, you do not need a set theory (or collection theory or whatever), only a logic.

This is true even for Zermelo-Fränkel set theory, which says things like ‘given a and b there is c such that a ∈ c and b ∈ c’ (the axiom of pairing). So ETCS is no more circular than ZF.

This is the situation in which one can justifiably use the word ‘elementary’. Thus, the elementary theory of the category of sets needs no external set theory, just a predicate logic. And you need only a logic to define an elementary topos (as opposed to the earlier concept of Grothendieck topos, which requires a set theory). Perhaps more familiarly, you can do elementary group theory (which has axioms like ‘given a there is b such that ab = 1’) and prove some theorems (such as ‘if all elements have order 2 then commutativity holds’) but not others (such as ‘a group of prime order is cyclic’, which requires an external natural-number arithmetic to state and prove, although still not a full-fledged set theory). Really, ETCS is not much more involved than that.
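The order-2 theorem mentioned here is indeed a purely equational first-order deduction; spelled out:

```latex
% If every element satisfies x^2 = 1, then x = x^{-1} for all x, hence
a b \;=\; (a b)^{-1} \;=\; b^{-1} a^{-1} \;=\; b a ,
% so the group is commutative.
```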

Posted by: Toby Bartels on November 28, 2008 10:34 PM | Permalink | Reply to this

Re: Elementary theories

No, you only see words like ‘given’ and ‘there is […] such that’, which indicate quantification in first-order logic. That is, you do not need a set theory (or collection theory or whatever), only a logic.

I see what you mean. However, if I try hard enough I can still get myself confused about this:

When we say “there is” here, don’t we really mean: “there is in the collection of morphisms of the category of Sets which we are in the process of defining”?

I mean, for instance we don’t want to say just generally “given two morphisms with coinciding codomain, their pullback exists”. We want to say that this is true in the collection of morphisms of $Sets$.

So ETCS is no more circular than ZF.

Okay. I was maybe hoping for a moment it would be less circular. But all right.

And I don’t want to be a pain here, but since I am not an expert on these matters, I have further questions:

From the way it is built up, it seems now that large categories are more fundamental than small ones:

if I understand you all correctly, the idea is that one assumes the definition of the category $Sets$ of sets is something we just all happen to understand by grace and which cannot be further questioned without running into circularities.

Given that, we will next define a small category to be a category internal to $Sets$, I suppose.

This just makes me wonder, because usually when we hear at conferences somebody talk about categories, somebody from the audience will interrupt and ask “but your category is not small, so everything is much more involved than you suggest”, and the speaker will usually reply “sure, but, as usual, I will assume that these set-theoretical issues can be dealt with by universe-enlargement, or by invoking accessible categories or by playing some other boring tricks”.

Why is that happening if it’s the large categories which we “understand by grace” and the small ones which require axiomatization??

(I am playing devil’s advocate here. I am not really puzzled by these questions. On the other hand, I wouldn’t mind if some day I could feel that I really know what’s going on in these subtle matters.)

Posted by: Urs Schreiber on November 29, 2008 1:52 PM | Permalink | Reply to this

Re: Elementary theories

On the other hand, I wouldn’t mind if some day I could feel that I really know what’s going on in these subtle matters.

Don’t know if this has already been mentioned in this thread, but have you read Mike Shulman’s Set theory for category theory?

Abstract: Questions of set-theoretic size play an essential role in category theory, especially the distinction between sets and proper classes (or small sets and large sets). There are many different ways to formalize this, and which choice is made can have noticeable effects on what categorical constructions are permissible. In this expository paper we summarize and compare a number of such “set-theoretic foundations for category theory,” and describe their implications for the everyday use of category theory. We assume the reader has some basic knowledge of category theory, but little or no prior experience with formal logic or set theory.

Posted by: Bruce Bartlett on November 29, 2008 2:18 PM | Permalink | Reply to this

Re: Elementary theories

Urs writes:

When we say “there is” here, don’t we really mean: “there is in the collection of morphisms of the category of Sets which we are in the process of defining”.

It may not matter what we “really mean” — Toby is talking about formal mathematics here, not psychology!

One of the great advantages of Hilbert’s strategy of formalizing mathematics is that it lets us separate the question “what are the rules of the game?” from the question “what do we think the game means?” Both questions are important, but the first one is much easier, so it’s nice to be able to treat it separately.

When trying to formalize mathematics, people often use first-order logic as an infrastructure for setting up more complicated theories like set theory, category theory and so on. First-order logic is a formalism that includes the usual logical connectives (‘and’, ‘or’, ‘not’, and ‘implies’ are already more than enough), variables which can be quantified over (using \forall and \exists), and predicates which cannot be quantified over. Typically we include equality as one of the predicates. Finally, there are a few ‘deduction rules’ that say how we can prove things.

First-order logic lets us write down various axioms of set theory. But, we can also go ahead and use first-order logic to write down axioms for category theory or topos theory, skipping the set theory!

This was one of Lawvere’s great realizations: we don’t need to do category theory on top of set-theoretic foundations.

Sadly, this realization was not embraced by many ‘traditional’ logicians — namely, those who held set theoretic foundations as sacred. There was a heated debate, a lot of misunderstanding, and a sad split in the community of logicians.

Luckily, we are blessed here by the presence of Toby and Todd, who are experts on both sides of logic: the ‘traditional’ and the ‘category-theoretic’ approaches.

Posted by: John Baez on November 29, 2008 11:04 PM | Permalink | Reply to this

Re: Elementary theories

When we say “there is” here, don’t we really mean: “there is in the collection of morphisms of the category of Sets which we are in the process of defining”?

Not if you don’t want to. In fact, you don’t have to mean anything about collections, morphisms, or categories at all! Secretly we know what we want these things to mean (the intended semantics), but the actual list of axioms itself makes no reference to this. You may know that David Hilbert said (perhaps apocryphally) ‘It must be possible to replace in all geometric statements the words point, line, plane by table, chair, beer mug.’ So while we know that these are supposed to be functions between sets, and this may inspire our choice of axioms, the axioms themselves make no reference to this fact. You know this, but what you may not have noticed is that they also make no reference to the fact that these functions are supposed to form a collection (if you even believe that they do).

By the way, as the reference to Hilbert should remind us, we are not defining the category of Sets; we are only axiomatising it!

[I]f I understand you all correctly, the idea is that one assumes the definition of the category Sets of sets is something we just all happen to understand by grace and which cannot be further questioned without running into circularities.

Maybe the guy that came up with the axioms understands the category Sets by grace, but there is no reason why we all should! All that we need to know about sets to use them is that the axioms hold, not by grace but by fiat: here are the axioms, now what follows? And it’s quite possible to question the axioms further (say, to consider whether one axiom follows from the others and is therefore irrelevant, or whether the axioms might in fact be inconsistent, or whether all of the theorems that we’re interested in might follow from a weaker system of axioms, and so on) if we wish, although we certainly don’t have to. What we can’t do, of course, is to try to prove that the axioms are correct for the actual category Sets — unless of course we already have some other set theory (say, ZF) and want to see if these axioms (say, of ETCS) hold for that other set theory (which in this case, they do). Or, of course, if we have some intuitive concept of sets, we can see if the axioms seem correct for that, but then there is no question of proof.

You do need to understand something by grace, however: first-order classical logic, the language in which both ETCS and ZF are written. (Actually, you can analyse that a bit further, in a metalogic in which only the so-called ‘negative’ logical operators —⊤, ∧, →, ∀— are used. But I don’t know any way to proceed without those.)

From the way it is built up, it seems now that large categories are more fundamental than small ones:

No, no!

Given that, we will next define a small category to be a category internal to Sets, I suppose.

You can if you want to, and with a little more work you can define a large category and even show how Sets is large and cannot be small.

But tell me this: Until you accept that Sets (as axiomatised, let’s assume, by ETCS) is the basis for defining terms like ‘small’ and ‘large’, why do you say that Sets is large!? These terms ‘small’ and ‘large’ only make sense relative to some standard of size, some particular set theory that tells us what they mean.

Before you have a set theory, and in particular while you are writing down the axioms of ETCS, the most fundamental kind of category is a metacategory: a system of objects and arrows satisfying the axioms of a category, without any assumption as to how many of these there are, whether they form a collection, or even how to state such questions. (Actually it is possible to state in first-order logic such things as ‘There are exactly 5 objects that have exactly 7 endomorphisms apiece.’, but that doesn’t get you very far.) Then once you have some notion of arithmetic (full-fledged set theory is not yet necessary), you get the concepts of finite and infinite category (as well as things like ‘There are exactly a prime number of objects that have exactly a perfect number of endomorphisms apiece.’). And once you have a set theory, you get the concepts of small and large category (or even countable and uncountable category). But until you adopt some set theory to define ‘small’ and ‘large’, your categories are neither. (By default, I use the simple word ‘category’ even for a metacategory, since that is the fundamental concept.)

This just makes me wonder, because usually when we hear somebody at a conference talk about categories, somebody from the audience will interrupt and ask “but your category is not small, so everything is much more involved than you suggest”, and the speaker will usually reply “sure, but, as usual, I will assume that these set-theoretical issues can be dealt with by universe-enlargement, or by invoking accessible categories or by playing some other boring tricks”.

If the speaker is only discussing that one category, then they can simply reply that they are discussing a metacategory, and concepts of size do not apply. (But if the objection is that the category actually is large, then there should be some set theory relative to which the category in question is large, and if the speaker is using this, then a more involved answer is necessary.)

Posted by: Toby Bartels on November 29, 2008 11:26 PM | Permalink | Reply to this

Re: Elementary theories

Thanks to everyone for the comments above. I found them really helpful, though I’m still far from comfortable with these things.

Posted by: James on November 30, 2008 12:34 AM | Permalink | Reply to this

Re: Coalgebraically Thinking

Here’s a nice introduction to terminal coalgebras, from a talk by Eugenia Cheng at the CT08 conference.

(Incidentally, I don’t usually hear people talk about ‘final’ rather than ‘terminal’ objects — except in the case of coalgebras. I have no idea how this situation came about. Just in case anyone’s confused: final == terminal.)

Eugenia’s talk was about the definition of weak n-category proposed by Todd Trimble in 1999. (See here or here, or Eugenia’s talk, for details.) Todd’s definition only applied for finite n. However, Eugenia and I discovered that by using terminal coalgebras, you can extend the definition to n = \infty.

This is an example of a general principle on initial algebras vs. terminal coalgebras, more or less mentioned by David. One often thinks of the initial algebra for an endofunctor as being the collection of all ‘finite’ objects, and the terminal coalgebra as the collection of all ‘possibly infinite’ objects.

Example  The endofunctor X \mapsto X + 1 of \mathbf{Set} has \mathbb{N} as its initial algebra (‘finite’ natural numbers), and \mathbb{N} \cup \{\infty\} as its terminal coalgebra (‘possibly infinite’ natural numbers).
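As a programming aside, here is a toy Python sketch of my own (the names are mine, not standard): for the functor F(X) = X + 1, folding over the initial algebra \mathbb{N} is ordinary recursion on finite data, while a coalgebra can only be observed one step at a time, and may run forever.

```python
from typing import Callable, Optional, TypeVar

S = TypeVar("S")

def fold_nat(n: int, alg: Callable[[Optional[int]], int]) -> int:
    """Induction over the initial algebra N: interpret 'zero' and 'succ'."""
    acc = alg(None)          # the 'zero' case of X + 1
    for _ in range(n):
        acc = alg(acc)       # the 'succ' case
    return acc

def observe(state: S, coalg: Callable[[S], Optional[S]], fuel: int) -> Optional[int]:
    """Coinduction over N ∪ {∞}: count predecessor steps, up to `fuel`.
    Returning None models an element that looks infinite so far."""
    for steps in range(fuel):
        nxt = coalg(state)
        if nxt is None:
            return steps     # a finite conatural number
        state = nxt
    return None              # never stopped: possibly ∞

double = fold_nat(5, lambda x: 0 if x is None else x + 2)            # 10
finite = observe(3, lambda s: None if s == 0 else s - 1, fuel=100)   # 3
omega = observe(0, lambda s: s + 1, fuel=100)                        # None: ∞
```

The point of the `fuel` parameter is exactly the asymmetry in the example: a finite conatural number is eventually observed to stop, but an infinite one can never be distinguished from “hasn’t stopped yet” by finitely many observations.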

There are many other similar examples. The point is that taking terminal coalgebras allows us to ‘leap to \infty’.

Let me mention one more example of this phenomenon, as it’s relevant to the Freyd continuum thing.

Let \mathbf{C} be the category of bipointed spaces, by which I mean topological spaces X equipped with two distinct, closed basepoints x_-, x_+. You can wedge together any two bipointed spaces X, Y to form a new bipointed space X \vee Y. So there’s an endofunctor X \mapsto X \vee X of \mathbf{C}.

Fact One (a version of Freyd’s theorem):  its terminal coalgebra is the space [0, 1] with the endpoints as basepoints.

Fact Two (easier):  its initial algebra is the set of dyadic rationals in [0, 1], again with 0 and 1 as basepoints.

So the initial algebra consists of all numbers in [0, 1] with finite binary expansions, 0.b_1 b_2 \ldots b_n, whereas the terminal coalgebra consists of all numbers in [0, 1] with any (possibly infinite) binary expansion, 0.b_1 b_2 \ldots.
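Here is a toy Python illustration of the two facts (my own sketch, not Freyd’s actual construction): observing which half of the wedge a point of [0, 1] lies in yields its binary digits coinductively, while running the process only finitely far and reading back gives exactly the dyadic rationals.

```python
def digits(x: float, n: int) -> list:
    """Unfold the coalgebra structure on [0, 1] for F(X) = X ∨ X:
    each step says which copy of [0, 1] we land in, i.e. one binary digit."""
    out = []
    for _ in range(n):
        if x < 0.5:
            out.append(0)
            x = 2 * x        # rescale the first half back to [0, 1]
        else:
            out.append(1)
            x = 2 * x - 1    # rescale the second half back to [0, 1]
    return out

def from_digits(bits: list) -> float:
    """Initial-algebra direction: a finite digit list is a dyadic rational."""
    x = 0.0
    for b in reversed(bits):
        x = (b + x) / 2
    return x

assert digits(0.625, 3) == [1, 0, 1]      # 0.625 = 0.101 in binary
assert from_digits([1, 0, 1]) == 0.625
```

(The examples use dyadic rationals precisely so that floating point stays exact.)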

Posted by: Tom Leinster on November 28, 2008 12:26 AM | Permalink | Reply to this

Re: Coalgebraically Thinking

Can we get all the reals into the picture here, rather than just [0, 1]? I see Pavlovic and Pratt characterise the [0, 1) interval as the final coalgebra with respect to both

X \to \mathbb{N} \times X,

and

X \to 1 + \mathbb{N} \times X.

They also speculate about an intriguing analogy

induction : arithmetic :: coinduction : analysis.

It looks like something necessarily can’t go smoothly for the reals. As above with Tom’s binary expansions, there’s the

0.011111... = 0.1

problem.

Pratt and Pavlovic explain how the continuum is not coinductively continuous with respect to addition and multiplication.

Posted by: David Corfield on November 28, 2008 9:00 AM | Permalink | Reply to this

Re: Coalgebraically Thinking

Can we get all the reals into the picture here, rather than just [0, 1]?

It’s an interesting question. In some work on self-similarity, I showed that any given compact metrizable space has a terminal-coalgebra characterization. This can be regarded as a big extension of Freyd’s result. But \mathbb{R} is not, of course, compact.

You can cheat. [0, 1] is homeomorphic to the extended real line [-\infty, \infty]. In that sense, we already have a characterization of the extended reals. What’s the coalgebra structure on [-\infty, \infty]? Well, any endpoint-preserving bijection [-\infty, \infty] \to [-\infty, \infty] \vee [-\infty, \infty] will do, but perhaps the most natural choice is x \mapsto \begin{cases} -\log(-x) \in \text{first copy of } [-\infty, \infty], & \text{if } x \leq 0 \\ \log x \in \text{second copy of } [-\infty, \infty], & \text{if } x \geq 0. \end{cases} I don’t know how useful that observation is.

Posted by: Tom Leinster on November 28, 2008 3:49 PM | Permalink | Reply to this

Re: Coalgebraically Thinking

The question regarding the real line reminds me of something that’s been bugging/intriguing me since PSSL83: is there a similar characterization of R not just as a topological space, but as a group? The characterization of [0,1] gives a categorical point of view on the space L1[0,1]; what would be very nice is a categorical perspective on the algebra L1(R)…

Posted by: Yemon Choi on November 28, 2008 6:03 PM | Permalink | Reply to this

Re: Coalgebraically Thinking

Yemon, I recently gave another talk in which I mentioned that initial-algebra characterization of L^1[0, 1] (the one you saw at PSSL83), and someone asked the same thing: what about the algebra structure? But I’m completely confused. What’s the algebra structure on L^1? It can’t be pointwise multiplication, because (for instance) the function x \mapsto \begin{cases} 1/\sqrt{x} & \text{if } x \in (0, 1) \\ 0 & \text{otherwise} \end{cases} is in L^1, but its pointwise square isn’t.

Posted by: Tom Leinster on November 29, 2008 8:12 PM | Permalink | Reply to this

Re: Coalgebraically Thinking

Hi Tom,
Yes, you’re right that L1[0,1] isn’t an algebra wrt pointwise product. I was referring to the structure that L1(R) possesses as a convolution algebra. That is, suppose f and g are continuous, scalar-valued functions on R with compact support, and define f*g by the usual formula for convolution

(f*g)(t) = \int_s f(s) g(-s+t)\, ds

Then this extends to give a continuous algebra product on L1(R). (Aside: the same construction works for any locally compact Hausdorff group G, so that every such G gives rise to an algebra L1(G). This is almost a covariant functor from groups to algebras, but not quite, unless we restrict ourselves to certain kinds of morphisms…)

Now: the characterization of L1[0,1] as `initial in a suitable sense’ seems to be a contravariant version of the characterization of [0,1] as `terminal in a suitable sense’. So we are using the rule of thumb (spaces)op <—> algebras. But if we want to look at L1(R) as an algebra, this lives in a world where you want to be covariant in the base space… and this is my confusion.

Actually, I’ve just thought of a variant on the question. Suppose that we fix a locally compact group G, and now try to G-enrich the category of Banach-widgets, i.e. give all objects a G-action and require all morphisms to be G-maps. Would we still get an initial object?

(Puzzle for the unoccupied: what might be my motivation for asking this?)

Posted by: Yemon Choi on November 30, 2008 2:03 AM | Permalink | Reply to this

Re: Coalgebraically Thinking

Thanks, Yemon. I understand the question now. Unfortunately, I’m not even close to an answer.

Posted by: Tom Leinster on December 1, 2008 5:27 AM | Permalink | Reply to this

Re: Coalgebraically Thinking

Regarding my question about a coalgebraic representation of all the reals, according to Peter Freyd’s Real Algebraic Analysis:

To this date [2008 - DC], no one has found
a functor whose final coalgebra is usefully the reals.

But then

For reasons that can easily be considered abstruse we were led to the belief that the closed interval–not the entire real line–is the basic structure of interest.

Posted by: David Corfield on September 1, 2009 11:28 AM | Permalink | Reply to this

Re: Coalgebraically Thinking

Just to clarify something here: from what I saw on those old 1999 messages on the categories list, Pavlovic and Pratt characterize the interval [0, 1) as the terminal coalgebra of the endofunctor on posets given by ordinal product with \omega, that is, with the functor

X \mapsto \mathbb{N} \times X

but where the right side is given the dictionary order, not the usual product order. If you use the product order, then the terminal coalgebra is the exponential \mathbb{N}^\mathbb{N} (as in Example 1 of those nice notes of the talk by Eugenia Cheng and Tom Leinster that Tom mentioned).

(Incidentally, I wasn’t able to access that paper by Pavlovic and Pratt. Is the link correct?)

Posted by: Todd Trimble on November 28, 2008 5:29 PM | Permalink | Reply to this

Re: Coalgebraically Thinking

Can the p-adic (integers or numbers) be understood coalgebraically, not just in terms of their topology but also their arithmetic?

Posted by: David Corfield on November 28, 2008 9:20 AM | Permalink | Reply to this

Re: Coalgebraically Thinking

Interesting question. The p-adic integers \hat{\mathbb{Z}}_p certainly have that kind of coalgebraic flavor to me; here is a guess about how that might work at least additively.

Working with the co-slice category

C = \mathbb{Z} \downarrow Ab

define an endofunctor given by taking the coproduct with the object \mathbb{Z} \hookrightarrow \frac{1}{p}\mathbb{Z}. As an abelian group, this coproduct would just be the pushout construction

A \mapsto A \oplus_{\mathbb{Z}} \frac{1}{p}\mathbb{Z}.

I am conjecturing that the additive group of p-adic integers is the terminal coalgebra for this endofunctor. Roughly, a coalgebra structure

f: A \to A \oplus_{\mathbb{Z}} \frac{1}{p}\mathbb{Z}

can be used to generate, for each a \in A, a stream of p-based digits in \frac{1}{p}\mathbb{Z} \bmod \mathbb{Z}, which I am hoping would glue coherently to form a p-adic integer, thus producing a coalgebra map

A \to \hat{\mathbb{Z}}_p.

Posted by: Todd Trimble on November 28, 2008 6:52 PM | Permalink | Reply to this

Re: Coalgebraically Thinking

Before anyone wastes any time trying to make sense of the preceding comment, I’d better admit that it’s embarrassingly flawed. I’ll see if I can repair it.

Posted by: Todd Trimble on November 29, 2008 1:31 AM | Permalink | Reply to this

Re: Coalgebraically Thinking

Hm, now that I’ve slept on it, the original idea I had for constructing the p-adic integers as a terminal coalgebra doesn’t seem so bad after all. There is a repair to be made in what I said earlier, but the rough idea would be just to follow Adamek’s theorem cited in the notes of Eugenia’s talk, with fingers crossed.

So: Adamek’s theorem says that if C is a category with a terminal object 1, then a terminal coalgebra for an endofunctor F: C \to C exists, provided that the limit of

\ldots \to F^{n+1} 1 \stackrel{F^n !}{\to} F^n 1 \to \ldots \to F 1 \stackrel{!}{\to} 1

exists and F preserves it.

Again, taking C to be the co-slice \mathbb{Z} \downarrow Ab, the terminal object has as its underlying abelian group the zero group. Repairing slightly what I suggested before, let us take the endofunctor

F: \mathbb{Z} \downarrow Ab \to \mathbb{Z} \downarrow Ab

to be the functor “push out along the arrow p \cdot - : \mathbb{Z} \to \mathbb{Z}”. (That is, if a category A has pushouts, then for any morphism f: x \to y in A there is a functor from the co-slice under x to the co-slice under y, and here we are taking an instance of that.)

Then F1 is the surjection \mathbb{Z} \to \mathbb{Z}_p. One may check by induction that F^n ! is the standard surjection

\mathbb{Z}_{p^{n+1}} \to \mathbb{Z}_{p^n}.

The limit of the Adamek diagram is then the p-adic integers \hat{\mathbb{Z}}_p (clearly that’s true at the abelian group level, and then we use the easy general fact that the underlying functor x \downarrow A \to A creates limits).

The only thing left to check is that F preserves this limit. Indeed, the pushout of the exact sequence

0 \to \mathbb{Z} \stackrel{p \cdot -}{\to} \mathbb{Z} \to \mathbb{Z}_p \to 0

is

0 \to \hat{\mathbb{Z}}_p \stackrel{p \cdot -}{\to} \hat{\mathbb{Z}}_p \to \mathbb{Z}_p \to 0.

There is a morphism from this sequence to

0 \to \mathbb{Z}_{p^n} \to \mathbb{Z}_{p^{n+1}} \to \mathbb{Z}_p \to 0

where the first arrow is the standard projection \pi_n: \hat{\mathbb{Z}}_p \to \mathbb{Z}_{p^n} and the third arrow is the identity 1: \mathbb{Z}_p \to \mathbb{Z}_p. The middle arrow of this morphism of sequences, which by definition is

F(\pi_n): F(\hat{\mathbb{Z}}_p) \to F(\mathbb{Z}_{p^n}),

does indeed match the standard projection

\pi_{n+1}: \hat{\mathbb{Z}}_p \to \mathbb{Z}_{p^{n+1}}

and I believe we are done. As a slight extra bonus, one could rewrite the entire argument replacing Ab by the category of locally compact Hausdorff abelian groups, and get the compact abelian group of p-adics as a terminal coalgebra.
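As a computational footnote (my own toy Python sketch, not part of Todd’s argument): an element of the limit \hat{\mathbb{Z}}_p is just a coherent sequence of residues along the projections \mathbb{Z}_{p^{n+1}} \to \mathbb{Z}_{p^n}.

```python
def coherent_residues(rule, p: int, depth: int) -> list:
    """Compute residues rule(n) mod p**n for n = 1..depth, and check that
    they are compatible under the projections Z/p^(n+1) -> Z/p^n,
    i.e. that they really name a single point of the limit."""
    res = [rule(n) % p ** n for n in range(1, depth + 1)]
    for n in range(1, depth):
        # the projection must send the (n+1)-st residue to the n-th
        assert res[n] % p ** n == res[n - 1], "not a coherent sequence"
    return res

# -1 as a 5-adic integer: residue 5**n - 1 at each stage (digits 4, 4, 4, ...)
minus_one = coherent_residues(lambda n: -1, p=5, depth=4)   # [4, 24, 124, 624]

# an ordinary integer also determines a point of the limit
seven = coherent_residues(lambda n: 7, p=5, depth=4)        # [2, 7, 7, 7]
```

This is just the classical inverse-limit description made executable; the coalgebraic content above is the observation that the tower arises by iterating a single endofunctor F.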

Posted by: Todd Trimble on November 29, 2008 4:50 PM | Permalink | Reply to this

Re: Coalgebraically Thinking

Cool!

Posted by: Tom Leinster on November 29, 2008 8:18 PM | Permalink | Reply to this

Re: Coalgebraically Thinking

Mac Lane gives a construction of the p-adics using the limit of a diagram that I think is equivalent to yours (taking Z mod p^{n+1} down to Z mod p^n) in the category of rings as an example of creation of limits. (Bottom of page 110 of Mac Lane’s “Categories for the Working Mathematician”, second edition.)

Posted by: Robert Furber on December 4, 2008 3:01 AM | Permalink | Reply to this

Re: Coalgebraically Thinking

That’s right, and that was the seed diagram that got me started. From there, it was more or less the problem of figuring out how that diagram could be seen to be of the form

\ldots \to F^{n+1} 1 \stackrel{F^n !}{\to} F^n 1 \to \ldots \to F1 \stackrel{!}{\to} 1

i.e., what functor F: C \to C (on which category C) do we use so that the n-th projection \mathbb{Z}_{p^{n+1}} \to \mathbb{Z}_{p^n} arises by applying F to the (n-1)-st projection.

Unfortunately, I didn’t see how to do this in the category of rings (or something like the category of rings) – that is, I didn’t see how to get the ring of p-adic integers as a terminal coalgebra.

However, I’ll bet one can exploit the terminal coalgebra structure I did find to recover the ring multiplication

m: \hat{\mathbb{Z}}_p \otimes \hat{\mathbb{Z}}_p \to \hat{\mathbb{Z}}_p

coalgebraically, by putting a suitable coalgebra structure on the tensor product (so that m comes about as the unique coalgebra map to the terminal coalgebra). There is in fact a coalgebra structure on the tensor product which I claim is pretty obvious. Then to check that m is the coalgebra map, one thing to do would be to check that its restriction along the coalgebra inclusion

\mathbb{Z} \otimes \mathbb{Z} \to \hat{\mathbb{Z}}_p \otimes \hat{\mathbb{Z}}_p

factors as

\mathbb{Z} \otimes \mathbb{Z} \stackrel{m}{\to} \mathbb{Z} \to \hat{\mathbb{Z}}_p.

I’m hoping that’s enough, by working in the topologized setting mentioned at the end of my prior post, and using a uniform continuity argument plus the fact that the integers in the p-adic topology are dense in the p-adic integers. Well, I’m writing this at a strange time in the morning for me, so I should really look into this carefully when I’m properly awake.

Posted by: Todd Trimble on December 4, 2008 10:38 AM | Permalink | Reply to this

Re: Coalgebraically Thinking

The Prüfer group is the Pontryagin dual of the p-adics. Does that duality force the group to be an initial algebra?

Or is this a case of formal duality and concrete duality failing to coincide, as Lawvere and Rosebrugh discuss.

Posted by: David Corfield on December 4, 2008 11:17 AM | Permalink | Reply to this

Re: Coalgebraically Thinking

I believe the argument dualizes perfectly: the initial algebra for the functor

F: \mathbb{Z} \downarrow LCHAb \to \mathbb{Z} \downarrow LCHAb

that pushes out along multiplication by p: \mathbb{Z} \to \mathbb{Z} is the Prüfer group, constructed as the colimit of

0 \to F(0) \to \ldots \to F^n(0) \stackrel{F^n !}{\to} F^{n+1}(0) \to \ldots

In fact, the main hypothesis of the dual of Adamek’s theorem becomes even simpler to verify: F preserves this colimit because it is left adjoint to pulling back along multiplication by p.

I still don’t know how to get the topological group \mathbb{Q}_p of p-adics as an initial algebra or final coalgebra in some way (noting in passing that it is Pontryagin dual to itself).

Posted by: Todd Trimble on December 4, 2008 1:03 PM | Permalink | Reply to this

Re: Coalgebraically Thinking

One often thinks of the initial algebra for an endofunctor as being the collection of all ‘finite’ objects, and the terminal coalgebra as the collection of all ‘possibly infinite’ objects.

If you want necessarily infinite objects, then you use a functor F such that F(0) = 0, so that there are no finite objects.

Examples:

  • For x ↦ Ax + 1, the initial algebra consists of lists (finite sequences) of elements of A;
  • For x ↦ Ax + 1, the final coalgebra consists of streams (finite or infinite sequences) of elements of A;
  • For x ↦ Ax, the initial algebra is empty;
  • For x ↦ Ax, the final coalgebra consists of (infinite) sequences of elements of A.
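Toby’s examples are easy to play with in Python (a toy sketch of mine): a single step function, returning either None or a pair (a, s′), is exactly a coalgebra for F(X) = A × X + 1, and unfolding it yields a finite or infinite sequence.

```python
from itertools import islice

def unfold(step, state):
    """Generate the behaviour of the coalgebra `step`: at each state s,
    step(s) is either None (stop) or a pair (a, s2) (emit a, continue)."""
    while True:
        result = step(state)
        if result is None:
            return
        a, state = result
        yield a

# finite behaviour: a countdown list
countdown = list(unfold(lambda s: None if s == 0 else (s, s - 1), 3))   # [3, 2, 1]

# infinite behaviour: the stream of naturals; we can only ever sample it
naturals = unfold(lambda s: (s, s + 1), 0)
sample = list(islice(naturals, 5))                                      # [0, 1, 2, 3, 4]
```

For the functor x ↦ Ax (no “+ 1”), a step function can never return None, which mirrors Toby’s point that the initial algebra is empty while the final coalgebra consists of necessarily infinite streams.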
Posted by: Toby Bartels on November 28, 2008 11:15 PM | Permalink | Reply to this

Re: Coalgebraically Thinking

Tom told us about Freyd’s wedging functor for bipointed spaces that:

Fact One (a version of Freyd’s theorem): its terminal coalgebra is the space [0, 1] with the endpoints as basepoints.

Fact Two (easier): its initial algebra is the set of dyadic rationals in [0, 1], again with 0 and 1 as basepoints.

Now I just happen to be reading Week 175 and came across John telling us about the hyperfinite II_1 factor:

Start with the algebra of 1 \times 1 matrices, and stuff it into the algebra of 2 \times 2 matrices as follows:

x \mapsto \begin{pmatrix} x & 0 \\ 0 & x \end{pmatrix}

This doubles the trace, so define a new trace on the algebra of 2 \times 2 matrices which is half the usual one. Now keep doing this, doubling the dimension each time, using the above formula to define a map from the 2^n \times 2^n matrices into the 2^{n+1} \times 2^{n+1} matrices, and normalizing the trace on each of these matrix algebras so that all the maps are trace-preserving. Then take the union of all these algebras… and finally, with a little work, complete this and get a von Neumann algebra!

One can show this von Neumann algebra is a factor. It’s pretty obvious that the trace of a projection can be any fraction in the interval [0, 1] whose denominator is a power of two. But actually, any number from 0 to 1 is the trace of some projection in this algebra - so we’ve got our paws on a type II_1 factor.

This raises the obvious question. Does the kind of completion going on between Freyd’s initial algebra and terminal coalgebra have anything to do with what John described as a completion taking “a little work”?

Posted by: David Corfield on October 22, 2009 4:15 PM | Permalink | Reply to this

Re: Coalgebraically Thinking

Interesting. There’s been some related discussion here.

Posted by: Tom Leinster on October 22, 2009 6:58 PM | Permalink | Reply to this

Re: Coalgebraically Thinking

This theme has come up before: to what extent is the mathematics we use in physics a reflection of what can be measured about the system, rather than about the system itself? Of course, you might want to say there’s nothing more to a system than the totality of measurements which can be made of it.

In A Tutorial on (Co)Algebras and (Co)Induction, Jacobs and Rutten give the example of

a machine with two buttons \langle value, next \rangle : X \to A \times X, with autonomous activity in time… The possible observations that we can make of the situation are of the form: after pushing the next button n \in \mathbb{N} times with intervals \alpha_1, \ldots, \alpha_n, we see a certain value in A after an interval of length \beta.

The space of observations will be a certain terminal coalgebra.

The question I’m vaguely trying to feel for is whether certain mathematical constructions arise as holders for all possible observations.

I wonder how quantum systems might be seen in this light.

Posted by: David Corfield on November 28, 2008 11:31 AM | Permalink | Reply to this

Re: Coalgebraically Thinking

to what extent is the mathematics we use in physics a reflection of what can be measured about the system, rather than about the system itself

[…]

I wonder how quantum systems might be seen in this light.

I haven’t thought about this in terms of coalgebras. When we discussed the Isham-Döring and Heunen-Spitters approach # my impression had been that the main underlying point they promoted is:

a physical theory is a “pointed topos”, namely a topos E with a singled-out object \Sigma, called the “space of states”.

States of the system are morphisms \psi : 1 \to \Sigma and possible observations are morphisms P : \Sigma \to \Omega. The degree to which it is true that P holds on the state \psi is given by the morphism 1 \stackrel{\psi}{\to} \Sigma \stackrel{P}{\to} \Omega.

Of course that’s just the standard underlying idea of toposes, retold using physics terminology.

Posted by: Urs Schreiber on November 28, 2008 12:17 PM | Permalink | Reply to this

Re: Coalgebraically Thinking

“to what extent is the mathematics we use in physics a reflection of what can be measured about the system, rather than about the system itself”

meaning?? non-quantitative aspects?
how do you get a hold on `the system itself’??

Posted by: jim stasheff on November 29, 2008 10:36 PM | Permalink | Reply to this

Re: Coalgebraically Thinking

From the computer science point of view, this is the idea of the black box. Maybe all I can find out about the workings of some black box is by repeatedly pressing a button and reading off a value from a set A.

If that’s all I can possibly know, I may say that my state space is the set of infinite streams of elements of A, with ‘dynamics’ given by \langle head, tail \rangle.
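In code (my own toy sketch): the black box is a state together with the \langle head, tail \rangle observations, and two boxes with different internal state spaces can be observationally equivalent (bisimilar).

```python
def behaviour(state, head, tail, n: int) -> list:
    """Sample the first n observable values of a stream coalgebra
    given by <head, tail> : X -> A x X."""
    out = []
    for _ in range(n):
        out.append(head(state))   # press the button, read a value in A
        state = tail(state)       # the hidden state evolves
    return out

# Two boxes with different internal state spaces (integers vs. booleans)...
box1 = behaviour(0, head=lambda s: s % 2, tail=lambda s: s + 1, n=6)
box2 = behaviour(False, head=lambda s: int(s), tail=lambda s: not s, n=6)

# ...but identical observable behaviour: the stream 0, 1, 0, 1, ...
assert box1 == box2 == [0, 1, 0, 1, 0, 1]
```

From the outside, no finite sequence of button presses can tell the two boxes apart, which is exactly the point of taking streams (the terminal coalgebra) as the state space of observations.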

Posted by: David Corfield on November 30, 2008 1:27 PM | Permalink | Reply to this

Re: Coalgebraically Thinking

When you observe a quantum system you whack it with a Hermitian operator and get back a probability distribution on the set of eigenvalues for that operator. Speaking loosely, if H is the set of Hermitian operators on your state space V, then from an observational perspective you can think of the system as described by a coalgebraic structure X → P(X × R)^H, where P is the function mapping sets to the collection of PDFs on that set and R is the real line corresponding to the possible eigenvalues. (Really a few tweaks are needed to make that correct.)

We could describe the “observable state” of the system as a solution to the equation X = P(X × R)^H, with two states considered equivalent when bisimilar. The original state vector has disappeared from the description, leaving us only with observables. This is quite nice because there is a sense in which the state vector is “too much information” - no experiment can ever completely and non-destructively extract the full state vector from non-trivial physical systems. And built right into this description is the notion that observations affect the state. On the other hand, a good old element of the original state vector space V seems a lot easier to work with.
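A toy Python sketch of this observational picture (my own illustration, drastically simplified: one qubit with real amplitudes, and a single observable rather than the full set H): a measurement step sends a state to a probability distribution over (outcome, collapsed state) pairs, which is one component of the coalgebra X → P(X × R)^H.

```python
import math

def measure(state, basis):
    """Toy coalgebra step: a normalized real qubit state (a, b) and an
    eigenbasis yield a distribution over (probability, eigenvalue,
    collapsed state) triples, via the Born rule."""
    dist = []
    for eigenvalue, vec in basis:
        amp = state[0] * vec[0] + state[1] * vec[1]   # real inner product
        prob = amp ** 2                               # Born rule
        if prob > 0:
            dist.append((prob, eigenvalue, vec))      # collapse onto eigenvector
    return dist

# the Z observable: eigenvalues +1, -1 with the standard basis as eigenvectors
z_basis = [(+1, (1.0, 0.0)), (-1, (0.0, 1.0))]
plus = (1 / math.sqrt(2), 1 / math.sqrt(2))           # the |+> state

outcome = measure(plus, z_basis)
probs = [round(p, 6) for p, _, _ in outcome]          # each Z-outcome: 1/2
```

Note how the step returns collapsed states along with outcomes: the fact that observation changes the state is built into the coalgebra’s type, just as Dan describes.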

Posted by: Dan Piponi on November 28, 2008 9:18 PM | Permalink | Reply to this

Re: Coalgebraically Thinking

This is precisely the line of thought that Samson Abramsky is developing in section 4 of Coalgebras, Chu Spaces, and Representations of Physical Systems. A final coalgebra, or universal model appears in chapter 7.

Posted by: David Corfield on October 23, 2009 9:25 AM | Permalink | Reply to this

Re: Coalgebraically Thinking

I just spotted the paper on arxiv and came straight here to see if anyone else had noticed the connection :-)

Posted by: Dan Piponi on October 23, 2009 9:42 PM | Permalink | Reply to this

Re: Coalgebraically Thinking

trying to understand coalgebras

Do I misunderstand the title?
Thinking not coalgebraically but
thinking about applying the coalgebra POV?

For coalgebras in their own right,
a thorough intro is in


MR2035107 (2005b:16070) Michaelis, Walter Coassociative coalgebras. Handbook of algebra, Vol. 3, 587–788, North-Holland, Amsterdam, 2003.

He’s also written on the Lie analog

Posted by: jim stasheff on November 29, 2008 10:41 PM | Permalink | Reply to this

Re: Coalgebraically Thinking

Yeah: pretty different sense of “coalgebra” being discussed here.

You’re aware that categorists have a general notion of “algebra” over a monad T, an object x equipped with a morphism \alpha: T x \to x compatible with the monad structure. This notion of algebra is more in tune with universal algebra in the sense of Birkhoff and others. For example, given any “equational theory” \mathcal{T}, that is, a set of basic operations, and a set of axioms in the form of (universally quantified) equations between operations derivable from the basic ones, you can cook up a monad T so that the category of algebraic gadgets of type \mathcal{T} coincides with the category of T-algebras in the categorists’ sense. In that way, monad theory becomes a vast extrapolation of universal algebra.

Later, it was found useful to generalize this idea of algebra even further. If T: C \to C is any functor, not necessarily one equipped with a monad structure, one defines a T-algebra just to be an object x equipped with a morphism \alpha: T x \to x, subject to… no conditions! Believe it or not, this really can be a useful notion. For example, in computer science, many data types (e.g., the type of lists over an alphabet A, or binary trees with leaves labeled in A, etc.) are usefully and efficiently described as initial algebras over one functor or another. (An easy but remarkable result due to Joachim Lambek is that if x is an initial T-algebra in this sense, then the algebra structure T x \to x is an isomorphism. Thus an initial algebra is intuitively a “least fixed point” of T.)

Also useful in a variety of settings is the dual of this last notion of algebra. This is the notion of coalgebra under discussion here. Dual to the Lambek result, we have that terminal T-coalgebras are also “fixed points” under T, and it turns out that a remarkable number of mathematical structures can be expressed as terminal coalgebras.

Posted by: Todd Trimble on November 29, 2008 11:57 PM | Permalink | Reply to this

Re: Coalgebraically Thinking

What I was trying to point to with title and post is whether there is a profound difference in point of view between the algebraic and the coalgebraic. I’ve reported on claimed dichotomies: construction/destruction; data types/observation-modification of states; arithmetic-induction/analysis-coinduction.

We largely do our coalgebra in the (algebraic) category of sets. Can we go coalgebraic all the way down? With examples such as the reals as terminal coalgebra, this takes place in the category of posets. Could we put some coalgebra in lower down and work with partially ordered non-well-founded sets?

Is the notion of category itself on the algebraic side? If so, could there be a coalgebraic version?

Why has it taken longer to become aware of coalgebra? Was it, as Breger suggests, that a certain style of axiomatisation comes to prevail in the early 20th century?


Posted by: David Corfield on November 30, 2008 9:45 AM | Permalink | Reply to this

Re: Coalgebraically Thinking

I just wanted to mention that quantum measurements also admit a natural coalgebraic characterisation. More precisely, in the category FdHilb of finite-dimensional Hilbert spaces and linear maps, with the tensor product as monoidal structure, the ‘self-adjoint’ (in the indexed sense) Eilenberg–Moore coalgebras for the comonad X ⊗ (−): FdHilb → FdHilb which is canonically induced by a commutative special dagger Frobenius comonoid (X, d, e) are exactly the projector spectra {P_1, …, P_n} of self-adjoint operators H = Σ_i a_i P_i. You can find this result as Theorem 1.5 in a joint paper with Dusko Pavlovic entitled Quantum measurements without sums. This result assumed another one which, in fact, was only recently established in collaboration with Jamie Vicary in A new description of orthogonal bases. This recent result was discussed in TWF 268.

Posted by: bob on November 30, 2008 8:20 AM | Permalink | Reply to this

Re: Coalgebraically Thinking

Aczel mentioned Euclidean geometry as involving a maximisation axiom. Take Hilbert’s completeness axiom:

V.2: Line completeness. To a system of points, straight lines, and planes, it is impossible to add other elements in such a manner that the system thus generalized shall form a new geometry obeying all of the five groups of axioms. In other words, the elements of geometry form a system which is not susceptible of extension, if we regard the five groups of axioms as valid.

Can we understand this coalgebraically?

Posted by: David Corfield on November 30, 2008 9:36 AM | Permalink | Reply to this

Re: Coalgebraically Thinking

More from Breger:

To the revolutionary, the most striking difference between Zermelo’s and Finsler’s axioms is the certain ontological flavour of Finsler’s axioms. To the conservative, the philosophical background of Zermelo’s axioms is the implicit assumption that sets do not exist unless they can be derived from given sets by axiomatically fixed rules. Axiom 3 [the axiom of completeness] is of particular interest. It is the analogue of Hilbert’s somewhat problematic axiom of completeness for geometry. Weyl and Fraenkel purposefully took the contrary into consideration, namely an axiom of restriction postulating the minimal system which fulfils the other axioms. Weyl’s and Fraenkel’s axiom is obviously motivated by the revolutionary idea that axioms and definitions create objects, and that sets which are too big should not be brought into existence, whereas Finsler’s axiom of completeness is motivated by the conservative idea that big sets exist anyway, so set theory should investigate them.

The different philosophical backgrounds imply different consequences for the consistency topic. The consistency of arithmetic and Euclidean geometry had not been a problem as long as the Platonistic interpretation of the objects had been self-evident. (pp. 258-9)

Presumably the reals emerge from the completeness axiom applied to Hilbert’s geometry axioms, qua largest Archimedean field. The Wikipedia entry for the reals says:

R is “complete” in the sense that nothing further can be added to it without making it no longer an Archimedean field. This sense of completeness is most closely related to the construction of the reals from surreal numbers, since that construction starts with a proper class that contains every ordered field (the surreals) and then selects from it the largest Archimedean subfield.

Is something coalgebraic going on here? It seems that Conway games form a final coalgebra.

Posted by: David Corfield on December 1, 2008 9:36 AM | Permalink | Reply to this

Re: Coalgebraically Thinking

John Armstrong tells us that:

…the real numbers \mathbb{R} are a terminal object in the category of Archimedean fields.

Is there some kind of Archimedean functor about for the reals to be a maximal fixed point?

In Serge Lang’s Algebra we are told that for every ordered field K and every subfield F with the induced order, there is a maximal Archimedean subfield of K over F, by Zorn’s lemma.

Something tells me this line isn’t going to work.

Posted by: David Corfield on December 1, 2008 1:57 PM | Permalink | Reply to this

Re: Coalgebraically Thinking

Another aspect of coalgebras not mentioned above is their connection to the notion of termination of a computer program. Normally when we talk about a function being Turing computable we imagine a Turing machine that takes an input, runs until it hits a ‘halt’ instruction, and leaves a result. We usually think of functions that aren’t computable as, in some sense, not well-behaved.

But in many programming languages it’s easy to write a program that takes as input an infinite list of integers, say, and returns the infinite list of those integers doubled. This program doesn’t terminate, so it doesn’t define a Turing-computable function. And yet it’s well-behaved in the sense that, whatever stream of inputs you give it, it’ll always produce a well-defined stream of outputs. Many algorithms take this form, ranging from exact real arithmetic to operating systems. So we need a new concept: the idea of an algorithm that processes an infinite stream but never hangs as long as you keep providing inputs. Such a program is said to be ‘productive’, and productivity is dual to computability.
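The doubling example can be sketched with a Python generator standing in for the infinite-list program; productivity shows up as the fact that each output element depends on only finitely many inputs:

```python
# A productive program: consumes an infinite stream of integers and
# emits the stream of their doubles. It never terminates, but each
# output is produced after inspecting only finitely much input.

def doubled(stream):
    for x in stream:       # demand one input...
        yield 2 * x        # ...and immediately produce one output

def naturals():
    n = 0
    while True:            # an infinite input stream
        yield n
        n += 1

# Any finite prefix of the infinite output is observable:
from itertools import islice
prefix = list(islice(doubled(naturals()), 5))
```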

As David mentions, algebras are related to data. An F-algebra gives you a data structure, and associated to it is the notion of structural recursion, i.e. computing a function of the data structure by recursion over its subelements in a way that’s guaranteed to terminate for finite structures. So data + structural recursion gives guaranteed computability.
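For instance (a sketch, with an illustrative tree encoding), structural recursion over finite binary trees: each recursive call is on a strict subterm, which is what guarantees termination on finite data:

```python
# Structural recursion on finite binary trees with integer leaves.
# A tree is either ("leaf", n) or ("node", left, right); recursing
# only on subtrees guarantees termination on any finite tree.

def tree_sum(t):
    if t[0] == "leaf":
        return t[1]
    _, left, right = t
    return tree_sum(left) + tree_sum(right)   # strictly smaller subterms

example = ("node", ("leaf", 1), ("node", ("leaf", 2), ("leaf", 3)))
```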

David mentions that coalgebras are related to state. But I find it more helpful to think in terms of codata: infinite data structures. Analogous to recursion is corecursion, and in this case we replace structural recursion with “guarded recursion”. It’s exactly dual to the previous case: codata + guarded recursion gives productivity. So coalgebras give a really nice way to reason about the well-behavedness of programs, even when they don’t terminate.
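Guarded recursion can be sketched similarly: the corecursive call is “guarded” by the production of at least one output element, which is what makes the definition productive. A small illustration (Python, defining the Fibonacci stream corecursively):

```python
# Guarded corecursion: the recursive reference occurs only after an
# output has been yielded (the "guard"), so every finite prefix of
# the infinite result is computable.

def fib_stream(a=0, b=1):
    yield a                            # guard: produce before recursing
    yield from fib_stream(b, a + b)    # corecursive call, behind the guard

from itertools import islice
fibs = list(islice(fib_stream(), 7))
```

In a lazy language the guardedness condition can be checked syntactically; this Python sketch only mimics it, and very long prefixes would eventually exhaust the interpreter’s recursion depth.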

In the real world of open ended loops like OSes and word processors, it’s often not computability we need, but productivity (absolutely no pun intended). And that makes coalgebraic reasoning an important topic.

Posted by: Dan Piponi on December 3, 2008 9:27 PM | Permalink | Reply to this

Re: Coalgebraically Thinking

Let me refer people to your wonderful explanation of what you wrote above.

You say there

It made me realise there was this entire co-universe out there that I knew nothing about…

A strange feeling isn’t it? Like there’s a substantial region of the world you’d never heard about.

Posted by: David Corfield on December 4, 2008 9:23 AM | Permalink | Reply to this

exploring the co-universe; Re: Coalgebraically Thinking

I agree with that sensation. Yet is the co-universe better described as the “smos”?

Posted by: Jonathan Vos Post on December 9, 2008 3:00 PM | Permalink | Reply to this

Re: exploring the co-universe; Re: Coalgebraically Thinking

D’oh!

It took me 10 minutes to get that …

I was thinking you meant an Al Capp character, Shmoo or Smo — you have to search the page a bit on that last one …

But at least now I know it’s a fungible term!

Posted by: Jon Awbrey on October 22, 2009 8:30 PM | Permalink | Reply to this
Read the post Coalgebraic Modal Logic
Weblog: The n-Category Café
Excerpt: Putting together ideas of modal logic and coalgebra with ways of modelling first-order theories.
Tracked: September 7, 2009 12:49 PM
Read the post Coalgebraic Tangles
Weblog: The n-Category Café
Excerpt: I'm sinking in a sea of administrative duties at the moment, so for a bit of sanity I thought I'd jot down the glimmer of a thought I had. We spoke back here about the term model for a...
Tracked: January 25, 2011 11:51 AM

Re: Coalgebraically Thinking

Two brief mentions here about dyadic rationals. How about some links to e.g. Thompson’s group?

Posted by: jim stasheff on September 20, 2012 1:50 PM | Permalink | Reply to this

Re: Coalgebraically Thinking

Todd brought up Thompson’s group in a similar situation here.

Tom recently brought it up here.

Is there some nice universal account to explain its ubiquity?

Posted by: David Corfield on September 20, 2012 3:16 PM | Permalink | Reply to this

Post a New Comment