## May 14, 2021

### Structure vs. Observation

#### Posted by Emily Riehl (guest post by Stelios Tsampas and Amin Karamlou)

Today we’ll be talking about the theory of universal algebra, and its less well-known counterpart, universal coalgebra. We’ll try to convince you that these two frameworks provide us with suitable tools for studying a fundamental duality that arises between structure and behaviour. Rather than jumping straight into the mathematical details we’ll start with a few motivating examples that arise in the setting of functional programming. We’ll talk more about the mathematics at play behind the scenes in the second half of this post.

Take a look at this Agda-style1 definition of an algebraic datatype, a structure. \begin{aligned} {\color{#CD6600} data} \:& {\color{#0000CD} Expr}\::\:{\color{#0000CD} Set}\:{\color{#CD6600}where}\\ & {\color{#008B00} lit} : \mathbb{N} \to {\color{#0000CD} Expr}\\ & {\color{#008B00} add} : {\color{#0000CD} Expr} \times {\color{#0000CD} Expr} \to {\color{#0000CD} Expr}\\ & {\color{#008B00} mul} : {\color{#0000CD} Expr} \times {\color{#0000CD} Expr} \to {\color{#0000CD} Expr} \end{aligned} The intention of the programmer is to define a datatype ${\color{#0000CD} Expr}$ that can be constructed in exactly three ways. One can either apply a natural number to the function ${\color{#008B00} lit}$ (for “literal”), find a pair of ${\color{#0000CD} Expr}$s and apply them to the function ${\color{#008B00} add}$, or do the same thing to ${\color{#008B00} mul}$. In fact, ${\color{#0000CD} Expr}$ is defined precisely in terms of these three functions, which we call constructors, and it is the least such set whose building blocks are ${\color{#008B00} lit}$, ${\color{#008B00} mul}$ and ${\color{#008B00} add}$. In the end, ${\color{#0000CD} Expr}$ consists of expressions such as “${\color{#008B00} add}$ (${\color{#008B00} lit}$ 5) (${\color{#008B00} mul}$ (${\color{#008B00} lit}$ 4) (${\color{#008B00} lit}$ 2))”, “${\color{#008B00} lit}$ 0” and so on.
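To make the idea concrete outside Agda, here is a sketch of the same datatype in Python (an illustration, not part of the original post: `int` stands in for $\mathbb{N}$, and the class names mirror the constructors):

```python
from dataclasses import dataclass

# Expr is generated by exactly three constructors, mirroring the Agda datatype.

@dataclass(frozen=True)
class Lit:            # lit : ℕ → Expr
    n: int

@dataclass(frozen=True)
class Add:            # add : Expr × Expr → Expr
    left: object
    right: object

@dataclass(frozen=True)
class Mul:            # mul : Expr × Expr → Expr
    left: object
    right: object

# Expr is the (least) union of Lit, Add and Mul.

# The expression "add (lit 5) (mul (lit 4) (lit 2))" from the text:
example = Add(Lit(5), Mul(Lit(4), Lit(2)))
```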

Such datatypes are not the only ones we can define in functional languages such as Agda. The dual notion is that of a record or a class. Well, here’s a classic. \begin{aligned} {\color{#CD6600} record}\:& {\color{#0000CD} Stream}\::\:{\color{#0000CD} Set}\:{\color{#CD6600}where}\\ & {\color{#CD6600} field} \\ & {\color{#EE1289} head} : \mathbb{N} \\ & {\color{#EE1289} next} : {\color{#0000CD} Stream}\\ \end{aligned} ${\color{#0000CD} Stream}$ is a datatype, like ${\color{#0000CD} Expr}$ before. In fact, ${\color{#0000CD} Stream}$ is the set of all infinite sequences of natural numbers; only in this case, the functions ${\color{#EE1289} head}$ and ${\color{#EE1289} next}$ do not represent ways to construct a ${\color{#0000CD} Stream}$ instance. Rather, they do the exact opposite: they are destructors for a ${\color{#0000CD} Stream}$! Perhaps a more accurate representation would be something akin to \begin{aligned} {\color{#CD6600} record}\:& {\color{#0000CD} Stream}\::\:{\color{#0000CD} Set}\:{\color{#CD6600}where}\\ & {\color{#CD6600} field} \\ & {\color{#EE1289} head} : {\color{#0000CD} Stream} \to \mathbb{N} \\ & {\color{#EE1289} next} : {\color{#0000CD} Stream} \to {\color{#0000CD} Stream}\\ \end{aligned}

By saying “destructor”, the idea is that a ${\color{#0000CD} Stream}$ can be applied to either ${\color{#EE1289} head}$ or ${\color{#EE1289} next}$ to get a natural number or another ${\color{#0000CD} Stream}$ respectively. Hence, destructors are different when compared to constructors in that they are not used to build an instance of a datatype out of a number of ingredients, but to pick it apart and present the observations that one can make out of it. In the case of ${\color{#0000CD} Stream}$, i.e. infinite sequences of natural numbers, we can always observe the ${\color{#EE1289} head}$, the number at the current position in the stream, and move on to observe the rest of the ${\color{#0000CD} Stream}$ (i.e. observing the ${\color{#EE1289} next}$ ${\color{#0000CD} Stream}$), to repeat the process.
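The destructor reading can also be sketched in Python (again an illustration: the `next` observation is delayed behind a thunk, so genuinely infinite streams cause no trouble; `from_` and `take` are helper names introduced here):

```python
# A Stream is determined entirely by its two destructors: head and next.

class Stream:
    def __init__(self, head, next_thunk):
        self._head = head
        self._next_thunk = next_thunk  # delayed, so the stream is infinite

    def head(self):            # head : Stream → ℕ
        return self._head

    def next(self):            # next : Stream → Stream
        return self._next_thunk()

def from_(n):
    """An example inhabitant: the stream n, n+1, n+2, ..."""
    return Stream(n, lambda: from_(n + 1))

def take(k, s):
    """We can only ever observe finitely much of a Stream."""
    out = []
    for _ in range(k):
        out.append(s.head())
        s = s.next()
    return out
```

For instance, `take(5, from_(0))` observes `[0, 1, 2, 3, 4]`.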

We can witness the first traces of a duality by comparing the codomains of constructors with the domains of destructors. At one end, constructors all map to the same thing, ${\color{#0000CD} Expr}$, while all destructors map from the same thing, ${\color{#0000CD} Stream}$. But the antitheses do not end here as, interestingly, instances of ${\color{#0000CD} Expr}$ and ${\color{#0000CD} Stream}$ are meant to be used in radically different ways.

Let us define a simple function that “calculates” the value of an ${\color{#0000CD} Expr}$ expression. In this case we want to define a function such that “${\color{#008B00} add}$ (${\color{#008B00} lit}$ 5) (${\color{#008B00} mul}$ (${\color{#008B00} lit}$ 4) (${\color{#008B00} lit}$ 2))” amounts to 13 or that “${\color{#008B00} lit}$ 0” amounts to 0. This is simple enough. \begin{aligned} & {\color{#0000CD} calc} : {\color{#0000CD} Expr} \to \mathbb{Z} \\ & {\color{#008B00} lit}\:n \mapsto n \\ & {\color{#008B00} add}\:(l,r) \mapsto ({\color{#0000CD} calc}\:l) + ({\color{#0000CD} calc}\:r)\\ & {\color{#008B00} mul}\:(l,r) \mapsto ({\color{#0000CD} calc}\:l) * ({\color{#0000CD} calc}\:r) \end{aligned}

${\color{#0000CD} calc}$ is an example of a simple inductive definition. The key is that it composes values of smaller sub-${\color{#0000CD} Expr}$essions to specify the result of the whole ${\color{#0000CD} Expr}$ession. “Composition” is a rather fitting word here, as the definition of ${\color{#0000CD} calc}$ is essentially a recipe on how to mix ${\color{#0000CD} Expr}$essions together. Now, how about a small dose of un-mixing? \begin{aligned} & {\color{#0000CD} zip} : {\color{#0000CD} Stream} \times {\color{#0000CD} Stream} \to {\color{#0000CD} Stream} \\ & {\color{#EE1289} head}\:({\color{#0000CD} zip}\:(l,r)) = {\color{#EE1289} head}\:l \\ & {\color{#EE1289} next}\:({\color{#0000CD} zip}\:(l,r)) = {\color{#0000CD} zip}\:(r, {\color{#EE1289} next}\:l) \end{aligned}

Our new function ${\color{#0000CD} zip}$ takes apart two ${\color{#0000CD} Stream}$s by specifying, at each given step, how to (i) extract a natural number and (ii) extract a new ${\color{#0000CD} Stream}$ out of them. The end result is that of a coinductive definition. Notice how ${\color{#0000CD} zip}$ maps onto ${\color{#0000CD} Stream}$s and compare it to ${\color{#0000CD} calc}$ that maps from ${\color{#0000CD} Expr}$s. Once again, the arrows are reversed. Strange.

Let’s forget about that mystery for a second and try out a couple of proofs. For starters, notice that ${\color{#0000CD} calc}$ maps all elements of ${\color{#0000CD} Expr}$ to non-negative integers. This is a property that we can prove inductively.

\begin{aligned} & {\color{#0000CD} prop}\: : \forall\:(e : {\color{#0000CD} Expr}) \to {\color{#0000CD} calc}\:e \geq 0 \\ & {\color{#0000CD} prop}\:({\color{#008B00} lit}\:n) = {\color{#008B00} true}\:as\:n \geq 0 \\ & {\color{#0000CD} prop}\:({\color{#008B00} add}\:l\:r) = {\color{#008B00} true}\:as\:{\color{#0000CD} prop}\:l\:\wedge\:{\color{#0000CD} prop}\:r\\ & \:\:\:\:\:\:\:\:\:\:\:\:\wedge\:{\color{#0000CD} calc}\:({\color{#008B00} add}\:l\:r) = {\color{#0000CD} calc}\:l + {\color{#0000CD} calc}\:r\\ & {\color{#0000CD} prop}\:({\color{#008B00} mul}\:l\:r) = {\color{#008B00} true}\:as\:{\color{#0000CD} prop}\:l\:\wedge\:{\color{#0000CD} prop}\:r \\ & \:\:\:\:\:\:\:\:\:\:\:\:\wedge\:{\color{#0000CD} calc}\:({\color{#008B00} mul}\:l\:r) = {\color{#0000CD} calc\:l} \cdot {\color{#0000CD} calc\:r} \end{aligned} The above is an example of a mechanized Agda proof written in a mathematician-friendly way. We encode the fact that ${\color{#0000CD} calc}$ produces non-negative values as the property/function ${\color{#0000CD} prop}$. A valid proof requires ${\color{#0000CD} prop}$ to hold for each of the constructors of ${\color{#0000CD} Expr}$, which is trivial for ${\color{#008B00} lit}$ and slightly more involved for ${\color{#008B00} add}$ and ${\color{#008B00} mul}$. In the latter two cases we invoke the induction hypothesis on both operands, which translates to two nested calls to ${\color{#0000CD} prop}$.

The above proof is an example of the so-called Curry-Howard correspondence, relating mathematical proofs with programs written in typed functional languages. Its main aspects are that propositions can be encoded as types (the first line of the proof) and proofs as definitions. Indeed, ${\color{#0000CD} prop}$ is simply a function that we are giving a definition for.

Now let’s try something similar for ${\color{#0000CD} Stream}$. First, we define a function ${\color{#0000CD} even}$ that only picks positions $0,2,4,\dots$ from a ${\color{#0000CD} Stream}$. \begin{aligned} & {\color{#0000CD} even} : {\color{#0000CD} Stream} \to {\color{#0000CD} Stream} \\ & {\color{#EE1289} head}\:({\color{#0000CD} even}\:xs) = {\color{#EE1289} head}\:xs \\ & {\color{#EE1289} next}\:({\color{#0000CD} even}\:xs) = {\color{#0000CD} even}\:({\color{#EE1289} next}\:({\color{#EE1289} next}\:xs)) \end{aligned}
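A Python sketch of this definition, in the same thunked-stream style as before (illustrative names; the two destructor equations appear as comments):

```python
class Stream:
    def __init__(self, head, next_thunk):
        self._head = head
        self._next_thunk = next_thunk

    def head(self):
        return self._head

    def next(self):
        return self._next_thunk()

def even(xs):
    """Keep positions 0, 2, 4, ... of a stream:
         head (even xs) = head xs
         next (even xs) = even (next (next xs))
    """
    return Stream(xs.head(), lambda: even(xs.next().next()))

def from_(n):
    return Stream(n, lambda: from_(n + 1))

def take(k, s):
    out = []
    for _ in range(k):
        out.append(s.head())
        s = s.next()
    return out
```

So `take(4, even(from_(0)))` observes `[0, 2, 4, 6]`.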

If we “double” a ${\color{#0000CD} Stream}$ $xs$ (as in ${\color{#0000CD} zip}\:(xs , xs)$) and apply the result to ${\color{#0000CD} even}$, we will obviously recover the original ${\color{#0000CD} Stream}$ $xs$. We once again write down the proof, only this time it is coinductive. \begin{aligned} & {\color{#0000CD} prop2} : \forall\:(xs : {\color{#0000CD} Stream}) \to {\color{#0000CD} even}\:({\color{#0000CD} zip}\:(xs , xs)) \equiv xs \\ & {\color{#EE1289} head}\:({\color{#0000CD} even}\:({\color{#0000CD} zip}\:(xs , xs))) \equiv {\color{#EE1289} head}\:xs\:&& (1)\\ & {\color{#EE1289} next}\:({\color{#0000CD} even}\:({\color{#0000CD} zip}\:(xs , xs))) \equiv {\color{#EE1289} next}\:xs\:as\:\\ & \:\:\:\:\:\:\:\:\:\:\:\:{\color{#EE1289} next}\:({\color{#0000CD} even}\:({\color{#0000CD} zip}\:(xs , xs))) \equiv {\color{#0000CD} even}\:({\color{#0000CD} zip}\:({\color{#EE1289} next}\:xs , {\color{#EE1289} next}\:xs)) && (2) \\ & \:\:\:\:\:\:\:\:\:\:\:\:{\color{#0000CD} even}\:({\color{#0000CD} zip}\:({\color{#EE1289} next}\:xs , {\color{#EE1289} next}\:xs)) \equiv {\color{#EE1289} next}\:xs && (3) \end{aligned} where (1) and (2) hold by the definitions of ${\color{#0000CD} even}$ and ${\color{#0000CD} zip}$, and (3) is obtained via coinduction on ${\color{#EE1289} next}$ $xs$.

The proof essentially asserts that both the ${\color{#EE1289} head}$ and the ${\color{#EE1289} next}$ part of the two streams in question are identical. If we look closely, we can observe the subtle “structural” differences between ${\color{#0000CD} prop}$ and ${\color{#0000CD} prop2}$: in the case of the former, we had to provide a valid proof for each ${\color{#0000CD} Expr}$ constructor, while in the latter case, we had to provide a valid proof of equivalence for each ${\color{#0000CD} Stream}$ destructor! What’s going on here?

Constructors and destructors, induction and coinduction, composition and decomposition are all elements of a duality that transcends functional programming, going all the way up to the very foundations of thought: the duality of structure vs observation. And it is a duality that universal (co)algebra gives a very satisfying account of.

## The Category Theory Within

We’ll now give a brief overview of the main building blocks of universal (co)algebra. Along the way, we will revisit our Agda examples in order to see how this simple and rather abstract theory can be used in concrete situations.

Given endofunctors $T, F: \mathbf{Set} \rightarrow \mathbf{Set}$2, a $T$-algebra $(S, a_{S})$ consists of a set $S$ together with a function $a_{S}: T(S) \rightarrow S$. Dually, an $F$-coalgebra $(X, \alpha_X)$ consists of a set $X$ together with a function $\alpha_X: X \rightarrow F(X)$. We’ve already encountered examples of both algebras and coalgebras:

1. Consider the endofunctor $E$ which maps a set $U$ to $E(U) = \mathbb{N} + (U \times U) + (U \times U)$, and a function $f$ to $E(f) = id_{\mathbb{N}} + (f \times f) + (f \times f)$. Then, $({\color{#0000CD} Expr}, [{\color{#008B00} lit}, {\color{#008B00} add}, {\color{#008B00} mul}])$ is an $E$-algebra, where $[{\color{#008B00} lit}, {\color{#008B00} add}, {\color{#008B00} mul}]$ denotes the copairing of the three constructors.
2. Consider the endofunctor $S$ which maps a set $X$ to $S(X) = \mathbb{N} \times X$, and a function $f$ to $S(f) = id_{\mathbb{N}} \times f$. Then, $({\color{#0000CD} Stream}, \langle {\color{#EE1289} head}, {\color{#EE1289} next} \rangle)$ is an $S$-coalgebra.

A $T$-algebra homomorphism between two $T$-algebras $(U, a_U), (V, a_V)$ is any function $f:U \rightarrow V$ where $a_V \circ T(f) = f \circ a_U$. Dually, an $F$-coalgebra homomorphism between two $F$-coalgebras $(X, \alpha_X), (Y, \alpha_Y)$ is any function $f:X \rightarrow Y$ where $\alpha_Y \circ f = F(f) \circ \alpha_X$. In terms of commuting diagrams, these conditions amount to: \begin{aligned} \begin{array}{ccc} T(U) & \xrightarrow{T(f)} & T(V) \\ a_U \downarrow\:\: & & \:\:\downarrow a_V \\ U & \xrightarrow{f} & V \end{array} \qquad\qquad \begin{array}{ccc} X & \xrightarrow{f} & Y \\ \alpha_X \downarrow\:\: & & \:\:\downarrow \alpha_Y \\ F(X) & \xrightarrow{F(f)} & F(Y) \end{array} \end{aligned} Once again, we’ve already seen examples of both algebra and coalgebra homomorphisms:

1. ${\color{#0000CD} calc}$ is an $E$-algebra homomorphism.
2. ${\color{#0000CD} zip}$ is an $S$-coalgebra homomorphism.

We leave it to the reader to check the commutativity conditions for these homomorphisms, which amount to the equations \begin{aligned} & {\color{#0000CD} calc} \circ [{\color{#008B00} lit}, {\color{#008B00} add}, {\color{#008B00} mul}] = [\iota, +, \cdot] \circ E({\color{#0000CD} calc}) \\ & \langle {\color{#EE1289} head}, {\color{#EE1289} next} \rangle \circ {\color{#0000CD} zip} = (id_{\mathbb{N}} \times {\color{#0000CD} zip}) \circ {\color{#0000CD} zipStep} \end{aligned} Here $\iota : \mathbb{N} \hookrightarrow \mathbb{Z}$ is the inclusion, and the function ${\color{#0000CD} zipStep}$, defined as ${\color{#0000CD} zipStep}\:(l, r) = ({\color{#EE1289} head}\:l, (r, {\color{#EE1289} next}\:l))$, equips ${\color{#0000CD} Stream} \times {\color{#0000CD} Stream}$ with an $S$-coalgebra structure. If we keep using ${\color{#0000CD} zipStep}$ according to the (pseudo-)Agda code below, we can recover the behaviour of ${\color{#0000CD} zip}$.

\begin{aligned} & {\color{#0000CD} altZip} : {\color{#0000CD} Stream} \times {\color{#0000CD} Stream} \to {\color{#0000CD} Stream} \\ & {\color{#EE1289} head}\:({\color{#0000CD} altZip}\:(l,r)) = lHead \\ & {\color{#EE1289} next}\:({\color{#0000CD} altZip}\:(l,r)) = {\color{#0000CD} altZip}\:(l' , lNext) \\ & \:\:{\color{#CD6600}where}\\ & \:\:\:\: (lHead , (l' , lNext)) = {\color{#0000CD} zipStep}\:(l,r) \end{aligned}

So ${\color{#0000CD} zipStep}$ is in a sense equivalent to a single step of the ${\color{#0000CD} zip}$ function. This is not a coincidence! We’ll return to this point shortly when we talk about coinduction.
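The “one step at a time” reading can be sketched in Python: `zip_step` produces a single observation plus a new state, and repeating it forever rebuilds `zip` (illustrative names, reusing the thunked-stream style from earlier):

```python
class Stream:
    def __init__(self, head, next_thunk):
        self._head = head
        self._next_thunk = next_thunk

    def head(self):
        return self._head

    def next(self):
        return self._next_thunk()

def zip_step(pair):
    """zipStep : Stream × Stream → ℕ × (Stream × Stream).
    One step of zip: an observation plus a new state."""
    l, r = pair
    return (l.head(), (r, l.next()))

def zip_(state):
    """Applying zip_step over and over recovers zip itself."""
    n, new_state = zip_step(state)
    return Stream(n, lambda: zip_(new_state))

def from_(n):
    return Stream(n, lambda: from_(n + 1))

def take(k, s):
    out = []
    for _ in range(k):
        out.append(s.head())
        s = s.next()
    return out
```

For example, `take(4, zip_((from_(0), from_(10))))` observes `[0, 10, 1, 11]`.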

$T$-algebras and algebra homomorphisms form a category. We refer to an initial object of this category as an initial algebra. Dually, $F$-coalgebras and coalgebra homomorphisms form a category whose final objects we refer to as final coalgebras. Initiality and finality occupy a central role in universal (co)algebra. This is what we explore next.

Take an initial $T$-algebra $(A, a_A)$ together with an arbitrary $T$-algebra $(U, a_U)$. By initiality, there exists a unique $T$-algebra homomorphism $f: A \rightarrow U$. We say that such a function $f$ is inductively defined. Intuitively, an inductive function $f$ breaks complicated elements of $A$ down into their constituent parts, applies some function to each constituent part, and mixes the results together to obtain an action on the whole element. If you’re an eagle-eyed reader you’ll know that we’ve already encountered a function that behaves this way. Remember how ${\color{#0000CD} calc}$ breaks complicated ${\color{#0000CD} Expr}$s down into smaller sub-${\color{#0000CD} Expr}$s and mixes their results together to get a result for the whole ${\color{#0000CD} Expr}$? Well now we know that this behaviour is indicative of the fact that ${\color{#0000CD} calc}$ is inductively defined (note that in this case ${\color{#0000CD} Expr}$ is an initial $E$-algebra3).

Now consider the dual scenario where we have a final $F$-coalgebra $(Z, \alpha_Z)$ together with an arbitrary $F$-coalgebra $(Y, \alpha_Y)$. By finality there must be a unique coalgebra homomorphism $g:Y \rightarrow Z$. We call such a function $g$ coinductively defined. A nice intuition here is to think of $(Y, \alpha_Y)$ as a dynamic system evolving according to the transition structure given by $\alpha_Y$; the function $g$ is then the result of applying the transition $\alpha_Y$ over and over again. Repeated application of a transition structure might sound awfully familiar to you, as in fact it should: we were only just talking about how ${\color{#0000CD} zip}$ can be thought of as repeated applications of ${\color{#0000CD} zipStep}$. Again, this is indicative of the fact that ${\color{#0000CD} zip}$ is a coinductive definition, and that ${\color{#0000CD} Stream}$ is a final coalgebra.

With that, our whirlwind tour comes to a close. We’ve seen how universal algebra gives us tools for exploring the structure of things, while universal coalgebra allows us to explore their behaviour. Together they give us a way to rigorously analyse the duality between structure and behaviour. Earlier in the article we made the rather bold claim that this duality transcends the examples we’ve seen here and goes all the way up to the foundations of thought. We’ll end on a similarly dramatic note by giving you a philosophical question to ponder:

Is a “thing” best defined by its constituent parts (structure) or by its observable actions (behaviour)?

If you’re interested in learning more, there are many facets of the structure vs behaviour duality that we didn’t have space to talk about. Things like congruence vs bisimulation, minimality vs simplicity, and freeness vs cofreeness. A good starting point for further reading is the paper which motivated this blog post, Jan Rutten’s Universal Coalgebra: a theory of systems. In this classic work the author introduces universal coalgebra as an abstract theory for capturing the behaviour of a wide class of dynamic systems4. Even though it is a fantastic introduction to universal coalgebra, its focus on duality is arguably limited. A paper that does extensively cover duality is An introduction to (co)algebra and (co)induction, by Bart Jacobs and Jan Rutten. This is also a great article for readers who may be newer to category theory, as the material is presented in set-theoretic language whenever possible. Much of the material from both these papers is now available as part of Jacobs’ introductory book on coalgebras. Finally, since our examples focused on functional programming, we mention that chapter 24 of Bartosz Milewski’s Category Theory for Programmers gives a nice account of how (co)algebras are used in the functional programming language Haskell.

1 Agda is a functional programming language often used as a theorem prover.

2 This definition can easily be generalised to endofunctors on an arbitrary category $\mathcal{C}$.

3 Proving this fact directly is a rather fun exercise that we leave for the reader. Hint: First build up a homomorphism $f$ from ${\color{#0000CD} Expr}$ into an arbitrary $E$-algebra by considering where $f$ could send elements such as ${\color{#008B00} lit}$ 0 or ${\color{#008B00} add}$ (${\color{#008B00} lit}$ 5) (${\color{#008B00} mul}$ (${\color{#008B00} lit}$ 4) (${\color{#008B00} lit}$ 2)) in the target set. Then assume for the sake of contradiction that $f$ is not unique, i.e. that some of these elements could be sent to a different place in the target set. You should be able to show that this new function breaks the algebra homomorphism commutativity conditions. Once you’ve proven this, have a go at showing that ${\color{#0000CD} Stream}$ is a final coalgebra.

4 These so-called dynamic systems are by no means limited to the functional programming examples in this blog post. In fact, the main appeal of universal coalgebra comes from the omnipresence of dynamic systems around us. Labelled transition systems, finite automata, Kripke models arising in modal logic, and even the evolution of quantum systems are just a few examples of such systems.

Posted at May 14, 2021 8:37 PM UTC


### Re: Structure vs. Observation

I already knew about datatype constructors, but I love the idea of datatype destructors. I now imagine that it’s part of a dual theory to constructive mathematics, called destructive mathematics. To be able to proclaim “I am a destructive mathematician” possibly even beats “I am a persistent homologist”.

(Sorry for the non-serious comment… I hope I’m not setting a precedent. BTW, the link to Amin’s web page doesn’t work.)

Posted by: Tom Leinster on May 14, 2021 9:33 PM | Permalink | Reply to this

### Re: Structure vs. Observation

There is a “destructive math”, or at least a “destructive set theory”: Aczel’s anti-foundation axiom! The short of it is that in a category of classes, the powerset functor P has an initial algebra: the Von Neumann hierarchy V (of well-founded sets). The set theory in V works by constructing it using powerset and propositions, starting from the empty set. But there is also a terminal coalgebra of P, the class of “anti-founded” sets. In this set theory, you destruct a set by giving an equation defining its elements, something like A = {B, {C}}, etc. But you can have A on both sides of the equation, like A = {A, {A}}, the set which contains itself and a set containing itself.

Posted by: David Jaz Myers on May 15, 2021 12:25 AM | Permalink | Reply to this

### Re: Structure vs. Observation

Tom, you held a destructor in your hands back here:

This is David’s $pred$.

Posted by: David Corfield on May 16, 2021 5:17 PM | Permalink | Reply to this

### Re: Structure vs. Observation

Yes. (And your recall of long-ago Café conversations never ceases to amaze me.) It was the name that amused me and was new to me, not the idea itself.

Posted by: Tom Leinster on May 16, 2021 8:55 PM | Permalink | Reply to this

### Re: Structure vs. Observation

In the second and third code snippets, what does the line “field” do/mean?

Posted by: Tom Leinster on May 14, 2021 9:40 PM | Permalink | Reply to this

### Re: Structure vs. Observation

“field” is a keyword which begins the section where you list all the fields of the record. The fields of a record are the cartesian factors of the endofunctor that it is the terminal coalgebra of.

The reason you have to say it is because you can also give constructors for records in Agda that go before the fields. The constructor is the map you get from the universal property of the cartesian product, telling you how to build an element of the record by mapping into each field (cartesian factor of the endofunctor).

Posted by: David Jaz Myers on May 15, 2021 12:14 AM | Permalink | Reply to this

### Re: Structure vs. Observation

Posted by: Tom Leinster on May 16, 2021 11:27 AM | Permalink | Reply to this

### Re: Structure vs. Observation

On a minor point, is it possible to put in the names of the authors of the papers and books in the Further Reading section?

Posted by: Simon Willerton on May 15, 2021 10:40 AM | Permalink | Reply to this

### Re: Structure vs. Observation

Here’s a couple of places where there’s either a typo or else I’m confused!

• “Consider the endofunctor $E$ which maps a set $U$ to $Ex(U)$”: are $E$ and $Ex$ the same?

• “The function zipStep above”: I couldn’t find an earlier reference to zipStep.

• “Take an initial $T$-algebra $(A, a_a)$ together with an arbitrary $T$-algebra $(U, a_u)$. By initiality, there exists a unique $T$-algebra homomorphism $f: A \rightarrow S$”: I’m guessing that $S=U$.

Posted by: Simon Willerton on May 15, 2021 10:41 AM | Permalink | Reply to this

### Re: Structure vs. Observation

Related to the zipStep type - the diagram for the commutativity of the zip morphism has the wrong vertical labels, they match the previous diagram. I guess the left has [head, zipStep] and the right has [head, next]

Posted by: Marc G on May 15, 2021 1:56 PM | Permalink | Reply to this

### Re: Structure vs. Observation

Posted by: Amin Karamlou on May 15, 2021 5:52 PM | Permalink | Reply to this

### Re: Structure vs. Observation

What an interesting twist on ‘structure vs. behavior’ you make! If you had asked me before reading your post which of the two I associate with induction, I would have probably said behavior. Why? I would have contrasted induction with deduction (instead of co-induction). In that setting, induction would have been more about “one step after the next” and deduction would have had a stronger structural flavor, building up knowledge from axioms… So, I think you’ve changed my perspective on the inductive base case - from “starting point of the endless journey/behavior” to “basis of structure”.

Thanks for this food for thought! :D

Posted by: Anna Knörr on May 15, 2021 6:51 PM | Permalink | Reply to this

### Re: Structure vs. Observation

So if I am reading this correctly, the definition of the real numbers, whether defined as infinite sequences of rational numbers or of decimals, is coalgebraic in nature?

### Re: Structure vs. Observation

Posted by: David Corfield on May 16, 2021 5:26 PM | Permalink | Reply to this

### Re: Structure vs. Observation

I’m sure there is a reason for the symmetric language, but isn’t it more common to refer to applying head or next to a Stream, rather than applying Stream to one of the destructors?

Posted by: L Spice on May 16, 2021 5:37 PM | Permalink | Reply to this

### Re: Structure vs. Observation

A very nice post! I’ve been thinking about coinductive types a lot myself recently too. One conclusion that I’ve come to is that the common description of the elements of a coinductive type as “infinite data structures” is somewhat misleading, and perhaps partially to blame for their continuing second-class status. True, from the “static” viewpoint of mathematics, the elements of a coinductive type are certainly infinite structures. But from the “dynamic” viewpoint of programming, it seems better to think of them as “servers”, i.e. “programs” that continue interacting forever (rather than computing a value and terminating).

Something of this sort is, of course, implicit when talking about coalgebra as a model for “dynamic systems”, but it seems to me that a lot of references still don’t seem to take it as seriously as could be done. (However, my own reading has surely only scratched the surface of what’s out there about coinduction, and I expect what I’m about to say is well-known to many people.) I was first clued into this viewpoint by a remark in Conor McBride’s “Let’s see how things unfold”:

As a total programmer, I am often asked ‘how do I implement a server as a program in your terminating language?’, and I reply that I do not: a server is a coprogram in a language guaranteeing liveness.

I found this remark rather cryptic at first, but recently I’ve come to appreciate it much more.

Part of the problem is that everybody’s favorite simple example of coinduction, namely streams, is too simple to exhibit the interactive behavior characteristic of a server. Consider instead a more general M-type (non-dependent for simplicity) $M = M_{x:A} B$, with destructors $receive : M \to A$ and $send : M \to B \to M$. In static mathematics, we can think of an element of $M$ as an infinite rooted tree where each node is labeled by an element of $A$ and has a family of children labeled by the elements of $B$. But from a dynamic perspective, it’s better to think of an element of $M$ as a server with an “internal state” that supports two operations: we can receive from it an element of $A$, or we can send to it an element of $B$ and thereby change its internal state.

The latter viewpoint corresponds more directly, not only to the destructors, but to the corecursor. Specifically, the way we define an element of $M$ is to give a type $S$ (whose elements should be thought of as valid internal states of the server) together with maps $r:S\to A$ (saying what is received from a server in that state) and $s:S\to B\to S$ (saying how the state is updated when we send an input to the server). The principle of coinduction now sounds much more nontrivial: it says that two servers that behave the same, giving the same outputs for all possible input sequences, are equal, even if they have totally different types of internal states.

To me this helps to explain why coinductive types tend to get short shrift from mathematicians. From a “static” viewpoint, it’s harder to justify the need for a whole dual version of mathematics just to talk about infinite structures, when we already have perfectly good ways to talk about infinite structures: a stream of elements of $A$ is just a function $\mathbb{N}\to A$, and similarly an infinite tree as above is just a function $List(B) \to A$. But this would be a horrific way to program a server! Can you imagine if the only way a web server could respond to a single request is by being passed, every time, the entire sequence of all the requests it had received up until that point, and replaying from square one all the calculations and updates it had to do in response to them?

Posted by: Mike Shulman on May 17, 2021 5:31 AM | Permalink | Reply to this
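The server reading of the non-dependent M-type in the comment above can be sketched in Python (an illustration only; `Server`, `receive`, `send` and `unfold_server` are names chosen for this sketch, mirroring the comment’s $receive$, $send$ and corecursor):

```python
# An M-type read as a "server": receive yields an A; send takes a B and
# moves to a new internal state. The corecursor unfold_server specifies a
# server by a state set S with r : S → A and t : S × B → S.

class Server:
    def __init__(self, state, r, t):
        self._state = state
        self._r = r
        self._t = t

    def receive(self):          # receive : M → A
        return self._r(self._state)

    def send(self, b):          # send : M → B → M (returns the new server)
        return Server(self._t(self._state, b), self._r, self._t)

def unfold_server(r, t, initial_state):
    """The corecursor: build a server from an internal state."""
    return Server(initial_state, r, t)

# Example: a counter server. Its internal state is an integer; receiving
# reads the count, sending an integer adds it to the count.
counter = unfold_server(lambda s: s, lambda s, b: s + b, 0)
```

Here `counter.send(5).send(3).receive()` yields `8`, while the original `counter` is untouched; two servers with different state sets but the same input/output behavior are identified by coinduction, exactly as the comment describes.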

### Re: Structure vs. Observation

I agree, it’s an amazing quote. This perspective on computer programs is also partly explored in section 6.9 of Jacobs’ Introduction to Coalgebra book (note that a draft version is available online).

I can recall from my only undergrad TCS class, which touched a bit on automata and Turing Machines, that an “algorithm” always needs to terminate. I never quite understood why. As you point out, servers do not necessarily terminate, so why the big fuss with termination?

Of course, it is no coincidence that inductively defined functions necessarily terminate, that mathematicians are more familiar with induction and that all algorithms must be terminating. Like you said, and for whatever reason, coinductive types tend to be dismissed. For instance, there could be a common sentiment that the (coalgebraic) behavior of a machine is simply an implementation detail of, or a side-effect of, an inductively defined function.

Posted by: Stelios Tsampas on June 2, 2021 12:28 AM | Permalink | Reply to this

### Re: Structure vs. Observation

The type signatures of your example coinductive type are very similar to the most basic lens type. Is this a coincidence? Or does it point to the likely proof principle to apply when reasoning about optics?

Posted by: Barnes on June 29, 2021 2:31 PM | Permalink | Reply to this

### Ivo

Interesting! I have a totally silly question: is it at all common to say “apply $x$ to $f$” for the evaluation of a function $f$ at an input $x$? I noticed you’ve used it a few times in this post. I would rather say “apply $f$ to $x$”, and so would (I think?) say most mathematicians I know… Is this what thinking in Polish notation looks like? :-)

Posted by: Ivo on May 17, 2021 12:52 PM | Permalink | Reply to this


### Re: Ivo

Posted by: Ivo on May 17, 2021 12:54 PM | Permalink | Reply to this

### Re: Structure vs. Observation

The vertical arrows on the “zip is a coalgebra homomorphism” diagram seem to be mislabelled.

Posted by: Nikita on May 25, 2021 5:55 PM | Permalink | Reply to this
Posted by: Amin on June 2, 2021 5:23 PM | Permalink | Reply to this

### Re: Structure vs. Observation

Thanks for this thought-provoking posting. I took a look at Introduction to coalgebras and it seemed to me that there are other dualities present which might of course be aspects of the one presented here.

Jacobs starts with the special case where the algebra is the action of a set of symbols on a state space and the coalgebra reflects the evolution of the state space and the labels of its internal states. In other words, this is apt for talking about the progress of a computation driven by inputs on the one hand and producing outputs on the other. In the special context of finite state machines or read-only Turing machines this is a self-duality: the languages recognised by such a machine are precisely the languages generated by such a machine. But this leads me to think about two further points. Firstly, the generation model is inherently non-deterministic: we consider all possible paths through the state space which end at an acceptance node: even if there is a single path per symbol out of a state, there is no determinism in selecting the path to be taken. Secondly, both the acceptance and the generation model have a non-deterministic analogue, in that there may be multiple paths out of a state with the same label, and again any path that leads to an acceptance node denotes validity of the sequence of labels. The beauty of the FSM theory is that these determine the same set of languages.

Thinking about non-determinism led me on to thinking about coalgebras more generally. I think it was Rota who first advocated for encoding assembly by multiplications and disassembly by comultiplications in the algebraic theory of combinatorics. From this point of view, the result of a comultiplication should be simply the sum (in a suitable sense) of all possible disassemblies of an object: in the computational setting, then, a comultiplication should yield the sum of all possible successor states of a given state.

Since a practical computation is likely to have both inputs and outputs, presumably there is some sort of consistency condition that is reflected algebraically in the existence of a bialgebra structure: is this the natural setting? To return to the FSM setting for a moment, there is an algebraic formulation of the Myhill–Nerode Theorem on languages accepted or generated by FSM in terms of the existence of a bialgebra structure: this is presented in Underwood’s Fundamentals of Hopf algebras. The title of that book reminds me that Rota would have looked not only for a bialgebra but a Hopf algebra structure in this setting. What would the antipode be? In the combinatorial setting I think of it as inclusion-exclusion or Moebius inversion, but it would be interesting to know what it means computationally.

Posted by: Richard Pinch on June 4, 2021 11:01 AM | Permalink | Reply to this

### Re: Structure vs. Observation

I’ll follow up my own comment, if I may, with a further reflection on duality in general. The original post is about a duality between two things, namely “structure” and “observation” and I start off by thinking that this means there is some sort of two-variable map from pairs of (structure,observation) to, er, something: the usual algebraic interpretation uses an exponential law to turn this into a pair of maps from structures to maps from observations to something and from observations to maps from structures to something, thus exhibiting each of the two things in terms of “the” dual of the other. Such a duality might then give rise to a Galois correspondence, in turn giving rise to some sort of closure operators. All of these seem as if they might give rise to useful ideas.

But even in the well-known algebraic setting, there’s difficulty with the concept of “the” dual. When there’s some sort of rigidity, such as for finite-dimensional vector spaces over a field or Hilbert spaces, the duality is symmetric, and this is expressed by saying that the original space is canonically isomorphic to the double dual. Otherwise, the dual seems to be, somehow, “bigger” than the original, and this works against the duality-between formulation.

We can see this in the specific case of algebras and coalgebras in the case of vector spaces over a field with additional structure. The linear dual of a coalgebra is an algebra, but to construct a coalgebra by taking the linear dual of an algebra requires an additional finiteness condition.

Is an analogous finiteness requirement reflected in the structure/observation duality?

Posted by: Richard Pinch on June 10, 2021 9:08 AM | Permalink | Reply to this
