
March 11, 2018

Stabilization of Derivators

Posted by Mike Shulman

(guest post by Ian Coley)

I recently posted a paper to the arXiv which reconstructs an old paper of Alex Heller. Heller’s ‘Stable homotopy theories and stabilization’ is one of a few proto-derivator papers that are still oft-cited by those of us studying derivators — a subject absent from this website since the two papers of Mike Shulman and Kate Ponto were published in 2014! Therefore, before getting into the paper itself, it’s worth recalling what a derivator is supposed to be and do. For those interested in the long version, check out the nLab article or Moritz Groth’s excellent paper.
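As a quick reminder, here is the standard definition, paraphrased from Groth’s paper rather than taken from Heller’s paper or mine (some authors also restrict to a sub-2-category Dia of allowed diagram shapes): a derivator is a 2-functor

$$ \mathbb{D} \colon \mathbf{Cat}^{\mathrm{op}} \longrightarrow \mathbf{CAT} $$

satisfying four axioms: (Der1) it sends coproducts of small categories to products; (Der2) a morphism in $\mathbb{D}(A)$ is an isomorphism exactly when its restriction along every object $a \colon \mathbf{1} \to A$ is; (Der3) each restriction functor $u^* = \mathbb{D}(u)$ has a left adjoint $u_!$ and a right adjoint $u_*$, the homotopy Kan extensions; and (Der4) these Kan extensions can be computed pointwise, i.e. the Beck–Chevalley transformations associated to comma squares are invertible. The motivating example is $A \mapsto \mathrm{Ho}(\mathcal{M}^A)$ for a nice model category $\mathcal{M}$, and stabilization, roughly, asks for a universal way of turning a pointed derivator into a stable one, much as one passes from pointed spaces to spectra.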

Posted at 8:18 PM UTC | Permalink | Followups (3)

March 10, 2018

Cognition, Convexity, and Category Theory

Posted by John Baez

guest post by Tai-Danae Bradley and Brad Theilman

Recently in the Applied Category Theory Seminar our discussions have returned to modeling natural language, this time via Interacting Conceptual Spaces I by Joe Bolt, Bob Coecke, Fabrizio Genovese, Martha Lewis, Dan Marsden, and Robin Piedeleu. In this paper, convex algebras lie at the heart of a compositional model of cognition based on Peter Gärdenfors’ theory of conceptual spaces. We summarize the ideas in today’s post.
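Since convex algebras do all the work here, it may help to recall the standard definition, paraphrased (the paper’s own conventions may differ in detail): a convex algebra is an algebra of the finite-distribution monad $D$ on $\mathbf{Set}$, where

$$ D(X) \;=\; \Big\{\, p \colon X \to [0,1] \;\Big|\; p(x) \neq 0 \text{ for only finitely many } x, \; \textstyle\sum_{x} p(x) = 1 \,\Big\}, $$

so that a convex algebra is a set $A$ with a structure map $\alpha \colon D(A) \to A$, thought of as actually forming the convex combination $\sum_i p_i\, a_i$, subject to the unit and associativity laws for a monad algebra. The motivating examples are convex subsets of $\mathbb{R}^n$ with $\alpha$ the ordinary weighted average, which is how regions of a conceptual space, such as a colour space, get modelled.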

Sincere thanks go to Brendan Fong, Nina Otter, Fabrizio Genovese, Joseph Hirsh, and other participants of the seminar for helpful discussions and feedback.

Posted at 6:51 PM UTC | Permalink | Followups (11)

March 4, 2018

Coarse-Graining Open Markov Processes

Posted by John Baez

Kenny Courser and I have been working hard on this paper for months:

  • John Baez and Kenny Courser, Coarse-graining open Markov processes.

It may be almost done. So, it would be great if you folks could take a look and comment on it! It’s a cool mix of probability theory and double categories.

‘Coarse-graining’ is a standard method of extracting a simple Markov process from a more complicated one by identifying states. We extend coarse-graining to open Markov processes. An ‘open’ Markov process is one where probability can flow in or out of certain states called ‘inputs’ and ‘outputs’. One can build up an ordinary Markov process from smaller open pieces in two basic ways (both sketched concretely after this list):

  • composition, where we identify the outputs of one open Markov process with the inputs of another,


  • tensoring, where we set two open Markov processes side by side.
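To make these two operations concrete, here is a rough sketch of the usual setup, paraphrased rather than quoted: an open Markov process consists of a cospan of finite sets together with Markov dynamics on the apex,

$$ X \xrightarrow{\;i\;} S \xleftarrow{\;o\;} Y, \qquad \frac{d p(t)}{d t} \;=\; H\, p(t), $$

where $H \colon \mathbb{R}^S \to \mathbb{R}^S$ is infinitesimal stochastic (off-diagonal entries nonnegative, columns summing to zero) and the maps $i$ and $o$ pick out the input and output states through which probability may flow in or out. Composition glues the outputs of one process to the inputs of the next by taking a pushout of cospans, and tensoring takes disjoint unions; this is what makes open Markov processes the morphisms of a symmetric monoidal category, as recalled next.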

A while back, Brendan Fong, Blake Pollard and I showed that these constructions make open Markov processes into the morphisms of a symmetric monoidal category:

  • John Baez, Brendan Fong and Blake Pollard, A compositional framework for Markov processes.

Here Kenny and I go further by constructing a symmetric monoidal double category where the 2-morphisms include ways of coarse-graining open Markov processes. We also extend the previously defined ‘black-boxing’ functor from the category of open Markov processes to this double category.
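As a rough schematic, not the paper’s precise definitions: the horizontal direction of the double category carries the open Markov processes and the vertical direction the ways of comparing them, so a 2-morphism fills a square

$$ \begin{array}{ccc}
X_1 & \xrightarrow{\;M_1\;} & Y_1 \\
\big\downarrow & \Downarrow & \big\downarrow \\
X_2 & \xrightarrow{\;M_2\;} & Y_2
\end{array} $$

whose horizontal edges $M_1, M_2$ are open Markov processes, whose vertical edges are maps of finite sets, and whose filler witnesses that $M_2$ is obtained from $M_1$ by identifying states compatibly with the inputs and outputs. Black-boxing records only the steady-state behaviour visible at the inputs and outputs, and the point of extending it to the double category is that this behaviour should be unchanged by coarse-graining.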

But before you dive into the paper, let me explain all this stuff a bit more…

Posted at 12:34 AM UTC | Permalink | Followups (11)

March 1, 2018

Univalence From Scratch

Posted by Mike Shulman

Martín Escardó has written “a self-contained, brief and complete formulation of Voevodsky’s Univalence Axiom” in English and Agda.
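For those who just want the statement before opening the note, here it is in standard homotopy type theory notation (my paraphrase, not Escardó’s Agda): for types $A, B$ in a universe $\mathcal{U}$ there is a canonical map sending an identification to an equivalence, and univalence says that this map is itself an equivalence:

$$ \mathsf{idtoeqv}_{A,B} \;\colon\; (A =_{\mathcal{U}} B) \longrightarrow (A \simeq B), \qquad \mathsf{UA} \;\colon\; \prod_{A,B \colon \mathcal{U}} \mathsf{isEquiv}\big(\mathsf{idtoeqv}_{A,B}\big). $$

Making ‘equivalence’ and ‘isEquiv’ precise without circularity is exactly the care that Escardó’s note takes, which is why it is worth reading in full.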

Posted at 12:15 PM UTC | Permalink | Followups (16)