March 25, 2018
On the Magnitude Function of Domains in Euclidean Space, II
Posted by Simon Willerton
joint post with Heiko Gimperlein and Magnus Goffeng.
In the previous post, On the Magnitude Function of Domains in Euclidean Space, I, Heiko and Magnus explained the main theorem of their paper.
(Remember that here a domain in ℝ^n means a subset equal to the closure of its interior.)
The main theorem involves the asymptotic behaviour of the magnitude function as the scale factor R tends to infinity, and also the continuation of the magnitude function to a meromorphic function on the complex numbers.
In this post we have tried to tease out some of the analytical ideas that Heiko and Magnus use in the proof of their main theorem.
Heiko and Magnus build on the work of Mark Meckes, Juan Antonio Barceló and Tony Carbery to give a recipe for calculating the magnitude function of a compact domain in ℝ^n (for n an odd integer): find the solution of a differential equation subject to boundary conditions involving certain derivatives of the function at the boundary, then integrate certain other derivatives of the solution over the boundary.
In this context, switching from one set of derivatives at the boundary to another involves what analysts call a Dirichlet-to-Neumann operator. It turns out that in order to understand the magnitude function it suffices to study this Dirichlet-to-Neumann operator (which is parametrized by the scale factor appearing in the magnitude function). The heavy machinery of semiclassical analysis can then be employed to prove properties of this parameter-dependent operator, and hence of the magnitude function.
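Very schematically (the symbols here are ours, not a statement from the paper: h_R stands for a potential function for the rescaled domain, m for the half-integer order coming from the odd dimension, and Λ(R) for the parameter-dependent Dirichlet-to-Neumann operator), the recipe has the shape:

```latex
% Schematic only: for n odd, write n = 2m - 1.
% Step 1: solve a differential equation away from the domain X.
\[
  (R^2 - \Delta)^m h_R = 0 \quad \text{on } \mathbb{R}^n \setminus X ,
  \qquad n = 2m - 1,
\]
% Step 2: the Dirichlet-to-Neumann operator trades one set of
% boundary derivatives of h_R for another.
\[
  \Lambda(R) \colon
  \bigl(\text{Dirichlet-type data of } h_R \text{ on } \partial X\bigr)
  \longmapsto
  \bigl(\text{Neumann-type data of } h_R \text{ on } \partial X\bigr),
\]
% Step 3: the magnitude function is recovered as a boundary integral
% of certain derivatives of the solution.
\[
  \mathcal{M}_X(R)
  \;=\; \text{an integral over } \partial X
  \text{ of certain derivatives of } h_R .
\]
```

The point of the post below is that the R-dependence of Λ(R) is exactly what the semiclassical machinery is built to analyse.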
We hope that some of this is explained below!
March 19, 2018
Magnitude Homology Reading Seminar, I
Posted by Simon Willerton
In Sheffield we have started a reading seminar on the recent paper of Tom Leinster and Mike Shulman Magnitude homology of enriched categories and metric spaces. The plan was to write the talks up as blog posts. Various things, including the massive strike that has been going on in universities in the UK, have meant that I’m somewhat behind with putting the first talk up. The strike also means that we haven’t had many seminars yet!
I gave the first talk, which is the one written up here. It is an introductory talk which describes the idea of categorification and the paper I wrote with Richard Hepworth on categorifying the magnitude of finite graphs; this is the idea that Tom and Mike generalized.
March 11, 2018
Stabilization of Derivators
Posted by Mike Shulman
(guest post by Ian Coley)
I recently posted a paper on the arXiv which reconstructs an old paper of Alex Heller. Heller’s Stable homotopy theories and stabilization is one of a few proto-derivator papers that are still oft-cited by those of us studying derivators — a subject absent from this website since the two papers of Mike Shulman and Kate Ponto were published in 2014! Therefore, before getting into the paper itself, it’s worth recalling what a derivator is supposed to be and do. For those interested in the long version, check out the nLab article or Moritz Groth’s excellent paper.
March 10, 2018
Cognition, Convexity, and Category Theory
Posted by John Baez
guest post by Tai-Danae Bradley and Brad Theilman
Recently in the Applied Category Theory Seminar our discussions have returned to modeling natural language, this time via Interacting Conceptual Spaces I by Joe Bolt, Bob Coecke, Fabrizio Genovese, Martha Lewis, Dan Marsden, and Robin Piedeleu. In this paper, convex algebras lie at the heart of a compositional model of cognition based on Peter Gärdenfors’ theory of conceptual spaces. We summarize the ideas in today’s post.
Sincere thanks go to Brendan Fong, Nina Otter, Fabrizio Genovese, Joseph Hirsh, and other participants of the seminar for helpful discussions and feedback.
March 4, 2018
Coarse-Graining Open Markov Processes
Posted by John Baez
Kenny Courser and I have been working hard on this paper for months:
- John Baez and Kenny Courser, Coarse-graining open Markov processes.
It may be almost done. So, it would be great if you folks could take a look and comment on it! It’s a cool mix of probability theory and double categories.
‘Coarse-graining’ is a standard method of extracting a simple Markov process from a more complicated one by identifying states. We extend coarse-graining to open Markov processes. An ‘open’ Markov process is one where probability can flow in or out of certain states called ‘inputs’ and ‘outputs’. One can build up an ordinary Markov process from smaller open pieces in two basic ways:
- composition, where we identify the outputs of one open Markov process with the inputs of another,
and
- tensoring, where we set two open Markov processes side by side.
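As a toy illustration of these two constructions (this is not the paper's formalism: `OpenMarkov`, `tensor` and `compose` are hypothetical names, and the convention that `H[i][j]` is the rate of flow from state j to state i is our assumption), one can sketch them on raw rate matrices:

```python
# A toy open Markov process: a rate matrix H plus lists of input and
# output states. Convention (our assumption): H[i][j] is the rate of
# probability flow from state j to state i.

class OpenMarkov:
    def __init__(self, H, inputs, outputs):
        self.H = H
        self.inputs = inputs
        self.outputs = outputs

def tensor(a, b):
    """Set two open Markov processes side by side:
    the block-diagonal sum of their rate matrices."""
    n, m = len(a.H), len(b.H)
    H = [[0.0] * (n + m) for _ in range(n + m)]
    for i in range(n):
        for j in range(n):
            H[i][j] = a.H[i][j]
    for i in range(m):
        for j in range(m):
            H[n + i][n + j] = b.H[i][j]
    return OpenMarkov(H,
                      a.inputs + [n + i for i in b.inputs],
                      a.outputs + [n + i for i in b.outputs])

def compose(a, b):
    """Identify the outputs of a with the inputs of b (in order);
    rates on identified states add up."""
    assert len(a.outputs) == len(b.inputs)
    n, m = len(a.H), len(b.H)
    # Map each state of b into the glued state set.
    glue = {s: a.outputs[k] for k, s in enumerate(b.inputs)}
    fresh = n
    for s in range(m):
        if s not in glue:
            glue[s] = fresh
            fresh += 1
    H = [[0.0] * fresh for _ in range(fresh)]
    for i in range(n):
        for j in range(n):
            H[i][j] += a.H[i][j]
    for i in range(m):
        for j in range(m):
            H[glue[i]][glue[j]] += b.H[i][j]
    return OpenMarkov(H, a.inputs, [glue[s] for s in b.outputs])
```

Composition glues the output states of one process to the input states of the next and adds the rates on the identified states; tensoring is just a block-diagonal sum, with the two processes not interacting at all.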
A while back, Brendan Fong, Blake Pollard and I showed that these constructions make open Markov processes into the morphisms of a symmetric monoidal category:
- A compositional framework for Markov processes, The n-Category Café, January 12, 2016.
Here Kenny and I go further by constructing a symmetric monoidal double category where the 2-morphisms include ways of coarse-graining open Markov processes. We also extend the previously defined ‘black-boxing’ functor from the category of open Markov processes to this double category.
But before you dive into the paper, let me explain all this stuff a bit more….
March 1, 2018
Univalence From Scratch
Posted by Mike Shulman
Martín Escardó has written “a self-contained, brief and complete formulation of Voevodsky’s Univalence Axiom” in English and Agda: