## December 4, 2023

### Magnitude 2023

#### Posted by Tom Leinster

I’m going to do something old school and live-blog a conference: Magnitude 2023, happening right now in Osaka. This is a successor to Magnitude 2019 in Edinburgh and covers all aspects of magnitude and magnitude homology, as well as hosting some talks on subjects that aren’t magnitude but feel intriguingly magnitude-adjacent.

Slides for the talks are being uploaded here.

What is magnitude? The magnitude of an enriched category is the canonical measure of its size. For instance, the magnitude of a set (as a discrete category) is its cardinality, and the magnitude of an ordinary category is the Euler characteristic of its nerve. For metric spaces, magnitude is something new, but there is a sense in which you can recover from it classical measures of size like volume, surface area and dimension.

What is magnitude homology? It’s the canonical homology theory for enriched categories. The magnitude homology of an ordinary category is the homology of its classifying space. For metric spaces, it’s something new, and has a lot to say about the existence, uniqueness and multiplicity of geodesics.

Let’s go!

Posted at December 4, 2023 10:13 PM UTC

TrackBack URL for this Entry:   https://golem.ph.utexas.edu/cgi-bin/MT-3.0/dxy-tb.fcgi/3511

### Colloquium: Introduction to magnitude

There was a kind of pre-event yesterday, a colloquium for the Osaka University mathematics department which I gave:

The many faces of magnitude

If you’ve seen me give a general talk on magnitude then many of the slides will be familiar to you. But I want to draw your attention to a new one (page 27), which is a summary of recent and current activity in magnitude homology and is meant to whet your appetite for some of the talks this week.

Here’s that slide in plain text, shorn of the photos of the people involved:

What’s happening in magnitude homology?

• There is a relationship between magnitude homology and persistent homology — but they detect different information (Nina Otter; Simon Cho)

• Applications of magnitude homology to the analysis of networks (Giuliamaria Menara)

• A theory of magnitude cohomology (Richard Hepworth)

• Connections between magnitude homology and path homology (Yasuhiko Asao)

• A comprehensive spectral sequence approach that encompasses both magnitude homology and path homology (Yasuhiko Asao; Richard Hepworth and Emily Roff; also relevant is work of Kiyonori Gomi)

• A concept of magnitude homotopy (Yu Tajima and Masahiko Yoshinaga)

• New results on magnitude homology equivalence of subsets of $\mathbb{R}^n$, involving convex geometry (joint with Adrián Doña Mateo)

• And lots more… find out this week!

Posted by: Tom Leinster on December 4, 2023 11:33 PM | Permalink | Reply to this

### Re: Colloquium: Introduction to magnitude

After my colloquium, Eiko Kin (professor here at Osaka) told me some things that I found very interesting. And she asked me the following question.

Take a surface homeomorphism that has the property of being pseudo-Anosov. (I don’t know what this means, but for the broad gist of this question it doesn’t matter.) Then one can construct from it a graph, which has something to do with the directions in which the homeomorphism expands and contracts. This is a graph with multiple edges, in general. In turn, the graph has a transition matrix: the rows and columns are the vertices, and the entries are the numbers of edges between the corresponding vertices. And the transition matrix has a largest eigenvalue. Theorem (Thurston): the largest eigenvalue is equal to the entropy of the homeomorphism we started with.
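As a toy illustration of the last step only (not the pseudo-Anosov construction itself, which is well beyond a code snippet), here's how one reads off the largest eigenvalue numerically. The matrix below is a made-up example of a transition matrix, not one arising from an actual surface homeomorphism:

```python
import numpy as np

# Hypothetical transition matrix: entry (i, j) counts the edges from
# vertex i to vertex j of the graph (multiple edges allowed).
T = np.array([[1, 1],
              [1, 0]])

# Largest eigenvalue (spectral radius); for this matrix it is the
# golden ratio, and its log plays the role of the entropy in a
# Thurston-type theorem.
lam = max(abs(np.linalg.eigvals(T)))
entropy = np.log(lam)
```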

The question was whether there could be some magnitude connection, given that (i) magnitude is related to entropy, and (ii) there is a notion of the magnitude of graphs.

Of course, I don’t know. It’s an intriguing possibility, and it would be fantastic if some connection was there.

More generally, this is a good opportunity to point out the following gap in our knowledge. We know various things about the relations between magnitude and entropy, but this is always entropy for a single, static probability distribution. We know nothing about relations with entropy in the dynamical sense, e.g. the entropy of a self-map of a topological space. This would be well worth looking into.

Posted by: Tom Leinster on December 6, 2023 12:55 PM | Permalink | Reply to this

### Re: Colloquium: Introduction to magnitude

One of the issues here is that the Markov partitions and attendant (di)graphs (or more generally stochastic matrices if you want to get quantitative and have a Markov chain versus a shift) are not unique: presumably some extremum will need to be taken or there will need to be a result that shows things don’t depend on the choice here.

For example, Figure 10 of https://arxiv.org/abs/2104.00753 shows three different Markov partitions for the cat map, which is the archetypal Anosov map (vs flow). Some subsequent figures show how these stretch across, and section VI of https://arxiv.org/abs/1009.2127 details the adjacency matrices (they are called $B^{(0)}$).

In these papers I focused on a particular class of refinements of Markov partitions of maximum entropy at a physics (and not even mathematical physics!) level of rigor. This construction appears not to have been considered elsewhere but I would be interested to be wrong about that.

Again you’ll want to tie this to Ihara zeta functions in the graphical case (and maps to Ruelle zeta functions on the original dynamics) and probably a good place to start is https://doi.org/10.1016/S0024-3795(02)00341-5

Posted by: Steve Huntsman on December 6, 2023 5:52 PM | Permalink | Reply to this

### Re: Colloquium: Introduction to magnitude

Going through some old notes of mine relevant, Parry and Williams (https://doi.org/10.1112/plms/s3-35.3.483) considered zeta functions of the form $1/\det(I-tP)$ for $P$ a stochastic matrix. I had the notion of trying to get such zeta functions to be sizes some time ago but could not find an obvious way to make that work, though taking the Kronecker sum $\boxplus$ of stochastic matrices seems interesting because $\text{spec } \boxplus_m P_m = \sum_m \text{spec } P_m$ (for this see Horn and Johnson’s Matrix Analysis, 1st ed., Thm. 4.4.5).
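The spectral property of the Kronecker sum mentioned above is easy to check numerically. A minimal sketch, with two made-up row-stochastic matrices:

```python
import numpy as np

# Two made-up row-stochastic matrices.
P1 = np.array([[0.9, 0.1],
               [0.2, 0.8]])
P2 = np.array([[0.5, 0.5],
               [0.3, 0.7]])

# Kronecker sum: P1 [+] P2 = P1 (x) I + I (x) P2.
ksum = np.kron(P1, np.eye(2)) + np.kron(np.eye(2), P2)

# Its spectrum is the set of all pairwise sums of the eigenvalues
# of P1 and P2.
eigs = np.sort(np.linalg.eigvals(ksum).real)
pairwise = np.sort([a + b for a in np.linalg.eigvals(P1).real
                          for b in np.linalg.eigvals(P2).real])
```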

Meanwhile the metric or Kolmogorov-Sinai entropy of a nice Markov chain with transition matrix $P$ is $-\sum_{jk} \pi_j P_{jk} \log P_{jk}$ where $\pi = \pi P$ is the invariant distribution. It has the nice property that the entropy of a product transformation is the sum of the component transformations, so something size-like.
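The Kolmogorov-Sinai entropy formula above is directly computable. A minimal sketch for a small (made-up) transition matrix:

```python
import numpy as np

# Hypothetical transition matrix of a two-state Markov chain.
P = np.array([[0.5, 0.5],
              [0.5, 0.5]])

# Invariant distribution: left eigenvector of P for eigenvalue 1,
# normalized to sum to 1.
vals, vecs = np.linalg.eig(P.T)
pi = vecs[:, np.argmin(abs(vals - 1))].real
pi /= pi.sum()

# Kolmogorov-Sinai entropy: -sum_{jk} pi_j P_jk log P_jk
# (zero entries of P contribute 0 to the sum).
h = -np.sum(pi[:, None] * P * np.log(np.where(P > 0, P, 1.0)))
```

For this chain the entropy is log 2, as expected for a fair coin flip at each step.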

Posted by: Steve Huntsman on December 7, 2023 7:07 PM | Permalink | Reply to this

### Re: Colloquium: Introduction to magnitude

By the way: has anyone tried to make genera of multiplicative sequences work as size functions?

Posted by: Steve Huntsman on January 16, 2024 2:02 PM | Permalink | Reply to this

### Re: Colloquium: Introduction to magnitude

Interesting idea (link to definition). Not to my knowledge.

Posted by: Tom Leinster on January 16, 2024 4:34 PM | Permalink | Reply to this

### Re: Colloquium: Introduction to magnitude

I guess this would only work for magnitude vs (co)homology because the unit (= empty set) can’t be terminal: any manifold with more than one connected component can have more than one morphism to the empty set. Think of the disjoint union of circles: we can either make two cups or a U-pipe. So Cob isn’t semicartesian (unless I’m mistaken, which can easily happen to me in these sorts of realms).

That said, enriching over Cob seems interesting from the point of view of TQFTs and their ilk. Rather Atiyah-ish on at least two counts.

Posted by: Steve Huntsman on January 16, 2024 9:46 PM | Permalink | Reply to this

### Emily Roff: Iterated magnitude homology

The first talk is by Emily Roff, based on her paper Iterated magnitude homology.

Emily began by reviewing some of the foundations.

The setup for magnitude is this:

• a monoidal category $V$
• a ring $K$
• a “size” homomorphism $|\cdot|: (ob(V),\otimes) \to (K, \cdot)$

and with some linear algebra, we get a notion of the magnitude of a finite $V$-category.

Categorifying, the setup for magnitude homology is this:

• a semicartesian monoidal category $V$
• a monoidal abelian category $A$
• a strong symmetric monoidal functor $\Sigma: V \to A$ (a “size functor”)

and with some homological algebra, we get a notion of the magnitude homology of a $V$-category (not necessarily finite).

The magnitude homology functor $V Cat \to A^{\mathbb{N}}$ is the composite of three functors:

$V Cat \stackrel{M B^\Sigma}{\to} [\Delta^{op}, A] \stackrel{C}{\to} Ch(A) \stackrel{H_\bullet}{\to} A^{\mathbb{N}}$

Here $C$ and $H_\bullet$ are standard, off-the-shelf functors: chain complexes and homology. The first functor, $M B^\Sigma$, gives the magnitude nerve construction, which takes a $V$-category $X$ as input and produces as output the simplicial object with $n$th object

$\bigoplus_{x_0, \ldots, x_n \in X} \Sigma(X(x_0, x_1) \otimes \cdots \otimes X(x_{n - 1}, x_n)).$

Its simplicial object structure is given in the usual nervy way, but here’s where we need the restriction that the monoidal category $V$ is semicartesian: the unit object is terminal. That’s needed in order to define the first and last face maps.

Emily then told us about partially ordered groups (which include Coxeter groups with the Bruhat order) and norms on groups (e.g. any generating set on a group determines a word-length norm), which induce metrics in the usual kind of way.

For a symmetric monoidal category $V$, a $V$-group is a one-object $V$-category whose underlying ordinary category is a group. E.g. if $V$ is $Cat$, $Poset$ or $Met$ then a $V$-group is a monoid $G$ in $V$ together with a function $(-)^{-1}: U(G) \to U(G)$ making the usual diagrams commute. (Here $U$ means the underlying set of the $V$-object $G$ and $Met$ is metric spaces with the $\ell^1$ product.) This is something more general than an internal group.

• Every partially ordered group is a group enriched in $Poset$. (The inversion map is basically never monotone.)

• Every normed group with conjugation-invariant norm gives a group enriched in $Met$. (The inversion map is 1-Lipschitz if the metric is symmetric.)

• A strict 2-group, i.e. a group object in $Cat$, is a special kind of group enriched in $Cat$. Strict 2-groups classify homotopy 2-types. For instance, any normal subgroup $N$ of a group $G$ gives a strict 2-group $G_N$ whose objects are the elements of $G$ and with arrows $(k, g) : g \to kg$ for $k \in N$ and $g \in G$. Mac Lane and Whitehead proved that $\pi_1(B G_N) = G/N$.

Now onto iterated magnitude homology.

In all the cases above, $G$ has a “second-order” enrichment, i.e. it’s enriched in $V$-$Cat$ where $V$ is one of $Set$, $(0 \to 1)$ and $[0, \infty]$. Iterated magnitude homology is a homology theory of categories enriched in $V$-$Cat$.

A short digression. Any 2-category has a classifying space. There are a couple of approaches to this: Duskin’s and Street’s on the one hand, and Segal’s on the other. Bullejos and Cegarra proved that the two classifying space constructions are equivalent.

Now let $V$ be a semicartesian monoidal category and $\Sigma: V \to A$ a strong symmetric monoidal functor.

Proposition: the magnitude nerve defines a strong symmetric monoidal functor $MB^\Sigma: VCat \to [\Delta^{op}, A],$

which means we can use it as a size functor!

The double magnitude nerve of a $(V Cat)$-category $X$ is $MB^{MB^\Sigma}(X) \in [\Delta^{op} \times \Delta^{op}, A].$ The iterated magnitude nerve of $X$ is the diagonal part of this bisimplicial object, and its homology is called the iterated magnitude homology $IMH_\bullet(X)$. (The role of the diagonal here recalls the Segal construction mentioned above.)

Examples/theorems:

• For a $Cat$-group $G$, we have $IMH_1(G) = \pi(G)_{ab}$.

• For a normal subgroup of a group $G$, we have $IMH_1(G_N) = (G/N)_{ab}$

• For a preordered group $G$, we have $IMH_1(G) = (G/P_\sim)_{ab}$, where $P_\sim$ is the subgroup of elements $g$ such that $g \leq e \leq g$.

• Take a $Met$-group $G = (G, d)$. An element $g$ of $G$ is primitive if there is nothing strictly between it and the identity $e$ (“between” in the sense of the triangle identity). In length grading $0$, we have $IMH^0_\bullet(G) \cong H_\bullet(G)$ (the ordinary group homology), and in length gradings $\ell \gt 0$, the abelian group $IMH^\ell_2(G)$ is freely generated by the conjugacy classes of primitive elements of norm $\ell$.

Summary: iterated magnitude homology captures “the homology of a classifying space”.

And for a group with a conjugacy-invariant norm, the iterated magnitude homology is sensitive to both the topology of the classifying space and the geometry of the group under the norm.

Posted by: Tom Leinster on December 5, 2023 1:29 AM | Permalink | Reply to this

### Re: Emily Roff: Iterated magnitude homology

Well, that turned out quite long. I suspect future ones will be shorter, which is probably a good thing.

Posted by: Tom Leinster on December 5, 2023 1:41 AM | Permalink | Reply to this

### Re: Emily Roff: Iterated magnitude homology

Tiny correction: the inversion map in a metrically enriched group is 1-Lipschitz provided the metric is symmetric. However, the map that pairs an element with its inverse is not. (This is really just saying that the ell_1 product is not cartesian, i.e. doesn’t have 1-Lipschitz diagonals.)

Posted by: Emily Roff on December 9, 2023 12:35 AM | Permalink | Reply to this

### Re: Emily Roff: Iterated magnitude homology

Thanks. Fixed.

Posted by: Tom Leinster on December 10, 2023 9:47 AM | Permalink | Reply to this

### Mark Meckes: Inequalities for magnitude and maximum diversity

From homological algebra to analysis, with Café regular Mark Meckes.

Mark began with some classical geometric inequalities…

• The isoperimetric inequality: for nice $A \subseteq \mathbb{R}^n$, the ratio $vol_n(A)/(vol_{n-1}(\partial A))^{n/(n-1)}$ is maximized when $A$ is a Euclidean ball.

• The Brunn-Minkowski inequality: for nice $A, B \subseteq \mathbb{R}^n$, $vol_n(A + B)^{1/n} \geq vol_n(A)^{1/n} + vol_n(B)^{1/n}$ or equivalently $vol_n(\lambda A + (1 - \lambda) B) \geq vol_n(A)^\lambda vol_n(B)^{1 - \lambda}.$
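A quick numerical sanity check of Brunn-Minkowski for axis-aligned boxes (where the Minkowski sum is just the box with summed side lengths, and the side lengths below are made up):

```python
import numpy as np

n = 3
a = np.array([1.0, 2.0, 3.0])   # side lengths of box A (made up)
b = np.array([0.5, 4.0, 1.0])   # side lengths of box B (made up)

vol = np.prod
# Minkowski sum of axis-aligned boxes: sides add coordinatewise.
lhs = vol(a + b) ** (1 / n)
rhs = vol(a) ** (1 / n) + vol(b) ** (1 / n)
```

Equality holds when the boxes are homothetic, e.g. `b = 2 * a`.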

Notation: $A$ is a nonempty compact metric space, $M(A)$ is the set of signed Borel measures on $A$, and $P(A)$ is the set of Borel probability measures on $A$. For a signed measure $\mu$ and $x \in A$, write $Z\mu(x) = \int_A e^{-d(x, y)} d\mu(y).$

We’ll use two mutually consistent definitions of magnitude:

• If there exists $\mu \in M(A)$ such that $Z\mu$ has constant value 1 (a weight measure) then $Mag(A) = \mu(A)$.

• If $A$ is of “negative type” then $Mag(A) = \sup_{\mu \in M(A)} \mu(A)^2 /\int Z\mu \, d\mu.$ This is also equal to $\inf \|f\|^2_H$ where the inf is over functions $f$ that have constant value 1 on $A$, $H$ is a particular function space on $X \supseteq A$, and details are omitted.

The maximum diversity of $A$ is $Div(A) = \sup_{\mu \in P(A)} \Biggl( \int_A (Z\mu)^{q - 1} d\mu \Biggr)^{1/(1 - q)}$ where $q \in [0, \infty]$ and the right-hand side turns out to be independent of $q$ (a theorem of Emily Roff and myself; the finite case is due to Mark and myself).
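For a finite metric space with a weight measure, the first definition reduces to linear algebra: solve $Z w = 1$ and sum the weights. A minimal sketch of that computation:

```python
import numpy as np

def magnitude(D):
    """Magnitude of a finite metric space with distance matrix D:
    solve Z w = 1 for the weighting w, where Z_ij = exp(-d(i, j)),
    and return the sum of the weights."""
    Z = np.exp(-np.asarray(D, dtype=float))
    w = np.linalg.solve(Z, np.ones(len(D)))
    return w.sum()

# Two points at distance d have magnitude 2 / (1 + e^{-d}),
# interpolating between 1 (as d -> 0) and 2 (as d -> infinity).
m = magnitude([[0.0, 1.0], [1.0, 0.0]])
```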

Basic, easy inequalities:

• If $A \subseteq B$ then $1 \leq Div(A) \leq Div(B) \lt e^{diam(B)}$.

• If $A \subseteq B$ and $B$ is of negative type then $1 \leq Mag(A) \leq Mag(B)$.

• The fundamental inequality: if $A$ is of negative type then $Div(A) \leq Mag(A)$.

• Magnitude is lower semicontinuous on spaces of negative type: if $A_k \to A$ in the Gromov-Hausdorff topology then $Mag(A) \leq liminf Mag(A_k).$ (There are examples where this inequality is strict.)

Lower bounds for maximum diversity

• It’s pretty straightforward to show that $Div(A) \geq \frac{vol(A)}{n!vol(B)}$ where $A$ is a subset of an $n$-dimensional normed space, whose unit ball is called $B$.

• The packing number $P(A, \varepsilon)$ is a notion of the “effective number of points” at a particular scale: the largest number of points at distance $\geq \varepsilon$ from each other. Then $Div(A) \geq \frac{P(A, \varepsilon)}{1 + P(A, \varepsilon)e^{-\varepsilon}}.$

Upper bounds for maximum diversity

• Aishwarya, Li, Madiman and Meckes (in preparation): if $A = \bigcup_i A_i$ then $Div(A) \leq \sum_i Div(A_i)$.

• The covering number $N(A, \varepsilon)$ is the smallest cardinality of an $\varepsilon$-net. The previous bullet point implies that $Div(A) \leq N(A, \varepsilon)e^{2\varepsilon}.$

• Packing and covering numbers are basically the same: $P(A, 2\varepsilon) \leq N(A, \varepsilon) \leq P(A, \varepsilon)$. From this we get that the growth rate of $Div(t A)$ as $t \to \infty$ is the Minkowski dimension of $A$. We also get an expression in terms of maximum diversity for $\sup_{\varepsilon \geq 0} \varepsilon\sqrt{\log(N(A, \varepsilon))}$.

Lower bounds for magnitude

Assume negative type now. Magnitude has been bounded below in two broad ways:

• $Mag(A) \geq Div(A)$, combined with lower bounds for $Div(A)$.

• Use monotonicity wrt inclusion, combined with exact computation via weight measures. E.g. in $\ell_1^n$, $Mag \Biggl( \bigcup_{i = 1}^n [0, a_i] e_i \Biggr) = 1 + 2\sum a_i,$ so if $\sum_i a_i = \infty$ then in $\ell_1$, $Mag \Biggl( \bigcup_{i = 1}^\infty [0, a_i] e_i \Biggr) = \infty$ (a simple example of a compact negative type space with infinite magnitude).

Upper bounds for magnitude

In principle, we have the bound $Mag(A) \leq \|f\|_H^2$ for any $f \in H$ with $f|_A \equiv 1$. But this is hard to implement in practice; the norm is a complicated Sobolev-type norm. Mostly, it has been used for asymptotic results such as those of Barceló and Carbery, of Leinster and Meckes, and (sharpest of all) of Gimperlein, Goffeng and Louca.

The major exception: for a subset $A$ of Euclidean space $\mathbb{R}^n$, $Div(A) \leq Mag(A) \leq c_n Div(A)$ where $c_n$ is a dimensional constant. This is a deep result in potential theory about the relationship between two different families of capacities. Meckes deduced that the growth rate of the magnitude of $t A$ as $t \to \infty$ is also the Minkowski dimension of $A$.

The most powerful way of bounding magnitude above has been to approximate $A$ by spaces $A_k$ with weight measures, then use lower semicontinuity. And one can iterate this idea, approximating the $A_k$ in the same way too.

To say more, we need the idea of intrinsic volumes on an $n$-dimensional normed space of negative type. There’s a Hadwiger-type theorem in this generality, giving well-defined intrinsic volumes $\mu_0, \ldots, \mu_n$ called the Holmes-Thompson intrinsic volumes. (There are some details here regarding choice of normalization that I’m not attempting to keep track of.)

• Take the normed space $\ell_1^n$. For compact, $\ell_1$-convex sets $K$, $Mag(K) \leq \sum_{i = 0}^n 4^{-i} \mu_i(K) = vol([0, 1]^n + \tfrac{1}{2}K).$

• Next approximate an $n$-dimensional subspace of $L_1 = L_1[0, 1]$ by subspaces of $\ell_1^N$. For an $n$-dimensional normed space $E$ of negative type and compact convex $K \subseteq E$, $Mag(K) \leq \sum_{i = 0}^n (1/4)^i \mu_i(K) \leq \exp(\mu_1(K)/4)$ and there’s a slightly sharper estimate in the special case of $K \subseteq \ell_2^n$, using the Alexandrov-Fenchel inequalities.

Applications:

Let $E$ be a finite-dimensional normed space of negative type.

• If $A$ is a compact subset of $E$ then $Mag(t A)$ is always finite and $Mag(t A) \to 1$ as $t \to 0$ (the one-point property).

• If $K$ is a compact convex subset of $E$ then $Mag(t K)$ is differentiable at $0$, and in fact $\limsup_{t \to 0} \frac{Mag(t K) - 1}{t} \leq \tfrac{1}{4} \mu_1(K).$ There are some known cases where the limit exists and is $\tfrac{1}{4}\mu_1(K)$, in which case that’s the derivative at $t = 0$ of $Mag(t K)$.

• There is another application to Sudakov minoration. Using the bound on $\varepsilon \sqrt{\log N(K, \varepsilon)}$ mentioned earlier, Meckes gets a new proof of Sudakov’s minoration inequality.

• He also gets a short new proof of a special case of Mahler’s conjecture, making the assumption of negative type: $vol(B) vol(B^\circ) \geq 4^n/n!$ for the unit ball $B$ in an $n$-dimensional normed space of negative type. (Here $B^\circ$ is the polar set of $B$.)

There is also a Brunn-Minkowski-type inequality for maximum diversity, at least in one-dimensional space $\mathbb{R}^1$. In higher dimensions, there are only partial results known now. E.g. for subsets $A$ and $B$ of Euclidean space and $0 \leq \lambda \leq 1$, $Div((1 - \lambda)A + \lambda B) \geq Div(A/\sqrt{2})^{1 - \lambda} Div(B/\sqrt{2})^\lambda.$

Open questions

Mark finished with a bunch of open questions. There are some really basic unknowns here!

• Consider spaces $A$ of negative type. If $t \gt 1$, is $Mag(t A) \geq Mag(A)$? Is $Mag(A) \leq \# A$? If $t \lt 1$, is $Mag(t A) \geq Mag(A)^t$ (cf. Brunn-Minkowski)?

• In a normed space of negative type, is magnitude continuous on compact convex sets? We know it’s continuous in full dimension and at a point. But Mark suspects it’s false in general.

• Does $Mag(tK)$ have derivative $\mu_1(K)/4$ at $t = 0$, for compact $K$ in a finite-dimensional linear subspace $E \subseteq L_1$?

• What about a maximum diversity approach to the full Mahler conjecture?

• What about Brunn-Minkowski in arbitrary normed spaces, for either magnitude or maximum diversity?

• Can we get any inequalities from magnitude homology? This could be a powerful method!

Posted by: Tom Leinster on December 5, 2023 3:08 AM | Permalink | Reply to this

### Re: Mark Meckes: Inequalities for magnitude and maximum diversity

What an intense blog-comment! I’ve seen people publish papers with less content than this.

Looks like a great conference.

Posted by: John Baez on December 5, 2023 11:13 AM | Permalink | Reply to this

### Re: Mark Meckes: Inequalities for magnitude and maximum diversity

Ha, yes, I don’t think I’ve mastered the art of live-blogging. But Mark’s talk was great, and I didn’t want to miss anything!

It is a great conference, with a really positive and friendly atmosphere.

Posted by: Tom Leinster on December 5, 2023 1:03 PM | Permalink | Reply to this

### Re: Mark Meckes: Inequalities for magnitude and maximum diversity

One tiny correction: the Brunn–Minkowski-type inequality with the $1/\sqrt{2}$ holds in Euclidean space of any dimension, not just the plane.

Posted by: Mark Meckes on December 6, 2023 9:10 PM | Permalink | Reply to this

### Re: Mark Meckes: Inequalities for magnitude and maximum diversity

Thanks, Mark. Fixed now.

Posted by: Tom Leinster on December 6, 2023 11:28 PM | Permalink | Reply to this

### Re: Mark Meckes: Inequalities for magnitude and maximum diversity

A recent update to one of the issues discussed in my talk: Emily Roff and Masahiko Yoshinaga have shown that generic finite metric spaces satisfy the one-point property.

Posted by: Mark Meckes on January 2, 2024 3:55 PM | Permalink | Reply to this

### Asuka Takatsu: Geometry of sliced/disintegrated Monge-Kantorovich metrics

The first talk not on magnitude is by Asuka Takatsu, in an area where optimal transport meets metric geometry. It represents joint work with J. Kitagawa.

Notation: $P(X)$ is the set of probability measures on a complete, separable metric space $X$.

The Monge-Kantorovich metric is also known as the Wasserstein metric. Data scientists are interested in it, but it’s expensive to compute. To fix this, one possibility is to use “sliced” M-K metrics.

For $1 \leq p \lt \infty$ and $1 \leq q \leq \infty$, we’ll define the M-K distance $MK_{p, q}$ on the space $P_p(\mathbb{R}^n)$ of measures with finite $p$th moment (i.e. $\int |x|^p d\mu(x)$ is finite). With this metric, $P_p(\mathbb{R}^n)$ is complete and separable, and it’s geodesic iff either $n = 1$ or $p = 1$.

We’re also going to consider the space $P(S^{n - 1} \times \mathbb{R})$, and the embedding of $\mathbb{R}^n$ into $S^{n - 1} \times \mathbb{R}$. For a probability measure $\sigma$ on $S^{n - 1} \times \mathbb{R}$, there’s an M-K metric $MK_{p, q}^\sigma$ on $P_{p, q}^\sigma(S^{n - 1} \times \mathbb{R})$, which is complete, separable (if $q \lt \infty$), and always geodesic.

The classical M-K distance $MK_p$ first arose in optimal transport. For probability measures $\mu$ and $\nu$ on $X$, we have the set of couplings between them, $\Pi(\mu, \nu) = \{ \pi \in P(X \times X): \pi(A \times X) = \mu(A) \ \text{and} \ \pi(X \times A) = \nu(A) \ \text{for all measurable } A\}.$ For a cost function $c: X \times X \to \mathbb{R}$, the total cost of moving from $\mu$ to $\nu$ according to $\pi$ is $\int_{X \times X} c(x, y) d\pi(x, y)$, and the optimal transport problem is to minimize the total cost over $\pi \in \Pi(\mu, \nu)$. We put $MK_p(\mu, \nu) = \inf_{\pi \in \Pi(\mu, \nu)} \biggl( \int_{X \times X} d(x, y)^p d\pi(x, y) \biggr)^{1/p}.$ This is a metric on $P_p(X)$. The inf is achieved. Also, the metric space $(P_p(X), MK_p)$ is complete and separable, and it’s geodesic if $X$ is.
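For finitely supported measures, the minimization over couplings is a linear program. A minimal sketch with scipy (the point locations and weights below are made up):

```python
import numpy as np
from scipy.optimize import linprog

x = np.array([0.0, 1.0])          # support of mu (made up)
y = np.array([2.0, 3.0])          # support of nu (made up)
mu = np.array([0.5, 0.5])
nu = np.array([0.5, 0.5])

C = np.abs(x[:, None] - y[None, :])   # cost |x - y|, i.e. p = 1

# A coupling pi is a nonnegative matrix with row sums mu and
# column sums nu; we flatten it row-major for the LP solver.
m, n = len(x), len(y)
A_eq = np.vstack([np.kron(np.eye(m), np.ones(n)),    # row sums
                  np.kron(np.ones(m), np.eye(n))])   # column sums
b_eq = np.concatenate([mu, nu])
res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, method="highs")
MK1 = res.fun   # the MK_1 (Wasserstein-1) distance
```

Here every coupling must move each half-unit of mass a total distance of 2, so the optimal cost is 2.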

(Categorical point: the “gluing lemma” says there’s a canonical map $\Pi(\mu, \nu) \times \Pi(\nu, \lambda) \to \Pi(\mu, \lambda).$ It’s associative… and indeed, we have a category here. There may be some analytic/measure-theoretic requirements to get this to work. Mark Meckes tells me that Tobias Fritz calls this the “category of couplings”, and that Simon Willerton has pondered it too.)

Examples

• If $\mu = \tfrac{1}{n} \sum_{i = 1}^n \delta_{x_i}$ and $\nu = \tfrac{1}{n} \sum_i \delta_{y_i}$ then the minimizer is $\tfrac{1}{n} \sum_i \delta_{(x_i, y_{\sigma(i)})}$ for some permutation $\sigma$.

• Let $\mu, \nu \in P(\mathbb{R})$. Then we transport $x$ to $y$ whenever $\mu(-\infty, x] = \nu(-\infty, y]$. (I guess this assumes $\mu$ and $\nu$ always give nonzero measures to nonempty open sets, otherwise that doesn’t make sense.)
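In one dimension, the monotone (quantile-to-quantile) coupling in the second example makes $MK_1$ computable by sorting, for equal-weight empirical measures. A sketch, with made-up samples, checked against scipy’s implementation:

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Equal-weight empirical measures on the line (samples made up).
u = np.array([0.0, 1.0, 4.0])
v = np.array([2.0, 3.0, 5.0])

# Monotone coupling: match the i-th smallest point of u to the
# i-th smallest point of v.
mk1_sorted = np.mean(np.abs(np.sort(u) - np.sort(v)))

# scipy computes the same MK_1 distance.
mk1_scipy = wasserstein_distance(u, v)
```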

This was just the warmup before Asuka got onto the main topic, sliced M-K metrics. For $\omega \in S^{n - 1}$, define $R^\omega = \langle -, \omega \rangle: \mathbb{R}^n \to \mathbb{R}.$ Then for $\mu \in P(\mathbb{R}^n)$, we get the pushforward measure $R^\omega_\#\mu \in P(\mathbb{R})$. (Asuka drew a picture here justifying the word “sliced”.) And now put $MK_{p, q}(\mu, \nu) = \|MK_p(R_\#^\bullet \mu, R_\#^\bullet \nu)\|_{L^q(\sigma)}$ where $\sigma$ is the uniform probability measure on $S^{n - 1}$. That is, unwinding the notation, $MK_{p, q}(\mu, \nu) = \biggl( \int_{S^{n -1}} MK_p(R_\#^\omega \mu, R_\#^\omega \nu)^q d\sigma(\omega) \biggr)^{1/q}.$ So if $q = 1$, then $MK_{p, q}(\mu, \nu)$ is the expected $MK_p$ distance between $\mu$ and $\nu$ after you project them down to a random line in $\mathbb{R}^n$. (This reminds me of mean width.)
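The $q = 1$ case can be estimated by Monte Carlo: project onto random directions and average the 1D distances. A minimal sketch for equal-weight point clouds (the clouds, sample count and seed below are all made up):

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

def sliced_mk1(X, Y, n_dirs=500):
    """Monte Carlo estimate of the sliced distance MK_{1,1} between
    equal-weight empirical measures on point clouds X, Y in R^n:
    average the 1D MK_1 distance of the projections onto random
    directions omega drawn uniformly from the sphere."""
    n = X.shape[1]
    total = 0.0
    for _ in range(n_dirs):
        omega = rng.normal(size=n)
        omega /= np.linalg.norm(omega)   # uniform direction on S^{n-1}
        total += wasserstein_distance(X @ omega, Y @ omega)
    return total / n_dirs

X = rng.normal(size=(50, 3))         # made-up point cloud in R^3
Y = rng.normal(size=(50, 3)) + 2.0   # a shifted cloud
```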

$MK_{p,q}$ is a complete, separable metric on $P_p(\mathbb{R}^n)$. It’s topologically equivalent to $P_p(\mathbb{R}^n)$ with the metric $MK_p$, but not usually metrically equivalent. Typically it’s not geodesic, but it is if $n = 1$ or $p = 1$.

At this point I stopped being able to take notes, as this is new stuff to me and it took all my concentration to follow.

Posted by: Tom Leinster on December 5, 2023 6:10 AM | Permalink | Reply to this

### Re: Asuka Takatsu: Geometry of sliced/disintegrated Monge-Kantorovich metrics

Mark Meckes tells me that Tobias Fritz calls this the “category of couplings”

Specifically, this category is briefly discussed in Remark 12.10 (and appears again in a few other spots) in A synthetic approach to Markov kernels, conditional independence and theorems on sufficient statistics. In fact he discusses it in a more general setting of a Markov category which satisfies some additional axioms; those additional axioms correspond to the extra requirements needed to get the construction to work.

In the setting of Asuka’s talk, what you need is precisely the assumption of working on a complete, separable, metrizable space (aka a Polish space), which she included from the beginning. In Tobias’s paper, this means that the category of Polish spaces and measurable maps satisfies the necessary axioms.

Posted by: Mark Meckes on December 6, 2023 10:02 PM | Permalink | Reply to this

### Emerson Gaw Escolar: On interval covers and resolutions of persistence modules

The second non-magnitude talk is on a topic that seems very closely linked to magnitude homology: persistent homology. The speaker is Emerson Gaw Escolar, and let me quote the beginning of his abstract:

In topological data analysis, one-parameter persistent homology can be used to describe the topological features of data in a multiscale way via the persistence barcode, which can be formalized as a decomposition of a persistence module into interval representations. Multi-parameter persistence modules, however, do not necessarily decompose into intervals and thus (together with other reasons) are complicated to describe.

(All the abstracts are on the programme page.)

Emerson’s talk will give general results on “interval approximation” and “interval global dimension” of finite posets, and he’ll also try to relate his observations to magnitude.

Setting: $P$ is a finite poset, $A = k P$ is its incidence algebra (i.e. the category algebra), and we consider the category $[P, vect_k]$ of finite-dimensional $k$-representations of $P$, also called persistence modules over $P$. The goal is to approximate them by simpler modules, the interval-decomposable ones.

For example, if we consider a point cloud and increasing radii $r_1 \lt r_2 \lt \cdots \lt r_n$ of balls around the points, then our poset $P$ is $\bullet \to \bullet \to \cdots \to \bullet$. In this 1-dimensional situation, we can decompose into interval modules. That’s the standard structure theorem in persistent homology (the “bar code” theorem, I think it’s sometimes called).

(Emerson’s talk has lots of nice diagrams that I can’t reproduce here. The conference website has copies of slides from speakers willing to share them; maybe Emerson’s will appear there.)

Now for multiple parameters. For instance, you might have spatial parameters (e.g. radius) and a time parameter (data changing through time). Or there might be clustering depending on multiple parameters that govern how you’re doing your clustering.

In that first example, the spatiotemporal one, you have a different persistence diagram for each time point. So you need to think about how the birth/death events at one time point connect up to those at other time points. There’s no analogue of the decomposition into interval modules; we have a “wild representation type”.

Often, the poset $P$ for multiparameter situations is the grid $[m] \times [n]$ for some natural numbers $m, n$. (Here $[n] = \{0 \lt \cdots \lt n\}$.) We have at our disposal the adjointness relation $Fun([m] \times [n], vect) \cong Fun([m], Fun([n], vect)).$ But we’ll think about all finite posets $P$.

A full subposet $I$ of $P$ is an interval if it is connected and convex, in the obvious senses. (But “interval” in the poset literature means something different!) For example, there are intervals in $[m] \times [n]$ that aren’t the product of two posets of the form $[k]$. For an interval $I$, the interval representation $V_I: P \to vect_k$ is defined by $V_I(p) = k$ for all $p \in I$ and $V_I(p) = 0$ when $p \not\in I$; it’s defined on maps in the simplest possible way.

Now for approximation. Let $M$ be a persistence module. An interval-decomposable approximation of $M$ is a weakly terminal map from an interval-decomposable module to $M$. (So it’s something a bit like a projective cover.)

You can then consider “interval resolutions”: diagrams

$\cdots \to X_2 \stackrel{f_2}{\to} X_1 \stackrel{f_1}{\to} X_0 \stackrel{f_0}{\to} M \to 0$

such that $f_0$ is an interval approximation of $M$ and $f_i$ is an interval approximation of $ker f_{i - 1}$. And then you can talk about minimal interval resolutions, projective dimension of $M$, and global dimension of $k P$. I’ll skip some stuff here, but the path leads us deeper into representation theory and, in particular, the theory of dimensions of various types.

Relative homological algebra plays an increasing role in persistence, which Emerson mentioned briefly before skipping forward to interval global dimension of finite posets. He and his collaborators have computed the interval global dimension of $[m] \times [n]$ for small $m$ and $n$, and there are some interesting patterns, conjectures and theorems about monotonicity and stabilization.

Monotonicity theorem: if $Q$ is a full subposet of $P$ then $intgldim(Q) \leq intgldim(P)$. The proof uses the “intermediate extension” construction of Kuhn, Beilinson, Bernstein and Deligne, which is some analogue of convex hull and is a map from $vect^Q$ to $vect^P$ (not induction or coinduction).

Now, what about magnitude?

Is magnitude a measure of complexity, from the point of view of intervals?

Let $P$ be a finite poset. Form the full subcategory of interval representations of $P$. This is a finite category. What is its magnitude? Here we’re talking about the magnitude of the linear category $Intervals(P)$ of interval representations of $P$, i.e. the Vect-enriched category. What about the category of indecomposable projectives?

A paper by Joe Chuang, Alastair King and myself helps to answer the second question, with a formula for the magnitude of the linear category of indecomposable projectives over a suitably finite algebra. Also, for a finite poset $P$, $Mag_\#(P) = Mag(IndecProj(k P))$ where the LHS is the magnitude/Euler characteristic as a category, and the RHS treats $IndecProj(k P)$ as a linear category. This answers the second question.

For the first question, I fell behind in taking notes, unfortunately.

It looks like the magnitude of $Intervals(0 \to \cdots \to n)$ is $n$, tested numerically for $n \leq 10$.

Posted by: Tom Leinster on December 5, 2023 7:25 AM | Permalink | Reply to this

### Tom Leinster: Magnitude homology equivalence of Euclidean sets

The last talk of the day is by me, on joint work with Adrián Doña Mateo.

Since I can’t very well live-blog my own talk, I’ll just drop a link to my slides and copy my abstract:

I will present a theorem describing when two closed subsets $X$ and $Y$ of Euclidean space have the same magnitude homology in the following strong sense: there are distance-decreasing maps $X \to Y$ and $Y \to X$ inducing mutually inverse isomorphisms in positive-degree magnitude homology. The theorem states that this is equivalent to a certain concrete geometric condition. This condition involves the notion of the “inner boundary” of a metric space, which consists of those points adjacent to some other point. The proof of the theorem builds on Kaneta and Yoshinaga’s structure theorem for magnitude homology.

Posted by: Tom Leinster on December 5, 2023 7:29 AM | Permalink | Reply to this

### Greybox fuzzing time-intensive programs

Wow, much food for thought.

One perhaps exotic-sounding application that I just put on arXiv last week (so too late to be noticed) is to help find inputs that reach particular parts of a program’s control flow graph in cases that typical “fuzzing” tools are poorly equipped to handle: https://arxiv.org/abs/2311.17200

Posted by: Steve Huntsman on December 5, 2023 1:34 PM | Permalink | Reply to this

### Masahiko Yoshinaga: Magnitude homotopy type

The first talk on Wednesday is by our generous host Masahiko Yoshinaga, and it’s one I’ve been especially looking forward to: the notion of magnitude homotopy. It’s based on a paper of his with Yu Tajima, and the full title is “Magnitude homotopy type: a combinatorial-topological approach to magnitude homology”.

The idea is to study magnitude homology groups via combinatorial topology, and more particularly, discrete Morse theory.

Starting from a poset $P$, we get its order complex, the simplicial complex $\Delta = \{(p_0 \lt \cdots \lt p_n): n \geq 0, p_i \in P\}$, hence a space $|\Delta|$.

Short review of discrete Morse theory. Take a set $V$ of vertices and a set $S \subseteq 2^V$. Then take $M \subseteq \{(\sigma, \tau) \in S \times S: \sigma \subset \tau, |\tau \setminus \sigma| = 1\}.$ We call $M$ an acyclic matching if:

• each $\sigma \in S$ appears in at most one pair of $M$
• there is no cycle $\sigma_1 \vdash \tau_1 \succ \sigma_2 \vdash \tau_2 \succ \cdots \vdash \tau_n \succ \sigma_1$, where $\sigma_i \vdash \tau_i$ means $(\sigma_i, \tau_i) \in M$ and $\tau_i \succ \sigma_{i + 1}$ means that $\tau_i \supset \sigma_{i + 1}$ and $|\tau_i \setminus \sigma_{i + 1}| = 1$.

We write $C = \{\sigma \in S: \sigma \text{ does not appear in } M\}$ and this is called the set of critical cells.

The fundamental theorem of discrete Morse theory says that $|S|$ is homotopy equivalent to a CW complex constructed from the critical cells.

Example Take a triangle (empty) with vertices called 1, 2, 3. Its face poset $P$ has elements 1, 2, 3, 12, 13, 23 with the evident inequalities. Take $1 \vdash 12$ and $2 \vdash 23$. Then $3$ and $13$ are critical, and $P$ is homotopy equivalent to a circle with a point marked on it; the point is $3$ and the rest of the circle is $13$. (Masahiko drew some diagrams here, and this is my poor effort to put them into text.)

That’s the review of discrete Morse theory.

Now take a quasi-metric space $(X, d)$, i.e. a non-symmetric metric space. A directed graph is a typical example. We sometimes allow $\infty$ as a distance.

We’ll construct a poset suitable for the study of magnitude homology, the “causal order”. Masahiko wrote the words “potential on space-time” and then, as a motivating example, drew a very charming diagram.

There are some similar diagrams in his paper with Tajima; they’re space-time diagrams showing causal relationships and “cones of influence” or light-cones.

The causal order on $X \times \mathbb{R}$ is defined by $(x_1, t_1) \lt (x_2, t_2) \iff t_1 \lt t_2 \text{ and } d(x_1, x_2) \leq t_2 - t_1.$ Magnitude homology can be defined purely in terms of this poset.

Now we define causal intervals. Let $a, b \in X$ and $\ell \geq 0$. Write $Cau^\ell(X: a, b) = [(a, 0), (b, \ell)]$ where the square brackets mean the interval in the causal order.
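Since these definitions are so concrete, the causal interval is easy to explore computationally. Here is a small Python sketch (my own illustration, not code from the talk) that enumerates the integer-coordinate points of $Cau^4(\mathbb{Z}: p_0, p_0)$ and prints the resulting "light cone" diamond; the restriction to integer times is an assumption made just for the picture.

```python
def causal_leq(p, q, d):
    """Causal order on X x R: p <= q iff p = q, or p is strictly
    earlier and q lies in the forward cone of p."""
    (x1, t1), (x2, t2) = p, q
    return p == q or (t1 < t2 and d(x1, x2) <= t2 - t1)

d = lambda x, y: abs(x - y)   # X = Z with its usual metric

ell, a, b = 4, 0, 0
# integer-coordinate points of the causal interval [(a, 0), (b, ell)]
interval = [(x, t) for t in range(ell + 1) for x in range(-ell, ell + 1)
            if causal_leq((a, 0), (x, t), d) and causal_leq((x, t), (b, ell), d)]

# draw the light-cone diamond, latest time at the top
for t in range(ell, -1, -1):
    print(''.join('*' if (x, t) in interval else '.' for x in range(-4, 5)))
```

The printed picture is the diamond-shaped region where the forward cone of $(a, 0)$ meets the backward cone of $(b, \ell)$.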

(Masahiko also made some remarks on the physics angle, e.g. time-like and light-like relations in Minkowski geometry. The notion of causal relation goes back to a 1967 paper of Kronheimer and Penrose. There’s also a relevant 2018 paper of Kunzinger and Sämann on causal spaces, I guess this one.)

Next, magnitude homotopy type!

For $a, b \in X$, we have the order complex $\Delta Cau^\ell(a, b)$, i.e. the set of chains in $Cau^\ell(a, b)$. It has a subcomplex $\Delta'$ consisting of the chains $(x_0, t_0) \lt \cdots \lt (x_n, t_n)$ such that $d(x_0, \ldots, x_n) \lt \ell.$ (Here $d(x_0, \ldots, x_n)$ means $\sum d(x_{i-1}, x_i)$.) The magnitude homotopy type is the quotient simplicial complex $M^\ell(X: a, b) = \Delta Cau^\ell(X: a, b)/\Delta',$ i.e. we collapse $\Delta'$ to a single point. Or we can consider everything as a space and take $|\Delta Cau^\ell(X: a, b)|/|\Delta'|.$ (Alternative point of view: consider the pair $(\Delta, \Delta')$.)

Theorem (Tajima and Yoshinaga)

• $\tilde{H}_n(M^\ell(X: a, b)) \cong MH_{n, \ell}(X: a, b)$, where $\tilde{H}$ means reduced homology and the right-hand side is defined in the obvious way so that $MH_{n, \ell}(X) = \bigoplus_{a, b \in X} MH_{n, \ell}(X: a, b)$.
• $M^\ell(X: a, b)$ is a double suspension of $\bigl|\Delta(Cau^\ell(a, b)\setminus\{(a,0), (b, \ell)\})/\Delta'\cap\cdots\bigr|$ (an object originally considered for graphs by Asao and Izumihara).

Example Let $X = \mathbb{Z}$, with points labelled as $p_n$ ($n \in \mathbb{Z}$). What is $M^4(X: p_0, p_0)$? Here the explanation absolutely needs the diagrams that Masahiko drew:

I’m afraid my notes can’t convey the dynamic live explanation here. The conclusion is that $M^4(X: p_0, p_0)$ is homotopy equivalent to $S^4 \vee S^4$, and $MH_{4, 4} \cong \mathbb{Z}^2$.

Now put $M^\ell(X) = \bigvee_{a, b \in X} M^\ell(X: a, b)$ where the wedge is a one-point sum. (The definition of $M^\ell(X: a, b)$ as a quotient $\Delta/\Delta'$ gives it a basepoint.) Then we have a functor $M^\ell: Met \to Top_\ast.$

Theorem If $X$ is finite and its similarity matrix $Z = (e^{-d(a, b)})$ is invertible then the family of reduced Euler characteristics $\Bigl( \tilde{\chi}(M^\ell(X: a, b)) \Bigr)_{a, b \in X, \ell \geq 0}$ determines the metric space $(X, d)$.

Sketch proof The inverse $Z^{-1}$ of the similarity matrix of $X$ has $(a, b)$-entry $\sum_{\ell \geq 0} \tilde{\chi}(M^\ell(a, b))e^{-\ell}.$

There’s also an inclusion-exclusion result: under conditions, $M^\ell(X \cup Y) \vee M^\ell(X \cap Y) \simeq M^\ell(X) \vee M^\ell(Y).$ And there’s a Künneth theorem too!

Posted by: Tom Leinster on December 6, 2023 1:44 AM | Permalink | Reply to this

### Jun O’Hara: Residues of manifolds

Next we have another non-magnitude talk on a topic that magnitudey people should probably be interested in: Jun O’Hara (web page) from Chiba University on residues of manifolds.

Let $X = (X, d, \mu)$ be either a closed submanifold of $\mathbb{R}^n$ or a compact body in $\mathbb{R}^n$ (“body” meaning of dimension $n$). (I think $d$ and $\mu$ are the distance function and measure on $X$.) We’re interested in the function $z \mapsto B_X(z) = \int_{X \times X} d(x, y)^z d\mu_x d\mu_y \in \mathbb{C}$ on $\mathbb{C}$. (This reminded me of capacities in potential theory, but that’s just a single integral with $x$ or $y$ fixed, and typically $z$ is real there. Here we’re integrating the capacity over the whole space.)

The function $B_X$ is holomorphic on the $z$ with real part $\gt - \dim X$, and can be analytically continued to $\mathbb{C}$ with only simple poles at some negative integers. It’s called a Brylinski beta function or the meromorphic (Riesz) energy function of $X$, and has been studied especially closely for knots.

I’m going to be briefer in these notes because the subject is less familiar to me and the talk slides are on the speaker’s web page.

We’re also interested, for a knot $K$, in the integrals $\int_{K \times K} \frac{d x d y}{|x - y|}, \qquad E(K) = \int_{K \times K} \frac{d x d y}{|x - y|^2}$ or rather their regularizations. The motivating problem: find a functional $e$ on the space of knots so that each knot type has an “optimal” representative minimizing $e$ within the knot type.

The meromorphic function $B_X$ produces two important quantities:

• Residues: $R_X(z_0)$, the residue at a pole $z_0$ of $B_X$. Examples: volume, total squared curvature, Willmore functional.

• Energy: $E_X(z)$ is $B_X(z)$ if $z$ is not a pole of $B_X$, and $\lim_{w \to z} \Bigl( B_X(w) - \frac{R_X(z)}{w - z}\Bigr)$ if $z$ is a pole. Examples: knot energies, generalized Riesz energies.

For a compact manifold $(X, d)$ and $R \in \mathbb{C} \setminus \{0\}$, we have the magnitude operator $Z_X(R)$: $u \mapsto \biggl( x \mapsto \frac{1}{R} \int_X e^{-R d(x, y)} u(y) d y \biggr).$ Notice that the “scale factor” $R$ is a complex number! This idea was crucial to the work of Gimperlein, Goffeng and Louca on magnitude, in which (for example) they discovered that the magnitude function of a suitably nice space determines its surface area and Willmore energy.

There followed a summary of how to compute residues using the coarea formula, and an explanation of a relation with integral geometry (including work with Gil Solanes in Barcelona).

Theorem (Brylinski; Fuller and Vemuri) Take a closed $m$-manifold $M$ in $\mathbb{R}^n$, with $d(x, y) = |x - y|$. Then $B_M$ has poles at $-m, -m - 2, \ldots$, the first residue at $-m$ is essentially the volume of $M$, the second residue is essentially the Willmore energy for $m = 2$ and $n = 3$ (and more generally we get something to do with mean curvature).

If $n = 2, 3$ then the intrinsic volumes (including the Euler characteristic) of a compact body are given by a linear combination of residues and relative residues. For all $n$, the inclusion-exclusion property holds for residues of compact bodies if the boundary is regular enough.

Now consider geodesic distance on a closed submanifold of $\mathbb{R}^n$ (and can generalize to Riemannian manifold). The first residue is volume, the second residue is total scalar curvature, and the third residue is something more complicated involving the Ricci tensor and total scalar curvature.

Gimperlein, Goffeng and Louca showed that the magnitude function of such a manifold has an expansion whose coefficients involve volume and total scalar curvature, among other things. There’s more about this in a guest post they wrote last year.

A point to watch out for: different manifolds $X$ can have the same beta function $B_X$.

There are pairs of spaces distinguished by their magnitude functions, but not by their beta functions. E.g. consider $\ell_1 \gt \cdots \gt \ell_6$ such that all triples satisfy the triangle inequalities to make them the edge-lengths of a tetrahedron. A numerical experiment: there are 30 tetrahedra with the same beta functions, but different magnitudes.
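A toy version of that experiment is easy to set up in Python. The edge lengths below are my own choice, not the ones from the talk: the two spaces assign the same six lengths to the edges of a tetrahedron in different ways, so they have identical distance multisets (hence identical beta functions $B_X$, with counting measure), but their magnitudes differ.

```python
import itertools
import numpy as np

def distance_matrix(edge_lengths):
    """4x4 distance matrix of a metric tetrahedron, edge lengths given
    in the order (01, 02, 03, 12, 13, 23)."""
    D = np.zeros((4, 4))
    for (i, j), l in zip(itertools.combinations(range(4), 2), edge_lengths):
        D[i, j] = D[j, i] = l
    return D

def magnitude(D, t=1.0):
    """Magnitude at scale t: sum of the entries of the inverse of the
    similarity matrix Z = (e^{-t d(x, y)})."""
    return np.linalg.inv(np.exp(-t * D)).sum()

A = distance_matrix([1.0, 1.1, 1.2, 1.3, 1.4, 1.5])
B = distance_matrix([1.1, 1.0, 1.2, 1.3, 1.4, 1.5])  # edges 01 and 02 swapped

# same multiset of distances, so the same beta function...
assert sorted(A.flatten().tolist()) == sorted(B.flatten().tolist())
# ...but the magnitudes differ
print(magnitude(A), magnitude(B))
```

(The two assignments really are non-isometric: the multisets of row sums of the two distance matrices differ.)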

On the other hand, there’s a pair of graphs distinguished by their beta functions, but both with magnitude function $\frac{4 - 2q}{1 + q}$. They’re $\bullet-\bullet-\bullet-\bullet$ and the 4-vertex graph that looks like $\mathsf{Y}$.
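This pair is easy to check numerically. Here's a sketch using the sum-of-inverse-similarity-matrix formula for graph magnitude, with $q = e^{-t}$:

```python
import numpy as np

def graph_magnitude(D, q):
    """Magnitude of a graph at q = e^{-t}: sum of the entries of the
    inverse of Z, where Z_{xy} = q^{d(x, y)} for the path metric d."""
    return np.linalg.inv(q ** D).sum()

# path o-o-o-o, and the claw ("Y") with centre vertex 0: both on 4 vertices
path = np.array([[0, 1, 2, 3], [1, 0, 1, 2], [2, 1, 0, 1], [3, 2, 1, 0]])
claw = np.array([[0, 1, 1, 1], [1, 0, 2, 2], [1, 2, 0, 2], [1, 2, 2, 0]])

q = np.exp(-1)
print(graph_magnitude(path, q), graph_magnitude(claw, q), (4 - 2*q) / (1 + q))
```

All three printed numbers agree. The distance multisets differ (the path has a pair at distance 3, the claw doesn't), so the beta functions really do tell them apart, as in the talk.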

There’s a more general, interesting, question of comparing the residue method and the magnitude method to see which is more sensitive (in different situations) in detecting differences between spaces.

Posted by: Tom Leinster on December 6, 2023 2:57 AM | Permalink | Reply to this

### Re: Jun O’Hara: Residues of manifolds

Thank you very much for the post! Following Kiyonori Gomi’s suggestion, I found that the (relative) position of three points can be determined by the magnitude function, to be precise, by its behavior as $R \to +\infty$.

Posted by: Jun O'Hara on December 6, 2023 3:15 PM | Permalink | Reply to this

### Re: Jun O’Hara: Residues of manifolds

Ah, fantastic! Thanks for that.

Jun’s comment answers a question he raised during his talk: if two three-point metric spaces have the same magnitude function, are they isometric?

The answer to the analogous question is “no” for four or more points, as shown by the graph example at the end of his talk (given in my notes above).

Posted by: Tom Leinster on December 6, 2023 11:26 PM | Permalink | Reply to this

### Re: Jun O’Hara: Residues of manifolds

Tom, Kiyonori informed me that the example of two graphs with the same magnitude, --- and Y, appeared in your paper “The magnitude of metric spaces”, Example 2.3.5. I apologize for the wrong reference in my talk.

Posted by: Jun OHara on December 10, 2023 12:25 PM | Permalink | Reply to this

### Re: Jun O’Hara: Residues of manifolds

Oh, no problem at all! I’d totally forgotten I’d included that example.

Posted by: Tom Leinster on December 10, 2023 9:16 PM | Permalink | Reply to this

### Re: Jun O’Hara: Residues of manifolds

A very recent update to this discussion: Jun has now shown that generic finite metric spaces are determined by their magnitude functions.

Posted by: Mark Meckes on January 2, 2024 3:57 PM | Permalink | Reply to this

### Masaki Tsukamoto: Introduction to mean dimension

This afternoon’s first talk is by Masaki Tsukamoto, who’s giving an introduction to Gromov’s notion of mean dimension. I’ll paste in the abstract:

Mean dimension is a topological invariant of dynamical systems introduced by Gromov in 1999. It is a dynamical version of topological dimension. It evaluates the number of parameters per unit time for describing a given dynamical system. Gromov introduced this notion for the purpose of exploring a new direction of geometric analysis. Independently of this original motivation, Elon Lindenstrauss and Benjamin Weiss found deep applications of mean dimension in topological dynamics. I plan to survey some highlights of the mean dimension theory.

Tsukamoto works in dynamical systems and ergodic theory, and explained to us the idea of “dynamicalization”: take concepts and theorems in geometry, and find analogues in dynamical systems.

For example:

• number of points $\mapsto$ topological entropy

• pigeonhole principle $\mapsto$ symbolic coding

• New example by Gromov: topological dimension $\mapsto$ mean dimension.

Slogan:

Mean dimension is the number of parameters per unit time for describing a dynamical system.

Plan for the talk:

• Topological entropy and symbolic dynamics
• Mean dimension and topological dynamics
• Mean dimension and geometric analysis

Topological entropy and symbolic dynamics

For now, a dynamical system is a compact metrizable space $X$ with a homeomorphism $T: X \to X$, or equivalently, a continuous $\mathbb{Z}$-action. So then we can consider generalizing from $\mathbb{Z}$ to other groups.

For $\varepsilon \gt 0$, define $\#(X, d, \varepsilon)$ as the minimum number of open sets of diameter $\lt \varepsilon$ needed to cover $X$. Now for $N \geq 1$, put $d_N(x, y) = \max_{0 \leq n \lt N} d(T^n x, T^n y).$ This is a metric. The topological entropy of $(X, T)$ is $h_{top}(X, T) = \lim_{\varepsilon \to 0} \lim_{N \to \infty} \frac{\log \#(X, d_N, \varepsilon)}{N}.$

Example Let $A$ be a finite set, and consider the full shift on the alphabet $A$, which is the dynamical system $(A^\mathbb{Z}, \sigma)$ defined by $\sigma((x_n)_{n \in \mathbb{Z}}) = (x_{n + 1})_{n \in \mathbb{Z}}.$ Then we get $h_{top}(\sigma) = \log|A|.$
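For subshifts of finite type, a standard fact (not from the talk) is that the topological entropy is the log of the spectral radius of the transition matrix, since the number of allowed $N$-words grows like $\lambda_{max}^N$. A quick Python check, recovering $\log|A|$ for the full shift:

```python
import numpy as np

def sft_entropy(T):
    """Topological entropy of the subshift of finite type with
    transition matrix T: log of the spectral radius, the exponential
    growth rate of the number of allowed words."""
    return np.log(max(abs(np.linalg.eigvals(T))))

# full shift on a 3-letter alphabet: every transition allowed
print(sft_entropy(np.ones((3, 3))))                     # log 3

# golden-mean shift: binary sequences with no adjacent 1s
print(sft_entropy(np.array([[1.0, 1.0], [1.0, 0.0]])))  # log((1 + sqrt 5)/2)
```

The second example shows the dynamicalized invariant taking a value, $\log$ of the golden ratio, that isn't the log-cardinality of any alphabet.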

Generally, here’s a recipe for dynamicalization: given an invariant $Q(K)$ of spaces $K$, define a dynamical version $Q_{dyn}$ so that $Q_{dyn}(K^\mathbb{Z}, \sigma) = Q(K).$

We just did this for cardinality. What about the pigeonhole principle?

Some terminology. An embedding $(X, T) \to (Y, S)$ of dynamical systems is a topological embedding that commutes with $T$ and $S$.

What obstructions are there to existence of dynamical embeddings? Write $P_n(X, T)$ for the set of periodic points of period dividing $n$. If $(X, T)$ embeds in $(Y, S)$ then:

• $h_{top}(X, T) \leq h_{top}(Y, S)$
• $|P_n(X, T)| \leq |P_n(Y, S)|$ for all $n \geq 1$.

A partial converse holds! A closed subset $X$ of $A^\mathbb{Z}$ is called a subshift if $\sigma(X) = X$. The Krieger embedding theorem says that if $A$ and $B$ are finite sets and $X \subseteq A^\mathbb{Z}$ satisfies:

• $h_{top}(X, \sigma) \lt \log|B|$,
• $|P_n(X, \sigma)| \leq |B|^n$ for all $n \geq 1$,

then $X$ embeds in $B^\mathbb{Z}$. Note that $A$ can be much larger than $B$!

This is the dynamical version of the pigeonhole principle, in the following version: a finite set $A$ embeds in a finite set $B$ iff $|A| \leq |B|$. We’ve just changed the appropriate notion of size.

Mean dimension and topological dynamics

A continuous map of metric spaces is called an $\varepsilon$-embedding if each fibre has diameter $\lt \varepsilon$. For compact $X$, define $Widim_\varepsilon(X, d)$ (width dimension) as the minimum integer $n$ for which $X$ can be $\varepsilon$-embedded into an $n$-dimensional simplicial complex.

The topological dimension of $X$ is $\dim X = \lim_{\varepsilon \to 0} Widim_\varepsilon(X, d) \in \mathbb{N}.$ This is a topological invariant!

Theorem (Menger and Nöbeling) If $\dim X \lt N/2$ then $X$ topologically embeds in $[0, 1]^N$.

Short digression: every compact metric space of topological dimension 1 can be topologically embedded into the Menger sponge (itself a subspace of $[0, 1]^3$).

The dynamical embedding problem Consider the shift map $\sigma$ on the Hilbert cube $[0, 1]^\mathbb{Z}$. When does a given dynamical system $(X, T)$ embed in $([0, 1]^\mathbb{Z}, \sigma)$?

Periodic points again provide an obstruction. If $(X, T)$ does embed in $([0, 1]^\mathbb{Z}, \sigma)$ then $P_n(X, T)$ embeds in $P_n([0, 1]^\mathbb{Z}, \sigma) \cong [0, 1]^n$.

Allan Jaworski proved in his 1974 PhD thesis that if $(X, T)$ is a finite-dimensional system (i.e. $\dim X \lt \infty$) and has no periodic point then $(X, T)$ embeds in $([0, 1]^\mathbb{Z}, \sigma)$. Then he left mathematics and became an artist.

What about infinite-dimensional systems? Joseph Auslander asked in 1988: can we embed every minimal dynamical system in the shift $([0, 1]^\mathbb{Z}, \sigma)$ on the Hilbert cube? Minimal means that every orbit is dense, e.g. an irrational rotation of the circle. (This is a topological analogue of ergodicity.) Minimality clearly precludes the existence of periodic points: it’s a strong version of there being no periodic points.

The answer came about a decade later. In 1999, Mikhail Gromov introduced mean dimension, defined by

$mdim(X, T) = \lim_{\varepsilon \to 0} \lim_{N \to \infty} \frac{Widim_\varepsilon(X, d_N)}{N}$

Again, this is a topological invariant!

If $K$ is a nice space then

$mdim(K^\mathbb{Z}, shift) = \dim K.$

In this sense, mean dimension is the dynamicalization of topological dimension.

Earlier we had the slogan:

Mean dimension is the number of parameters per unit time for describing a dynamical system.

Here’s an informal (or maybe I mean formal?) explanation of the “per unit time” bit:

$mdim(K^\mathbb{Z}, \sigma) = \frac{\dim K^\mathbb{Z}}{|\mathbb{Z}|} = \frac{|\mathbb{Z}| \times \dim K}{|\mathbb{Z}|} = \dim K.$

In particular,

$mdim([0,1]^\mathbb{Z}, \sigma) = 1.$

But if $(X, T)$ embeds in $(Y, S)$ then one easily shows that

$mdim(X, T) \leq mdim(Y, S)!$

So if $mdim(X, T) \gt 1$ then it doesn’t embed in the shift of the Hilbert cube! Lindenstrauss and Weiss found in 2000 that for any $c \geq 0$, there is some minimal dynamical system of mean dimension $c$. In particular, there is some minimal system not embeddable in the shift of the Hilbert cube.

Lindenstrauss proved a further result: if a minimal dynamical system has mean dimension smaller than $N/36$ then it embeds in the shift of $([0, 1]^N)^\mathbb{Z}$. And, with the speaker Tsukamoto, he proved that this constant 36 can’t be improved to anything less than 2. Finally, Tsukamoto and Gutman proved that it can be improved to exactly 2. So that completely settles that question.

Mean dimension and geometric analysis

The talk then went into more advanced material on Gromov’s original motivation, which was from geometry rather than symbolic dynamics. Gromov was interested in groups acting on manifolds $M$, with a PDE on $M$ and its space $X$ of solutions. I decided to concentrate on listening to this part rather than taking notes.

This was a really wonderful talk, with beautiful conceptual structure. I now think of dynamicalization as a cousin of categorification and feel irresistibly drawn to try to find something new to dynamicalize.

Posted by: Tom Leinster on December 6, 2023 6:04 AM | Permalink | Reply to this

### Re: Masaki Tsukamoto: Introduction to mean dimension

In question time, I asked what the dynamicalization of the concept of the Shannon entropy of a finite set would be. Metric entropy, perhaps?

The speaker answered by recalling the variational principle: for a dynamical system $(X, T)$, it’s a theorem that

$h_{top}(X, T) = \sup_\mu h_\mu(T),$

where the sup is over all $T$-invariant measures $\mu$ on $X$ and $h_\mu$ is the (Kolmogorov-Sinai) metric entropy with respect to $\mu$.

Posted by: Tom Leinster on December 6, 2023 6:24 AM | Permalink | Reply to this

### Re: Masaki Tsukamoto: Introduction to mean dimension

In question time, I asked what the dynamicalization of the concept of the Shannon entropy of a finite set would be. Metric entropy, perhaps?

The direct answer to this is “yes”.

Metric entropy is the slightly odd name for what would better be called measure-theoretic entropy, because it has nothing to do with metric spaces. Given an endomorphism $T$ of a probability space $(X, \mu)$, one can define its entropy $h_\mu(T)$, as per the link above.

Take a finite probability space $A$. There’s an induced product measure $\mu$ on $A^\mathbb{Z}$. Let $\sigma:A^\mathbb{Z} \to A^\mathbb{Z}$ be the shift function. Small theorem: $h_\mu(\sigma)$ is equal to the Shannon entropy of the probability space $A$.

Hence “metric” entropy is indeed a dynamicalization of Shannon entropy, in the sense of this talk.
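The small theorem can be checked concretely: under the product measure, the entropy of the distribution of length-$N$ blocks is exactly $N$ times the Shannon entropy of $A$, so the per-symbol limit defining $h_\mu(\sigma)$ is just the Shannon entropy. A Python sketch (my own, with an arbitrary three-outcome distribution):

```python
import itertools
import math

def shannon(dist):
    """Shannon entropy (in nats) of a finite probability distribution."""
    return -sum(p * math.log(p) for p in dist if p > 0)

p = [0.5, 0.3, 0.2]   # a three-point probability space A

def block_entropy(N):
    """Entropy of the distribution of length-N words under the
    product (Bernoulli) measure on A^Z."""
    return shannon([math.prod(w) for w in itertools.product(p, repeat=N)])

for N in (1, 2, 3):
    print(block_entropy(N) / N)   # equals shannon(p) for every N
```

For the i.i.d. product measure the ratio is constant in $N$, so no limit-taking is even needed; for more general shift-invariant measures the ratio only converges.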

Posted by: Tom Leinster on December 6, 2023 11:37 PM | Permalink | Reply to this

### Re: Masaki Tsukamoto: Introduction to mean dimension

I wrote:

This was a really wonderful talk, with beautiful conceptual structure. I now think of dynamicalization as a cousin of categorification and feel irresistibly drawn to try to find something new to dynamicalize.

Maybe we can combine dynamicalization with categorification. For example, dynamicalization does

• cardinality $\mapsto$ topological entropy

and categorification does

• cardinality $\mapsto$ magnitude.

If we do both at once, perhaps we get an invariant of an endofunctor of a topological category, or something like that?

A different way to combine dynamicalization with categorification would be to look for a homology theory of dynamical systems whose Euler characteristic is topological entropy. (Or perhaps better, its exponential. It’s the exponential of entropy, not entropy itself, that behaves like cardinality.)

Is there already a homology theory for topological dynamical systems? What is its Euler characteristic?

Small observation: since (the exponential of) topological entropy is not usually an integer, we’d need the Euler characteristic of our homology theory not to be an integer either, so it can’t be a simple alternating sum of ranks of homology groups. But such things have been known to happen!

Posted by: Tom Leinster on December 6, 2023 12:31 PM | Permalink | Reply to this

### Re: Masaki Tsukamoto: Introduction to mean dimension

In the discrete setting, topological entropy can be a nice size function as outlined in https://arxiv.org/abs/2205.05178. So you can combine all this stuff. This should give a way to bootstrap into symbolic dynamics per se but also (e.g.) Anosov systems, which can be described in that vein. The Ruelle zeta function (which counts closed geodesics of a given length) should feature prominently.

Posted by: Steve Huntsman on December 6, 2023 2:18 PM | Permalink | Reply to this

### Re: Masaki Tsukamoto: Introduction to mean dimension

There is a sense in which dynamicalization of invariants of size is dual to categorification of invariants of size.

To explain what I mean, I need to back up a bit and note that while forgetful functors usually don’t preserve invariants of size, free functors often do. For example, the forgetful functor

$\mathbf{Vect} \to \mathbf{Set}$

does not “preserve size”: the dimension of a vector space is not equal to the cardinality of its underlying set. But the free functor

$\mathbf{Set} \to \mathbf{Vect}$

does: the dimension of the free vector space on a set is the cardinality of that set. Similarly, the forgetful functor

$\mathbf{Top} \to \mathbf{Set}$

that takes the set of points does not transform Euler characteristic into cardinality. But its left adjoint, the discrete space functor

$\mathbf{Set} \to \mathbf{Top},$

does “preserve size”: the Euler characteristic of a discrete space on a set is the cardinality of that set. One more example: the objects functor

$\mathbf{Cat} \to \mathbf{Set}$

doesn’t preserve size: the magnitude (Euler characteristic) of a category does not equal the cardinality of its set of objects. But its left adjoint is the discrete category functor

$\mathbf{Set} \to \mathbf{Cat},$

and the magnitude of the discrete category on a set is the same as the cardinality of the set. The same goes for enriched categories, including metric spaces (e.g. the magnitude of a discrete metric space is just its cardinality).

Here I’m assuming that everything is finite enough that everything is defined.

Now take a group $G$ and consider the category $[G, \mathbf{Set}]$ of left $G$-sets. The “size” of a $G$-set $X$ is understood to be the magnitude (= groupoid cardinality) of the lax quotient (= action groupoid) $X//G$, which works out to be $|X|/|G|$.

As you’d expect given the examples above, the forgetful functor

$U: [G, \mathbf{Set}] \to \mathbf{Set}$

does not preserve size, as $|X|/|G| \neq |X|$. And as you’d expect, the left adjoint $L$ to $U$ does. Explicitly, $L$ is given on a set $S$ by $L(S) = G \times S$, with the obvious $G$-action, and

$\frac{|G \times S|}{|G|} = \frac{|G| \times |S|}{|G|} = |S|.$

No surprises yet.

But $U$ also has a right adjoint $R$, given on a set $A$ by $R(A) = A^G$, again with the obvious $G$-action. It doesn’t preserve size, since

$\frac{|A^G|}{|G|} = \frac{|A|^{|G|}}{|G|} \neq |A|.$

Idea Look for some invariant $\|\cdot\|$ of $G$-sets such that $\|A^G\| = |A|$ for sets $A$.

Roughly, this is the idea of dynamicalization.

In Masaki’s talk, we stuck to the group $G = \mathbb{Z}$. In that case, $[G, \mathbf{Set}]$ is the category of sets with a $\mathbb{Z}$-action, or equivalently, sets equipped with an automorphism. The right adjoint

$R: \mathbf{Set} \to [\mathbb{Z}, \mathbf{Set}]$

sends a set $A$ to the set $A^\mathbb{Z}$ with the shift automorphism $\sigma_A$.

What I want to say now is that there’s an invariant $\|\cdot\|$ on sets-with-automorphism which, when applied to $(A^\mathbb{Z}, \sigma_A)$, gives just $|A|$ for any finite set $A$. That’s the idea, but I don’t know whether that’s quite true. It seems that we need to treat the set $A$ as a discrete topological space and bring the profinite topology on $A^\mathbb{Z}$ into play. So we’re looking at the functors

$\mathbf{Set} \stackrel{D}{\to} \mathbf{Top} \stackrel{R}{\to} [\mathbb{Z}, \mathbf{Top}],$

where $D$ means discrete and $R$ is the right adjoint of the forgetful functor. For $X \in [\mathbb{Z}, \mathbf{Top}]$ (i.e. for a topological space $X$ equipped with an automorphism), define $\|X\|$ to be the exponential of the topological entropy of $X$. Then

$\|R D(A)\| = |A|$

for all finite sets $A$. This is equivalent to the assertion $h_{top}(A) = \log |A|$ in my notes on Masaki’s talk, which was given as the basic example of dynamicalization.

So dynamicalization is something to do with right adjoints to forgetful functors preserving size.

Posted by: Tom Leinster on December 10, 2023 11:03 AM | Permalink | Reply to this

### Luigi Caputi: On finite generation in magnitude cohomology of graphs, and its torsion

Back to magnitude, with Luigi Caputi from Torino/Turin.

The talks here have ranged from analysis to homological algebra to dynamical systems to geometry to category theory, but the theme for this one is representation theory.

Plan:

• Recap, problems, questions
• Representation theory of categories
• Applications

Recap, problems, questions

$G$ denotes an undirected graph, usually connected, and not necessarily simple (i.e. it may have loops and multiple edges). But most things can be generalized to directed graphs.

Such a graph is a metric space under the path metric.

Maps: there are the ordinary morphisms of graphs that send vertices to vertices and edges to edges. But we’ll concentrate on minor morphisms $\phi: G' \to G$. Such a thing is a function

$V(G') \coprod E(G') \coprod \{\ast\} \to V(G) \coprod E(G) \coprod \{\ast\}$

such that

• vertices go to vertices and $\phi(\ast) = \ast$

• if $e \in E(G')$ is incident to $v, w$ then:

• $\phi(e) = \ast$, or
• $\phi(e) = \phi(v) = \phi(w)$, or
• $\phi(e) \in E(G)$ and it is incident to $\phi(v), \phi(w)$
• $\phi^{-1}(E(G))$ maps bijectively to $E(G)$

• $\phi^{-1}(v)$ is a tree for all $v$.

It is a contraction if $\phi^{-1}\{\ast\} = \{\ast\}$. Write $Graph$ for the category of graphs and minor morphisms, and $CGraph$ for graphs and contractions.

Perspective on magnitude homology: $MH_{k, \ell}(G)$ is a subquotient of the free module $\Lambda_k(G)$ generated by all $(k + 1)$-tuples of vertices. Write $I_k(G)$ for the submodule generated by $(x_0, \ldots, x_k)$ such that $x_{i - 1} = x_i$ for some $i$. Then

$MC_{k, \ell}(G) \subseteq R_k(G) = \Lambda_k(G)/I_k(G).$

Note that $MH_{k, \ell}$ is a functor $CGraph \to Mod_R$.

Luigi then recalled some computations for graph magnitude homology, including:

$\dim MH_{k, \ell}(C_m) = a(k, \ell) m + b(k, \ell)$

for some constants $a(k, \ell)$ and $b(k, \ell)$.

Question Can we understand better the ranks of the magnitude homology groups?

Kaneta and Yoshinaga’s idea: starting from a simplicial complex $K$, we get a poset by adding a top and bottom to the face poset of $K$, and from that we get a graph. Kaneta and Yoshinaga proved that if $M$ is a manifold and $K$ is a triangulation of $M$, then $\tilde{H}_{k - 2}(M)$ embeds into $MH_{k, \ell}(G(K))$ for a certain $\ell$. This provides a source of examples of torsion in magnitude homology.

Theorem (Sazdanovic and Summers) For any finitely generated abelian group $A$, there exist a graph $G$ and $k, \ell$ such that $MH_{k, \ell}(G)$ contains $A$ as a subgroup.

Question Can we understand the torsion structurally?

Representation theory of categories

For a small category $C$, a $C$-module is a functor $F: C \to Mod_R$ (where $R$ is a commutative ring with 1, Noetherian if need be).

For a subset $S$ of $\bigoplus_{c \in C} F(c)$, we denote by $span(S)$ the minimal submodule of $F$ containing $S$. If $S$ is finite and $F = span(S)$ then $F$ is finitely generated. In practice, we don’t use this definition but the result of a lemma: $F$ is f.g. iff there exist objects $c_1, \ldots, c_k \in C$ and a surjection $\bigoplus_i \mathbb{Z}\cdot C(c_i, -) \to F$. We write $P_c = \mathbb{Z}\cdot C(c, -)$.

A module is Noetherian if all its submodules are finitely generated.

Write $Rep(C)$ for the category of $C$-modules and natural transformations. We say that $Rep(C)$ is Noetherian if all f.g. $C$-modules are Noetherian. But this is not equivalent to the category algebra of $C$ being Noetherian.

Example Let $FI$ be the category of finite sets and injections. Then $Rep(FI)$ is Noetherian.

Idea Translate the algebraic problem into a combinatorial problem. (Follow the Hilbert basis theorem: if $K$ is Noetherian then so is $K[x_1, \ldots, x_n]$, using Dickson’s lemma: the poset $\mathbb{N}^r$ is Noetherian.)

Next, Luigi explained some of the theory of quasi-Gröbner categories (which include the opposite of the category of graphs with genus bounded above by some constant). I’ve learned this week that my ability to live-blog is basically nil when the material gets into zones that are really unfamiliar to me.

Magnitude cohomology was introduced by Richard Hepworth in a fantastic paper. More people should study it: it deserves much more investigation and use.

Luigi and collaborator Carlo Collari proved that the functor

$MH^{k, *}: CGraph_{\leq g}^{op} \to Mod_R$

is finitely generated. Here the domain is the opposite of the category of graphs with genus $\leq g$ and contractions between them.

Corollary Among graphs of genus $\leq g$, torsion in the magnitude (co)homology is bounded, i.e. there’s a common annihilator.

(We should all call this the Caputi-Collari Corollary. It’s fun to say.)

A second corollary: assume our base ring is a field. Let $k \in \mathbb{N}$. Then there’s a polynomial $f$ over $\mathbb{Z}$ of degree $\leq g + 1$ such that

$\dim MH^{k, *}(G) \leq f(|E(G)|)$

for all $G$ of genus $\leq g$. So, we’ve got numerical control over the ranks of the magnitude cohomology groups!

It’s also worth noting that in Remark 2.5 of Richard’s paper on magnitude cohomology, there’s a universal coefficient sequence that relates the dimensions of the magnitude homology and cohomology groups. Using this, we also get control over the dimensions of the homology groups.

Posted by: Tom Leinster on December 6, 2023 7:36 AM | Permalink | Reply to this

### Juan Pablo Vigneaux: A combinatorial approach to Möbius inversion and pseudoinversion

In the final talk of the day, Juan Pablo Vigneaux will excavate the foundations of magnitude.

Plan:

• Magnitude
• Combinatorial linear algebra
• Pseudoinversion
• Comparison with previous results

Notation For a finite category $A$, write $\zeta: ob A \times ob A \to \mathbb{Z}$ for $(a, b) \mapsto |Hom(a, b)|$. A weighting $k^\bullet$ on $A$ is a function $ob A \to \mathbb{Q}$ such that

$\sum_b \zeta(a, b) k^b = 1$

for all $a$, and dually coweightings $k_\bullet$. If there is a weighting and a coweighting on $A$ then their sums are equal, and called the magnitude of $A$. When $\zeta$ is invertible, there’s a unique weighting and a unique coweighting, given by the column- and row-sums of $\zeta$, and the magnitude is the sum of the elements of $\zeta^{-1}$.

Back in my 2006 paper on the magnitude/Euler characteristic of a category, I showed that the matrix $\zeta$ has an inverse $\mu$ if $A$ is finite, skeletal and the only idempotents are identities; it’s given by

$\mu(a, b) = \sum \frac{(-1)^k}{|Hom(c_0, c_0)| \cdots |Hom(c_k, c_k)|},$

where the sum is over all paths $a = c_0 \to \cdots \to c_k = b$ with $c_{i - 1} \neq c_i$ for each $i$. This generalizes an old formula attributed to Philip Hall for the Möbius function of a poset.
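To fix ideas, here’s a tiny computational sketch of my own (not from the talk): for the poset $0 \lt 1 \lt 2$ viewed as a category, every Hom-set has at most one element and every endomorphism monoid is trivial, so each path contributes just $(-1)^k$. We can check the path-sum formula really gives the inverse of $\zeta$, and that the magnitude is $1$, as it should be for a category with contractible nerve.

```python
# Hall-style path-sum formula for the Mobius function, checked against the
# matrix inverse, for the poset 0 < 1 < 2 viewed as a category.
n = 3
zeta = [[1 if a <= b else 0 for b in range(n)] for a in range(n)]

def mobius(a, b):
    # sum of (-1)^k over paths a = c_0 -> ... -> c_k = b with c_{i-1} != c_i;
    # paths strictly increase in the poset, so the enumeration terminates
    total = 0
    stack = [(a, 0)]
    while stack:
        c, k = stack.pop()
        if c == b:
            total += (-1) ** k
        for e in range(n):
            if e != c and zeta[c][e]:
                stack.append((e, k + 1))
    return total

mu = [[mobius(a, b) for b in range(n)] for a in range(n)]

# mu is the two-sided inverse of zeta...
prod = [[sum(mu[i][k] * zeta[k][j] for k in range(n)) for j in range(n)]
        for i in range(n)]

# ...and the magnitude is the sum of all entries of the inverse
magnitude = sum(sum(row) for row in mu)
```

Here `prod` comes out as the identity matrix and `magnitude` as $1$.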

Akkaya and Ünlü on the one hand, and independently Chen and Vigneaux on the other, showed that the magnitude of a category can be extended to all categories by putting $|A| = \sum_{a, b} \zeta^+(a, b),$ where $\zeta^+$ is the Moore-Penrose inverse of $\zeta$. (This is always defined, for every complex matrix.) Akkaya and Ünlü showed that this extended definition is invariant under equivalence of categories.

Moreover, if $A$ has a weighting, then it’s given by the row/column sums of $\zeta^+$, and dually. (I say “row/column” because I’m not fast enough to figure out which it is.) So the new definition of the magnitude of a category is compatible with the old weight/coweight one.

(All this is really about magnitude of matrices more generally.)
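A quick numerical sanity check of my own (not from the talk): take the indiscrete category on two objects, i.e. two isomorphic objects with exactly one map in each direction. Then $\zeta$ is the all-ones $2 \times 2$ matrix, which is singular, so Möbius inversion fails — but the Moore-Penrose definition gives magnitude $1$, consistent with the Akkaya-Ünlü invariance under equivalence (the category is equivalent to the one-object trivial category).

```python
import numpy as np

# Two-object indiscrete category: zeta is the singular all-ones matrix
zeta = np.ones((2, 2))

zeta_plus = np.linalg.pinv(zeta)   # Moore-Penrose pseudoinverse
magnitude = zeta_plus.sum()        # extended magnitude = sum of entries
```

The pseudoinverse here is the constant matrix with entries $1/4$, so the sum is $1$.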

Juan Pablo gave a quite complicated formula for the magnitude of a finite metric space involving ratios of determinants, which I didn’t have time to type up. Further, in combinatorial linear algebra (“a completely ignored subject”), there’s a formula for $\det(\zeta)$ that involves analysing cycles in $A$. We can feed the latter formula into the former.

Then he gave a new formula for the Möbius function of a category $A$ (assuming it has Möbius inversion), again involving counting cycles in $A$, as well as spanning graphs in $A$.

(The basic idea here is that you can take the formula for the determinant as a signed sum over permutations, then use the disjoint cycle decomposition of each permutation.)

Much of the above works for enriched categories. So we can seek applications to metric spaces. For instance, applying the Möbius formula just mentioned gives a new proof that

$\lim_{t \to \infty} |tX| = \# X$

for any finite metric space $X$. Challenge: use a similar approach to understand when it is that

$\lim_{t \to 0} |tX| = 1.$

We know that this is true under certain mild conditions, but it’s not always true; e.g. Simon Willerton found a six-point space for which the limit is $6/5$.
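Here’s a little sketch (mine, and not tied to any formula from the talk) of the large-$t$ behaviour: for a finite metric space $X$, the magnitude $|tX|$ is the sum of the entries of the inverse of the matrix $\bigl(e^{-t d(a,b)}\bigr)$, and as $t \to \infty$ it approaches the number of points.

```python
import numpy as np

# Six points in the plane with Euclidean distances (a well-behaved example)
pts = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 0], [0, 2]], dtype=float)
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

def magnitude(t):
    # |tX| = sum of the entries of the inverse of the matrix (e^{-t d(a,b)})
    return np.linalg.inv(np.exp(-t * d)).sum()

# As t grows, the points "separate" and |tX| approaches #X = 6;
# as t shrinks, it collapses towards 1 for this space.
```

For instance, `magnitude(10.0)` is within about $10^{-3}$ of $6$, and `magnitude(0.01)` is close to $1$.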

Juan Pablo proposed a general question. The formulas he presented earlier, involving sums over particular types of cycles, are complicated. But maybe they can be simplified using magnitude homology? For instance, the cancellations that appear in the new formula for the magnitude of a finite metric space are related to magnitude homology.

(A little point I’d never seen before: the $(i, j)$-entry of the adjugate of a matrix $A = (a_{i j})$ is the partial derivative $\frac{\partial \det A}{\partial a_{i j}},$ at least up to sign and the possibility that I’ve muddled up rows and columns.)

There’s a generalization of Cramer’s rule (i.e. $A^{-1} = adj(A)/det(A)$) to the Moore-Penrose inverse, called Berg’s formula. This gives us a combinatorial interpretation of the entries of $\zeta^+$, hence the magnitude of a category (and of a matrix more generally).

Juan Pablo finished by taking us back to some of the basic formulas for magnitude, including as the decategorification of magnitude homology. I found myself deep in thought and didn’t take notes on this bit.

Posted by: Tom Leinster on December 6, 2023 9:04 AM | Permalink | Reply to this

### Giuliamaria Menara: Eulerian magnitude homology

First in today’s programme is Giuliamaria Menara on “Eulerian magnitude homology: introduction and application to random graphs” (joint work with Chad Giusti). Café readers may remember Giulia’s post here on Eulerian magnitude homology a little while ago. She also organized a minisymposium on Applications of magnitude and magnitude homology to network analysis at the big SIAM Conference on Applied Algebraic Geometry in Eindhoven this summer.

Question What kind of information about a graph $G$ is contained in the magnitude homology groups of $G$?

The plan is to modify the definition of magnitude homology — that’s Eulerian MH — and apply this modified definition to Erdős-Rényi random graphs.

Giulia began by recalling the definition of the magnitude homology of a graph, which I won’t reproduce here. But a basic point to note is that the generators of the normalized chain groups are tuples $(x_0, \ldots, x_k)$ where $x_{i - 1} \neq x_i$ for all $i$ — i.e. consecutive entries are distinct, but we might, for instance, have $x_0 = x_2$.

In Eulerian magnitude homology, we restrict further by asking that $x_i \neq x_j$ for all $i \neq j$. This gives us smaller chain groups $EMC_{k, \ell}(G)$, and a different homology theory $EMH_{\ast, \ast}$.

Lemma The number of triangles in a graph is $1/6$ times the number of basis elements of $EMC_{2, 2}$ whose differential is zero.

Then there’s a similar upper bound on the number of cliques of a given size in a graph.
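Here’s a toy verification of the lemma, as I understand it (my own sketch, not Giulia’s code). A basis element of $EMC_{2, 2}$ is a triple of pairwise distinct vertices with consecutive pairs adjacent; the differential can only delete the middle vertex, and the resulting term survives only when $d(x_0, x_2) = 2$, so the differential vanishes exactly when $x_0$ and $x_2$ are also adjacent, i.e. when the triple runs around a triangle — and each triangle is counted $6$ times.

```python
from itertools import permutations

# K4 minus an edge: its two triangles are {0,1,2} and {0,1,3}
edges = {(0, 1), (0, 2), (0, 3), (1, 2), (1, 3)}
adj = {(a, b) for a, b in edges} | {(b, a) for a, b in edges}
verts = sorted({v for e in edges for v in e})

# zero-differential basis elements of EMC_{2,2}: ordered triples of pairwise
# distinct vertices in which all three pairs are adjacent
zero_diff = [t for t in permutations(verts, 3)
             if (t[0], t[1]) in adj and (t[1], t[2]) in adj
             and (t[0], t[2]) in adj]

triangles = len(zero_diff) // 6   # each triangle gives 6 ordered triples
```

For this graph the count comes out as $12$ ordered triples, i.e. $2$ triangles.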

The Erdős-Rényi random graph $G(n, p)$ is constructed by connecting $n$ labelled vertices randomly, where each edge is included with probability $p$ independently.

We’ll consider an ER graph $G(n, n^{-\alpha})$ where $0 \leq \alpha \lt \infty$ and its Eulerian magnitude homology. It’s a little difficult for me to take notes here as the diagrams of graphs are so integral; I guess the slides will appear on the conference web page sooner or later.

Theorem Let $G(n, n^{-\alpha})$ be an ER graph. Then the EMH groups $EMH_{k, k}(G)$ vanish for $\alpha \gt \frac{k + 1}{2k - 1}$.

When $k$ is large, $\frac{k + 1}{2k - 1} \approx \frac{1}{2}$, so we’re roughly saying that the $k$th diagonal EMH group vanishes when the probability of each edge being present is $\lt 1/\sqrt{n}$. For the sake of my own feeble mind, I’m going to note that $\frac{k + 1}{2k - 1} = \frac{1}{2} \biggl( 1 + \frac{2}{2k - 1} \biggr),$ so it’s a decreasing function of $k$. That is, when $k$ is small, the threshold on $\alpha$ is larger, so we require the edge probability to be smaller in order to conclude that $EMH_{k, k}$ vanishes. Or more crudely put, $EMH_{k, k}$ is more likely to vanish when $k$ is large, as we’d expect.

The next step starts with results of Matthew Kahle and Elizabeth Meckes on random clique complexes. Giusti and Menara were able to prove similarly flavoured results for the Betti numbers $\beta_{k, k}$ of the diagonal Eulerian magnitude homology groups $EMH_{k, k}$, including a central limit type theorem:

If $p = o(n^{-1/n})$ then $\frac{\beta_{k, k} - \mathbb{E}(\beta_{k, k})}{\sqrt{Var(\beta_{k, k})}}$ converges in distribution to the standard normal $N(0, 1)$.

Giulia stated further results on the random geometric graph, another kind of random graph. Again, there are results on the vanishing of the diagonal EMH groups and a central limit theorem for the Betti numbers, with the proofs obtained by modifying those of Kahle and Meckes.

Posted by: Tom Leinster on December 7, 2023 1:04 AM | Permalink | Reply to this

### Tsubasa Kamiyama: Metric fibrations over one dimensional spaces are trivial

Next up is Tsubasa Kamiyama from Tohoku University.

The point of “metric fibrations” is that there’s a product formula for magnitude: the magnitude of the total space is the product of the magnitude of the base with the magnitude of the (any) fibre.

Question When are metric fibrations trivial?

Roughly speaking, “trivial” means that the total space is the $\ell^1$ product of the base space with the fibre.

This talk is about a new result:

Metric fibrations are trivial when the base space is one-dimensional.

Definition A map $p: E \to X$ of metric spaces is a metric fibration if for all $x, y \in X$ and $a \in p^{-1}(x)$, there exists $\alpha \in p^{-1}(y)$ such that

$d_E(a, b) = d_X(x, y) + d_E(\alpha, b)$

for all $b \in p^{-1}(y)$.

The idea here is that a metric fibration is locally an $\ell^1$ product. And $\alpha$ is the closest point on $p^{-1}(y)$ to $a$.
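To make the “locally an $\ell^1$ product” idea concrete, here’s a tiny check of my own (not from the talk) that the $\ell^1$ product itself, with first projection, satisfies the metric fibration condition:

```python
from itertools import product

# Base X = two points at distance 2; fibre F = two points at distance 1
X = {(x, y): 0 if x == y else 2 for x, y in product([0, 1], repeat=2)}
F = {(f, g): 0 if f == g else 1 for f, g in product("pq", repeat=2)}

# Total space E = X x_1 F with the l^1 (taxicab) metric
pts = list(product([0, 1], "pq"))
E = {(a, b): X[(a[0], b[0])] + F[(a[1], b[1])]
     for a, b in product(pts, repeat=2)}

def is_metric_fibration():
    # p = first projection. For each x, y and a over x there must be a
    # point alpha over y with d_E(a, b) = d_X(x, y) + d_E(alpha, b)
    # for every b over y.
    for x, y in product([0, 1], repeat=2):
        fibre_y = [q for q in pts if q[0] == y]
        for a in [q for q in pts if q[0] == x]:
            if not any(all(E[(a, b)] == X[(x, y)] + E[(alpha, b)]
                           for b in fibre_y)
                       for alpha in fibre_y):
                return False
    return True
```

Here the witness $\alpha$ is just the point of $p^{-1}(y)$ with the same fibre coordinate as $a$, matching the intuition that $\alpha$ is the closest point of $p^{-1}(y)$ to $a$.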

In a metric fibration, all fibres are isometric. (I think we have to assume here that distances are finite.)

We’ll write $\gamma^x_y(a) = \alpha$, so that $\gamma^x_y$ transports from $p^{-1}(x)$ to $p^{-1}(y)$.

Lemma 1 We have

$\gamma^y_x \circ \gamma^x_y = id$

for all $x, y \in X$.

Definition A fibration $p: E \to X$ is trivial if $E$ is isometric (via $p$) to $p^{-1}(x) \times_1 X$ for all $x$, where $\times_1$ is the $\ell^1$ product.

Lemma 2 A fibration $p: E \to X$ is trivial iff

$\gamma^y_z \circ \gamma^x_y = \gamma^x_z$

for all $x, y, z \in X$.

Lemma 3 Let $x, y, z \in X$. If any of these three points is between the other two then

$\gamma^y_z \circ \gamma^x_y = \gamma^x_z.$

To get to the definition of “1-dimensional”, write

$K_n = \{(t \cos (2\pi j/n), t\sin(2\pi j/n)): 0 \leq t \lt 1, j = 1, 2, \ldots, n\}.$

This is a star shape, e.g. $K_3$ looks like $\mathsf{Y}$. Then give $K_n$ the geodesic metric.

A metric space $X$ is one-dimensional if for all $x \in X$, there exist an open neighbourhood $U$ of $x$, some $n \in \mathbb{N}$ and $r \gt 0$ such that $U$ is isometric to $r K_n$ with $x$ mapping to $0 \in K_n$. Write $n$ as $deg x$. (I guess $n$ must be uniquely determined.) Call $x$ a vertex if $deg x \neq 2$.

(Note to self: $K_1 = [0, 1)$ and $K_2 = (-1, 1)$, so degrees 1 and 2 mean different things.)

Not all 1-dimensional manifolds are 1-dimensional in this sense, e.g. a circle with the flat metric is 1-dim but a circle with the round metric is not.

Main theorem Let $p: E \to X$ be a metric fibration. If $X$ is one-dimensional then the fibration is trivial.

The proof is by induction on the number of vertices, and the speaker sketched it using diagrams which I won’t attempt to put into words.

I’m not sure what happens if $X$ is not compact, in which case the number of vertices could be infinite. I think the answer is that you approximate $X$ as a nested union of compact (or do I mean bounded?) subspaces.

Posted by: Tom Leinster on December 7, 2023 1:41 AM | Permalink | Reply to this

### Sho Shimoyama: Exploring the uniqueness of p-minimizing movement in metric spaces: beyond p = 2

Next is a non-magnitude talk, on flow in metric spaces.

Problem For an initial point $u_0$, find conditions such that:

• (i) the $p$-minimizing movement starting from $u_0$ exists and is unique;

• (ii) it is the $p$-curve of maximizing slope.

Terminology to be defined!

Main theorem Under hypotheses, (i) holds and the $p$-minimizing movement is trivial.

This was another talk with a lot of pictures, and also a lot of definitions in a field that I know nothing about, so this time I’m going to be minimal and just record the basic definition of “slope” of a function on a metric space.

Let $X$ be a complete metric space. Take a convex, lower-semicontinuous function $f: X \to [0, \infty]$. Then the slope of $f$ at $u \in X$ is defined to be

$\left|-\partial f\right|(u) = \limsup_{v \to u} \frac{\max\{f(u) - f(v), 0\}}{d(u, v)}$

if $f(u) \neq \infty$, or as $\infty$ if $f(u) = \infty$. (I’m not sure what the minus sign on the left-hand side is doing and wonder whether I misread it.)

The idea is to mimic the gradient flow (in the Euclidean setting), moving down steepest slopes of the function. When you get into this, you have to choose a norm, and this is where the “$p$” in the title arises: we’re using $p$-norms. Mostly people have investigated the case $p = 2$, but this talk is about going beyond that case — more specifically, to $p \gt 2$.

Posted by: Tom Leinster on December 7, 2023 2:21 AM | Permalink | Reply to this

### Yu Tajima: Magnitude homotopy type of graphs and Whitney twist via discrete Morse theory

The last talk of today is by Yu Tajima, who recently wrote a great paper on magnitude homotopy with her PhD supervisor Masahiko Yoshinaga. I’ll quote the abstract for her talk:

The invariance of magnitudes for graphs differ by a Whitney twist (with a certain condition) was proved by Leinster, and the problem that “Are their magnitude homology groups isomorphic” is still open. We approach the problem using discrete Morse theory on magnitude homotopy types. In the course of this approach, we had another proof of the invariance of magnitudes under a Whitney twist. This is joint work with Masahiko Yoshinaga (Osaka University).

Plan:

• Introduction
• Magnitude homotopy type
• Discrete Morse theory
• Whitney twist

Introduction

I have to include a picture of the wonderful first slide:

Write $G$ for a (finite, simple, connected) graph, with the usual shortest-path metric. A path is a sequence $(x_0, \ldots, x_k)$ of adjacent vertices, and we write

$L(x_0, \ldots, x_k) = d(x_0, x_1) + \cdots + d(x_{k - 1}, x_k).$

I’ll skip the definitions of the magnitude chain groups and magnitude homology of a graph, which are in the Hepworth-Willerton paper.

Magnitude homotopy type

The definitions here are slightly different from those in Masahiko Yoshinaga’s talk.

For $a, b \in G$, define the simplicial complex

$\Delta_\ell(G; a, b) = \{ \{ (x_{i_0}, i_0), \ldots, (x_{i_k}, i_k)\} \subseteq G \times \{0, \ldots, \ell\} : (x_{i_0}, \ldots, x_{i_k}) \prec \text{some path } (a = x_0, \ldots, x_\ell = b)\}.$

Here $\prec$ means “subsequence”, I think. This has a subcomplex

$\Delta'_\ell(G; a, b) = \{\{(x_{i_0}, i_0), \ldots, (x_{i_k}, i_k)\} \in \Delta_\ell(G; a, b): L(x_{i_0}, \ldots, x_{i_k}) \lt \ell \}.$

Then we define the magnitude homotopy type (with or without basepoints) to be

$M^\ell(G; a, b) = \Delta_\ell(G; a, b)/\Delta'_\ell(G; a, b)$

and

$M^\ell(G) = \bigvee_{a, b \in G} M^\ell(G; a, b).$

I’m hoping Yu will upload her slides, because I think this talk is going to be full of useful pictures (e.g. examples) that I can’t easily reproduce here.

Theorem $MH_{k, \ell}(G) \cong \tilde{H}_k(M^\ell(G))$.

Discrete Morse theory

More pictures! I’m not one of those ninjas who can live-TikZ diagrams (especially nice colourful ones), so I’m afraid I’ll have to skip this bit.

Whitney twist

I’ll refer back to an old Café post for the definition of Whitney twist.

Two graphs $X, Y$ that differ by a Whitney twist have the same magnitude, as long as the two joining vertices are adjacent. Emily Roff generalized this result to sycamore twists.

Open question Do $X$ and $Y$ have the same magnitude homology?

Motivated by this, the speaker then presented a new proof of the invariance of magnitude under Whitney twists. (Again, this means Whitney twists where the gluing vertices are adjacent; it’s false if they’re not.)

And guess what? There are more crucial diagrams that I can’t reproduce here! This is a well-presented, well-explained talk and it pains me not to be able to do it justice here, so I really hope the slides will be put up.

In question time, Richard Hepworth asked whether the answer to the following is known: are there two graphs with the same magnitude homology but different magnitude homotopy types? The speaker says it’s unknown, but the answer is “yes” for metric spaces.

Posted by: Tom Leinster on December 7, 2023 3:12 AM | Permalink | Reply to this

### Rayna Andreeva and Katharina Limbeck: Metric space magnitude for machine learning

It’s the last day of the meeting, and to wake us up after last night’s conference dinner, we have Rayna Andreeva. Well, we should have had Rayna, but there’s a sad story…

Originally the talk was meant to be a double act by Rayna and Katharina Limbeck, but unfortunately, Katharina both fell ill and had her flights cancelled, so she couldn’t make it to the conference. So Rayna had to undertake the heroic task of completely rewriting her talk to take twice the time, during the conference week. But then, even more unfortunately (and coincidentally), Rayna fell ill last night too. So right now, Emily Roff is doing the also-heroic job of presenting the talk, having only just looked at the slides a few minutes ago. Bravo.

Under these circumstances, I’m not going to attempt to take detailed notes. But the slides are on the conference slides page. The broad theme of Andreeva and Limbeck’s work is to use magnitude and magnitude dimension for deep learning (Andreeva) and representation learning (Limbeck), extending into unsupervised representation learning.

The first part of the talk is based on Andreeva, Limbeck, Rieck and Sarkar’s paper Metric space magnitude and generalisation in neural networks.

“Magnitude dimension” here is Simon Willerton’s notion — the instantaneous growth rate of the magnitude function:

$\frac{d\log|t X|}{d\log t}.$

So it’s scale-dependent. It’s sort of discussed at this blog post, although mostly in the different setting of spread rather than magnitude.

Slogan: the lower the magnitude dimension, the better the generalization.

Theorem Let $X$ be a compact subset of $\mathbb{R}^n$ such that either the magnitude dimension or the Minkowski dimension of $X$ exists. Then

$\dim_{Mag} X = \dim_{PH}^0 X,$

where the right-hand side is persistent homology dimension in degree $0$.

Also: the lower the magnitude dimension, the higher the test accuracy, as borne out by some experimentation. And the higher the test accuracy, the lower the magnitude (effective number of models).

The second half is based on Limbeck, Andreeva, Sarkar and Rieck’s paper Metric space magnitude for evaluating unsupervised representation learning.

One thing that’s needed here is a good notion of the difference between magnitude functions.

Question Is there a good such notion?

They address this with a scale-invariant difference measure. There’s some principal component analysis involved here, some rescaling of the magnitude functions, and the notion of magnitude weights. And there are lots of experiments to test out their new methods, giving encouraging evidence that magnitude can detect the quality of representations.

There are also results on the stability of magnitude, in the sense of continuity. There’s still much to explore, but the results are encouraging.

One bottleneck in the use of magnitude is the computational expense: inverting a matrix takes ages. They speed up computation of magnitude via Cholesky factorization (no inversion needed). For example, if I’ve read the graph correctly, they can compute the magnitude of a space with about 10000 points in about 10 seconds.
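Here’s a minimal sketch of the idea (mine — their actual pipeline may differ): for points in Euclidean space the similarity matrix $Z = e^{-d}$ is symmetric positive definite, so instead of inverting $Z$ we can Cholesky-factor it and find the weighting $w$ with $Z w = (1, \ldots, 1)$ by two triangular solves; the magnitude is then $\sum_i w_i$.

```python
import numpy as np

# 100 points on a grid in the plane
pts = np.array([[i, j] for i in range(10) for j in range(10)], dtype=float)
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
Z = np.exp(-d)               # symmetric positive definite for Euclidean points

L = np.linalg.cholesky(Z)    # Z = L L^T
ones = np.ones(len(pts))
# two solves in place of one full inversion; np.linalg.solve doesn't exploit
# triangularity, so in practice one would use a dedicated triangular solver
# such as scipy.linalg.solve_triangular
w = np.linalg.solve(L.T, np.linalg.solve(L, ones))   # the weighting
magnitude = w.sum()
```

The answer agrees with the direct computation $\sum_{a, b} (Z^{-1})_{a b}$, but the factorization route scales better and never forms the inverse.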

I obviously haven’t done justice to any of their work. See their papers and slides for more!

Both speakers have emphasized that they welcome questions by email.

Posted by: Tom Leinster on December 8, 2023 12:37 AM | Permalink | Reply to this

### Re: Rayna Andreeva and Katharina Limbeck: Metric space magnitude for machine learning

For speeding up the calculation of weightings see appendix E of https://arxiv.org/abs/2201.11677.

Posted by: Steve Huntsman on December 8, 2023 2:45 AM | Permalink | Reply to this

### Re: Rayna Andreeva and Katharina Limbeck: Metric space magnitude for machine learning

BTW appendix D of the same paper outlines a special case wherein coweightings are computed on matrices of size more than 160000x160000.

Posted by: Steve Huntsman on December 8, 2023 2:49 AM | Permalink | Reply to this

### Jun Yoshida: On a logical foundation for parametrized homologies

From machine learning to semantics, with Jun Yoshida. Let’s begin with Jun’s abstract:

The magnitude homology has a variant called the blurred magnitude homology. It is constructed from a filtration by a distance function on a simplex spanned by the points on a metric space.

From this point of view, it is thought of as a sibling of the persistent homology. The goal of this talk is to introduce a common foundation for this type of parameterized homologies. More precisely, I will explain how they are understood as semantics of ordinary simplicial homology in the intuitionistic logic. As an application, one can make a provably correct implementation of these homologies.

The talk began with an overview of persistent homology. Then we had a summary of blurred magnitude homology, a notion due to Nina Otter.

Let $X$ be a metric space. For $\ell \geq 0$, the blurred magnitude complex $MC_{*, \leq\ell}(X)$ is the chain complex whose $n$th group is the free module on the sequences $(x_0, \ldots, x_n)$ such that $x_0 \neq \cdots \neq x_n$ and

$\sum d(x_{i - 1}, x_i) \leq \ell.$

In ordinary magnitude homology, this inequality would be an equality. And the differential is the same as in magnitude homology. You can recover the magnitude complex from the blurred magnitude complex as a quotient:

$MC_{\ast, \ell}(X) \cong MC_{\ast, \leq \ell}(X)/\bigcup_{s \lt \ell} MC_{\ast, \leq s}(X).$

Blurred magnitude homology is the homology of the blurred magnitude complex.
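A small sanity check of my own (not from the talk) on the generators: for a graph, all lengths are integers, so the blurred generators at level $\ell$ are exactly the disjoint union of the ordinary generators at levels $0, 1, \ldots, \ell$ — which is why the ordinary complex appears as a quotient of the blurred one.

```python
from itertools import product

# The 4-cycle C4 with its shortest-path metric
def dist(a, b):
    k = abs(a - b) % 4
    return min(k, 4 - k)

def generators(n, ell, blurred=False):
    # sequences (x_0, ..., x_n) with consecutive entries distinct and total
    # length == ell (ordinary) or <= ell (blurred)
    gens = []
    for seq in product(range(4), repeat=n + 1):
        if any(seq[i] == seq[i + 1] for i in range(n)):
            continue
        length = sum(dist(seq[i], seq[i + 1]) for i in range(n))
        if (length <= ell) if blurred else (length == ell):
            gens.append(seq)
    return gens
```

For degree $1$: the $8$ ordered adjacent pairs generate at $\ell = 1$, the $4$ ordered antipodal pairs at $\ell = 2$, and the blurred complex at $\ell = 2$ has all $12$.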

Question Can we generalize results about persistent homology to more general parameter spaces than subsets of $\mathbb{R}$?

This question was also addressed in Emerson Gaw Escolar’s talk. But Jun’s point of view is that multiparameter persistent homology seems not to cover all situations, and he’s going to explain the idea of persistent homology as homology sheaves.

The idea is to consider $\mathbb{T} = (0, \infty]$ with the right-order topology, generated by half-open intervals $(a, \infty]$. (That’s T for “time”.) Then persistent homology and blurred magnitude homology are defined “internally” in $Sh(\mathbb{T})$. And Jun advocates using $Sh(\mathbb{T} \times \mathbb{R})$.

Goal Reformulate parametrized homology theory in the language of categorical logic.

• The language and arguments will be independent of the parameter space (e.g. $\mathbb{T}$ or $\mathbb{T} \times \mathbb{R}$).

• Should be applicable to non-spatial toposes, such as persistent homology with infinitesimal time dependency. (There’s a connection here with the vineyard algorithm.)

• A rigorous connection between the theory and the implementation, leading to a formally verified implementation on Lean that the speaker is building.

Jun then reviewed the basics of logic on sheaves, starting with the idea that a truth value on a topological space is an open subset. For example, in $\mathbb{T}$, we have truth values like “true after time $a$”.

(I guess we’re emphasizing “after” over “before” because something already having happened is an observable condition, hence corresponds to an open set.)

The speaker emphasized that the Mitchell-Bénabou language and Kripke-Joyal semantics apply not just in sheaf toposes but in arbitrary elementary toposes. For example, they can be used in a topos suitable for infinitesimally time-dependent logic, where the subobject classifier consists of the open sets in $\mathbb{T} \times \mathbb{R}$ quotiented out by agreement near $\mathbb{T} \times \{0\}$.

He gave a couple of examples, both involving sheaves on $\mathbb{T}$. The first was about continuously time-filtered sets as subobjects of constant sheaves on $\mathbb{T}$, and the second described a correspondence between (not necessarily symmetric) ultrametrics on a set $V$ and preorders on the constant sheaf $V$ on $\mathbb{T}$.

Using the tools of linear algebra in intuitionistic logic, Jun explained how to compute homology groups in a general topos-theoretic setting. The main theorem is a sheaf-theoretic generalization of the Zomorodian-Carlsson theorem on decomposition of persistent homology into interval modules.

There’s also a further theorem, for an arbitrary topos that isn’t necessarily a sheaf topos, where some new complications arise: you don’t necessarily have an interval module decomposition in the same simple sense. But it doesn’t look too much more complicated! Jun’s main application is to infinitesimal time.

Finally, Jun showed us some of his work setting up a formally verified implementation on Lean.

Posted by: Tom Leinster on December 8, 2023 2:11 AM | Permalink | Reply to this

### Re: Jun Yoshida: On a logical foundation for parametrized homologies

A very uninformed comment, but I wonder if there’s a possibility to bring persistent cohomotopy into the story. It’s viewed here (slide 98) as an enhancement of persistent homology.

Posted by: David Corfield on December 8, 2023 8:41 AM | Permalink | Reply to this

### Sergei O. Ivanov: On the path homology of Cayley digraphs and covering digraphs

Sergei Ivanov is here to tell us about path homology (also called GLMY theory). More and more people are working out the connections between path homology and magnitude homology.

Path homology is a homology theory of directed graphs (also called digraphs). The definition of path homology was too fast for me to type up, but there’s a nice summary of it in Section 6 of this paper by Asao, which also began the project of comparing it with magnitude homology.

Here are some properties of path homology, which Sergei explained:

• There’s a Künneth theorem.

• It’s homotopy invariant in an appropriate sense.

• It respects suspension.

• It generalizes homology of simplicial complexes. Given a simplicial complex $K$, we can form the directed graph $X(K)$ whose vertices are the simplices of $K$ and whose arrows are embeddings (“barycentric subdivision digraph”). Then the homology of $K$ is isomorphic to the path homology of $X(K)$.

• There’s an accompanying notion of fundamental group, with a Hurewicz isomorphism theorem.

• It satisfies Eilenberg-Steenrod axioms.

There’s also a path cohomology, and there’s a cup product making it into a graded algebra.

Open questions

• Is the path cohomology algebra graded commutative? Many computations haven’t settled the question. There’s no diagonal map… which reminds me of Hepworth’s results on the circumstances under which the cup product on magnitude cohomology is graded commutative. (We know that it often isn’t.) E.g. see Example 5.6 and Theorem 5.7 of Hepworth’s paper.

• Is the path cohomology algebra isomorphic to the cohomology algebra of a space? If it’s not graded commutative, the answer is no. If the answer is yes, that would be an analogue of the magnitude homotopy type.

• Is the path cohomology of a box product the same as that of the strong product? (Or maybe Sergei said “homology”, not “cohomology”.)

A sequence $x_0, \ldots, x_n$ of vertices in a directed graph $X$ is accessible if $d(x_{i - 1}, x_i)$ is finite for all $i$. The accessible sequences form a simplicial set $N(X)$, called the nerve of $X$. So, it’s the nerve of the free preordered set on $X$. The nerve of $X$ is also exhaustively filtered by length:

$N^1(X) \subseteq N^2(X) \subseteq \cdots \subseteq N(X).$

Idea: Interpret path homology theory in terms of this filtration.

In particular, the magnitude path spectral sequence of $X$, considered by Asao and then Hepworth and Roff, can be described through this filtration.

The next part of the talk was about notions of fundamental group and covering digraphs. He didn’t give the definition of covering, but it’s related to covering for simplicial complexes, via nerves. These coverings can also be called 1-coverings, and there’s a more general notion of $\ell$-covering closely related to the notion of metric fibration that Tsubasa Kamiyama spoke about yesterday, that Yasuhiko Asao recently blogged about here, and that Yasuhiko will also speak about this afternoon.

Proposition There is an equivalence between the categories of $\ell$-coverings of $X$ and coverings of the simplicial set $N^\ell(X)$.

I guess that could be taken as a definition of $\ell$-covering of a directed graph.

There’s a universal 1-covering of any directed graph, and the fundamental group is isomorphic to the group of deck transformations. (It’s cleanest if we do everything in an unbased way.) Going further, there’s a universal 2-covering of any directed graph, and an associated spectral sequence.

Now for Cayley digraphs. Take a group $G$ and a set $S$ of generators (not including $1$), giving a Cayley digraph $Cay(G, S)$. Homological and homotopical properties of $Cay(G, S)$ give information about the relations satisfied by the generators. It turns out that the universal covering of a Cayley digraph is another Cayley digraph — of a different group and generating set.

Example Consider $G = (\mathbb{Q}, +)$ with

$S = \{1/n! : n \geq 2\}.$

Put $X = Cay(G, S)$. Then the path homology of $X$ is the exterior algebra on a free abelian group of countably infinite rank.

Posted by: Tom Leinster on December 8, 2023 2:59 AM | Permalink | Reply to this

### Yasuhiko Asao: Classification of metric fibrations

Now for Yasuhiko Asao on the classification of metric fibrations, subject of a recent guest post here.

The definition of metric fibration was given in Tsubasa Kamiyama’s talk, so I won’t repeat it here. The basic property is that if $p: E \to X$ is a metric fibration, with $E$ finite, then all the fibres $F$ are isometric and

$Mag(E) = Mag(F) Mag(X).$

Motivation

The following isn’t strictly necessary, but is useful philosophical background.

Define an Fset-category, or seminormed category, to be a small category $C$ with a real number $|f| \geq 0$ for each map $f$, such that $|1_c| = 0$ for all $c$ and $|g\circ f| \leq |g| + |f|$.

Examples

• Any small category, with $|f| = 0$ for all $f$.

• Any metric space $X$, where $X(a, b)$ is a one-element set $\{f_{a, b}\}$ and $|f_{a, b}| = d(a, b)$, for all points $a, b$.

So $Fset$-$Cat$ contains both categories and metric spaces. (You can view an Fset-category as a category enriched over the monoidal category $Fset$ of filtered sets, as per Yasuhiko’s post here.)

(Question to self: are the inclusions $Cat \hookrightarrow FsetCat \hookleftarrow Met$ induced by monoidal inclusions $Set \hookrightarrow Fset \hookleftarrow [0, \infty]?$ I should go back to Yasuhiko’s post and see if this question is answered there.)

The notion of Grothendieck fibration can be generalized to Fset-categories, and its restriction to $Met$ gives exactly the metric fibrations.

Generally, Grothendieck fibrations over a category $C$ correspond to lax functors $C \to Cat$. This suggests a correspondence between metric fibrations over a space $X$ and “lax functors” $X \to Met$, whatever they may be… and we’ll see next!

(There was a question here over whether the correct analogy was really with fibrations or prefibrations.)

The main story

A lax functor $F: X \to Met$ consists of:

• a metric space $F(x)$ for each $x \in X$

• a map $F_{x x'}: F(x) \to F(x')$ for each $x, x'$

satisfying

• $F_{x x'}^{-1} = F_{x' x}$

• for all $x, x', x'' \in X$, $d_\infty(F_{x' x''} F_{x x'}, F_{x x''}) \leq d(x, x') + d(x', x'') - d(x, x'')$, where $d_\infty$ is the sup distance between functions.

The lax functors $X \to Met$ form a category $Met_X$ in the obvious way. On the other hand, the fibrations over $X$ form a category $Fib_X$ in the obvious way (with commutative triangles).

Theorem $Fib_X \simeq Met_X$ for all metric spaces $X$.

Examples

• $F_{x x'}$ is functorial if and only if the corresponding metric fibration is trivial (i.e. the total space is globally a product of the base and the fibre, and the map is projection). Well: Yasuhiko was sure of “only if” but not 100 percent sure of “if”.

• There’s an example involving maps $K_2 \times K_3 \to K_3$ and $K_{3, 3} \to K_3$, where we have the same base and same fibre for both maps, but the total spaces are different. (Here $K_n$ is the complete graph on $n$ vertices and $K_{m, n}$ is the complete bipartite graph.)
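The multiplicativity $Mag(E) = Mag(F) Mag(X)$ can be sanity-checked numerically for the second example. This is my own Python sketch, not from the talk; it uses the standard fact that the magnitude of a finite metric space is the sum of the entries of the inverse of its similarity matrix.

```python
import numpy as np

def magnitude(D):
    """Magnitude of a finite metric space: sum of entries of Z^{-1}, Z = exp(-D)."""
    Z = np.exp(-np.asarray(D, dtype=float))
    return np.linalg.inv(Z).sum()

# Shortest-path metrics. K_n: all off-diagonal distances 1.
K2 = [[0, 1], [1, 0]]
K3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]

# K_{3,3}: distance 1 across the two parts, 2 within each part.
K33 = [[0 if i == j else (2 if i // 3 == j // 3 else 1) for j in range(6)]
       for i in range(6)]

# The metric fibration K_{3,3} -> K_3 has fibre K_2, so magnitudes multiply:
print(magnitude(K33))   # agrees with magnitude(K2) * magnitude(K3)
```

In closed form this is $6/((1+q)(1+2q)) = \frac{2}{1+q} \cdot \frac{3}{1+2q}$ with $q = e^{-1}$, which is the product formula in action, even though $K_{3,3}$ is not the product graph $K_2 \times K_3$.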

Actions and torsors

We mimic the topological case:

$Fib/X \leftrightarrow PrinBun/X \leftrightarrow [X, B G] = Hom(\pi_1 X, G),$

where that last equality assumes $G$ is discrete.

For us, the group $G$ should be a metric group, i.e. a group with a bi-invariant metric, or equivalently, a conjugation-invariant norm.

Example For a finite metric space $Y$, we have the group $Aut(Y)$ with sup-distance.

For a metric group $G$, a $G$-lax functor $F: X \to Met$ is a lax functor such that:

• $F(x) = G$ for all $x \in X$

• $F_{x x'}: G \to G$ is left multiplication by an element of $G$, which we also denote by $F_{x x'}$.

Then there’s a notion of morphism between $G$-lax functors, so that they form a subcategory $G Met_X$ of $Met_X$.

A map $\pi: E \to X$ is a $G$-torsor if:

• $\pi$ is a metric fibration with a fibre-preserving right action of $G$ on $E$

• the map $E \times G \to E\times_X E$ defined by $(e, g) \mapsto (e, e g)$ is an isomorphism over $X$.

The $G$-torsors over $X$ form a category $G Tor_X$.

Proposition The equivalence of categories $Fib_X \simeq Met_X$ induces a further equivalence $G Tor_X \simeq G Met_X$.

Classification

Write $Fib_X^Y$ for the subcategory of $Fib_X$ with fibres $Y$ (a finite metric space?). It can be shown that

$core(Fib_X^Y) \simeq (Aut(Y))Met_X,$

i.e. to $G Met_X$ where $G = Aut(Y)$.

In the other direction,

$G Met_X \simeq Hom(\pi_1^{met} X, G).$

Here $\pi_1^{met} X$ is the fundamental metric group. Choosing a basepoint $x$, it consists of the set of tuples $(x, x_1, \ldots, x_n, x)$ quotiented out by a certain equivalence relation defined in Yasuhiko’s post, which also contains the definition of the metric on $\pi_1^{met} X$.

Examples

• Take the cycle graph $C_5$, with vertices labelled cyclically as 1, 2, 3, 4, 5. The loop 123451 is identified with 13451, and then in turn with 1351. And then $d((1351), (1)) = d((1351), (131)) = d(3, 5) + d(5, 1) - d(3, 1) = 2 + 1 - 2 = 1.$ So 123451 has norm 1, and the fundamental metric group is $\mathbb{Z}$, with $|1| = 1$.

• For similar reasons, every odd cycle graph has fundamental metric group $\mathbb{Z}$. Hence on any odd cycle graph, there are nontrivial metric fibrations.

• On the other hand, over any even cycle graph, every metric fibration is trivial and the fundamental metric group of the graph is trivial.

• For a geodesic circle and $\mathbb{R}$, the fundamental metric group is again trivial, so again, every metric fibration over these bases is trivial.

• But for the circle with the subspace metric from $\mathbb{R}^2$, the fundamental metric group is nontrivial.
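The arithmetic in the $C_5$ example can be checked mechanically. A tiny Python sketch (my own illustration):

```python
def d_cycle(n, a, b):
    """Shortest-path distance between vertices a, b on the cycle graph C_n
    (vertices labelled 1..n cyclically)."""
    k = abs(a - b) % n
    return min(k, n - k)

# Deleting a vertex v from a loop (..., u, v, ...) costs the triangle slack
# d(u, v) + d(v, w) - d(u, w).  For the loop 1351 on C_5, deleting the 5:
slack = d_cycle(5, 3, 5) + d_cycle(5, 5, 1) - d_cycle(5, 3, 1)
print(slack)  # 2 + 1 - 2 = 1, the norm of the generating loop
```

On an even cycle the corresponding slacks all vanish, which is consistent with the trivial fundamental metric group in that case.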

Conjecture There is a homotopy relation $\sim$ such that $\pi_1^{met}$ is invariant under $\sim$ and any 1-dimensional space in the sense of Tsubasa Kamiyama’s talk is equivalent to a wedge of circles.

Posted by: Tom Leinster on December 8, 2023 6:06 AM | Permalink | Reply to this

### Re: Yasuhiko Asao: Classification of metric fibrations

It was clear from the talks that lots of people have read and appreciated Yasuhiko’s paper Magnitude homology and path homology (as have I). This establishes a relationship between magnitude homology of directed graphs and the path homology of Alexander Grigor’yan, Yong Lin, Yuri Muranov and Shing-Tung Yau. More specifically, there is a spectral sequence in which the first page is magnitude homology and the first row of the second page is path homology.

But what I hadn’t understood is that Yasuhiko’s paper is so popular that even manga characters read it!

This is from volume 1 of a book series called 18 = 80 by Iwabuchi Ryuuko, about an 80-year-old woman who becomes 18 again.

Posted by: Tom Leinster on December 10, 2023 10:24 AM | Permalink | Reply to this

### Re: Yasuhiko Asao: Classification of metric fibrations

Ryuuko Iwabuchi is my friend, and she asked me to supervise the mathematical settings of that manga. She left it to me to fill in the back cover of the book on that page, so I used “magnitude homology and path homology”, a little playfully.

Posted by: Yasuhiko Asao on December 11, 2023 6:47 AM | Permalink | Reply to this

### Re: Yasuhiko Asao: Classification of metric fibrations

:-) Yasuhiko very kindly gave me a copy of his friend’s book during the conference week, and explained how his paper title came to be in it. I was tickled pink.

Posted by: Tom Leinster on December 11, 2023 10:46 AM | Permalink | Reply to this

### Kiyonori Gomi: A direct proof for the positive definiteness of four point metric spaces

Kiyonori Gomi has done lots of important work on magnitude homology, listed at the magnitude bibliography. I referred to his work in both of my talks.

Today he’s speaking on this new paper, in which he proves directly that metric spaces with four points are always positive definite.

A finite metric space $X$ is positive definite if its similarity matrix $(e^{-d(x, y)})_{x, y \in X}$ is positive definite.

• It’s easy enough to prove directly that a metric space with $\leq 3$ points is positive definite, and that’s done in my paper “The magnitude of metric spaces”.

• Mark Meckes, in Theorem 3.6 of his paper “Positive definite metric spaces”, gave an indirect proof that 4-point spaces are also positive definite. It relied on a result about embedding into the 2-dimensional normed space $\ell_\infty^2$.

• The complete bipartite graph $K_{2,3}$, rescaled so that edges each have length $t \lt \log\sqrt{2}$, is not positive definite: hence spaces with 5 points need not be positive definite.

• From this, one can show that for every $n \geq 5$, there’s some $n$-point space that is not positive definite.
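The $K_{2,3}$ example is easy to illustrate numerically. This is my own sketch, not part of the talk: compute the eigenvalues of the similarity matrix on either side of the threshold $t = \log\sqrt{2}$.

```python
import math
import numpy as np

def similarity_matrix(D):
    """Z = (exp(-d(x, y))) for a distance matrix D."""
    return np.exp(-np.asarray(D, dtype=float))

# K_{2,3} with every edge given length t: distance t across the two
# parts, 2t within each part (shortest-path metric).
def K23(t):
    part = [0, 0, 1, 1, 1]
    return [[0 if i == j else (t if part[i] != part[j] else 2 * t)
             for j in range(5)] for i in range(5)]

threshold = math.log(math.sqrt(2))  # about 0.3466

for t in (0.3, 0.5):   # one value on each side of the threshold
    eigs = np.linalg.eigvalsh(similarity_matrix(K23(t)))
    print(t, eigs.min() > 0)   # False for t = 0.3, True for t = 0.5
```

A short computation shows the relevant determinant factors as $(2u - 1)(u - 1)$ with $u = e^{-2t}$, which is negative exactly when $t \lt \log\sqrt{2}$, matching the numerics.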

The only remaining challenge, then, is to find a direct proof for 4-point spaces. That's what this talk is about. An unexpected byproduct of the new proof is new insight into inclusion-exclusion for the magnitude of metric spaces.

The proof uses Sylvester’s criterion: a matrix is positive definite iff each of its leading principal submatrices has positive determinant. Since all spaces with $\leq 3$ points are positive definite, we only have to prove that the similarity matrix $Z$ itself has positive determinant.
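To make Sylvester's criterion concrete, here's a small Python sketch (my own illustration; the choice of 4-point space, the unit square with its Euclidean metric, is an assumption for the example):

```python
import numpy as np

def leading_principal_minors(Z):
    """Determinants of the top-left k-by-k submatrices, k = 1..n."""
    Z = np.asarray(Z, dtype=float)
    return [np.linalg.det(Z[:k, :k]) for k in range(1, len(Z) + 1)]

# A 4-point space: the unit square, vertices in cyclic order,
# with the subspace metric from the plane.
s = 2 ** 0.5
D = [[0, 1, s, 1],
     [1, 0, 1, s],
     [s, 1, 0, 1],
     [1, s, 1, 0]]
Z = np.exp(-np.array(D))

minors = leading_principal_minors(Z)
print(all(m > 0 for m in minors))   # True: Sylvester's criterion gives positive definiteness
```

Since the $\leq 3$-point case handles the first three minors, Gomi's argument only needs to show that the full $4 \times 4$ determinant is positive, which is where the real work lies.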

That’s not the main insight! It’s the starting point. I remember trying to prove the 4-point case long ago, and failing, and I knew Sylvester’s criterion could be applied.

What Gomi has done is much more ingenious than that. You should really read his paper if you want to see the argument. My limitations as a live blogger are very apparent to me here. I can’t compete with the paper. But during the talk, he revealed that he’d come up with the argument using Mathematica to explore the space of factorizations of the algebraic expressions involved, inspecting them to find out which ones would help with proving positivity.

The inclusion-exclusion principle

We know that magnitude of finite metric spaces satisfies the inclusion-exclusion law

$|A \cup B| = |A| + |B| - |A \cap B|$

under some very restrictive conditions, which have a lot in common with the definition of metric fibration (see previous talks). But the proof suggests other conditions under which it holds. First of all, it’s about taking the union of two 3-point spaces along a 2-point space, to get a 4-point space.
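As a sanity check on the law itself, here's a Python sketch (my own, not from the talk) of a case where inclusion-exclusion is known to hold: gluing two unit edges along a point to form the path $P_3$, a special case of the known result for graphs.

```python
import numpy as np

def magnitude(D):
    """Sum of entries of the inverse similarity matrix Z = exp(-D)."""
    return np.linalg.inv(np.exp(-np.asarray(D, dtype=float))).sum()

# The path P_3 (three collinear points at unit spacing) is the union of
# two unit edges A and B glued along their shared endpoint A ∩ B.
P3   = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
edge = [[0, 1], [1, 0]]
pt   = [[0]]

lhs = magnitude(P3)
rhs = magnitude(edge) + magnitude(edge) - magnitude(pt)
print(abs(lhs - rhs) < 1e-9)   # True: |A ∪ B| = |A| + |B| - |A ∩ B| here
```

In closed form both sides equal $(3 - q)/(1 + q)$ with $q = e^{-1}$; for a generic 4-point union of two 3-point spaces this equality fails, which is the point of Gomi's analysis below.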

But eventually, Gomi found a different sufficient condition, as follows. Let $x$ and $y$ be distinct points of a finite metric space $X$. Then the inclusion-exclusion law

$|X| = |X \setminus \{x\}| + |X \setminus \{y\}| - |X \setminus \{x, y\}|$

holds as long as:

• the similarity matrices of $X\setminus\{x\}$, $X\setminus\{y\}$ and $X \setminus\{x,y\}$ are all invertible

• a rather complicated condition on $d(x, y)$ that's in the paper (“$Z_{12} = b_0$”, where $b_0$ is defined in Definition 3.4), but which I haven't understood at a conceptual level.

Better still (Theorem 3.10), he showed that if inclusion-exclusion holds then this mysterious condition must hold! An equality between similarities (or equivalently, distances) is something very strict, so this confirms the intuition that inclusion-exclusion rarely holds.

(An aside: work of Gimperlein, Goffeng and Louca shows that asymptotic inclusion-exclusion holds under many circumstances. “Asymptotic” means scaling all the spaces up by a scale factor tending to $\infty$. This is mentioned in my colloquium slides.)

Now Gomi is going back to the previously known conditions for inclusion-exclusion to hold. These go back to a couple of old papers of mine: originally I stated it for metric spaces but didn't get the best condition; then I stated it for graphs and did get it right, but didn't say anything about more general metric spaces. I'm happy that Gomi has now put in print the “right” condition for general metric spaces; that's Theorem 3.2 in his paper.

And to wrap this up, he’s summarizing the various implications between the known sufficient conditions for inclusion-exclusion to hold. And he’s also talking about sufficient conditions for Mayer-Vietoris for magnitude homology, which is the categorification of inclusion-exclusion.

Posted by: Tom Leinster on December 8, 2023 7:19 AM | Permalink | Reply to this

### Richard Hepworth: Bigraded path homology and reachability homology

The very last talk of this very wonderful week is by Richard Hepworth. I’ll quote the abstract:

Important recent work of Asao conclusively demonstrated that magnitude homology and path homology of graphs, previously thought to be unrelated, are in fact just two aspects of a much larger object, the magnitude-path spectral sequence or MPSS. Magnitude homology is precisely the $E^1$ page of this sequence, while path homology is an axis of the $E^2$ page. In this talk I will present joint work in progress with Emily Roff. We show that the $E^2$ page of the MPSS satisfies Künneth, excision and Mayer-Vietoris theorems. These, together with the homotopy-invariance property proved by Asao, show that the entire $E^2$ term should be regarded as a homology theory, which we call bigraded path homology. Second, we show that bigraded path homology is a strictly stronger invariant than path homology, by demonstrating that it can completely distinguish the directed cycles, none of which can be told apart under the original path homology.

The first joint Hepworth-Roff paper is here, on reachability homology. But there’s more to come!

For a directed graph $G$, define the reachability chains $RC_*(G)$ over a ring $R$ as follows: $RC_k(G)$ is generated by tuples $(x_0, \ldots, x_k)$ with $x_0 \neq \cdots \neq x_k$ and $d(x_{i - 1}, x_i) \lt \infty$ for each $i$. The differential is given in the usual kind of way — see notes from previous talks! So reachability homology is the homology of the preorder associated with $G$.

Filter this complex as follows: $F_\ell RC_k(G)$ is spanned by $(x_0, \ldots, x_k)$ with $\sum d(x_{i - 1}, x_i) \leq \ell$.

The main character in this drama is the magnitude-path spectral sequence (MPSS) $(E_{\ast, \ast}^r(G), d^r)$. We have

$E^0_{i, j}(G) = \frac{F_i RC_{i + j}(G)}{F_{i-1} RC_{i+j}(G)} = MC_{i + j, i}(G),$

where $MC$ means magnitude chains. Then

$E^1_{i, j}(G) = MH_{i + j, i}(G).$

In other words:

The $E^1$ page of the magnitude-path spectral sequence is exactly magnitude homology.

I have a looming dread that Richard is about to start drawing spectral sequence diagrams that I won’t be able to reproduce here… yup, he’s doing it. Sorry!

The existence of the spectral sequence goes back to the original Hepworth-Willerton paper on magnitude homology, but it wasn’t investigated then.

Richard has now devoted two whole blackboards to a table that he’s apparently going to spend the rest of the talk building up. He just asked me whether I was keeping up… I’m going to have to resort to photographing it later.

The columns of the table are the pages of the spectral sequence (well, some of them), plus several other columns for other things… you’d really need a video to appreciate Richard’s performance.

One important point from the table:

The first row of the $E^2$ page of the magnitude-path spectral sequence is exactly path homology.

The terminology on-axis in the table means “in position $(\ast, 0)$”. That's where you find path homology on the $E^2$ page, as Asao showed. But he also showed that the whole $E^2$ page is invariant under 1-homotopy, suggesting that the whole page is interesting and worthy of investigation. Bigraded path homology is defined to be exactly this: the full $E^2$ page, with its bigrading.

Some abbreviations in the table: les and ses mean long and short exact sequence.

All I can give you is photos! Here’s the overall scene:

Richard built up the table in chronological order — but what he called his “personal chronology”. Here’s what the boards looked like at the moment when he’d just reached the present. Click any of these photos for a larger, higher-resolution version:

(The boards were kept extremely clean and the chalk was very good, both of which are great for photo quality.)

Here’s what they looked like when he’d finished the table:

Notes

• There’s a hidden “excision” row.

• Mayer-Vietoris for $E^2$ combines:

• excision for $E^1$
• a splitting built from no-entry/projection
• a LES built from a SES by taking homology.
• Excision and Mayer-Vietoris work for pushouts.

Rhetorical questions

• What do we gain from all the extensions and expansions explained in the table?

• What do we gain from moving off the axis?

Example (by way of an answer) Let $C_m$ be the directed cycle of length $m \geq 3$. Then $PH_*(C_m) \cong H_\ast(S^1)$, so $C_m$ is not 1-homotopy equivalent to a point. But it is $(m - 1)$-homotopy equivalent to a point. So $E^r_{\ast\ast}$ is trivial for $r \geq m$ and $C_m$ is long homotopy equivalent to a point.

The spectral sequence is nontrivial for as long as it can be, i.e. up to $E^{m - 1}$, and then it’s trivial from $E^m$ onwards. (Again, I’m not up to reproducing the full spectral sequence diagram.)

In any case, the bigraded path homology $PH_{\ast,\ast}(C_m)$ detects $m$, in contrast to the fact that ordinary path homology can’t tell any of the graphs $C_m$ apart.

Example Let $m \geq n \geq 1$ with $m \geq 3$. The graph $C_{m, n}$ consists of a left-to-right path of length $m$ and a left-to-right path of length $n$, with the two leftmost vertices identified and the two rightmost vertices identified. Then $E_{\ast,\ast}(C_{m,n})$ detects $m$ but not $n$ — i.e. the length of the long left-to-right path, but not the short one. Richard explained why.

What’s left to do?

• Optimize and complete the table! There’s a big gap in the “$r = r$” column.

• Take other results about path homology and extend them to bigraded path homology.

• Look for analogues of features of other homology theories (e.g. Lefschetz fixed point theorems).

• Investigate what happens on each page, on and off the axis.

• This is all for graphs. But what about metric spaces? What does a spectral sequence look like when you’re filtering by the real numbers?

Posted by: Tom Leinster on December 8, 2023 9:03 AM | Permalink | Reply to this

### Re: Magnitude 2023

Bravo, Tom! These posts are amazing, and will be much more useful for me to refer to than my own notes.

Posted by: Mark Meckes on December 8, 2023 9:09 AM | Permalink | Reply to this

### Re: Magnitude 2023

Seconded. Thanks so much for doing this!!

Posted by: Emily Roff on December 9, 2023 12:36 AM | Permalink | Reply to this

Post a New Comment