
September 6, 2016

Magnitude Homology

Posted by Tom Leinster

I’m excited that over on this thread, Mike Shulman has proposed a very plausible theory of magnitude homology. I think his creation could be really important! It’s general enough that it can be applied in lots of different contexts, meaning that lots of different kinds of mathematician will end up wanting to use it.

However, the story of magnitude homology has so far only been told in that comments thread, which is very long, intricately nested, and probably only being followed by a tiny handful of people. And because I think this story deserves a really wide readership, I’m going to start afresh here and explain it from the beginning.

Magnitude is a numerical invariant of enriched categories. Magnitude homology is an algebraic invariant of enriched categories. The Euler characteristic of magnitude homology is magnitude, and in that sense, magnitude homology is a categorification of magnitude. Let me explain!

I’ll explain twice: a short version, then a long version. After that, there’s a section going into some of the details that I wanted to keep tucked out of the way. Choose the level of detail you want!

So that I don’t have to keep saying it, almost everything here that’s new is due to Mike Shulman, who put these ideas together on the other thread. Some aspects were present in work that Aaron Greenspan did during his master’s year with me (2014–15); you can read his MSc thesis here. But Aaron and I didn’t get very far, and it was Mike who made the decisive contributions and to whom this theory should be attributed.

The short version

I won’t actually give the definition here — I’ll just sketch its shape.

Let $V$ be a semicartesian monoidal category. Semicartesian means that the unit object of $V$ is terminal. This isn’t as unnatural a condition as it might seem!

Let $X$ be a small $V$-category (= category enriched in $V$). Small means that the collection of objects of $X$ is small (a set).

Let $A \colon V \to Ab$ be a small functor. In this context, small means that $A$ is the left Kan extension of its restriction to some small full subcategory of $V$. This condition holds automatically if the category $V$ is small, as it often will be for us.

From this data, we define a sequence $\bigl( H_n(X; A) \bigr)_{n \geq 0}$ of abelian groups, called the (magnitude) homology of $X$ with coefficients in $A$. Dually, given instead a contravariant functor $A \colon V^{op} \to Ab$, there is a sequence $\bigl( H^n(X; A) \bigr)_{n \geq 0}$ of cohomology groups. But we’ll concentrate on homology.

As for any notion of homology, we can attempt to form the Euler characteristic

$$\chi(X; A) = \sum_{n \geq 0} (-1)^n rank(H_n(X; A)).$$

Depending on $X$ and $A$, it may or may not be possible to make sense of this infinite sum.

Examples:

  • When $V = Set$ and $A$ is chosen suitably, we recover the notion of homology and Euler characteristic of an ordinary category. What do “homology” and “Euler characteristic” mean for an ordinary category? There are several equivalent answers; one is that they’re just the homology and Euler characteristic of the topological space associated to the category, called its geometric realization or classifying space. The Euler characteristic of a category is also called its magnitude.

  • When $V$ is the poset $(\mathbb{N} \cup \{\infty\}, \geq)$, made monoidal by taking $\otimes$ to be addition, graphs can be understood as special $V$-categories. By choosing suitable values of $A$, we obtain Hepworth and Willerton’s magnitude homology of a graph. Its Euler characteristic is the magnitude of a graph.

  • When $V$ is the poset $([0, \infty], \geq)$, made monoidal by taking $\otimes$ to be addition, metric spaces can be understood as special $V$-categories. By choosing suitable values of $A$, we obtain a new notion of the magnitude homology of a metric space. Subject to convergence issues that haven’t been fully worked out yet, its Euler characteristic is the magnitude of a metric space.

The long version

Again, let’s start by fixing a semicartesian monoidal category $V$. I’ll use the letter $\ell$ for a typical object of $V$, because an important motivating case is where $V = [0, \infty]$, and in that case the objects of $V$ are thought of as lengths.

Aside   Actually, you can be a bit more general and work with an arbitrary monoidal category $V$ equipped with an augmentation, as described here, or you can do something more general still. But I’ll stick with the simpler hypothesis of semicartesianness.

Step 1   Let $X$ be a small $V$-category. We define a kind of nerve $N(X)$. The nerve of an ordinary category is a single simplicial set, but for us $N(X)$ will be a functor $V^{op} \to sSet$ into the category $sSet$ of simplicial sets. For $\ell \in V$, the simplicial set $N(X)(\ell)$ is defined by

$$N(X)(\ell)_n = \coprod_{x_0, \ldots, x_n \in X} V\bigl(\ell, X(x_0, x_1) \otimes \cdots \otimes X(x_{n - 1}, x_n)\bigr)$$

($n \geq 0$). The degeneracy maps are given by inserting identities. The inner face maps are given by composition. The outer face maps are defined using the unique maps from the first factor $X(x_0, x_1)$ and the last factor $X(x_{n - 1}, x_n)$ to the unit object of $V$. (There are unique such maps because $V$ is semicartesian.)

Mike wrote $MS$ instead of $N$. I guess he intended the M to stand for magnitude and the S to stand for simplicial. I’m using $N$ because I want to emphasize that it’s a kind of nerve. Still, half of me regrets removing the notation MS from a construction described by Mike Shulman.

Steps 2 and 3   Let $C(X)$ be the composite functor

$$V^{op} \stackrel{N(X)}{\longrightarrow} sSet \stackrel{\mathbb{Z} \cdot -}{\longrightarrow} sAb \longrightarrow Ch.$$

Here $sAb$ is the category of simplicial abelian groups, $Ch$ is the category of chain complexes of abelian groups, and the functor $\mathbb{Z} \cdot - \colon sSet \to sAb$ is induced by the free abelian group functor $\mathbb{Z} \cdot - \colon Set \to Ab$. The unlabelled functor $sAb \to Ch$ sends a simplicial abelian group to either its unnormalized chain complex or its normalized chain complex. It won’t matter which we use, for reasons I’ll explain in the details section below.

Notice that $C(X)$ isn’t a single chain complex; it’s a functor into the category of chain complexes. There’s one chain complex $C(X)(\ell)$ for each object $\ell$ of $V$.

Step 4   Now we bring in the other piece of data: a small functor $A \colon V \to Ab$, which I’ll call the functor of coefficients. Actually, everything that follows makes sense in the more general context of a functor $A \colon V \to Ch$, where $Ab$ is thought of as a subcategory of $Ch$ by viewing an abelian group as a chain complex concentrated in degree zero. But we don’t seem to have found a purpose for that extra generality, so I’ll stick with $Ab$.

We form the tensor product of $C(X) \colon V^{op} \to Ch$ with $A \colon V \to Ab$. By definition, this is the chain complex defined by the coend formula

$$C(X) \otimes_V A = \int^{\ell \in V} C(X)(\ell) \otimes A(\ell).$$

The tensor product on the right-hand side is the tensor product of chain complexes. Under our assumption that $A(\ell)$ is concentrated in degree zero, its $n$th component is simply $C(X)(\ell)_n \otimes A(\ell)$.

Explicitly, this coend is the coproduct over all $\ell \in V$ of the chain complexes $C(X)(\ell) \otimes A(\ell)$, quotiented out by one relation for each map $\ell \to \ell'$ in $V$. Which relation? Well, given such a map, you can write down two maps from $C(X)(\ell') \otimes A(\ell)$ to the coproduct I just mentioned, and the relation states that they’re equal.

This coend exists because of the smallness assumption on $A$. Indeed, by definition of small functor, there exists some small full subcategory $W$ of $V$ such that $A$ is the left Kan extension of $A|_W$ along the inclusion $W \hookrightarrow V$. Then $C(X)|_W \otimes_W A|_W$ exists because $Ch$ has small colimits, and you can show that it has the defining universal property of the coend above. So $C(X) \otimes_V A$ exists and is equal to $C(X)|_W \otimes_W A|_W$.

We have now constructed from $X$ and $A$ a single chain complex $C(X) \otimes_V A$.

If you choose to use unnormalized chains, you can unwind the coend formula to get a simple explicit formula for $C(X) \otimes_V A$:

$$(C(X) \otimes_V A)_n = \coprod_{x_0, \ldots, x_n \in X} A\bigl(X(x_0, x_1) \otimes \cdots \otimes X(x_{n - 1}, x_n)\bigr)$$

with the differential that you’d guess. (This formula does assume that $A$ is a functor from $V$ into $Ab$ rather than $Ch$. For $Ch$-valued $A$, the formula becomes slightly more complicated.) I don’t think there’s such a simple formula for normalized chains, at least for general $V$.

Step 5   The (magnitude) homology of $X$ with coefficients in $A$, written as $H_\ast(X; A)$, is the homology of the chain complex $C(X) \otimes_V A$. In other words, $H_n(X; A)$ is the $n$th homology group of $C(X) \otimes_V A$, for $n \geq 0$.

For the definition of cohomology, let $A$ instead be a small contravariant functor $V^{op} \to Ab$. Then we can form the chain complex

$$Hom(C(X), A)_V = \int_{\ell \in V} Hom(C(X)(\ell), A(\ell)).$$

The $Hom$ on the right-hand side denotes the closed structure on the monoidal category of chain complexes. And $H^\ast(X; A)$, the cohomology of $X$ with coefficients in $A$, is defined as the homology of the chain complex $Hom(C(X), A)_V$.

Everything is functorial in the way it should be: homology $H_\ast(X; A)$ is covariant in $X$, cohomology $H^\ast(X; A)$ is contravariant in $X$, and both are covariant in the functor $A$ of coefficients.

Example: ordinary categories

When $V = Set$, a small $V$-category is just a small category $X$.

The functor $N(X) \colon Set^{op} \to sSet$ sends $\ell \in Set$ to the $\ell$th power of the ordinary nerve. So, we might suggestively write $N(X)(\ell)$ as $N(X)^\ell$ instead.

Now let’s think about the functor of coefficients, which is some small functor $A \colon Set \to Ab$. For $A$ to be small means exactly that there is some small full subcategory $W$ of $Set$ such that $A$ is the left Kan extension of $A|_W$ along the inclusion $W \hookrightarrow Set$. For instance, choose an abelian group $B$ and define $A(\ell)$ to be the coproduct $\ell \cdot B$ of $\ell$ copies of $B$. Then $A$ is small, since if we take $W \subset Set$ to be the full subcategory consisting of just the one-element set then $A$ is the left Kan extension of its restriction to $W$. Let’s write $A$ as $- \cdot B \colon Set \to Ab$.

The general definition gives us homology groups $H_\ast(X; - \cdot B)$ for every small category $X$ and abelian group $B$. These homology groups, more normally written as $H_\ast(X; B)$, are actually something familiar. In simplicial terms, they’re simply the homology of the ordinary nerve of $X$ (with coefficients in $B$). In terms of topological spaces, they’re just the homology of the geometric realization (classifying space) of $X$.
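
If you want to see this in a really concrete case, here is a minimal computational sketch (my own illustration, not anything from the theory above): it computes the ranks of $H_\ast(X; \mathbb{Z})$ for a small category $X$ in the easy case where $X$ is a poset, so that the nondegenerate simplices of the nerve are exactly the strictly increasing chains. Ranks are taken over $\mathbb{Q}$, so any torsion is invisible.

```python
# A sketch, not part of the post: homology of the nerve of a poset,
# computed over Q.  The example poset is the proper nonempty subsets of
# {0,1,2} under inclusion, whose nerve is a hexagon, i.e. a circle.
from itertools import combinations
import numpy as np

elements = [frozenset(s) for k in (1, 2) for s in combinations(range(3), k)]

def simplices(n):
    """Nondegenerate n-simplices of the nerve: strictly increasing chains."""
    return [c for c in combinations(elements, n + 1)
            if all(c[i + 1] > c[i] for i in range(n))]   # proper inclusion

def boundary(n):
    """Matrix of the alternating-sum boundary map C_n -> C_{n-1}."""
    cols = simplices(n)
    if n == 0:
        return np.zeros((0, len(cols)))
    rows = simplices(n - 1)
    index = {c: i for i, c in enumerate(rows)}
    M = np.zeros((len(rows), len(cols)))
    for j, c in enumerate(cols):
        for i in range(n + 1):
            M[index[c[:i] + c[i + 1:]], j] += (-1) ** i
    return M

rank = lambda M: 0 if M.size == 0 else np.linalg.matrix_rank(M)

for n in range(3):
    betti = len(simplices(n)) - rank(boundary(n)) - rank(boundary(n + 1))
    print(f"rank H_{n} =", betti)   # a circle: 1, 1, 0
```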

Example: graphs

Let $V = (\mathbb{N} \cup \{\infty\}, \geq)$, a poset seen as a category. The objects of $V$ are the natural numbers together with $\infty$, there’s exactly one map $\ell \to m$ when $\ell \geq m$, and there are no maps $\ell \to m$ when $\ell \lt m$. It’s a monoidal category under addition. Any graph $X$ can be seen as a $V$-category: the objects are the vertices, and $X(x, y) \in V$ is the number of edges in a shortest path from $x$ to $y$ (understood to be $\infty$ if there is no such path at all).
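
To make that enrichment concrete, here is a tiny sketch (mine; the adjacency-list representation is just for illustration) computing the hom-objects $X(x, y)$ of a graph, i.e. the shortest-path distances, with $\infty$ for unreachable vertices.

```python
# A small illustration (not from the post): the V-category structure on a
# graph.  X(x, y) is the length of a shortest path from x to y, and
# infinity if there is no path at all.
from collections import deque
from math import inf

def hom_objects(adj):
    """Shortest-path distances of an unweighted graph, by BFS from each vertex."""
    dist = {x: {y: inf for y in adj} for x in adj}
    for x in adj:
        dist[x][x] = 0
        queue = deque([x])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if dist[x][v] == inf:
                    dist[x][v] = dist[x][u] + 1
                    queue.append(v)
    return dist

# The 4-cycle: for instance X(0, 2) = 2.
C4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(hom_objects(C4))
```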

So, we’re going to get a homology theory of graphs.

What about the coefficients? Well, the first point is that we don’t have to worry about the smallness condition. The category $V$ is small, so it’s automatic that any functor on $V$ is small too.

The second, important, point is that every object $\ell$ of $V$ gives rise to a functor $\delta_\ell \colon V \to Ab$, defined by

$$\delta_\ell(m) = \begin{cases} \mathbb{Z} &\text{if }\; m = \ell,\\ 0 &\text{if }\; m \neq \ell. \end{cases}$$

($m \in V$). We’re going to use $\delta_\ell$ as our functor of coefficients.

So, for any graph $X$ and natural number $\ell$, we get homology groups $H_\ast(X; \delta_\ell)$. It turns out that $H_n(X; \delta_\ell)$ is exactly what Richard Hepworth and Simon Willerton called the magnitude homology group $MH_{n, \ell}(X)$.

I’ll repeat Richard and Simon’s definition here, so that you can see concretely what Mike’s general theory actually produces in a specific situation. Let $X$ be a graph. For integers $n, \ell \geq 0$, let $MC_{n, \ell}(X)$ be the free abelian group on the set

$$\{ (x_0, \ldots, x_n) \in X^{n + 1} \,:\, x_0 \neq x_1 \neq \cdots \neq x_n, \;\; d(x_0, x_1) + \cdots + d(x_{n - 1}, x_n) = \ell \}.$$

For $1 \leq i \leq n - 1$, define $\partial_i \colon MC_{n, \ell}(X) \to MC_{n - 1, \ell}(X)$ by

$$\partial_i(x_0, \ldots, x_n) = \begin{cases} (x_0, \ldots, x_{i - 1}, x_{i + 1}, \ldots, x_n) & \text{if }\; d(x_{i - 1}, x_{i + 1}) = d(x_{i - 1}, x_i) + d(x_i, x_{i + 1}), \\ 0 & \text{otherwise}. \end{cases}$$

Then define $\partial \colon MC_{n, \ell}(X) \to MC_{n - 1, \ell}(X)$ by $\partial = \sum_{i = 1}^{n - 1} (-1)^i \partial_i$. This gives a chain complex $MC_{\ast, \ell}(X)$ for each natural number $\ell$. The Hepworth–Willerton magnitude homology group $MH_{n, \ell}(X)$ is defined to be its $n$th homology.

So, this two-case formula for the differential, involving the triangle inequality, somehow comes out of Mike’s general definition. I’ll explain how in the details section below.
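
To see the definition in action, here is a small computational sketch (mine, not Richard and Simon’s or Mike’s code) that builds the chain groups $MC_{n, \ell}$ and the differential above for the 4-cycle, and prints the ranks of the resulting homology groups. Ranks are computed over $\mathbb{Q}$, so any torsion is invisible; essentially the same code works for a finite metric space, except that $\ell$ then ranges over sums of distances rather than over natural numbers.

```python
# A sketch (not the authors' code): Hepworth-Willerton magnitude homology
# of the 4-cycle, straight from the definition of MC_{n,l} and the
# differential given above.  Ranks are taken over Q.
from itertools import product
import numpy as np

# Distance matrix of the 4-cycle (vertices 0, 1, 2, 3 in cyclic order).
d = np.array([[0, 1, 2, 1],
              [1, 0, 1, 2],
              [2, 1, 0, 1],
              [1, 2, 1, 0]])
vertices = range(4)

def basis(n, ell):
    """Generators of MC_{n,ell}: tuples (x_0,...,x_n) with consecutive
    entries distinct whose distances sum to ell."""
    return [t for t in product(vertices, repeat=n + 1)
            if all(t[i] != t[i + 1] for i in range(n))
            and sum(d[t[i], t[i + 1]] for i in range(n)) == ell]

def boundary(n, ell):
    """Matrix of the differential MC_{n,ell} -> MC_{n-1,ell}.  Only the
    inner faces i = 1, ..., n-1 appear, and only where the triangle
    inequality is an equality at x_i."""
    cols = basis(n, ell)
    if n == 0:
        return np.zeros((0, len(cols)))
    rows = basis(n - 1, ell)
    index = {t: i for i, t in enumerate(rows)}
    M = np.zeros((len(rows), len(cols)))
    for j, t in enumerate(cols):
        for i in range(1, n):
            if d[t[i - 1], t[i + 1]] == d[t[i - 1], t[i]] + d[t[i], t[i + 1]]:
                M[index[t[:i] + t[i + 1:]], j] += (-1) ** i
    return M

rank = lambda M: 0 if M.size == 0 else np.linalg.matrix_rank(M)

for ell in range(4):
    for n in range(ell + 1):   # MC_{n,ell} = 0 once n > ell
        dim = len(basis(n, ell))
        mh = dim - rank(boundary(n, ell)) - rank(boundary(n + 1, ell))
        if dim:
            print(f"l={ell}, n={n}: rank MC = {dim}, rank MH = {mh}")
```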

Incidentally, Richard and Simon proved a Künneth theorem, an excision theorem and a Mayer–Vietoris theorem for their magnitude homology of graphs. Can these be generalized to the magnitude homology of arbitrary enriched categories?

Example: metric spaces

Let $V$ be the poset $([0, \infty], \geq)$, made into a monoidal category in the same way that $\mathbb{N} \cup \{\infty\}$ was. As Lawvere pointed out long ago, any metric space can be seen as a $V$-category.

So, we get a homology theory of metric spaces. More exactly, we have a graded abelian group $H_\ast(X; A)$ for each metric space $X$ and functor $A \colon [0, \infty] \to Ab$. Exactly as for graphs, every element $\ell \in [0, \infty]$ gives rise to a functor $\delta_\ell \colon [0, \infty] \to Ab$, taking value $\mathbb{Z}$ at $\ell$ and $0$ elsewhere. So we get a group $H_n(X; \delta_\ell)$ for each $n \in \mathbb{N}$ and $\ell \in [0, \infty]$.

Explicitly, this group $H_n(X; \delta_\ell)$ turns out to be the same as the group $MH_{n, \ell}(X)$ that you get from Hepworth and Willerton’s definition above by simply crossing out the word “graph” and replacing it by “metric space”, and letting $\ell$ range over $[0, \infty]$ rather than $\mathbb{N} \cup \{\infty\}$.

But here’s the thing. There are some metric spaces, including most finite ones, where the triangle inequality is never an equality (except in the obvious trivial situations). For such spaces, the Hepworth–Willerton differential $\partial$ is always $0$. Hence the homology groups are the same as the chain groups, which tend to be rather large. For instance, that’s almost always the case when $X$ is a random finite collection of points in Euclidean space. So homology fails to do its usual job of summarizing useful information about the space.

In that situation, we might prefer to use different coefficients. So, let’s think again about the construction of the functor $\delta_\ell \colon V \to Ab$ from the object $\ell \in V$. This construction makes sense for any partially ordered set $V$, and it also makes sense not only for single elements (objects) of $V$, but for arbitrary intervals in $V$.

What I mean is the following. An interval $J$ in a poset $V$ is a subset with the property that if $\ell_1 \leq \ell_2 \leq \ell_3$ in $V$ with $\ell_1, \ell_3 \in J$ then $\ell_2 \in J$. For any interval $J \subseteq V$, there’s a functor $\delta_J \colon V \to Ab$ defined on objects by

$$\delta_J(\ell) = \begin{cases} \mathbb{Z} &\text{if }\; \ell \in J, \\ 0 &\text{otherwise}. \end{cases}$$

It’s defined on maps by sending everything to either a zero map or the identity on $\mathbb{Z}$. For instance, if $J$ is a trivial interval $\{\ell\}$ then $\delta_J$ is the functor $\delta_\ell$ that we met before.

I observed a few paragraphs back that when $X$ is a finite metric space, $H_\ast(X; \delta_\ell)$ typically isn’t very interesting. However, it seems likely that $H_\ast(X; \delta_J)$ is more interesting for nontrivial intervals $J \subseteq [0, \infty]$. The idea is that it introduces some blurring, to compensate for the fact that the triangle inequality is never exactly an equality. And here we get into territory that seems close to that of persistent homology… but this connection still needs to be explored!

Decategorification: from homology to magnitude

For any homology theory of any kind of object $X$, we can attempt to define the Euler characteristic of $X$ as the alternating sum of the ranks of the homology groups. We immediately have to ask whether that sum makes sense.

It may be that only finitely many of the homology groups are nontrivial, in which case there’s no problem. Or it may be that infinitely many of the groups are nontrivial, but the Euler characteristic can be made sense of using one or other technique for summing divergent series. Or, it may be that the sum is beyond salvation. Typically, if you want the Euler characteristic to make sense — or even just in order for the ranks to be finite — you’ll need to impose some sort of finiteness condition on the object that you’re taking the homology of.

The idea — perhaps the entire point of magnitude homology — is that its Euler characteristic should be equal to magnitude. For some enriching categories $V$, we have a theorem saying exactly that. For others, we don’t… but we do have some formal calculations suggesting that there’s a theorem waiting to be found. We haven’t got to the bottom of this yet.

I’ll say something about the general situation, then I’ll explain the state of the art in the three examples above.

In general, for a semicartesian monoidal category $V$, a small $V$-category $X$, and a small functor $A \colon V \to Ab$, we want to define the Euler characteristic of $X$ with coefficients in $A$ as

$$\chi(X; A) = \sum_{n \geq 0} (-1)^n rank(H_n(X; A)).$$

Here’s how it looks in our three running examples: categories, graphs and metric spaces.

  • In the case $V = Set$, we’re talking about the Euler characteristic of a category $X$. Take $A = - \cdot \mathbb{Z}$, as defined above. Then the homology group $H_n(X; A)$ is equal to $H_n(X; \mathbb{Z})$, the $n$th homology of the category $X$ with coefficients in $\mathbb{Z}$. That’s the same as the $n$th homology of the nerve (or its geometric realization).

    To make sense of $\chi(X; \mathbb{Z})$, we impose a finiteness condition. Assume that the category $X$ is finite, skeletal, and contains no nontrivial endomorphisms. Then the nerve of $X$ has only finitely many nondegenerate simplices, from which it follows that only finitely many of the homology groups are nontrivial. So, the sum is finite and $\chi(X; \mathbb{Z})$ makes sense.

    Under these finiteness hypotheses, what actually is $\chi(X; \mathbb{Z})$? Since $H_n(X; \mathbb{Z})$ is the $n$th homology of the nerve of $X$ with integer coefficients, $\chi(X; \mathbb{Z})$ is the ordinary (simplicial/topological) Euler characteristic of the nerve of $X$. And it’s a theorem that this is equal to the Euler characteristic of the category $X$, defined combinatorially and also called the “magnitude” of $X$.

    So for a small category $X$, the Euler characteristic of the magnitude homology $H_\ast(X; - \cdot \mathbb{Z})$ is indeed the magnitude of $X$. In other words: magnitude homology categorifies magnitude.

  • Take a graph $X$, seen as a category enriched in $V = (\mathbb{N} \cup \{\infty\}, \geq)$. For each natural number $\ell$, we can try to define the Euler characteristic

    $$\chi(X; \delta_\ell) = \sum_{n \geq 0} (-1)^n rank(H_n(X; \delta_\ell)).$$

    I said earlier that these homology groups are the same as Hepworth and Willerton’s homology groups $MH_{n, \ell}(X)$, and I described them explicitly.

    To make sure that the ranks are all finite, let’s assume that the graph $X$ is finite. That alone is enough to guarantee that the sum defining $\chi(X; \delta_\ell)$ is finite. Why? Well, from the definition of the chain groups $MC_{n, \ell}(X)$, it’s clear that $MC_{n, \ell}(X)$ is trivial when $n \gt \ell$. Hence the same is true of $MH_{n, \ell}(X)$, which means that the sum defining $\chi(X; \delta_\ell)$ might as well run only from $n = 0$ to $n = \ell$.

    At the moment, our graph has not one Euler characteristic but an infinite sequence of them:

    $$\chi(X; \delta_0), \;\; \chi(X; \delta_1), \;\; \chi(X; \delta_2), \;\; \cdots$$

    Let’s assemble them into a single formal power series over $\mathbb{Z}$:

    $$\chi(X) := \sum_{\ell \in \mathbb{N}} \chi(X; \delta_\ell) q^\ell = \sum_{n \geq 0} (-1)^n \sum_{\ell \in \mathbb{N}} rank(H_n(X; \delta_\ell)) q^\ell,$$

    where $q$ is a formal variable. (You might wonder what’s happened to $\ell = \infty$. In principle, it should be present in the sum. However, if we adopt the convention that $q^\infty = 0$ then it might as well not be. It will become clear when we look at metric spaces that this is the right convention to adopt.)

    On the other hand, viewing graphs as enriched categories leads to the notion of the magnitude of a graph. The magnitude of a finite graph $X$ is a formal expression in a variable $q$, and can be understood either as a rational function in $q$ or as a power series in $q$. Hepworth and Willerton showed that the power series $\chi(X)$ above is precisely the magnitude of $X$, seen as a power series. (There’s a small computational sketch of this comparison at the end of this section.)

    So in the case of graphs too, magnitude homology categorifies magnitude.

  • Finally, consider a metric space $X$, viewed as a category enriched in $V = ([0, \infty], \geq)$. For each $\ell \in V$, we want to define

    $$\chi(X; \delta_\ell) = \sum_{n \geq 0} (-1)^n rank(H_n(X; \delta_\ell)).$$

    I have no idea what these homology groups look like when $X$ is a familiar geometric object such as a disk or line, so I don’t know how often these ranks are finite. But they’re certainly finite if $X$ has only finitely many points, so let’s assume that.

    The sum on the right-hand side is, then, automatically finite. To see this, the argument is almost the same as for graphs. For graphs, we used the fact that the distance between two distinct vertices is always at least $1$, from which it followed that the homology groups $H_n(X; \delta_\ell)$ can only be nonzero when $n \leq \ell$. Now in a finite metric space, distances can of course be less than $1$, but finiteness implies that there’s a minimal nonzero distance: $\eta$, say. Then $H_n(X; \delta_\ell)$ can only be nonzero when $n \leq \ell/\eta$. That’s why the sum is finite.

    We’ve now assigned to our metric space not one Euler characteristic but a one-parameter family of them. That is, we’ve got an integer $\chi(X; \delta_\ell)$ for each $\ell \in [0, \infty]$. Actually, all but countably many of these integers are zero. Better still, for each real $m$ there are only finitely many $\ell \leq m$ such that $\chi(X; \delta_\ell) \neq 0$. (I’ll explain why in the details section.) So, it’s not too crazy to write down the formal expression

    $$\chi(X) = \sum_{\ell \in [0, \infty)} \chi(X; \delta_\ell) q^\ell.$$

    There are a couple of ways to think about the expression on the right-hand side. You can treat $q$ as a formal variable and the expression as a Hahn series (like a power series, but with non-integer real powers allowed). Or you can (attempt to) evaluate at a particular value of $q$ in $\mathbb{R}$ or $\mathbb{C}$ or some other setting where the sum makes analytic sense.

    So far no one knows how exactly we should proceed from here, but it looks as if the story goes something like this.

    Remember, we’re trying to show that magnitude homology categorifies magnitude, which in this instance means that $\chi(X)$ should be equal to the magnitude of a metric space $X$. That’s a real number, and it’s defined in terms of negative exponentials $e^{-d}$ of distances $d$, so let’s put $q = e^{-1}$. (This explains why we can ignore $\ell = \infty$, since then $q^\ell = e^{-\infty} = 0$.) I’m not claiming that anything converges! You can treat $e^{-1}$ as a formal variable for the time being, although at some stage we’ll want to interpret it as an actual real number.

    It’s a useful little lemma that when you have a bounded chain complex $C$, the alternating sum of the ranks of the groups $C_n$ is equal to the alternating sum of the ranks of the homology groups $H_n(C)$. So,

    $$\chi(X; \delta_\ell) = \sum_{n \geq 0} (-1)^n rank(MC_{n, \ell}(X))$$

    where $MC$ denotes the Hepworth–Willerton chain groups that I defined earlier. Substituting this into the definition of $\chi(X)$ gives

    $$\chi(X) = \sum_{n \geq 0} (-1)^n \sum_{\ell \in [0, \infty)} rank(MC_{n, \ell}(X)) e^{-\ell}.$$

    That’s potentially a doubly infinite sum. But we can do some formal calculations leading to the conclusion that $\chi(X)$ is indeed equal to the magnitude of the metric space $X$ (that is, the sum of all the entries of the inverse of the matrix $(e^{-d(x, y)})_{x, y \in X}$). Again, that’s deferred to the details section below. It’s not clear how to make rigorous sense of it, but I’m confident that it can somehow be done.

So, magnitude homology categorifies magnitude in all three of our examples… well, definitely in the first two cases, and tentatively in the third. Of course, we’d like to make a general statement to the effect that homology categorifies magnitude over an arbitrary base category $V$. The metric space case illustrates some of the difficulties that we might expect to encounter in making a general statement.
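
Before heading into the details, here is the small computational sketch promised in the graph example above (mine; it leans on sympy’s rational-function arithmetic). It computes the magnitude of the 4-cycle as the sum of the entries of $Z(q)^{-1}$, where $Z(q)_{x y} = q^{d(x, y)}$, and expands it as a power series in $q$. By Hepworth and Willerton’s theorem, the coefficient of $q^\ell$ should be $\chi(X; \delta_\ell)$, which you can check against the homology ranks printed by the earlier sketch.

```python
# A sketch comparing the two sides of the Hepworth-Willerton theorem for
# the 4-cycle: magnitude as a power series in q versus the Euler
# characteristics of magnitude homology.
import sympy as sp

q = sp.symbols('q')
d = [[0, 1, 2, 1],
     [1, 0, 1, 2],
     [2, 1, 0, 1],
     [1, 2, 1, 0]]                      # distance matrix of the 4-cycle
Z = sp.Matrix(4, 4, lambda i, j: q ** d[i][j])

magnitude = sp.simplify(sum(Z.inv()))   # sum of all the entries of Z^{-1}
print(magnitude)                        # a rational function of q
print(sp.series(magnitude, q, 0, 5))    # coefficient of q^l is chi(X; delta_l)
```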

Details and proofs

The rest of this post mostly consists of supporting details that we figured out in the other thread. I’ve mostly only bothered to include the points that weren’t immediately obvious to us (or me, at least).

If you’ve read this far, bravo! You can think of what follows as an appendix.

From simplicial abelian groups to chain complexes

The relationship between simplicial abelian groups and chain complexes is a classical part of homological algebra, but there’s at least one aspect of it that some of us in the old thread hadn’t previously appreciated.

First, the definitions. Let $G$ be a simplicial abelian group. The unnormalized chain complex $C(G)$ is defined by $C_n(G) = G_n$, the differentials being the alternating sums of the face maps. The degenerate elements of $C_n(G)$ generate a subgroup $D_n(G)$, and these subgroups assemble to give a subcomplex $D(G)$ of $C(G)$. The normalized chain complex is $C(G)/D(G)$.

Now here are two facts. First, there’s an isomorphism of chain complexes $C(G) \cong D(G) \oplus \frac{C(G)}{D(G)}$, natural in $G$. Second, the projection and inclusion maps between $C(G)$ and $C(G)/D(G)$ are mutually inverse up to a chain homotopy that is natural in $G$ (in the obvious sense). That naturality will be crucial for us. We therefore say that $C(G)$ and $C(G)/D(G)$ are naturally chain homotopy equivalent.

The functoriality of the nerve construction

Given a monoidal category $V$ and a small $V$-category $X$, we defined a functor

$$N(X) \colon V^{op} \to sSet$$

by

$$N(X)(\ell)_n = \coprod_{x_0, \ldots, x_n \in X} V\bigl(\ell, X(x_0, x_1) \otimes \cdots \otimes X(x_{n - 1}, x_n)\bigr).$$

Obviously $N$ is functorial in $X$: any $V$-functor $F \colon X \to Y$ induces a map of simplicial sets $N(F)_\ell \colon N(X)(\ell) \to N(Y)(\ell)$ for each $\ell \in V$, and this map is natural in $\ell$.

Less obvious is that $N$ is functorial in the following 2-dimensional sense. Take $V$-functors

$$F, G \colon X \rightrightarrows Y$$

and a $V$-natural transformation $\alpha \colon F \to G$. The claim is that for each $\ell \in V$, there’s an induced simplicial homotopy $\alpha_\ell$ from $N(F)_\ell$ to $N(G)_\ell$. Moreover, $\alpha_\ell$ is natural in $\ell$.

How does this work? I’m pretty much a klutz with things simplicial, so let me explain it in a concrete way and refer to this comment of Mike’s for a more abstract perspective.

Fix $\ell$. We have our two maps

$$N(F)_\ell, N(G)_\ell \colon N(X)(\ell) \rightrightarrows N(Y)(\ell).$$

By definition, a simplicial homotopy from $N(F)_\ell$ to $N(G)_\ell$ is a map $h \colon N(X)(\ell) \times \Delta^1 \to N(Y)(\ell)$ of simplicial sets that satisfies the appropriate boundary conditions. Here $\Delta^1$ means the representable simplicial set $\Delta(-, [1])$. There are two maps from the terminal simplicial set $1 = \Delta^0$ to $\Delta^1$, corresponding to the two maps $[0] \rightrightarrows [1]$ in $\Delta$. The “boundary conditions” are that the two composites in the diagram

$$N(X)(\ell) \cong N(X)(\ell) \times 1 \rightrightarrows N(X)(\ell) \times \Delta^1 \stackrel{h}{\longrightarrow} N(Y)(\ell)$$

are equal to $N(F)_\ell$ and $N(G)_\ell$.

Concretely, a simplicial homotopy $h$ from $N(F)_\ell$ to $N(G)_\ell$ consists of a map of sets

$$h_\phi \colon N(X)(\ell)_n \to N(Y)(\ell)_n$$

for each map $\phi \colon [n] \to [1] = \{0, 1\}$ in $\Delta$. When $\phi$ is the map with constant value $0$, $h_\phi$ is required to be equal to $N(F)_{\ell, n}$, and when $\phi$ has constant value $1$, $h_\phi$ is required to be equal to $N(G)_{\ell, n}$. The maps $h_\phi$ also have to satisfy some other equations which I don’t need to mention.

There are $n + 2$ maps $[n] \to [1]$ in $\Delta$, so what this means really explicitly is that a simplicial homotopy from $N(F)$ to $N(G)$ consists of an ordered list of $n + 2$ functions $N(X)(\ell)_n \to N(Y)(\ell)_n$ for each $n \geq 0$. The first has to be $N(F)_{\ell, n}$, the last has to be $N(G)_{\ell, n}$, and the whole lot have to hang together in some reasonable way. So roughly speaking, a simplicial homotopy is a kind of discrete path between two simplicial maps, as you’d probably expect.

We’re supposed to be building a simplicial homotopy from $N(F)$ to $N(G)$ out of a $V$-natural transformation $\alpha \colon F \to G$. So, let’s recall what a $V$-natural transformation actually is. More or less by definition, $\alpha$ consists of a map

$$\alpha_{x, x'} \colon X(x, x') \to Y(F(x), G(x'))$$

in $V$ for each $x, x' \in X$ (subject to some axioms). For instance, when $V = Set$, this map sends $f \in X(x, x')$ to the diagonal of the naturality square for $f$.

Now let $n \geq 0$. For any objects $x_0, \ldots, x_n$ of $X$, we can build from $F$, $G$ and $\alpha$ a sequence of $n + 2$ maps in $V$, which for ease of typesetting I’ll show for $n = 3$ (and you’ll guess the general pattern):

$$\begin{aligned}
X(x_0, x_1) \otimes X(x_1, x_2) \otimes X(x_2, x_3) & \;\longrightarrow\; Y(F x_0, F x_1) \otimes Y(F x_1, F x_2) \otimes Y(F x_2, F x_3), \\
X(x_0, x_1) \otimes X(x_1, x_2) \otimes X(x_2, x_3) & \;\longrightarrow\; Y(F x_0, F x_1) \otimes Y(F x_1, F x_2) \otimes Y(F x_2, G x_3), \\
X(x_0, x_1) \otimes X(x_1, x_2) \otimes X(x_2, x_3) & \;\longrightarrow\; Y(F x_0, F x_1) \otimes Y(F x_1, G x_2) \otimes Y(G x_2, G x_3), \\
X(x_0, x_1) \otimes X(x_1, x_2) \otimes X(x_2, x_3) & \;\longrightarrow\; Y(F x_0, G x_1) \otimes Y(G x_1, G x_2) \otimes Y(G x_2, G x_3), \\
X(x_0, x_1) \otimes X(x_1, x_2) \otimes X(x_2, x_3) & \;\longrightarrow\; Y(G x_0, G x_1) \otimes Y(G x_1, G x_2) \otimes Y(G x_2, G x_3).
\end{aligned}$$

These $n + 2$ maps in $V$ induce, in the obvious way, $n + 2$ maps of sets

$$\coprod_{x_0, \ldots, x_n \in X} V\bigl(\ell, X(x_0, x_1) \otimes \cdots \otimes X(x_{n-1}, x_n)\bigr) \;\longrightarrow\; \coprod_{y_0, \ldots, y_n \in Y} V\bigl(\ell, Y(y_0, y_1) \otimes \cdots \otimes Y(y_{n-1}, y_n)\bigr)$$

for each $\ell \in V$. The domain and codomain here are just $N(X)(\ell)_n$ and $N(Y)(\ell)_n$: so we have $n + 2$ maps $N(X)(\ell)_n \to N(Y)(\ell)_n$. The first of these maps is $N(F)_{\ell, n}$ and the last is $N(G)_{\ell, n}$. Some checking reveals that these maps, taken over all $n$, do indeed determine a simplicial homotopy from $N(F)_\ell$ to $N(G)_\ell$. Moreover, everything is obviously natural in $\ell$. So that’s our natural simplicial homotopy!

Functoriality of the tensor product

Let $A \colon V \to Ch$ be a small functor. For any functor $B \colon V^{op} \to Ch$, we can form the tensor product

$$B \otimes_V A = \int^{\ell \in V} B(\ell) \otimes A(\ell),$$

which is a chain complex. Obviously this determines a functor

$$- \otimes_V A \colon [V^{op}, Ch] \to Ch.$$

A little less obviously, $- \otimes_V A$ transforms any natural chain homotopy into a chain homotopy.

In other words, take functors $P, Q \colon V^{op} \rightrightarrows Ch$ and natural transformations $\kappa, \lambda \colon P \rightrightarrows Q$. (So, $\kappa$ and $\lambda$ consist of chain maps $\kappa_\ell, \lambda_\ell \colon P(\ell) \rightrightarrows Q(\ell)$ for each $\ell \in V$, natural in $\ell$.) Suppose we also have a chain homotopy $h_\ell \colon \kappa_\ell \to \lambda_\ell$ for each $\ell$, and that $h_\ell$ is natural in $\ell$. The claim is that there’s an induced chain homotopy $h \otimes_V A$ between the chain maps

$$\kappa \otimes_V A, \; \lambda \otimes_V A \colon P \otimes_V A \rightrightarrows Q \otimes_V A.$$

To show this, the key point is that a chain homotopy between the chain maps $\kappa_\ell, \lambda_\ell \colon P(\ell) \to Q(\ell)$ can be understood as a chain map $P(\ell) \otimes \mathbf{I} \to Q(\ell)$ satisfying appropriate boundary conditions. Here $\mathbf{I}$ (for “interval”) is the chain complex

$$\cdots \to 0 \to 0 \to \mathbb{Z} \stackrel{id}{\to} \mathbb{Z} \to 0 \to 0 \to \cdots$$

with the two copies of $\mathbb{Z}$ in degrees $0$ and $1$. Once you adopt this viewpoint, it’s straightforward to prove the claim, using only the associativity of $\otimes$ and the fact that $\otimes$ distributes over colimits.

An important consequence is that if two functors $B, B' \colon V^{op} \rightrightarrows Ch$ are naturally chain homotopy equivalent, then the complexes $B \otimes_V A$ and $B' \otimes_V A$ are chain homotopy equivalent.

It doesn’t matter whether you normalize your chains

Let $X$ be a small $V$-category. The functor $C(X) \colon V^{op} \to Ch$ was defined by first building from $X$ a certain functor $\mathbb{Z} \cdot N(X) \colon V^{op} \to sAb$, then turning simplicial abelian groups into chain complexes. I (or rather Mike) said that it doesn’t matter whether you do that last step with unnormalized or normalized chains. Why not?

Earlier in this “details” section, I recalled the fact that the two chain complexes coming from a simplicial abelian group $G$ are not only chain homotopy equivalent, but chain homotopy equivalent in a way that’s natural in $G$. We can apply this fact to the simplicial abelian group $\mathbb{Z} \cdot N(X)(\ell)$, for each $\ell \in V$. It implies that the two chain complexes coming from $\mathbb{Z} \cdot N(X)(\ell)$ are chain homotopy equivalent naturally in $\ell$. Or, said another way, the two versions of $C(X) \colon V^{op} \to Ch$ that you get by choosing the “unnormalized” or “normalized” option are naturally chain homotopy equivalent.

But we just saw that when two functors $B, B' \colon V^{op} \rightrightarrows Ch$ are naturally chain homotopy equivalent, their tensor products with $A$ are chain homotopy equivalent. So, the two versions of $C(X)$ have the same tensor product with $A$, up to chain homotopy equivalence. In other words, the chain homotopy equivalence class of $C(X) \otimes_V A$ is unaffected by which version of $C(X)$ you choose to use. The homology $H_\ast(X; A)$ of that chain complex is, therefore, also unaffected by this choice.

Invariance of magnitude homology under equivalence of categories

It’s a fact that the magnitude of an enriched category is invariant not only under equivalence, but even under the existence of an adjunction (at least, if both magnitudes are well-defined). Something similar is true for magnitude homology, as follows.

Let $F, G \colon X \rightrightarrows Y$ be $V$-functors between small $V$-categories. We’ll show that if there exists a $V$-natural transformation from $F$ to $G$ then the maps

$$H_\ast(X; A) \rightrightarrows H_\ast(Y; A)$$

induced by $F$ and $G$ are equal (for any coefficients $A$). It will follow that whenever you have $V$-categories that are equivalent, or even just connected by an adjunction, their homologies are isomorphic. (Even “adjunction” can be weakened further, but I’ll leave that as an exercise.)

The proof is mostly a matter of assembling previous observations. Take a $V$-natural transformation $\alpha \colon F \to G \colon X \to Y$. We have functors

$$N(X), N(Y) \colon V^{op} \rightrightarrows sSet,$$

natural transformations

$$N(F), N(G) \colon N(X) \rightrightarrows N(Y),$$

and (as we saw previously) a natural simplicial homotopy from $N(F)$ to $N(G)$ induced by $\alpha$. When we pass from simplicial sets to chain complexes, this natural simplicial homotopy turns into a natural chain homotopy (Lemma 8.3.13 of Weibel’s book). So, the natural transformations $C(F)$ and $C(G)$ between the functors

$$C(X), C(Y) \colon V^{op} \rightrightarrows Ch$$

are naturally chain homotopic. It follows from another of the previous observations that the chain maps

$$C(F) \otimes_V A, \;\; C(G) \otimes_V A \;\colon\; C(X) \otimes_V A \rightrightarrows C(Y) \otimes_V A$$

are chain homotopic. Hence they induce the same map $H_\ast(X; A) \to H_\ast(Y; A)$ on homology, as claimed.

Homology of graphs and of metric spaces

Earlier, I claimed that Mike’s general theory of homology of enriched categories reproduces Richard Hepworth and Simon Willerton’s theory of magnitude homology of graphs, by choosing the coefficients suitably. It’s trivial to extend Richard and Simon’s theory from graphs to metric spaces, as I did earlier; and I claimed that this too is captured by the general theory.

I’ll prove this now in the case of metric spaces. It will then be completely clear how it works for graphs.

Let $X$ be a metric space, seen as a category enriched in $V = [0, \infty]$. Let $\ell \geq 0$ be a real number, and recall the functor $\delta_\ell \colon V \to Ab$ from earlier. The aim is to show that the groups $H_n(X; \delta_\ell)$ and $MH_{n, \ell}(X)$ are isomorphic, where the latter is defined à la Hepworth–Willerton.

The nerve functor $N(X) \colon V^{op} \to sSet$ is given by

$$N(X)(\ell')_n = \{ (x_0, \ldots, x_n) : d(x_0, x_1) + \cdots + d(x_{n-1}, x_n) \leq \ell' \}.$$

The unnormalized chain group $C(X)(\ell')_n$ is simply the free abelian group on this set, but in order to make the connection with Richard and Simon’s definition, we’re going to use the normalized version of $C(X)$. It’s not too hard to see that the normalized $C(X)(\ell')_n$ is the free abelian group on the set

$$\{ (x_0, \ldots, x_n) : \, x_0 \neq x_1 \neq \cdots \neq x_n, \;\; d(x_0, x_1) + \cdots + d(x_{n-1}, x_n) \leq \ell' \}$$

with differentials $\partial = \sum_{i = 0}^n (-1)^i \partial_i$, where

$$\partial_i(x_0, \ldots, x_n) = \begin{cases} (x_0, \ldots, x_{i - 1}, x_{i + 1}, \ldots, x_n) & \text{if }\; i = 0 \,\text{ or }\, i = n \,\text{ or }\, x_{i - 1} \neq x_{i + 1}, \\ 0 & \text{otherwise}. \end{cases}$$

Now we have to compute $C(X) \otimes_V \delta_\ell$. I know two ways to do this. You can use the definition of coend directly, as Mike does here. Alternatively, note that for any functor of coefficients $A \colon V \to Ab$,

$$\begin{aligned}
(C(X) \otimes_V A)_n & = \int^{\ell' \in V} (C(X)(\ell') \otimes A(\ell'))_n \\
& = \int^{\ell' \in V} C(X)(\ell')_n \otimes A(\ell') \\
& = \int^{\ell' \in V} \coprod_{x_0 \neq \cdots \neq x_n} \mathbb{Z} \cdot V(\ell', d(x_0, x_1) + \cdots + d(x_{n-1}, x_n)) \otimes A(\ell') \\
& = \coprod_{x_0 \neq \cdots \neq x_n} \int^{\ell' \in V} \mathbb{Z} \cdot V(\ell', d(x_0, x_1) + \cdots + d(x_{n-1}, x_n)) \otimes A(\ell') \\
& = \coprod_{x_0 \neq \cdots \neq x_n} A(d(x_0, x_1) + \cdots + d(x_{n-1}, x_n)),
\end{aligned}$$

where the last step is by the density formula. We’re interested in the case $A = \delta_\ell$, and then the expression $A(\cdots)$ in the last line is either $\mathbb{Z}$ if the distances sum to $\ell$, or $0$ if not. So

$$(C(X) \otimes_V \delta_\ell)_n = \mathbb{Z} \cdot \{ (x_0, \ldots, x_n) : \, x_0 \neq x_1 \neq \cdots \neq x_n, \;\; d(x_0, x_1) + \cdots + d(x_{n-1}, x_n) = \ell \}.$$

That’s exactly Richard and Simon’s chain group $MC_{n, \ell}(X)$. With a little more thought, you can see that the differentials agree too. Thus, the chain complexes $C(X) \otimes_V \delta_\ell$ and $MC_{\ast, \ell}(X)$ are isomorphic. It follows that their homologies are isomorphic, as claimed.

Decategorification for metric spaces

The final stretch of this marathon post is devoted to finite metric spaces — specifically, how the magnitude of a finite metric space can be obtained as the Euler characteristic of its magnitude homology. Here’s where there are some gaps.

Let $X$ be a finite metric space. For each $\ell \in V$, we have the Euler characteristic

$$\chi(X; \delta_\ell) = \sum_{n \geq 0} (-1)^n rank(H_n(X; \delta_\ell)).$$

The ranks here are finite because the sets $N(X)(\ell)_n$ are manifestly finite. We saw earlier that the sum itself is finite, but let me repeat the argument slightly more carefully. First, these homology groups are the same as the Hepworth–Willerton homology groups. Second, the Hepworth–Willerton chain groups $MC_{n, \ell}(X)$ are trivial when $n \gt \ell/\eta$, where $\eta$ is the minimum nonzero distance occurring in $X$. So, the same is true of the homology groups $MH_{n, \ell}(X) = H_n(X; \delta_\ell)$.

Let $\mathbb{L}_X \subseteq [0, \infty]$ be the set of (extended) real numbers occurring as finite sums $d(x_0, x_1) + \cdots + d(x_{n - 1}, x_n)$ of distances in $X$. Although this set is usually infinite, it’s always countable. Better still, $\mathbb{L}_X \cap [0, L]$ is finite for all real $L \geq 0$. It’s easy to prove this, again using the fact that there’s a minimum nonzero distance.

For a number $\ell$ that’s not in $\mathbb{L}_X$, the Hepworth–Willerton chain groups $MC_{\ast, \ell}(X)$ are trivial, so the homology groups $MH_{\ast, \ell}(X) = H_\ast(X; \delta_\ell)$ are trivial too. Hence $\chi(X; \delta_\ell) = 0$. Or in other words: $\chi(X; \delta_\ell)$ only stands a chance of being nonzero if $\ell$ belongs to the countable set $\mathbb{L}_X$.

So, in the definition

$$\chi(X) = \sum_{\ell \in [0, \infty]} \chi(X; \delta_\ell) e^{-\ell},$$

that scary-looking sum over all $\ell \in [0, \infty]$ might as well only be over the relatively tame range $\ell \in \mathbb{L}_X$.

Now let’s do a formal calculation. Back in the main part of the post (just before the start of this “details” section), I observed that

$$\chi(X) = \sum_{n \geq 0} (-1)^n \sum_{\ell \in [0, \infty)} rank(MC_{n, \ell}(X)) e^{-\ell}.$$

Now $MC_{n, \ell}(X)$ is the free abelian group on the set

$$\{ (x_0, \ldots, x_n) : \, x_0 \neq x_1 \neq \cdots \neq x_n, \;\; d(x_0, x_1) + \cdots + d(x_{n-1}, x_n) = \ell \},$$

so $rank(MC_{n, \ell}(X))$ is the cardinality of this set. Hence, working formally,

$$\sum_{\ell \in [0, \infty)} rank(MC_{n, \ell}(X)) e^{-\ell} = \sum_{x_0 \neq \cdots \neq x_n} e^{-d(x_0, x_1)} e^{-d(x_1, x_2)} \cdots e^{-d(x_{n-1}, x_n)}.$$

Let $Z_X$ be the square matrix with rows and columns indexed by the points of $X$, and entries $Z_X(x, y) = e^{-d(x, y)}$. Write $I$ for the $X \times X$ identity matrix, and write $sum(M)$ for the sum of all the entries of a matrix $M$. Then

$$\sum_{x_0 \neq \cdots \neq x_n} e^{-d(x_0, x_1)} \cdots e^{-d(x_{n-1}, x_n)} = sum((Z_X - I)^n).$$

So our earlier formula

$$\chi(X) = \sum_{n \geq 0} (-1)^n \sum_{\ell \in [0, \infty)} rank(MC_{n, \ell}(X)) e^{-\ell}$$

now gives

$$\chi(X) = \sum_{n \geq 0} (-1)^n sum((Z_X - I)^n) = sum \biggl( \sum_{n \geq 0} (I - Z_X)^n \biggr).$$

Again formally speaking, the part inside the brackets is a geometric series whose sum is $Z_X^{-1}$. So, the conclusion is that

$$\chi(X) = sum(Z_X^{-1}).$$

The right-hand side is by definition the magnitude of the metric space $X$ (at least, assuming that $Z_X$ is invertible).

So, using non-rigorous formal methods, we’ve achieved our goal. That is, we’ve shown that the magnitude of a finite metric space is the Euler characteristic of its magnitude homology.
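
Here is a quick numerical sketch of the formal calculation above (my own illustration; the three-point space is arbitrary). It computes the magnitude $sum(Z_X^{-1})$ directly, and checks the tuple-counting identity $\sum_{x_0 \neq \cdots \neq x_n} e^{-d(x_0, x_1)} \cdots e^{-d(x_{n-1}, x_n)} = sum((Z_X - I)^n)$ term by term. It says nothing, of course, about the convergence of the alternating sum.

```python
# A numerical check of the formal identities above, for three points on a
# line.  Compares the sum over tuples with consecutive entries distinct of
# e^{-d(x_0,x_1)}...e^{-d(x_{n-1},x_n)} with sum((Z_X - I)^n), and prints
# the magnitude sum(Z_X^{-1}).
from itertools import product
import numpy as np

pts = np.array([0.0, 1.0, 2.5])          # an arbitrary finite metric space
d = np.abs(pts[:, None] - pts[None, :])
Z = np.exp(-d)
N = len(pts)
I = np.eye(N)

print("magnitude sum(Z^{-1}) =", np.linalg.inv(Z).sum())

for n in range(4):
    tuple_sum = sum(
        np.prod([Z[t[i], t[i + 1]] for i in range(n)])
        for t in product(range(N), repeat=n + 1)
        if all(t[i] != t[i + 1] for i in range(n))
    )
    matrix_sum = np.linalg.matrix_power(Z - I, n).sum()
    print(f"n={n}: tuple sum = {tuple_sum:.6f}, sum((Z-I)^n) = {matrix_sum:.6f}")
```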

We know how to make some of this rigorous. The basic idea is that to sum a possibly-divergent series $\sum_{n \geq 0} (-1)^n a_n$, we “vary the value of $-1$” by replacing it with a formal variable $t$. Thus, we define the formal power series $f(t) = \sum_{n \geq 0} a_n t^n$, hope that $f$ is formally equal to a rational function, hope that this rational function has no pole at $-1$, and in that case interpret $\sum_{n \geq 0} (-1)^n a_n$ as $f(-1)$.

That’s a time-honoured technique for summing divergent series. To apply it in this situation, here’s a little theorem about matrices that essentially appears in a paper by Clemens Berger and me:

Theorem Let $Z$ be a square matrix of real numbers. Then:

  • The formal power series $f(t) = \sum_{n \geq 0} sum((Z - I)^n) \cdot t^n$ is rational.

  • If $Z$ is invertible, the value of the rational function $f$ at $-1$ is (defined and) equal to $sum(Z^{-1})$.

This result provides a respectable way to interpret the last part of the unrigorous argument presented above — the bit about the geometric series. But the earlier parts remain to be made rigorous.
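
Here is that theorem checked on one made-up example. The only extra ingredient is the standard formal geometric-series identity: as a formal power series in $t$, $\sum_{n \geq 0} sum((Z - I)^n) t^n$ is the sum of the entries of $(I - t(Z - I))^{-1}$, which sympy can compute as an honest rational function.

```python
# A one-example check of the theorem; the matrix Z is made up.
import sympy as sp

t = sp.symbols('t')
Z = sp.Matrix([[1, sp.Rational(1, 2), sp.Rational(1, 3)],
               [sp.Rational(1, 2), 1, sp.Rational(1, 5)],
               [sp.Rational(1, 3), sp.Rational(1, 5), 1]])
Id = sp.eye(3)

# f(t) = sum_n sum((Z - Id)^n) t^n, packaged as a rational function of t.
f = sp.simplify(sum((Id - t * (Z - Id)).inv()))
print("f(t)        =", f)
print("f(-1)       =", sp.simplify(f.subs(t, -1)))
print("sum(Z^{-1}) =", sp.simplify(sum(Z.inv())))
```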

Posted at September 6, 2016 12:04 AM UTC

103 Comments & 0 Trackbacks

Re: Magnitude Homology

Incidentally, I agonized over notation.

  • I called the base category $V$. Everyone agrees on that. (Well, if I was Latexing I’d use $\mathcal{V}$, but on the blog it’s easier to stick to a plain $V$.)

  • Usually in enriched category theory, the objects of the base category $V$ are called things like $X$ (or $x$). I’ve used $\ell$ instead, for two reasons. First, I didn’t use $X$ and $x$ because I wanted them for something else. Second, $\ell$ is what we used in earlier conversations, it’s what Richard and Simon used, and in the important examples of graphs and metric spaces, it stands for length.

  • Usually I’d call an enriched category something like $A$ or $C$, at the opposite end of the alphabet from $V$. But Mike used $A$ for the coefficients (reasonably enough), so I wanted to avoid that. He used $C$ for the category. However, he also used C to stand for chain, and just about everyone writing on homological algebra does the same, so I wanted to avoid that. I chose $X$, because it’s a normal kind of letter for a graph, a metric space, or generally something that you might take the homology of.

  • I used $N$, $C$ and $H$ for the nerve, chain complex and homology functors. Mike used $MS$, $MC$ and $H$, with M standing for magnitude. As I said in the post, I think it’s good to use $N$ to signal that it’s a nerve construction, but I’m agnostic on whether the $C$ and $H$ should have $M$s in front of them.

    I don’t know whether Mike’s homology theory should be called “magnitude homology” or simply “homology”. Since magnitude homology is the categorification of magnitude in the same sense as Khovanov homology is the categorification of the Jones polynomial, calling it “magnitude homology” is like saying “Jones polynomial homology” (or more euphonically, “Jones homology”) instead of “Khovanov homology”. That would seem entirely reasonable. On the other hand, if there are no other theories of homology for enriched categories, maybe it should just be called “homology” without adornment.

    But that comes with a risk. If Mike writes this up and just calls it “homology”, someone else will call it “Shulman homology” and the name will stick. Much as he’ll deserve that, I’m a firm believer that descriptive names are better than named-after-people names — e.g. “Kullback–Leibler divergence” vs. “relative entropy”. In particular, “magnitude homology” is better than “Shulman homology” (sorry, Mike!). To avert the possibility of the terminology heading that way, the correct tactic must be to call it magnitude homology from the start :-)

I’m writing all this here because I want everyone to use this comments thread to discuss notation and terminology rather than mathematical substance, of course.

Posted by: Tom Leinster on September 6, 2016 2:04 AM

Re: Magnitude Homology

As for “$MS$”, I was just copying the notation used by Hepworth and Willerton for the same thing in the case of graphs (Remark 44). I only realized later that it was also my initials. (-:O I’m very happy to call it $N$ instead.

Posted by: Mike Shulman on September 6, 2016 2:07 PM

Re: Magnitude Homology

I’m writing all this here because I want everyone to use this comments thread to discuss notation and terminology rather than mathematical substance, of course.

Being British, I realise that you mean this literally; so with regard to your comments about ‘Shulman homology’, I note that in the post you use the term ‘Hepworth-Willerton chain groups’.

Anyway, nice “summary”.

Posted by: Simon Willerton on September 6, 2016 10:33 PM

Re: Magnitude Homology

nice “summary”.

Thanks! I know you’re kidding, in some respects at least; this might be the longest post I’ve ever written. But I do hope that the “short version” does function as a summary. Even the long version doesn’t take that long to get to the definition, and the pace of it was intended to be leisurely.

Re terminology named after people, I’m aware of my hypocrisy, and actually this gives me an insight into why so many things in mathematics are named after people rather than having useful descriptive names. It’s not because there are legions of mathematicians who think it’s better that way — it’s simply easier.

So, I want a name for the chain groups that you and Richard called $MC_{\ast, \ast}$, to distinguish them from the ones in Mike’s theory. What should I call them? I lazily named them after the two of you, but as you know, I’d prefer to use a descriptive name instead. What do you suggest?

(This is a test of both your linguistic flair and your selflessness. If you don’t suggest anything good, those chain groups will go on being named after you.)

Posted by: Tom Leinster on September 6, 2016 10:51 PM

Re: Magnitude Homology

As long as you’re trying to write something for a pretty general audience, would you define “V-category”? By analogy with “R-algebra” I would assume it’s a category equipped with a functor from V. But then you say base, so I’d think a functor to V.

Posted by: Allen Knutson on September 6, 2016 3:06 AM

Re: Magnitude Homology

Oh, sorry. “$V$-category” is a synonym for “category enriched in $V$”. I’ve edited the post to say this.

Posted by: Tom Leinster on September 6, 2016 3:10 AM

Re: Magnitude Homology

Thanks Tom for such a monumental summary of those findings!

In case people from relevant fields tune in, it would be good to hear of potentially interesting magnitude homologies for enrichment by different semicartesian categories.

From the other thread, we have the possibility of enrichment in convex spaces.

And there was a risky punt on Bruhat-Tits buildings.

Posted by: David Corfield on September 6, 2016 8:49 AM

Re: Magnitude Homology

monumental

Haha, yes, it’s a whopper!

Apart from wanting to tell the world about magnitude homology, there’s a secret reason why I wanted to get everything typed up now. The semester that’s about to start for me is very heavy on teaching and admin, and I’m going to have extremely limited time for anything else. So I wanted to make a good record of exactly where we’re at (as far as my understanding permits) in order that I can come back to it later when I’ve forgotten all the details.

Thanks for linking to those developing ideas on the monoidal categories with projections thread. I’ve been reading them without contributing. It would be spectacular if the Bruhat–Tits idea came off.

Posted by: Tom Leinster on September 6, 2016 9:00 AM

Re: Magnitude Homology

The semester that’s about to start for me is very heavy on teaching and admin, and I’m going to have extremely limited time for anything else. So I wanted to make a good record of exactly where we’re at (as far as my understanding permits) in order that I can come back to it later when I’ve forgotten all the details.

Sounds great to me; I’m also embarking on a very busy semester, and additionally we are expecting our second baby in November.

Posted by: Mike Shulman on September 6, 2016 2:13 PM

Re: Magnitude Homology

Many congratulations!

Posted by: Richard Williamson on September 6, 2016 7:50 PM | Permalink | Reply to this

Re: Magnitude Homology

Congratulations!

Posted by: Tom Leinster on September 6, 2016 3:54 PM | Permalink | Reply to this

Re: Magnitude Homology

Thanks!

Posted by: Mike Shulman on September 6, 2016 6:12 PM | Permalink | Reply to this

Re: Magnitude Homology

I’ve been thinking a bit about the application to categories enriched in convex spaces. Feel free to move this comment to the other thread if you would prefer keeping the discussion threads somewhat separate, and we could continue the discussion there.

For one thing, it’s hard to find interesting coefficient functors A:ConvSpAbA:ConvSp\to Ab, where “interesting” would mean that AA does not factor through the enveloping vector space functor ConvSpVect ConvSp\to Vect_{\mathbb{R}}, which is the left adjoint to the forgetful functor going the other way. For if AA factors like this, then we change the enrichment of our original category from ConvSpConvSp to Vect Vect_{\mathbb{R}}, use the fact that ConvSpVect ConvSp\to Vect_{\mathbb{R}} is strongly monoidal, and conclude that already the chain complex defining the homology as a ConvSpConvSp-category coincides with the chain complex defining the homology as a Vect Vect_{\mathbb{R}}-category (with the factored coefficient functor).

So at the moment, I don’t have any candidate coefficient functors that would both be interesting and somehow natural or meaningful. Any ideas?

Concerning the homology of FinStochFinStoch as a category enriched over ConvSpConvSp, I’m afraid that it’s trivial, since FinStochFinStoch shares an adjunction with the terminal ConvSpConvSp-category. (Either via the existence of an initial or a terminal object.)

Posted by: Tobias Fritz on September 8, 2016 1:16 PM | Permalink | Reply to this

Re: Magnitude Homology

conclude that already the chain complex defining the homology as a ConvSpConvSp-category coincides with the chain complex defining the homology as a Vect Vect_{\mathbb{R}}-category (with the factored coefficient functor).

What exactly does that mean, since ConvSpConvSp is semicartesian and Vect Vect_{\mathbb{R}} is not?

Concerning the homology of FinStochFinStoch as a category enriched over ConvSpConvSp, I’m afraid that it’s trivial, since FinStochFinStoch shares an adjunction with the terminal ConvSpConvSp-category.

Ah, well, too bad. Are there any other interesting ConvSpConvSp-categories?

Posted by: Mike Shulman on September 8, 2016 4:53 PM | Permalink | Reply to this

Re: Magnitude Homology

What exactly does that mean, since ConvSpConvSp is semicartesian and Vect Vect_{\mathbb{R}} is not?

Whoops! It means that you should replace Vect Vect_{\mathbb{R}} by Aff Aff_{\mathbb{R}} to make sense of my statement. An interesting coefficient functor will thus be one that doesn’t factor across the enveloping affine space functor ConvSpAff ConvSp\to Aff_{\mathbb{R}}.

Posted by: Tobias Fritz on September 8, 2016 6:00 PM | Permalink | Reply to this

Re: Magnitude Homology

Given the constant drive in certain parts to homotopify everything in sight, what scope is there for a homology coming from enriching in semicartesian monoidal (,1)(\infty, 1)-categories?

We have a very sketchy entry at nLab for cartesian monoidal (infinity,1)-category.

Can a semicartesian version be far away?

We also have an entry enriched (infinity,1)-category. Enrichment seems to be possible in an arbitrary monoidal ∞-category.

Posted by: David Corfield on September 6, 2016 11:50 AM | Permalink | Reply to this

Re: Magnitude Homology

No bites yet? With everyone heading out of the sunlit uplands of Summer research into the dark valleys of Autumn teaching, can we not just put down a marker here?

From that Gepner and Haugseng article I mentioned:

despite the large amount of work that has been carried out on the foundations of ∞-category theory, above all by Joyal and Lurie, the theory is in many ways still in its infancy, and the analogues of many concepts from ordinary category theory remain to be explored. In this paper we begin to study the natural analogue in the ∞-categorical context of one such concept, namely that of enriched categories.

our theory gives a good setting in which to develop ∞-categorical analogues of many concepts from enriched category theory, as we hope to demonstrate in future work.

The theory we set up in this article is the first completely general theory of weak enrichment.

Seeing that everything tends to go through right to the (,1)(\infty, 1) case when things are done properly, can we say that it’s plausible that magnitude homology carries over here?

Instead of coefficients in AbAb, perhaps spectra.

I see such a thing as the Simplicial nerve of an A-infinity category is being devised.

Is there anything to compare in the (,1)(\infty, 1) world with Lawvere’s surprising ([0,],)([0, \infty], \geq)?

Posted by: David Corfield on September 7, 2016 10:06 AM | Permalink | Reply to this

Re: Magnitude Homology

Yes, I certainly believe that it should work the way these sorts of things usually do. I don’t know many interesting semicartesian (,1)(\infty,1)-categories other than 1-categories, though. In particular, I don’t know of any categorified [0,][0,\infty].

Posted by: Mike Shulman on September 7, 2016 10:53 PM | Permalink | Reply to this

Re: Magnitude Homology

I don’t know many interesting semicartesian (,1)(\infty,1)-categories other than 1-categories, though.

The homotopified analogues of examples from the nLab page, such as

The opposite of the category of associative algebras over a given base field kk with its usual tensor product?

Maybe not interesting, though.

Just to check all the bases, and since you and Tom are leading experts in such matters, could some form of magnitude homology come into play with enrichment by bicategories or by virtual double categories?

Maybe there you would need to take things relative to two points, and peg down the ends, as here.

Posted by: David Corfield on September 8, 2016 8:54 AM | Permalink | Reply to this

Re: Magnitude Homology

Tom observed that categorifying magnitude to magnitude homology is similar to how the Jones polynomial has been categorified to Khovanov homology. So could it be that Khovanov homology is itself an instance of some variant of magnitude homology? More concretely, can a knot be regarded as a category enriched in a bicategory or virtual double category? (I’m aware that this question is a bit wacky…)

Pretty much all other (co)homology theories that I can think of feel like they should be instances of magnitude (co)homology. (That doesn’t mean much.) There’s one further exception: cyclic homology. Does anyone see whether magnitude homology recovers it? Alternatively, at least in the case of symmetric monoidal VV, maybe one could come up with a general cyclic version of magnitude homology: there’s an obvious way to take the quotient of the explicit formula (C(X) VA) n=⨿ x 0,,x nA(X(x 0,x 1)X(x n1,x n))(C(X)\otimes_V A)_n = \amalg_{x_0,\ldots,x_n} A(X(x_0,x_1)\otimes\cdots\otimes X(x_{n-1},x_n)) by the cyclic shift.

Posted by: Tobias Fritz on September 8, 2016 12:57 PM | Permalink | Reply to this

Re: Magnitude Homology

I don’t know anything about cyclic homology, but Richard has suggested a “Hochschild” version of magnitude homology, which as I noted here can be obtained as an instance of a general version whose coefficients involve a two-variable functor.

Posted by: Mike Shulman on September 8, 2016 4:58 PM | Permalink | Reply to this

Re: Magnitude Homology

One can view knots (and links with more than one component) as 2-arrows in a certain cubical 2-category (also known as an edge-symmetric double category). Not that this answers your question, but I thought somebody might find figuring out how to do this a fun puzzle (you’ll have to rely on your wits; it’s not something written up anywhere)! Take the ‘knots/links’ to be formed from ‘pieces of string’ as in everyday life, without glueing the ends together in the end as we usually do when formalising knots mathematically. Hint: the idea is a bit akin to the definition of the braid groups, but we wish our notion of isotopy to take into account all Reidemeister moves, including the R0\mathsf{R0} move (planar isotopy).

While I’m on the subject, one can also view knots/links as (certain) presheaves on a certain category. Again, I’ll leave this as a puzzle to anybody interested (one or two denizens of the n-café have discussed this with me before, but if you’re not one of them, you’ll have to rely on your wits again!).

Posted by: Richard Williamson on September 8, 2016 9:10 PM | Permalink | Reply to this

Re: Magnitude Homology

one can also view knots/links as (certain) presheaves on a certain category.

I’m a bit too tired now to think about your puzzles, so let me ask some further questions: What do you get upon forming the category of elements of this functor? I’m thinking that if the projection functor from the category of elements down to the base category is faithful, then the category of elements is canonically enriched in a quantaloid, as described in Section 3 here. Could we then apply some version of magnitude homology to knots and possibly recover Khovanov homology? I’m speculating wildly…

Posted by: Tobias Fritz on September 9, 2016 7:12 PM | Permalink | Reply to this

Re: Magnitude Homology

I’m thinking that if the projection functor from the category of elements down to the base category is faithful

This is definitely not the case for the category of elements of almost any knot/link, when viewing a knot/link as a presheaf on the category I had in mind. It seems to me indeed that it will almost never be true for presheaves in general.

I am skeptical that one can recover Khovanov homology as a special case of magnitude homology, because I don’t see a way that the Khovanov/Kauffman bracket, which is the heart of the construction, could be captured by the formalism of enriched categories. One can consider the Kauffman bracket as a functor out of the category of braids into a certain ‘Temperley-Lieb category’. Investigating magnitude homology in a category such as the latter might be a more promising endeavour, but it still seems rather unlikely to me.

Which would of course make it all the more interesting if my skepticism were unfounded :-).

Posted by: Richard Williamson on September 13, 2016 8:50 PM | Permalink | Reply to this

Re: Magnitude Homology

Continuing the bicategory (or perhaps virtual equipment) thought, is there anything to stop a magnitude homology for two objects in such an enriched category?

If VV is the bicategory, and xx and yy are objects in a small VV-enriched category XX, could we have for X(x,y)\ell \in X(x, y):

N(x,y)() n= (x 1,,x n)XV(x,y)(,(X(x,x 1)X(x 1,x 2)X(x n1,x n)X(x n,y)))?N(x,y)(\ell)_n = \coprod_{(x_1, \ldots, x_n) \in X} V(x,y)(\ell, \circ(X(x, x_1) \otimes X(x_1, x_2) \otimes \cdots \otimes X(x_{n-1}, x_n) \otimes X(x_n, y))) \;?

Posted by: David Corfield on September 8, 2016 1:45 PM | Permalink | Reply to this

Re: Magnitude Homology

Yes, that works. More generally, suppose VV is a bicategory (or double category — virtual doesn’t cut it) and XX is a VV-enriched category, while GG and FF are left and right XX-modules with extents v,wVv,w\in V. That means for each object xXx\in X we have a hom-object G(x)V(v,ϵx)G(x) \in V(v,\epsilon x), with actions G(x)X(x,y)G(y)G(x) \otimes X(x,y) \to G(y) and so on. Then we can define

B(G,X,F) n()= x 0,,x nXV(v,w)(,G(x 0)X(x 0,x 1)X(x n1,x n)F(x n))B(G,X,F)_n(\ell) = \sum_{x_0,\dots,x_n\in X} V(v,w)(\ell, G(x_0) \otimes X(x_0,x_1)\otimes \cdots \otimes X(x_{n-1},x_n)\otimes F(x_n))

as a functor V(v,w)sSetV(v,w) \to sSet, and proceed from there.

Posted by: Mike Shulman on September 8, 2016 5:05 PM | Permalink | Reply to this

Re: Magnitude Homology

I remember there’s something like a homology of proof or of rewriting. And that’s usually linked to the directed algebraic topology program.

A short search brings up these slides by Jérémy Dubut. Slide 16/36 shows directed homology of computation traces, assigned to pairs of points as expected.

Posted by: David Corfield on September 9, 2016 10:25 AM | Permalink | Reply to this

Re: Magnitude Homology

A little bit of lunchtime exploration: from Marco Grandis’s Directed Algebraic Topology, we are brought back to Lawvere’s metric spaces:

In Chapter 6 we end this study by investigating ‘spaces’ where paths have a ‘weight’, or ‘cost’, expressing length or duration, price, energy, etc. The general aim is now: measuring the cost of (possibly non-reversible) phenomena.

The weight function takes values in [0,][0, \infty] and is not assumed to be invariant up to path-reversion. Thus, ‘weighted algebraic topology’ can be developed as an enriched version of directed algebraic topology, where illicit paths are penalised with an infinite cost, and the licit ones are measured. Its algebraic counterpart will be ‘weighted algebraic structures’, equipped with a sort of directed seminorm.

A generalised metric space in the sense of Lawvere [Lw1] yields a prime structure for this purpose. For such a space we define a fundamental weighted category, by providing each homotopy class of paths with a weight, or seminorm, which is subadditive with respect to composition.

We also study a more general framework, w-spaces or spaces with weighted paths (a natural enrichment of d-spaces), whose relationship with noncommutative geometry also takes into account the metric aspects - in contrast with cubical sets and d-spaces.

Chapter 6

As we have seen, directed algebraic topology studies ‘directed spaces’ with ‘directed algebraic structures’ produced by homotopy or homology functors: on the one hand the fundamental category (and possibly its higher dimensional versions), on the other preordered homology groups. Its general aim is modelling non-reversible phenomena.

We now sketch an enrichment of this subject: we replace the truth-valued approach of directed algebraic topology (where a path is licit or not) with a measure of costs, taking values in the interval [0,][0, \infty] of extended (weakly) positive real numbers. The general aim is, now, measuring the cost of (possibly non-reversible) phenomena.

Weighted algebraic topology will study ‘weighted spaces’, like (generalised) metric spaces, with ‘weighted’ algebraic structures, like the fundamental weighted (or normed) category, defined here, and the weighted homology groups, developed in [G10] for weighted (or normed) cubical sets.

These weighted categories (6.3.1) include a certain category of Lawvere generalized metric spaces.

Posted by: David Corfield on September 9, 2016 1:57 PM | Permalink | Reply to this

Re: Magnitude Homology

Wow, thank you for this very impressive writeup! I want to emphasize that the definition and its properties developed over the course of a lengthy discussion with (mainly) Tom, so he should definitely also get some credit. Richard Williamson also made some useful contributions, such as the sufficiency of natural chain homotopy and pushing me to think harder about the smallness hypothesis on AA.

In particular, last night I independently performed the “unwinding” of the coend that Tom mentions above to get

(C(X) VA) n= x 0,,x nXA(X(x 0,x 1)X(x n1,x n))(C(X) \otimes_V A)_n = \bigoplus_{x_0,\dots,x_n\in X} A(X(x_0,x_1)\otimes\dots\otimes X(x_{n-1},x_n))

and noticed that this formula doesn’t depend on AA being a small functor! So actually, the magnitude homology can be defined for any functor A:VAbA:V\to Ab (and I’m sure that A:VChA:V\to Ch works too).

In particular, this de-emphasis of smallness gives a different perspective on the ordinary case. The functor -\cdot \mathbb{Z} that Tom used to get out ordinary homology of ordinary categories has another name: it’s the free abelian group functor! So the homology of an ordinary category with \mathbb{Z} coefficients is its magnitude homology with coefficients in the free abelian group functor.
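
Unpacking that a little (spelling out an identification that isn’t made explicit here): with A(S) = \mathbb{Z}\cdot S, the explicit chain group above becomes

\bigoplus_{x_0,\dots,x_n\in X} \mathbb{Z}\cdot\bigl(X(x_0,x_1)\times\cdots\times X(x_{n-1},x_n)\bigr) \;\cong\; \mathbb{Z}\cdot N(X)_n,

the free abelian group on the n-simplices of the nerve of X, i.e. the usual (unnormalized) simplicial chain group computing the homology of the classifying space.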

This also makes for a somewhat simpler version of the generalization to non-semicartesian VV. In that case the “coefficients” consist not only of A:VAbA:V\to Ab but also F:XVF:X\to V and G:X opVG:X^{op}\to V, and the nerve is replaced by a two-sided simplicial bar construction

B(G,X,F)() n=V(,F(x 0)X(x 0,x 1)X(x n1,x n)G(x n)) B(G,X,F)(\ell)_n = V(\ell,F(x_0) \otimes X(x_0,x_1)\otimes\cdots\otimes X(x_{n-1},x_n)\otimes G(x_n))

giving in place of the simple formula above

x 0,,x nXA(F(x 0)X(x 0,x 1)X(x n1,x n)G(x n))\bigoplus_{x_0,\dots,x_n\in X} A(F(x_0)\otimes X(x_0,x_1)\otimes\dots\otimes X(x_{n-1},x_n)\otimes G(x_n))

If V=AbV=Ab and we take AA to be the identity functor, and specialize to the case when XX has one object and is thus just a ring, then I believe the homology of this simplicial abelian group gives the “relative Tor” Tor * X/(F,G)Tor_\ast^{X/\mathbb{Z}}(F,G). So at least in that case, we get something known.

Posted by: Mike Shulman on September 6, 2016 2:04 PM | Permalink | Reply to this

Re: Magnitude Homology

I fear I’m replying too quickly to this post, but I have to start catching up on other commitments now so here goes…

We both noticed that for the unnormalized version of C(X)C(X), there is an explicit formula for C(X) VAC(X) \otimes_V A that makes sense regardless of whether AA is small. So, you could write down a much shorter definition of magnitude homology (let’s say in the case of a semicartesian monoidal category VV):

The magnitude homology of a small VV-category XX, with coefficients in a functor A:VAbA \colon V \to Ab, is the homology of the chain complex whose nnth group is x 0,,x nXA(X(x 0,x 1)X(x n1,x n)) \bigoplus_{x_0, \ldots, x_n \in X} A\bigl( X(x_0, x_1) \otimes\cdots\otimes X(x_{n-1}, x_n) \bigr) and whose differential is given in the “obvious” way.
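
To see how this specializes in the metric space case (my unpacking, so take it with a grain of salt): for V = ([0,\infty], \geq, +) and A = \delta_\ell, the functor that is \mathbb{Z} at \ell and 0 elsewhere, the nth chain group above becomes

\bigoplus_{x_0, \ldots, x_n \in X} \delta_\ell\bigl(d(x_0,x_1)+\cdots+d(x_{n-1},x_n)\bigr) \;=\; \mathbb{Z}\cdot\bigl\{(x_0,\ldots,x_n) \,:\, d(x_0,x_1)+\cdots+d(x_{n-1},x_n)=\ell\bigr\},

and (modulo the normalization issue discussed next) discarding the tuples with some x_i = x_{i+1} leaves exactly the Hepworth–Willerton-style chain groups MC_{n,\ell}(X) that appear further down this thread.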

That’s great. But at some stage we want to use normalized chains, e.g. to make the connection with Hepworth and Willerton’s magnitude homology for graphs. How do we do this without the assumption of smallness? Our existing argument uses properties of the functor VA:[V op,Ch]Ch- \otimes_V A \colon [V^{op}, Ch] \to Ch — but no such functor exists if AA is not small.

I guess it’s all OK in the sense that when you unwind all the arguments sufficiently, they’re just elementary algebraic manipulations that need no smallness condition. But that’s not a proof!

I’m sure that A:VChA \colon V \to Ch works too

By my calculation, if we use the unnormalized version of C(X)C(X) then for an arbitrary small functor A:VChA \colon V \to Ch,

(C(X) VA) n= i+j=n x 0,,x iXA j(X(x 0,x 1)X(x i1,x i)). (C(X) \otimes_V A)_n = \bigoplus_{i + j = n} \bigoplus_{x_0, \ldots, x_i \in X} A_j\bigl( X(x_0, x_1) \otimes\cdots\otimes X(x_{i-1}, x_i) \bigr).

[For general monoidal category VV, not necessarily semicartesian,] the “coefficients” consist not only of A:VAbA \colon V \to Ab but also F:XVF \colon X \to V and G:X opVG \colon X^{op} \to V.

I wonder whether this begins to resolve a question about coefficients that had been bothering me.

In what I think of as the “standard” or “classical” framework, one takes the (co)homology of an ordinary category XX with coefficients in a functor X opAbX^{op} \to Ab. (Or maybe XAbX \to Ab; I won’t attempt to get the variance right.) However, in the magnitude homology framework as described in my post, the coefficients are a functor SetAbSet \to Ab.

So, the two approaches simply use different kinds of coefficient systems. They do agree in some sense: given an abelian group BB, you can form either the functor from XX to AbAb with constant value BB or the functor SetAbSet \to Ab that takes copowers of BB, and the two homologies are then the same. But still, it puzzles me that the coefficients are of different types.

In the approach you’re describing now, the magnitude homology of an ordinary category XX would have coefficients in a triple of functors (A,F,G)(A, F, G) where

A:SetAb,F:XSet,G:X opSet. A: Set \to Ab, \qquad F: X \to Set, \qquad G: X^{op} \to Set.

We still aren’t seeing a functor X (op)AbX^{(op)} \to Ab here! (Though you could imagine using AA, FF and GG to build one.) What’s going on?

Posted by: Tom Leinster on September 6, 2016 3:39 PM | Permalink | Reply to this

Re: Magnitude Homology

at some stage we want to use normalized chains, e.g. to make the connection with Hepworth and Willerton’s magnitude homology for graphs

Indeed. However, the calculation that unwinds the coend makes sense even before we make our simplicial objects into chain complexes. That is, we have a simplicial abelian group whose nnth group is

\bigoplus_{x_0,\dots,x_n\in X} A(X(x_0,x_1)\otimes\cdots\otimes X(x_{n-1},x_n))

and the unnormalized C(X)\otimes_V A is the unnormalized one associated to this. I suspect that the *normalized* version of C(X)\otimes_V A is just the *normalized* chain complex associated to this simplicial abelian group. In fact, I think this should follow from abstract nonsense: the normalized and unnormalized chain complexes associated to a simplicial abelian group should be obtained by tensoring it with some canonical cosimplicial chain complexes, an operation which commutes with \otimes_V. If this is true, then we should be fine.

Posted by: Mike Shulman on September 6, 2016 8:05 PM | Permalink | Reply to this

Re: Magnitude Homology

In what I think of as the “standard” or “classical” framework, one takes the (co)homology of an ordinary category XX with coefficients in a functor X opAbX^{op} \to Ab…. However, in the magnitude homology framework as described in my post, the coefficients are a functor SetAbSet \to Ab.

Good question! I hadn’t thought of that.

One thing we can do is replace FF and GG by a single VV-functor H:X opXPH:X^{op}\otimes X \to P, where PP is a VV-category with copowers, and then take AA to be a functor PAbP\to Ab. Then the chain groups become

x 0,,x nXA((X(x 0,x 1)X(x n1,x n))H(x n,x 0))\bigoplus_{x_0,\dots,x_n\in X} A((X(x_0,x_1)\otimes \cdots \otimes X(x_{n-1},x_n)) \odot H(x_n,x_0))

where \odot denotes the copower V×PPV\times P\to P. Given FF and GG we take P=VP=V and H(x,y)=G(x)F(y)H(x,y) = G(x)\otimes F(y). If we instead take H(x,y)=X(x,y)H(x,y) = X(x,y), we get the “magnitude Hochschild homology” suggested by Richard here.

And in the case V=SetV=Set, we could start with B:XAbB:X\to Ab and take P=AbP=Ab, A=Id AbA=Id_Ab, and H(x,y)=B(y)H(x,y) = B(y) (or, if we instead used B:X opAbB:X^{op}\to Ab, then H(x,y)=B(x)H(x,y) = B(x)). I don’t know whether this reproduces the usual homology of a category with coefficients in BB, but at least it has the same input.

Posted by: Mike Shulman on September 6, 2016 8:15 PM | Permalink | Reply to this

Re: Magnitude Homology

With the appearance of ‘relative Tor’, I’d just like to mention, for when people come back to this story, that over on the other thread I outline a possible construction of a ‘Hochschild homology/cohomology’ theory as well using Mike’s ideas, just using a slightly different collection of simplicial sets. Assuming that I’ve not made a mistake, it’d be interesting to know whether the corresponding Euler characteristic is interesting; what it detects for graphs, metric spaces, etc.

Posted by: Richard Williamson on September 6, 2016 8:01 PM | Permalink | Reply to this

Re: Magnitude Homology

Perhaps this has already been said, but the nerve of a metric space reminds me of the various complexes used in the study of persistent homology. In fact, I think it’s exactly the filtered Cech complex they use there.

It also occurs to me that once you’ve taken the nerve, you have a nice simplicial presheaf on VV and for a lot of purposes, you might just stop there! You can do a lot of homotopy theory with simplicial presheaves. This would be even more interesting if VV carries the structure of a site, to get a more interesting model structure on simplicial presheaves.

It seems natural to ask what are the derived functors of the homology and cohomology functors you’ve defined. But maybe the nerve of a VV-category is automatically bifibrant in a suitable model structure on simplicial presheaves, so that these functors are “already derived”?

Another thing this suggests is that you might lift a model structure from simplicial presheaves on VV to a model structure on VV-categories, and then ask how it relates to other model structures on VV-categories.

Posted by: Tim Campion on September 6, 2016 2:09 PM | Permalink | Reply to this

Re: Magnitude Homology

From the brief foray into it that I mentioned here, aren’t the Cech and Rips complexes as used in persistent homology somewhat different? The Cech one looks for inhabited intersections of balls of a given radius, while the Rips looks at limiting edge lengths.

As I mentioned also there, there is ‘squeezing’ of a kind going on.

Posted by: David Corfield on September 6, 2016 2:19 PM | Permalink | Reply to this

Re: Magnitude Homology

Yes, I think everyone who’s run into persistent homology feels some resonance here! It would be really fantastic if someone could make a definite connection — actually prove some theorems — and I think we’re close to the stage where that’s a real possibility.

In fact, when John started this conversation with the question

Is there any way to generalize the Hepworth–Willerton homology from graphs to general finite metric spaces?

my instant reply was

I would love it if someone found a way to do this.

Aaron Greenspan and I spent a while trying to do it ourselves, but we didn’t get too far. Then recently, I was at a fantastic applied topology conference where all the talk of persistent homology revived my urge to do it.

My talk at that conference was an attempt to get applied topologists interested in magnitude. The very last slide summarizes some of the connections between the two subjects that I envisaged.

Then later in that thread, starting here, David started talking about persistent homology too, going into more detail — e.g. about Čech and Rips complexes. You really need signposts for a thread that long!

As he just said, there’s a difference between the Čech and Rips complexes, and page 3 of the paper by Ghrist he linked to is a good source for this.

(When consulting the literature, it’s useful to know that some people say “Vietoris complex” instead of “Rips complex”. Others try to be even-handed by saying “Vietoris–Rips”. I guess there are other people still with delicate alphabetic sensibilities who say “Rips–Vietoris” — the same people who speak of “Čech–Stone compactification” :-))

Here’s a categorically pertinent point that I haven’t seen made explicitly in applied topology: the Rips complex is something associated to a metric space, whereas the Čech complex is something associated to a subspace of a metric space. In Ghrist’s paper and many other places, all the spaces concerned are embedded in n\mathbb{R}^n, so that distinction is invisible. But in principle that’s the way it is.

Posted by: Tom Leinster on September 6, 2016 4:00 PM | Permalink | Reply to this

Re: Magnitude Homology

Right, unless I’m confused, the nerve of a metric space is not the same as its Cech or Rips complex, although they all live in the same category (simplicial presheaves on [0,][0,\infty]) and in each case the nn-simplices are (n+1)(n+1)-tuples (x 0,,x n)(x_0,\dots,x_n) satisfying some condition — the conditions are different in each case. In N(X)N(X) we require that d(x 0,x 1)++d(x n1,x n)d(x_0,x_1)+\cdots+d(x_{n-1},x_n) \le \ell. In the Cech complex we require that iB(x i,/2)\bigcap_i B(x_i,\ell/2) \neq \emptyset in some ambient metric space. And in the Rips complex we require that each d(x i,x j)d(x_i,x_j)\le\ell (so in particular, the Rips complex is determined by its 1-skeleton, which apparently makes it more computationally tractable).
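
If it helps to see the three conditions come apart on an actual example, here is a throwaway numerical sketch (the points, scales and helper names are invented purely for illustration), testing a single 2-simplex on three points of the plane, with the plane itself as the ambient space for the Cech condition:

```python
# Compare the three membership conditions for a 2-simplex (x_0, x_1, x_2)
# at scale ell, for points in the plane (the ambient space for Cech).
from itertools import combinations
from math import dist

def path_length(pts):
    # nerve condition: d(x_0, x_1) + d(x_1, x_2) <= ell
    return sum(dist(a, b) for a, b in zip(pts, pts[1:]))

def diameter(pts):
    # Rips condition: every pairwise distance <= ell
    return max(dist(a, b) for a, b in combinations(pts, 2))

def min_enclosing_diameter(a, b, c):
    # Cech condition: the balls B(x_i, ell/2) have a common point, i.e. the
    # smallest ball containing {a, b, c} has diameter <= ell.
    s = sorted([dist(a, b), dist(b, c), dist(a, c)])
    if s[2] ** 2 >= s[0] ** 2 + s[1] ** 2:        # right or obtuse triangle
        return s[2]                               # longest side is the diameter
    area = abs((b[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(b[1]-a[1])) / 2
    return s[0] * s[1] * s[2] / (2 * area)        # twice the circumradius

a, b, c = (0.0, 0.0), (1.0, 0.0), (0.5, 0.9)
for ell in (1.1, 1.2, 2.1):
    print(ell,
          "nerve:", path_length([a, b, c]) <= ell,
          "Rips:", diameter([a, b, c]) <= ell,
          "Cech:", min_enclosing_diameter(a, b, c) <= ell)
```

At these three scales the triple is a Rips simplex but not a Cech or nerve simplex, then a Rips and Cech simplex but not a nerve simplex, then all three, which is one concrete way of seeing that the three constructions genuinely differ.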

Posted by: Mike Shulman on September 6, 2016 6:13 PM | Permalink | Reply to this

Re: Magnitude Homology

I’m not quite sure what you mean by “derived functor” in this context. Tom explained why the magnitude homology of a category is “homotopy invariant” as an operation on the category (respects equivalences), by way of showing that the nerve itself is also homotopy invariant. The only question along these lines I can think of that isn’t answered is whether the magnitude homology of a VV-category factors through its nerve by a homotopy-invariant operation on sSet V opsSet^{V^{op}}, i.e. whether H *(X)H_\ast(X) depends only on the homotopy type of N(X)N(X). That is an interesting question, and I agree that the answer should be yes if N(X)N(X) is sufficiently cofibrant (I don’t think fibrancy is necessary, nor is it likely to happen unless we use some “quasicategory-type” model structure on sSet V opsSet^{V^{op}}).

In fact, now that I think about it, N(X)N(X) looks fairly cofibrant: in each simplicial degree it is a coproduct of representables. This seems close to being projectively cofibrant, which in turn ought to be enough to make everything homotopy-invariant. Projective cell complexes of simplicial presheaves are obtained by “gluing on representable simplices”, i.e. pushing out along maps of the form Δ n×V(,v)Δ n×V(,v)\partial \Delta^n \times V(-,v) \to \Delta^n \times V(-,v); so we could try to construct N(X)N(X) in this way by inducting up along nn and at each step using all the possible values of v=C(x 0,x 1)C(x n1,x n)v = C(x_0,x_1)\otimes\cdots\otimes C(x_{n-1},x_n).

I think this could only fail because of degeneracies: if some x i=x i+1x_i=x_{i+1} and a map C(x 0,x 1)C(x n1,x n)\ell \to C(x_0,x_1)\otimes\cdots\otimes C(x_{n-1},x_n) factors through the unit IC(x i,x i+1)I \to C(x_i,x_{i+1}), then it will already be present before we “glue on that cell”, whereas gluing on the cell would produce another copy of it that we don’t want. If the unit maps IC(x,x)I \to C(x,x) are all isomorphisms (as they must be for V=[0,]V=[0,\infty], for instance) then we should be able to avoid this by only gluing on cells corresponding to the nondegenerate sequences with x ix i+1x_i\neq x_{i+1} for all ii. Otherwise, we still do have to glue something on, but I suspect we can do something fancier to make it work. For instance, suppose that

  1. The unit maps IC(x,x)I \to C(x,x) are all monomorphisms. This is automatic if VV is semicartesian, since any map out of a terminal object is mono, but I still have the general case in mind as well. Conditions like this are also fairly common when we try to do homotopy theory involving enriched categories.

  2. The Day tensor product on sSet V opsSet^{V^{op}} satisfies the pushout-product axiom for monomorphisms. I don’t recall seeing this condition anywhere before, nor have I thought about what sort of condition it imposes on VV.

Then I suspect we can build a “degeneracies” monomorphism of presheaves C(x 0,,x n)V(,C(x 0,x 1)C(x n1,x n))\partial C(x_0,\dots,x_n) \to V(-,C(x_0,x_1)\otimes\cdots\otimes C(x_{n-1},x_n)) by repeated pushout-products of the monos V(,I)V(,C(x i,x i+1))V(-,I) \to V(-,C(x_i,x_{i+1})) for all ii such that x i=x i+1x_i=x_{i+1}. Taking the pushout-product of this with Δ nΔ n\partial \Delta^n \to \Delta^n we get a projective cofibration that we can glue on to make the degeneracies correct.

Here’s another thought that I may as well throw out there: it doesn’t feel quite right to me to regard N(X)N(X) as a simplicial presheaf on VV. I would rather regard it as a functor elXVel \sharp X \to V where elXel \sharp X is the category of elements of the codiscrete simplicial set on the objects of XX, in which case we can just set

N(X)(x 0,,x n)=C(x 0,x 1)C(x n1,x n). N'(X)(x_0,\dots,x_n) = C(x_0,x_1) \otimes\cdots\otimes C(x_{n-1},x_n).

The simplicial presheaf that Tom called N(X)N(X) is obtained by applying the Yoneda embedding and then left Kan extending along the discrete opfibration elXΔ opel \sharp X \to \Delta^{op}. But N(X)N'(X) has the advantage that the “unwound coend” formulation factors through it; it’s just obtained by composing with A:VAbA:V\to Ab rather than the Yoneda embedding, and then left Kan extending to Δ op\Delta^{op} (and then making a simplicial abelian group into a chain complex). This feels to me like an even more obviously “homotopy-invariant operation”, except that I don’t know what category N(X)N'(X) lives in (as XX varies) or what homotopy theory it might have.

Posted by: Mike Shulman on September 6, 2016 6:14 PM | Permalink | Reply to this

Re: Magnitude Homology

Here’s a cool feature of magnitude homology of metric spaces:

1st magnitude homology measures lack of convexity.

Here I’m talking about completely arbitrary metric spaces, not just finite ones.

To explain this, I’m going to use the Hepworth–Willerton approach to magnitude homology. Let XX be a metric space and >0\ell \gt 0 a real number. The first few of the associated chain groups are

MC 0,(X) =0, MC 1,(X) ={(x 0,x 1):x 0x 1,d(x 0,x 1)=}, MC 2,(X) ={(x 0,x 1,x 2):x 0x 1x 2,d(x 0,x 1)+d(x 1,x 2)=}. \begin{aligned} MC_{0, \ell}(X)& = 0, \\ MC_{1, \ell}(X)& = \mathbb{Z} \cdot \{ (x_0, x_1) \,:\, x_0 \neq x_1, \,\, d(x_0, x_1) = \ell \},\\ MC_{2, \ell}(X)& = \mathbb{Z} \cdot \{ (x_0, x_1, x_2) \,:\, x_0 \neq x_1 \neq x_2, \,\, d(x_0, x_1) + d(x_1, x_2) = \ell \}. \end{aligned}

Here I’m writing S\mathbb{Z}\cdot S for the free abelian group on a set SS. I’m also ignoring the case =0\ell = 0. (It’s trivial: MC n,0(X)MC_{n, 0}(X) is 00 unless n=0n = 0, in which case it’s X\mathbb{Z} \cdot X.) The differential

:MC 1,(X)MC 0,(X) \partial: MC_{1, \ell}(X) \to MC_{0, \ell}(X)

is zero. So, its kernel is MC 1,(X)MC_{1, \ell}(X). The differential

:MC 2,(X)MC 1,(X) \partial: MC_{2, \ell}(X) \to MC_{1, \ell}(X)

is defined on generators by

(x 0,x 1,x 2)={(x 0,x 2) if d(x 0,x 1)+d(x 1,x 2)=d(x 0,x 2), 0 otherwise. \partial(x_0, x_1, x_2) = \begin{cases} -(x_0, x_2) &\text{if }\,\, d(x_0, x_1) + d(x_1, x_2) = d(x_0, x_2),\\ 0 &\text{otherwise}. \end{cases}

Let’s say that a point x 1x_1 is between points x 0x_0 and x 2x_2 if d(x 0,x 1)+d(x 1,x 2)=d(x 0,x 2)d(x_0, x_1) + d(x_1, x_2) = d(x_0, x_2), and strictly between if also x 0x 1x 2x_0 \neq x_1 \neq x_2. Then the image of \partial is generated by the pairs (x 0,x 2)(x_0, x_2) such that there exists a point strictly between x 0x_0 and x 2x_2.

The 1st homology H 1,(X)H_{1, \ell}(X) is the quotient of the kernel just computed by the image just computed. In other words, H 1,(X)H_{1, \ell}(X) is the free abelian group on the set of pairs (x 0,x 1)(x_0, x_1) such that d(x 0,x 1)=d(x_0, x_1) = \ell and there is no point strictly between x 0x_0 and x 1x_1.

A metric space is Menger convex if for any pair of distinct points there exists a point strictly between them. Our calculation immediately implies:

Theorem  Let XX be a metric space. Then XX is Menger convex if and only if H 1,(X)=0H_{1, \ell}(X) = 0 for all 0\ell \geq 0.

Menger convexity looks like a rather weak condition, but it’s not. In fact, let XX be a metric space with the property that closed bounded subsets are compact. The following are equivalent:

  • XX is Menger convex.

  • XX is geodesic, i.e. for all x,yXx, y \in X, say distance DD apart, there is an isometry [0,D]X[0, D] \to X joining xx and yy.

(This appears as Theorem 2.6.2 of Athanase Papadopoulos’s 2005 book Metric spaces, convexity and nonpositive curvature, though I assume it’s much older than that.)

For instance:

Corollary   A closed set X nX \subseteq\mathbb{R}^n is convex if and only if H 1,(X)=0H_{1, \ell}(X) = 0 for all 0\ell \geq 0.

Generally, the more a space fails to be convex, the larger the groups H 1,(X)H_{1, \ell}(X) will tend to be. That’s because there will be more pairs of points with no point strictly between them, and these pairs are the generators of the first homology groups.

You could be a bit more subtle and ask what happens at different length scales. For instance, consider the metric space \mathbb{Z} with its usual metric. All the homology groups H *,()H_{\ast, \ell}(\mathbb{Z}) vanish unless \ell is an integer.

  • =0\ell = 0: we have H 1,0()=0H_{1, 0}(\mathbb{Z}) = 0 (as for any space).

  • =1\ell = 1: the abelian group H 1,1()H_{1, 1}(\mathbb{Z}) is generated by pairs of points distance 11 apart with nothing in between them. That’s all pairs of points distance 11 apart, and there are two such pairs for each integer (two, because they’re ordered pairs).

  • 2\ell \geq 2: a pair of points of \mathbb{Z} distance 2 or more apart always has something strictly in between them, so H 1,2()=H 1,3()==0H_{1, 2}(\mathbb{Z}) = H_{1, 3}(\mathbb{Z}) = \cdots = 0.

Actually, this metric space is a graph, so presumably everything I’ve just said follows from the Hepworth–Willerton paper.

The point is that although the metric space \mathbb{Z} fails to be Menger convex, it only fails because of the points distance 11 apart; for further-separated points it’s fine. And the intuition that \mathbb{Z} is “nearly but not quite Menger convex” is given precise expression by the fact that H 1,()=0H_{1, \ell}(\mathbb{Z}) = 0 for all but one value of \ell.
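
Incidentally, this sort of thing is easy to check mechanically. Here is a throwaway sketch (the function names are made up; it simply counts the generators described above, namely ordered pairs at distance \ell with nothing strictly between them) run on the five-point space \{0,1,2,3,4\} \subseteq \mathbb{Z}:

```python
# rank of H_{1,ell}: count ordered pairs (x, y) with x != y, d(x, y) = ell
# and no point strictly between x and y.
def h1_rank(points, d, ell):
    def strictly_between(x, y):
        return any(z != x and z != y and d(x, z) + d(z, y) == d(x, y)
                   for z in points)
    return sum(1 for x in points for y in points
               if x != y and d(x, y) == ell and not strictly_between(x, y))

points = range(5)                    # {0, 1, 2, 3, 4} with the usual metric
d = lambda x, y: abs(x - y)
print([h1_rank(points, d, ell) for ell in (1, 2, 3, 4)])
# [8, 0, 0, 0]: only the adjacent (ordered) pairs survive, as for Z above.
```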

Posted by: Tom Leinster on September 6, 2016 4:55 PM | Permalink | Reply to this

Re: Magnitude Homology

First off, let me add my thanks for writing this post. The other thread long since outstripped both my actual ability to follow it in real time, and any ambition I might have had to go back and make sense of it later.

1st magnitude homology measures lack of convexity.

Awesome! This sounds like exactly the kind of geometric information we would hope to see encoded in a homology theory for metric spaces.

Just before seeing this comment I was wondering if negative type could be characterized in terms of magnitude homology. Any thoughts there?

Incidentally, the hypothesis that no two points have a point strictly between them, or equivalently that the triangle inequality is always strict, has come up a couple times. I can’t remember precisely where, but I’ve seen a hypothesis similar to that somewhere before. (Maybe in Nik Weaver’s book Lipschitz Algebras?) Here’s one way of producing many examples of spaces with this property: given any metric space (X,d)(X,d) and α(0,1)\alpha \in (0,1), the triangle inequality is always strict in the metric space (X,d α)(X, d^\alpha). It may be that the condition that I’m dimly remembering is actually that the metric is the α\alpha power of a metric.

Posted by: Mark Meckes on September 6, 2016 6:18 PM | Permalink | Reply to this

Re: Magnitude Homology

Damn, I thought I dimly remembered that there was a name for metric spaces in which the triangle inequality is always strict, and that you (Mark) had once told me that name. I was trying to remember it a week or two ago, but it seems that if you ever knew it, you’ve forgotten too!

Posted by: Tom Leinster on September 6, 2016 7:54 PM | Permalink | Reply to this

Re: Magnitude Homology

The first homology of a metric space says something about the existence of geodesic paths between points. I haven’t got a full description of second homology, but it seems that it has something to do with uniqueness/multiplicity of geodesics.

Specifically, I claim that H 2,(X)=0H_{2, \ell}(X) = 0 for any convex subset XX of n\mathbb{R}^n and any 0\ell \geq 0. And I think the reason for this is to do with the fact that in n\mathbb{R}^n, there’s a unique shortest path between any two points.

The calculation is below, but before getting stuck in, let me point out that matters are very different for graphs. I tend to think of graphs as the metric spaces that are as unlike subspaces of Euclidean space as it’s possible to be. For instance, in any metric space we can ask how many midpoints exist between a given pair of points. (By a midpoint I mean a point whose distance to each of the two given points is half the overall distance.) In a subspace of n\mathbb{R}^n, a given pair of points has at most one midpoint, but in a graph there can be any number of them.

Anyway, Richard and Simon computed lots of examples of magnitude homology of graphs in their paper, and H 2H_2 very often isn’t zero. For instance, the 5-cycle C 5C_5 has

H 2,2(C 5)= 20,H 2,3(C 5)= 40,H 2,4(C 5)= 20 H_{2, 2}(C_5) = \mathbb{Z}^{20}, \quad H_{2, 3}(C_5) = \mathbb{Z}^{40}, \quad H_{2, 4}(C_5) = \mathbb{Z}^{20}

(and the rest are zero).

OK. Now I’ll prove my claim that when XX is a convex subset of n\mathbb{R}^n, the homology groups H 2,(X)H_{2, \ell}(X) are trivial for all 0\ell \geq 0.

A typical element of MC 2,(X)MC_{2, \ell}(X) is a linear combination

α= x,y,za xyz(x,y,z) \alpha = \sum_{x, y, z} a_{x y z} (x, y, z)

where the sum is over all points xyzx \neq y \neq z such that d(x,y)+d(y,z)=d(x, y) + d(y, z) = \ell, the coefficients a xyza_{x y z} are integers, and all but finitely many coefficients are zero. By the formula for \partial I just mentioned,

(α)= xz:d(x,z)=( y strictly between x and za xyz)(x,z). \partial(\alpha) = - \sum_{x \neq z \,:\, d(x, z) = \ell} \biggl( \sum_{y \,\,\text{ strictly between }\,\, x \,\,\text{ and } \,\, z} a_{x y z} \biggr) \,\, (x, z).

So, α\alpha is a cycle if and only if for all xx and zz such that d(x,z)=d(x, z) = \ell,

y strictly between x and za xyz=0. \sum_{y \,\,\text{ strictly between }\,\, x \,\,\text{ and } \,\, z} a_{x y z} = 0.

Suppose now that α\alpha is a cycle. To prove my claim, I have to show that it is a boundary.

The sum α=a xyz(x,y,z)\alpha = \sum a_{x y z} (x, y, z) splits into two parts: those for which yy is between xx and zz, and those for which it’s not. Those for which it’s not are boundaries, since by convexity we can choose some uu strictly between yy and zz, and then

(x,y,u,z)=(x,y,z). \partial(x, y, u, z) = (x, y, z).

So we can assume that a xyz=0a_{x y z} = 0 unless xx, yy and zz are collinear. Now, fixing xx and zz such that d(x,z)=d(x, z) = \ell, it’s enough to prove that

α xz:= y strictly between x and za xyz(x,y,z) \alpha_{x z} := \sum_{y \,\,\text{ strictly between }\,\, x \,\,\text{ and } \,\, z} a_{x y z} (x, y, z)

is a boundary. But we know that

y strictly between x and za xyz=0, \sum_{y \,\,\text{ strictly between }\,\, x \,\,\text{ and } \,\, z} a_{x y z} = 0,

and from this it follows that α xz\alpha_{x z} can be expressed as a \mathbb{Z}-linear combination of expressions of the form

(x,y 1,z)(x,y 2,z) (x, y_1, z) - (x, y_2, z)

where y 1y_1 and y 2y_2 are both strictly between xx and zz.

Now we use something special about n\mathbb{R}^n — something closely related to the uniqueness of geodesics. Whenever we have points y 1y_1 and y 2y_2 both between xx and zz, they must all lie on a line. That is, one of the following two possibilities must occur:

  • d(x,y 1)+d(y 1,y 2)+d(y 2,z)=d(x,z)d(x, y_1) + d(y_1, y_2) + d(y_2, z) = d(x, z), or

  • d(x,y 2)+d(y 2,y 1)+d(y 1,z)=d(x,z)d(x, y_2) + d(y_2, y_1) + d(y_1, z) = d(x, z).

That’s not true for a general metric space. For instance, consider the geodesic metric on the sphere. My home (y 1y_1) is between the north pole (xx) and the south pole (zz), and your home (y 2y_2) is too, but my home probably isn’t between your home and the north pole or vice versa.

Back to the calculation. It remains to prove that when y 1y_1 and y 2y_2 are points strictly between xx and zz, the element

(x,y 1,z)(x,y 2,z) (x, y_1, z) - (x, y_2, z)

of MC 2,(X)MC_{2, \ell}(X) is a boundary. If y 1=y 2y_1 = y_2 that’s immediate. If not, we have four distinct points on a line, WLOG in the order x,y 1,y 2,zx, y_1, y_2, z. And then (x,y 1,y 2,z)MC 3,(X)-(x, y_1, y_2, z) \in MC_{3, \ell}(X), with

((x,y 1,y 2,z))=(x,y 1,z)(x,y 2,z), \partial(-(x, y_1, y_2, z)) = (x, y_1, z) - (x, y_2, z),

as required.

Posted by: Tom Leinster on September 6, 2016 8:57 PM | Permalink | Reply to this

Re: Magnitude Homology

That makes sense. The statement that (x,y,z)(x,y,z) is a boundary if yy is not between xx and zz is also using something special about n\mathbb{R}^n, right? Something like that if yy is between xx and zz, and zz is between yy and ww, then yy and zz are between xx and ww?

Now I’m tempted to conjecture something about H 2,(S 1)H_{2,\ell}(S^1). Like maybe that it’s zero unless =π\ell=\pi, in which case it’s generated by something to do with pairs of antipodal points. But that could be way off…

Posted by: Mike Shulman on September 6, 2016 10:15 PM | Permalink | Reply to this

Re: Magnitude Homology

Yes, I agree that this step also uses something special about n\mathbb{R}^n. I only spotted that after posting.

I’ve run into a few similar “betweenness” properties of metric spaces. I think at least one of them might even have a name. But I’m not at all on top of them.

I was thinking that it would be good to compute H 2,(S 1)H_{2, \ell}(S^1) with its geodesic metric; is that what you had in mind? Oh, I guess it must be, because if it was the Euclidean metric then the differentials would all be zero.

Posted by: Tom Leinster on September 6, 2016 10:26 PM | Permalink | Reply to this

Re: Magnitude Homology

Tom, thinking of the antipodal points on spheres case, did you miss out a word above:

In a subspace of n\mathbb{R}^n, a given pair of points has at most one midpoint, but in a graph there can be any number of them?

Posted by: David Corfield on September 7, 2016 8:21 AM | Permalink | Reply to this

Re: Magnitude Homology

Sorry for the noise, I’m thinking of the geodesic metric and you weren’t.

Posted by: David Corfield on September 7, 2016 8:39 AM | Permalink | Reply to this

Re: Magnitude Homology

Mike wrote:

Now I’m tempted to conjecture something about H 2,(S 1)H_{2, \ell}(S^1). Like maybe that it’s zero unless =π\ell=\pi, in which case it’s generated by something to do with pairs of antipodal points.

Well, I’ve only done this calculation in my head, and I really shouldn’t be trusted with calculating homology in my head, but for what it’s worth I agree. To be precise: if S 1S^1 denotes the circle with geodesic metric and circumference 2π2\pi, then I believe that H 2,(S 1)={S 1 if =π, 0 otherwise. H_{2, \ell}(S^1) = \begin{cases} \mathbb{Z}\cdot S^1 & \text{if }\,\, \ell = \pi,\\ 0 &\text{otherwise}. \end{cases} Here S 1\mathbb{Z}\cdot S^1 is the free abelian group on the set S 1S^1 of points of the circle. Probably a better way to say it (as you did) is that it’s free on the set of antipodal pairs of points, but they’re ordered pairs so it comes to the same thing.

Posted by: Tom Leinster on September 7, 2016 6:11 PM | Permalink | Reply to this

Re: Magnitude Homology

Interesting; so H 2(S 1)H_2(S^1) is (if we’re right) “counting the number of pairs of points that are connected by two distinct geodesics”. Or perhaps “the number of pairs of distinct geodesics connecting the same two points”? Or something like that.

Interesting that here the information is encoded in an uncountably generated free abelian group. I guess this is an advantage of algebraic invariants like homology over numerical invariants like magnitude; it’s not clear how to decategorify that to a number or even a power series. In particular, the magnitude of an infinite metric space (defined in any of the equivalent ways) can’t be obtained from a naive decategorification of its magnitude homology the same way the magnitude of a finite metric space is (we hope). I suppose we might still hope that there is some subtler way to recover the magnitude from the homology, though.

Posted by: Mike Shulman on September 7, 2016 11:00 PM | Permalink | Reply to this

Re: Magnitude Homology

uncountably generated free abelian group.

This is indeed puzzling. I can think of three possible ways that I’d become less puzzled, and that decategorification to magnitude would become more plausible:

  • If the group was equipped with some structure. In this case, the group is freely generated by the points of the space, so presumably it could be given some metric/topological structure. Then it could be “counted” in some more sophisticated way than taking its rank in the ordinary sense.

  • If by using different coefficients than the “delta functions” δ \delta_\ell, we obtained finite-rank groups. As you previously suggested, it’s also worth thinking about δ J\delta_J for nontrivial real intervals JJ, but there are other possibilities still. I don’t know.

  • If the calculation of H 2,π(S 1)H_{2,\pi}(S^1) turned out to be wrong :-)

Posted by: Tom Leinster on September 7, 2016 11:23 PM | Permalink | Reply to this

Re: Magnitude Homology

What is the magnitude (function) of S 1S^1 with its geodesic metric, so we know what we’re aiming for?

Here’s one possible way to equip magnitude homology groups with extra structure. I suspect we can define magnitude homology for enriched indexed categories over a suitably “semicartesian” indexed monoidal category. And I think we should be able to make [0,][0,\infty] into a TopTop-indexed monoidal category, whose (small) enriched indexed categories are something like topological spaces equipped with a metric structure d:X×X[0,]d:X\times X \to [0,\infty] that is continuous (so, in particular, including any metric space equipped with its metric topology). Then maybe the magnitude homology could end up somewhere like topological abelian groups. (This is very vague, but I don’t have time to pursue it further right now.)

Posted by: Mike Shulman on September 8, 2016 5:13 PM | Permalink | Reply to this

Re: Magnitude Homology

What is the magnitude (function) of S 1S^1 with its geodesic metric, so we know what we’re aiming for?

If S 1S^1 means the circle with circumference 2π2\pi then we know

|tS 1|=tπ1e tπ. |t S^1| = \frac{t\pi}{1-e^{-t\pi}}.
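
For what it’s worth, this drops out of the usual formula for the magnitude of a homogeneous metric space (total measure divided by \int e^{-d(x_0,-)}), assuming I’m remembering that formula correctly:

|t S^1| \;=\; \frac{2\pi t}{\int_0^{2\pi t} e^{-d(x_0, y)}\, dy} \;=\; \frac{2\pi t}{2\int_0^{\pi t} e^{-s}\, ds} \;=\; \frac{t\pi}{1-e^{-t\pi}}.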

Posted by: Simon Willerton on September 8, 2016 6:33 PM | Permalink | Reply to this

Re: Magnitude Homology

Wow! I think this is our first indication that the magnitude homology of an infinite metric space carries interesting information.

What about H 2H_2? (-:

Posted by: Mike Shulman on September 6, 2016 9:01 PM | Permalink | Reply to this

Re: Magnitude Homology

What about H 2H_2? (-:

I see that you’re ahead of me! (4 minutes ahead of me, to be precise.)

Posted by: Mike Shulman on September 6, 2016 10:00 PM | Permalink | Reply to this

Re: Magnitude Homology

You were looking at the wrong table in our paper when you wrote

H 2,2(C 5)= 20,H 2,3(C 5)= 40,H 2,4(C 5)= 20 H_{2, 2}(C_5) = \mathbb{Z}^{20}, \quad H_{2, 3}(C_5) = \mathbb{Z}^{40}, \quad H_{2, 4}(C_5) = \mathbb{Z}^{20}

They are actually the chain groups, not the homology groups. The actual non-trivial ones are

H 2,2(C 5)= 10,H 2,3(C 5)= 10. H_{2, 2}(C_5) = \mathbb{Z}^{10}, \quad H_{2, 3}(C_5) = \mathbb{Z}^{10}.
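
These numbers are easy to reproduce by machine. Here is a rough sketch (the function names are invented) of the graph chain complex described earlier in the thread; it works over the rationals by row reduction, so it only sees ranks, not torsion.

```python
# Ranks of the magnitude homology groups H_{n,ell} of a finite graph,
# computed over Q by row reduction (so torsion is invisible).
from fractions import Fraction
from itertools import product

def c5_distances():
    # shortest-path distances in the 5-cycle C_5
    return {(i, j): min(abs(i - j), 5 - abs(i - j))
            for i in range(5) for j in range(5)}

def generators(verts, d, n, ell):
    # tuples (x_0, ..., x_n), consecutive entries distinct, total length ell
    return [t for t in product(verts, repeat=n + 1)
            if all(t[i] != t[i + 1] for i in range(n))
            and sum(d[t[i], t[i + 1]] for i in range(n)) == ell]

def boundary(src, tgt, d):
    # differential: drop an interior point whenever that preserves the length
    idx = {g: k for k, g in enumerate(tgt)}
    mat = [[0] * len(src) for _ in tgt]
    for col, g in enumerate(src):
        for i in range(1, len(g) - 1):
            if d[g[i - 1], g[i]] + d[g[i], g[i + 1]] == d[g[i - 1], g[i + 1]]:
                mat[idx[g[:i] + g[i + 1:]]][col] += (-1) ** i
    return mat

def rank(mat):
    # Gaussian elimination over Q
    mat = [[Fraction(x) for x in row] for row in mat]
    r = 0
    for c in range(len(mat[0]) if mat else 0):
        p = next((i for i in range(r, len(mat)) if mat[i][c]), None)
        if p is None:
            continue
        mat[r], mat[p] = mat[p], mat[r]
        for i in range(len(mat)):
            if i != r and mat[i][c]:
                f = mat[i][c] / mat[r][c]
                mat[i] = [a - f * b for a, b in zip(mat[i], mat[r])]
        r += 1
    return r

def homology_rank(verts, d, n, ell):
    cn = generators(verts, d, n, ell)
    below = generators(verts, d, n - 1, ell)
    above = generators(verts, d, n + 1, ell)
    return len(cn) - rank(boundary(cn, below, d)) - rank(boundary(above, cn, d))

d, V = c5_distances(), range(5)
print([homology_rank(V, d, 2, ell) for ell in (2, 3, 4)])  # expect [10, 10, 0]
```

The chain-level counts 20, 40, 20 also come straight out of len(generators(V, d, 2, ell)), matching the table Tom was reading.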

I believe that H 2,2(G)H_{2,2}(G) for a graph GG is trivial if and only if the graph is discrete, i.e. has no edges.

On the one hand if the graph is discrete then H 0,0(G)H_{0,0}(G) is the only non-trivial homology group.

On the other hand, Owen Biesel told us that the coefficient of q 2q^2 in graph magnitude is 2E+6Δ+2Λ2 E + 6\Delta + 2\Lambda', whatever this is, but in particular it is at least 2E2E, which is twice the number of edges of the graph.

The coefficient of q 2q^2 is rank(H 2,2(G))rank(H 1,2(G))rank (H_{2,2}(G))-rank (H_{1,2}(G)), thus rank(H 2,2(G))2Erank (H_{2,2}(G))\ge 2E. So H 2,2(G)H_{2,2}(G) is non-trivial if the graph has any edges.

Posted by: Simon Willerton on September 6, 2016 11:02 PM | Permalink | Reply to this

Re: Magnitude Homology

Nice! And thanks for the correction.

Posted by: Tom Leinster on September 7, 2016 5:51 PM | Permalink | Reply to this

Re: Magnitude Homology

Backing all the way up to the Euler characteristic / magnitude of a finite category: does this construction shed any light on the meaning of weightings and coweightings (which don’t appear in this post at all)?

Posted by: Mark Meckes on September 7, 2016 3:31 PM | Permalink | Reply to this

Re: Magnitude Homology

Interesting question. I have no idea!

The magnitude of a category is the sum of the weights. We’ve categorified magnitude to magnitude homology. So we can dream about categorifying the weights, too. Maybe this produces some sort of decomposition of the magnitude homology. Who knows?

Posted by: Tom Leinster on September 7, 2016 6:06 PM | Permalink | Reply to this

Re: Magnitude Homology

A related question: As far as I can tell from this post, you know that magnitude homology categorifies Euler characteristic for categories satisfying a finiteness condition. But Euler characteristic / magnitude is also defined (via (co)weightings) for many categories which don’t satisfy that finiteness condition. Is it known whether magnitude homology categorifies Euler characteristic for these more general categories?

In particular, does magnitude homology categorify Euler characteristic for any categories XX such that the topological Euler characteristic of the nerve of XX doesn’t exist?

Posted by: Mark Meckes on September 7, 2016 9:32 PM | Permalink | Reply to this

Re: Magnitude Homology

In particular, does magnitude homology categorify Euler characteristic for any categories XX such that the topological Euler characteristic of the nerve of XX doesn’t exist?

We could start by looking at the case of finite groups (seen as one-object categories). The Euler characteristic of such a category is 1/o(G)1/o(G), where o(G)o(G) is the order of the group. Of course, that’s not an integer.

I think I’m right in saying that a nontrivial finite group always has infinitely many nontrivial homology groups. So, that means that under the usual definition, the topological Euler characteristic is undefined. (People have proposed extensions to the usual definition that sometimes give non-integer results.)

Maybe it’s true that in some formal sense, n0(1) nrank(H n(G,))=1/o(G)\sum_{n \geq 0} (-1)^n rank(H_n(G, \mathbb{Z})) = 1/o(G) for a finite group. (Here H nH_n means the group homology, or the category homology, or the magnitude homology — they’re all the same.) Perhaps someone who knows something about group homology can comment?

Let me try out an example. According to p.168 of Weibel’s book, the homology groups of the cyclic group C mC_m over \mathbb{Z}, for n=0,1,2,n = 0, 1, 2, \ldots, are

,/m,0,/m,0,/m,0,. \mathbb{Z}, \, \mathbb{Z}/m\mathbb{Z},\, 0,\, \mathbb{Z}/m\mathbb{Z},\, 0,\, \mathbb{Z}/m\mathbb{Z},\, 0,\, \ldots.

Well, however we formally compute the alternating sum of the ranks, that’s clearly not going to give us 1/m1/m: the ranks are 1,1,0,1,0,1,0,1, 1, 0, 1, 0, 1, 0, \ldots, which doesn’t depend on mm. What’s going on?

(That last question is about something much older than magnitude homology. People have said for 50+ years that the Euler characteristic of an mm-element group deserves to be 1/m1/m. I know a topological justification for that thought, but I suppose I’d imagined there was also a homological justification.)

Posted by: Tom Leinster on September 7, 2016 10:06 PM | Permalink | Reply to this

Re: Magnitude Homology

For spaces that aren’t finite cell complexes, it’s better to define the Euler characteristic as the alternating sum of the number of cells of each dimension, rather than as the alternating sum of the ranks of the homology groups. (These agree for finite cell complexes.) As an example, the classifying space of a finite group GG has (|G|1) n(|G|-1)^n cells of dimension nn, with one of the natural cell structures. And the usual resummation techniques tell us that the alternating sum of these numbers is

(1)11+(|G|1)=1|G|. \frac{1}{1 + (|G|-1)} = \frac{1}{|G|} .
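
For anyone who hasn’t met this kind of resummation: the series diverges for any nontrivial group, and the value assigned to it is the one obtained by summing the geometric series where it does converge and then evaluating the resulting rational function at x = |G| - 1:

\sum_{n \geq 0} (-1)^n x^n \;=\; \frac{1}{1+x} \qquad (|x| \lt 1), \qquad \text{then set } x = |G| - 1.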

This convention also matches the Euler-Schanuel characteristic, which is 1-1 for the real line, not +1+1, as you would get if you used homology.

Posted by: Dan Christensen on September 8, 2016 1:04 AM | Permalink | Reply to this

Re: Magnitude Homology

I’ll buy this point of view on Euler characteristic. Unfortunately, with respect to the meaning of weightings, it leaves me where I started — they appear in a definition that makes a nice theorem turn out to be true, but I still don’t really understand why.

Posted by: Mark Meckes on September 9, 2016 3:32 PM | Permalink | Reply to this

Re: Magnitude Homology

Nice to hear from you, Dan! I know that trick with that alternating sum to get 11+(|G|1)=1|G|, \frac{1}{1 + (|G|-1)} = \frac{1}{|G|}, and I do believe that the reason why I know it is that you taught me it! Probably in about 2006–7.

It’s a very satisfying observation. However, for these purposes I’d like to know whether there’s some way of recovering the order of a finite group from its homology groups. Do you know one?

Posted by: Tom Leinster on September 8, 2016 1:18 AM | Permalink | Reply to this

Re: Magnitude Homology

Right after posting, I realized I should have added “As Tom knows, …” to the start of my message. :-)

I don't know of a way to determine the cardinality of a finite group from its group homology. If there is a way, I doubt it fits nicely into the picture of Euler characteristic. For example, it can't depend only on the ranks of the homology groups, as the example of a cyclic group shows. For abelian groups, H_1(G) \cong G, so one can trivially recover the cardinality of G, but this doesn't seem like the right kind of answer. What makes things more puzzling is that if you allow coefficients other than the implicit choice of \mathbb{Z}, the alternating sum of the ranks of the homology groups depends on this choice.

Another point I like to make is that the alternating sum of the ranks of the homology groups can be obtained formally from the alternating sum of the numbers of cells of each dimension by breaking each term in the latter sum into a sum of three natural numbers and reparenthesizing the result to take into account attaching maps.

Anyways, my point of view is that the alternating sum of the number of cells is the fundamental concept, and that under nice circumstances it can be computed using homology.

Posted by: Dan Christensen on September 8, 2016 2:06 AM | Permalink | Reply to this

Re: Magnitude Homology

Thanks. All I can say is that I’m puzzled.

Taking the rank of a non-free abelian group already makes me uneasy. For that reason, I did wonder about the homology of a cyclic group over a field k: but that's just k, 0, 0, 0, \ldots (assuming char(k) = 0, so that the torsion in the integral homology contributes nothing). So you definitely can't recover the order of a finite group from its homology over a field of characteristic zero.

Another point I like to make is that the alternating sum of the ranks of the homology groups can be obtained formally from the alternating sum of the numbers of cells of each dimension by breaking each term in the latter sum into a sum of three natural numbers and reparenthesizing the result to take into account attaching maps.

That sounds interesting… can you explain or give a reference?

Posted by: Tom Leinster on September 8, 2016 3:28 AM | Permalink | Reply to this

Re: Magnitude Homology

Another point I like to make is that the alternating sum of the ranks of the homology groups can be obtained formally from the alternating sum of the numbers of cells of each dimension by breaking each term in the latter sum into a sum of three natural numbers and reparenthesizing the result to take into account attaching maps.

That sounds interesting… can you explain or give a reference?

Over \mathbb{Z}, every chain complex splits as a sum of chain complexes which either are concentrated in one degree (with zero differentials) or in two consecutive degrees (with the identity map as the only non-zero differential). So each chain group splits as a sum of three groups: one of the first type, which is isomorphic to the homology; one which is the domain of an identity differential; and one which is the codomain of an identity differential. The ranks of the last two cancel in pairs occurring in consecutive degrees, so when reparenthesized in this way, the alternating sum using the cell counts is converted to the alternating sum using the ranks of the homology groups.

Posted by: Dan Christensen on September 8, 2016 4:22 PM | Permalink | Reply to this

Re: Magnitude Homology

Oh really? I didn’t know that.

If I understand correctly, this construction tells us that the nnth group A nA_n in a chain complex AA of abelian groups satisfies

A nH n(A)im(d:A n+1A n)im(d:A nA n1). A_n \cong H_n(A) \oplus im(d: A_{n+1} \to A_n) \oplus im(d: A_n \to A_{n-1}).

Why is that true? I can see that something like it is true, because A_n has subgroups

0 \subseteq im(d: A_{n + 1} \to A_n) \subseteq ker(d: A_n \to A_{n - 1}) \subseteq A_n

whose successive quotients are im(d:A n+1A n)im(d: A_{n + 1} \to A_n),   H n(A)H_n(A), and im(d:A nA n1)im(d: A_n \to A_{n - 1}). But why is it a direct sum when we’re over \mathbb{Z}?

Posted by: Tom Leinster on September 10, 2016 4:43 PM | Permalink | Reply to this

Re: Magnitude Homology

Over \mathbb{Z}, every chain complex splits as a sum of chain complexes which either are concentrated in one degree (with zero differentials) or in two consecutive degrees (with the identity map as the only non-zero differential).

Oh really? I didn’t know that.

That’s probably because it’s not true! I was amalgamating two facts in my head when I wrote that. The first fact is that what I wrote is true over a field. The second fact is that what I wrote is true up to quasi-isomorphism over a PID.

Nevertheless, my statement about reparenthesizing is correct. The argument doesn't in fact need the splitting, but in my confusion I thought that mentioning it would make it easier to understand. All we really need is the chain of subgroups you gave

0 \subseteq B_n \subseteq Z_n \subseteq A_n

(where I’m writing B nB_n and Z nZ_n for the boundaries and cycles), from which it follows that

rank(A_n) = rank(B_n) + rank(H_n) + rank(B_{n-1})

using that the two non-trivial quotients are H nH_n and B n1B_{n-1}. Then the parts involving rankB n\rank B_n cancel when reparenthesized.
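
As a finite sanity check of this bookkeeping, here is a toy chain complex (a filled triangle; ranks are computed over \mathbb{Q}, which is all that free ranks see): the identity rank(A_n) = rank(B_n) + rank(H_n) + rank(B_{n-1}) holds in every degree, and the alternating sum of cell counts equals the alternating sum of homology ranks.

```python
# Toy chain complex 0 -> Z --d2--> Z^3 --d1--> Z^3 -> 0 (a filled triangle).
import numpy as np
from numpy.linalg import matrix_rank

d1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]])      # edges -> vertices
d2 = np.array([[1], [-1], [1]])    # face  -> edges
assert not (d1 @ d2).any()         # d1 d2 = 0, so this really is a chain complex

dims = {0: 3, 1: 3, 2: 1}                                      # rank A_n (cell counts)
rk_B = {-1: 0, 0: matrix_rank(d1), 1: matrix_rank(d2), 2: 0}   # ranks of the B_n
rk_Z = {0: 3, 1: 3 - matrix_rank(d1), 2: 1 - matrix_rank(d2)}  # ranks of the Z_n
rk_H = {n: rk_Z[n] - rk_B[n] for n in dims}                    # ranks of homology

for n in dims:
    assert dims[n] == rk_B[n] + rk_H[n] + rk_B[n - 1]          # the rank identity

print(sum((-1) ** n * dims[n] for n in dims),   # alternating sum of cell counts: 1
      sum((-1) ** n * rk_H[n] for n in dims))   # alternating sum of homology ranks: 1
```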

Posted by: Dan Christensen on September 10, 2016 9:14 PM | Permalink | Reply to this

Re: Magnitude Homology

Thanks for clearing that up. A counterexample would be

02×0, \cdots \to 0 \to \mathbb{Z} \stackrel{2\times-}{\longrightarrow} \mathbb{Z} \to 0 \to \cdots,

which can’t be decomposed in the way described. (If it could, one of the summands of the codomain copy of \mathbb{Z} would have to be the homology /2\mathbb{Z}/2\mathbb{Z}, which is impossible.)

But anyway, I agree that all we actually need here is the statement about ranks.

Posted by: Tom Leinster on September 10, 2016 9:29 PM | Permalink | Reply to this

Re: Magnitude Homology

For spaces that aren't finite cell complexes, it's better to define the Euler characteristic as the alternating sum of the number of cells of each dimension, rather than as the alternating sum of the ranks of the homology groups.

That seems to me a curious opinion. Is that even homotopy-invariant?

Posted by: Mike Shulman on September 8, 2016 4:19 AM | Permalink | Reply to this

Re: Magnitude Homology

For spaces that aren’t finite cell complexes, it’s better to define the Euler characteristic as the alternating sum of the number of cells of each dimension, rather than as the alternating sum of the ranks of the homology groups.

That seems to me a curious opinion. Is that even homotopy-invariant?

No, it’s not homotopy invariant, since a point gets Euler characteristic +1 while the real line gets Euler characteristic -1.

It’s a bit hard to briefly say why this is a reasonable opinion, so I’ll just make a few remarks.

1) It matches Schanuel’s point of view.

2) While not homotopy invariant, it should be an invariant of some notion of “proper homotopy”. Note that \mathbb{R} is not proper homotopy equivalent to a point. Moreover, the compactly supported cohomology of \mathbb{R} does have Euler characteristic 1-1 instead of +1+1.

3) There are many ad hoc computations for which it gives the right answer, when homology doesn’t.

4) For certain infinite groups, one can compute a “renormalized cardinality” of the group based on the filtration of the group by word length with respect to a chosen generating set, and this can be finite. Moreover, one can also compute a renormalized Euler characteristic in some cases, and recover the result that you get the reciprocal of the renormalized cardinality. In these calculations, both sides depend on choices: the renormalized cardinality depends on a choice of generating set, and the renormalized Euler characteristic depends on a choice of cell structure. So if one is hoping for a general theorem relating these, it’s a feature (not a bug) that they both depend on choices. (Moreover, a choice of generators for π 1\pi_1 of a space can be determined from a choice of cell structure, so these choices are related.)

Posted by: Dan Christensen on September 8, 2016 4:34 PM | Permalink | Reply to this

Re: Magnitude Homology

I was just browsing the latest issue of The Notices of the AMS (September 2016) and found an article Counting in Groups: Fine Asymptotic Geometry, by Moon Duchin, which discusses both the use of growth functions in relation to Euler characteristic (similar to Schanuel's setup) and the use of growth functions in groups (which I mentioned as a way to define a finite cardinality for some infinite groups). The article gives the growth function for the free non-abelian group F_2 on two generators as

\frac{1+x}{1-3x}

which leads to a renormalized cardinality of 1-1 (evaluating at x=1x=1), which is the reciprocal of the Euler characteristic of the wedge of two circles (BF 2BF_2). (This is an example I had worked out before, and generalizes to a wedge of nn circles.)
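
Here is a quick sympy check of that computation (illustrative only; it uses nothing beyond the formula above and the standard count of reduced words in F_2): the Taylor coefficients of (1+x)/(1-3x) are 1, 4, 12, 36, \ldots, and evaluating at x = 1 gives -1.

```python
import sympy as sp

x = sp.symbols('x')
growth = (1 + x) / (1 - 3 * x)                # the growth function quoted above
poly = sp.series(growth, x, 0, 6).removeO()
print(sp.Poly(poly, x).all_coeffs()[::-1])    # [1, 4, 12, 36, 108, 324]: 1, then 4*3^(n-1)
print(growth.subs(x, 1))                      # -1, the renormalized cardinality
```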

Posted by: Dan Christensen on September 10, 2016 9:46 PM | Permalink | Reply to this

Re: Magnitude Homology

the real line gets Euler characteristic -1.

How does that work? The only cell complex presentation of the real line that I can think of has infinitely many 0-cells and 1-cells.

The only really categorically motivated notion of “Euler characteristic” that I know of is the trace of an identity map. For finitely generated graded groups, finitely generated chain complexes, and finite spectra (hence finite cell complexes, by mapping them into any of these categories), this happens to be computed by an alternating sum of ranks. (And in the first two cases, this is because we choose the symmetry isomorphism of the tensor product to involve minus signs sometimes, which we do because we want it to match what happens automatically in the third case.) But absent some categorical definition that specializes to it, just writing down an alternating sum of ranks doesn’t seem to me like a very sensible thing to do, especially if you have to play games with divergent series to even make sense of it.
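
A quick gloss on where those signs come from (a standard computation, sketched here for concreteness): for a finite-dimensional graded vector space V, the categorical trace of the identity is the composite of coevaluation, the Koszul symmetry v \otimes w \mapsto (-1)^{|v||w|} w \otimes v, and evaluation, and unwinding it degree by degree gives

tr(id_V) = \sum_n (-1)^{n \cdot n} \dim V_n = \sum_n (-1)^n \dim V_n

since n^2 and n have the same parity; that is exactly the alternating sum of ranks.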

Ad hoc computations and renormalization just suggest to me that there should be a general theory that hasn’t been found yet, and I don’t know anything about “Schanuel’s point of view”, so I can’t comment on that. But your second remark suggests that maybe “proper homotopy theory” is the place to look for such a general theory. I don’t know much of anything about proper homotopy theory, but is there any “proper homotopy category” in which your Euler characteristics can be computed as traces of identity maps? In particular, does the compactly supported (co)homology always give the answer you want?

Posted by: Mike Shulman on September 8, 2016 7:11 PM | Permalink | Reply to this

Re: Magnitude Homology

the real line gets Euler characteristic -1.

How does that work?

Didn't you use to follow TWFs? John spoke about it a fair number of times, such as in TWF184.

It’s discussed in James Propp’s Exponentiation and Euler measure. If two open intervals and a point can be composed to make an open interval, the ‘cardinality’ of the open interval must be 1-1.

Posted by: David Corfield on September 8, 2016 8:14 PM | Permalink | Reply to this

Re: Magnitude Homology

the real line gets Euler characteristic -1.

How does that work? The only cell complex presentation of the real line that I can think of has infinitely many 0-cells and 1-cells.

The simplest answer is that the real line is a 1-cell.

In Schanuel’s set-up, you are allowed to decompose spaces in very general ways as disjoint unions of subspaces homeomorphic to cells, and the miracle is that there are classes of spaces for which this works in a consistent way, e.g. polyhedral spaces. That part forms a beautiful story. It’s the attempt to generalize it to spaces for which homotopy cardinality is also defined (such as BGBG) that leads to difficulties.

Ad hoc computations and renormalization just suggest to me that there should be a general theory that hasn’t been found yet […]. But your second remark suggests that maybe “proper homotopy theory” is the place to look for such a general theory.

I agree completely, but don’t know the solution. It would be great if the theory of categorical traces could provide a solution.

Posted by: Dan Christensen on September 8, 2016 8:58 PM | Permalink | Reply to this

Re: Magnitude Homology

Yes, now that you mention it, I’ve probably seen this before. A bit of poking around on the Internet leads me to this question and answers, which suggests that perhaps we should be looking at Borel-Moore homology, which I guess is dual in some sense to compactly supported cohomology.

Rather than “the right definition of Euler characteristic”, wouldn’t it be more reasonable to regard this as a different thing from the homotopical Euler characteristic, though it unfortunately has been given the same name? It seems perfectly reasonable to me for the Euler characteristic of any contractible space to be 1. Or, put differently, we can talk about the Euler characteristic of a topological space, or we can talk about the Euler characteristic of an \infty-groupoid, and when a topological space presents an \infty-groupoid there’s no reason to expect the two to have the same Euler characteristic.

Posted by: Mike Shulman on September 8, 2016 11:48 PM | Permalink | Reply to this

Re: Magnitude Homology

Has anyone ever defined an Euler characteristic of a topos?

Posted by: Mike Shulman on September 8, 2016 11:57 PM | Permalink | Reply to this

Re: Magnitude Homology

James Propp used the term “Euler measure” for Euler characteristic in contexts where it’s a finitely additive measure. That’s the context associated with Schanuel’s name, that’s what happens for compactly supported (co?)homology, and that’s what gets used in the Euler calculus (in which one integrates with respect to the finitely additive measure χ\chi). Propp wanted to distinguish it from the homotopy-invariant Euler characteristic. But as far as I know, his proposed terminology didn’t catch on.

Or, put differently, we can talk about the Euler characteristic of a topological space, or we can talk about the Euler characteristic of an \infty-groupoid, and when a topological space presents an \infty-groupoid there’s no reason to expect the two to have the same Euler characteristic.

I like that way of thinking about it.

Schanuel proposed that we regard Euler characteristic as analogous to cardinality. We can also observe that for many mathematical objects there’s a standard cardinality-like invariant. It’s not surprising if the same piece of terminology gets used for the cardinality-like invariant of different types of object (e.g. in this case, “Euler characteristic” for both topological space and \infty-groupoid). The same is true of terms like “isomorphism”.

It’s also a completely familiar phenomenon that in order to be clear about what an overloaded piece of terminology means, you have to say what structure you’re understanding your objects to carry (e.g. is that an isomorphism of Banach spaces or just their underlying vector spaces?). The meanings of terms like “isomorphism”, “map” and “Euler characteristic” depend on the amount of structure present.

Has anyone ever defined an Euler characteristic of a topos?

Perhaps you did what I just did and typed “Euler characteristic of a topos” into Google, in quotation marks, and got no results. I got none from Duckduckgo either.

Nevertheless, you can talk about the (co)homology of a topos \mathcal{E}, with coefficients in an abelian group object AAb()A \in Ab(\mathcal{E}). So you’d think it would be natural to talk about Euler characteristic.

Indeed, under mild conditions on \mathcal{E}, there’s a free abelian group ZZ on the terminal object of \mathcal{E}. (E.g. if =Set\mathcal{E} = Set then Z=Z = \mathbb{Z}.) I’d imagine defining

χ()= n0(1) nrank(H n(;Z)) \chi(\mathcal{E}) = \sum_{n \geq 0} (-1)^n rank(H^n(\mathcal{E}; Z))

subject to all the usual issues about whether this sum exists. E.g. if XX is a finite enough topological space then χ(Sh(X))=χ(X)\chi(Sh(X)) = \chi(X), and if XX is a finite enough category then χ([X op,Set])=χ(X)\chi([X^{op}, Set]) = \chi(X).

I think you know all this already, though!

Posted by: Tom Leinster on September 9, 2016 12:53 AM | Permalink | Reply to this

Re: Magnitude Homology

Rather than “the right definition of Euler characteristic”, wouldn’t it be more reasonable to regard this as a different thing from the homotopical Euler characteristic, though it unfortunately has been given the same name?

I agree completely. I usually call it “Euler-Schanuel characteristic” if I want to distinguish it.

I only mentioned it because the question arose of why the homotopical Euler characteristic wasn’t reproducing the 1/|G|1/|G| result that was expected. And my point is that if you want this result (or various other nice properties), you are better off switching to the Euler-Schanuel characteristic. Then you can separately study the question of when the Euler-Schanuel characteristic agrees with the homotopical Euler characteristic.

Another point to make is that there are many homology theories on spaces, and the usual Euler characteristic chooses one of them to play a special role. But when you consider infinite cell complexes, that choice is a real choice, so there’s no reason to think integral (or rational) homology plays a fundamental role. This is also why I think your work on traces could help us understand what’s going on in a more conceptual way.

Posted by: Dan Christensen on September 9, 2016 1:28 AM | Permalink | Reply to this

Re: Magnitude Homology

Thanks, both of you. It is curious, though, that in order to get the “expected” Euler characteristic for BGB G, we have to regard it as a space rather than an \infty-groupoid!

I did, indeed, google for “Euler characteristic of a topos” and got nothing. (-: I do of course know that we can compute the cohomology of a topos, and so of course we could define an Euler characteristic that way. But the cohomology of a topos in this sense, when applied to the topos of sheaves on a space, computes its ordinary cohomology, i.e. the cohomology of the \infty-groupoid it presents. I was imagining rather something that could include both the Euler-Schanuel characteristic when applied to a topos of sheaves on a space and the homotopical Euler characteristic when applied to the (\infty-)topos of presheaves on an (\infty-)groupoid. For instance, we can distinguish the topos Set GSet^G of GG-sets from the topos Sh(BG)Sh(B G) of sheaves on the classifying space of GG.

I see that Borel-Moore homology can be computed using sheaf cohomology, which is what we see naturally in topos-land. Perhaps the Borel-Moore construction can be generalized from spaces to (some) toposes.

Posted by: Mike Shulman on September 9, 2016 4:25 AM | Permalink | Reply to this

Re: Magnitude Homology

Perhaps this would be an appropriate moment for a confession (and request for help). My work with Kate Ponto on linearity of traces has led me to be rather suspicious of weightings and coweightings. Actually, to be honest, I was already suspicious of them, but the theory of linearity of traces has given me more concrete grounds for my suspicion. (-:

The linearity-of-traces picture tells us that if AA is a small category and VV is a good category (or (,1)(\infty,1)-category) to enrich over with the property that AA-colimits are absolute in VV, then (1) the AA-colimit of a diagram of dualizable objects in VV is again dualizable, and (2) there is a list of numbers called the coefficient vector such that if an endomorphism of colim A(D)colim^A (D) is induced by an endomorphism f:DDf:D\to D, then its trace can be calculated as a linear combination of the traces of the endomorphisms f df_d (modulo an important caveat — see below).

In particular, if we take ff to be the identity, then its traces are the categorical notion of Euler characteristic. And if we also take DD to be constant at 11 and VV to be an (,1)(\infty,1)-category, then colim A(1)colim^A (1) is a “VV-nerve” of AA. (If V=GpdV=\infty Gpd, for instance, then colim A(1)colim ^A (1) is the ordinary nerve of AA. Not many colimits are absolute for Gpd\infty Gpd, though, so usually we use something else like spectra, in which case we get the suspension spectrum of the nerve of AA.)

Thus, in this case the Euler characteristic of AA is the sum of elements of the coefficient vector, since the Euler characteristic of 11 is 11. Furthermore, under some technical conditions on AA (it is a finite EI-category whose automorphism groups act freely by postcomposition), the elements of the coefficient vector induce a weighting on AA, thereby “explaining” why the Euler characteristic can often be obtained by summing up a weighting.

My suspicion has to do with the caveat mentioned above. A weighting assigns a number to each object of a category AA. (Every object, not every isomorphism class; which already seems weird to me.) A coefficient vector, however, assigns a number to every element of the trace of AA. The trace of a category AA (Kate and I call it the “shadow”, a term she introduced, to avoid confusion between traces in categories and traces of categories) is the quotient of the set of all endomorphisms in AA by the relation gffgg f \sim f g. In particular, therefore, each object induces an element of tr(A)tr(A) (its identity morphism), but isomorphic objects induce the same element of tr(A)tr(A); and in general we also have elements coming from nonidentity endomorphisms.

If AA has no nontrivial endomorphisms (as in Tom’s “finiteness condition” above), then tr(A)tr(A) is exactly the set of isomorphism classes in AA, and so the coefficient vector is almost exactly the same as a weighting (with the redundancy coming from isomorphic objects removed). But in general, the coefficient vector contains more information.

For instance, if A=BGA= B G for a finite group GG, and we choose VV appropriately to make BGB G-colimits absolute (the order #G\#G of GG must be invertible in VV), then tr(BG)tr(B G) is the set of conjugacy classes in GG, and the coefficient vector assigns to each such class CC the fraction #C#G\frac{\#C}{\#G}. The weighting ignores all of this extra information and remembers only the case C={1}C=\{1\}, with weight 1#G\frac{1}{\#G}. But by keeping all the other numbers around in the coefficient vector, we get a nice linearity formula for traces, which in particular specializes to the orbit-counting theorem and also to the character of an induced representation.
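
To make that last example completely concrete, here is a brute-force computation for G = S_3 (the group and the helper names are arbitrary choices): the shadow of BG is the set of conjugacy classes, the coefficient vector assigns #C/#G to each class C, and a weighting would retain only the value 1/#G at the class of the identity.

```python
from fractions import Fraction
from itertools import permutations

elements = list(permutations(range(3)))        # S_3 as permutations of {0,1,2}

def compose(p, q):                             # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0] * 3
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def conj_class(g):                             # the set {h g h^{-1} : h in G}
    return frozenset(compose(compose(h, g), inverse(h)) for h in elements)

classes = {conj_class(g) for g in elements}    # elements of the shadow tr(BG)
order = len(elements)                          # #G = 6
print(sorted(Fraction(len(C), order) for C in classes))   # [1/6, 1/3, 1/2]
print(Fraction(1, order))                      # 1/6: all that a weighting remembers
```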

Thus, with the coefficient vector arising so naturally from categorical considerations (it is itself actually a kind of trace), leading to such nice linearity theorems, and being free of weirdness related to isomorphism classes and skeletality, it’s hard for me to “believe in” weightings. But I would love it if someone can set me straight by explaining why weightings are a good thing to look at!

Posted by: Mike Shulman on September 8, 2016 5:11 AM | Permalink | Reply to this

Re: Magnitude Homology

That's a substantial, interesting comment that I'm definitely not going to do justice to, I'm afraid, or at least not today. Time is short, so the following reply is scrappy, but if I don't write it now then I probably won't reply at all.

I’ll begin by saying (again) things that you already know, but they represent my immediate answer to the question “why are weightings a good thing to look at?” I’m partly repeating myself from this post (below which you and I had a related discussion).

Weightings are a solution to the following problem. For finite subsets XX and YY of a set ZZ,

|XY|=|X|+|Y||XY|. |X \cup Y| = |X| + |Y| - |X \cap Y|.

For a finite set XX acted on freely by a finite group GG,

|X/G|=1|G||X|. |X/G| = \frac{1}{|G|}\cdot |X|.

In both cases, the set on the left-hand side is a colimit of some functor F:AFinSetF : \mathbf{A} \to FinSet. In both cases, the right-hand side is a \mathbb{Q}-linear combination of the cardinalities of the individual sets F(a)F(a) (aAa \in \mathbf{A}). So, it’s natural to want to unify and generalize.

Question:   given a finite category A\mathbf{A}, are there rational “weights” w(a)w(a) (aAa \in \mathbf{A}) such that

|colimF|= aAw(a)|F(a)| |colim F| = \sum_{a \in \mathbf{A}} w(a) |F(a)|

for all functors F:AFinSetF: \mathbf{A} \to FinSet?

Answer: Obviously not for arbitrary FF. But if we restrict ourselves to nice enough FF, the answer is that a family w=(w(a))w = (w(a)) fulfils this condition exactly when it’s a weighting in the sense I defined. (And since a weighting is a solution to nn equations in nn unknowns over a field, most categories admit a unique weighting.)

In slightly more detail: for this equation to hold when FF is representable is exactly equivalent to the definition of weighting. And if it holds for all representables, it automatically holds when FF is a coproduct of representables.

In the case of pushouts, that condition on F means that the two maps that you're pushing out along are injective; that is, it's exactly the inclusion-exclusion formula. In the case of group actions, the condition on F means exactly that the action is free. So we recover the original two motivating cases.
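
To make the pushout case completely concrete (the object names and the sets X, Y below are arbitrary): solving the weighting equations for the span b \leftarrow a \rightarrow c gives w(a) = -1, w(b) = w(c) = 1, which is exactly inclusion-exclusion.

```python
import numpy as np

objects = ['a', 'b', 'c']                  # the span  b <- a -> c
zeta = np.array([[1, 1, 1],                # zeta[i][j] = number of maps objects[i] -> objects[j]
                 [0, 1, 0],
                 [0, 0, 1]], dtype=float)
w = np.linalg.solve(zeta, np.ones(3))      # weighting: sum_j zeta[i][j] w[j] = 1 for every i
print(dict(zip(objects, w)))               # {'a': -1.0, 'b': 1.0, 'c': 1.0}

X, Y = {1, 2, 3}, {3, 4}                   # F(a) = X n Y, F(b) = X, F(c) = Y, colim F = X u Y
sizes = np.array([len(X & Y), len(X), len(Y)], dtype=float)
print(len(X | Y), float(w @ sizes))        # 4 and 4.0: |X u Y| = |X| + |Y| - |X n Y|
```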

In other words, my answer to “why are weightings a good thing to look at?” is that they solve this problem.

I’m saying nothing about coefficient vectors, because I have nothing to say! In principle I don’t see why both concepts shouldn’t be useful. I’m slightly reminded of the situation with fine and coarse Möbius inversion: paper, post.

Posted by: Tom Leinster on September 9, 2016 3:51 AM | Permalink | Reply to this

Re: Magnitude Homology

Thank you for linking to that previous post, and particularly our conversation below it. I had forgotten that we had that conversation 5 years ago (June 2011).

Maybe this is clear, but I want to emphasize that the theory of linearity of traces is, in my opinion, the answer to the questions I was asking in that comment. Looking back through my email correspondence with Kate, we had the basic idea in November 2011, which must have been motivated by that conversation with you; but it took another two and a half years for us to write it up and two more years for it to be published.

At the end of the linked comment, I wrote:

This makes me want to ask some question like “Given a symmetric monoidal derivator DD, a category AA, and a diagram XD(A)X\in D(A) of dualizable objects, under what conditions can the Euler characteristic of hocolim AXhocolim_A X be computed in terms of the Euler characteristics of the objects in XX and a weighting of AA?”

The answer to that question supplied by linearity-of-traces is that if AA-colimits are absolute for DD, then AA has a coefficient vector (which is like a weighting but better), and the Euler characteristic of hocolim AXhocolim_A X can be computed as a linear combination (whose coefficients are those in the coefficient vector) of the Euler characteristics of the objects in XX and the traces of the actions of endomorphisms in AA. This explains my puzzlement from 5 years ago that

the Euler characteristic of a homotopy colimit is not in general determined by the Euler characteristics of the input spaces, without knowing anything about the morphisms involved in the colimit.

The linearity formula incorporates exactly the necessary information about the morphisms involved in the colimit: their traces, weighted by the appropriate elements of the coefficient vector. I also wondered that

homotopy colimits should be invariant under Morita equivalence, but the Euler characteristic of a category is not

and this is also solved by using coefficient vectors instead of weightings. Considering the universal example of the walking idempotent AA and its Cauchy completion A¯\overline{A}, the coefficient vector of AA assigns 0 to the identity and 1 to the idempotent, so the linearity formula says that the Euler characteristic of the splitting of an idempotent is equal to the trace of the idempotent. In A¯\overline{A}, the idempotent is identified in the shadow with the identity of the splitting object, and the coefficient vector again assigns 1 to them and 0 to the other identity. More generally, Cauchy completion never changes the shadow of a category or its coefficient vector.

So it seems to me that while weightings do partly solve the problem of calculating the cardinality of a colimit, coefficient vectors solve a more general problem and give a more general solution. The weighting formula is the special case of the linearity formula when (1) FF is a diagram of sets that is projectively cofibrant (= a coproduct of representables), so that its ordinary colimit in SetSet is also its homotopy colimit, and (2) AA has no nonidentity endomorphisms, so that we only need to know the Euler characteristics of the input objects and the coefficient vector degenerates to a weighting.

It’s true that there are some categories that have weightings but where we don’t know whether they are absolute in any reasonable monoidal derivator, such as the delooping of an arbitrary finite monoid. But the hypotheses of the colimit formula provided by weightings seem so restrictive, making the statement feel so tautological, that it’s hard for me to see these cases as very strong motivation.

Posted by: Mike Shulman on September 9, 2016 7:07 AM | Permalink | Reply to this

Re: Magnitude Homology

I haven’t properly read your last comment yet, Mike. This is another answer to your previous question — “why are weightings a good thing to look at?” — that occurred to me while I was offline.

It’s an answer of an uncategorical sort, but nevertheless I find it quite compelling. It’s about the maximum entropy (or maximum diversity) problem for finite metric spaces.

In brief:

  • For any finite metric space XX, there’s a one-parameter family (D q) q[0,](D_q)_{q \in [0, \infty]} of functions from {probability distributions on XX} to \mathbb{R}. This D qD_q is called the diversity of order qq, and its logarithm is a kind of entropy. By taking particular metrics and particular values of qq, then maybe applying some small transformation, we obtain lots of established quantities called “entropy”, including the Shannon, Rényi and so-called Tsallis entropies.

  • Given a space XX, one can seek the probability distribution(s) pp that maximize D q(p)D_q(p). In principle the answer depends on qq, but it’s a theorem that it doesn’t. Nor does the maximum value sup pD q(p)\sup_p D_q(p).

  • The maximizing distributions on X are very closely related to the weightings on it. More exactly, every maximizing distribution is a weighting on some subspace of X, extended by zero to the whole space and rescaled to sum to 1. The maximum diversity \sup_p D_q(p) is the magnitude of that subspace.
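
A tiny numerical illustration of the last two points (a toy example: three points with all distances 1, chosen so that the weighting is positive and no proper subspace is needed):

```python
import numpy as np

d = np.ones((3, 3)) - np.eye(3)       # three points, all pairwise distances 1
Z = np.exp(-d)                        # similarity matrix Z_xy = exp(-d(x, y))
w = np.linalg.solve(Z, np.ones(3))    # the weighting
magnitude = w.sum()                   # here this is also the maximum diversity
p = w / magnitude                     # maximizing distribution: the weighting, rescaled
print(w, magnitude, p)                # w_i = 1/(1 + 2/e), magnitude = 3/(1 + 2/e), p uniform
```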

That’s a specific motivation for weightings in the case of metric spaces. The original cardinality-of-a-colimit motivation is only valid for unenriched categories, so it’s good to have a justification of the weighting concept for at least some enriched categories.

I should read your paper, but as I have your attention, maybe I can just ask directly: does this story of coefficient vectors work for enriched categories? You wrote:

A coefficient vector, however, assigns a number to every element of the trace of AA

where the trace of AA is aA(a,a)\int^a A(a, a). If AA is an enriched category then that’s not even a set, so this wouldn’t make sense. So I can’t guess how it would generalize to the enriched setting.

Posted by: Tom Leinster on September 9, 2016 9:19 AM | Permalink | Reply to this

Re: Magnitude Homology

does this story of coefficient vectors work for enriched categories?

It does. However, since the raison d’etre of a coefficient vector is to tell us something about colimits, and in the enriched setting we should talk about weighted colimits, to determine an enriched coefficient vector you need not only a small VV-category AA but a weight Φ:A opV\Phi : A^{op} \to V. If Φ\Phi is absolute for VV, then it has a “coefficient vector”, which is a map in VV from the unit object to the trace of AA as a VV-category, i.e. the coend aAA(a,a)\int^{a\in A} A(a,a) in VV.

If AA is freely generated as a VV-category by an unenriched category BB, then tr(A)=tr(B)Itr(A) = tr(B) \cdot I, the copower of the unit object II by the trace of BB as a SetSet-category (which is the trace I described above). If moreover VV is additive and tr(B)tr(B) is finite, then tr(B)II tr(B)tr(B) \cdot I \cong I^{tr(B)}, and so the coefficient vector is determined by one morphism III\to I (a “number” to the eyes of VV) for each element of tr(B)tr(B). If we furthermore take Φ\Phi to be freely generated by the constant functor 1:B opSet1 : B^{op}\to Set, then we are talking about “conical colimits”, and we recover the story that I described above.

Note that in order to apply this theory for a category like Set, we generally need to first map everything into some other category by some colimit-preserving functor like free-abelian-group. This serves two purposes: (1) it makes more things dualizable, so that \Phi can be absolute and the objects of the diagram can be dualizable, and (2) it lands us in an additive world, so that we can identify the abstract "coefficient vector" with an actual vector of numbers.

This is probably not interesting for V=[0,]V=[0,\infty]. However, it’s possible we might get something interesting by, say, mapping everything into Ab [0,] opAb^{[0,\infty]^{op}} by the Yoneda embedding and then the free-abelian-group. I need to think about that.

Posted by: Mike Shulman on September 9, 2016 6:14 PM | Permalink | Reply to this

Re: Magnitude Homology

Another uncategorical answer to the question “why are weightings a good thing to look at?” is that essentially everything we know about magnitudes of infinite metric spaces is based in part on weightings. That is, I don’t know how to prove any of the theorems about magnitude of infinite metric spaces without weightings making an appearance, whether or not they are used explicitly in the definition of magnitude.

Posted by: Mark Meckes on September 9, 2016 2:47 PM | Permalink | Reply to this

Re: Magnitude Homology

So that it doesn’t get lost in the details, I’ll briefly summarize how weightings are handled for infinite metric spaces, as discovered by Mark. This was the product of years of searching for the right approach, and in the end was much simpler than we’d expected.

I’ll phrase this in a way that might appal Mark.

Let XX be a compact, positive definite metric space. Write FM XFM_X for the free vector space on the set of points of XX. There is an inner product on FM XFM_X determined by x,y=e d(x,y)\langle x, y \rangle = e^{-d(x, y)}. Like any inner product space, FM XFM_X can be completed to a Hilbert space; call it W XW_X. This is the space where weightings for XX live. It contains the measures on XX, but also much more.

I’ll refer to Mark’s paper for the rest of the story (e.g. the actual definition of weighting), but the point I want to make is that with this approach, lots of things work out very cleanly. For instance, if wW Xw \in W_X is a weighting for XX then the magnitude of XX is simply w 2\|w\|^2. This makes everything fit very well into the framework of functional analysis.
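
For a finite space this is a one-line check (a toy computation with four arbitrary points in the plane): since Z w is the all-ones vector, the squared norm \langle w, w \rangle = w^T Z w equals \sum_x w_x, which is the magnitude.

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.random((4, 2))                                   # four points in the plane
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)   # distance matrix
Z = np.exp(-d)                                             # <x, y> = exp(-d(x, y))
w = np.linalg.solve(Z, np.ones(4))                         # weighting: Z w = 1
print(w @ Z @ w, w.sum())                                  # equal: both are the magnitude
```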

Incidentally, I’m absolutely open to the possibility that weightings are not the last word and that there’s some better substitute. The recent comments I’ve made on what weightings do well are simply in response to your question “what are weightings good for?” (I paraphrase slightly.) I’m not suggesting they can’t be improved upon. If there’s a superior way of working with magnitude of enriched categories, then of course I’d like to know it.

As I keep saying, I should read that paper of yours and Kate Ponto’s. Among other things, I’m increasingly getting the feeling that when I instinctively compared the relationship between weightings and coefficient vectors to the relationship between coarse and fine Möbius inversion, that was a more apposite and accurate comparison than I initially realized. But I should learn more about what you did before attempting to hold forth on that.

Posted by: Tom Leinster on September 9, 2016 3:44 PM | Permalink | Reply to this

Re: Magnitude Homology

I’m genuinely perplexed at the suggestion that your phrasing might appall me. What about it do you imagine I might object to?

Also, for anyone who’d like more detail than Tom wrote but less than in the paper, here’s a blog post about weightings on compact metric spaces.

Posted by: Mark Meckes on September 9, 2016 7:13 PM | Permalink | Reply to this

Re: Magnitude Homology

I’m genuinely perplexed at the suggestion that your phrasing might appall me. What about it do you imagine I might object to?

Sorry, I was just kidding around, which on blogs/email goes wrong about 50% of the time. I should probably learn to either not do it or do it better.

What I had in mind was this. FMFM, as defined in your paper, is the space of finitely-supported finite signed/complex measures on XX, with inner product given by μ,ν= X Xe d(x,y)dμ(x)dν(y). \langle \mu, \nu \rangle = \int_X \int_X e^{-d(x, y)} \,d\mu(x)\,d\nu(y). I rephrased that definition without mention of measures or integration. I’m not claiming that’s any sort of feat, but it did make me feel slightly transgressive, because the way I understand this stuff (and the way you present it) is all about measures and integration. The rephrasing seemed to me to be in somewhat bad taste — though now that I reflect on it, I don’t think it is.

So the (ahem) “joke” was that you’d be appalled by me presenting something fundamentally analytic in a non-analytic way. I didn’t seriously imagine you’d be appalled. And there we are.

Posted by: Tom Leinster on September 9, 2016 7:47 PM | Permalink | Reply to this

Re: Magnitude Homology

I could tell you were kidding around. I just couldn't tell what the joke was, because to me FM doesn't really feel like a fundamentally analytic thing, probably because it's not complete in any reasonable topology. I actually think of it as the free real vector space on X. In the paper I identify FM as the space of finitely-supported measures because (a) that's what it corresponds to under a natural analytic representation of its completion with respect to a convenient topology and (b) I was writing for an audience of analysts and thought that finitely-supported measures would feel more familiar to some analysts with more delicate sensibilities than mine. The measure-theoretic perspective also gives a convenient notation for elements of FM that doesn't need additional explanation (at least for an audience of analysts).

Posted by: Mark Meckes on September 9, 2016 8:09 PM | Permalink | Reply to this

Re: Magnitude Homology

Thanks for those answers involving diversity and infinite metric spaces! It’s possible that coefficient vectors or something like them might have something to say about those areas, but I sure don’t immediately see it. So while, as you said, we should stay open to the possibility of better ways to do things, for the moment I will now concur with your remark that

In principle I don’t see why both concepts shouldn’t be useful.

However, this means that I don’t really understand what a weighting means (which was Mark’s question that started this thread). I thought that a weighting was just a degenerate sort of coefficient vector, which made me suspicious of them but at least made me think I understood them. But now I guess that I only understand weightings insofar as they coincide with coefficient vectors, which is to say nothing more than that I understand coefficient vectors and don’t understand weightings. (-:

Posted by: Mike Shulman on September 10, 2016 4:26 AM | Permalink | Reply to this

Re: Magnitude Homology

I am starting to feel like weightings for infinite metric spaces may also be an instance of coefficient vectors, but I’m stuck at trying to make it precise. In general, the coefficient vector story usually involves two monoidal categories VV and WW, a VV-category XX (and a weight, although for the moment we can take that to be constant at 11), and a monoidal functor Σ:VW\Sigma:V\to W. We apply Σ\Sigma to XX (and to the weight) to obtain a WW-category ΣX\Sigma X, and then the coefficient vector is an element of the trace of ΣX\Sigma X.

My paper with Kate concentrates on the case V=SetV=Set, in which case interesting choices of WW include abelian groups, chain complexes over \mathbb{Z} or \mathbb{Q} (or anything else), and spectra. On the other hand, of course the case V=WV=W and Σ=Id\Sigma=Id formally subsumes all others by simply applying Σ\Sigma first, but I find it conceptually helpful to keep Σ\Sigma in mind, because often the data we’re given lives in a VV but we have to choose a WW and a Σ\Sigma in order to make the weight absolute.

So suppose we take V=[0,]V=[0,\infty]. Is there a WW and a Σ\Sigma such that the trace of ΣX\Sigma X is the space W XW_X, so that the coefficient vector has a chance to be the weighting? A natural guess for a category theorist would be something like W=W= Ban, the “isometric category” of Banach spaces and short linear maps, with its “projective” tensor product.

Now the trace of ΣX\Sigma X is the coequalizer of the two maps

x,yXΣX(x,y)ΣX(y,x) xXΣX(x,x). \sum_{x,y\in X} \Sigma X(x,y) \otimes \Sigma X(y,x) \to \sum_{x\in X}\Sigma X(x,x).

Since X(x,x)=0X(x,x)=0 (for a classical metric space), the right-hand object is just the sum of XX copies of Σ(0)\Sigma(0). If W=Vect W=Vect_{\mathbb{R}} and Σ(0)=\Sigma(0)=\mathbb{R}, then this is exactly your FM XFM_X. If we are in BanBan instead, then this is something like 1(X)\ell^1(X). It feels like W XW_X ought to be some kind of “completed quotient” of this, such as we could get from a clever choice of Σ\Sigma; but I can’t make it work out. Any ideas, anyone?

Posted by: Mike Shulman on September 15, 2016 10:02 PM | Permalink | Reply to this

Re: Magnitude Homology

Delurking to ask a non-mathematical question: is there any way to adjust the height of the vertical boxes that contain the LaTeX-like formulas? (I’m trying to print this post for later reading and the large vertical spacing is leading to a worrying number of pages.)

Posted by: Yemon Choi on September 7, 2016 5:33 PM | Permalink | Reply to this

Re: Magnitude Homology

Yemon, is it the same problem as I described here (see screenshot)? If so, I think other people discovered a fix. I didn’t implement it, so I still have this problem — I just learned to live with it.

If you just want a printout without extraneous vertical space, here’s one. (I did it on the browser rekonq, which I don’t normally use but doesn’t have this particular problem.)

Posted by: Tom Leinster on September 7, 2016 6:00 PM | Permalink | Reply to this

Re: Magnitude Homology

Thanks Tom. It does seem to be a related problem — I’ll look into it later. In the meantime, thanks very much for producing the PDF printout.

Posted by: Yemon Choi on September 7, 2016 6:51 PM | Permalink | Reply to this

Re: Magnitude Homology

I’ve been thinking a bit about the option of “blurring” coefficients. While the interval coefficients δ J\delta_J do introduce some blurring, it’s “non-uniform” in a somewhat funny way: points in the middle of the interval are blurred more than points near the ends, because there’s more “room” around them for nonzero differentials. Maybe it will nevertheless turn out to be useful (e.g. for recovering or approximating magnitude), but it seems worthwhile to explore other ways of blurring too. Here’s one possibility.

First notice that we can collect up all the homologies H *(X;δ )H_\ast(X;\delta_\ell) into a single homology by just taking the direct sum δ= δ \delta = \bigoplus_\ell \delta_\ell. Homology preserves coproducts in its coefficients, so H *(X;δ)= H *(X;δ )H_\ast(X;\delta) =\bigoplus_\ell H_\ast(X;\delta_\ell). The chain groups C *(X;δ)C_\ast(X;\delta) are generated by all sequences (x 0,,x n)(x_0,\dots,x_n), but the face maps are set to zero whenever they reduce the total distance d(x 0,x 1)++d(x n1,x n)d(x_0,x_1)+\cdots +d(x_{n-1},x_n).

Of course, summing up all the homologies like this is not useful if our goal is to recover magnitude, since for that we need to put the ranks as the coefficients of different powers of q. But Tom's observation about convexity implies that X is Menger convex iff H_1(X;\delta)=0, so this single homology group also detects convexity.

Now this δ\delta is something that we can blur in other ways. For instance, fix some ϵ>0\epsilon\gt 0 and consider the coefficients Δ ϵ\Delta_\epsilon defined by Δ ϵ()=\Delta_\epsilon(\ell)=\mathbb{Z} for all \ell, but for which the transition map =Δ ϵ( 2)Δ ϵ( 1)=\mathbb{Z} = \Delta_\epsilon(\ell_2) \to \Delta_\epsilon(\ell_1) = \mathbb{Z} is the identity if 2 2<ϵ\ell_2-\ell_2\lt \epsilon and the zero map otherwise. Then the chain groups are again generated by all sequences (x 0,,x n)(x_0,\dots,x_n), but the face maps are set to zero if they reduce the total distance by ϵ\epsilon or more.

I haven’t worked out exactly what H 1(X;Δ ϵ)H_1(X;\Delta_\epsilon) looks like, but it seems to detect some kind of “approximate convexity modulo ϵ\epsilon”. The cycles are the pairs (x 0,x 1)(x_0,x_1) whose distance is ϵ\ge\epsilon. One kind of boundary arises from triples (x 0,x 1,x 2)(x_0,x_1,x_2) such that d(x 0,x 1)d(x_0,x_1), d(x 1,x 2)d(x_1,x_2), and d(x 0,x 2)d(x_0,x_2) are all ϵ\ge\epsilon while d(x 0,x 1)+d(x 1,x 2)d(x 0,x 2)+ϵd(x_0,x_1)+d(x_1,x_2) \le d(x_0,x_2)+\epsilon, i.e. x 1x_1 is “almost between x 0x_0 and x 2x_2”; in this case (x 0,x 2)(x_0,x_2) is a boundary. Another kind of boundary arises from triples (x 0,x 1,x 2)(x_0,x_1,x_2) where d(x 0,x 1)<ϵd(x_0,x_1)\lt\epsilon while d(x 1,x 2)d(x_1,x_2) and d(x 0,x 2)d(x_0,x_2) are ϵ\ge\epsilon; in this case I think we get (x 0,x 2)=(x 1,x 2)(x_0,x_2) = (x_1,x_2), so that the homology is “insensitive to small perturbations”.

Posted by: Mike Shulman on September 8, 2016 4:17 AM | Permalink | Reply to this

Re: Magnitude Homology

fix some ϵ>0\epsilon\gt 0 and consider the coefficients Δ ϵ\Delta_\epsilon defined by Δ ϵ()=\Delta_\epsilon(\ell)=\mathbb{Z} for all \ell, but for which the transition map =Δ ϵ( 2)Δ ϵ( 1)=\mathbb{Z} = \Delta_\epsilon(\ell_2) \to \Delta_\epsilon(\ell_1) = \mathbb{Z} is the identity if 2 2<ϵ\ell_2-\ell_2\lt \epsilon [presumably a typo for 2 1<ϵ\ell_2-\ell_1 \lt \epsilon] and the zero map otherwise.

I don’t get it. The coefficient system Δ ϵ\Delta_\epsilon is meant to be a functor VAbV \to Ab, which in our case means ([0,],)Ab([0, \infty], \geq) \to Ab. But Δ ϵ\Delta_\epsilon doesn’t seem to preserve composition.

Posted by: Tom Leinster on September 8, 2016 4:46 AM | Permalink | Reply to this

Re: Magnitude Homology

Hmm… drat. I guess I need to get more sleep. Or maybe I’ve been thinking too much about lax functors and am out of practice for strict ones. Yeah, that’s a better excuse. (-:O

I feel like there ought to be some kind of “uniform blurring” possible, though. Any ideas?

Posted by: Mike Shulman on September 8, 2016 5:25 AM | Permalink | Reply to this

Re: Magnitude Homology

No, I’ve been thinking about it a little bit and haven’t got anywhere.

Persistent homologists use the term “persistence module”, which I think means a functor ( +,)Vect(\mathbb{R}^+, \leq) \to Vect. Here I do mean \leq as opposed to \geq, which (if I’m right) means it’s the opposite variance of our coefficients functor for homology. And I think functors from +\mathbb{R}^+ to VectVect (or AbAb) play a different role in persistent homology than they do in your homology theory. Nevertheless, maybe the literature on persistent homology will provide inspiration on possible coefficient systems that we haven’t thought of yet.

Posted by: Tom Leinster on September 8, 2016 1:07 PM | Permalink | Reply to this

Re: Magnitude Homology

Sorry for this irrelevant intrusion, but I smiled at the phrase “persistent homologists”, which makes me think of homologists who never give up.

Posted by: Todd Trimble on September 8, 2016 1:35 PM | Permalink | Reply to this

Re: Magnitude Homology

Do you think they should get together with pointless topologists?

Posted by: Tom Leinster on September 8, 2016 1:39 PM | Permalink | Reply to this

Re: Magnitude Homology

Perhaps they could get together with noncommutative geometers. Or noncommutative geometers could get together with them…

Posted by: Yemon Choi on September 9, 2016 12:48 PM | Permalink | Reply to this

Re: Magnitude Homology

Let me write out all the steps of the formal decategorification argument, to see which of them depend on what.

\begin{aligned} sum(Z_X^{-1}) &= \sum_{n\ge 0} (-1)^n\, sum\bigl((Z_X-I)^n\bigr)\\ &= \sum_{n\ge 0} (-1)^n \sum_{\ell\in[0,\infty)} rank(MC_{n,\ell}(X))\, e^{-\ell}\\ &= \sum_{\ell\in[0,\infty)} \sum_{n\ge 0} (-1)^n rank(MC_{n,\ell}(X))\, e^{-\ell}\\ &= \sum_{\ell\in[0,\infty)} \sum_{n\ge 0} (-1)^n rank(H_{n,\ell}(X))\, e^{-\ell} \end{aligned}

The first step is meaningful in that we can sum the RHS to the LHS by “varying 1-1” as in the theorem Tom quoted at the end of the post.

The second step seems to me to be unproblematic (for finite metric spaces). Namely, for any fixed nn, the sum [0,)rank(MC n,(X))e \sum_{\ell\in[0,\infty)} rank(MC_{n,\ell}(X)) e^{-\ell} has only finitely many nonzero terms, doesn’t it? So we are just equating two formal series whose terms are pointwise equal numbers; thus any sense in which one of them converges ought also to apply to the other.

The third step seems to me to be where the real problem is, since we are rearranging the order of summation, which I am accustomed to regard with great trepidation when series do not converge absolutely. I don’t know very much about the various kinds of trickery for summing divergent series, but it seems likely that they would be at least as sensitive to the ordering and grouping of terms as is the ordinary sum of a convergent (but not absolutely convergent) series, and in that case as is well known you can rearrange the terms to make the sum come out to be anything you like. Is there any reason to believe this step would be justified?

The fourth step, moving from ranks of chain groups to ranks of homology, also seems unproblematic if the chain complexes are all bounded, since then for a fixed \ell, both n0(1) nrank(MC n,(X))e \sum_{n\ge 0} (-1)^n rank(MC_{n,\ell}(X)) e^{-\ell} and n0(1) nrank(H n,(X))e \sum_{n\ge 0} (-1)^n rank(H_{n,\ell}(X)) e^{-\ell} have only finitely many nonzero terms and so are just numbers. Thus whatever sense of [0,)\sum_{\ell\in[0,\infty)} can be given to one of them ought also to apply to the other.

We need to do something like the fourth step in order to claim that magnitude homology, and not just the magnitude chain groups, categorifies magnitude. But is there any way to do this without the questionable third step? For instance, could we show that

n0(1) n [0,)rank(MC n,(X))e = n0(1) n [0,)rank(H n,(X))e ? \sum_{n\ge 0} (-1)^n \sum_{\ell\in[0,\infty)} rank(MC_{n,\ell}(X)) e^{-\ell} = \sum_{n\ge 0} (-1)^n \sum_{\ell\in[0,\infty)} rank(H_{n,\ell}(X)) e^{-\ell}\quad?

Maybe not – as Dan pointed out, the alternating sums of homology ranks can be obtained formally by rebracketing the alternating sums of chain ranks, so an equality like this is also implicitly trying to do some rearranging of a non-absolutely-convergent series.

Let’s see if I understand correctly what happens to make it work for graphs. In that case we talk about formal power series with q q^\ell rather than real numbers with e e^{-\ell}. Formal power series are a complete local ring, so we can add up an infinite series of formal power series as long as the degrees of their leading terms go to infinity, and the sum is pointwise on coefficients. Then in the analogous step

n0(1) n rank(MC n,(X))q = n0(1) nrank(MC n,(X))q \sum_{n\ge 0} (-1)^n \sum_{\ell\in \mathbb{N}} rank(MC_{n,\ell}(X)) q^{\ell} = \sum_{\ell\in\mathbb{N}} \sum_{n\ge 0} (-1)^n rank(MC_{n,\ell}(X)) q^{\ell}

the LHS is an infinite sum of formal power series (actually, unless I’m confused, an infinite sum of polynomials). Since there is a minimum distance between distinct points, as nn\to\infty the smallest value of \ell such that MC n,0MC_{n,\ell}\neq 0 also goes to infinity. Thus we can sum it coefficient-wise and get the RHS. (This is Theorem 8 of the Hepworth-Willerton paper.) And I guess we only ever try to plug in actual numbers for qq on the matrix side? Tom mentioned that the matrix inversion can happen in the field of rational functions, hence also works for almost any real value of qq, and the resulting rational magnitude function happens to formally equal a power series.
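
For a minimal concrete instance of the graph story (the one-edge graph K_2, chosen just for its size), one can invert Z over the field of rational functions, sum the entries, and expand the result as a formal power series in q:

```python
import sympy as sp

q = sp.symbols('q')
Z = sp.Matrix([[1, q],
               [q, 1]])                   # the one-edge graph K_2
magnitude = sp.simplify(sum(Z.inv()))     # sum of the entries of Z^{-1}
print(magnitude)                          # 2/(q + 1), a rational function of q
print(sp.series(magnitude, q, 0, 6))      # 2 - 2*q + 2*q**2 - 2*q**3 + 2*q**4 - 2*q**5 + O(q**6)
```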

If this is right, then it seems to me that essentially the same approach should work for finite metric spaces, using Hahn series instead of power series. In case anyone is too lazy to follow links, Hahn series (with value group \mathbb{R}) are like power series but allow an arbitrary well-ordered subset of \mathbb{R} as the exponents instead of just {0,1,2,3,}\{0,1,2,3,\dots\} like in a power series (or {n,,1,0,1,2,}\{-n,\dots,-1,0,1,2,\dots\} like in a Laurent series). Well-orderedness allows us to multiply them without ever needing to add up more than finitely many coefficients.

Hahn series have a complete metric topology, which is more or less what you would expect — it’s not an adic topology determined by an ideal, since Hahn series are a field and have no nonzero ideals, but it is induced by a valuation and a corresponding absolute value: the valuation of a Hahn series is the smallest exponent of the variable appearing in it. I think this means that once again we can add up an infinite series of Hahn series pointwise as long as their valuations go to ++\infty. And as Tom mentioned, the set 𝕃 X\mathbb{L}_X of possible exponents is not only well-ordered, but has order type ω\omega and goes to ++\infty. So it seems to me that the formal argument should again be perfectly valid; am I missing something?
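
A minimal example of the Hahn-series picture (two points at distance \sqrt{2}, chosen so the exponents are irrational): the magnitude is 2/(1+q^{\sqrt{2}}), its expansion 2 - 2q^{\sqrt{2}} + 2q^{2\sqrt{2}} - \cdots has exponent set \{0, \sqrt{2}, 2\sqrt{2}, \ldots\}, well ordered of order type \omega and tending to +\infty, and truncating it leaves only a tiny error for q < 1.

```python
import sympy as sp

q = sp.symbols('q', positive=True)
ell = sp.sqrt(2)
magnitude = 2 / (1 + q**ell)                             # two points at distance sqrt(2)
partial = sum(2 * (-1)**n * q**(n * ell) for n in range(20))

for qval in [sp.Rational(1, 2), sp.Rational(1, 10)]:     # truncation error of the expansion
    print(qval, sp.N(magnitude.subs(q, qval) - partial.subs(q, qval)))
```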

The only point of difference I see is that with non-integer powers of qq in the matrix, the matrix inversion can’t happen in the field of rational functions. It should be able to happen in the field of Hahn series, but then in order to directly plug in an actual number for qq we would need to worry about convergence. However, the “varying (1)(-1)” theorem should also bridge this gap with at least something precise and meaningful.

It won’t be exactly saying that magnitude homology determines the numerical magnitude, though: rather, there is some “intermediate” formal series that can be summed to both of them in two different ways (by varying 1-1 or by interpreting it as a formal Hahn series). This seems like a bit of a weaker statement than in the case of graphs, where the magnitude homology determines a formal power series, which is then formally equal to a rational function giving the magnitude, even if that formal equality doesn’t translate into convergence and an actual equality of numbers upon plugging in something for qq. Unless there is a kind of “Hahn rational function” that can be formally embedded into Hahn series.

Posted by: Mike Shulman on September 10, 2016 6:58 AM | Permalink | Reply to this

Re: Magnitude Homology

As you can tell, I’m also losing the battle to stop thinking about magnitude homology. I probably ought to swear off this blog altogether for a few weeks at least. But let me at least get this out there first… (-:O

I don’t know whether it will be possible to prove in the generality of any semicartesian VV that magnitude homology decategorifies to give magnitude. As Tom noted, the convergence problems are already quite tricky for metric spaces. However, here is an informal argument that the construction of the magnitude nerve is, in a reasonably precise sense, a direct categorification of the construction of the magnitude.

To define the magnitude of a VV-category XX, we choose a monoid homomorphism from ob(V)/ob(V)/\cong to the multiplicative monoid of a rig kk, apply it to the matrix of hom-objects of XX to get a matrix ZZ over kk, invert this matrix, then sum the entries of Z 1Z^{-1}. Ignoring convergence, we can write Z 1=1I(IZ)= n(IZ) nZ^{-1} = \frac{1}{I-(I-Z)} = \sum_n (I-Z)^n; for the rest of this comment I will consider the magnitude of XX to be this formal sum nsum((IZ) n)\sum_n sum ((I-Z)^n).
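
Numerically, this formal sum does converge for small enough spaces (a toy check: three points at pairwise distance 1, so that I - Z has spectral radius 2/e < 1):

```python
import numpy as np

d = np.ones((3, 3)) - np.eye(3)          # three points, pairwise distance 1
Z = np.exp(-d)
I = np.eye(3)

partial, power = 0.0, I.copy()
for n in range(60):                      # partial sums of sum_n sum((I - Z)^n)
    partial += power.sum()
    power = power @ (I - Z)

print(partial, np.linalg.inv(Z).sum())   # both approximately 1.728, the magnitude
```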

Now, to define the magnitude nerve of a V-category X, we choose a strong monoidal functor from V to a cocomplete closed monoidal category (whose tensor product therefore preserves colimits in each variable) — a categorified kind of rig, with colimits categorifying addition. In fact, we choose the universal such functor, namely the Yoneda embedding of V into Set^{V^{op}} under Day convolution. (For ordinary magnitude, we could always choose the universal “monoid rig” on ob(V)/\cong as well.) We apply this functor to the matrix of hom-objects of X to get a matrix of objects of Set^{V^{op}}, which I’ll call \mathbf{Z}.
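(For reference, the Day convolution of presheaves P and Q on V is the coend

(P\otimes Q)(\ell) = \int^{m,n\in V} V(\ell, m\otimes n)\times P(m)\times Q(n),

and the Yoneda embedding is strong monoidal for it essentially by construction, since representables convolve to representables: V(-,m)\otimes V(-,n)\cong V(-,m\otimes n).)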

Let’s postpone the question of what it means to “invert” such a matrix, and instead consider the formal expression \sum_n (I-Z)^n. Multiplication of square matrices over a rig categorifies to a monoidal structure on square matrices of objects of a cocomplete closed monoidal category. (This is an endo-hom-category in a bicategory of matrices, just as square matrices are the endomorphisms in a category of finitely generated free modules.) The n^{th} power of the matrix \mathbf{Z} in this monoidal structure is an ob(X)\times ob(X) matrix of presheaves on V defined by

\mathbf{Z}^{\otimes n}(x,y)(\ell) = \sum_{x_1,\dots,x_{n-1}} V(\ell,X(x,x_1)\otimes \cdots \otimes X(x_{n-1},y))

If we “sum the entries” of this by taking coproducts, we get exactly the n-simplices of the magnitude nerve

N(X)_n(\ell) = \sum_{x,y} \mathbf{Z}^{\otimes n}(x,y)(\ell) = \sum_{x_0,x_1,\dots,x_{n-1},x_n} V(\ell,X(x_0,x_1)\otimes \cdots \otimes X(x_{n-1},x_n))

Then we “sum over n” by assembling these into a simplicial object, getting N(X). This is not ridiculous, since a simplicial set qua homotopy type is the homotopy colimit of itself qua diagram of sets (discrete homotopy types), and colimits are, as already remarked, a sort of categorification of addition.
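(Concretely, in the metric case I have in mind, V = [0,\infty] with V(\ell,m) a point when \ell \ge m and empty otherwise, so the formula above just says that an n-simplex of N(X) at level \ell is a tuple of points whose consecutive distances sum to at most \ell:

N(X)_n(\ell) = \{(x_0,\dots,x_n) \;:\; d(x_0,x_1)+\cdots+d(x_{n-1},x_n) \le \ell\}.)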

But wait, I hear you say: what about the “I-” in the formal expression for Z^{-1}?

To explain this, note that it’s really coproducts that categorify addition most directly. More general colimits categorify “addition with some subtraction thrown in”, since we have to cancel out things from different input sets that get identified in the colimit. This can be made precise with the theory of linearity of traces that was mentioned elsewhere: for any sufficiently finite diagram shape B, there is a linearity formula telling us how to compute the “size” (more precisely, the categorical Euler characteristic) of the colimit as a linear combination of the sizes of the inputs.

Now \Delta^{op} is not sufficiently finite for this theory: it has infinitely many objects, for one thing. But if we throw away all the objects above m, we get a category \Delta_m^{op} that is sufficiently finite. And it turns out that the linearity formula for a truncated simplicial object Y:\Delta_m^{op} \to W is (but see below):

|colim Y| = \sum_{0\le n\le m} \sum_{0\le k\le n} (-1)^k \begin{pmatrix} n \\ k \end{pmatrix} |Y_k|.

Hmm, how about that. Here’s the binomial theorem:

(I-Z)^n = \sum_{0\le k\le n} (-1)^k \begin{pmatrix} n \\ k \end{pmatrix} Z^k.

The “I-” in the formal series expression for magnitude is doing exactly the same thing to numbers that “gluing together n-simplices along the faces and degeneracies” does to the Euler characteristic! (In fact, it’s mainly the degeneracies that have this effect; for a semisimplicial object the formula is |colim Y| = \sum_{0\le n\le m} (-1)^n |Y_n|. In some sense, this is a more abstract version of the point in the argument for graphs and metric spaces where I-Z is observed to have 0s on the diagonal (because Z has 1s on the diagonal), so that we need only sum over sequences of adjacently distinct points, which turn out to be exactly the nondegenerate simplices.)
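(To spell out that last point in the metric case, with Z_{x y} = q^{d(x,y)}: the zero diagonal of I-Z means

sum((I-Z)^n) = (-1)^n \sum q^{d(x_0,x_1)+\cdots+d(x_{n-1},x_n)},

where the sum runs over tuples (x_0,\dots,x_n) with x_{i-1}\ne x_i for each i, i.e. exactly the nondegenerate n-simplices; while sum(Z^n) is the corresponding sum over all tuples, degenerate or not, which is what plays the role of |Y_n| in the formula above.)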

(Actually, to be honest, I haven’t proven that that’s the linearity formula for \Delta_m^{op}. My paper with Kate gives a combinatorial formula for the coefficients, and computing it for small values of m, as well as the known connections to magnitude, suggests strongly that this is the formula; but I haven’t proven it in general.)
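(For what it’s worth, here is the sort of small-m check I mean, done for m = 1: the formula above predicts

|colim Y| = |Y_0| + (|Y_0| - |Y_1|) = 2|Y_0| - |Y_1|,

and for a finite 1-truncated simplicial set the realization has |Y_0| vertices and |Y_1| - |Y_0| nondegenerate edges, so its Euler characteristic is |Y_0| - (|Y_1| - |Y_0|) = 2|Y_0| - |Y_1|, as predicted.)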

We can apply this to the magnitude nerve by noting that it is the sequential colimit of (the colimits of) its restrictions to \Delta_m^{op}, i.e. like any simplicial object it has a skeletal decomposition

sk_0 N(X) \to sk_1 N(X) \to sk_2 N(X) \to \cdots \to N(X).

Now suppose we apply a functor A:V\to W as in the construction of magnitude homology, where I’ve replaced Ab (or more precisely Ch) by some more general (\infty-)category W — it should be monoidal and the image of A should consist of dualizable objects, for the linearity formulas to apply. We obtain a simplicial object of W, whose colimit I will call C(X;A). The above calculations tell us that the Euler characteristic of sk_m C(X;A) is

\sum_{0\le n\le m} \sum_{0\le k\le n} (-1)^k \begin{pmatrix} n \\ k \end{pmatrix} \sum_{x_0,\dots,x_k} |A(X(x_0,x_1)\otimes \cdots \otimes X(x_{k-1},x_k))|.

This looks just like the m^{th} partial sum of the magnitude series, except that our morphism ob V \to k has been replaced by A composed with the Euler characteristic in W, and pulled outside the tensor product, since that composite need not be monoidal. This suggests that for a general V, the way to compare magnitude homology to magnitude would be to find k, A, W such that these two functions agree, or are at least related. There would still be the problem of convergence, since in general C(X;A) itself may not be dualizable. But even without working this out, the strong analogy between the constructions makes me feel more justified in calling it “magnitude homology” in the general case (something that I was a little worried about before).

Finally, we can even sort of categorify the fact that Z^{-1} is the inverse of Z. Note that one way to express this is to say that Z Z^{-1} v = v and w Z^{-1} Z = w for any column vector v and row vector w. To categorify this, suppose F:X^{op}\to V is a V-functor, which we can regard as a “column vector”. If we tensor \mathbf{Z}^{\otimes n} on the left by \mathbf{Z} and on the right by F, we get the object of n-simplices in the simplicial two-sided bar construction:

B_n(\mathbf{Z},\mathbf{Z},F)(x)(\ell) = \sum_{x_1,\dots,x_n} V(\ell,X(x,x_1)\otimes \cdots \otimes X(x_{n-1},x_n) \otimes F(x_n))

If we “sum these over n” by assembling them into a simplicial object, then we can use the classical fact that any two-sided simplicial bar construction of the form B_\ast(X,X,F) is augmented by a map to F and has an “extra degeneracy” given by the identities mapping into the first copy of X; thus it is simplicially homotopy-equivalent to F itself. This is a categorified version of Z Z^{-1} v = v, and the case of row vectors is dual.
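(For anyone who wants that classical fact spelled out in its standard indexing: for a monoid A acting on N, with B_n(A,A,N) = A\otimes A^{\otimes n}\otimes N, the augmentation B_0(A,A,N) = A\otimes N \to N is the action map, and the maps

s_{-1} = \eta\otimes \mathrm{id} : A^{\otimes(n+1)}\otimes N \to A^{\otimes(n+2)}\otimes N

inserting the unit \eta at the front furnish an extra degeneracy; an augmented simplicial object with an extra degeneracy is simplicially homotopy equivalent to its augmentation. Here A is \mathbf{Z} with composition as its multiplication, whose unit is assembled from the identities of X.)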

Posted by: Mike Shulman on September 10, 2016 8:47 AM
